CN116110270A - Multi-degree-of-freedom driving simulator based on mixed reality - Google Patents

Multi-degree-of-freedom driving simulator based on mixed reality

Info

Publication number
CN116110270A
CN116110270A (application number CN202310154487.8A)
Authority
CN
China
Prior art keywords
mixed reality
connecting rod
freedom
seat platform
operator
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310154487.8A
Other languages
Chinese (zh)
Inventor
黄辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Bangkang Industrial Robot Technology Co ltd
Original Assignee
Shenzhen Bangkang Industrial Robot Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Bangkang Industrial Robot Technology Co ltd filed Critical Shenzhen Bangkang Industrial Robot Technology Co ltd
Priority to CN202310154487.8A
Publication of CN116110270A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/012 - Head tracking input arrangements
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 - Programme-controlled manipulators
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B 27/01 - Head-up displays
    • G02B 27/017 - Head mounted
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/60 - Type of objects
    • G06V 20/64 - Three-dimensional objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/70 - Labelling scene content, e.g. deriving syntactic or semantic representations
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 9/00 - Simulators for teaching or training purposes
    • G09B 9/02 - Simulators for teaching or training purposes for teaching control of vehicles or other craft
    • G09B 9/04 - Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of land vehicles
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 9/00 - Simulators for teaching or training purposes
    • G09B 9/02 - Simulators for teaching or training purposes for teaching control of vehicles or other craft
    • G09B 9/04 - Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of land vehicles
    • G09B 9/05 - Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of land vehicles the view from a vehicle being simulated
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 9/00 - Simulators for teaching or training purposes
    • G09B 9/02 - Simulators for teaching or training purposes for teaching control of vehicles or other craft
    • G09B 9/04 - Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of land vehicles
    • G09B 9/052 - Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of land vehicles characterised by provision for recording or measuring trainee's performance
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 9/00 - Simulators for teaching or training purposes
    • G09B 9/02 - Simulators for teaching or training purposes for teaching control of vehicles or other craft
    • G09B 9/08 - Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of aircraft, e.g. Link trainer
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 9/00 - Simulators for teaching or training purposes
    • G09B 9/02 - Simulators for teaching or training purposes for teaching control of vehicles or other craft
    • G09B 9/08 - Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of aircraft, e.g. Link trainer
    • G09B 9/12 - Motion systems for aircraft simulators
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 9/00 - Simulators for teaching or training purposes
    • G09B 9/02 - Simulators for teaching or training purposes for teaching control of vehicles or other craft
    • G09B 9/08 - Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of aircraft, e.g. Link trainer
    • G09B 9/30 - Simulation of view from aircraft
    • G09B 9/301 - Simulation of view from aircraft by computer-processed or -generated image
    • G09B 9/302 - Simulation of view from aircraft by computer-processed or -generated image the image being transformed by computer processing, e.g. updating the image to correspond to the changing point of view
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 9/00 - Simulators for teaching or training purposes
    • G09B 9/02 - Simulators for teaching or training purposes for teaching control of vehicles or other craft
    • G09B 9/08 - Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of aircraft, e.g. Link trainer
    • G09B 9/30 - Simulation of view from aircraft
    • G09B 9/307 - Simulation of view from aircraft by helmet-mounted projector or display
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B 27/01 - Head-up displays
    • G02B 27/0101 - Head-up displays characterised by optical features
    • G02B 2027/014 - Head-up displays characterised by optical features comprising information/image processing systems
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B 27/01 - Head-up displays
    • G02B 27/0101 - Head-up displays characterised by optical features
    • G02B 2027/0141 - Head-up displays characterised by optical features characterised by the informative content of the display
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B 27/01 - Head-up displays
    • G02B 27/017 - Head mounted
    • G02B 2027/0178 - Eyeglass type
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B 27/01 - Head-up displays
    • G02B 27/0179 - Display position adjusting means not related to the information to be displayed
    • G02B 2027/0187 - Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 - Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/01 - Indexing scheme relating to G06F3/01
    • G06F 2203/012 - Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02 - Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Educational Technology (AREA)
  • Educational Administration (AREA)
  • Business, Economics & Management (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Human Computer Interaction (AREA)
  • Optics & Photonics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a mixed reality-based multi-degree-of-freedom driving simulator comprising a control terminal, a multi-degree-of-freedom mechanical arm, a seat platform and mixed reality glasses, wherein the control terminal is connected with the seat platform. A physical operating component is arranged in the seat platform. The control terminal obtains the operator's actions on the operating component and controls the seat platform to simulate acceleration, uniform speed, deceleration and turning. The mixed reality glasses capture a real view of the interior of the seat platform in real time and track the operator's viewing position and posture. The control terminal also generates a virtual view, fuses the real view with the virtual view to obtain a fused view, and sends the fused view to the mixed reality glasses for display; at the same time, according to the operator's viewing position and posture, the control terminal synchronously updates and renders the fused view in the mixed reality glasses to match that position and posture. The invention improves the realism of driving simulation and reduces cost compared with full-physical simulation.

Description

Multi-degree-of-freedom driving simulator based on mixed reality
Technical Field
The invention relates to the technical field of driving simulators, and in particular to a mixed reality-based multi-degree-of-freedom driving simulator applicable to industries such as automobiles and aircraft.
Background
Existing simulators for airplanes, automobiles and the like apply virtual reality technology to a multi-degree-of-freedom motion platform: the virtual visuals, sound effects and motion of the driving process are generated by computer, so that the driver is immersed in a virtual driving environment, feels a realistic driving sensation, and can experience, understand and learn real-world driving. Current motion simulators generally fall into two categories: full-physical simulators and virtual reality simulators.
The full-physical simulator is characterized by a large screen or projection and a 1:1 seat platform that reproduce the environment inside and outside the cabin, and all equipment in the 1:1 cabin, such as switches and buttons, is implemented with physical hardware. In addition, the cockpit's control equipment adopts a three-axis control system with force feedback to simulate the force sensation of the main controls. The full-physical simulator is powerful and highly realistic, but it is bulky, carries a large structural load, and is expensive.
The virtual reality simulator uses VR technology for visual simulation (an HMD is generally adopted as the visual display device), and its cockpit display equipment is presented virtually. It is small, inexpensive, highly versatile, and widely used in the military field. However, in a virtual reality simulator the operator's vision, hearing, touch and so on are generated entirely by computer, the operator's senses are isolated from the real world, and the operator is fully immersed in a computer-generated environment. The operator's own movements are not reflected in the virtual environment in real time, so the operator cannot see his or her own actions; yet in everyday life people guide their hands to complete operations mainly through visual feedback. Because what the operator sees is inconsistent with the motion process, the effect of "seeing accurately" and "touching accurately" cannot be achieved, the somatosensory effect in driving simulation is poor, and the realism is low.
Accordingly, the prior art is in need of improvement.
Disclosure of Invention
In view of the shortcomings of the prior art, the invention aims to provide a mixed reality-based multi-degree-of-freedom driving simulator that improves the realism of driving simulation and reduces cost compared with full-physical simulation.
In order to achieve the above purpose, the invention adopts the following technical scheme:
a mixed reality-based multiple degree of freedom driving simulator, comprising:
the system comprises a control terminal, a multi-degree-of-freedom mechanical arm, a seat platform and mixed reality glasses;
the control terminal is electrically connected with the multi-degree-of-freedom mechanical arm, the seat platform and the mixed reality glasses, and the seat platform is connected to the output end of the multi-degree-of-freedom mechanical arm;
the seat platform is internally provided with a seat and a physical operation part which is the same as that in the real cockpit;
the control terminal obtains the operation action of an operator on the operation part, and controls the multi-degree-of-freedom mechanical arm to perform multi-degree-of-freedom movement according to the operation action, so as to drive the seat platform to perform simulation of acceleration, uniform speed, deceleration and turning driving movement;
the mixed reality glasses acquire real object views in the seat platform in real time and track and acquire position and posture information of the operator in real time when the operator observes the real object views;
the control terminal also generates a virtual view, the virtual view is an image which can be observed inside and outside a cockpit in the driving process, the control terminal fuses the real view and the virtual view to obtain a fused view and sends the fused view to the mixed reality glasses for display, and meanwhile, the control terminal synchronously updates and renders the fused view matched with the position and posture in the mixed reality glasses according to the position and posture information observed by an operator.
The control terminal is connected with a motion capturing sensor and is used for capturing the direction and the force of the hand and foot motions of an operator.
Wherein, the mixed reality glasses are provided with binocular cameras and inertial sensors;
the binocular camera is used for shooting a physical image in the seat platform in real time;
and the inertial sensor acquires the angular speed and the acceleration of the operator in real time.
The multi-degree-of-freedom mechanical arm comprises a base, a horizontal rotary table, a first connecting rod, a vertical rotary table, a second connecting rod, a third connecting rod, a horizontal rotary motor, a first connecting rod rotary motor, a vertical rotary motor and a second connecting rod rotary motor;
the base is rotationally connected with the horizontal turntable, and the horizontal turntable is driven to rotate in an xy plane relative to the base by a horizontal rotating motor arranged on the horizontal turntable;
two ends of the first connecting rod are respectively connected with the horizontal plane turntable and the vertical plane turntable in a rotating way, one end of the first connecting rod drives the first connecting rod to rotate in an xz plane relative to the horizontal plane turntable through a first connecting rod rotating motor arranged on the horizontal plane turntable, and the other end of the first connecting rod drives the vertical plane turntable to rotate in the xz plane relative to the first connecting rod through a vertical rotating motor arranged on the vertical plane turntable;
two ends of the second connecting rod are respectively connected with the vertical surface turntable and the third connecting rod in a rotating way, and one end of the second connecting rod connected with the vertical surface turntable drives the second connecting rod to rotate relative to the central shaft of the second connecting rod through a second connecting rod rotating motor arranged on the vertical surface turntable;
the third connecting rod rotates in the xz plane relative to the second connecting rod, and one end, far away from the second connecting rod, of the third connecting rod is connected with the seat platform.
The horizontal plane turntable is further provided with a telescopic cylinder capable of swinging up and down, a pull rod capable of extending or recovering is arranged in the telescopic cylinder, and the extending end of the pull rod is hinged with the first connecting rod.
The mixed reality glasses track and acquire the position and posture information observed by the operator in real time by adopting a dynamic scene virtual-real fusion positioning algorithm, and the dynamic scene virtual-real fusion positioning algorithm comprises:
s10, acquiring the head posture of an operator, including:
s11, acquiring the angular speed and the acceleration of an operator through an inertial sensor in the mixed reality glasses;
s12, calibrating a camera in the mixed reality glasses, and shooting a physical image in the seat platform through the camera;
s13, calculating to obtain the head posture of the operator by adopting a visual inertia fusion semantic SLAM algorithm based on the angular speed, the acceleration and the physical image;
s20, acquiring three-dimensional coordinates of a seat platform, including:
s21, performing marker detection on the real object image acquired by the camera to acquire a marker graph;
s22, calculating the relative pose of the identification map and the mixed reality glasses display screen to obtain the three-dimensional coordinates of the seat platform.
The visual inertia fusion semantic SLAM algorithm comprises the following steps:
S30, preprocessing measured values, including:
S31, pre-integrating the angular velocity and the acceleration acquired by the inertial sensor;
S32, performing feature point detection and tracking, image semantic segmentation and semantic label addition on the feature points of the physical image captured by the camera;
S40, initializing:
performing visual motion reconstruction and visual-inertial alignment according to the pre-integration results and the feature points;
S50, local BA optimization and relocalization:
performing nonlinear, tightly coupled, sliding-window visual-inertial BA optimization on the data preprocessed in step S30 and the data initialized in step S40;
S60, loop detection: performing image semantic segmentation, searching and matching the key frames with added semantic labels, judging whether a loop is formed, and eliminating accumulated errors;
S70, global pose graph optimization: performing four-degree-of-freedom pose graph optimization on the graph after the local BA optimization and relocalization of S50 and the loop detection of S60.
The image semantic segmentation algorithm comprises the following steps:
S81, using a group of weakly labelled images, the encoder f_enc is trained on the classification dataset, as shown in the formula:
(formula provided as an image in the original publication)
where x is the input, y is the output, θ_enc denotes the parameters of f_enc, e_c is the classification cross-entropy loss, and I is the domain formed by the inputs and outputs;
S82, the encoder f_enc is applied to the attention training dataset to filter out frames irrelevant to their class labels and to generate an attention map of the target class;
S83, by solving an optimization problem, the attention map is combined with color and motion cues in each relevant video interval to perform spatio-temporal object segmentation and attach labels;
S84, the segmentation labels obtained in the previous stage are used to train the decoder f_dec, as shown in the formula:
(formula provided as an image in the original publication)
where v is a velocity, V is the set of velocities, p is a pixel, P is the set of pixels, c is the cutting variable, and θ_dec is the parameter of f_dec;
S85, semantic segmentation of static images is implemented by applying the entire encoder f_enc and decoder f_dec network.
The motion of the multi-degree-of-freedom mechanical arm is controlled by a perception-feedback-based mechanical arm somatosensory simulation algorithm, which adopts the vestibular model of the human body as the perception model; the simulation algorithm specifically comprises:
defining an otolith model as:
(transfer function provided as an image in the original publication)
where t_a, t_s, t_L are otolith model parameters and k is a gain factor;
defining a semicircular canal model as:
(transfer function provided as an image in the original publication)
where T_L, T_s, T_a are semicircular canal model parameters;
setting the specific force, and outputting the actual displacement of the seat platform using the translational low-pass channel, the translational high-pass channel and the otolith model feedback channel;
setting the angular velocity, and outputting the actual angular velocity of the seat platform using the rotational high-pass channel and the semicircular canal model feedback channel.
The multi-degree-of-freedom mechanical arm adopts a reinforcement-learning Actor-Critic algorithm to autonomously correct its wear error in the initialization stage.
According to the mixed reality-based multi-degree-of-freedom driving simulator of the invention, a physical operating component is arranged in the seat platform, the operator's actions and the physical operating component are fused with the virtual view and displayed through the mixed reality glasses, and at the same time the multi-degree-of-freedom mechanical arm promptly outputs the motion experiences of acceleration, uniform speed, deceleration, turning and the like corresponding to those actions. The operator therefore sees his or her own hand and foot movements in the fused view, the observed content stays highly synchronized with the motion process, the somatosensory effect is realistic, and the immersion of driving a real vehicle is produced. At the same time, because full physical simulation is not required, the cost is reduced.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to the structures shown in these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a driving simulator with multiple degrees of freedom based on mixed reality;
fig. 2 is a schematic structural diagram of a first embodiment of a driving simulator with multiple degrees of freedom based on mixed reality according to the present invention;
FIG. 3 is a schematic illustration of the connection of a multiple degree of freedom mechanical arm to a seat platform according to the present invention;
FIG. 4 is an exploded view of the structure of FIG. 3;
FIG. 5 is a schematic view of the seat platform in a raised position;
FIG. 6 is a schematic diagram of a dynamic scene virtual-real fusion positioning algorithm according to the present invention;
FIG. 7 is a schematic diagram of a visual inertial fusion semantic SLAM algorithm according to the present invention;
FIG. 8 is a schematic diagram of an image semantic segmentation algorithm according to the present invention;
fig. 9 is a schematic diagram of a motion sensing simulation algorithm of a mechanical arm based on sensing feedback.
Reference numerals illustrate:
100-simulator, 1-control terminal, 2-multi-degree of freedom mechanical arm, 201-base, 202-horizontal plane turntable, 203-first connecting rod, 204-vertical plane turntable, 205-second connecting rod, 206-third connecting rod, 207-horizontal rotation motor, 208-first connecting rod rotation motor, 209-vertical rotation motor, 210-second connecting rod rotation motor, 211-telescopic cylinder, 212-pull rod, 3-seat platform, 31-seat, 4-mixed reality glasses, 41-camera, 42-inertial sensor, 5-motion capture sensor, 200-operator.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that all directional indications (such as up, down, left, right, front and rear) in the embodiments of the present invention are merely used to explain the relative positional relationships, movements and so on between the components in a particular posture (as shown in the drawings); if that particular posture changes, the directional indication changes accordingly.
In the present invention, unless explicitly specified and limited otherwise, the terms "connected," "fixed," and the like are to be construed broadly, and for example, "connected" may be either a fixed connection or a removable connection or integrated; can be mechanically or electrically connected; either directly or indirectly, through intermediaries, or both, may be in communication with each other or in interaction with each other, unless expressly defined otherwise. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
Furthermore, descriptions such as those referred to as "first," "second," and the like, are provided for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implying an order of magnitude of the indicated technical features in the present disclosure. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature.
Referring to fig. 1 to 5, the present invention provides a driving simulator 100 with multiple degrees of freedom based on mixed reality, which includes a control terminal 1, a mechanical arm 2 with multiple degrees of freedom, a seat platform 3 and mixed reality glasses 4.
The control terminal 1 is electrically connected with the multi-degree-of-freedom mechanical arm 2, the seat platform 3 and the mixed reality glasses (MR) 4, and the seat platform 3 is connected to the output end of the multi-degree-of-freedom mechanical arm 2.
The seat platform 3 is internally provided with a seat 31 and the same physical operation parts as in a real cockpit.
The physical operating components can be configured to suit the driving scenario being simulated; for example, when simulating car driving they include a steering wheel, brake, accelerator, clutch, gear lever and the like. Similarly, when simulating aircraft piloting, the operating components of an aircraft cockpit are arranged in the seat platform.
The multi-degree-of-freedom mechanical arm 2 of the invention performs multi-degree-of-freedom motions including translational motions along the x-axis, the y-axis and the z-axis, rotational motions around the x-axis, the y-axis and the z-axis, and the like. The mechanical arm can be arranged into a serial mechanical arm with 5 degrees of freedom or 6 degrees of freedom according to the requirement.
In the prior art, six-degree-of-freedom parallel motion platforms driven by hydraulic cylinders are generally adopted, but such parallel platforms are constrained by the strokes and output forces of multiple cylinders, so the acceleration actually required by the equipment cannot be output continuously, which reduces simulation fidelity and the actual experience. The invention instead adopts the serial multi-degree-of-freedom mechanical arm 2, which has higher flexibility and a larger range of motion and makes any end-effector posture within the workspace achievable.
The control terminal 1 of the invention obtains the operation actions of the operator 200 on the operating components and, according to these actions, controls the multi-degree-of-freedom mechanical arm 2 to perform multi-degree-of-freedom motion, thereby driving the seat platform 3 to simulate acceleration, uniform speed, deceleration and turning driving motions. The control terminal 1 is connected with a motion capture sensor 5 for capturing the direction and force of the hand and foot motions of the operator 200, so as to obtain the operation parameters of the physical operating components such as the steering wheel, accelerator, gear shift and brake, for example steering wheel 30 degrees to the right or 20% accelerator input. After obtaining these operation parameters, the control terminal 1 calculates the position and posture of the seat platform 3, plans the motion trajectory of the mechanical arm 2 and executes joint motion control, so as to simulate the acceleration, uniform speed, deceleration and turning driving motions of the seat platform 3.
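For illustration only (the following sketch, its function names, gains and vehicle constants are assumptions and are not part of the original disclosure), the captured operation parameters can be mapped to a motion command for the seat platform roughly as follows:

```python
import math

def operation_to_motion(steering_deg, throttle, brake, speed_mps,
                        wheelbase_m=2.7, max_accel=3.0, max_decel=8.0):
    """Convert operator inputs (captured by the motion capture sensor) into a
    longitudinal acceleration and yaw rate for the mechanical arm to reproduce.
    All constants are illustrative assumptions, not values from the patent."""
    # longitudinal acceleration from throttle/brake commands (each in 0..1)
    accel = throttle * max_accel - brake * max_decel
    # simple kinematic bicycle model for yaw rate from the steering angle
    steer_rad = math.radians(steering_deg) / 15.0   # assumed 15:1 steering ratio
    yaw_rate = speed_mps * math.tan(steer_rad) / wheelbase_m
    return accel, yaw_rate

# example: steering 30 degrees to the right, 20% accelerator, at 10 m/s
accel, yaw_rate = operation_to_motion(30.0, 0.20, 0.0, 10.0)
```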
The mixed reality glasses 4 acquire real object views in the seat platform 3 in real time, and track and acquire position and posture information of the operator 200 when in observation in real time. The physical view includes physical operational components within the seat platform 3, hands and feet of the operator 200, and the like.
The control terminal 1 also generates a virtual view, which is an image that can be seen inside and outside the cockpit during driving. The cockpit is a virtual cockpit corresponding to a cockpit project, such as an automobile cockpit or an airplane cockpit, and images which can be watched inside and outside the cockpit during the cockpit, such as in-cabin meters, display screens, off-cabin roads, sceneries and the like, are generated by the control terminal 1.
And the control terminal 1 fuses the real object view and the virtual view to obtain a fused view and sends the fused view to the mixed reality glasses 4 for display. The mixed reality glasses 4 of the present invention are provided with a binocular camera 41 and an inertial sensor 42, wherein the binocular camera 41 is used for capturing real-time images of objects in the seat platform 3, and the inertial sensor 42 is used for acquiring the angular velocity and acceleration of the operator 200 in real time.
The binocular camera 41 of the mixed reality glasses 4 captures high-definition images of the physical operating components in the seat platform 3, such as the steering wheel, the accelerator and the operator's hands. The control terminal 1 then performs foreground extraction, lighting processing and three-dimensional reconstruction on the captured images and fuses them with the virtual view in real time; during fusion, occlusion processing is applied to the operating components in the virtual cockpit, so that the captured physical components and the operator's hands are embedded into the virtual scene, achieving virtual-real fusion.
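For illustration only, a minimal compositing sketch of the kind of foreground embedding described above is shown below; it assumes the real camera frame and the rendered virtual frame are already aligned to the same viewpoint and that the foreground mask is produced elsewhere (for example by a segmentation network), which is an assumption rather than the patent's method:

```python
import numpy as np

def composite(real_frame, virtual_frame, foreground_mask):
    """Embed the physical operating components and hands (foreground of the real
    camera image) into the rendered virtual cockpit view.

    real_frame, virtual_frame: HxWx3 uint8 images from the same viewpoint.
    foreground_mask: HxW array in [0, 1]; 1 where the real object should occlude
    the virtual scene."""
    alpha = foreground_mask[..., None].astype(np.float32)
    fused = alpha * real_frame.astype(np.float32) + (1.0 - alpha) * virtual_frame.astype(np.float32)
    return fused.astype(np.uint8)
```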
Meanwhile, the control terminal 1 of the simulator 100 synchronously updates and renders the fused view in the mixed reality glasses 4 to match the operator's viewing position and posture. The binocular camera 41 and the inertial sensor 42 of the mixed reality glasses 4 together act as a locator, capturing the spatial position and posture information of the operator 200 while observing from within the seat platform 3 and updating the corresponding view in real time. This aligns the driving viewpoint with the virtual view, so the operator 200 sees his or her own hand and foot actions in the fused view in real time, and the observed content stays highly synchronized with the motion process, achieving the effect of "seeing accurately" and "touching accurately" and producing the immersion of driving a real vehicle.
Meanwhile, the simulator 100 does not need full physical simulation, only needs to truly simulate the operation parts such as steering wheel, accelerator, brake and the like which are actually watched and touched by the operator 200 in the driving process, and reduces the equipment cost.
Specifically, as shown in fig. 3 and 4, the multi-degree of freedom mechanical arm 2 includes a base 201, a horizontal turntable 202, a first link 203, a vertical turntable 204, a second link 205, a third link 206, and further includes a horizontal rotation motor 207, a first link rotation motor 208, a vertical rotation motor 209, and a second link rotation motor 210.
The base 201 is rotatably connected to the horizontal turntable 202, and the horizontal turntable 202 is driven to rotate in the xy plane relative to the base 201 by a horizontal rotation motor 207 mounted on the horizontal turntable 202. The horizontal plane turntable 202 drives the whole mechanical arm 2 to rotate 360 degrees in the horizontal plane. The rotation of the horizontal turntable 202 is one degree of freedom.
The two ends of the first connecting rod 203 are respectively connected with the horizontal plane turntable 202 and the vertical plane turntable 204 in a rotating way, one end of the first connecting rod drives the first connecting rod 203 to rotate in an xz plane relative to the horizontal plane turntable 202 through a first connecting rod rotating motor 208 arranged on the horizontal plane turntable 202, and the other end of the first connecting rod drives the vertical plane turntable 204 to rotate in the xz plane relative to the first connecting rod 203 through a vertical rotating motor 209 arranged on the vertical plane turntable 204. The relative rotation of the two ends of the first link 203 is in two degrees of freedom.
Two ends of the second link 205 are rotatably connected to the vertical-plane turntable 204 and the third link 206 respectively, and the end connected to the vertical-plane turntable 204 drives the second link 205 to rotate about its own central axis through a second link rotation motor 210 mounted on the vertical-plane turntable 204. The rotation of the second link 205 about its central axis is one degree of freedom.
The third link 206 rotates in the xz plane relative to the second link 205, and an end of the third link 206 remote from the second link 205 is connected to the seat platform 3. The rotation of the third link 206 relative to the second link 205 is one degree of freedom.
In this embodiment, when the connection between the third link 206 and the seat platform 3 is a fixed connection, the motion of the whole multi-degree-of-freedom mechanical arm 2 is 5 degrees of freedom, and when the connection between the third link 206 and the seat platform 3 is a rotational connection, the motion of the whole multi-degree-of-freedom mechanical arm 2 is 6 degrees of freedom. The combination of the multi-degree-of-freedom motions can realize the up-down, left-right, front-back movement and tilting of the seat platform 3 and experience the experiences of acceleration, deceleration, turning and the like. The seat platform 3 is raised to a high position as shown in fig. 5.
The horizontal rotation motor 207, the first link rotation motor 208, the vertical rotation motor 209, and the second link rotation motor 210 of the embodiment of the present invention may all be dc motors.
Preferably, the horizontal-plane turntable 202 is further provided with a telescopic cylinder 211 that can swing up and down; an extendable and retractable pull rod 212 is arranged in the telescopic cylinder 211, and the extended end of the pull rod 212 is hinged to the first link 203. This telescopic pull rod 212 increases the support strength of the first link 203 and improves the stability of the whole mechanical arm 2.
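For illustration only, a forward-kinematics sketch of the five revolute joints described above (base yaw, first-link pitch, vertical-turntable pitch, second-link roll, third-link pitch) is given below; the link lengths and the convention that each link extends along its local x axis are assumptions, not dimensions from the patent:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def seat_platform_pose(q, l1=1.2, l2=1.0, l3=0.6):
    """q = [base yaw, link-1 pitch, vertical-turntable pitch, link-2 roll, link-3 pitch] in rad.
    Returns the position and orientation of the seat-platform mounting point."""
    R = rot_z(q[0])                       # horizontal-plane turntable (rotation in the xy plane)
    p = np.zeros(3)
    R = R @ rot_y(q[1])                   # first link rotates in the xz plane
    p = p + R @ np.array([l1, 0.0, 0.0])
    R = R @ rot_y(q[2])                   # vertical-plane turntable
    R = R @ rot_x(q[3])                   # second link rotates about its own central axis
    p = p + R @ np.array([l2, 0.0, 0.0])
    R = R @ rot_y(q[4])                   # third link rotates in the xz plane
    p = p + R @ np.array([l3, 0.0, 0.0])
    return p, R
```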
The mixed reality glasses 4 of the invention track and acquire the position and posture information of the operator 200 in real time by adopting a dynamic scene virtual-real fusion positioning algorithm.
As shown in fig. 6, the dynamic scene virtual-real fusion positioning algorithm includes:
s10, acquiring the head posture of the operator 200, specifically comprising:
s11, the angular velocity and acceleration of the operator 200 are acquired by the inertial sensor 42 in the mixed reality glasses 4.
And S12, calibrating the camera 41 in the mixed reality glasses 4, and shooting a physical image in the seat platform through the camera 41.
S13, calculating to obtain the head posture of the operator 200 by adopting a visual inertia fusion semantic SLAM algorithm based on the angular velocity, the acceleration and the physical image. SLAM (Simultaneous Localization and Mapping), referred to as synchronized locating and mapping.
The head pose is the orientation of a person's head in three-dimensional space; what is acquired here is the head pose of the operator while observing, that is, the operator's viewing posture.
S20, acquiring three-dimensional coordinates of the seat platform 3, wherein the three-dimensional coordinates specifically comprise:
s21, performing marker detection on the physical image acquired by the camera 41 to obtain a marker map.
And S22, calculating the relative pose of the identification map and the display screen of the mixed reality glasses 4 to obtain the three-dimensional coordinates of the seat platform 3.
The three-dimensional coordinates of the seat platform 3 are positions in three-dimensional space when viewed by the operator.
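For illustration only, marker detection and relative-pose computation of this kind can be sketched with fiducial markers and a PnP solve as below; it assumes OpenCV with the contrib ArUco module (pre-4.7 API) and an already-calibrated camera (step S12), and the intrinsics and marker size are placeholder values, not values from the patent:

```python
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 640.0],      # camera intrinsics from calibration (placeholders)
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)
MARKER_SIZE = 0.05                      # marker edge length in metres (assumed)
s = MARKER_SIZE
OBJ_PTS = np.array([[-s / 2,  s / 2, 0], [ s / 2,  s / 2, 0],
                    [ s / 2, -s / 2, 0], [-s / 2, -s / 2, 0]], dtype=np.float32)
DICTIONARY = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

def platform_marker_pose(gray):
    """Detect a marker fixed to the seat platform and return its pose (rvec, tvec)
    relative to the glasses' camera; None if no marker is visible."""
    corners, ids, _ = cv2.aruco.detectMarkers(gray, DICTIONARY)
    if ids is None:
        return None
    ok, rvec, tvec = cv2.solvePnP(OBJ_PTS, corners[0][0], K, dist)
    return (rvec, tvec) if ok else None
```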
After the head posture of the operator 200 and the three-dimensional coordinates of the seat platform 3 are obtained, the position posture information of the operator 200 when the operator observes can be calculated, and then virtual-real fusion is performed according to the position posture information.
Virtual-real fusion specifically means superimposing and fusing the physical objects that can be seen in the cockpit, namely the physical operating components in the seat platform, such as the steering wheel and the gear lever, together with the operator's hands and feet, onto the virtual images inside and outside the cockpit during driving, such as the in-cabin instruments and the roads and scenery outside the cabin.
Specifically, as shown in fig. 7, the visual inertia fusion semantic SLAM algorithm of the present invention includes:
S30, preprocessing measured values, which specifically comprises the following steps:
S31, pre-integrating the angular velocity and the acceleration acquired by the inertial sensor.
S32, performing feature point detection and tracking, image semantic segmentation and semantic label addition on the feature points of the physical image captured by the camera.
S40, initializing:
visual motion reconstruction and visual-inertial alignment are performed according to the pre-integration results and the feature points.
S50, local BA optimization and relocalization:
nonlinear, tightly coupled, sliding-window visual-inertial BA (Bundle Adjustment) optimization is performed on the data preprocessed in step S30 and the data initialized in step S40.
S60, loop detection: image semantic segmentation is performed, the key frames with added semantic labels are searched and matched, whether a loop is formed is judged, and accumulated errors are eliminated.
S70, global pose graph optimization: four-degree-of-freedom pose graph optimization is performed on the graph after the local BA optimization and relocalization of S50 and the loop detection of S60.
The visual inertia fusion semantic SLAM algorithm can improve the accuracy of head gesture positioning.
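As a simplified illustration of the pre-integration of step S31 (bias and noise terms are ignored, and this implementation is an assumption rather than the patent's), the gyroscope and accelerometer samples between two camera keyframes can be accumulated as follows:

```python
import numpy as np

def so3_exp(w):
    """Rodrigues formula: rotation matrix from a rotation vector."""
    theta = np.linalg.norm(w)
    if theta < 1e-9:
        return np.eye(3)
    k = w / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def preintegrate(gyro, accel, dt):
    """Accumulate relative rotation, velocity and position between two keyframes
    from raw IMU samples (simplified: biases and gravity handling omitted)."""
    dR = np.eye(3)
    dv = np.zeros(3)
    dp = np.zeros(3)
    for w, a in zip(gyro, accel):
        dp += dv * dt + 0.5 * (dR @ a) * dt ** 2
        dv += (dR @ a) * dt
        dR = dR @ so3_exp(w * dt)
    return dR, dv, dp
```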
Specifically, as shown in fig. 8, the image semantic segmentation algorithm according to the embodiment of the present invention includes:
S81, using a group of weakly labelled images, the encoder f_enc is trained on the classification dataset, as shown in the formula:
(formula provided as an image in the original publication)
where x is the input, y is the output, θ_enc denotes the parameters of f_enc, e_c is the classification cross-entropy loss, and I is the domain formed by the inputs and outputs.
S82, the encoder f_enc is applied to the attention training dataset to filter out frames irrelevant to their class labels and to generate an attention map of the target class.
S83, by solving an optimization problem, the attention map is combined with color and motion cues in each relevant video interval to perform spatio-temporal object segmentation and attach labels.
S84, the segmentation labels obtained in the previous stage are used to train the decoder f_dec, as shown in the formula:
(formula provided as an image in the original publication)
where v is a velocity, V is the set of velocities, p is a pixel, P is the set of pixels, c is the cutting variable, and θ_dec is the parameter of f_dec.
S85, semantic segmentation of static images is implemented by applying the entire encoder f_enc and decoder f_dec network.
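For concreteness, a minimal training sketch for stage S81 is shown below, assuming a standard convolutional encoder trained with classification cross-entropy in PyTorch; the architecture and hyperparameters are placeholders, since the patent presents its loss terms only as images:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Stand-in for f_enc: a small convolutional classifier (architecture assumed)."""
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        f = self.features(x).flatten(1)
        return self.classifier(f)

def train_encoder(encoder, loader, epochs=1, lr=1e-3):
    """Stage S81: train f_enc on weakly labelled images with classification cross-entropy."""
    opt = torch.optim.Adam(encoder.parameters(), lr=lr)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:       # labels are image-level (weak) class labels
            loss = ce(encoder(images), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
```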
As shown in fig. 9, the motion of the multi-degree-of-freedom mechanical arm 2 of the present invention is controlled by a perception-feedback-based mechanical arm somatosensory simulation algorithm, which adopts the vestibular model of the human body as the perception model. The simulation algorithm specifically comprises:
Defining an otolith model as:
(transfer function provided as an image in the original publication)
where t_a, t_s, t_L are otolith model parameters and k is a gain factor.
Defining a semicircular canal model as:
(transfer function provided as an image in the original publication)
where T_L, T_s, T_a are semicircular canal model parameters.
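The patent supplies both transfer functions only as images. For reference, the forms commonly used for these two vestibular organs in the motion-cueing literature, consistent with the parameter names above, are as follows (this is an assumption, not reproduced from the patent):

```latex
% Otolith (specific-force perception), gain k, time constants t_a, t_s, t_L:
\frac{\hat{f}(s)}{f(s)} = k\,\frac{t_a s + 1}{(t_s s + 1)(t_L s + 1)}
% Semicircular canal (angular-velocity perception), time constants T_L, T_s, T_a:
\frac{\hat{\omega}(s)}{\omega(s)} = \frac{T_L\, T_a\, s^2}{(T_L s + 1)(T_s s + 1)(T_a s + 1)}
```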
Setting specific force, and outputting the actual displacement of the seat platform by using the translation low-pass channel, the translation high-pass channel and the otolith model feedback channel. The translational low pass channel uses a translational low pass filter and the translational high pass channel uses a translational high pass filter.
Setting angular velocity, and outputting the actual angular velocity of the seat platform by using the rotary high-pass channel and the semicircular canal model feedback channel. The rotary high pass channel uses a rotary high pass filter.
The perception-feedback-based mechanical arm somatosensory simulation algorithm consists of a low-pass specific-force channel, a high-pass specific-force channel and a high-pass angular channel, which respectively generate the corresponding tilt angles, transient accelerations and rotations, so that a realistic motion sensation is produced on the multi-degree-of-freedom motion simulator. Adding the human vestibular model as the perception model improves the fidelity of the simulation and allows the mechanical arm 2 to be controlled within a small workspace while still giving the operator a realistic motion experience.
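The channel structure described above resembles a classical washout scheme. For illustration only, a minimal sketch using discrete first-order low-pass and high-pass filters is given below; the time constants, the tilt-coordination step and the overall structure are assumptions, not the patent's implementation:

```python
import numpy as np

class FirstOrder:
    """Discrete first-order low-pass filter; the high-pass output is input minus low-pass."""
    def __init__(self, tau, dt):
        self.alpha = dt / (tau + dt)
        self.y = 0.0

    def low(self, x):
        self.y += self.alpha * (x - self.y)
        return self.y

    def high(self, x):
        return x - self.low(x)

class MotionCueing:
    """Map a commanded longitudinal specific force and yaw rate to platform commands."""
    def __init__(self, dt=0.01, g=9.81):
        self.g = g
        self.f_hp = FirstOrder(tau=2.0, dt=dt)   # translational high-pass channel (transients)
        self.f_lp = FirstOrder(tau=2.0, dt=dt)   # translational low-pass channel (sustained force -> tilt)
        self.w_hp = FirstOrder(tau=1.0, dt=dt)   # rotational high-pass channel

    def step(self, fx, wz):
        ax_platform = self.f_hp.high(fx)                              # short platform translation
        tilt = float(np.arcsin(np.clip(self.f_lp.low(fx) / self.g, -0.3, 0.3)))  # tilt coordination
        wz_platform = self.w_hp.high(wz)                              # transient platform rotation
        return ax_platform, tilt, wz_platform

# example: a sustained 2 m/s^2 longitudinal specific force with no yaw rate
cueing = MotionCueing()
for _ in range(100):
    ax, pitch, wz = cueing.step(fx=2.0, wz=0.0)
```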
Preferably, the multi-degree-of-freedom mechanical arm 2 of the simulator 100 of the invention adopts a reinforcement-learning Actor-Critic algorithm to autonomously correct the wear error of the mechanical arm in the initialization stage.
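The patent does not detail this correction procedure. For illustration only, a minimal one-step advantage Actor-Critic sketch in PyTorch is shown below, in which, as assumptions, the state is the measured joint-position error, the action is a small correction offset added to the joint commands, and the reward is the negative residual error:

```python
import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    """Tiny actor-critic network; state and action dimensions are assumptions."""
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.actor_mean = nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh(),
                                        nn.Linear(hidden, action_dim))
        self.log_std = nn.Parameter(torch.zeros(action_dim))
        self.critic = nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh(),
                                    nn.Linear(hidden, 1))

    def act(self, state):
        dist = torch.distributions.Normal(self.actor_mean(state), self.log_std.exp())
        action = dist.sample()
        return action, dist.log_prob(action).sum(-1)

def update(net, optimizer, state, log_prob, reward, next_state, done, gamma=0.99):
    """One-step advantage actor-critic update."""
    value = net.critic(state).squeeze(-1)
    with torch.no_grad():
        target = reward + gamma * (1.0 - done) * net.critic(next_state).squeeze(-1)
    advantage = target - value
    actor_loss = -(log_prob * advantage.detach()).mean()
    critic_loss = advantage.pow(2).mean()
    optimizer.zero_grad()
    (actor_loss + critic_loss).backward()
    optimizer.step()

net = ActorCritic(state_dim=5, action_dim=5)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
```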
According to the mixed reality-based multi-degree-of-freedom driving simulator of the invention, a physical operating component is arranged in the seat platform, the operator's actions and the physical operating component are fused with the virtual view and displayed through the mixed reality glasses, and at the same time the multi-degree-of-freedom mechanical arm promptly outputs the motion experiences of acceleration, uniform speed, deceleration, turning and the like corresponding to those actions. The operator therefore sees his or her own hand and foot movements in the fused view, the observed content stays highly synchronized with the motion process, the somatosensory effect is realistic, and the immersion of driving a real vehicle is produced. At the same time, because full physical simulation is not required, the cost is reduced.
The foregoing is merely illustrative of the present invention and is not intended to limit its scope, which is defined by the appended claims.

Claims (10)

1. A mixed reality-based multiple degree of freedom driving simulator, comprising:
the system comprises a control terminal, a multi-degree-of-freedom mechanical arm, a seat platform and mixed reality glasses;
the control terminal is electrically connected with the multi-degree-of-freedom mechanical arm, the seat platform and the mixed reality glasses, and the seat platform is connected to the output end of the multi-degree-of-freedom mechanical arm;
the seat platform is internally provided with a seat and a physical operation part which is the same as that in the real cockpit;
the control terminal obtains the operation action of an operator on the operation part, and controls the multi-degree-of-freedom mechanical arm to perform multi-degree-of-freedom movement according to the operation action, so as to drive the seat platform to perform simulation of acceleration, uniform speed, deceleration and turning driving movement;
the mixed reality glasses acquire real object views in the seat platform in real time and track and acquire position and posture information of the operator in real time when the operator observes the real object views;
the control terminal also generates a virtual view, the virtual view is an image which can be observed inside and outside a cockpit in the driving process, the control terminal fuses the real view and the virtual view to obtain a fused view and sends the fused view to the mixed reality glasses for display, and meanwhile, the control terminal synchronously updates and renders the fused view matched with the position and posture in the mixed reality glasses according to the position and posture information observed by an operator.
2. The mixed reality-based multiple degree of freedom driving simulator of claim 1, wherein the control terminal is connected with a motion capturing sensor for capturing the direction and force of the hand and foot motions of an operator.
3. The mixed reality-based multiple degree of freedom driving simulator of claim 1, wherein the mixed reality glasses are provided with binocular cameras and inertial sensors;
the binocular camera is used for shooting a physical image in the seat platform in real time;
and the inertial sensor acquires the angular speed and the acceleration of the operator in real time.
4. The mixed reality-based multiple degree of freedom driving simulator of claim 1, wherein the multiple degree of freedom mechanical arm comprises a base, a horizontal plane turntable, a first connecting rod, a vertical plane turntable, a second connecting rod, a third connecting rod, a horizontal rotation motor, a first connecting rod rotation motor, a vertical rotation motor and a second connecting rod rotation motor;
the base is rotationally connected with the horizontal turntable, and the horizontal turntable is driven to rotate in an xy plane relative to the base by a horizontal rotating motor arranged on the horizontal turntable;
two ends of the first connecting rod are respectively connected with the horizontal plane turntable and the vertical plane turntable in a rotating way, one end of the first connecting rod drives the first connecting rod to rotate in an xz plane relative to the horizontal plane turntable through a first connecting rod rotating motor arranged on the horizontal plane turntable, and the other end of the first connecting rod drives the vertical plane turntable to rotate in the xz plane relative to the first connecting rod through a vertical rotating motor arranged on the vertical plane turntable;
two ends of the second connecting rod are respectively connected with the vertical surface turntable and the third connecting rod in a rotating way, and one end of the second connecting rod connected with the vertical surface turntable drives the second connecting rod to rotate relative to the central shaft of the second connecting rod through a second connecting rod rotating motor arranged on the vertical surface turntable;
the third connecting rod rotates in the xz plane relative to the second connecting rod, and one end, far away from the second connecting rod, of the third connecting rod is connected with the seat platform.
5. The mixed reality-based multiple degree of freedom driving simulator of claim 4, wherein the horizontal plane turntable is further provided with a telescopic cylinder capable of swinging up and down, an extendable or recyclable pull rod is arranged in the telescopic cylinder, and an extending end of the pull rod is hinged with the first connecting rod.
6. The mixed reality-based multiple degree of freedom driving simulator of claim 1, wherein the mixed reality glasses track and acquire the position and posture information of the operator in real time by adopting a dynamic scene virtual-real fusion positioning algorithm, and the dynamic scene virtual-real fusion positioning algorithm comprises:
s10, acquiring the head posture of an operator, including:
s11, acquiring the angular speed and the acceleration of an operator through an inertial sensor in the mixed reality glasses;
s12, calibrating a camera in the mixed reality glasses, and shooting a physical image in the seat platform through the camera;
s13, calculating to obtain the head posture of the operator by adopting a visual inertia fusion semantic SLAM algorithm based on the angular speed, the acceleration and the physical image;
s20, acquiring three-dimensional coordinates of a seat platform, including:
s21, performing marker detection on the real object image acquired by the camera to acquire a marker graph;
s22, calculating the relative pose of the identification map and the mixed reality glasses display screen to obtain the three-dimensional coordinates of the seat platform.
7. The mixed reality-based multiple degree of freedom driving simulator of claim 6, wherein the visual inertia fusion semantic SLAM algorithm comprises:
S30, preprocessing measured values, including:
S31, pre-integrating the angular velocity and the acceleration acquired by the inertial sensor;
S32, performing feature point detection and tracking, image semantic segmentation and semantic label addition on the feature points of the physical image captured by the camera;
S40, initializing:
performing visual motion reconstruction and visual-inertial alignment according to the pre-integration results and the feature points;
S50, local BA optimization and relocalization:
performing nonlinear, tightly coupled, sliding-window visual-inertial BA optimization on the data preprocessed in step S30 and the data initialized in step S40;
S60, loop detection: performing image semantic segmentation, searching and matching the key frames with added semantic labels, judging whether a loop is formed, and eliminating accumulated errors;
S70, global pose graph optimization: performing four-degree-of-freedom pose graph optimization on the graph after the local BA optimization and relocalization of S50 and the loop detection of S60.
8. The mixed reality-based multiple degree of freedom driving simulator of claim 7, wherein the image semantic segmentation algorithm comprises:
S81, using a group of weakly labelled images, the encoder f_enc is trained on the classification dataset, as shown in the formula:
(formula provided as an image in the original publication)
where x is the input, y is the output, θ_enc denotes the parameters of f_enc, e_c is the classification cross-entropy loss, and I is the domain formed by the inputs and outputs;
S82, the encoder f_enc is applied to the attention training dataset to filter out frames irrelevant to their class labels and to generate an attention map of the target class;
S83, by solving an optimization problem, the attention map is combined with color and motion cues in each relevant video interval to perform spatio-temporal object segmentation and attach labels;
S84, the segmentation labels obtained in the previous stage are used to train the decoder f_dec, as shown in the formula:
(formula provided as an image in the original publication)
where v is a velocity, V is the set of velocities, p is a pixel, P is the set of pixels, c is the cutting variable, and θ_dec is the parameter of f_dec;
S85, semantic segmentation of static images is implemented by applying the entire encoder f_enc and decoder f_dec network.
9. The mixed reality-based multiple degree of freedom driving simulator of claim 1, wherein the motion of the multi-degree-of-freedom mechanical arm is controlled by a perception-feedback-based mechanical arm somatosensory simulation algorithm, which adopts the vestibular model of the human body as the perception model, and the simulation algorithm specifically comprises:
defining an otolith model as:
(transfer function provided as an image in the original publication)
where t_a, t_s, t_L are otolith model parameters and k is a gain factor;
defining a semicircular canal model as:
(transfer function provided as an image in the original publication)
where T_L, T_s, T_a are semicircular canal model parameters;
setting a specific force, and outputting the actual displacement of the seat platform by using the translation low-pass channel, the translation high-pass channel and the otolith model feedback channel;
setting angular velocity, and outputting the actual angular velocity of the seat platform by using the rotary high-pass channel and the semicircular canal model feedback channel.
10. The mixed reality-based multiple degree of freedom driving simulator of claim 1, wherein the multiple degree of freedom mechanical arm adopts an Actor-Critic algorithm of reinforcement learning to autonomously correct the wear error of the mechanical arm in an initialization stage.
CN202310154487.8A 2023-02-10 2023-02-10 Multi-degree-of-freedom driving simulator based on mixed reality Pending CN116110270A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310154487.8A CN116110270A (en) 2023-02-10 2023-02-10 Multi-degree-of-freedom driving simulator based on mixed reality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310154487.8A CN116110270A (en) 2023-02-10 2023-02-10 Multi-degree-of-freedom driving simulator based on mixed reality

Publications (1)

Publication Number Publication Date
CN116110270A (en) 2023-05-12

Family

ID=86256044

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310154487.8A Pending CN116110270A (en) 2023-02-10 2023-02-10 Multi-degree-of-freedom driving simulator based on mixed reality

Country Status (1)

Country Link
CN (1) CN116110270A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117193530A (en) * 2023-09-04 2023-12-08 深圳达普信科技有限公司 Intelligent cabin immersive user experience method and system based on virtual reality technology
CN117193530B (en) * 2023-09-04 2024-06-11 深圳达普信科技有限公司 Intelligent cabin immersive user experience method and system based on virtual reality technology
CN117506940A (en) * 2024-01-04 2024-02-06 中国科学院自动化研究所 Robot track language description generation method, device and readable storage medium
CN117506940B (en) * 2024-01-04 2024-04-09 中国科学院自动化研究所 Robot track language description generation method, device and readable storage medium

Similar Documents

Publication Publication Date Title
CN116110270A (en) Multi-degree-of-freedom driving simulator based on mixed reality
CN105849771B (en) For tracking the method and system of mobile device
Zollmann et al. Flyar: Augmented reality supported micro aerial vehicle navigation
JP4871270B2 (en) System and method for operating in a virtual three-dimensional space and system for selecting operations via a visualization system
CN107209854A (en) For the support system and method that smoothly target is followed
US10146332B2 (en) Information processing device, information processing system, block system, and information processing method
CN109311639A (en) Remote control device for crane, construction machinery and/or tray truck
CN103050028B (en) Driving simulator with stereoscopic vision follow-up function
US20110264303A1 (en) Situational Awareness for Teleoperation of a Remote Vehicle
KR102097180B1 (en) Training simulator and method for special vehicles using argmented reality technology
CN112652016A (en) Point cloud prediction model generation method, pose estimation method and device
CN107154197A (en) Immersion flight simulator
JPH07311857A (en) Picture compositing and display device and simulation system
CN110610547A (en) Cabin training method and system based on virtual reality and storage medium
CN107134194A (en) Immersion vehicle simulator
CN111161586A (en) Rescue vehicle simulation training device and operation method
CN108986223A (en) A kind of method and apparatus of 3 D scene rebuilding
CN109891347A (en) For simulating the method and system of loose impediment state
CN109166181A (en) A kind of mixing motion capture system based on deep learning
KR101507014B1 (en) Vehicle simulation system and method to control thereof
JP2022549562A (en) Vehicle driving assistance method and system
CN209821674U (en) Multi-degree-of-freedom virtual reality movement device
US10130883B2 (en) Information processing device and information processing method
CN113238556A (en) Water surface unmanned ship control system and method based on virtual reality
CN208880729U (en) Enforce the law robot

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination