CN116653778A - Vehicle and vehicle control method - Google Patents

Vehicle and vehicle control method

Info

Publication number
CN116653778A
Authority
CN
China
Prior art keywords
vehicle
virtual reality
reality glasses
vehicle control
image
Prior art date
Legal status
Pending
Application number
CN202310651325.5A
Other languages
Chinese (zh)
Inventor
汪善平
唐杰
郭现磊
沈志顺
刘冬冬
刘红迪
付光何
赵鹏程
Current Assignee
Chery New Energy Automobile Co Ltd
Original Assignee
Chery New Energy Automobile Co Ltd
Priority date
Filing date
Publication date
Application filed by Chery New Energy Automobile Co Ltd
Priority: CN202310651325.5A
Publication: CN116653778A
Legal status: Pending

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00: Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20: Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22: Real-time viewing arrangements for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R1/23: Real-time viewing arrangements for viewing an area outside the vehicle, with a predetermined field of view
    • B60R1/27: Real-time viewing arrangements for viewing an area outside the vehicle, with a predetermined field of view providing all-round vision, e.g. using omnidirectional cameras
    • B60R16/00: Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for
    • B60R16/02: Electric or fluid circuits specially adapted for vehicles; electric constitutive elements
    • B60R16/023: Electric circuits for transmission of signals between vehicle parts or subsystems
    • B60R16/0231: Circuits relating to the driving or the functioning of the vehicle
    • G: PHYSICS
    • G02: OPTICS
    • G02B: OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00: Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01: Head-up displays
    • G02B27/0179: Display position adjusting means not related to the information to be displayed
    • G02B27/017: Head mounted
    • G02B2027/0178: Eyeglass type
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/60: Other road transportation technologies with climate change mitigation effect
    • Y02T10/72: Electric energy management in electromobility

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Control Of Driving Devices And Active Controlling Of Vehicle (AREA)

Abstract

The application provides a vehicle and a vehicle control method. The vehicle comprises a vehicle body main body, an image acquisition device, a vehicle controller, and a virtual reality glasses device. The image acquisition device is arranged on the periphery of the body main body and is used for acquiring images of the environment around the vehicle and transmitting the acquired images to the virtual reality glasses device. The virtual reality glasses device is used for generating a panoramic image of the vehicle environment from the acquired images. A vehicle control interface is embedded in the virtual reality glasses device, which is further used for determining a vehicle control signal based on the vehicle control interface and the position of the human eye gaze point, and sending the vehicle control signal to the vehicle controller. The vehicle controller is used for controlling the vehicle to adjust its state based on the vehicle control signal. With this vehicle, occupants can obtain more comprehensive vehicle environment information, and driving flexibility is higher.

Description

Vehicle and vehicle control method
Technical Field
The application relates to the technical field of vehicles, in particular to a vehicle and a vehicle control method.
Background
As vehicle technology continues to develop and mature, people's expectations for the driving experience of vehicles keep rising. With the continuous development of virtual reality technology, more and more engineers are focusing on combining virtual reality technology with vehicles to improve the driving experience.
In the related art, based on the head-up display principle, the front windshield of the vehicle is used as the display screen of a head-up display device, so that driving prompts and similar information are displayed on the windshield; the driver then does not need to frequently lower his or her head, which improves the driving experience.
However, the information a head-up display device can bring to the driver is limited to items such as time, speed, and navigation, and the driver still obtains relatively little driving-related information.
Disclosure of Invention
In view of the above, the present application provides a vehicle and a vehicle control method, with which occupants can obtain more comprehensive vehicle environment information and enjoy greater driving flexibility.
Specifically, the method comprises the following technical scheme:
in one aspect, the application provides a vehicle, comprising a body main body, an image acquisition device, a vehicle controller and virtual reality glasses equipment;
the image acquisition equipment is arranged at the periphery of the body main body and is used for acquiring a vehicle peripheral environment image and transmitting the acquired vehicle peripheral environment image to the virtual reality glasses equipment;
the virtual reality glasses device is used for generating a vehicle environment panoramic image based on the vehicle peripheral environment image;
the virtual reality glasses device is embedded with a vehicle control interface, and is further used for determining a vehicle control signal based on the vehicle control interface and the position of the human eye gaze point, and sending the vehicle control signal to the vehicle controller;
the vehicle controller is used for controlling the vehicle to adjust its state based on the vehicle control signal.
Optionally, the vehicle control interface is provided with a virtual key corresponding to at least one of forward, backward, left turn, right turn, acceleration, deceleration or stop;
the virtual reality glasses device is used for taking the control signal corresponding to a target virtual key as the vehicle control signal when it detects that the duration for which the eye gaze point rests on the target virtual key exceeds a preset duration threshold.
Optionally, the vehicle further includes: holographic projection equipment and an information host;
the holographic projection device is used for acquiring holographic projection data of a target object from the information host and performing three-dimensional holographic projection of the target object in the passenger cabin of the vehicle, wherein the target object comprises at least one of an avatar, an in-vehicle entertainment page, or a vehicle control page.
Optionally, the vehicle further includes: a motion capture device;
the motion capture equipment is used for capturing the motions of passengers in the passenger cabin, generating motion interaction instructions and sending the motion interaction instructions to the information host;
and the information host receives the action interaction instruction, and responds to the action interaction instruction to update holographic projection data of the target object so as to update the state of the target object projected by the holographic projection equipment.
Optionally, the motion capture device comprises a passenger cabin three-dimensional model generation component and a gesture recognition component;
the passenger cabin three-dimensional model generation component is used for acquiring three-dimensional coordinate information of objects in the passenger cabin and generating a three-dimensional model of the passenger cabin;
the gesture recognition component is used for capturing hand coordinate information of the passenger, determining gesture actions of the passenger based on the hand coordinate information and the three-dimensional model of the passenger cabin, and generating the action interaction instruction based on the gesture actions.
Optionally, the vehicle further comprises a voice detection component;
the voice detection component is used for detecting sound in the passenger cabin, recognizing a voice interaction instruction from the detected sound, and sending the voice interaction instruction to the information host;
and the information host receives the voice interaction instruction, and responds to the voice interaction instruction to update holographic projection data of the target object, so that the state of the target object projected by the holographic projection equipment is updated.
Optionally, there are a plurality of image acquisition devices, which are used for acquiring images of the vehicle's surroundings in front of, behind, to the left of, to the right of, above, or below the vehicle.
Optionally, the vehicle further comprises a braking system and a drive motor controller;
the vehicle controller is used for sending a first control instruction to the driving motor controller based on the vehicle control signal, so that the driving motor controller adjusts the running state of the vehicle motor based on the first control instruction;
or the vehicle controller is used for sending a second control instruction to the braking system based on the vehicle control signal, so that the braking system controls the braking state of the vehicle.
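The split between the first control instruction (to the drive motor controller) and the second control instruction (to the braking system) can be sketched as follows. This is a hypothetical illustration, not the patent's implementation: the signal names, the instruction format, and the assignment of signals to the two paths are all assumptions.

```python
# Hypothetical sketch: the vehicle controller mapping a gaze-derived vehicle
# control signal to either a first control instruction for the drive motor
# controller or a second control instruction for the braking system.
# Signal names and instruction dictionaries are illustrative assumptions.

DRIVE_SIGNALS = {"forward", "backward", "left_turn", "right_turn",
                 "accelerate", "decelerate"}
BRAKE_SIGNALS = {"stop"}

def dispatch(signal):
    """Return (target subsystem, instruction) for one vehicle control signal."""
    if signal in DRIVE_SIGNALS:
        return "drive_motor_controller", {"type": "first", "command": signal}
    if signal in BRAKE_SIGNALS:
        return "braking_system", {"type": "second", "command": signal}
    raise ValueError(f"unknown vehicle control signal: {signal}")
```

In a real vehicle these instructions would travel over the in-vehicle network (e.g. CAN) rather than as Python dictionaries; the sketch only shows the routing decision.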
Optionally, the vehicle further comprises a radar;
the radar is used for detecting obstacle information around the vehicle and transmitting the obstacle information to the virtual reality glasses device;
the virtual reality glasses device is used for displaying corresponding obstacle identifications in the vehicle environment panoramic image based on the obstacle information.
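As a rough illustration of how such obstacle identifications could be placed on the panoramic image, a radar detection given as distance and bearing can be mapped to a marker position. This sketch is not from the patent; the panorama size, coordinate convention, and marker format are all assumptions.

```python
# Hypothetical sketch: map a radar detection (distance + bearing relative to
# the vehicle's heading) to a labelled marker on an equirectangular panorama.
# Panorama dimensions and the horizon-line placement are illustrative only.

PANO_W, PANO_H = 360, 180  # assumed panorama size in pixels

def obstacle_marker(distance_m, bearing_deg):
    """Return (x, y, label) for one obstacle; bearing 0 = straight ahead,
    and the panorama spans bearings from -180 to +180 degrees."""
    x = int((bearing_deg + 180.0) / 360.0 * PANO_W) % PANO_W
    y = PANO_H // 2  # place markers on the horizon line for simplicity
    label = f"obstacle {distance_m:.1f} m"
    return x, y, label
```

A renderer in the glasses device would then draw the label at (x, y) on top of the vehicle environment panoramic image.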
In another aspect, the present application provides a vehicle control method, applied to a vehicle, where the vehicle includes a body main body, an image acquisition device, a vehicle controller, and a virtual reality glasses device, the image acquisition device is disposed at the periphery of the body main body, and a vehicle control interface is embedded in the virtual reality glasses device;
the method comprises the following steps:
collecting a vehicle peripheral environment image through the image collecting device, and sending the collected vehicle peripheral environment image to the virtual reality glasses device;
generating, with the virtual reality eyewear device, a vehicle environment panoramic image based on the vehicle peripheral environment image;
determining, with the virtual reality glasses device, a vehicle control signal based on the vehicle control interface and the position of the human eye gaze point, and sending the vehicle control signal to the vehicle controller;
and controlling, with the vehicle controller, the vehicle to adjust its state based on the vehicle control signal.
The embodiments of the application provide a vehicle and a vehicle control method. Image acquisition devices are arranged on the periphery of the vehicle body, and the virtual reality glasses device in the vehicle can generate a panoramic image of the vehicle environment from the surrounding-environment images they acquire. Occupants of the vehicle can therefore observe the vehicle's environment more intuitively through the virtual reality glasses device, obtaining more comprehensive vehicle environment information and improving the accuracy of driving decisions. Further, a vehicle control interface is embedded in the virtual reality glasses device, and an occupant can operate it by changing the position of his or her gaze point, adjusting the state of the vehicle without any manual operation. This effectively improves driving flexibility and the user's driving experience.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of a logic structure of a vehicle according to an embodiment of the present application;
fig. 2 is a schematic diagram of the vehicle control interface of a virtual reality glasses device in a vehicle according to an embodiment of the present application;
fig. 3 is a schematic view of a part of a vehicle according to an embodiment of the present application;
FIG. 4 is a schematic view of a portion of another vehicle according to an embodiment of the present application;
FIG. 5 is a schematic view of a portion of another vehicle according to an embodiment of the present application;
FIG. 6 is a logic diagram of holographic projection in a vehicle according to an embodiment of the present application;
fig. 7 is a flowchart of a vehicle control method according to an embodiment of the present application.
Reference numerals in the drawings are respectively expressed as:
100-a body main body; 101-an image acquisition device; 102-a whole vehicle controller; 103-virtual reality glasses apparatus; 1031-a vehicle control interface; 10311-virtual keys; 104-a holographic projection device; 1041-a front-facing holographic projection device; 1042-rear holographic projection device; 105-a motion capture device; 1051-a passenger compartment three-dimensional model generation component; 1052-a gesture recognition component; 106-a voice detection component; 107-a braking system; 108-a drive motor controller; 109-radar; 110-information host;
111-a central console; 112-signal lines; 113-seat.
Specific embodiments of the present application have been shown by way of the above drawings and will be described in more detail below. The drawings and the written description are not intended to limit the scope of the inventive concepts in any way, but rather to illustrate the inventive concepts to those skilled in the art by reference to the specific embodiments.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The terms of orientation, such as "upper", "lower", and "side", in the embodiments of the present application are generally based on the relative orientations of the structures shown in the drawings or of the vehicle as customarily placed, and are used merely to describe the structures and their relationships more clearly, not to describe an absolute orientation. The orientations may change when the vehicle is placed in a different pose; for example, "up" and "down" may be interchanged.
Unless defined otherwise, all technical terms used in the embodiments of the present application have the same meaning as commonly understood by one of ordinary skill in the art.
In order to make the technical scheme and advantages of the present application more apparent, embodiments of the present application will be described in further detail with reference to the accompanying drawings.
In a first aspect, the present application provides a vehicle. Referring to fig. 1, the vehicle includes a body main body 100, an image acquisition device 101, a vehicle controller 102, and a virtual reality glasses device 103;
the image acquisition device 101 is arranged at the periphery of the body main body 100, and is used for acquiring a vehicle peripheral environment image and transmitting the acquired vehicle peripheral environment image to the virtual reality glasses device 103;
the virtual reality glasses device 103 is used for generating a vehicle environment panoramic image based on the vehicle peripheral environment image;
the virtual reality glasses device 103 is embedded with a vehicle control interface 1031, and the virtual reality glasses device 103 is further configured to determine a vehicle control signal based on the vehicle control interface 1031 and a position of a point of gaze of human eyes, and send the vehicle control signal to the vehicle controller 102;
the vehicle controller 102 is configured to control the vehicle to perform a state adjustment based on the vehicle control signal.
In the present embodiment, the vehicle has image acquisition devices 101 arranged on the periphery of the vehicle body main body 100, which can capture images of the environment around the vehicle. The virtual reality glasses device 103 can generate a panoramic image of the vehicle environment from these images, so that an occupant wearing the virtual reality glasses device 103 can see a panoramic view of the vehicle's surroundings rather than only what is visible through the window openings. Moreover, with the virtual reality glasses device, the occupant can dynamically observe the environment around the vehicle, or view the relationship between the vehicle and its surroundings from a higher viewpoint, for example as if looking down from high altitude.
Further, the virtual reality glasses device 103 has a vehicle control interface 1031 embedded in it, and an occupant can adjust the state of the whole vehicle by controlling the position of his or her gaze point on the vehicle control interface 1031, realizing virtual-reality-assisted driving. In particular applications, the virtual reality glasses device 103 may be worn directly by the primary driver, who can then drive the vehicle through the vehicle control interface 1031 in the virtual reality glasses device 103; or the virtual reality glasses device 103 may be worn by an auxiliary driver, such as the occupant of the front passenger seat, so that the auxiliary driver can co-drive with the primary driver.
In summary, the embodiments of the application provide a vehicle and a vehicle control method. Image acquisition devices arranged on the periphery of the vehicle body supply surrounding-environment images from which the virtual reality glasses device generates a panoramic image of the vehicle environment, so that occupants can observe the vehicle's environment more intuitively, obtain more comprehensive vehicle environment information, and make more accurate driving decisions. Further, the vehicle control interface embedded in the virtual reality glasses device lets an occupant adjust the state of the vehicle simply by changing the position of his or her gaze point, with no manual operation, which effectively improves driving flexibility and the user's driving experience.
In some embodiments, the number of image capturing devices 101 is plural, and the plurality of image capturing devices 101 are configured to capture images of the vehicle surroundings in front of, behind, to the left, to the right, above, or below the vehicle. By combining a plurality of image capturing apparatuses 101, panoramic image generation of 360 degrees, 540 degrees, or 720 degrees can be achieved, providing more comprehensive information to the occupant.
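One simple way to picture how frames from the six directional cameras could be assembled is an unfolded cube-map canvas. This is a hypothetical sketch only: the patent does not specify a stitching method, and a real system would warp and blend overlapping views (to reach the 360/540/720-degree panoramas mentioned above) rather than paste square faces side by side.

```python
import numpy as np

# Hypothetical sketch: combine frames from six image acquisition devices
# (front/rear/left/right/above/below) into a cube-map style canvas that a
# VR glasses device could sample when rendering the panorama.
# The face size and the cross layout are illustrative assumptions.

FACE = 64  # per-camera frame size, assumed square for simplicity

def make_cubemap(frames):
    """frames: dict keyed front/rear/left/right/above/below, each an
    (FACE, FACE, 3) uint8 image. Returns the classic 4x3 unfolded cross."""
    canvas = np.zeros((3 * FACE, 4 * FACE, 3), dtype=np.uint8)
    # (row, col) cell positions in the unfolded cube cross
    layout = {
        "above": (0, 1), "left": (1, 0), "front": (1, 1),
        "right": (1, 2), "rear": (1, 3), "below": (2, 1),
    }
    for name, (r, c) in layout.items():
        canvas[r * FACE:(r + 1) * FACE, c * FACE:(c + 1) * FACE] = frames[name]
    return canvas
```

The glasses device would then map head orientation to a region of this canvas when deciding what to display.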
Optionally, referring to fig. 2, the vehicle control interface 1031 has virtual keys 10311 therein corresponding to at least one of forward, backward, left-turn, right-turn, acceleration, deceleration, or stop;
the virtual reality glasses device 103 is configured to, when it detects that the duration for which the eye gaze point rests on a target virtual key exceeds a preset duration threshold, take the control signal corresponding to that target virtual key as the vehicle control signal.
Based on the above arrangement, by deliberately controlling the position and dwell time of his or her gaze point on the virtual keys 10311 of the vehicle control interface 1031, the user can make the virtual reality glasses device 103 issue the corresponding vehicle control signal, achieving convenient vehicle control.
Alternatively, the preset duration threshold may be 3 seconds. The vehicle control interface 1031 may be displayed floating above the vehicle environment panoramic image, in a relatively transparent form so as not to obscure the panorama. Moreover, while the displayed content of the panoramic image changes as the occupant's head moves, the position of the vehicle control interface 1031 may remain fixed at all times, so that the occupant can control the vehicle in real time.
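The dwell-time selection described above can be sketched as follows. This is an illustrative sketch only: the key names, hit-test rectangles, and the reset behaviour after a key fires are assumptions, and only the 3-second threshold is suggested by the description.

```python
# Hypothetical sketch: select a virtual key once the gaze point has rested
# on it longer than a preset threshold. Key names and the interface
# coordinates are illustrative assumptions.

DWELL_THRESHOLD_S = 3.0  # one option suggested by the description

# Each virtual key: name -> (x_min, y_min, x_max, y_max) in interface coords
VIRTUAL_KEYS = {
    "forward":  (0.40, 0.70, 0.60, 0.80),
    "backward": (0.40, 0.85, 0.60, 0.95),
    "stop":     (0.40, 0.55, 0.60, 0.65),
}

class GazeDwellSelector:
    """Tracks how long the gaze stays on one key; emits it as a control signal."""

    def __init__(self, threshold_s=DWELL_THRESHOLD_S):
        self.threshold_s = threshold_s
        self.current_key = None
        self.dwell_start = None

    def _hit_test(self, x, y):
        for name, (x0, y0, x1, y1) in VIRTUAL_KEYS.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                return name
        return None

    def update(self, x, y, now):
        """Feed one gaze sample; return a key name once dwell exceeds threshold."""
        key = self._hit_test(x, y)
        if key != self.current_key:       # gaze moved to a new key (or off keys)
            self.current_key = key
            self.dwell_start = now if key else None
            return None
        if key and now - self.dwell_start >= self.threshold_s:
            self.dwell_start = now        # reset so one fixation fires only once
            return key                    # this becomes the vehicle control signal
        return None
```

In use, the glasses' eye tracker would call `update` on every gaze sample, and a returned key name would be forwarded to the vehicle controller as the corresponding control signal.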
In some embodiments, referring to fig. 1, the vehicle further comprises: holographic projection device 104 and information host 110;
the holographic projection device 104 is configured to obtain holographic projection data of a target object from the information host 110, and perform three-dimensional holographic projection of the target object within a passenger compartment of the vehicle, where the target object includes at least one of an avatar, an in-vehicle entertainment page, or a vehicle control page.
In this embodiment, the holographic projection device 104 in the vehicle can acquire holographic projection data of the target object from the information host 110, and project the target object in the vehicle, for example, can project a virtual three-dimensional character or a small car game, so as to provide more diversified entertainment modes for passengers. Alternatively, the holographic projection device 104 may project a control page of the vehicle in the passenger compartment, so that each passenger may obtain vehicle information such as a control state of the vehicle.
In some embodiments, the virtual reality glasses device 103 may also acquire virtual reality data from the information host 110, and generate a corresponding virtual reality screen, so that the occupant can perform some entertainment activities or experience virtual driving with the virtual reality glasses device in the vehicle.
In some embodiments, referring to fig. 1, the vehicle further comprises: a motion capture device 105;
the motion capture device 105 is used for capturing motions of passengers in the passenger compartment, generating motion interaction instructions, and sending the motion interaction instructions to the information host 110;
information host 110 receives the motion interaction instructions, updates holographic projection data of the target object in response to the motion interaction instructions, and causes a state of the target object projected by holographic projection device 104 to be updated.
In this embodiment, the actions of the occupant in the passenger cabin may be captured and a corresponding action interaction instruction may be generated, and the action interaction instruction may be used to adjust the state of the target object, so as to implement virtual interaction between the occupant and the target object. For example, when the target object is a virtual three-dimensional character, the occupant may perform action interaction with the virtual three-dimensional character, such as handshake; when the target object is a car game, the car game can be a three-dimensional game; when the target object is a control page of the vehicle, the occupant can control the vehicle state at any seat position, for example, turning on an air conditioner, turning up volume, adjusting the state of a window, or the like. Therefore, the use mode of the vehicle is more flexible, and the user experience is better.
Optionally, the motion capture device 105 includes an information transfer unit; after the motion capture device 105 generates an action interaction instruction, this unit may transmit the instruction not only to the information host 110 but also to the virtual reality glasses device 103, and the virtual reality glasses device 103 may change the displayed picture in response, for example adjusting the viewing angle or the zoom ratio of the picture.
Alternatively, the user may enlarge or shrink the displayed picture by moving two fingers apart or together, and switch between a first-person and a third-person viewing angle by sliding the palm up or down, where the first-person viewing angle may be the view of the environment outside the vehicle as seen from the driver's seat, and the third-person viewing angle may be a high-altitude view of the vehicle as a whole in relation to its external environment.
Of course, to avoid misoperation, the information host 110 or the virtual reality glasses device 103 may contain a determining unit configured to determine whether an action interaction instruction is intended to control the holographic projection device 104 or the virtual reality glasses device 103. Only when the target of the action interaction instruction is determined to be the holographic projection device 104 does the information host 110 update the holographic projection data of the target object; only when the target is determined to be the virtual reality glasses device 103 does the virtual reality glasses device 103 update the virtual reality display picture.
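The determining unit's routing behaviour might look like this in outline. The instruction names and the set-membership rule are hypothetical, not from the patent.

```python
# Hypothetical sketch: route an action interaction instruction either to the
# holographic projection device or to the VR glasses, so a gesture meant for
# one does not disturb the other. Instruction names are illustrative.

HOLOGRAM_ACTIONS = {"handshake", "select_menu", "toggle_page"}
GLASSES_ACTIONS = {"zoom_in", "zoom_out", "first_person", "third_person"}

def route_instruction(instruction):
    """Return which device should respond to the instruction, or None."""
    if instruction in HOLOGRAM_ACTIONS:
        return "holographic_projection_device"  # info host updates projection data
    if instruction in GLASSES_ACTIONS:
        return "virtual_reality_glasses"        # glasses update displayed picture
    return None                                 # unrecognized: ignore, avoiding misoperation
```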
Optionally, referring to FIG. 1, the motion capture device 105 includes a passenger compartment three-dimensional model generation component 1051 and a gesture recognition component 1052;
the passenger cabin three-dimensional model generating component 1051 is used for acquiring three-dimensional coordinate information of objects in the passenger cabin and generating a three-dimensional model of the passenger cabin;
the gesture recognition component 1052 is configured to capture hand coordinate information of an occupant, determine a gesture motion of the occupant based on the hand coordinate information and a three-dimensional model of the occupant compartment, and generate a motion interaction instruction based on the gesture motion.
Based on the arrangement, the generation of the three-dimensional model of the passenger cabin can be realized, the gesture actions of the passengers can be accurately acquired on the basis of the three-dimensional model of the passenger cabin, and the action interaction instruction can be accurately generated.
Optionally, the passenger compartment three-dimensional model generation component 1051 may include a depth sensor or a depth camera. To ensure the accuracy of the three-dimensional model, depth sensors or depth cameras may be provided at various locations in the vehicle's passenger compartment. The passenger compartment three-dimensional model generation component 1051 may also have a model calculation and generation unit for integrating the data acquired by the depth sensors or depth cameras to generate a three-dimensional model of the passenger compartment. Optionally, the passenger compartment three-dimensional model generation component 1051 may operate in real time, i.e., after generating the three-dimensional model of the passenger compartment, it recalculates and updates the model in response to the latest detected data, so that the model remains more accurate.
Optionally, the gesture recognition component 1052 may include a depth sensor, a depth camera, or an ordinary camera. The gesture recognition component 1052 may be mounted in a position facing the occupant seats, for example on the center console or between the front and rear rows of seats; and/or it may be mounted at the top of the passenger compartment, opposite the occupant seats. The gesture recognition component 1052 may further include a gesture matching unit, which determines the occupant's hand coordinate information from the hand depth information acquired by the depth sensor or depth camera, determines the occupant's gesture action based on the hand coordinate information and the three-dimensional model of the passenger compartment, matches that gesture action against pre-stored or pre-acquired gesture template actions (a correspondence between a plurality of gesture template actions and action interaction instructions may be stored in advance), and takes the action interaction instruction corresponding to the best-matching gesture template action as the instruction for the occupant's current gesture. When all of the calculated matching-degree values are below a threshold, the match is deemed unsuccessful and the gesture control flow is not executed (i.e., the process of controlling the state of the target object through an action interaction instruction is not carried out). Alternatively, the unit may recognize the occupant's gesture actions directly from hand images acquired by an ordinary camera or a depth camera. The gesture recognition component 1052 may be arranged at any position in the vehicle from which the occupant's hand movements can be captured.
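The template-matching step could be sketched as below. The feature vectors, the cosine-similarity metric, and the threshold value are all assumptions made for illustration; the patent only requires some matching degree and a threshold below which no instruction is issued.

```python
import math

# Hypothetical sketch: match a captured gesture (reduced to a feature vector,
# e.g. a normalized hand-displacement direction) against stored templates.
# Templates, metric, and threshold are illustrative assumptions.

MATCH_THRESHOLD = 0.8

# action interaction instruction -> template feature vector
TEMPLATES = {
    "zoom_in":      [1.0, 0.0, 0.0],   # two fingers moving apart
    "zoom_out":     [-1.0, 0.0, 0.0],  # two fingers moving together
    "first_person": [0.0, 1.0, 0.0],   # palm sliding up
    "third_person": [0.0, -1.0, 0.0],  # palm sliding down
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def match_gesture(features):
    """Return the best-matching instruction, or None when every score falls
    below the threshold (the no-match case, where gesture control stops)."""
    best, best_score = None, 0.0
    for instruction, template in TEMPLATES.items():
        score = cosine_similarity(features, template)
        if score > best_score:
            best, best_score = instruction, score
    return best if best_score >= MATCH_THRESHOLD else None
```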
In some embodiments, referring to fig. 1, the vehicle further includes a voice detection component 106;
the voice detection component 106 is configured to detect a sound in the passenger compartment, recognize a voice interaction instruction therefrom, and send the voice interaction instruction to the information host 110;
information host 110 receives the voice interaction instructions and updates holographic projection data of the target object in response to the voice interaction instructions, causing the status of the target object projected by holographic projection device 104 to be updated.
In this embodiment, the occupant can also interact with the target object directly through voice, updating the state of the target object and improving interaction flexibility.
Optionally, the voice detection component 106 may pre-store or acquire a correspondence between different voice segments and voice interaction instructions. When a sound is detected in the passenger compartment, the component preprocesses it (for example, silence removal or segmentation), computes the similarity between the resulting audio segment and each voice segment in the correspondence, determines the most similar voice segment, and takes the voice interaction instruction corresponding to that segment as the instruction for the detected audio. When all of the calculated similarity values fall below a threshold, the match is deemed unsuccessful and the voice control flow is not executed (i.e., the state of the target object is not controlled through a voice interaction instruction).
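A minimal sketch of the described similarity matching, assuming the preprocessed audio segment is reduced to a feature vector and compared by cosine similarity; the feature vectors, command names, and threshold below are illustrative, not taken from the patent:

```python
import numpy as np

SIM_THRESHOLD = 0.85  # assumed; the patent only specifies "a threshold"

# Hypothetical correspondence: stored reference feature vector -> instruction.
REFERENCES = [
    (np.array([0.9, 0.1, 0.0]), "OPEN_WINDOW"),
    (np.array([0.0, 0.2, 0.9]), "PLAY_MUSIC"),
]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recognize_command(features):
    """Compare the preprocessed audio features against each stored voice
    segment and return the instruction of the most similar one, or None
    when every similarity is below the threshold (voice control is skipped)."""
    best_cmd, best_sim = None, -1.0
    for ref, cmd in REFERENCES:
        sim = cosine(features, ref)
        if sim > best_sim:
            best_cmd, best_sim = cmd, sim
    return best_cmd if best_sim >= SIM_THRESHOLD else None
```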
In some embodiments, referring to fig. 1, the vehicle further includes a braking system 107 and a drive motor controller 108;
the vehicle controller 102 is configured to issue a first control instruction to the driving motor controller 108 based on the vehicle control signal, so that the driving motor controller 108 adjusts an operation state of the vehicle motor based on the first control instruction;
or, the vehicle controller 102 is configured to issue a second control command to the brake system 107 based on the vehicle control signal, so that the brake system 107 controls the braking state of the vehicle.
In this embodiment, according to the content of the received vehicle control signal, the vehicle controller 102 adjusts the operating state of the vehicle's drive motor through the drive motor controller 108, for example to increase or decrease vehicle power; or it controls the braking state of the vehicle through the braking system 107, for example to decelerate or brake the vehicle.
Alternatively, the brake system 107 may be a brake-by-wire system. It is typically mounted on the vehicle chassis and provides braking for the entire chassis together with four-wheel dynamic torque control, so it can be used to control forward motion, reversing, left and right turns, deceleration, acceleration, stopping, and so on. The drive motor controller 108 is generally arranged near the vehicle motor and controls, among other things, the motor's rotational speed.
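The routing described above can be sketched as a simple dispatch; the controller names, signal set, and return values are assumed for illustration and stand in for the first and second control instructions sent to components 108 and 107:

```python
from enum import Enum

class Signal(Enum):
    ACCELERATE = "accelerate"
    DECELERATE = "decelerate"
    STOP = "stop"

def dispatch(signal):
    """Route a vehicle control signal the way the vehicle controller 102 is
    described to: power changes go to the drive motor controller (first
    control instruction), braking goes to the brake system (second control
    instruction). Returns (target component, instruction) as a stand-in for
    the actual bus message."""
    if signal in (Signal.ACCELERATE, Signal.DECELERATE):
        return ("drive_motor_controller", f"set_power:{signal.value}")
    if signal is Signal.STOP:
        return ("brake_system", "apply_brakes")
    raise ValueError(f"unknown signal: {signal}")
```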
In some embodiments, referring to fig. 1, the vehicle further comprises a radar 109;
the radar 109 is for detecting obstacle information around the vehicle and transmitting the obstacle information to the virtual reality glasses apparatus 103;
the virtual reality glasses apparatus 103 is configured to display a corresponding obstacle identification in the vehicle environment panoramic image based on the obstacle information.
In this embodiment, the radar 109 provides the virtual reality glasses device 103 with obstacle information around the vehicle, so that obstacle identifiers around the vehicle are displayed more comprehensively in the vehicle environment panoramic image, giving the occupant more complete driving information. In addition, because radar has a long detection range, it effectively enlarges the detection range of vehicle environment information, lets the user learn of possible surrounding obstacles earlier, and buys the user more time to avoid risk in an emergency.
Alternatively, the radar 109 may be an external radar, i.e., one arranged on the periphery of the vehicle, so that information about the environment outside the vehicle can be acquired more accurately. The radar includes, but is not limited to, millimeter-wave radar, ultrasonic radar, or corner radar.
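One plausible way to place a radar return as an obstacle identifier in the 360-degree panoramic image is to map its bearing to a horizontal pixel and its distance to a proximity value used to scale the marker. The panorama width and maximum range below are assumptions, not values from the patent:

```python
PANO_WIDTH = 4096      # assumed panorama width in pixels (covers 360 degrees)

def obstacle_to_pixel(bearing_deg, distance_m, max_range_m=100.0):
    """Place a radar return in a 360-degree panorama: the bearing maps
    linearly to the horizontal pixel, and the distance maps to a 0..1
    proximity value (1.0 = touching the vehicle) for sizing the marker."""
    x = int((bearing_deg % 360.0) / 360.0 * PANO_WIDTH)
    proximity = max(0.0, 1.0 - min(distance_m, max_range_m) / max_range_m)
    return x, round(proximity, 3)
```

The virtual reality glasses device would then draw the obstacle identifier at pixel column `x`, scaled by `proximity`.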
Optionally, a speed sensor in the vehicle may also transmit the current vehicle speed to the virtual reality glasses device 103, so that the device displays the current speed information. Likewise, the vehicle's navigation system may transmit navigation information to the virtual reality glasses device 103 for display.
Alternatively, the information host 110 may be connected to the vehicle controller 102, the voice detection component 106, the radar 109, the holographic projection device 104, the image acquisition device 101, the virtual reality glasses device 103, and the motion capture device 105, serving as the information transfer medium between these devices and components. For example, the information host 110 may acquire data from the image acquisition device 101 or the radar 109 and forward it to the virtual reality glasses device 103; a vehicle control signal issued by the virtual reality glasses device 103 may first be sent to the information host 110, which identifies and validates it before forwarding it to the vehicle controller 102.
Alternatively, referring to figs. 3-5, the vehicle may have front and rear rows of seats 113 in the passenger compartment and at least two holographic projection devices 104. When there are two holographic projection devices 104, they may be arranged at the top of the passenger compartment at the positions corresponding to the driver's seat and the front passenger seat, respectively. When there are more than two, at least one may be set as a master device and the others as slave devices; the master device connects to the information host through the center console or directly, and the slave devices exchange data with the master device in a wired or wireless manner, thereby realizing a combined holographic projection cockpit.
Alternatively, the number of virtual reality glasses devices 103 may be one, two, or more. When there is only one, it may be provided between the driver's seat and the front passenger seat; when there are at least two, one may be provided for each of the driver's seat and the front passenger seat.
Optionally, the vehicle may also have a battery for powering the aforementioned powered equipment (the holographic projection device 104, virtual reality glasses device 103, image acquisition device 101, radar 109, motion capture device, information host, braking system, vehicle controller, drive motor controller, motor, etc.). Referring to figs. 3-5, the vehicle may have a center console 111, which may be signal-connected to the virtual reality glasses device 103 or the holographic projection device 104 through a signal line 112, or directly through a wireless connection. The center console 111 may also be signal-connected to the information host 110 to obtain information from it and send that information to the virtual reality glasses device 103 or the holographic projection device 104.
Alternatively, referring to fig. 6, the vehicle may have a front holographic projection device 1041 and a rear holographic projection device 1042. After the center console 111 acquires the holographic projection data of the target object from the information host 110, the front holographic projection device 1041 may acquire the data from the center console 111, while the rear holographic projection device 1042 may acquire it either from the front holographic projection device 1041 or directly from the center console 111. The front and rear holographic projection devices may then perform holographic projection of the target object jointly or separately.
In summary, with the vehicle provided by the embodiments of the application, on the one hand the occupant can immersively view the vehicle's surrounding environment image or obstacle information by means of the virtual reality glasses device and conveniently control the state of the vehicle; on the other hand, the occupant can enjoy interactive entertainment with a virtual object by means of the holographic projection device, making the vehicle's modes of use more flexible and the occupant's driving experience better.
In another aspect, the present application provides a vehicle control method, applied to any of the vehicles described above. The vehicle includes a vehicle body main body 100, an image acquisition device 101, a whole vehicle controller 102, and a virtual reality glasses device 103, wherein the image acquisition device 101 is arranged on the periphery of the vehicle body main body 100, and a vehicle control interface 1031 is embedded in the virtual reality glasses device 103.
Referring to fig. 7, the method includes:
step 701: collecting a vehicle peripheral environment image through the image collecting device 101, and transmitting the collected vehicle peripheral environment image to the virtual reality glasses device 103;
step 702: generating a vehicle environment panoramic image based on the vehicle peripheral environment image using the virtual reality glasses device 103;
step 703: determining a vehicle control signal based on the vehicle control interface 1031 and the position of the eye gaze point by using the virtual reality glasses apparatus 103, and transmitting the vehicle control signal to the whole vehicle controller 102;
step 704: the vehicle controller 102 controls the vehicle to perform state adjustment based on the vehicle control signal.
The details of the method in this embodiment correspond to the control logic of the components of the vehicle in the foregoing embodiments; refer to those embodiments for further details, which are not repeated here.
With the method provided by the embodiments of the application, the occupant can observe the vehicle's environment image more intuitively by means of the virtual reality glasses device and obtain more comprehensive vehicle environment information, which helps improve the accuracy of driving decisions. Further, because the vehicle control interface is embedded in the virtual reality glasses device, the occupant can operate the interface by shifting the gaze point of the eyes, adjusting the state of the vehicle without manual operation; this effectively improves driving flexibility and the user's driving experience.
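The gaze-based operation described here (and in claim 2: a control signal fires when the eye gaze point rests on a virtual key longer than a preset duration threshold) can be sketched as follows. The key layout, sampling interval, and dwell threshold are illustrative assumptions:

```python
HOLD_SECONDS = 1.5  # assumed dwell threshold; the patent says "preset time length"

# Hypothetical key layout: name -> (x0, y0, x1, y1) in interface coordinates.
KEYS = {"forward": (0, 0, 100, 50), "stop": (0, 60, 100, 110)}

def key_at(point):
    """Return the virtual key under a gaze point, or None."""
    x, y = point
    for name, (x0, y0, x1, y1) in KEYS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

def select_key(gaze_samples, sample_dt=0.1):
    """Return the key whose region the gaze point stayed in for at least
    HOLD_SECONDS of consecutive samples; otherwise None (no control
    signal is issued). Looking away resets the dwell timer."""
    held, duration = None, 0.0
    for point in gaze_samples:
        key = key_at(point)
        if key is not None and key == held:
            duration += sample_dt
        else:
            held, duration = key, 0.0 if key is None else sample_dt
        if held and duration >= HOLD_SECONDS:
            return held
    return None
```

A selected key would then be translated into the corresponding vehicle control signal and sent on to the whole vehicle controller.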
In the present disclosure, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. The term "plurality" refers to two or more, unless explicitly defined otherwise.
Other embodiments of the application will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. The specification and examples are to be regarded in an illustrative manner only.
It is to be understood that the application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (10)

1. A vehicle, characterized by comprising a vehicle body main body (100), an image acquisition device (101), a whole vehicle controller (102) and a virtual reality glasses device (103);
the image acquisition device (101) is arranged at the periphery of the vehicle body main body (100) and is used for acquiring a vehicle peripheral environment image and transmitting the acquired vehicle peripheral environment image to the virtual reality glasses device (103);
the virtual reality glasses device (103) is used for generating a vehicle environment panoramic image based on the vehicle peripheral environment image;
the virtual reality glasses device (103) is embedded with a vehicle control interface (1031), and the virtual reality glasses device (103) is further used for determining a vehicle control signal based on the vehicle control interface (1031) and the positions of the eye gaze points and sending the vehicle control signal to the whole vehicle controller (102);
the whole vehicle controller (102) is used for controlling the vehicle to adjust the state based on the vehicle control signal.
2. The vehicle of claim 1, characterized in that the vehicle control interface (1031) has virtual keys (10311) therein corresponding to at least one of forward, reverse, left turn, right turn, acceleration, deceleration, or stop;
the virtual reality glasses device (103) is used for taking a control signal corresponding to a target virtual key as the vehicle control signal when the time length of the eye gaze point falling on the target virtual key is detected to be larger than a preset time length threshold.
3. The vehicle of claim 1, characterized in that the vehicle further comprises: a holographic projection device (104) and an information host (110);
the holographic projection device (104) is used for acquiring holographic projection data of a target object from the information host (110), and carrying out three-dimensional holographic projection of the target object in a passenger cabin of the vehicle, wherein the target object comprises at least one of an avatar, a car entertainment page or a car control page.
4. A vehicle according to claim 3, characterized in that the vehicle further comprises: a motion capture device (105);
the motion capture device (105) is used for capturing the motions of passengers in the passenger cabin, generating motion interaction instructions and sending the motion interaction instructions to the information host (110);
the information host (110) receives the action interaction instruction, and updates holographic projection data of the target object in response to the action interaction instruction, so that the state of the target object projected by the holographic projection device (104) is updated.
5. The vehicle of claim 4, characterized in that the motion capture device (105) comprises a passenger compartment three-dimensional model generation component (1051) and a gesture recognition component (1052);
the passenger cabin three-dimensional model generation component (1051) is used for acquiring three-dimensional coordinate information of objects in the passenger cabin and generating a three-dimensional model of the passenger cabin;
the gesture recognition component (1052) is used for capturing hand coordinate information of an occupant, determining gesture actions of the occupant based on the hand coordinate information and a three-dimensional model of the occupant compartment, and generating the action interaction instructions based on the gesture actions.
6. A vehicle according to claim 3, characterized in that the vehicle further comprises a voice detection assembly (106);
the voice detection component (106) is used for detecting sound in the passenger cabin, identifying a voice interaction instruction therefrom, and sending the voice interaction instruction to the information host (110);
the information host (110) receives the voice interaction instruction and updates holographic projection data of the target object in response to the voice interaction instruction, so that the state of the target object projected by the holographic projection device (104) is updated.
7. The vehicle according to claim 1, characterized in that the number of the image capturing devices (101) is plural, and the plurality of the image capturing devices (101) are configured to capture the vehicle peripheral environment image in front of, behind, on the left side, on the right side, above or below the vehicle.
8. The vehicle of claim 1, further comprising a braking system (107) and a drive motor controller (108);
the whole vehicle controller (102) is used for sending a first control instruction to the driving motor controller (108) based on the vehicle control signal so that the driving motor controller (108) can adjust the running state of a vehicle motor based on the first control instruction;
or, the whole vehicle controller (102) is used for sending a second control instruction to the braking system (107) based on the vehicle control signal so that the braking system (107) controls the braking state of the vehicle.
9. The vehicle according to claim 1, characterized in that the vehicle further comprises a radar (109);
the radar (109) is configured to detect obstacle information around the vehicle and transmit the obstacle information to the virtual reality glasses apparatus (103);
the virtual reality glasses device (103) is used for displaying corresponding obstacle identifications in the vehicle environment panoramic image based on the obstacle information.
10. A vehicle control method, characterized by being applied to a vehicle, the vehicle comprising a vehicle body main body (100), an image acquisition device (101), a whole vehicle controller (102) and a virtual reality glasses device (103), wherein the image acquisition device (101) is arranged at the periphery of the vehicle body main body (100), and a vehicle control interface (1031) is embedded in the virtual reality glasses device (103);
the method comprises the following steps:
collecting a vehicle peripheral environment image through the image collecting device (101), and sending the collected vehicle peripheral environment image to the virtual reality glasses device (103);
generating, with the virtual reality glasses device (103), a vehicle environment panoramic image based on the vehicle peripheral environment image;
determining, with the virtual reality glasses device (103), a vehicle control signal based on the vehicle control interface (1031) and a position of a human eye gaze point, and transmitting the vehicle control signal to the whole vehicle controller (102);
and controlling the vehicle to adjust the state based on the vehicle control signal by the whole vehicle controller (102).
CN202310651325.5A 2023-06-01 2023-06-01 Vehicle and vehicle control method Pending CN116653778A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310651325.5A CN116653778A (en) 2023-06-01 2023-06-01 Vehicle and vehicle control method

Publications (1)

Publication Number Publication Date
CN116653778A true CN116653778A (en) 2023-08-29

Family

ID=87714871



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination