CN108099790B - Driving assistance system based on augmented reality head-up display and multi-screen voice interaction - Google Patents

Driving assistance system based on augmented reality head-up display and multi-screen voice interaction

Info

Publication number
CN108099790B
CN108099790B (application CN201711271533.3A)
Authority
CN
China
Prior art keywords
information
screen
driver
display
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711271533.3A
Other languages
Chinese (zh)
Other versions
CN108099790A (en)
Inventor
姜立军 (Jiang Lijun)
张泽权 (Zhang Zequan)
李哲林 (Li Zhelin)
张瑜 (Zhang Yu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201711271533.3A priority Critical patent/CN108099790B/en
Publication of CN108099790A publication Critical patent/CN108099790A/en
Application granted granted Critical
Publication of CN108099790B publication Critical patent/CN108099790B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60KARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Transportation (AREA)
  • Multimedia (AREA)
  • Fittings On The Vehicle Exterior For Carrying Loads, And Devices For Holding Or Mounting Articles (AREA)
  • Navigation (AREA)
  • Instrument Panels (AREA)

Abstract

The invention discloses a driving assistance system based on augmented reality head-up display and multi-screen voice interaction, which comprises an augmented reality head-up display system, a multi-screen voice operating system and a dual-screen central control display operating system. The augmented reality head-up display system identifies complex road information and superimposes navigation and warning information on the real scene to prompt the driver; the multi-screen voice operating system realizes multi-screen interaction and human-machine voice interaction among the automobile HUD, HDD, HCS, CS and mobile devices through a convenient touch key; and the dual-screen central control display operating system lets the co-driver operate without affecting the main driver's driving. The driving assistance system can effectively assist the driver in the driving task, enhance the driver's situational awareness, reduce the probability of inattentional blindness, change blindness and similar lapses during driving, and reduce the driver's cognitive load, thereby providing a safer driving experience.

Description

Driving assistance system based on augmented reality head-up display and multi-screen voice interaction
Technical Field
The invention relates to a system that assists a driver in driving by combining augmented reality with head-up display technology, multi-screen interaction technology, voice interaction technology and dual-central-control-screen technology.
Background
A Head-Up Display (HUD) was first used in the flight assistance systems of military aircraft: it lets the pilot see important information without looking down at the instruments, effectively reducing the frequency of head-down instrument checks and avoiding the loss of situational awareness and interruption of attention they cause. In recent years, HUDs have been moving rapidly into vehicles to display driving status information, navigation information, and even communication and entertainment information. With the development of holographic and laser imaging technologies and of transparent OLED (Organic Light-Emitting Diode) display technology, the vehicle windshield is becoming a new display carrier, expanding the application space of the HUD.
Augmented Reality (AR) is a technology that superimposes virtual information on real-world information to enhance the user's perception of, and interaction with, the real world. An AR system has three prominent features: first, it integrates the real world with virtual information; second, it is interactive in real time; third, it registers and positions virtual objects in three-dimensional space. As a novel human-computer interface and simulation tool, AR is receiving increasingly wide attention, plays an important role, and shows great potential.
When a driver uses information devices while driving, the consumption of visual resources during information input and output is a major factor affecting driving safety. Entering information or operating a device by hand also occupies bodily resources and affects driving to some degree. To address this, novel interaction modes represented by speech recognition can reduce the dependence of human-computer interaction on the hands and eyes. The driver operates the device directly by speaking, in a natural manner, and the device outputs the corresponding information as voice prompts, reducing the occupation of visual and bodily resources. The maturity of speech recognition technology lets the driver operate information devices such as mobile phones more conveniently through the natural modality of voice interaction and enjoy the many conveniences brought by feature-rich information devices.
With the rapid growth of public demands on the timeliness, accuracy, convenience and interactivity of information, the boundaries between media forms are fading quickly and the era of multi-screen interaction has arrived. From traditional televisions and screens to computers and smart mobile devices, and on to public displays — liquid-crystal advertising machines, interactive kiosks, self-service terminals, large-screen displays and other outdoor media — screens have multiplied vigorously and now form a complete ecosystem that surrounds us and enriches our lives. In a modern automobile, the display functions can be divided among four displays: the HUD, the HDD (Head-Down Display), the HCS (High Center Stack) and the CS (Center Stack); with mobile phone screens and other displays added, the forms of information display are increasingly diverse, the classification boundaries of information functions are blurring, all the displays in the car are interconnected, and the demand for flexibly displaying all types of information keeps growing.
Disclosure of Invention
The invention combines augmented reality technology with head-up display technology, and multi-screen interaction technology with voice interaction technology, to develop a driving assistance system that can superimpose navigation, warning and other information on the real scene without affecting the driver's driving, playing a role of enhanced perception, and that meets the driver's various operating needs through natural voice interaction and multi-screen interaction without increasing the operating burden.
The purpose of the invention is realized by the following technical scheme.
The driving assistance system based on augmented reality head-up display and multi-screen voice interaction comprises an augmented reality head-up display system, a multi-screen voice operating system and a dual-screen central control display operating system. The augmented reality head-up display system identifies complex road information and superimposes navigation and warning information on the real scene to prompt the driver; the multi-screen voice operating system realizes multi-screen interaction and human-machine voice interaction among the automobile HUD, HDD, HCS, CS and mobile devices through a convenient touch key; and the dual-screen central control display operating system lets the co-driver operate without affecting the main driver's driving.
Further, the augmented reality head-up display system comprises a road information detection module, a computer vision analysis module and an augmented reality head-up display module;
the road information detection module records the dynamic road scene in front of the vehicle through dual sensors at the front of the vehicle, forming a video stream with depth information and high recognizability — that is, the road scene can be clearly restored even at night. The computer vision analysis module analyzes this video stream with computer vision algorithms to obtain salient feature information carrying depth information. The augmented reality head-up display module converts the analysis result of the computer vision analysis module into augmented reality image information and superimposes it on the real scene through the augmented reality head-up display screen without obstructing the driver's line of sight, playing a role of enhanced perception.
Further, the computer vision analysis module analyzes the video stream acquired by the road information detection module, extracts salient feature information by its features, and compares it with the entries in a feature library to identify each feature; it then computes a prompting weight by integrating this information with the vehicle driving information and the driver's operating habits, ranks the salient information by weight, and determines the prompting or warning mode, eliminating information that is unimportant or poses no threat.
Further, the content displayed by the HUD can intelligently adjust its color, brightness, size, imaging distance and range according to the weight, the driving environment and other conditions.
Furthermore, the system also comprises a touch key module, a multi-screen operation module and an intelligent voice module;
the touch key module controls display contents to be converted in multiple screens through touch sliding, or voice instruction input is carried out through clicking, the multiple screen operation module is connected with information of multiple screens in an automobile HUD, an HDD, an HCS, a CS and mobile equipment and achieves interconnection, seamless butt joint is achieved, the information appears on any screen required by a driver, the driver can flexibly control the display position of the information without lowering the head, the intelligent voice module is a driving intelligent assistant for the driver, the driver activates after clicking the touch key, the driver only needs to speak out an instruction after the activation, and the intelligent voice module can recognize and execute the instruction.
Furthermore, the mobile device is connected to the system via Bluetooth or another wireless connection; once connected, a vehicle-system application pre-installed on the mobile device lets it exchange information with the vehicle system, making the mobile device part of the vehicle system's display.
Furthermore, the system also comprises a co-driver independent audio module and an interaction module between the main-driver central control screen and the co-driver central control screen.
The MCS (Main-Driver Center Stack) main-driver central control screen is angled toward the main driver, so that when the driver looks down the line of sight is exactly perpendicular to the screen. The CCS (Co-Driver Center Stack) co-driver central control screen is opposite the MCS: its angle is tilted toward the co-driver, whose line of sight when looking down is likewise exactly perpendicular to the screen, so the co-driver can view information unrelated to the driving task. The co-driver independent audio module means that the audio of the CCS screen is independent of the whole-car audio and is played near the co-driver's position to avoid interfering with the main driver.
Furthermore, split-screen operation and information exchange can be carried out between the MCS main-driver central control screen and the CCS co-driver central control screen.
Compared with the prior art, the invention has the beneficial effects that:
the driving assistance system based on augmented reality head-up display technology and multi-screen voice interaction can effectively assist the driver in the driving task, enhance the driver's situational awareness, reduce the probability of inattentional blindness, change blindness, distraction and the like during driving, and reduce the driver's cognitive load, thereby providing a safer driving experience.
Drawings
FIG. 1 is a block diagram of a system in an example of the invention.
FIG. 2 is a schematic diagram of the general orientation of the touch keys on the steering wheel in the embodiment.
FIG. 3 is a schematic diagram of the positions of the HUD, HDD, HCS, CS on the center console of the automobile.
FIG. 4 is a schematic diagram of the positions of the MCS and the CCS on the console of the vehicle in the embodiment.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below. The described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained from them by a person skilled in the art without inventive effort fall within the scope of protection of the present invention.
The driving assistance system based on augmented reality head-up display and multi-screen voice interaction comprises an augmented reality head-up display system, a multi-screen voice operating system and a dual-screen central control display operating system. The augmented reality head-up display system identifies complex road information and superimposes navigation and warning information on the real scene to prompt the driver; the multi-screen voice operating system realizes multi-screen interaction and human-machine voice interaction among the automobile HUD, HDD, HCS, CS and mobile devices through a convenient touch key; and the dual-screen central control display operating system lets the co-driver operate without affecting the main driver's driving.
In the embodiment, the road information detection module in the augmented reality head-up display system records the dynamic road scene in front of the vehicle through the dual sensors at the front of the vehicle, forming a video stream with depth information and high recognizability. The computer vision analysis module analyzes this video stream with computer vision algorithms to obtain salient feature information carrying depth information; the augmented reality head-up display module converts the analysis result of the computer vision analysis module into augmented reality image information and superimposes it on the real scene through the augmented reality head-up display screen without obstructing the driver's line of sight.
The dual sensors at the front of the vehicle may be CCD (Charge-Coupled Device) cameras with strong infrared night-vision fill light and near-infrared sensitivity, able to record the dynamic road scene in front of the vehicle at night or in heavy fog and form a video stream with depth information and high recognizability. The distance between the two sensors must be neither too small nor too large; the standard is that both can simultaneously detect an obstacle 10-20 cm in front of the vehicle's central axis. Sensors placed too close together yield large deviations in the depth information, while sensors placed too far apart cannot detect obstacles close to the vehicle; at the same time, the sensors' field of view must be large enough to cover most of the area in front of the vehicle.
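As a minimal sketch of how the dual-sensor arrangement yields depth, assuming a rectified stereo pair under a pinhole camera model (the focal length, baseline and disparity values below are illustrative, not from the patent):

```python
# Hypothetical stereo-depth sketch: Z = f * B / d for a rectified pair.
# focal_length_px, baseline_m and disparity_px are invented example values.

def stereo_depth_m(focal_length_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point seen by both front sensors, from its pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("point must be visible in both sensors with positive disparity")
    return focal_length_px * baseline_m / disparity_px

# A wider baseline reduces depth error at range but enlarges the blind zone
# close to the bumper -- the sensor-spacing trade-off described above.
depth = stereo_depth_m(focal_length_px=700.0, baseline_m=0.12, disparity_px=8.4)
```

The trade-off in the text falls directly out of this formula: for a fixed depth error in disparity, a larger baseline `B` gives finer depth resolution, but nearby points fall outside the shared field of view.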
The computer vision analysis module analyzes the video stream acquired by the road information detection module with an algorithm, extracts salient feature information from features such as whether an object is dynamic, its shape, size and color, and compares it with the entries in a feature library to identify each feature — vehicles, pedestrians, traffic signs, obstacles and so on — each carrying information such as its own speed, relative speed, position, distance, shape and volume. It then computes a prompting weight by integrating this information with the vehicle driving information, the driver's operating habits and the like, ranks the salient information by weight, and determines the prompting or warning mode, eliminating information that is unimportant or poses no threat. For example, a vehicle that suddenly stops a short distance ahead receives the highest weight, and the system alerts on the HUD together with a voice warning; a vehicle that suddenly stops 5 m away but poses no collision threat receives the lowest weight, and the system selectively ignores it.
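The weight-and-rank step above can be sketched as follows; the base weights per feature class, the urgency formula and the example detections are all invented for illustration, not values from the patent:

```python
# Hypothetical prompting-weight ranking. Base weights and the urgency
# term (closing speed over distance) are illustrative placeholders.

def prompt_weight(kind: str, distance_m: float, closing_speed_mps: float) -> float:
    base = {"pedestrian": 3.0, "vehicle": 2.0, "sign": 1.0}.get(kind, 1.0)
    urgency = max(closing_speed_mps, 0.0) / max(distance_m, 0.1)  # near + fast = urgent
    return base * (1.0 + urgency)

detections = [
    {"kind": "vehicle", "distance_m": 8.0, "closing_speed_mps": 12.0},   # sudden stop ahead
    {"kind": "vehicle", "distance_m": 5.0, "closing_speed_mps": 0.0},    # stopped, no threat
    {"kind": "pedestrian", "distance_m": 20.0, "closing_speed_mps": 2.0},
]
ranked = sorted(detections, key=lambda d: prompt_weight(**d), reverse=True)
# The top-ranked detection would trigger a HUD alert plus a voice warning;
# the lowest-weight detection can be suppressed.
```

Any monotonic scoring that combines feature class, distance and relative motion would serve the same purpose; the point is only that salient items are ordered before choosing a prompting mode.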
The augmented reality head-up display module converts the analysis result of the computer vision analysis module into augmented reality image information. The displayed information includes: basic vehicle information, navigation information, driving safety warning information, social and entertainment information, and an auxiliary image system. The driving safety warning information comprises three types: obstacle prompt information, lane prompt information and sign information. Obstacle prompt information highlights a specific vehicle, pedestrian or other obstacle that interferes with driving; lane prompt information presents driving behavior by displaying the predicted vehicle trajectory; sign information prompts road traffic signs with natural icons. Information of different weights is displayed in different modes, distinguished by color, shape and the like, and high-weight information even activates the voice system as a reminder. Different types of information have different imaging distances on the augmented reality head-up display: basic vehicle information, navigation information, social and entertainment information and auxiliary image information are imaged at about 2.5 m, while driving safety warning information is imaged at about 7.5 m. The horizontal imaging range of the augmented reality head-up display is within a horizontal viewing angle of about -9 to 5 degrees, and the vertical imaging range is preferably within a vertical viewing angle of about 3 to 10 degrees.
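The display rules above can be encoded as a small lookup; the voice-escalation threshold is an invented placeholder, while the imaging distances follow the values stated in the text:

```python
# Hypothetical style table for HUD content. Imaging distances come from the
# description above; the weight threshold for voice escalation is invented.

IMAGING_DISTANCE_M = {
    "basic": 2.5,
    "navigation": 2.5,
    "social_entertainment": 2.5,
    "auxiliary_image": 2.5,
    "safety_warning": 7.5,
}

def hud_style(info_type: str, weight: float) -> dict:
    return {
        "imaging_distance_m": IMAGING_DISTANCE_M[info_type],
        # only high-weight safety warnings also trigger the voice system
        "voice_alert": info_type == "safety_warning" and weight >= 2.0,
    }
```

In the same spirit, color, brightness and size could be further keys of the returned style, adjusted from the weight and the detected driving environment.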
Meanwhile, the brightness, color and contrast of the augmented reality interface can be adjusted according to the video stream acquired by the road information detection module, improving the legibility of interface elements.
In this embodiment, the touch key module in the multi-screen voice operating system can conveniently move displayed content among the screens through touch sliding, or trigger voice instruction input by clicking. The multi-screen operation module connects the information of the multiple screens in the automobile — the HUD, HDD, HCS, CS, mobile devices and so on — and realizes interconnection. The intelligent voice module lets the driver operate by voice instruction and can feed information back to the driver by voice.
The touch key is a physical key with a touch surface. It can be built into the steering wheel, or fixed to it with an external clip, hook-and-loop fastener or magnet and connected to the system via Bluetooth. The key sits at the lower right of the steering wheel, where the driver holding the wheel with the right hand can easily operate it with the thumb; the general position is shown in Fig. 2. The touch key recognizes six simple operations: swipe up, swipe down, swipe left, swipe right, click, and double-click. Swiping in the four directions moves the currently active information between screens, and double-clicking hides it. For example, when the phone's navigation announces a right turn in 200 m (the phone navigation being the currently active information) and the driver is unsure which intersection to turn at, swiping up on the touch key (the mapping is configurable) transfers the map navigation to the head-up display; after confirming the route, the driver double-clicks the key to hide the map. As another example, if a crosswalk prompt shown on the augmented reality head-up display happens to obstruct the driver's view, swiping right (again configurable) transfers the prompt to the HCS. To control the display position of information in an inactive window, the driver can click the touch key to call up the voice assistant and speak the corresponding instruction.
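The six operations and their default (configurable) bindings can be sketched as a simple dispatch table; the action names and the swipe-to-screen defaults here are invented for illustration:

```python
# Hypothetical dispatch table for the six touch-key operations.
# Action strings and default swipe targets are illustrative placeholders.

ACTIONS = {
    "swipe_up": "move_active_info:HUD",     # e.g. phone nav -> head-up display
    "swipe_down": "move_active_info:HDD",
    "swipe_left": "move_active_info:CS",
    "swipe_right": "move_active_info:HCS",  # e.g. crosswalk prompt -> HCS
    "click": "activate_voice_assistant",
    "double_click": "hide_active_info",
}

def handle_touch(gesture: str) -> str:
    """Map a recognized gesture to a system action; unknown gestures are ignored."""
    return ACTIONS.get(gesture, "ignored")
```

Because the mapping is a plain table, the "can be set" remapping the text mentions amounts to rewriting entries of `ACTIONS`.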
For example, while driving the driver may not know which way to go, and the traffic may be too complex to allow looking down at the phone's navigation. The driver clicks the touch key and says "show the navigation on the HUD"; the navigation information from the phone then appears on the HUD and becomes the currently active information, so after confirming the route the driver can double-click the touch key to hide it.
As shown in fig. 3, current automobile interior design commonly provides four visual interactive interfaces for vehicle information: the HDD, HCS, CS and HUD. The multi-screen operation module uses screen-sharing technology to link all the visual interactive interfaces inside the car — HUD, HDD, HCS, CS, mobile devices and so on — into one whole, realizing information interconnection and seamless hand-off between screens. Information appears on whichever screen the driver needs, breaking through the limitations of traditional in-car information display and letting the driver control the display position flexibly without lowering the head, effectively avoiding distraction. Mobile devices such as phones and tablets connect to the system via Bluetooth or another wireless connection; once connected, a vehicle-system application pre-installed on the device lets it exchange information with the vehicle system and become part of the vehicle system's display. The driver can then operate the mobile device through the touch key and the voice system, and can even show its content on the HUD — a text message or an incoming-call alert, say — preventing the accidents caused by looking down to operate a phone.
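The multi-screen operation module's bookkeeping can be modeled minimally as follows; the screen names come from the text, while the class and method names are invented (a real implementation would sit on the in-vehicle network, this only models which screen shows what):

```python
# Hypothetical model of the multi-screen operation module: each display
# (HUD, HDD, HCS, CS, paired mobile device) is a node, and an information
# item can be moved freely between nodes.

class MultiScreen:
    SCREENS = {"HUD", "HDD", "HCS", "CS", "mobile"}

    def __init__(self):
        self.location = {}                    # info item -> screen showing it

    def show(self, info: str, screen: str) -> None:
        if screen not in self.SCREENS:
            raise ValueError(f"unknown screen: {screen}")
        self.location[info] = screen          # moving = showing somewhere else

    def hide(self, info: str) -> None:
        self.location.pop(info, None)

ms = MultiScreen()
ms.show("incoming_call", "mobile")
ms.show("incoming_call", "HUD")   # "display it on the HUD" -- no head-down glance
```

A swipe or voice command from the previous passages would simply call `show` with a new target screen, and a double-click would call `hide`.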
The driver activates the intelligent driving assistant simply by clicking the touch key, and the assistant accurately recognizes and executes the driver's instructions. While the voice assistant speaks, the HUD displays the spoken content, so a driver who missed the audio can confirm it by reading; the cooperation of vision and hearing makes the interaction between driver and assistant more accurate. The whole interaction takes place without affecting the driving task, preventing distraction. When a voice operation requires a yes/no decision, the driver can answer with the touch key — for example, swipe left for no and swipe right for yes. When a voice operation offers several choices, the options appear on the HUD and the driver selects with the touch key: for example, if the driver wants the nearest shopping mall and the voice assistant finds five nearby, their names are shown on the HUD and the driver picks one by swiping up and down. The intelligent voice module can also announce warnings for emergencies, strengthening the alert and preventing the driver from missing a driving safety warning through inattentional blindness.
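The multi-choice flow above — search results shown on the HUD, scrolled with up/down swipes, then confirmed — can be sketched as a tiny menu state machine; the class name and the mall names are invented examples:

```python
# Hypothetical HUD option menu driven by touch-key swipes.

class HudMenu:
    def __init__(self, options):
        self.options = list(options)
        self.index = 0                      # highlighted option

    def swipe(self, direction: str) -> None:
        if direction == "down":
            self.index = min(self.index + 1, len(self.options) - 1)
        elif direction == "up":
            self.index = max(self.index - 1, 0)

    def confirm(self) -> str:
        return self.options[self.index]

# Voice assistant found five malls; driver scrolls down twice and confirms.
menu = HudMenu(["Mall A", "Mall B", "Mall C", "Mall D", "Mall E"])
menu.swipe("down")
menu.swipe("down")
selected = menu.confirm()
```

The yes/no case is the degenerate two-option form of the same idea, with left/right swipes mapped to the two answers.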
In the embodiment, the MCS main-driver central control screen in the dual-screen central control display operating system is angled more toward the main driver than the central control screen of an ordinary vehicle, so that the driver's line of sight is exactly perpendicular to the screen when looking down. The CCS co-driver central control screen is opposite the MCS, angled toward the co-driver, whose line of sight when looking down is likewise exactly perpendicular to the screen. The co-driver independent audio module means that the audio of the CCS screen is independent of the whole-car audio and plays near the co-driver's position.
Angling the MCS main-driver central control screen toward the main driver effectively prevents glare, reduces the image distortion caused by an oblique viewing angle, makes reading information more natural, and reduces the driver's consumption of cognitive resources. It also makes touch-screen operation more convenient and natural.
The CCS co-driver central control screen is opposite the MCS main-driver central control screen, angled toward the co-driver, whose line of sight when looking down is perpendicular to the screen, so the co-driver can watch information unrelated to the driving task without affecting the driver. The positions of the MCS and CCS on the car's center console are shown in FIG. 4. The two screens support split-screen interaction: the co-driver can swipe left on the CCS to transfer information to the MCS for the main driver to view, and the main driver can transfer information to the CCS for the co-driver by swiping right on the touch key or on the MCS. For example, when the road situation is complex and the driver needs the co-driver to assist with navigation, the map navigation can be transferred to the CCS; or, when the driver is looking for a gas station but must keep attention on the road, the co-driver can find the target on the CCS navigation map and transfer it to the driver. The CCS's operating authority is limited, however, to prevent interference with the driving task: in the example above, when the navigation target is sent from the CCS to the MCS the information is not displayed immediately — a request is shown instead, and the driver accepts it through the touch key, the voice assistant, or a simple swipe directly on the MCS.
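The limited-authority hand-off above can be sketched as a pending-request queue on the MCS; the class and method names are invented for illustration:

```python
# Hypothetical sketch of the CCS -> MCS transfer: content from the
# co-driver screen arrives as a pending request, and nothing is displayed
# until the main driver accepts (touch key, voice assistant, or a swipe).

class McsInbox:
    def __init__(self):
        self.pending = []       # requests awaiting driver approval
        self.displayed = []     # content actually shown on the MCS

    def request_from_ccs(self, info: str) -> None:
        self.pending.append(info)            # driver sees only a request prompt

    def accept(self) -> None:
        if self.pending:
            self.displayed.append(self.pending.pop(0))

inbox = McsInbox()
inbox.request_from_ccs("nav_target: nearby gas station")
before_accept = list(inbox.displayed)        # still empty -- not auto-displayed
inbox.accept()
```

The asymmetry is the point of the design: MCS-to-CCS transfers display immediately, while CCS-to-MCS transfers pass through this approval gate so the co-driver cannot push content in front of the driver unprompted.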
The co-driver independent audio module is activated when the co-driver's operation of the CCS requires sound. It is independent of the whole-car audio and plays near the co-driver's position with relatively low volume and low diffusivity, so the co-driver can hear clearly while interference with the main driver stays small. The module can intelligently adjust its volume: when the main driver needs the whole-car audio (for example during voice operation or voice navigation), it automatically lowers the volume. It also supports headphone output; when the co-driver operates the CCS with headphones, the system does not adjust the volume automatically.
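The volume policy just described reduces to three rules, sketched below; the ducking factor is an invented placeholder, and the function name is illustrative:

```python
# Hypothetical volume policy for the co-driver independent audio module:
# duck while whole-car audio (voice operation / voice navigation) is
# active, except when headphones are in use. 0.3 is an invented factor.

def ccs_volume(base: float, car_audio_active: bool, headphones: bool) -> float:
    if headphones:
        return base                      # headphone output is never adjusted
    return base * 0.3 if car_audio_active else base
```

Evaluating the three cases in order — headphones, ducked speaker, normal speaker — mirrors the priority order given in the text.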
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (1)

1. The driving assistance system based on the augmented reality head-up display and multi-screen voice interaction is characterized by comprising an augmented reality head-up display system, a multi-screen voice operating system and a double-screen central control display operating system; the augmented reality head-up display system identifies complex road information, navigation and warning information is superposed on a real scene to remind a driver, the multi-screen voice operating system realizes multi-screen interaction and man-machine voice interaction of an automobile HUD, an HDD, an HCS, a CS and mobile equipment through a convenient touch key, and the dual-screen central control display operating system enables a co-driver to operate while the driving of the main driver is not influenced; the augmented reality head-up display system comprises a road information detection module, a computer vision analysis module and an augmented reality head-up display module;
the road information detection module records the dynamic road scene in front of the vehicle through a dual sensor at the front of the vehicle, forming a video stream with depth information and high recognizability, i.e. the road scene can be clearly restored even at night; the computer vision analysis module analyses the video stream acquired by the road information detection module with computer vision algorithms to obtain salient feature information carrying depth information; the augmented-reality head-up display module converts the analysis result of the computer vision analysis module into augmented-reality image information and superimposes it on the real scene through the augmented-reality head-up display screen without obstructing the driver's line of sight, thereby enhancing perception; specifically, the computer vision analysis module extracts salient feature information from the video stream and compares it with the entries in a feature library to identify each feature; it then combines this information with vehicle driving information and the driver's operating habits to calculate a prompt weight, ranks the salient information by weight, determines the prompt or warning mode to be used, and filters out information that is unimportant or poses no threat; the HUD display content can intelligently adjust its colour, brightness, size, imaging distance and range according to the weights and the driving environment;
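The weight-and-rank step of the computer vision analysis module can be sketched as follows. This is a minimal illustration under assumed semantics: the weighting formula, the threshold values and the field names are all hypothetical, since the claim does not specify them.

```python
def prioritize(features, vehicle_speed, driver_profile, threshold=0.5):
    """Rank detected road features by a prompt weight that combines the
    feature's own salience, the vehicle state, and the driver's habits,
    then drop items below the prompt threshold.

    features:       list of dicts, e.g. {"type": "pedestrian", "salience": 0.8}
    vehicle_speed:  km/h, used to make prompts more urgent at speed
    driver_profile: per-feature-type habit multiplier (default 1.0)
    """
    ranked = []
    for f in features:
        weight = (f["salience"]
                  * (1.0 + vehicle_speed / 100.0)        # faster -> more urgent
                  * driver_profile.get(f["type"], 1.0))  # habit adjustment
        if weight >= threshold:                          # filter non-threats
            mode = "warning" if weight >= 1.0 else "prompt"
            ranked.append({**f, "weight": weight, "mode": mode})
    # highest-weight items are presented first
    return sorted(ranked, key=lambda f: f["weight"], reverse=True)
```

For example, at 50 km/h a pedestrian with salience 0.8 receives weight 1.2 and is shown as a warning, while a sign with salience 0.2 falls below the threshold and is filtered out, mirroring the claim's "eliminate information which is not important or poses no threat".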
the driving assistance system further comprises a touch key module, a multi-screen operation module and an intelligent voice module;
the touch key module controls display contents to be converted in multiple screens through touch sliding, or voice instruction input is carried out through clicking, the multiple screen operation module is connected with an automobile HUD, an HDD, an HCS, a CS and a mobile device to realize information interconnection of multiple screens in the automobile, seamless butt joint is realized, information is displayed on any screen required by a driver, the driver can flexibly control the display position of the information without lowering the head, the intelligent voice module is an intelligent assistant for driving of the driver, the driver is activated after clicking the touch key, the driver only needs to speak an instruction after the activation, and the intelligent voice module can recognize and execute the instruction;
the mobile device is connected to the system via Bluetooth or another wireless connection; once connected, a pre-installed in-vehicle-system application on the mobile device handles information exchange with the in-vehicle system, making the mobile device part of the in-vehicle display;
the driving assistance system further comprises a front-passenger independent audio module and an interaction module between the main-driver central-control screen and the front-passenger central-control screen;
compared with the CS central-control screen of an ordinary vehicle, the MCS main-driver central-control screen is tilted more towards the driver, so that the driver's line of sight is exactly perpendicular to the screen when looking down; conversely, the CCS front-passenger central-control screen is tilted more towards the front passenger, whose downward line of sight is likewise perpendicular to the screen, allowing the passenger to view information unrelated to the driving task; the MCS main-driver central-control screen and the CCS front-passenger central-control screen support a split-screen operation to exchange information.
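The information exchange between the two centre screens can be sketched as a simple hand-off. The data model (each screen as a dict with a `content` key) is purely illustrative; the claim only states that the MCS and CCS can exchange information via a split-screen operation.

```python
def exchange_info(mcs: dict, ccs: dict) -> None:
    """Exchange the currently displayed item between the driver's (MCS)
    and the front passenger's (CCS) central-control screens.
    Hypothetical model: each screen is a dict with a 'content' key."""
    mcs["content"], ccs["content"] = ccs["content"], mcs["content"]
```

For instance, the passenger could hand a destination found on the CCS over to the MCS while receiving the driver's current view in return.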
CN201711271533.3A 2017-12-05 2017-12-05 Driving assistance system based on augmented reality head-up display and multi-screen voice interaction Active CN108099790B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711271533.3A CN108099790B (en) 2017-12-05 2017-12-05 Driving assistance system based on augmented reality head-up display and multi-screen voice interaction

Publications (2)

Publication Number Publication Date
CN108099790A CN108099790A (en) 2018-06-01
CN108099790B true CN108099790B (en) 2021-07-20

Family

ID=62209093

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711271533.3A Active CN108099790B (en) 2017-12-05 2017-12-05 Driving assistance system based on augmented reality head-up display and multi-screen voice interaction

Country Status (1)

Country Link
CN (1) CN108099790B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108846920A (en) * 2018-07-06 2018-11-20 丘莲清 A kind of automobile data recorder based on AR technology
CN109191587B (en) * 2018-08-23 2019-12-31 百度在线网络技术(北京)有限公司 Color recognition method and device, electronic equipment and storage medium
EP4378758A1 (en) * 2018-09-28 2024-06-05 Koito Manufacturing Co., Ltd. Lamp system
JP2020055348A (en) * 2018-09-28 2020-04-09 本田技研工業株式会社 Agent device, agent control method, and program
DE102018008045B4 (en) * 2018-10-11 2020-07-23 Daimler Ag Method and device for controlling display content on an output means of a vehicle
JP2020080503A (en) * 2018-11-14 2020-05-28 本田技研工業株式会社 Agent device, agent presentation method, and program
CN111469663A (en) * 2019-01-24 2020-07-31 宝马股份公司 Control system for a vehicle
CN110395269B (en) * 2019-07-25 2021-04-13 广州小鹏汽车科技有限公司 Vehicle-based human-computer interaction method and system and vehicle
CN111831244A (en) * 2020-06-08 2020-10-27 北京百度网讯科技有限公司 Information display method and device, electronic equipment and storage medium
CN112002186B (en) * 2020-09-04 2022-05-06 语惠科技(南京)有限公司 Information barrier-free system and method based on augmented reality technology
CN112102836B (en) * 2020-11-18 2022-12-30 北京声智科技有限公司 Voice control screen display method and device, electronic equipment and medium
CN113212316B (en) * 2021-03-25 2023-02-07 武汉华星光电技术有限公司 Vehicle-mounted display system

Citations (3)

Publication number Priority date Publication date Assignee Title
CN202986846U (en) * 2013-01-06 2013-06-12 赵长义 Double-screen small automobile center console
CN104627078A (en) * 2015-02-04 2015-05-20 刘波 Automobile drive simulation system based on flexible transparent OLED and control method thereof
CN204968049U (en) * 2015-09-23 2016-01-13 东莞市摩卡电子科技有限公司 On -vehicle interactive system of shielding more

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US8994558B2 (en) * 2012-02-01 2015-03-31 Electronics And Telecommunications Research Institute Automotive augmented reality head-up display apparatus and method

Similar Documents

Publication Publication Date Title
CN108099790B (en) Driving assistance system based on augmented reality head-up display and multi-screen voice interaction
US20240127496A1 (en) Ar display apparatus and ar display method
CN106218506B (en) Vehicle display device and vehicle including the vehicle display device
EP3243687B1 (en) Control device for vehicle
US9645640B2 (en) Device and method for navigating within a menu for controlling a vehicle, and selecting a menu entry from the menu
US11127373B2 (en) Augmented reality wearable system for vehicle occupants
KR101730315B1 (en) Electronic device and method for image sharing
US10942566B2 (en) Navigation service assistance system based on driver line of sight and vehicle navigation system using the same
EP3267659B1 (en) Mobile terminal communicating with a vehicle system with a display
JP6280134B2 (en) Helmet-based navigation notification method, apparatus, and computer program
US9594248B2 (en) Method and system for operating a near-to-eye display
KR20170141484A (en) Control device for a vehhicle and control metohd thereof
US8907887B2 (en) Methods and systems for operating avionic systems based on user gestures
CN105675008A (en) Navigation display method and system
JP2005199992A (en) Vehicle information display system
US9404765B2 (en) On-vehicle display apparatus
CN218198110U (en) Mobile device
WO2018100377A1 (en) Multi-dimensional display
Maroto et al. Head-up Displays (HUD) in driving
US20160070101A1 (en) Head mounted display device, control method for head mounted display device, information system, and computer program
US11227494B1 (en) Providing transit information in an augmented reality environment
KR20180053290A (en) Control device for a vehhicle and control metohd thereof
US10067341B1 (en) Enhanced heads-up display system
KR101698102B1 (en) Apparatus for controlling vehicle and method for controlling the same
JP2010538884A (en) Complex navigation system for menu controlled multifunctional vehicle systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant