CN113306491A - Intelligent cabin system based on real-time streaming media - Google Patents


Info

Publication number
CN113306491A
CN113306491A
Authority
CN
China
Prior art keywords
image
vehicle
information
real
processing module
Prior art date
Legal status
Pending
Application number
CN202110671389.2A
Other languages
Chinese (zh)
Inventor
戴勇
李进
蒋卫刚
Current Assignee
SHENZHEN PERCHERRY TECHNOLOGY CO LTD
Original Assignee
SHENZHEN PERCHERRY TECHNOLOGY CO LTD
Priority date
Filing date
Publication date
Application filed by SHENZHEN PERCHERRY TECHNOLOGY CO LTD
Priority to CN202110671389.2A
Publication of CN113306491A
Legal status: Pending

Classifications

    • B60R 1/00: Optical viewing arrangements; real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60Q 9/00: Arrangement or adaptation of signal devices not provided for in one of main groups B60Q1/00-B60Q7/00, e.g. haptic signalling
    • G01C 21/365: Guidance using head-up displays or projectors, e.g. virtual vehicles or arrows projected on the windscreen or on the road itself
    • G06F 3/04883: Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
    • G06F 3/165: Management of the audio stream, e.g. setting of volume, audio stream path
    • G06T 3/4038: Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06V 20/597: Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G10L 15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G06T 2200/32: Indexing scheme for image data processing or generation involving image mosaicing
    • G06T 2207/20221: Image fusion; image merging
    • G06T 2207/30252: Vehicle exterior; vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Remote Sensing (AREA)
  • General Engineering & Computer Science (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Automation & Control Theory (AREA)
  • Health & Medical Sciences (AREA)
  • Mechanical Engineering (AREA)
  • Computational Linguistics (AREA)
  • Acoustics & Sound (AREA)
  • General Health & Medical Sciences (AREA)
  • Fittings On The Vehicle Exterior For Carrying Loads, And Devices For Holding Or Mounting Articles (AREA)
  • Traffic Control Systems (AREA)
  • Navigation (AREA)

Abstract

The application relates to the technical field of intelligent cabin systems, and in particular to an intelligent cabin system based on real-time streaming media. An acquisition module collects information inside and outside the vehicle; the collected information is received through the processor and the CAN server, compared with the information in the database, and the resulting information and instructions are shown on the display screen. Through the acquisition module and the multi-mode interactive terminal, the system captures the in-vehicle environment, hazards, and driver behaviour information and transmits the collected data to the processing module for processing, while the display screen presents the out-of-vehicle live-action image, the virtual vehicle-driving scene, and/or the virtual navigation map, completing the interaction with the driver; in this way the intelligent cockpit can conveniently display applications and provide corresponding services through interaction with the driver.

Description

Intelligent cabin system based on real-time streaming media
Technical Field
The invention relates to the technical field of intelligent cabin systems, and in particular to an intelligent cabin system based on real-time streaming media.
Background
At present, intelligent transportation (vehicle and road intelligence) is realized by means of networks, big data, and roadside equipment. Sensing of the environment around the vehicle body relies on on-board image sensors, lasers, and millimetre-wave radar. Sensing of the vehicle body itself depends on various vehicle sensors, and vehicle-body interaction is handled through various electronic control units (ECUs), also called trip computers or on-board computers. In the related art, an intelligent cabin system usually displays numeric or textual content on a single screen.
The above prior-art solutions have the following drawbacks: a single screen conveys little information, interactivity with the driver is poor, and displaying applications and providing corresponding services through interaction with the driver is inconvenient and inflexible.
Disclosure of Invention
The invention aims to provide an intelligent cockpit system based on real-time streaming media that offers multi-screen display and flexible interaction with the driver.
The above object of the present invention is achieved by the following technical solutions:
an intelligent cockpit system based on real-time streaming media, comprising:
an acquisition module, used for acquiring in-vehicle sensing signals, out-of-vehicle live-action images, and vehicle driving data, and for transmitting the vehicle driving data to the CAN server;
a multi-mode interactive terminal, comprising a database and a processing module and provided with a display screen and a CAN interface, wherein the CAN interface is used for receiving the virtual vehicle-driving image fed back by the CAN server based on the vehicle driving data;
the processing module is used for converting and recognizing the sensing signals to obtain identification features, comparing them with a mapping table pre-stored in the database that pairs each identification feature with a control instruction, and obtaining the control instruction corresponding to the identification features;
and the display screen displays, in split-screen mode and based on the control instruction, the out-of-vehicle live-action image, the virtual vehicle-driving scene, and/or the virtual navigation map.
By adopting this technical solution, the acquisition module collects the sensing signals, the out-of-vehicle live-action image, and the vehicle driving data. The live-action image is shown on the display screen, the vehicle driving data is sent to the CAN server through the CAN interface, and the multi-mode interactive terminal receives the virtual vehicle-driving image data generated by the CAN server from that data and shows it on the display screen in split-screen mode. The driver can interact with the terminal in multiple modes, such as specific gestures, voice, operating the display screen, or pressing keys. The interaction is captured as sensing signals by the acquisition module or the terminal's sensing devices; the processing module receives and recognizes the sensing signals to obtain identification features, queries the database's mapping table for the control instructions corresponding to those features, and acts on the display screen accordingly. The intelligent cabin can thus display its interaction with the driver, with better convenience and flexibility in the corresponding services.
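The feature-to-instruction lookup described above can be sketched as a simple table query. The modality names, feature labels, and instruction strings below are hypothetical placeholders for illustration, not values taken from the patent:

```python
# Hypothetical mapping-relation table: (modality, identification feature) -> control instruction.
MAPPING_TABLE = {
    ("gesture", "swipe_up"): "volume_up",
    ("gesture", "swipe_down"): "volume_down",
    ("voice", "navigation"): "show_navigation_map",
    ("touch", "region_3"): "toggle_split_screen",
}

def lookup_instruction(modality, feature):
    """Return the control instruction for a recognized feature, or None if unmapped."""
    return MAPPING_TABLE.get((modality, feature))

print(lookup_instruction("gesture", "swipe_up"))   # volume_up
```

Any recognition submodule (gesture, voice, touch) can share one such table, which is what makes the same function controllable through several interaction modes.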
The present invention in a preferred example may be further configured so that the acquisition module comprises:
a driving sensor, mounted on the vehicle body and provided with a sensing end and a CAN interface, wherein the sensing end collects vehicle driving data and the CAN interface is connected to the CAN server through a CAN bus;
and an exterior camera, mounted on the vehicle body to collect multi-angle out-of-vehicle live-action images, with its output interface connected to the processing module;
and the processing module comprises:
a stitching submodule, used for stitching the multi-angle live-action images to obtain a stitched image.
By adopting this technical solution, the driving sensor collects vehicle driving data such as speed, tyre pressure, position, and distance, and transmits it to the CAN server through the CAN interface and CAN bus. The CAN server processes the driving data and feeds the generated virtual vehicle-driving image data back to the terminal, which can show the virtual driving scene on the display screen so the driver can easily follow the vehicle's driving condition. Several exterior cameras can be installed to capture live-action images of the vehicle's surroundings from different angles and transmit them to the processing module through the output interface. After the stitching submodule joins the multi-angle images, the result can be shown on the display screen, letting the driver see conditions ahead of, behind, and to the left and right of the vehicle at a glance. The stitched panoramic image links the individual views, improves the driver's visual perception, and presents the full surround view clearly, which helps safe driving. The processing module either processes the images or passes them directly to the terminal for multi-screen or single-screen display, making it easy for the driver to understand the vehicle's surroundings.
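The stitching idea can be illustrated with a toy feathering blend. This sketch assumes the camera views are already geometrically aligned and share a fixed column overlap; a production stitcher (e.g. OpenCV's) would instead estimate homographies between views:

```python
import numpy as np

def stitch_pair(left, right, overlap):
    """Blend two pre-aligned images that share `overlap` columns (linear feather)."""
    alpha = np.linspace(1.0, 0.0, overlap)            # weight for the left image
    blended = (left[:, -overlap:] * alpha +
               right[:, :overlap] * (1.0 - alpha))
    return np.hstack([left[:, :-overlap], blended, right[:, overlap:]])

# Two 4x6 grayscale test strips with a 2-column overlap.
a = np.full((4, 6), 100.0)
b = np.full((4, 6), 200.0)
pano = stitch_pair(a, b, overlap=2)
print(pano.shape)   # (4, 10)
```

Chaining `stitch_pair` across four camera views would yield the surround image the paragraph describes, at the cost of ignoring lens distortion and parallax, which real systems must correct.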
The present invention in a preferred example may be further configured so that the acquisition module further comprises:
a TOF sensor, used for collecting the driver's gesture information, with its output interface connected to the processing module;
and the processing module further comprises:
a gesture recognition submodule, used for performing analog-to-digital conversion on the gesture information and recognizing the gesture to obtain gesture identification features, and for obtaining a gesture control instruction based on the mapping relation between the gesture identification features and the control instructions.
By adopting this technical solution, the TOF sensor collects the driver's gesture information, so the driver can control the terminal with specific gestures: the TOF sensor captures the gesture, the gesture recognition submodule processes it into a gesture control instruction, and the terminal is controlled based on that instruction. When it is inconvenient for the driver to operate the touch screen or keys, information can still be exchanged with the terminal through this mode, which greatly enriches the interaction modes between driver and terminal, improves flexibility, eases operation, and improves the user experience.
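One plausible way to turn a tracked hand trajectory into a gesture identification feature is to quantize its dominant direction. The distance threshold and gesture names below are illustrative assumptions, not parameters from the patent:

```python
def classify_swipe(start, end, min_dist=30.0):
    """Quantize a hand trajectory (pixel coordinates) into a swipe gesture feature."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    if (dx * dx + dy * dy) ** 0.5 < min_dist:
        return None                      # movement too small to count as a gesture
    if abs(dy) >= abs(dx):               # image y grows downward
        return "swipe_down" if dy > 0 else "swipe_up"
    return "swipe_right" if dx > 0 else "swipe_left"

# Hypothetical gesture-feature -> control-instruction mapping.
GESTURE_COMMANDS = {"swipe_up": "volume_up", "swipe_down": "volume_down"}
print(GESTURE_COMMANDS.get(classify_swipe((100, 200), (105, 80))))   # volume_up
```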
The present invention in a preferred example may be further configured so that the acquisition module comprises:
a voice sensor, used for collecting voice signals in the vehicle, with its output interface connected to the processing module;
and the processing module further comprises:
a voice recognition submodule, used for performing analog-to-digital conversion and keyword extraction on the voice information to obtain keyword recognition features, and for obtaining a voice control instruction based on the mapping relation between the keyword recognition features and the control instructions.
By adopting this technical solution, the voice sensor collects voice signals in the vehicle, so the driver can control the terminal with specific spoken instructions: the voice sensor captures the in-vehicle voice signal, and the voice recognition submodule processes it into a voice control instruction. When it is inconvenient for the driver to operate the touch screen or keys, information can still be exchanged with the terminal through this mode, which greatly enriches the interaction modes between driver and terminal, improves flexibility, eases operation, and improves the user experience.
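The keyword-extraction step can be sketched as matching known keywords inside a recognized transcript. The keyword table is a hypothetical example; a real system would work on the acoustic model's output rather than plain substrings:

```python
# Hypothetical keyword -> control-instruction table.
KEYWORD_COMMANDS = {
    "increase volume": "volume_up",
    "decrease volume": "volume_down",
    "panoramic": "multi_screen_panorama",
}

def voice_to_command(transcript):
    """Match the longest known keyword contained in a recognized transcript."""
    text = transcript.lower()
    hits = [k for k in KEYWORD_COMMANDS if k in text]
    return KEYWORD_COMMANDS[max(hits, key=len)] if hits else None

print(voice_to_command("please increase volume a bit"))   # volume_up
```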
The present invention in a preferred example may be further configured so that the processing module further comprises:
a touch recognition submodule, used for performing analog-to-digital conversion on the received touch information, recognizing its position to obtain position identification features, and obtaining a touch control instruction based on the mapping relation between the position identification features and the control instructions;
and the display screen displays, in split-screen mode and based on the gesture control instruction, the voice control instruction, and/or the touch control instruction, the multi-angle live-action image, the stitched image, the virtual vehicle-driving scene, and/or the virtual navigation map.
By adopting this technical solution, when a user touches the display screen, the touch recognition submodule processes the touch and produces a touch control instruction. The gesture, voice, and touch control instructions may or may not duplicate one another; each can be a playback instruction for the display screen, a display-switching instruction, a function not yet started, and so on. The same function can therefore be triggered through any of these forms, further improving the terminal's human-machine interaction.
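Mapping a touch position to an instruction amounts to finding which screen region the touch lands in. The region layout and command names below are illustrative assumptions:

```python
# Hypothetical screen regions on a 1280x720 display: (x0, y0, x1, y1) -> command.
TOUCH_REGIONS = {
    (0, 0, 640, 720): "focus_live_view",       # left half: live-action image
    (640, 0, 1280, 720): "focus_navigation",   # right half: navigation map
}

def touch_to_command(x, y):
    """Identify which on-screen region a touch point falls in."""
    for (x0, y0, x1, y1), cmd in TOUCH_REGIONS.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return cmd
    return None

print(touch_to_command(1000, 300))   # focus_navigation
```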
The present invention in a preferred example may be further configured so that the acquisition module further comprises:
an in-vehicle camera, installed inside the vehicle to collect image information of the driver, with its output interface connected to the processing module;
and the processing module further comprises:
a driving-abnormality submodule, which extracts characteristic regions from the driver image information, recognizes them through an image processing algorithm, and outputs abnormal-state information based on the recognition result;
and the multi-mode interactive terminal further comprises a loudspeaker, wherein the driving-abnormality submodule is connected to the loudspeaker and sends the abnormal-state information to the loudspeaker and/or the display screen.
By adopting this technical solution, the in-vehicle camera captures images of the driver, and the driving-abnormality submodule extracts and evaluates the driver image information. When abnormal-state information is output, the driver is warned through the loudspeaker and/or the display screen, reducing the safety hazards caused by driver fatigue.
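The eye-closure criterion mentioned for fatigue detection can be sketched as a PERCLOS-style heuristic over recent frames. The window length and closure ratio are illustrative values, not thresholds from the patent:

```python
def drowsiness_alert(eye_open_flags, window=30, closed_ratio=0.5):
    """Flag drowsiness when the eyes are closed in more than `closed_ratio`
    of the last `window` frames (a PERCLOS-style heuristic)."""
    recent = eye_open_flags[-window:]
    closed = sum(1 for f in recent if not f)
    return closed / len(recent) > closed_ratio

frames = [True] * 10 + [False] * 25     # eyes shut for the last 25 frames
print(drowsiness_alert(frames))          # True
```

In a full system the per-frame open/closed flag would come from an eye-state classifier on the in-vehicle camera feed.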
The present invention in a preferred example may be further configured to: the database is still including establishing the discernment characteristic model that compares object characteristic, the image processing submodule outside the car includes:
the fusion submodule extracts the characteristics of an object in the real-time image based on the real-time image of the automobile front, gives identification characteristics based on the identification characteristic model, and superposes the object image given the identification characteristics and the real-time image to form a first display image;
extracting real-time navigation map information from a satellite navigation map;
superposing the real-time navigation map information and the first display image to form a second image with navigation information;
and displaying the second image through the display screen.
By adopting this technical solution, the fusion submodule superimposes the objects found in the image ahead of the vehicle onto the real-time image, so the features of objects in front of the vehicle appear in the real-time image. The real-time image bearing the object features is then superimposed with the real-time navigation map information, fusing the live image with the navigation data to simulate live-action navigation. Displaying the result on the display screen improves the driver's sense of visual realism and makes the navigation more practical.
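Superimposing an identification feature onto the live image can be sketched as drawing a rectangular frame into the pixel array, in the spirit of the "quadrangle frame" mentioned later in the description. This toy version works on a grayscale array; the box coordinates are arbitrary test values:

```python
import numpy as np

def overlay_box(frame, box, value=255):
    """Draw a 1-pixel rectangular identification frame onto a grayscale image."""
    out = frame.copy()
    x0, y0, x1, y1 = box
    out[y0, x0:x1] = value       # top edge
    out[y1 - 1, x0:x1] = value   # bottom edge
    out[y0:y1, x0] = value       # left edge
    out[y0:y1, x1 - 1] = value   # right edge
    return out

img = np.zeros((8, 8), dtype=np.uint8)
marked = overlay_box(img, (2, 2, 6, 6))
print(int(marked[2, 3]), int(marked[4, 4]))   # 255 0
```

Navigation-path and landmark overlays would work the same way: render the extra layer, then composite it onto the live frame.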
The present invention in a preferred example may be further configured to: the road condition abnormity sub-module is used for sending out an alarm signal when the object with the identification characteristics in the image reaches an alarm threshold value;
the display screen, speaker and/or alarm may be responsive to an alarm signal to emit an image alarm, an audible alarm and/or an audible and/or visual alarm.
By adopting this technical solution, when the road conditions are abnormal, the driver is alerted through images, sound, or a combination of both via the display screen, loudspeaker, and alarm.
Through the acquisition module and the multi-mode interactive terminal, the acquisition module collects the in-vehicle environment, hazards, and driver behaviour information and transmits the collected data to the processing module for processing, while the display screen presents the out-of-vehicle live-action image, the virtual vehicle-driving scene, and/or the virtual navigation map, completing the interaction with the driver; the intelligent cabin can thus conveniently display applications and provide corresponding services through interaction with the driver.
Drawings
Fig. 1 is a block diagram of a real-time streaming media-based intelligent cockpit system structure according to the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
The present embodiment merely explains the present invention and does not limit it. After reading this specification, those skilled in the art may modify the embodiment as needed without making an inventive contribution; all such modifications are protected by patent law within the scope of the claims of the present invention.
The embodiment of the invention provides an intelligent cabin system based on real-time streaming media, which comprises: the system comprises an acquisition module and a multi-mode interactive terminal. The acquisition module is used for acquiring in-vehicle sensing signals, outside-vehicle live-action images and vehicle driving data and transmitting the vehicle driving data to the CAN server. The multi-mode interactive terminal comprises a database and a processing module, and is provided with a display screen, a loudspeaker and a CAN interface.
The CAN interface is used for receiving the virtual vehicle-driving image fed back by the CAN server based on the vehicle driving data. The database pre-stores a mapping table that pairs each identification feature with a control instruction, includes an identification feature model established for comparing object features, and also stores alarm threshold information. The processing module converts and recognizes the sensing signals, obtains identification features, compares them with the mapping table in the database, and obtains the control instructions corresponding to the identification features.
The acquisition module comprises a driving sensor, an external camera, a TOF sensor, a voice sensor and an in-vehicle camera. The processing module comprises a gesture recognition submodule, a voice recognition submodule, a touch recognition submodule, a splicing submodule and a driving abnormity submodule.
The driving sensor is installed on a vehicle body and is provided with a sensing end and a CAN interface, the sensing end is used for collecting vehicle driving data, the vehicle driving data comprises vehicle data such as vehicle speed, and the CAN interface is connected with a CAN server through a CAN bus.
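The driving data sent over the CAN bus travels as small binary payloads. The patent does not specify a frame layout, so the encoding below is a purely hypothetical example (speed in 0.01 km/h and tyre pressure in kPa, each as a little-endian uint16):

```python
import struct

def pack_driving_data(speed_kmh, tyre_kpa):
    """Pack hypothetical driving data into a 4-byte CAN-style payload."""
    return struct.pack("<HH", round(speed_kmh * 100), tyre_kpa)

def unpack_driving_data(payload):
    """Inverse of pack_driving_data."""
    raw_speed, tyre = struct.unpack("<HH", payload)
    return raw_speed / 100.0, tyre

frame = pack_driving_data(68.5, 240)
print(unpack_driving_data(frame))   # (68.5, 240)
```

A real deployment would follow the signal definitions in the vehicle's DBC file rather than an ad-hoc layout like this one.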
Four exterior cameras are arranged on the vehicle body to collect multi-angle out-of-vehicle live-action images; their output interfaces are connected to the processing module. The stitching submodule stitches the multi-angle live-action images into a stitched image, joining the views from all exterior cameras to display the surroundings of the vehicle body. For example, cameras at the front-left, front-right, rear-left, and rear-right of the vehicle body each capture the scene from one of four angles; an image-stitching technique can merge the four views into a 360-degree panoramic image shown on the display screen, so the driver can see the scene at any angle around the vehicle body, further improving driving safety.
The TOF sensor collects the driver's gesture information, which includes the driver's gesture instructions; its output interface is connected to the processing module. Based on time-of-flight (TOF) technology, the sensor emits continuous infrared light pulses of a specific wavelength towards the target, receives the light signal reflected by the measured object with a dedicated sensor, and computes the round-trip flight time or phase difference of the light to obtain the target object's depth information. Combined with conventional imaging, the three-dimensional outline of the object can also be rendered as a topographic-style map in which different colours represent different distances. When the sensor captures the driver's gestures, it can both recognize the gesture accurately and measure the distance between the hand and the camera; for example, when several gestures are captured at once, the one closest in the image can be taken as the recognition input, avoiding ambiguity in the recognition program. The gesture recognition submodule recognizes the gestures in the images through a known image recognition algorithm to obtain gesture identification features, and derives gesture control instructions from the mapping between gesture identification features and control instructions. The driver can issue different controls through different gestures, for example swinging the hand upward to increase the volume or downward to decrease it, with the display screen adjusting accordingly and showing a volume control.
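The nearest-hand rule described above can be sketched directly: among several detected hands, keep only the one with the smallest TOF depth. The detection tuples and depth values are hypothetical:

```python
def nearest_hand(detections):
    """From detected hands [(gesture, depth_mm), ...], keep the one
    closest to the TOF camera, as the nearest-gesture rule describes."""
    if not detections:
        return None
    return min(detections, key=lambda d: d[1])[0]

hands = [("swipe_up", 850), ("fist", 420), ("open_palm", 1200)]
print(nearest_hand(hands))   # fist
```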
The voice sensor collects voice signals in the vehicle, including spoken instructions from the driver; its output interface is connected to the processing module. The sensor contains a capacitive electret microphone sensitive to sound: sound waves vibrate the electret film in the microphone, changing its capacitance and generating a small, correspondingly varying voltage. The voice recognition submodule performs analog-to-digital conversion and keyword extraction on the voice information collected by the voice sensor to obtain keyword recognition features, and derives a voice control instruction from the mapping between the voice recognition features and the control instructions. The driver can trigger different operations with different utterances; for example, saying "increase volume", "decrease volume", or "switch to multi-screen panoramic display" makes the display screen adjust accordingly and show a volume control or the multi-screen panoramic view.
The in-vehicle camera is installed inside the vehicle to collect image information of the driver; its output interface is connected to the processing module. The driving-abnormality submodule extracts characteristic regions from the driver image information, recognizes them with an image processing algorithm, and outputs abnormal-state information based on the recognition result. The abnormal-state information covers conditions such as yawning, smoking, prolonged eye closure, and head tilt. A known image recognition algorithm can extract, from the driver image, the degree of mouth opening, elongated shapes near the mouth, eye-closure duration or frequency, and the angle and speed of head tilt, and thereby judge abnormal behaviour such as yawning, smoking, or drowsiness; the system then issues abnormality alarm information to warn the driver to rest or to caution them.
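The mouth-opening criterion for yawn detection can be sketched as a simple aspect-ratio test on detected mouth landmarks. The 0.6 threshold is an illustrative value, not a figure from the patent:

```python
def is_yawning(mouth_height, mouth_width, threshold=0.6):
    """Classify a yawn from the mouth-opening aspect ratio (height / width).
    The threshold is an assumed illustrative value."""
    return mouth_width > 0 and mouth_height / mouth_width > threshold

print(is_yawning(38, 50), is_yawning(10, 50))   # True False
```

The landmark measurements themselves would come from a face-landmark detector running on the in-vehicle camera frames.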
If a user in the cabin operates the terminal through the touch display screen, pressing the screen triggers the corresponding switch and emits touch information. The touch recognition submodule performs analog-to-digital conversion on the received touch information, recognizes its position to obtain position identification features, and derives the touch control instruction from the mapping between the position identification features and the control instruction at the corresponding position on the operation interface.
The fusion submodule extracts the features of objects in the real-time image based on the live-action image of the area ahead of the vehicle, assigns them identification features from the identification feature model, and superimposes the object image bearing the identification features onto the real-time image to form a first display image; it then extracts real-time navigation map information from the satellite navigation map and superimposes it onto the first display image to form a second image carrying navigation information.
For example, a real-time image in front of the vehicle is collected and output to a digital image conversion module, which converts it into a digital signal; if the real-time image is collected by a digital camera, the digital image conversion module is not needed. An image of a set area is intercepted and transmitted to the processing module, where it is separated into two synchronous paths: one path is used for display on the display screen, and the other path is used for extracting identification features, with object features compared against the identification feature model for objects such as automobiles, balustrades, non-motorized vehicles, and the like. When a plurality of objects are determined, the determined object directly in front of the vehicle is preferentially kept; for example, if the comparison with an automobile is successful, the automobile identification feature is assigned to the image corresponding to the determined object. This image and the real-time image are superposed to form the first display image; for example, if the identification feature of the automobile is a quadrangular frame, the first display image shows a quadrangular frame added around the running vehicle directly in front.
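The "preferentially keep the object directly in front" rule can be sketched as follows; object detection itself is assumed to happen upstream, and the box coordinates and labels are hypothetical:

```python
# Sketch: keep the detected object nearest "directly ahead" (the image's
# vertical midline) and attach its identification feature, here a
# quadrangular frame. Boxes are (x0, y0, x1, y1, label) from an assumed
# upstream comparison against the identification feature model.

def select_front_object(boxes, image_width):
    """Prefer the detection whose centre is nearest the image's vertical midline."""
    centre = image_width / 2
    return min(boxes, key=lambda b: abs((b[0] + b[2]) / 2 - centre))

def first_display_image(boxes, image_width):
    """Return the overlay spec: the kept object plus its quadrangular frame."""
    obj = select_front_object(boxes, image_width)
    return {"object": obj[4], "frame": obj[:4]}

boxes = [(50, 200, 150, 300, "balustrade"), (350, 180, 450, 320, "automobile")]
print(first_display_image(boxes, 800))  # keeps the automobile centred near x=400
```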
The second image is formed by the following steps: a navigation path is planned in the real-time navigation map, and the navigation path, the landmark information on both sides, and the longitude and latitude information are extracted. The navigation path is superposed with the first display image through an image processing algorithm to obtain a third image; the longitude and latitude position of the image collected by the camera is obtained, and the landmark information on both sides of the navigation path is superposed into the third image based on that position to obtain the second image. The second image is displayed on the terminal display screen for the driver to observe and reference, realizing live-action navigation. The driving information of the vehicle and warning information about a determined object in front can also be presented as images or voice signals to remind the driver to operate safely and to prompt when the vehicle deviates from the route, for example: the vehicle is closer than a safe distance to the front vehicle, the vehicle deviates from its driving lane, or a rear vehicle is approaching at overspeed on the left and emergency avoidance is needed.
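For illustration, the landmark-superposition step can be sketched with a toy projection from longitude/latitude to pixel coordinates. The flat linear projection below is a stand-in assumption for a real camera model, and all coordinates and names are hypothetical:

```python
# Sketch of the second-image composition: overlay the navigation path and
# roadside landmarks onto the first display image, positioned via the
# camera's longitude/latitude. The linear lon/lat -> pixel projection is a
# simplifying assumption, not the patent's actual method.

def lonlat_to_pixel(lon, lat, cam_lon, cam_lat, scale=10000, width=800, height=600):
    """Hypothetical projection of a geographic point into camera pixel space."""
    x = width / 2 + (lon - cam_lon) * scale
    y = height - (lat - cam_lat) * scale
    return round(x), round(y)

def compose_second_image(first_image_layers, path_points, landmarks, cam_pos):
    """Stack path and landmark overlays onto the first display image's layers."""
    layers = list(first_image_layers)  # start from the first display image
    layers.append(("path", [lonlat_to_pixel(p[0], p[1], *cam_pos) for p in path_points]))
    for name, lon, lat in landmarks:
        layers.append(("landmark", name, lonlat_to_pixel(lon, lat, *cam_pos)))
    return layers

layers = compose_second_image(
    [("frame", (350, 180, 450, 320))],          # first display image overlay
    [(114.001, 22.501)],                        # navigation path point
    [("landmark_A", 114.002, 22.502)],          # roadside landmark
    (114.0, 22.5),                              # camera position
)
print(layers)
```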
The road condition abnormality submodule sends out an alarm signal when an object with identification features in the image reaches an alarm threshold; abnormal road conditions include being too close to the front vehicle, deviating from the driving lane, and the like.
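The threshold check can be sketched as below; the threshold values stand in for the alarm threshold information stored in the database and are purely illustrative:

```python
# Sketch: raise alarm signals when measured road conditions cross stored
# thresholds. The numeric thresholds are illustrative placeholders for the
# database's alarm threshold information.

ALARM_THRESHOLDS = {"front_distance_m": 10.0, "lane_offset_m": 0.8}

def road_alarms(front_distance_m, lane_offset_m):
    """Return the list of alarm signals for the current measurements."""
    alarms = []
    if front_distance_m < ALARM_THRESHOLDS["front_distance_m"]:
        alarms.append("too_close_to_front_vehicle")
    if abs(lane_offset_m) > ALARM_THRESHOLDS["lane_offset_m"]:
        alarms.append("lane_departure")
    return alarms

print(road_alarms(6.0, 0.2))   # ['too_close_to_front_vehicle']
print(road_alarms(20.0, 1.0))  # ['lane_departure']
```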
The display screen displays the multi-angle live-action images, the spliced image, the vehicle running virtual scene, and/or the navigation virtual map and the second image in a split-screen mode based on the gesture control instruction, the voice control instruction, and the touch control instruction. One area of the display screen always shows the navigation live-action image, while other areas display different functions according to the driver's instructions; the display screen can present one or more functions simultaneously, and a general function entry is always available on the screen.
It should be noted that the gesture, voice, and touch control instructions may include navigation on/off, audio switching, audio on/off, volume adjustment, display control, display switching, and the like. The same instruction can be issued through any of the three modes, and the driver selects the most convenient mode according to actual needs. For example, while driving, the driver may stop music playback with a gesture, wake up navigation by voice, display an omnidirectional live-action picture by voice, or switch the display across multiple screens, which is flexible and variable.
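The "same instruction via any mode" behavior amounts to one shared command table keyed by (mode, recognition feature); the feature names and command identifiers below are hypothetical:

```python
# Sketch: one command table shared by gesture, voice, and touch modes, so
# the same instruction can be issued through any of the three. All feature
# names and command identifiers are illustrative assumptions.

COMMANDS = {
    ("gesture", "palm_out"): "music_stop",
    ("voice", "wake navigation"): "navigation_on",
    ("voice", "show surround view"): "display_surround",
    ("touch", "nav_button"): "navigation_on",   # same command, different mode
}

def dispatch(mode, feature):
    """Look up the control instruction for a recognized input feature."""
    return COMMANDS.get((mode, feature))

print(dispatch("voice", "wake navigation"))  # navigation_on
print(dispatch("touch", "nav_button"))       # navigation_on (same instruction)
```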
The loudspeaker and/or the display screen prompt the abnormal state information, with different text displayed on the screen and different alarm sounds played according to the specific condition.
The display screen, the loudspeaker, and/or the alarm respond to the alarm signal by sending out an image alarm, a sound alarm, and/or an audible-visual alarm, with different text displayed and different sounds played according to the specific condition.
The implementation principle of the intelligent cabin system based on real-time streaming media in the embodiment of the application is as follows: after the vehicle starts, real-time vehicle images are shown on the display screen inside the intelligent cabin, making it convenient for the driver to observe the conditions around the vehicle; the display screen also provides a navigation function that the driver can use while driving. When the driver issues an instruction through the touch display screen, voice, or gestures, a sensor in the vehicle is triggered; the processing module then judges the behavior to be non-abnormal, and after signal processing, the function required by the driver is shown on the display screen.
When the driver dozes off or makes a phone call, the behavior is collected by the in-vehicle camera and judged to be abnormal by the processing module; the processing module processes the judged behavior signal and warns the driver through the display screen and the loudspeaker, prompting the driver to correct the behavior.
When a situation such as being too close to the front vehicle occurs, the conditions outside the vehicle are collected by the external camera and judged to be abnormal by the processing module; after processing the judged signal, the system warns the driver through the display screen and the loudspeaker so that the driver can avoid the risk in time.
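The overall flow just described, with signals routed either to display functions or to warnings, can be sketched as a small dispatcher; the event names and message formats are illustrative only:

```python
# Sketch of the overall implementation principle: collected signals pass
# through the processing module, which routes normal interactions to the
# requested display function and abnormal ones to display/speaker warnings.
# Event names and message strings are illustrative assumptions.

ABNORMAL_EVENTS = {"driver_dozing", "phone_call", "front_vehicle_too_close"}

def process_event(event, command=None):
    """Route an event to warnings (abnormal) or a display function (normal)."""
    if event in ABNORMAL_EVENTS:
        return {"display": f"warning: {event}", "speaker": f"alert: {event}"}
    return {"display": f"function: {command}"}  # normal driver interaction

print(process_event("touch", command="open_navigation"))
print(process_event("driver_dozing"))
```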

Claims (8)

1. An intelligent cockpit system based on real-time streaming media, comprising:
the acquisition module is used for acquiring in-vehicle sensing signals, out-vehicle real scene images and vehicle driving data and transmitting the vehicle driving data to the CAN server;
the multi-mode interactive terminal comprises a database and a processing module, and is provided with a display screen and a CAN interface;
the CAN interface is used for receiving a vehicle running virtual image fed back by the CAN server based on the vehicle running data;
mapping relation tables corresponding to the identification features and the control instructions one by one are prestored in the database;
the processing module is used for converting and identifying the induction signals, acquiring identification characteristics, comparing the identification characteristics with a mapping relation table in a database and acquiring control instructions corresponding to the identification characteristics;
and the display screen displays the outdoor scene image, the vehicle running virtual scene and/or the navigation virtual map in a split screen mode based on the control command.
2. The intelligent cabin system based on real-time streaming media according to claim 1, wherein the acquisition module comprises:
the driving sensor is arranged on the vehicle body and is provided with an induction end and a CAN interface, the induction end is used for collecting vehicle driving data, and the CAN interface is connected with the CAN server through a CAN bus;
the external camera is arranged on the vehicle body and used for collecting multi-angle outdoor live-action images, and the output interface is connected with the processing module;
the processing module comprises:
and the splicing submodule is used for splicing the multi-angle live-action images to obtain spliced images.
3. The intelligent cabin system based on real-time streaming media of claim 2, wherein the acquisition module further comprises:
the TOF sensor is used for collecting gesture information of a driver, and the output interface is connected with the processing module;
the processing module further comprises:
and the gesture recognition submodule is used for recognizing the gesture information to obtain gesture recognition characteristics and obtaining a gesture control instruction based on the mapping relation between the gesture recognition characteristics and the control instruction.
4. The intelligent cabin system based on real-time streaming media according to claim 3, wherein the acquisition module comprises:
the voice sensor is used for collecting voice signals in the vehicle, and the output interface is connected with the processing module;
the processing module further comprises:
and the voice recognition submodule is used for performing analog-to-digital conversion and keyword extraction on the voice information to obtain keyword recognition characteristics, and obtaining a voice control instruction based on the mapping relation between the keyword recognition characteristics and the control instruction.
5. The intelligent cabin system based on real-time streaming media of claim 4, wherein the processing module further comprises:
the touch control identification submodule is used for carrying out analog-to-digital conversion on the received touch control information, identifying the position of the touch control information to obtain position identification characteristics, and obtaining a touch control command based on the mapping relation between the position identification characteristics and the control command;
and the display screen displays the multi-angle live-action image, the spliced image, the vehicle running virtual scene and/or the navigation virtual map in a split screen mode based on the gesture control instruction, the voice control instruction and/or the touch control instruction.
6. The intelligent cabin system based on real-time streaming media of claim 5, wherein the acquisition module further comprises:
the in-vehicle camera is arranged in the vehicle and used for collecting image information of a driver, and the output interface is connected with the processing module;
the processing module further comprises:
the driving abnormality submodule extracts characteristic parts from the image information of the driver, identifies the characteristic parts through an image processing algorithm, and outputs abnormal state information based on the identification result;
the multi-mode interactive terminal further comprises a loudspeaker, and the driving abnormality submodule is connected with the loudspeaker and sends the abnormal state information to the loudspeaker and/or the display screen.
7. The intelligent cockpit system based on real-time streaming media of claim 6 wherein said database further comprises an identification feature model for comparing object features, said processing module further comprises,
the fusion submodule extracts the characteristics of an object in the real-time image based on the real-time image of the automobile front, gives identification characteristics based on the identification characteristic model, and superposes the object image given the identification characteristics and the real-time image to form a first display image;
extracting real-time navigation map information from a satellite navigation map;
superposing the real-time navigation map information and the first display image to form a second image with navigation information;
and displaying the second image through the display screen.
8. The intelligent cabin system based on real-time streaming media according to claim 7, wherein the database further stores alarm threshold information, and the processing module further comprises:
the road condition abnormality submodule is used for sending out an alarm signal when an object with identification features in the image reaches an alarm threshold;
the display screen, the loudspeaker and/or the alarm respond to the alarm signal by sending out an image alarm, a sound alarm and/or an audible-visual alarm.
CN202110671389.2A 2021-06-17 2021-06-17 Intelligent cabin system based on real-time streaming media Pending CN113306491A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110671389.2A CN113306491A (en) 2021-06-17 2021-06-17 Intelligent cabin system based on real-time streaming media


Publications (1)

Publication Number Publication Date
CN113306491A true CN113306491A (en) 2021-08-27

Family

ID=77379377

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110671389.2A Pending CN113306491A (en) 2021-06-17 2021-06-17 Intelligent cabin system based on real-time streaming media

Country Status (1)

Country Link
CN (1) CN113306491A (en)


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115095843A (en) * 2022-06-24 2022-09-23 中国第一汽车股份有限公司 Car lamp structure capable of realizing sound and light integration and control method thereof
CN115095843B (en) * 2022-06-24 2023-11-10 中国第一汽车股份有限公司 Car lamp structure capable of realizing sound and light integration and control method thereof
CN115469751A (en) * 2022-10-10 2022-12-13 复旦大学 Multi-mode human-computer interaction system and vehicle
CN115695698A (en) * 2022-10-29 2023-02-03 重庆长安汽车股份有限公司 Processing method, system, equipment and medium for driving information storage
CN115509366A (en) * 2022-11-21 2022-12-23 科大讯飞股份有限公司 Intelligent cabin multi-modal man-machine interaction control method and device and electronic equipment
CN117193530A (en) * 2023-09-04 2023-12-08 深圳达普信科技有限公司 Intelligent cabin immersive user experience method and system based on virtual reality technology

Similar Documents

Publication Publication Date Title
CN113306491A (en) Intelligent cabin system based on real-time streaming media
JP5492962B2 (en) Gaze guidance system
JP4475308B2 (en) Display device
US20080231703A1 (en) Field watch apparatus
CN105938657A (en) Auditory perception and intelligent decision making system of unmanned vehicle
KR102227489B1 (en) Apparatus and method for visualizing sound source
WO2015162764A1 (en) Vehicle-mounted information device and function limiting method for vehicle-mounted information device
CN107097793A (en) Driver assistance and the vehicle with the driver assistance
CN111016820B (en) Agent system, agent control method, and storage medium
KR20120127830A (en) User interface method for terminal of vehicle and apparatus tererof
WO2007074842A1 (en) Image processing apparatus
CN111016905A (en) Interaction method and system for automatic driving vehicle and driving remote control terminal
US10901503B2 (en) Agent apparatus, agent control method, and storage medium
CN110544368B (en) Fatigue driving augmented reality early warning device and early warning method
CN111216127A (en) Robot control method, device, server and medium
JP2002133596A (en) Onboard outside recognition device
CN216128208U (en) Intelligent cabin system based on real-time streaming media
KR20200020313A (en) Vehicle and control method for the same
CN108417061B (en) Method and device for detecting the signal state of at least one signaling device
JP2020020987A (en) In-car system
CN112334354A (en) Head-up display device
JP2002049998A (en) Drive support device
EP4124073A1 (en) Augmented reality device performing audio recognition and control method therefor
KR101816570B1 (en) Display apparatus for vehicle
CN110139205B (en) Method and device for auxiliary information presentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination