CN105654753A - Intelligent vehicle-mounted safe driving assistance method and system - Google Patents
- Publication number
- CN105654753A (application CN201610013625.0A)
- Authority
- CN
- China
- Prior art keywords
- driver
- vehicle
- information
- driving
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/09—Arrangements for giving variable traffic instructions
- G08G1/0962—Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
- G08G1/0967—Systems involving transmission of highway information, e.g. weather, speed limits
- G08G1/096708—Systems involving transmission of highway information, e.g. weather, speed limits where the received information might be used to generate an automatic action on the vehicle control
- G08G1/096725—Systems involving transmission of highway information, e.g. weather, speed limits where the received information might be used to generate an automatic action on the vehicle control where the received information generates an automatic action on the vehicle control
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
- G08G1/166—Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
- G08G1/167—Driving aids for lane monitoring, lane changing, e.g. blind spot detection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1097—Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Signal Processing (AREA)
- Life Sciences & Earth Sciences (AREA)
- Atmospheric Sciences (AREA)
- Multimedia (AREA)
- Computer Networks & Wireless Communication (AREA)
- Traffic Control Systems (AREA)
Abstract
The invention discloses an intelligent vehicle-mounted safe driving assistance method and system. The method comprises the steps of acquiring the driving-state information of a driver, acquiring the state information around the vehicle, and sending the acquired information to a local server for processing. The system comprises a first camera device, a second camera device, a driver state monitoring system, a driving environment monitoring system and a head-up display (HUD). Because the system is provided with two camera devices, the conditions inside and outside the vehicle can be monitored at the same time; by combining the state of the driver with the driving environment, the driver is prompted to pay attention to driving safety. A head-up display (HUD), which does not affect the driver's attention to the road, serves as the carrier of the local server, so that information is presented directly in the driver's line of sight and the occurrence of traffic accidents is avoided to the greatest extent.
Description
Technical field
The present invention relates to intelligent driving assistance systems, and in particular to an intelligent vehicle-mounted safe driving assistance method and system.
Background technology
A head-up display (HUD, Head Up Display) was originally a flight aid instrument used on aircraft. "Head-up" means that the pilot can see the important information he needs without looking down. Because of the convenience of the HUD and its ability to improve flight safety, airliners have also adopted it one after another. The HUD uses the principle of optical reflection to project important flight information onto a sheet of glass. The glass is located at the front end of the cockpit, at roughly eye level with the pilot, and the projected text and images are focused at optical infinity, so that when the pilot looks forward through the HUD his eyes do not need to refocus and a clear indication is maintained.
The basic framework of a HUD comprises two parts: a data processing unit and an image display. The data processing unit integrates the data of each system on the aircraft and, according to the selected mode, converts it into preset symbols, graphics, or text and numeric output. Some products split signal processing and image output into two devices, but the working principle is generally similar. The image display is mounted at the front of the cockpit, in the space between the pilot and the canopy; it receives information from the data processing unit and projects it onto the glass. The display is equipped with a control panel through which the output image can be adjusted or changed.
Improvements of the new generation of HUDs on the image display side include adopting holographic (Holographic) display, enlarging the displayed image, especially increasing the horizontal field-of-view angle, reducing the restriction and impact of the mounting structure on the field of view, enhancing brightness and display adjustment under different ambient conditions, sharpening the image, and fusing with other optical image outputs; for example, the forward image produced by an infrared camera can be projected directly onto the HUD and displayed fused with other data, coordinated with the use of night-vision goggles, and shown in color. Improvements on the data processing side include higher processing speed and efficiency. Since a HUD projects its images on a fixed device at the front of the cockpit, when the pilot turns his head the images leave his field of view. The new generation of HUDs is increasingly suitable for wide use in automobiles.
In people's inherent notion, driving should naturally focus on safety, but with the popularity of smartphones, mobile phone users rely at all times on the convenience and speed that the phone brings: calls, text messages, WeChat real-time communication, multimedia, map navigation, and so on. In today's growing crowd of "phubbers", however, the convenience the phone brings seriously affects driving safety; traffic accidents of many kinds are caused by owners using mobile phones while driving. Automobile manufacturers have come to realize the importance of the central control screen, treating the vehicle as the largest terminal device, and this "screen" in the car has become hotly contested. But does the vehicle-mounted central control screen really make driving safer? In real experience, the vehicle-mounted central control screen still has drawbacks and inconveniences, and can still distract the driver.
During driving, driver distraction and road emergencies inevitably occur, and an intelligent driving assistance system can monitor the conditions inside and outside the vehicle and remind the driver, when necessary, to pay attention to driving safety and to road emergencies. Current driver assistance systems basically comprise only one camera and cannot detect the conditions inside and outside the vehicle at the same time. In addition, the integrated information processing system of existing driver assistance systems is the in-vehicle head unit, but using the head unit for integrated information processing does not present data directly: the driver needs to move his line of sight from the road to the head unit, an operation that is inherently unsafe while driving, so the purpose of safe driving cannot truly be achieved.
Summary of the invention
The technical problem to be solved by the present invention is to detect the conditions inside and outside the vehicle at the same time, and to present them directly through a HUD.
To solve the above technical problem, the invention provides an intelligent vehicle-mounted safe driving assistance method, comprising:
acquiring the driving-state information of the driver,
acquiring the state information around the vehicle,
and sending the above state information to a local server for processing.
The driving-state information of the driver is acquired by a first camera device, and the state information around the vehicle is acquired by a second camera device; the first and second camera devices include an infrared camera and a network camera.
The driving-state information of the driver includes: acquiring the driver's gesture recognition instructions, identifying the driver's identity from facial information, and monitoring whether the driver is driving while fatigued.
The state information around the vehicle includes: acquiring video images around the vehicle; detecting whether the current vehicle deviates from the lane, and if so, issuing a lane departure warning; detecting whether there is a pedestrian ahead and, if one is within collision distance, issuing a collision warning; and detecting whether the distance to the vehicle ahead is within collision distance, and if so, issuing a collision warning.
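As a minimal sketch of the deviation and collision checks above (the thresholds, the stopping-distance rule and all names are assumptions for illustration, not the patent's method):

```python
def lane_departure_warning(lateral_offset_m, lane_half_width_m=1.75):
    """Warn when the vehicle centre drifts past the assumed lane boundary."""
    return abs(lateral_offset_m) > lane_half_width_m

def collision_warning(distance_m, speed_mps, reaction_time_s=1.5, decel_mps2=6.0):
    """Warn when the gap is shorter than a simple stopping-distance estimate."""
    stopping = speed_mps * reaction_time_s + speed_mps ** 2 / (2 * decel_mps2)
    return distance_m < stopping

warnings = []
if lane_departure_warning(2.0):
    warnings.append("lane departure")
if collision_warning(distance_m=20.0, speed_mps=15.0):  # ~54 km/h, 20 m gap
    warnings.append("forward collision")
print(warnings)
```

Both rules fire in the example above; a real system would derive the offset and gap from the second camera device's video stream.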
The local server is arranged on the HUD, and the HUD is connected to the cloud, so that information on the local server can be uploaded to cloud storage and downloaded back to the local server for processing when needed.
The invention also provides an intelligent vehicle-mounted safe driving assistance system, comprising:
a first camera device for acquiring the driving-state information of the driver,
a second camera device for acquiring the state information around the vehicle,
a driver state monitoring system for monitoring the driver's state according to the information acquired by the first camera device;
a driving environment monitoring system for monitoring the driving environment according to the state around the vehicle acquired by the second camera device;
and a HUD for processing the results of the above driver state monitoring and driving environment monitoring and issuing the corresponding reminders.
The driver state monitoring system is configured to:
detect the driver's gesture instructions and complete the corresponding operations;
identify the driver's identity information and apply the corresponding personal settings according to it;
and detect whether the driver is driving while fatigued, sending a reminder message to the HUD.
The driving environment monitoring system is configured to acquire video images around the vehicle,
detect whether the current vehicle deviates from the lane and, if so, issue a lane departure warning;
detect whether there is a pedestrian ahead and, if one is within collision distance, issue a collision warning;
and detect whether the distance to the vehicle ahead is within collision distance and, if so, issue a collision warning.
The HUD is configured to present information prompts upon receiving the driver state sent by the driver state monitoring system and the driving environment sent by the driving environment monitoring system. The information prompts include the operation instruction corresponding to the gesture recognition result, the operation instruction corresponding to the identity recognition result, the operation instruction corresponding to the fatigue driving result, and the collision avoidance instruction corresponding to the vehicle or pedestrian detection result.
The driver state monitoring system performs effective gesture detection, face detection, and human eye detection; the driving environment monitoring system performs lane line detection, HOG feature detection, and image multi-scale change detection.
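The routing of monitoring results to HUD prompts described above can be sketched as a small dispatch table; the event names and prompt strings are illustrative assumptions, not taken from the patent:

```python
# Hypothetical mapping from monitoring events to HUD prompt messages.
HUD_PROMPTS = {
    "gesture": "executing gesture command",
    "identity": "driver recognized, loading personal settings",
    "fatigue": "fatigue detected - please take a break",
    "lane_departure": "lane departure warning",
    "pedestrian": "pedestrian ahead - collision warning",
    "vehicle": "vehicle ahead too close - collision warning",
}

def hud_prompt(events):
    """Return the prompt lines the HUD should display for a set of events."""
    return [HUD_PROMPTS[e] for e in events if e in HUD_PROMPTS]

print(hud_prompt(["fatigue", "lane_departure"]))
```

Unknown events are silently ignored here; a real system might instead log them for diagnostics.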
Beneficial effects of the present invention:
1) The method of the present invention uses two camera devices to monitor the conditions inside and outside the vehicle simultaneously, combining the driver's state with the driving environment to remind the driver to drive with caution.
2) A head-up display (HUD) that does not affect driving safety is used as the server carrier, so that information is presented directly before the driver's eyes, avoiding traffic accidents to the greatest extent. This includes detecting whether the current vehicle deviates from the lane and, if so, issuing a lane departure warning; detecting whether there is a pedestrian ahead and, if one is within collision distance, issuing a collision warning; and detecting whether the distance to the vehicle ahead is within collision distance and, if so, issuing a collision warning.
3) The system of the present invention can provide personalized reminders and settings according to the driver identity recognition system, and can give a warning when a non-registered person drives the vehicle, ensuring the safety of the vehicle.
Description of the drawings
Fig. 1 is a schematic diagram of an embodiment of the intelligent vehicle-mounted safe driving assistance method of the present invention.
Fig. 2 is a schematic diagram of a specific embodiment of Fig. 1.
Fig. 3 is a schematic diagram of a specific embodiment of acquiring the driving-state information of the driver in Fig. 1.
Fig. 4 is a schematic diagram of a specific embodiment of the state information around the vehicle in Fig. 1.
Fig. 5 is a schematic diagram of an embodiment of a further operation method in Fig. 1.
Fig. 6 is a schematic diagram of an embodiment of the HUD-based intelligent vehicle-mounted safe driving assistance system of the present invention.
Fig. 7 is a schematic diagram of the operating process of the safe driving assistance system in Fig. 6 (based on acquiring the driving-state information of the driver).
Fig. 8 is a schematic diagram of the operating process of the safe driving assistance system in Fig. 6 (the state information around the vehicle).
Fig. 9 is a schematic diagram of another embodiment of the intelligent vehicle-mounted safe driving assistance method of the present invention.
Detailed description of the invention
In order to make the object, technical solutions and advantages of the present invention clearer, the present invention is described in more detail below in conjunction with specific embodiments and with reference to the accompanying drawings. In this description the vehicle-mounted head-up device (HUD) is taken as the preferred embodiment, which is not intended to limit the protection scope of the present invention.
Fig. 1 is a schematic diagram of an embodiment of the intelligent vehicle-mounted safe driving assistance method of the present invention.
The HUD-based intelligent vehicle-mounted safe driving assistance method in this embodiment comprises the following steps:
In step S101, the driving-state information of the driver is acquired, which can be done by image acquisition or by sound acquisition. Image acquisition yields video frames or photos; sound acquisition yields the denoised in-cabin sound, the ambient sound, or the noise-reduced sound, and may specifically adopt dual-microphone noise reduction with a Knowles high-signal-to-noise-ratio silicon microphone.
The sound thus acquired can be corrected in the following manner before recognition:
First, a reference speech library is established, and tone models and phoneme models are trained to obtain context-dependent triphone tone models and spectral phoneme models for standard speech. Acoustic features, including spectral features and tone features, are extracted from the driver's speech, and the tone features are post-processed. According to the standard tone and phoneme models, a context-dependent triphone tone model based on a hidden Markov model (Hidden Markov Model, HMM) is used to compute grading parameters reflecting tone quality, yielding a tone evaluation score and tone posterior probabilities. Based on a source-filter model, new speech with the standard target tone and the driver's spectral characteristics is synthesized and fed back to the driver. A weighted-average quadratic function of the tone posterior probabilities is used to draw the tone contour, and the standard tone contour and the contour of the actual pronunciation are fed back to the driver. The image acquisition may be implemented by the method of presenting images in a vehicle of CN201110238773.X, and auxiliary display may adopt the CAN-bus-controlled real-time wireless vehicle security monitoring system of CN201510202036.2. Those skilled in the art will understand that image acquisition can proceed by image input, preprocessing, feature extraction, classification and matching, where preprocessing may further include image segmentation, image enhancement, binarization and thinning. A decoder processes a video signal conforming to NTSC (National Television Standards Committee, the U.S. television standard), extracts the signal components (Y, Cb, Cr) contained in the video signal, performs A/D conversion, and generates a digital image signal usable for image processing.
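As an illustration of drawing a tone contour from per-frame values, a plain least-squares quadratic fit is sketched below. This is a simplification of the weighted-average quadratic function mentioned above (the posterior-probability weighting is omitted), and all names are assumptions:

```python
def fit_quadratic(ts, f0s):
    """Least-squares fit of a contour f0(t) ~ a*t^2 + b*t + c,
    solving the 3x3 normal equations by Gaussian elimination."""
    s = [sum(t ** k for t in ts) for k in range(5)]                # sums of t^0..t^4
    r = [sum(f * t ** k for t, f in zip(ts, f0s)) for k in range(3)]
    A = [[s[4], s[3], s[2]], [s[3], s[2], s[1]], [s[2], s[1], s[0]]]
    v = [r[2], r[1], r[0]]
    for i in range(3):                                             # forward elimination
        for j in range(i + 1, 3):
            m = A[j][i] / A[i][i]
            A[j] = [aj - m * ai for aj, ai in zip(A[j], A[i])]
            v[j] -= m * v[i]
    x = [0.0] * 3
    for i in (2, 1, 0):                                            # back substitution
        x[i] = (v[i] - sum(A[i][k] * x[k] for k in range(i + 1, 3))) / A[i][i]
    return x  # coefficients a, b, c

# A contour that is exactly quadratic is recovered exactly.
a, b, c = fit_quadratic([0, 1, 2, 3, 4], [3, 6, 11, 18, 27])
print(round(a, 6), round(b, 6), round(c, 6))
```

The fitted standard contour and the driver's actual contour could then both be rendered on the HUD for comparison.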
Preferably in this embodiment, image acquisition adopts a video camera arranged in the vehicle, for example a car, near the driver's seat, imaging a predetermined area ahead in the travel direction visible through the front windshield. The main camera images, as its subject, the travel path ahead of the vehicle, part of the vehicle body, preceding vehicles, oncoming vehicles and so on, and in practice repeatedly captures images and continuously outputs them at a specific frame period (for example, 20 frames per second). The camera may also be arranged in the vehicle near the driver's seat, or further near the dashboard, so that after the driver enters the vehicle, the driver's face, iris, etc. can be recognized. In addition, a camera may be arranged near the rear seats, imaging a predetermined area behind in the travel direction visible through the rear windscreen; this rear camera images, as its subject, the travel path behind, part of the vehicle body, following vehicles, oncoming vehicles and so on. The image recognition MCU arranged in the HUD has the function of performing predetermined image processing such as pattern recognition. When a candidate road sign is judged from the captured image output by the camera, and the candidate meets a predetermined standard, the image recognition MCU identifies the candidate as a road sign.
In step S102, the state information around the vehicle is acquired, which may include the state information from the rearview mirrors, the travel information recorded by the vehicle navigator, and the anti-collision radar during reversing; this information is displayed on the HUD through the in-vehicle CAN communication bus.
In step S103, the above state information is sent to the local server for processing. The local server is the internal storage unit of the HUD, used to cache and process data; the processor is a 4-core ARM CPU with 1 GB RAM, and controls the HUD to perform the various necessary functions by executing pre-stored programs. Specifically, when the processor reads information input through the various interfaces via the control interface, it records the information as preset transport information, or compares the state of the vehicle with the road signs identified by image recognition and records the comparison result as transport information. For example, the processor reads the speed, engine revolutions, etc. obtained at each specific time (for example, every 0.5 seconds), generates a file of transport information such as vehicle operating time, distance travelled, maximum speed, average speed, overspeed time, overspeed count, engine revolution overtime, over-revolution count, sudden starts, sudden accelerations, sudden decelerations and idle time, and stores this file in the memory.
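The transport-information file described above can be sketched as follows; the field names, the limits, and the interpretation of the 0.5-second sampling period are illustrative assumptions:

```python
import json

def trip_summary(samples, speed_limit_kmh=120, rpm_limit=4000):
    """Summarize periodic (speed_kmh, rpm) samples, taken every 0.5 s,
    into a transport-information record like the one the processor stores."""
    dt_h = 0.5 / 3600.0                      # sampling period in hours
    speeds = [s for s, _ in samples]
    return {
        "operating_time_s": 0.5 * len(samples),
        "distance_km": sum(s * dt_h for s in speeds),
        "max_speed_kmh": max(speeds),
        "avg_speed_kmh": sum(speeds) / len(speeds),
        "overspeed_count": sum(1 for s in speeds if s > speed_limit_kmh),
        "overrev_count": sum(1 for _, r in samples if r > rpm_limit),
    }

samples = [(100, 2500), (130, 3000), (125, 4500), (90, 2000)]
print(json.dumps(trip_summary(samples), indent=2))
```

The JSON record stands in for the file the processor would write to memory (or, via the record carrier interface, to a memory card).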
The memory is constructed from ROM, including but not limited to a memory constructed from SDRAM and a memory constructed from EEPROM, both connected to the processor. The memory stores in advance the programs to be executed and the constant data used by the various controlled units. The SDRAM memory stores temporary data, such as data currently being processed. The EEPROM memory stores the various constant data referenced by the processor. Since EEPROM is non-volatile, data is preserved even when the power supply is stopped; if necessary, the contents of the data stored in the EEPROM memory can be rewritten.
In this embodiment, the HUD may further be provided with the following signal interfaces:
A vehicle signal interface (I/F) performs processing such as inputting the various signals output from the vehicle side and converting them into signals that can be processed by the processor. The signals handled by the vehicle signal interface include trigger output, ignition (IGN), hand switch (SW), brake, left turn signal lamp, right turn signal lamp, vehicle speed pulse, etc.
A GPS interface (I/F) performs processing such as inputting positioning information from GPS and converting it into a signal that can be processed by the processor. The GPS receiver receives incoming radio waves from multiple GPS (Global Positioning System) satellites and obtains the current location, that is, it calculates the current location (latitude/longitude) of the vehicle based on the time differences of these radio waves. The processor can therefore obtain the current location of the vehicle from the GPS receiver.
A CAN interface (I/F) performs processing such as inputting information from other devices arranged in the vehicle and converting it into a signal that can be processed by the processor.
A sound interface (I/F) is connected to a voice output part such as a speaker (not shown), and performs processing such as inputting the acoustic signal from the processor and outputting voice from the voice output part according to that acoustic signal.
A record carrier interface (I/F) has a slot to which a predetermined record carrier such as a memory card can be attached, and includes the signal processing functions necessary for the processor to access the record carrier. The control-system CPU 20 can store the transport-information file in the record carrier through the record carrier interface. In addition, through an administrator PC or the like, a manager responsible for the vehicle's transport information can obtain the transport-information files accumulated by the processor on the record carrier.
A PC setting interface (I/F) is an interface for connecting a host apparatus such as a PC to the HUD. By connecting a host apparatus through the PC setting interface, for instance, the various violation-condition threshold values referenced by the processor in its judgments can be changed by instructions from the host apparatus.
A vibrator output part includes a miniature motor or the like, and vibrates according to a signal from the processor to remind the occupants to pay attention.
Fig. 2 is a schematic diagram of a specific embodiment of Fig. 1.
In the figure, 201 includes an infrared camera and a network camera; the network camera and the infrared camera are arranged in the vehicle near the driver's seat, imaging a predetermined area ahead in the travel direction visible through the front windshield. The infrared camera is used to recognize the face or iris directly at night or in poor light. The front camera and the infrared night-vision camera can also be used to capture the user's gestures.
In step S201, the first camera device performs the acquisition, proceeding to S101.
In step S202, the second camera device performs the acquisition, proceeding to S102.
Fig. 3 is a schematic diagram of a specific embodiment of acquiring the driving-state information of the driver in Fig. 1.
In step S101, the driving-state information of the driver is acquired, proceeding to step S301;
In step S301, whether the driver is driving while fatigued is monitored. In the present invention, the facial image information collected by the camera is first binarized; the binarized luminance component matrix is then accumulated horizontally and vertically respectively, the face pixels are cropped, and the face locating frame is drawn. The horizontal complexity is then calculated to obtain the eye locating frame. Whether the ratio of the first and second peaks of the complexity, the ratio of the first peak to the horizontal width of the face locating frame, and the horizontal position of the first peak fall within threshold ranges determines the open/closed state of the driver's eyes, and thereby whether the driver is in a fatigued driving state; if the driver is judged to be fatigued, an alarm is started automatically.
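A minimal sketch of the binarization, horizontal/vertical accumulation and peak-ratio test described above, with toy thresholds (the patent does not give concrete values, so the 128 and 0.5 here are assumptions):

```python
def binarize(gray, threshold=128):
    """Binarize a grayscale image (rows of pixel values): dark pixels -> 1."""
    return [[1 if px < threshold else 0 for px in row] for row in gray]

def projections(binary):
    """Horizontal (per-row) and vertical (per-column) accumulations,
    used to crop the face pixels and draw the locating frame."""
    horiz = [sum(row) for row in binary]
    vert = [sum(col) for col in zip(*binary)]
    return horiz, vert

def eyes_open(horiz, open_ratio=0.5):
    """Crude open/closed decision from the ratio of the two largest row peaks."""
    first, second = sorted(horiz, reverse=True)[:2]
    return bool(first) and second / first > open_ratio

gray = [
    [40, 50, 200, 60, 45],      # dark pixels around the eyes
    [210, 220, 215, 205, 230],  # bright forehead/cheek row
    [55, 60, 70, 65, 58],       # dark mouth/shadow row
]
h, v = projections(binarize(gray))
print(h, v, eyes_open(h))
```

Real frames would of course be much larger, and the decision would combine all three threshold checks named in the text rather than the single peak ratio used here.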
In step S302, the driver's gesture recognition instructions are acquired. Taking two-dimensional hand-shape recognition of gesture instructions as an example: two-dimensional hand-shape recognition works on the two-dimensional level, requiring only two-dimensional information, without depth, as input during processing. Just as an ordinary photograph contains only two-dimensional information, a two-dimensional image captured by a single camera is taken as input, and computer vision techniques are then used to analyze the input image and obtain information, thereby realizing gesture recognition. Those skilled in the art will understand that in computer vision (Computer Vision, CV) the processing of visual information relies on image processing methods, which include image enhancement, data encoding and transmission, smoothing, edge sharpening, segmentation, feature extraction, image recognition and understanding. After such processing the quality of the output image is considerably improved, which both improves the visual effect of the image and makes it easier for the computer to analyze, process and recognize it. For example, after a two-dimensional hand-shape image is acquired, it may be enhanced: image contrast can be enhanced by histogram transformation, realizing single-channel image enhancement with the gray-level threshold stretched to 0-255. Data encoding and transmission then follow, to compress the image; the main purpose of image smoothing is denoising, removing image distortion possibly caused by environmental factors (such as the vehicle bumping or turning, or darkness). Edge sharpening may adopt the image sharpening method based on edge detection (Zeng Jialiang's method).
After the above operations are completed, feature extraction can be carried out, which may adopt a linear transformation based on the Karhunen-Loève expansion, or a nonlinear mapping method such as multidimensional scaling or parameter mapping, finally completing the recognition of the image.
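The two-dimensional hand-shape recognition pipeline above can be illustrated with a toy nearest-template classifier; the 3x3 binary templates and the similarity threshold are stand-ins for the real feature extraction and matching, not the patent's method:

```python
def hamming_similarity(a, b):
    """Fraction of matching cells between two equal-size binary masks."""
    match = sum(x == y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    return match / (len(a) * len(a[0]))

GESTURE_TEMPLATES = {
    # Tiny illustrative hand-shape templates (1 = hand pixel).
    "open_palm": [[1, 1, 1], [1, 1, 1], [1, 1, 1]],
    "fist":      [[0, 0, 0], [0, 1, 0], [0, 0, 0]],
}

def classify_gesture(mask, min_sim=0.7):
    """Nearest-template classification of a binarized hand mask;
    returns None when no template matches well enough."""
    best, score = None, 0.0
    for name, template in GESTURE_TEMPLATES.items():
        s = hamming_similarity(mask, template)
        if s > score:
            best, score = name, s
    return best if score >= min_sim else None

print(classify_gesture([[1, 1, 1], [1, 1, 1], [1, 0, 1]]))  # open_palm
```

Each recognized gesture name would then be mapped to an operation instruction for the HUD.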
In step S303, the driver's identity is recognized according to facial information. Before the vehicle is started, identity recognition and authentication first determine whether the driver is a legitimate driver; if so, the vehicle can be started, otherwise the driver is prevented from starting the vehicle. While driving, the driver's face is scanned and recognized to judge whether the recognized driver and the driver confirmed before departure are the same person; if not, the driver is reminded to authenticate his identity information, and if authentication succeeds, driving may continue. While driving, the driver's fatigue state is also detected and recognized in real time, and reminders and speed limits are applied according to the fatigue driving level. If it is the same driver, driving can be resumed after the driver is awake.
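The identity and fatigue gating described above can be sketched as follows; the fatigue-rank table and the return codes are assumptions for demonstration only:

```python
# Assumed speed caps (km/h) per fatigue rank; rank 0 means alert.
FATIGUE_SPEED_CAP_KMH = {0: None, 1: 100, 2: 80, 3: 60}

def identity_gate(authorized_ids, scanned_id, while_driving=False):
    """Allow a recognized driver; otherwise block the start or, while
    driving, request re-authentication."""
    if scanned_id in authorized_ids:
        return "allow"
    return "request_authentication" if while_driving else "block_start"

def fatigue_action(rank):
    """Reminder plus the speed cap associated with a fatigue rank."""
    cap = FATIGUE_SPEED_CAP_KMH.get(rank)
    return ("remind", cap) if rank else ("none", None)

print(identity_gate({"alice"}, "bob"))                      # block_start
print(identity_gate({"alice"}, "bob", while_driving=True))  # request_authentication
print(fatigue_action(2))                                    # ('remind', 80)
```

In a real system the identifiers would be face/iris embeddings rather than strings, and the speed cap would be enforced through the vehicle control interface.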
Fig. 4 is a schematic diagram of a specific embodiment of the state information around the vehicle in Fig. 1.
In S102, the state information around the vehicle is acquired, proceeding to S401;
In S401, video images around the vehicle are acquired and whether the current vehicle deviates from the lane is detected; if so, a lane departure warning is issued. The judgment is based on the relationship among the vehicle, the people, and the road; the RRS method can be adopted, triggering the deviation warning in time when the outer edge of the front tire is about to touch the road boundary. Preferably, a BLIS blind-spot monitoring system can be adopted, whose sensing signal source is located at the vehicle tail and monitors following vehicles by radar, while a camera located at the interior rearview mirror detects whether the vehicle has deviated from the lane. Merging assistance may also be called blind-spot monitoring; this device reminds the driver of vehicles behind via the two side mirrors or elsewhere. The detection range of the BLIS blind-spot monitoring system is 15 meters; that is, an external vehicle is monitored once it enters the 15-meter range around this vehicle, and the system judges that the two vehicles are too close regardless of whether the other vehicle is in the blind spot and regardless of whether the driver has switched on the turn signal. Two cameras are built in below the two side mirrors; the blind area behind is fed back synchronously to the display screen of the trip computer, and a merging warning lamp on the mirror pillar reminds the driver to pay attention to the blind area in that direction.
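A minimal sketch of the 15-meter BLIS monitoring rule described above (the data layout and names are illustrative):

```python
def blind_spot_alerts(rear_vehicles, monitor_range_m=15.0):
    """BLIS-style check: flag every external vehicle that has entered the
    15 m monitoring zone behind this vehicle, regardless of whether it sits
    in the blind spot or whether the turn signal is on."""
    return [v["id"] for v in rear_vehicles if v["distance_m"] <= monitor_range_m]

rear = [{"id": "car_a", "distance_m": 8.0}, {"id": "car_b", "distance_m": 22.0}]
print(blind_spot_alerts(rear))  # ['car_a']
```

Each flagged vehicle would light the merging warning lamp on the corresponding mirror pillar and appear on the trip computer display.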
In S402, whether there is a pedestrian ahead is detected; if one is within collision distance, a collision warning is issued.
In S403, whether the distance to the vehicle ahead is within collision distance is detected; if so, a collision warning is issued.
In step S403, the collision distance between the host vehicle and a following vehicle in an adjacent lane is set based on the following judgment rules:
When the following vehicle fills the rear-view mirror:
When the following vehicle fills the right rear-view mirror, the gap between the two vehicles is about 3 meters; when it fills the left rear-view mirror, however, the actual distance is only about 1 meter. In either case (left or right) the two vehicles are quite close.
When the following vehicle fills 2/3 of the rear-view mirror:
When the following vehicle fills 2/3 of the left rear-view mirror, the actual distance is only about 3 meters, compared with about 5 meters for the right rear-view mirror. In everyday driving such a distance is still too short, and no merging, overtaking or lane change should be attempted.
When the following vehicle fills 1/2 of the rear-view mirror:
When the following vehicle fills 1/2 of the left rear-view mirror, the distance to it is about 5 meters, differing from the roughly 9 meters indicated by the right rear-view mirror. In everyday driving such a distance is relatively safe.
When the following vehicle fills 1/3 of the rear-view mirror:
When the following vehicle fills 1/3 of the left rear-view mirror, the distance to it is about 10 meters, close to the 12 meters indicated by the right rear-view mirror. In everyday driving such a distance can basically be considered safe: at lower speeds it allows operations such as merging, but at higher speeds, or on an expressway, overtaking at such a distance or hard braking by the vehicle ahead can easily cause an accident. The collision distances are set according to the above judgment rules and recorded in the MCU of the HUD, and collision warnings are issued by means such as a flashing indicator lamp or an audible alarm.
In step S403, the collision distance to the vehicle directly ahead is set based on the following judgment rules:
When the driver of the following vehicle can just see the lower edge of the tires of the vehicle ahead (an SUV), the two vehicles are about 4.6-5 meters apart.
When the driver of the following vehicle can just see the entire rear end and the tires of the vehicle ahead (an ordinary sedan), the two vehicles are about 3-4 meters apart.
When the driver of the following vehicle can just see the lower edge of the bumper of the vehicle ahead (an SUV), the two vehicles are about 2.8-3 meters apart.
When the driver of the following vehicle can just see the lower edge of the bumper (almost the entire rear end) of the vehicle ahead (an ordinary sedan), the two vehicles are about 2-2.5 meters apart.
When the driver of the following vehicle can just see the upper edge of the bumper of the vehicle ahead (an SUV), the two vehicles are about 0.8-1 meter apart.
When the driver of the following vehicle can just see the position above the upper edge of the bumper of the vehicle ahead (an ordinary sedan), the two vehicles are about 1-1.2 meters apart.
In step S402, the pedestrian collision distance is set based on the following judgment rules:
Driving on a street crowded with pedestrians is a real test of the driver's skill: the driver must not only judge the vehicle's position on the road correctly, but also estimate the approximate distance between the front of the vehicle and nearby pedestrians.
When the driver can just see the feet of a pedestrian ahead (pedestrian height 1.75 meters), the distance between the vehicle and the pedestrian is about 4.4 meters.
When the driver can just see the pedestrian ahead from the knees up, the distance between the vehicle and the pedestrian is about 2.3 meters.
When the driver can just see the pedestrian ahead from the hips up, the distance between the vehicle and the pedestrian is about 0.3 meters.
Further, the measured distances may differ somewhat for different types of vehicle. The distances given here are most suitable for a three-box (sedan) vehicle; drivers of other vehicle types may treat them as reference values. They are stored in the processor of the HUD.
Fig. 5 is a schematic diagram of an embodiment of the further processing steps in Fig. 1.
In step S103, the above status information is transmitted to the local server for processing; the information processed by the local server is presented directly in front of the driver's eyes by the HUD. At the same time, the processor in the HUD caches the status information and displays it on the HUD screen.
In step S501, the local server is arranged on the HUD.
In step S502, the HUD is connected to the cloud so that the information on the local server can be uploaded to cloud storage. The information transmitted to the cloud consists of the images, collected by the first camera, of the predetermined region ahead in the direction of travel visible through the front windshield; before uploading, image recognition and matching is performed, and images that do not meet the requirements are not stored.
In step S503, when the information is needed, it is downloaded from the cloud to the local server for processing, so that local storage space is not occupied.
Fig. 6 is a schematic diagram of an embodiment of the intelligent vehicle-mounted safe driving assistance system of the present invention.
First camera 101, used to acquire the status information of the driver driving the vehicle;
Second camera 108, used to acquire the status information of the vehicle surroundings;
Driver status monitoring system: may include a gesture recognition module 102, a driver identity recognition module 103 and a fatigue driving detection module 104, and monitors the driver's status according to the information acquired by the first camera. The gesture recognition module 102 recognizes the driver's gestures, which include but are not limited to the static gestures: open palm (five fingers spread), index finger, fist, scissors hand, closed palm (five fingers together) and the "OK" gesture. Through these gestures the driver can, hands-free, answer or reject incoming calls, play or switch music or radio, and reply to or check chat applications such as WeChat. The driver identity recognition module 103 identifies the driver's identity information, which includes: whether the driver is the owner of the vehicle, the driver's sex, and the driver's driving habits (seat height, seat distance, etc.).
The fatigue driving detection module 104 detects whether the driver is driving while fatigued. It may first perform driver identity authentication: based on the video images of the driver in the current period, the identity of the driver currently driving the vehicle is authenticated by face recognition. Driving time is accumulated simultaneously, and the driving time record of the driver is obtained according to the identity authentication result. The module further includes: emotional state extraction, which extracts the driver's emotional state in the current period from the driver video images; limb state extraction, which extracts the driver's limb state in the current period from the same images; road condition collection, which obtains the environmental information around the vehicle from the video images captured by the scene camera in the current period; and finally vehicle handling state extraction, which combines the road condition information with the driving state of the vehicle to extract how the driver is handling the vehicle in the current period. In summary, the accumulated driving time of the current period is used as the timestamp of the driver's emotional state, limb state and vehicle handling state; the data tracking the changes in the driver's working state are updated, the state changes in the updated data are analyzed, and it is judged whether the driver is currently driving while fatigued. Through face recognition, the driver's facial expression and limb movements are tracked over time for changes from a normal state to a fatigued state, for example lengthening eye-closure times or an increasing frequency of head nodding; whether the driver's handling of the vehicle declines over time is also tracked, for example an increasing variance of steering wheel rotation. From these cues the module detects whether the driver is in a fatigued driving state.
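The two fatigue cues named above (lengthening eye closure, growing steering-wheel variance) can be sketched as a minimal combined judgment. The per-frame eye-open flags and steering-angle samples are assumed to come from the face-tracking and vehicle-state steps, and the threshold values are illustrative assumptions, not values from this description:

```python
from statistics import pvariance

EYE_CLOSED_FRACTION_LIMIT = 0.3   # assumed: eyes closed >30% of recent frames
STEERING_VARIANCE_LIMIT = 25.0    # assumed: variance of wheel angle (deg^2)


def is_fatigued(eye_open_flags, steering_angles_deg):
    """Combine the two fatigue cues tracked over the current period:
    fraction of frames with eyes closed, and steering-wheel variance."""
    closed_fraction = eye_open_flags.count(False) / len(eye_open_flags)
    steering_var = pvariance(steering_angles_deg)
    return (closed_fraction > EYE_CLOSED_FRACTION_LIMIT
            or steering_var > STEERING_VARIANCE_LIMIT)
```

A real implementation would evaluate these over a sliding time window keyed by the accumulated driving time, as the description specifies.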
Driving environment monitoring system: may include a module 105 for detecting whether the current vehicle deviates from its lane, a module 106 for detecting whether there is a pedestrian ahead of the vehicle, and a module 107 for detecting whether the distance to the vehicle ahead is within the collision distance; it monitors the driving environment according to the vehicle surroundings information acquired by the second camera. The lane departure detection module 105 detects whether the vehicle has left its lane during driving; detection may be based on the BLIS blind-spot monitoring system, with a lane-change warning light mounted on the pillar of the rear-view mirror to remind the driver. The pedestrian detection module 106 judges the distance to pedestrians against the collision distance: no warning is issued if the distance is greater than the collision distance, and a warning is issued if it is smaller. The front vehicle distance module 107 calculates the distance to the vehicle ahead against the collision distance and issues a warning accordingly; the camera may collect information about the vehicle ahead, and this may be combined with other means.
HUD 109 processes the results of the above driver status monitoring and driving environment monitoring and issues the corresponding reminders. The reminders include audio prompts on a sound unit and actions applied to physical units, such as adjusting the height and inclination of the seat. The HUD 109 is located on the surface of the instrument panel inside the vehicle.
Fig. 7 is a schematic diagram of the operating process of the safe driving assistance system in Fig. 6 (based on acquiring the status information of the driver driving the vehicle).
Step S701 inputs an image, including but not limited to a face image or a driver gesture image; face images include iris recognition images and face recognition images.
Gesture recognition:
Step S702 performs image preprocessing. Preprocessing includes but is not limited to: digitization, geometric transformation, normalization, smoothing, restoration and enhancement. Digitization: the gray value of an original photograph is a continuous function of the spatial variables (position); by sampling the photograph's gray levels on an M × N grid and quantizing them (assigning each sample to one of a set of discrete gray levels), a digital image that a computer can process is obtained. Geometric transformation corrects the systematic errors of the image acquisition system and the random errors of the imaging position. Normalization gives certain features of the image invariance under a given class of transformations, yielding a standard graphical form; some properties of an image, for example the contour of a finger, are inherently invariant under coordinate rotation. Smoothing eliminates random noise in the image; the basic requirement is that noise be removed without blurring the image contours or lines, using for example the median method, local averaging or the k-nearest-neighbour averaging method. The basic restoration technique regards the obtained degraded image g(x, y) as the convolution of a degradation function h(x, y) with the ideal image f(x, y); their Fourier transforms satisfy G(u, v) = H(u, v) F(u, v). Once the degradation function has been determined from the degradation mechanism, F(u, v) can be obtained from this relation, and f(x, y) is then recovered by the inverse Fourier transform. Enhancement selectively strengthens and suppresses information in the image to improve its visual effect, or converts the image into a form more suitable for machine processing, for data extraction or recognition.
For example, an image enhancement system may highlight the contour lines of an image with a high-pass filter, enabling a machine to measure along the contours. Image enhancement techniques such as contrast stretching, logarithmic transformation, density slicing and histogram equalization can be used to change the image tone and bring out details.
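The smoothing step named above (the median method) can be sketched in plain Python; a production system would use an optimized image library instead. Leaving border pixels unchanged is a simplification assumed by this sketch:

```python
def median_filter_3x3(img):
    """Apply a 3x3 median filter to a grayscale image given as a list of
    lists of ints; returns a new image of the same size. Border pixels are
    copied unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            window.sort()
            out[y][x] = window[4]  # median of the 9 window values
    return out
```

This removes impulse ("salt-and-pepper") noise without blurring edges the way a mean filter would, which is exactly the requirement stated for the smoothing step.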
Step S703 performs gesture detection according to the above results. As those skilled in the art will understand, gesture detection matches the detected gesture image against a preset gesture library; for example, if detection and analysis determine that the result is the "OK" gesture, the "OK" gesture is matched in the gesture library and the corresponding operation instruction is executed.
Step S704 judges whether the gesture is a valid gesture. If it is not valid, no operation instruction is triggered and nothing is recorded on the server; if it is valid, an operation instruction is triggered. Operation instructions include but are not limited to: wake-up instructions, navigation instructions, telephone instructions, music instructions and WeChat instructions.
Step S705 then outputs the gesture instruction; this step converts the gesture instruction into a machine operation instruction and outputs it directly through the HUD.
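Steps S703-S705 (match against a preset library, reject invalid gestures, dispatch the operation instruction) can be sketched as a simple mapping. The gesture names and instruction strings below are illustrative assumptions:

```python
# Preset gesture library: recognized gesture -> operation instruction
GESTURE_LIBRARY = {
    "open_palm":    "answer_call",
    "fist":         "reject_call",
    "index_finger": "switch_music",
    "ok":           "confirm",
}


def dispatch_gesture(detected: str):
    """Return the operation instruction for a valid gesture, or None for an
    invalid one (no instruction triggered, nothing recorded on the server)."""
    return GESTURE_LIBRARY.get(detected)
```

Returning `None` for unknown gestures mirrors the S704 validity check: only library gestures trigger an instruction.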
Fatigue detection:
Step S706 performs face detection, which may be carried out as follows: first, an integral image is used to rapidly compute Haar-like features; then the AdaBoost learning algorithm performs feature selection and classifier training, combining weak classifiers into a strong classifier; finally, a classifier cascade is adopted to improve efficiency.
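The integral-image step named above is what makes Haar-like features fast: once the integral image is built, the sum over any rectangle costs four lookups. A minimal pure-Python sketch (the two-rectangle feature at the end is one of the simplest Haar-like features):

```python
def integral_image(img):
    """ii[y][x] = sum of img over rows < y and cols < x; size (h+1) x (w+1)."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii


def rect_sum(ii, x, y, w, h):
    """Sum of the w x h rectangle with top-left corner (x, y): 4 lookups."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]


def haar_two_rect(ii, x, y, w, h):
    """Two-rectangle Haar-like feature: left half minus right half."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)
```

AdaBoost then selects thresholds on many such feature values to form the weak classifiers that the cascade combines.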
Step S707 performs human eye detection, mainly including iris recognition. The hardware includes an infrared emitter, an iris recognition camera and an infrared sensing module; the infrared sensing module comprises an infrared sensor and an infrared filter arranged in front of its sensing position. When the iris system operates in iris recognition mode, iris data are collected by the infrared emitter and the iris recognition camera, and the collected iris data are subjected to biometric recognition; when it operates in image capture mode, the image collected by the iris recognition camera is obtained, the infrared signal sensed by the infrared sensing module is obtained synchronously, and the synchronously obtained image is corrected according to the acquired infrared signal.
Step S708 judges whether the driver is driving while fatigued; the judgment may be made in the manner described above.
Step S709 reminds the driver to rest; the reminder may be displayed directly on the HUD or issued as an audible warning.
Identity recognition:
Step S710 performs face recognition. The owner's information may first be pre-stored in a face database for subsequent recognition; it includes but is not limited to a photograph or video stream of the owner's face and the mobile phone number bound to the owner.
Step S711 determines whether the driver is the owner by matching against the pre-stored owner information.
Step S712: if the driver is not the owner, a text message can be sent to the owner's mobile phone reminding the owner that the vehicle is being operated by someone else.
Fig. 8 is a schematic diagram of the operating process of the safe driving assistance system in Fig. 6 (based on the status information of the vehicle surroundings).
Step S801 inputs an image.
Step S802 performs image preprocessing. Preferably, parameter extraction in the camera position adjustment algorithm is used to obtain the region of interest (ROI) of the image, and edge detection is performed to increase the effective information of the image and suppress interference. The parameters can be set according to the detection results obtained under different environments (daytime, overcast, night).
Step S803 performs lane line detection, preferably lane line detection using the Hough transform:
ρ = x cos θ + y sin θ
where (x, y) denotes a point in image space, ρ is the distance from the straight line to the origin of image space, and θ is the angle between the normal to the line and the x-axis. The target points in image space are projected by this coordinate transform into parameter space; the points with the highest vote counts in parameter space are found statistically, giving the corresponding line equations in image space. Since the lane lines of a road are typically distributed on the left and right sides of the image, the voting ranges of ρ and θ are restricted: by limiting the polar angle and polar radius around the lane lines and through repeated tests, the polar angle constraint region and polar radius constraint of the target points can be obtained, i.e. the ROI. Finally, a Gabor filter is adopted for cases where the lane lines on the ground are unclear or other markings interfere.
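The voting step can be sketched directly from the equation ρ = x cos θ + y sin θ: every edge point votes for each (θ, ρ) bin it could lie on, and the most-voted bin gives the detected line. A minimal sketch; the 1° binning, integer-rounded ρ, and the default θ range are illustrative assumptions (a lane detector would restrict the θ range as described above):

```python
import math
from collections import Counter


def hough_best_line(points, theta_deg_range=range(0, 180, 1)):
    """Return the (theta_deg, rho) bin receiving the most votes from the
    given edge points, with rho rounded to the nearest integer pixel."""
    votes = Counter()
    for x, y in points:
        for theta_deg in theta_deg_range:
            t = math.radians(theta_deg)
            rho = round(x * math.cos(t) + y * math.sin(t))
            votes[(theta_deg, rho)] += 1
    return votes.most_common(1)[0][0]
```

For a vertical line x = 3, every point votes ρ = 3 at θ = 0; for a horizontal line y = 2, every point votes ρ = 2 at θ = 90.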
Step S804 judges the position of the vehicle in its lane. With the polar angle and polar radius of the lane lines obtained above within the monitored region, the position of the lane lines can be detected quickly and accurately; the Gabor filter supplies usable image information when turning, when changing lanes, or when the camera position is offset.
Step S805 judges whether the vehicle is crossing the lane; this can be judged from the region of interest (ROI) of the image.
Step S806 reminds the driver to mind the lane: if the above judgment finds lane line departure, a warning is sounded or a reminder such as a flashing signal is shown.
Step S807 performs HOG feature detection. The HOG feature, i.e. the Histogram of Oriented Gradients, is a feature descriptor used for object detection in computer vision and image processing; it forms the feature by computing and accumulating gradient orientation histograms over local regions of the image.
First, the collected pedestrian image is divided into small connected regions to obtain cell units; then the histogram of the gradients or edge orientations of the pixels in each cell unit is gathered; finally, the obtained histograms are combined to form the feature descriptor for pedestrian detection. Preferably, the histogram density within each block interval is first computed, and each cell unit in the interval is then normalized according to that density; after this normalization, better robustness to illumination changes and shadows is obtained.
Step S808 performs pedestrian detection.
The HOG feature extraction method operates on a target or scanning window as follows:
1) Grayscale conversion: the image is regarded as a three-dimensional image in x, y and z (gray value);
2) Gamma correction standardizes (normalizes) the color space of the input image, in order to adjust the contrast of the image, reduce the influence of local shadows and illumination changes, and at the same time suppress noise interference;
3) The gradient (magnitude and orientation) of each pixel of the image is computed, primarily to capture contour information while further weakening the interference of illumination;
4) The image is divided into cell units (e.g. 6×6 pixels per cell);
5) The histogram of gradients (the counts of the different gradient orientations) of each cell unit is accumulated; this forms the descriptor of each cell unit;
6) Every few cell units form a block (e.g. 3×3 cells per block); concatenating the feature descriptors of all cells within a block yields the HOG feature descriptor of that block;
7) Concatenating the HOG feature descriptors of all blocks in the image yields the HOG feature descriptor of the target window.
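Steps 3) and 5) above, for a single cell, can be sketched in plain Python: gradients by central differences, then a magnitude-weighted 9-bin histogram of unsigned orientations (0-180°). The one-pixel border skip and the 9-bin choice are simplifying assumptions of this sketch; block normalization (step 6) would follow:

```python
import math


def cell_orientation_histogram(cell, bins=9):
    """cell: list of lists of gray values. Returns a `bins`-bin histogram of
    unsigned gradient orientations, each vote weighted by gradient magnitude."""
    h, w = len(cell), len(cell[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = cell[y][x + 1] - cell[y][x - 1]  # central differences
            gy = cell[y + 1][x] - cell[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned angle
            hist[min(int(ang / (180.0 / bins)), bins - 1)] += mag
    return hist
```

A vertical edge produces purely horizontal gradients, so all its votes land in the first (0°) bin.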
Further, HOG features combined with an SVM classifier have been widely used in image recognition. For HOG+SVM pedestrian detection: extract the HOG features of the positive and negative samples and feed them to SVM classifier training to obtain a model; generate a detector from the model; run the detector on the negative samples to collect hard examples; extract the HOG features of the hard examples and retrain together with the features from the first step; this finally yields the detector.
Step S809 performs multi-scale transformation. Based on the wavelet transform, the image signal obtained from acquisition can be decomposed into multiple sub-band signals with different spatial resolutions, frequency characteristics and directional characteristics, so that low-frequency long-duration features and high-frequency short-duration features can be processed at the same time.
Step S810 performs vehicle detection: using the above HOG features combined with the SVM classifier, together with the multi-scale transformation, vehicle detection information is obtained after the acquired images are processed.
Step S811 computes the distance between the pedestrian or vehicle and the host vehicle from the camera's intrinsic and extrinsic parameters and the lane information. For example, when the following vehicle fills 1/2 of the left rear-view mirror, the distance to it is about 5 meters, differing from the roughly 9 meters indicated by the right rear-view mirror; in everyday driving such a distance is relatively safe. As another example, when the driver can just see the feet of a pedestrian ahead (pedestrian height 1.75 meters), the distance between the vehicle and the pedestrian is about 4.4 meters.
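One common way the step S811 computation is done from camera intrinsics, sketched here under a simple pinhole-camera assumption (the patent does not specify the exact formula): an object of known real height H meters that appears h pixels tall to a camera with focal length f pixels is roughly Z = f · H / h meters away. The focal length below and the pixel height are illustrative values:

```python
def distance_m(focal_px: float, real_height_m: float, pixel_height: float) -> float:
    """Pinhole-model range estimate: Z = f * H / h."""
    return focal_px * real_height_m / pixel_height
```

For example, a 1.75 m pedestrian imaged 350 px tall by an assumed camera with f = 880 px gives 880 × 1.75 / 350 = 4.4 m, matching the reference value in the judgment rules above.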
Step S812 reminds the driver to watch for the vehicle or pedestrian ahead, according to the result of step S811.
Fig. 9 is a schematic diagram of another embodiment of the HUD-based intelligent vehicle-mounted safe driving assistance method of the present invention.
The HUD-based intelligent vehicle-mounted safe driving assistance system in this embodiment accepts one or more of the following results: 901 gesture recognition results, 902 identity recognition results, 903 fatigue detection results.
Step S901: is there a vehicle or pedestrian in the driving direction of the vehicle?
Step S904: the vehicle detection result enters step S901 (is there a vehicle or pedestrian in the driving direction?); if so, step S903 issues a voice reminder for driving.
Step S905: the pedestrian detection result enters step S901 (is there a vehicle or pedestrian in the driving direction?); if so, step S903 issues a voice reminder for driving.
Step S906: the lane departure result enters step S901 (is there a vehicle or pedestrian in the driving direction?); if so, step S903 issues a voice reminder for driving; otherwise step S902 judges whether the lane change was unintentional, and if so, step S903 issues a voice reminder for driving.
In road surface marking recognition, under good road conditions such as fine weather, the road markings appear clearly in the captured image, the candidates judged to be road markings meet the criteria at multiple positions, and they are recognized as road markings; consequently, among the positions judged to be marking candidates, the number of candidates that are not recognized as road markings tends to be small. Under severe road conditions, on the other hand, the markings shown in the captured image are unclear: although they are judged to be marking candidates, the number of candidates that fail the criteria and are not recognized as road markings increases. Therefore, by considering the number of marking candidates that are not recognized as road markings, it is possible to judge correctly whether the road surface condition is good or severe, and as a result traffic information can be recorded correctly.
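The judgment just described reduces to classifying the road-surface condition from the fraction of marking candidates that failed recognition. A minimal sketch; the 0.5 cutoff and the handling of the no-candidates case are illustrative assumptions:

```python
def road_condition(candidates_total: int, recognized: int) -> str:
    """Classify road surface condition from marking-candidate statistics:
    many unrecognized candidates -> unclear markings -> 'severe'."""
    if candidates_total == 0:
        return "severe"  # assumed: nothing recognizable at all
    unrecognized_ratio = (candidates_total - recognized) / candidates_total
    return "severe" if unrecognized_ratio > 0.5 else "good"
```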
Those of ordinary skill in the art will understand that the above are only specific embodiments of the present invention and do not limit the present invention; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.
Claims (10)
1. An intelligent vehicle-mounted safe driving assistance method, characterized in that it includes:
acquiring the status information of the driver driving the vehicle,
acquiring the status information of the vehicle surroundings,
and transmitting the above status information to a local server for processing.
2. The intelligent vehicle-mounted safe driving assistance method according to claim 1, characterized in that the status information of the driver driving the vehicle is acquired with a first camera and the status information of the vehicle surroundings is acquired with a second camera, the first camera and the second camera including an infrared camera and a network camera.
3. The intelligent vehicle-mounted safe driving assistance method according to claim 1, characterized in that the status information of the driver driving the vehicle includes: acquiring the driver's gesture recognition instructions, recognizing the driver's identity from facial information, and monitoring whether the driver is driving while fatigued.
4. The intelligent vehicle-mounted safe driving assistance method according to claim 1, characterized in that the status information of the vehicle surroundings includes: acquiring video images of the vehicle surroundings; detecting whether the current vehicle deviates from its lane and, if so, issuing a lane departure warning; detecting whether there is a pedestrian ahead of the vehicle and, if there is one within the collision distance, issuing a collision warning; and detecting whether the distance to the vehicle ahead is within the collision distance and, if so, issuing a collision warning.
5. The intelligent vehicle-mounted safe driving assistance method according to claim 1, characterized in that the local server is arranged on a HUD; the information processed by the local server is presented directly in front of the driver's eyes by the HUD; the HUD is connected to the cloud, so that the information on the local server can be uploaded to cloud storage and downloaded to the local server for processing when in use.
6. An intelligent vehicle-mounted safe driving assistance system, characterized in that it includes:
a first camera, used to acquire the status information of the driver driving the vehicle,
a second camera, used to acquire the status information of the vehicle surroundings,
a driver status monitoring system, which monitors the driver's status according to the information acquired by the first camera;
a driving environment monitoring system, which monitors the driving environment according to the vehicle surroundings information acquired by the second camera;
and a HUD, which processes the results of the above driver status monitoring and driving environment monitoring and issues the corresponding reminders.
7. The intelligent vehicle-mounted safe driving assistance system according to claim 6, characterized in that the driver status monitoring system is configured to:
detect the driver's gesture instructions and complete the corresponding operations;
recognize the driver's identity information and make the corresponding personal settings according to that identity information;
and detect whether the driver is driving while fatigued and send a reminder message to the HUD.
8. The intelligent vehicle-mounted safe driving assistance system according to claim 6, characterized in that the driving environment monitoring system is configured to acquire video images of the vehicle surroundings,
detect whether the current vehicle deviates from its lane and, if so, issue a lane departure warning;
detect whether there is a pedestrian ahead of the vehicle and, if there is one within the collision distance, issue a collision warning;
and detect whether the distance to the vehicle ahead is within the collision distance and, if so, issue a collision warning.
9. The intelligent vehicle-mounted safe driving assistance system according to claim 6, characterized in that the HUD, upon receiving the driver status sent by the driver status monitoring system and the driving environment sent by the driving environment monitoring system, provides information prompts, the information prompts including: the operation instruction corresponding to the gesture recognition result, the operation instruction corresponding to the identity recognition result, the operation instruction corresponding to the fatigue driving result, and the collision-prevention instruction corresponding to the vehicle or pedestrian detection result.
10. The intelligent vehicle-mounted safe driving assistance system according to claim 6, characterized in that the driver status monitoring system performs valid gesture detection, face detection and human eye detection, and the driving environment monitoring system performs lane line detection, HOG feature detection and image multi-scale change detection.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610013625.0A CN105654753A (en) | 2016-01-08 | 2016-01-08 | Intelligent vehicle-mounted safe driving assistance method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105654753A true CN105654753A (en) | 2016-06-08 |
Family
ID=56486488
Cited By (82)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106096053A (en) * | 2016-06-29 | 2016-11-09 | 北京奇虎科技有限公司 | Driving information data query method, device and assistant driving control method, device |
CN106114502A (en) * | 2016-06-29 | 2016-11-16 | 烟台腾联信息科技有限公司 | A kind of intelligent automobile aid system |
CN106157695A (en) * | 2016-08-03 | 2016-11-23 | 奇瑞汽车股份有限公司 | The based reminding method of dangerous driving behavior and device |
CN106226905A (en) * | 2016-08-23 | 2016-12-14 | 北京乐驾科技有限公司 | A kind of head-up display device |
CN106671984A (en) * | 2016-11-04 | 2017-05-17 | 大连文森特软件科技有限公司 | Driving assistance system based on AR augmented reality |
CN106791662A (en) * | 2016-12-27 | 2017-05-31 | 重庆峰创科技有限公司 | A kind of multifunctional vehicle mounted HUD display systems |
CN106781280A (en) * | 2016-11-24 | 2017-05-31 | 上海海事大学 | A kind of vehicle safety travel real-time monitoring device |
CN106915277A (en) * | 2016-08-19 | 2017-07-04 | 蔚来汽车有限公司 | Power assembly of electric automobile control system |
CN107195199A (en) * | 2017-07-11 | 2017-09-22 | 珠海利安达智能科技有限公司 | Road safety early warning system and method |
CN107220629A (en) * | 2017-06-07 | 2017-09-29 | 上海储翔信息科技有限公司 | High-recognition-rate human body detection method for intelligent vehicles |
CN107358680A (en) * | 2017-08-29 | 2017-11-17 | 无锡北斗星通信息科技有限公司 | Personnel feature deep processing method |
CN107578489A (en) * | 2016-07-05 | 2018-01-12 | 上海小享网络科技有限公司 | Drive recorder for motor vehicles |
CN107577995A (en) * | 2017-08-21 | 2018-01-12 | 西安万像电子科技有限公司 | Image data processing method and device |
CN107657237A (en) * | 2017-09-28 | 2018-02-02 | 东南大学 | Automobile collision detection method and system based on deep learning |
WO2018023772A1 (en) * | 2016-08-05 | 2018-02-08 | 王志强 | Iris-based extreme speed limitation method, and speed limitation system |
WO2018023773A1 (en) * | 2016-08-05 | 2018-02-08 | 王志强 | Data acquisition method for iris-based speed limitation technology, and speed limitation system |
WO2018027407A1 (en) * | 2016-08-06 | 2018-02-15 | 吕秋萍 | Data collection method for iris matched speed limiting technology, and speed limiting system |
CN107776579A (en) * | 2017-09-14 | 2018-03-09 | 中国第一汽车股份有限公司 | Somatosensory driver status alarm device |
CN107813817A (en) * | 2016-08-25 | 2018-03-20 | 大连楼兰科技股份有限公司 | Unmanned driving system, unmanned driving method, and vehicle |
CN107818658A (en) * | 2017-10-24 | 2018-03-20 | 罗燕 | Vehicle driving risk indication method and device |
WO2018058274A1 (en) * | 2016-09-27 | 2018-04-05 | 深圳智乐信息科技有限公司 | Mobile terminal-based driver-assistance method and system |
WO2018058275A1 (en) * | 2016-09-27 | 2018-04-05 | 深圳智乐信息科技有限公司 | Smart driving method and system employing mobile terminal |
CN108010383A (en) * | 2017-09-29 | 2018-05-08 | 北京车和家信息技术有限公司 | Blind zone detection method, device, terminal and vehicle based on driving vehicle |
CN108182818A (en) * | 2018-01-15 | 2018-06-19 | 陈世辉 | Intelligent traffic early warning system |
CN108229345A (en) * | 2017-12-15 | 2018-06-29 | 吉利汽车研究院(宁波)有限公司 | Driver detection system |
CN108256487A (en) * | 2018-01-19 | 2018-07-06 | 北京工业大学 | Driving state detection device and method based on reverse binocular vision |
CN108256307A (en) * | 2018-01-12 | 2018-07-06 | 重庆邮电大学 | Hybrid enhanced intelligent cognition method for an intelligent business-travel motor home |
CN108268034A (en) * | 2016-01-04 | 2018-07-10 | 通用汽车环球科技运作有限责任公司 | Expert mode for a vehicle |
CN108334871A (en) * | 2018-03-26 | 2018-07-27 | 深圳市布谷鸟科技有限公司 | Interaction method and system for a head-up display device based on an intelligent cockpit platform |
CN108388919A (en) * | 2018-02-28 | 2018-08-10 | 大唐高鸿信息通信研究院(义乌)有限公司 | Vehicle-mounted short-distance communication network safety feature identification and early warning method |
CN108407813A (en) * | 2018-01-25 | 2018-08-17 | 惠州市德赛西威汽车电子股份有限公司 | Big-data-based anti-fatigue safe driving method for vehicles |
CN108583550A (en) * | 2018-03-02 | 2018-09-28 | 浙江鼎奕科技发展有限公司 | Vehicle automatic braking method and device |
CN108877120A (en) * | 2018-08-31 | 2018-11-23 | 惠州市名商实业有限公司 | Vehicle-mounted safe driving terminal and safe driving system |
CN108860165A (en) * | 2018-05-11 | 2018-11-23 | 深圳市图灵奇点智能科技有限公司 | Vehicle driving assisting method and system |
CN108928294A (en) * | 2018-06-04 | 2018-12-04 | Oppo(重庆)智能科技有限公司 | Driving danger reminding method and device, terminal, and computer-readable storage medium |
CN109002757A (en) * | 2018-06-04 | 2018-12-14 | 上海商汤智能科技有限公司 | Driving management method and system, vehicle-mounted intelligent system, electronic device, and medium |
CN109229016A (en) * | 2018-09-10 | 2019-01-18 | 深圳融易保数据运营有限公司 | Vehicle-mounted voice reminding method and system |
WO2019029195A1 (en) * | 2017-08-10 | 2019-02-14 | 北京市商汤科技开发有限公司 | Driving state monitoring method and device, driver monitoring system, and vehicle |
CN109435844A (en) * | 2018-11-16 | 2019-03-08 | 深圳前海车米云图科技有限公司 | Driving assistance and driver behavior detection method |
CN109506949A (en) * | 2018-11-12 | 2019-03-22 | 百度在线网络技术(北京)有限公司 | Object recognition method, device, equipment and storage medium for unmanned vehicles |
CN109615856A (en) * | 2018-12-20 | 2019-04-12 | 北京无线电计量测试研究所 | Intelligent transportation server, intelligent traffic light device, and vehicle-mounted terminal |
CN109683613A (en) * | 2018-12-24 | 2019-04-26 | 驭势(上海)汽车科技有限公司 | Method and device for determining auxiliary control information of a vehicle |
CN110021185A (en) * | 2019-04-04 | 2019-07-16 | 邵沈齐 | Intelligent traffic management system |
CN110053554A (en) * | 2019-03-15 | 2019-07-26 | 深圳市元征科技股份有限公司 | Auxiliary driving method, auxiliary driving device and vehicle-mounted unmanned aerial vehicle |
CN110178104A (en) * | 2016-11-07 | 2019-08-27 | 新自动公司 | System and method for determining driver distraction |
CN110198425A (en) * | 2019-05-22 | 2019-09-03 | 未来(北京)黑科技有限公司 | Vehicular video recording method, device, storage medium and electronic device |
CN110244778A (en) * | 2019-06-20 | 2019-09-17 | 京东方科技集团股份有限公司 | Eye-tracking-based head-up display following control system and control method |
CN110281950A (en) * | 2019-07-08 | 2019-09-27 | 睿镞科技(北京)有限责任公司 | Three-dimensional acoustic image sensor-based vehicle control and visual environment experience |
CN110427850A (en) * | 2019-07-24 | 2019-11-08 | 中国科学院自动化研究所 | Method, system and device for predicting driver lane-change intention on expressways |
CN110570689A (en) * | 2019-09-05 | 2019-12-13 | 成都亿盟恒信科技有限公司 | Vehicle monitoring system based on driving records and anti-collision early warning |
CN110570623A (en) * | 2018-06-05 | 2019-12-13 | 宁波欧依安盾安全科技有限公司 | Unsafe behavior prompting system |
CN110696614A (en) * | 2018-07-10 | 2020-01-17 | 福特全球技术公司 | System and method for controlling vehicle functions via driver HUD and passenger HUD |
CN110717436A (en) * | 2019-09-30 | 2020-01-21 | 上海商汤临港智能科技有限公司 | Data analysis method and device, electronic equipment and computer storage medium |
CN110737688A (en) * | 2019-09-30 | 2020-01-31 | 上海商汤临港智能科技有限公司 | Driving data analysis method and device, electronic equipment and computer storage medium |
CN110780934A (en) * | 2019-10-23 | 2020-02-11 | 深圳市商汤科技有限公司 | Deployment method and device of vehicle-mounted image processing system |
CN110889306A (en) * | 2018-09-07 | 2020-03-17 | 广州汽车集团股份有限公司 | Vehicle-mounted gesture recognition method and system based on camera |
CN111216662A (en) * | 2018-11-26 | 2020-06-02 | 重庆杜米亚柯科技有限公司 | Multifunctional vehicle-mounted system and control method thereof |
CN111216637A (en) * | 2020-01-22 | 2020-06-02 | 同济大学 | Vehicle-mounted head-up display system oriented to safety auxiliary function |
CN111476975A (en) * | 2020-04-14 | 2020-07-31 | 大唐信通(浙江)科技有限公司 | Multi-information fusion display system of intelligent rearview mirror of vehicle |
CN111640330A (en) * | 2020-05-29 | 2020-09-08 | 深圳市元征科技股份有限公司 | Anti-collision method based on edge computing and related device |
CN111698459A (en) * | 2019-04-26 | 2020-09-22 | 泰州阿法光电科技有限公司 | Real-time analysis method for object parameters |
CN112041907A (en) * | 2018-04-25 | 2020-12-04 | 株式会社日立物流 | Management assistance system |
CN112216067A (en) * | 2020-09-07 | 2021-01-12 | 邢台林樾科技有限公司 | Image processing method based on vehicle-mounted wide-angle camera |
CN112277957A (en) * | 2020-10-27 | 2021-01-29 | 广州汽车集团股份有限公司 | Early warning method and system for driver distraction correction and storage medium |
CN112306221A (en) * | 2019-08-02 | 2021-02-02 | 上海擎感智能科技有限公司 | Intelligent vehicle-mounted machine interaction method and device, storage medium and terminal |
US10915769B2 (en) | 2018-06-04 | 2021-02-09 | Shanghai Sensetime Intelligent Technology Co., Ltd | Driving management methods and systems, vehicle-mounted intelligent systems, electronic devices, and medium |
CN112433619A (en) * | 2021-01-27 | 2021-03-02 | 国汽智控(北京)科技有限公司 | Human-computer interaction method and system for automobile, electronic equipment and computer storage medium |
CN112543959A (en) * | 2018-07-31 | 2021-03-23 | 日本电信电话株式会社 | Driving support device, method, and storage medium storing program |
US10970571B2 (en) | 2018-06-04 | 2021-04-06 | Shanghai Sensetime Intelligent Technology Co., Ltd. | Vehicle control method and system, vehicle-mounted intelligent system, electronic device, and medium |
CN112837553A (en) * | 2021-01-19 | 2021-05-25 | 英博超算(南京)科技有限公司 | Road right attribution warning system for automatic driving vehicle |
CN113581194A (en) * | 2021-08-06 | 2021-11-02 | 武汉极目智能技术有限公司 | Automatic early warning interaction system and method based on vehicle-mounted vision detection |
CN113647087A (en) * | 2019-03-27 | 2021-11-12 | 索尼集团公司 | State detection device, state detection system and state detection method |
CN113939858A (en) * | 2019-06-19 | 2022-01-14 | 三菱电机株式会社 | Automatic driving assistance device, automatic driving assistance system, and automatic driving assistance method |
CN114004982A (en) * | 2021-10-27 | 2022-02-01 | 中国科学院声学研究所 | Acoustic Haar feature extraction method and system for underwater target recognition |
CN114049794A (en) * | 2021-12-02 | 2022-02-15 | 华中科技大学 | Vehicle-mounted safety auxiliary driving system based on smart phone |
CN114228491A (en) * | 2021-12-29 | 2022-03-25 | 重庆长安汽车股份有限公司 | Head-up display system and method with night vision enhanced virtual reality |
CN114267206A (en) * | 2021-12-28 | 2022-04-01 | 上汽大众汽车有限公司 | Security alarm method, security alarm device, security alarm system, and computer-readable storage medium |
CN114274954A (en) * | 2021-12-16 | 2022-04-05 | 上汽大众汽车有限公司 | Vehicle active pipe connecting system and method combining vehicle internal and external sensing |
US11305766B2 (en) * | 2016-09-26 | 2022-04-19 | Iprd Group, Llc | Combining driver alertness with advanced driver assistance systems (ADAS) |
CN114390254A (en) * | 2022-01-14 | 2022-04-22 | 中国第一汽车股份有限公司 | Rear row cockpit monitoring method and device and vehicle |
CN114760400A (en) * | 2022-04-12 | 2022-07-15 | 阿维塔科技(重庆)有限公司 | Camera device, vehicle and in-vehicle image acquisition method |
CN116901975A (en) * | 2023-09-12 | 2023-10-20 | 深圳市九洲卓能电气有限公司 | Vehicle-mounted AI security monitoring system and method thereof |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101470951A (en) * | 2008-01-08 | 2009-07-01 | 徐建荣 | Vehicle security drive monitoring system |
EP2591969A1 (en) * | 2011-11-08 | 2013-05-15 | Audi Ag | Method for operating a vehicle system of a motor vehicle and motor vehicle |
CN103310202A (en) * | 2013-06-27 | 2013-09-18 | 西安电子科技大学 | System and method for guaranteeing driving safety |
CN103578293A (en) * | 2012-06-22 | 2014-02-12 | 通用汽车环球科技运作有限责任公司 | Alert system and method for vehicle |
CN103871243A (en) * | 2014-04-16 | 2014-06-18 | 武汉欧普威科技有限公司 | Wireless vehicle management system and method based on active safety platform |
2016
- 2016-01-08: CN application CN201610013625.0A, publication CN105654753A (en), status: active, Pending
Cited By (112)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108268034A (en) * | 2016-01-04 | 2018-07-10 | 通用汽车环球科技运作有限责任公司 | Expert mode for a vehicle |
CN106114502A (en) * | 2016-06-29 | 2016-11-16 | 烟台腾联信息科技有限公司 | Intelligent automobile assistance system |
CN106096053A (en) * | 2016-06-29 | 2016-11-09 | 北京奇虎科技有限公司 | Driving information data query method and device, and driving assistance control method and device |
CN107578489A (en) * | 2016-07-05 | 2018-01-12 | 上海小享网络科技有限公司 | Drive recorder for motor vehicles |
CN106157695B (en) * | 2016-08-03 | 2019-12-10 | 奇瑞汽车股份有限公司 | Dangerous driving behavior reminding method and device |
CN106157695A (en) * | 2016-08-03 | 2016-11-23 | 奇瑞汽车股份有限公司 | Dangerous driving behavior reminding method and device |
WO2018023773A1 (en) * | 2016-08-05 | 2018-02-08 | 王志强 | Data acquisition method for iris-based speed limitation technology, and speed limitation system |
WO2018023772A1 (en) * | 2016-08-05 | 2018-02-08 | 王志强 | Iris-based extreme speed limitation method, and speed limitation system |
WO2018027407A1 (en) * | 2016-08-06 | 2018-02-15 | 吕秋萍 | Data collection method for iris matched speed limiting technology, and speed limiting system |
US10486545B2 (en) | 2016-08-19 | 2019-11-26 | Nio Nextev Limited | System for controlling powertrain of an electric vehicle |
CN106915277B (en) * | 2016-08-19 | 2020-02-07 | 蔚来汽车有限公司 | Electric vehicle powertrain control system |
CN106915277A (en) * | 2016-08-19 | 2017-07-04 | 蔚来汽车有限公司 | Electric vehicle powertrain control system |
US11079594B2 (en) | 2016-08-23 | 2021-08-03 | Beijing Ileja Tech. Co. Ltd. | Head-up display device |
CN106226905A (en) * | 2016-08-23 | 2016-12-14 | 北京乐驾科技有限公司 | Head-up display device |
CN107813817A (en) * | 2016-08-25 | 2018-03-20 | 大连楼兰科技股份有限公司 | Unmanned driving system, unmanned driving method, and vehicle |
US11305766B2 (en) * | 2016-09-26 | 2022-04-19 | Iprd Group, Llc | Combining driver alertness with advanced driver assistance systems (ADAS) |
WO2018058274A1 (en) * | 2016-09-27 | 2018-04-05 | 深圳智乐信息科技有限公司 | Mobile terminal-based driver-assistance method and system |
WO2018058275A1 (en) * | 2016-09-27 | 2018-04-05 | 深圳智乐信息科技有限公司 | Smart driving method and system employing mobile terminal |
CN106671984A (en) * | 2016-11-04 | 2017-05-17 | 大连文森特软件科技有限公司 | Driving assistance system based on augmented reality (AR) |
CN110178104A (en) * | 2016-11-07 | 2019-08-27 | 新自动公司 | System and method for determining driver distraction |
CN106781280A (en) * | 2016-11-24 | 2017-05-31 | 上海海事大学 | Real-time vehicle driving safety monitoring device |
CN106791662A (en) * | 2016-12-27 | 2017-05-31 | 重庆峰创科技有限公司 | Multifunctional vehicle-mounted HUD display system |
CN107220629B (en) * | 2017-06-07 | 2018-07-24 | 上海储翔信息科技有限公司 | High-recognition-rate human body detection method for intelligent vehicles |
CN107220629A (en) * | 2017-06-07 | 2017-09-29 | 上海储翔信息科技有限公司 | High-recognition-rate human body detection method for intelligent vehicles |
CN107195199A (en) * | 2017-07-11 | 2017-09-22 | 珠海利安达智能科技有限公司 | Road safety early warning system and method |
CN110399767A (en) * | 2017-08-10 | 2019-11-01 | 北京市商汤科技开发有限公司 | Occupant dangerous behavior recognition method and device, electronic device, and storage medium |
WO2019029195A1 (en) * | 2017-08-10 | 2019-02-14 | 北京市商汤科技开发有限公司 | Driving state monitoring method and device, driver monitoring system, and vehicle |
CN109937152A (en) * | 2017-08-10 | 2019-06-25 | 北京市商汤科技开发有限公司 | Driving state monitoring method and apparatus, driver monitoring system, and vehicle |
US10853675B2 (en) | 2017-08-10 | 2020-12-01 | Beijing Sensetime Technology Development Co., Ltd. | Driving state monitoring methods and apparatuses, driver monitoring systems, and vehicles |
CN107577995A (en) * | 2017-08-21 | 2018-01-12 | 西安万像电子科技有限公司 | Image data processing method and device |
CN107358680B (en) * | 2017-08-29 | 2019-07-23 | 上海旗沃信息技术有限公司 | Personnel feature deep processing method |
CN107358680A (en) * | 2017-08-29 | 2017-11-17 | 无锡北斗星通信息科技有限公司 | Personnel feature deep processing method |
CN107776579A (en) * | 2017-09-14 | 2018-03-09 | 中国第一汽车股份有限公司 | Somatosensory driver status alarm device |
CN107657237A (en) * | 2017-09-28 | 2018-02-02 | 东南大学 | Automobile collision detection method and system based on deep learning |
CN107657237B (en) * | 2017-09-28 | 2020-03-31 | 东南大学 | Automobile collision detection method and system based on deep learning |
CN108010383A (en) * | 2017-09-29 | 2018-05-08 | 北京车和家信息技术有限公司 | Blind zone detection method, device, terminal and vehicle based on driving vehicle |
CN107818658A (en) * | 2017-10-24 | 2018-03-20 | 罗燕 | Vehicle driving risk indication method and device |
CN108229345A (en) * | 2017-12-15 | 2018-06-29 | 吉利汽车研究院(宁波)有限公司 | Driver detection system |
CN108256307A (en) * | 2018-01-12 | 2018-07-06 | 重庆邮电大学 | Hybrid enhanced intelligent cognition method for an intelligent business-travel motor home |
CN108256307B (en) * | 2018-01-12 | 2021-04-02 | 重庆邮电大学 | Hybrid enhanced intelligent cognition method for an intelligent business-travel motor home |
CN108182818A (en) * | 2018-01-15 | 2018-06-19 | 陈世辉 | Intelligent traffic early warning system |
CN108256487A (en) * | 2018-01-19 | 2018-07-06 | 北京工业大学 | Driving state detection device and method based on reverse binocular vision |
CN108407813A (en) * | 2018-01-25 | 2018-08-17 | 惠州市德赛西威汽车电子股份有限公司 | Big-data-based anti-fatigue safe driving method for vehicles |
CN108388919B (en) * | 2018-02-28 | 2021-08-10 | 大唐高鸿信息通信(义乌)有限公司 | Vehicle-mounted short-distance communication network safety feature identification and early warning method |
CN108388919A (en) * | 2018-02-28 | 2018-08-10 | 大唐高鸿信息通信研究院(义乌)有限公司 | Vehicle-mounted short-distance communication network safety feature identification and early warning method |
CN108583550A (en) * | 2018-03-02 | 2018-09-28 | 浙江鼎奕科技发展有限公司 | Vehicle automatic braking method and device |
CN108334871A (en) * | 2018-03-26 | 2018-07-27 | 深圳市布谷鸟科技有限公司 | Interaction method and system for a head-up display device based on an intelligent cockpit platform |
CN112041907A (en) * | 2018-04-25 | 2020-12-04 | 株式会社日立物流 | Management assistance system |
CN112041907B (en) * | 2018-04-25 | 2021-10-15 | 株式会社日立物流 | Management assistance system |
CN108860165A (en) * | 2018-05-11 | 2018-11-23 | 深圳市图灵奇点智能科技有限公司 | Vehicle driving assisting method and system |
CN108860165B (en) * | 2018-05-11 | 2021-06-11 | 深圳市图灵奇点智能科技有限公司 | Vehicle driving assisting method and system |
CN109002757A (en) * | 2018-06-04 | 2018-12-14 | 上海商汤智能科技有限公司 | Driving management method and system, vehicle-mounted intelligent system, electronic device, and medium |
CN108928294A (en) * | 2018-06-04 | 2018-12-04 | Oppo(重庆)智能科技有限公司 | Driving danger reminding method and device, terminal, and computer-readable storage medium |
US10970571B2 (en) | 2018-06-04 | 2021-04-06 | Shanghai Sensetime Intelligent Technology Co., Ltd. | Vehicle control method and system, vehicle-mounted intelligent system, electronic device, and medium |
US10915769B2 (en) | 2018-06-04 | 2021-02-09 | Shanghai Sensetime Intelligent Technology Co., Ltd | Driving management methods and systems, vehicle-mounted intelligent systems, electronic devices, and medium |
CN108928294B (en) * | 2018-06-04 | 2021-02-12 | Oppo(重庆)智能科技有限公司 | Driving danger reminding method and device, terminal and computer readable storage medium |
CN110570623A (en) * | 2018-06-05 | 2019-12-13 | 宁波欧依安盾安全科技有限公司 | Unsafe behavior prompting system |
CN110696614A (en) * | 2018-07-10 | 2020-01-17 | 福特全球技术公司 | System and method for controlling vehicle functions via driver HUD and passenger HUD |
CN110696614B (en) * | 2018-07-10 | 2024-04-16 | 福特全球技术公司 | System and method for controlling vehicle functions via driver HUD and passenger HUD |
CN112543959A (en) * | 2018-07-31 | 2021-03-23 | 日本电信电话株式会社 | Driving support device, method, and storage medium storing program |
CN108877120A (en) * | 2018-08-31 | 2018-11-23 | 惠州市名商实业有限公司 | Vehicle-mounted safe driving terminal and safe driving system |
CN110889306A (en) * | 2018-09-07 | 2020-03-17 | 广州汽车集团股份有限公司 | Vehicle-mounted gesture recognition method and system based on camera |
CN109229016A (en) * | 2018-09-10 | 2019-01-18 | 深圳融易保数据运营有限公司 | Vehicle-mounted voice reminding method and system |
CN109506949A (en) * | 2018-11-12 | 2019-03-22 | 百度在线网络技术(北京)有限公司 | Object recognition method, device, equipment and storage medium for unmanned vehicles |
CN109506949B (en) * | 2018-11-12 | 2020-08-04 | 百度在线网络技术(北京)有限公司 | Object recognition method, device, equipment and storage medium for unmanned vehicle |
CN109435844A (en) * | 2018-11-16 | 2019-03-08 | 深圳前海车米云图科技有限公司 | Driving assistance and driver behavior detection method |
CN111216662A (en) * | 2018-11-26 | 2020-06-02 | 重庆杜米亚柯科技有限公司 | Multifunctional vehicle-mounted system and control method thereof |
CN109615856A (en) * | 2018-12-20 | 2019-04-12 | 北京无线电计量测试研究所 | Intelligent transportation server, intelligent traffic light device, and vehicle-mounted terminal |
CN109683613B (en) * | 2018-12-24 | 2022-04-29 | 驭势(上海)汽车科技有限公司 | Method and device for determining auxiliary control information of vehicle |
CN109683613A (en) * | 2018-12-24 | 2019-04-26 | 驭势(上海)汽车科技有限公司 | Method and device for determining auxiliary control information of a vehicle |
CN110053554A (en) * | 2019-03-15 | 2019-07-26 | 深圳市元征科技股份有限公司 | Auxiliary driving method, auxiliary driving device and vehicle-mounted unmanned aerial vehicle |
CN113647087A (en) * | 2019-03-27 | 2021-11-12 | 索尼集团公司 | State detection device, state detection system and state detection method |
CN113647087B (en) * | 2019-03-27 | 2024-04-02 | 索尼集团公司 | State detection device, state detection system and state detection method |
CN110021185A (en) * | 2019-04-04 | 2019-07-16 | 邵沈齐 | Intelligent traffic management system |
CN111698459B (en) * | 2019-04-26 | 2021-07-27 | 广东邦盛北斗科技股份公司 | Real-time analysis method for object parameters |
CN111698459A (en) * | 2019-04-26 | 2020-09-22 | 泰州阿法光电科技有限公司 | Real-time analysis method for object parameters |
CN110198425A (en) * | 2019-05-22 | 2019-09-03 | 未来(北京)黑科技有限公司 | Vehicular video recording method, device, storage medium and electronic device |
CN113939858A (en) * | 2019-06-19 | 2022-01-14 | 三菱电机株式会社 | Automatic driving assistance device, automatic driving assistance system, and automatic driving assistance method |
CN110244778A (en) * | 2019-06-20 | 2019-09-17 | 京东方科技集团股份有限公司 | Eye-tracking-based head-up display following control system and control method |
CN110281950A (en) * | 2019-07-08 | 2019-09-27 | 睿镞科技(北京)有限责任公司 | Three-dimensional acoustic image sensor-based vehicle control and visual environment experience |
CN110281950B (en) * | 2019-07-08 | 2020-09-04 | 睿镞科技(北京)有限责任公司 | Three-dimensional acoustic image sensor-based vehicle control and visual environment experience |
US11538251B2 (en) | 2019-07-08 | 2022-12-27 | Rayz Technologies Co. Ltd. | Vehicle control and 3D environment experience with or without visualization based on 3D audio/visual sensors |
CN110427850A (en) * | 2019-07-24 | 2019-11-08 | 中国科学院自动化研究所 | Method, system and device for predicting driver lane-change intention on expressways |
CN112306221A (en) * | 2019-08-02 | 2021-02-02 | 上海擎感智能科技有限公司 | Intelligent vehicle-mounted machine interaction method and device, storage medium and terminal |
CN110570689A (en) * | 2019-09-05 | 2019-12-13 | 成都亿盟恒信科技有限公司 | Vehicle monitoring system based on driving records and anti-collision early warning |
CN110717436A (en) * | 2019-09-30 | 2020-01-21 | 上海商汤临港智能科技有限公司 | Data analysis method and device, electronic equipment and computer storage medium |
CN110737688A (en) * | 2019-09-30 | 2020-01-31 | 上海商汤临港智能科技有限公司 | Driving data analysis method and device, electronic equipment and computer storage medium |
CN110780934B (en) * | 2019-10-23 | 2024-03-12 | 深圳市商汤科技有限公司 | Deployment method and device of vehicle-mounted image processing system |
CN110780934A (en) * | 2019-10-23 | 2020-02-11 | 深圳市商汤科技有限公司 | Deployment method and device of vehicle-mounted image processing system |
WO2021147590A1 (en) * | 2020-01-22 | 2021-07-29 | 同济大学 | Safety assistance function-oriented vehicle-mounted head-up display system |
CN111216637B (en) * | 2020-01-22 | 2022-09-20 | 同济大学 | Vehicle-mounted head-up display system oriented to safety auxiliary function |
CN111216637A (en) * | 2020-01-22 | 2020-06-02 | 同济大学 | Vehicle-mounted head-up display system oriented to safety auxiliary function |
CN111476975A (en) * | 2020-04-14 | 2020-07-31 | 大唐信通(浙江)科技有限公司 | Multi-information fusion display system of intelligent rearview mirror of vehicle |
CN111640330A (en) * | 2020-05-29 | 2020-09-08 | 深圳市元征科技股份有限公司 | Anti-collision method based on edge computing and related device |
CN112216067A (en) * | 2020-09-07 | 2021-01-12 | 邢台林樾科技有限公司 | Image processing method based on vehicle-mounted wide-angle camera |
CN112277957A (en) * | 2020-10-27 | 2021-01-29 | 广州汽车集团股份有限公司 | Early warning method and system for driver distraction correction and storage medium |
CN112277957B (en) * | 2020-10-27 | 2022-06-24 | 广州汽车集团股份有限公司 | Early warning method and system for driver distraction correction and storage medium |
CN112837553A (en) * | 2021-01-19 | 2021-05-25 | 英博超算(南京)科技有限公司 | Road right attribution warning system for automatic driving vehicle |
CN112433619A (en) * | 2021-01-27 | 2021-03-02 | 国汽智控(北京)科技有限公司 | Human-computer interaction method and system for automobile, electronic equipment and computer storage medium |
CN113581194A (en) * | 2021-08-06 | 2021-11-02 | 武汉极目智能技术有限公司 | Automatic early warning interaction system and method based on vehicle-mounted vision detection |
CN114004982A (en) * | 2021-10-27 | 2022-02-01 | 中国科学院声学研究所 | Acoustic Haar feature extraction method and system for underwater target recognition |
CN114049794B (en) * | 2021-12-02 | 2023-12-29 | 华中科技大学 | Vehicle-mounted safety auxiliary driving system based on smart phone |
CN114049794A (en) * | 2021-12-02 | 2022-02-15 | 华中科技大学 | Vehicle-mounted safety auxiliary driving system based on smart phone |
CN114274954A (en) * | 2021-12-16 | 2022-04-05 | 上汽大众汽车有限公司 | Vehicle active pipe connecting system and method combining vehicle internal and external sensing |
CN114267206A (en) * | 2021-12-28 | 2022-04-01 | 上汽大众汽车有限公司 | Security alarm method, security alarm device, security alarm system, and computer-readable storage medium |
CN114228491A (en) * | 2021-12-29 | 2022-03-25 | 重庆长安汽车股份有限公司 | Head-up display system and method with night vision enhanced virtual reality |
CN114228491B (en) * | 2021-12-29 | 2024-05-14 | 重庆长安汽车股份有限公司 | System and method for enhancing virtual reality head-up display with night vision |
CN114390254A (en) * | 2022-01-14 | 2022-04-22 | 中国第一汽车股份有限公司 | Rear row cockpit monitoring method and device and vehicle |
CN114390254B (en) * | 2022-01-14 | 2024-04-19 | 中国第一汽车股份有限公司 | Rear-row cockpit monitoring method and device and vehicle |
CN114760400A (en) * | 2022-04-12 | 2022-07-15 | 阿维塔科技(重庆)有限公司 | Camera device, vehicle and in-vehicle image acquisition method |
CN116901975A (en) * | 2023-09-12 | 2023-10-20 | 深圳市九洲卓能电气有限公司 | Vehicle-mounted AI security monitoring system and method thereof |
CN116901975B (en) * | 2023-09-12 | 2023-11-21 | 深圳市九洲卓能电气有限公司 | Vehicle-mounted AI security monitoring system and method thereof |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105654753A (en) | Intelligent vehicle-mounted safe driving assistance method and system | |
US20210357670A1 (en) | Driver Attention Detection Method | |
US10867195B2 (en) | Systems and methods for monitoring driver state | |
AU2015276536B2 (en) | Device, method, and computer program for detecting momentary sleep | |
US9881221B2 (en) | Method and system for estimating gaze direction of vehicle drivers | |
CN102696041B (en) | Cost-effective and robust system and method for eye tracking and driver drowsiness confirmation |
CN104013414A (en) | Driver fatigue detecting system based on smart mobile phone | |
CN111274881A (en) | Driving safety monitoring method and device, computer equipment and storage medium | |
CN105527710A (en) | Intelligent head-up display system | |
CN102752458A (en) | Driver fatigue detection mobile phone and unit | |
US11783600B2 (en) | Adaptive monitoring of a vehicle using a camera | |
CN110741424B (en) | Dangerous information collecting device | |
CN112382115B (en) | Driving risk early warning device and method based on visual perception | |
CN111179552A (en) | Driver state monitoring method and system based on multi-sensor fusion | |
KR102266354B1 (en) | Apparatus and Method for Authenticating Biometric Information for Multi Preference | |
KR101337554B1 (en) | Apparatus for trace of wanted criminal and missing person using image recognition and method thereof | |
CN110781718A (en) | Cab infrared vision system and driver attention analysis method | |
CN115690750A (en) | Driver distraction detection method and device | |
CN112698660A (en) | Driving behavior visual perception device and method based on 9-axis sensor | |
CN117095680A (en) | Vehicle control method, device, equipment and storage medium | |
CN112348718B (en) | Intelligent auxiliary driving guiding method, intelligent auxiliary driving guiding device and computer storage medium | |
CN112506353A (en) | Vehicle interaction system, method, storage medium and vehicle | |
CN112258813A (en) | Vehicle active safety control method and device | |
Vinodhini et al. | A behavioral approach to detect somnolence of CAB drivers using convolutional neural network | |
WO2022004413A1 (en) | Information processing device, information processing method, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information |
Address after: Room 518, 5th Floor, Building A, Loe Center No. 1, Yard 1, Longyu Street, Huilongguan, Changping District, Beijing 102208
Applicant after: BEIJING LEJIA TECHNOLOGY CO., LTD.
Address before: Room 3558, 3rd Floor, Building 3, No. 29 South Road, northeast Haidian District, Beijing 100193
Applicant before: BEIJING LEJIA TECHNOLOGY CO., LTD. |
CB02 | Change of applicant information | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20160608 |
RJ01 | Rejection of invention patent application after publication |