US11748974B2 - Method and apparatus for assisting driving - Google Patents

Method and apparatus for assisting driving

Info

Publication number
US11748974B2
Authority
US
United States
Prior art keywords
moving object
video frames
training
vehicle
motion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US16/756,607
Other languages
English (en)
Other versions
US20200262419A1 (en)
Inventor
Sinan KARABURUN
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bayerische Motoren Werke AG
Original Assignee
Bayerische Motoren Werke AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bayerische Motoren Werke AG filed Critical Bayerische Motoren Werke AG
Assigned to BAYERISCHE MOTOREN WERKE AKTIENGESELLSCHAFT reassignment BAYERISCHE MOTOREN WERKE AKTIENGESELLSCHAFT ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: KARABURUN, Sinan
Publication of US20200262419A1 publication Critical patent/US20200262419A1/en
Application granted granted Critical
Publication of US11748974B2 publication Critical patent/US11748974B2/en
Active legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/08Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
    • B60W30/09Taking automatic action to avoid collision, e.g. braking and steering
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34Route searching; Route guidance
    • G01C21/3407Route searching; Route guidance specially adapted for specific applications
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34Route searching; Route guidance
    • G01C21/3453Special cost functions, i.e. other than distance or default speed limit of road segments
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34Route searching; Route guidance
    • G01C21/36Input/output arrangements for on-board computers
    • G01C21/3691Retrieval, searching and output of information related to real-time traffic, weather, or environmental conditions
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/248Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/0104Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0108Measuring and analyzing of parameters relative to traffic conditions based on the source of data
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2420/00Indexing codes relating to the type of sensors based on the principle of their operation
    • B60W2420/40Photo, light or radio wave sensitive means, e.g. infrared sensors
    • B60W2420/403Image sensing, e.g. optical camera
    • B60W2420/42
    • G05D2201/0213
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle

Definitions

  • the present disclosure relates to a method and apparatus for assisting driving.
  • Vehicles can be equipped with video capture devices, such as cameras, to allow recording of driving scenes.
  • Embodiments of the present disclosure provide a method and apparatus for assisting driving.
  • a method for assisting driving may comprise identifying one or more sets of video frames from captured video regarding the surrounding conditions of a vehicle, wherein the one or more sets of video frames may comprise a moving object; extracting one or more features indicating motion characteristics of the moving object from the one or more sets of video frames; and predicting a motion intention of the moving object in the one or more sets of video frames based on the one or more features.
  • an apparatus for assisting driving may comprise a camera and a processor.
  • the camera may be configured to capture video regarding the surrounding conditions of a vehicle.
  • the processor may be configured to: identify one or more sets of video frames from the video, wherein the one or more sets of video frames may comprise a moving object; extract one or more features indicating motion characteristics of the moving object from the one or more sets of video frames; and predict a motion intention of the moving object in the one or more sets of video frames based on the one or more features.
  • FIG. 1 illustrates a method for assisting driving in accordance with some embodiments of the present disclosure.
  • FIG. 2 illustrates a method for obtaining a pre-trained prediction model in accordance with some embodiments of the present disclosure.
  • FIG. 3 illustrates an apparatus for assisting driving in accordance with some embodiments of the present disclosure.
  • FIG. 4 illustrates a vehicle in accordance with some embodiments of the present disclosure.
  • FIG. 5 illustrates a block diagram of a computing device, which is an example of a hardware device that may be applied to aspects of the present disclosure, in accordance with some embodiments of the present disclosure.
  • vehicles in traffic can recognize safety-related scenes (e.g., trucks overtaking on the highway, cut-ins by other vehicles) by using sensors while something is happening.
  • that is, the vehicles merely identify the safety risk while something is happening; intentions of the moving objects (e.g., overtaking trucks, or vehicles that may try to cut in) are not recognized.
  • Intention recognition of moving objects in traffic can support vehicles before something happens. For example, assume that two trucks (a first truck and, behind it, a second truck) are running on a highway in the lane to the right of the lane in which a vehicle is currently driving. The second truck drives 20 km/h faster than the first truck, so the distance between the two trucks is getting shorter. Thus, the intention of the second truck to overtake the first truck rises. The safety risk for the driver of the vehicle rises as well, as the second truck may overtake the first truck by moving into the lane in which the vehicle is running.
  • FIG. 1 illustrates a method 100 for assisting driving in accordance with some embodiments of the present disclosure.
  • the method 100 may comprise identifying, at step S110, one or more sets of video frames from captured video regarding the surrounding conditions of a vehicle.
  • the one or more sets of video frames may comprise a moving object.
  • the video may be captured by a video capture device of the vehicle, such as a camera of the vehicle.
  • the video may indicate the surrounding conditions of the vehicle.
  • the video may comprise a series of video frames. It is noted that some fragments of the captured video may comprise no moving object, as there is no moving object around the vehicle at that time, and that analyzing these fragments is of no help in assisting driving. Therefore, at this step of the method 100, one or more sets of video frames comprising a moving object are identified from the video as captured. Video frames comprising the moving object may be identified by any object recognition method.
  • the object recognition method may comprise matching the video as captured with templates for moving objects.
  • the templates may be provided in advance.
  • the object recognition method may comprise the following operations: obtaining training images, training an object identification model, and using the trained model to identify objects in the video as captured.
  • the object identification model may comprise any existing machine learning model or a machine learning model developed in the future.
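  • As a concrete illustration of the template-matching variant, the following minimal sketch scans each captured frame against a small library of moving-object templates and keeps only the frames with a sufficiently strong match. It assumes OpenCV; the template file names and the match threshold are hypothetical placeholders, not values from the disclosure.

        # Sketch only: template matching as one possible object recognition method.
        import cv2

        MATCH_THRESHOLD = 0.8  # assumed value; tune as required
        TEMPLATE_FILES = ("truck.png", "car.png", "pedestrian.png")  # hypothetical files
        TEMPLATES = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in TEMPLATE_FILES]

        def frame_has_moving_object(frame_bgr):
            """Return True if any template matches the frame strongly enough."""
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            for tpl in TEMPLATES:
                scores = cv2.matchTemplate(gray, tpl, cv2.TM_CCOEFF_NORMED)
                if scores.max() >= MATCH_THRESHOLD:
                    return True
            return False

        def identify_frame_sets(video_path):
            """Step S110 sketch: keep only frames that appear to contain a moving object."""
            cap = cv2.VideoCapture(video_path)
            selected = []
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                if frame_has_moving_object(frame):
                    selected.append(frame)
            cap.release()
            return selected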
  • the moving object may comprise one or more of: a vehicle, a pedestrian, a non-motor vehicle (e.g., a bicycle, a tricycle, an electric bicycle, a motorized wheelchair, or an animal-drawn vehicle, etc.), and an animal (e.g., a dog, a cat, a cow, or other animals involved in traffic).
  • the method 100 may further comprise extracting, at step S120, one or more features indicating motion characteristics of the moving object from the one or more sets of video frames as identified.
  • the one or more features indicating motion characteristics of the moving object may comprise one or more of: the velocity of the moving object, the moving orientation of the moving object, the distance between the moving object and other moving objects in the traffic, the distance between the moving object and the vehicle, and the acceleration of the moving object.
  • features indicating motion characteristics of the moving object may be extracted by analyzing the one or more sets of video frames alone or in combination with sensor data of the vehicle.
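  • As a minimal sketch of step S120, assume an upstream tracker already provides, for each timestamp, the position of the moving object and of the own vehicle (the (t, x, y) sample layout and road-plane coordinates in metres are assumptions for illustration); velocity, orientation, distance, and acceleration then follow from finite differences:

        import math

        def extract_motion_features(track, ego_track):
            """Step S120 sketch: finite-difference motion features.

            track / ego_track: lists of (t_seconds, x_metres, y_metres) samples,
            an assumed output format of an upstream object tracker.
            """
            (t1, x1, y1), (t2, x2, y2) = track[-2], track[-1]
            dt = t2 - t1
            vx, vy = (x2 - x1) / dt, (y2 - y1) / dt
            features = {
                "speed": math.hypot(vx, vy),    # velocity of the moving object
                "heading": math.atan2(vy, vx),  # moving orientation
            }
            ex, ey = ego_track[-1][1], ego_track[-1][2]
            features["distance_to_ego"] = math.hypot(x2 - ex, y2 - ey)
            if len(track) >= 3:                 # acceleration needs a third sample
                t0, x0, y0 = track[-3]
                prev_speed = math.hypot((x1 - x0) / (t1 - t0), (y1 - y0) / (t1 - t0))
                features["acceleration"] = (features["speed"] - prev_speed) / dt
            return features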
  • the method 100 may further comprise predicting, at step S130, a motion intention of the moving object in the one or more sets of video frames based on the one or more features.
  • the motion intention may comprise one or more of: crossing road, acceleration, deceleration, sudden stop, cut-in, parking, and overtaking.
  • a motion intention of overtaking can be determined when the following conditions are satisfied:
  • the distance between the moving object M2 and the moving object ahead of M2 (e.g., a first truck) at time t2 is smaller than the distance therebetween at time t1.
  • the velocity of the moving object and the distance as utilized may have been extracted from the one or more sets of video frames as identified and may be indicated by the features as extracted in step S120.
  • time t1 and time t2 may be timestamps of the video frames.
  • a motion intention of overtaking can be determined when additional conditions are satisfied:
  • the distance between the moving object M2 and the moving object ahead of M2 at time t2 is larger than a first distance threshold and smaller than a second distance threshold.
  • the acceleration threshold, as well as the first distance threshold and the second distance threshold, can be set as required.
  • time t1 and time t2 may be timestamps of the video frames.
  • a motion intention of sudden stop can be determined when the following conditions are satisfied:
  • the distance between the moving object M2 and the moving object ahead of M2 at time t2 is smaller than the distance therebetween at time t1.
  • time t1 and time t2 may be timestamps of the video frames.
  • a motion intention of sudden stop can be determined when additional conditions are satisfied:
  • the acceleration threshold, as well as the distance threshold, can be set as required.
  • time t1 and time t2 may be timestamps of the video frames.
  • a motion intention of cut-in can be determined when the following conditions are satisfied:
  • the velocity of the moving object, the distance, and the moving orientation as utilized may have been extracted from the one or more sets of video frames as identified and may be indicated by the features as extracted in step S120.
  • time t1 and time t2 may be timestamps of the video frames.
  • a motion intention of cut-in can be determined when additional conditions are satisfied:
  • the acceleration threshold, as well as the distance threshold, can be set as required.
  • time t1 and time t2 may be timestamps of the video frames.
  • the interval Δt between the second time t2 and the first time t1 can be set as required.
  • Δt can be set as 1 second, 1 minute, or other values as required.
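  • Consolidating the conditions above, the sketch below expresses the overtaking check as code. The gap-shrinking and threshold-band conditions are stated in the text; the speed and acceleration comparisons are assumptions implied by the surrounding description (M2 is faster than the vehicle ahead and accelerating). All threshold values are illustrative, since the disclosure only says they can be set as required; sudden stop and cut-in can be checked analogously with a deceleration condition and a moving-orientation condition, respectively.

        from dataclasses import dataclass

        @dataclass
        class Snapshot:
            """Assumed per-timestamp measurements of moving object M2 (from step S120)."""
            speed: float       # m/s
            accel: float       # m/s^2
            gap_ahead: float   # distance to the moving object ahead of M2, metres

        # Illustrative thresholds; the disclosure leaves them to be set as required.
        FIRST_DIST, SECOND_DIST, ACCEL_THR = 5.0, 50.0, 0.5

        def overtaking_intended(s1: Snapshot, s2: Snapshot, lead_speed: float) -> bool:
            """Rule-based overtaking check between timestamps t1 (s1) and t2 (s2)."""
            closing = s2.gap_ahead < s1.gap_ahead              # stated condition
            faster = s2.speed > lead_speed                     # assumed condition
            in_band = FIRST_DIST < s2.gap_ahead < SECOND_DIST  # additional condition
            accelerating = s2.accel > ACCEL_THR                # additional condition
            return closing and faster and in_band and accelerating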
  • predicting, at step S130, a motion intention of the moving object in the one or more sets of video frames based on the one or more features may comprise predicting the motion intention of the moving object in the one or more sets of video frames based on the one or more features by utilizing a pre-trained prediction model.
  • FIG. 2 illustrates a method 200 for obtaining the pre-trained prediction model in accordance with some embodiments of the present disclosure.
  • at step S210, one or more sets of training video frames are identified from pre-recorded training video fragments.
  • the one or more sets of training video frames may comprise a training moving object.
  • the pre-recorded training video fragments may be recorded by cameras of vehicles. Identifying the one or more sets of training video frames may be performed similarly to the identifying in step S110 of the method 100. Alternatively, the one or more sets of training video frames may be identified manually.
  • at step S220, the real motion intention of the training moving object in the one or more sets of training video frames is determined.
  • the real motion intention of the moving object may be determined manually.
  • the real motion intention may be determined by analyzing the one or more sets of training video frames.
  • at step S230, one or more training features indicating motion characteristics of the training moving object are extracted from the one or more sets of training video frames.
  • the one or more training features indicating motion characteristics of the training moving object may comprise one or more of: the velocity of the training moving object, the moving orientation of the training moving object, the distance between the training moving object and other training moving objects in the one or more sets of training video frames, the distance between the training moving object and the vehicle by which the training video fragments were recorded, and the acceleration of the training moving object.
  • features indicating motion characteristics of the training moving object may be extracted by analyzing the training video frames alone or in combination with sensor data of the vehicle by which the training video fragments were recorded.
  • at step S240, a motion intention of the training moving object is predicted based on the one or more training features extracted from the one or more sets of training video frames by utilizing a prediction model, thereby obtaining the predicted motion intention of the training moving object.
  • the prediction model may comprise one or more of: generative adversarial networks, auto-encoding variational Bayes, auto-regression models, etc.
  • at step S250, parameters of the prediction model are modified based on the real motion intention and the predicted motion intention.
  • the parameters of the prediction model are modified such that the predicted motion intention matches the real motion intention.
  • the modification of the parameters of the prediction model may be performed iteratively.
  • at prediction time, the one or more features indicating motion characteristics of the moving object as extracted from the one or more sets of video frames are input to the trained prediction model, with the output of the trained model being the motion intention of the moving object.
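  • The training loop of the method 200 can be sketched as follows. A plain softmax regression trained by gradient descent stands in here for the unspecified model family (the disclosure mentions generative adversarial networks, auto-encoding variational Bayes, and auto-regression models), and the feature-matrix layout is an assumption:

        import numpy as np

        INTENTIONS = ["crossing_road", "acceleration", "deceleration", "sudden_stop",
                      "cut_in", "parking", "overtaking"]  # listed in the disclosure

        def train_prediction_model(X, y, lr=0.1, epochs=500):
            """Method 200 sketch: iteratively modify parameters (step S250) so that
            predictions match the real intentions labelled in step S220.

            X: (n_samples, n_features) training features from step S230
            y: (n_samples,) integer indices into INTENTIONS
            """
            n, d = X.shape
            k = len(INTENTIONS)
            W, b = np.zeros((d, k)), np.zeros(k)
            onehot = np.eye(k)[y]
            for _ in range(epochs):
                logits = X @ W + b
                logits -= logits.max(axis=1, keepdims=True)  # numerical stability
                p = np.exp(logits)
                p /= p.sum(axis=1, keepdims=True)            # predicted intention (step S240)
                grad = (p - onehot) / n                      # cross-entropy gradient
                W -= lr * (X.T @ grad)                       # step S250: modify parameters
                b -= lr * grad.sum(axis=0)
            return W, b

        def predict_intention(W, b, features):
            """Inference: features from step S120 in, motion intention out."""
            return INTENTIONS[int(np.argmax(features @ W + b))]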
  • the method 100 may further comprise prompting the driver of the vehicle with the motion intention of the moving object.
  • the driver of the vehicle can be informed of the motion intention of the moving object visually, audibly, or haptically.
  • for example, the driver of the vehicle can be informed of the motion intention of the moving object by an image displayed on a screen in the vehicle, by voice played through speakers in the vehicle, or by haptic effects played by tactile elements embedded in the driver's seat, the safety belt, or the steering wheel.
  • the motion intention can be predicted periodically. Accordingly, the driver of the vehicle can be prompted periodically.
  • the period of the predicting can be set as required.
  • the method 100 may further comprise controlling the vehicle based on the predicted motion intention of the moving object, to alleviate or reduce the potential influence on the vehicle associated with the motion intention of the moving object.
  • for example, the speed of the vehicle can be controlled (e.g., slowed down).
  • as another example, the steering system of the vehicle can be controlled such that the vehicle switches to another lane before the overtaking occurs, thereby alleviating or reducing the potential influence on the vehicle of the overtaking by the moving object M2.
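  • A deliberately simplified, hypothetical mapping from a predicted intention to a vehicle response, along the lines of the slowing-down and lane-change examples above (the action names are placeholders, not a real vehicle-control API):

        def assistance_action(intention: str) -> str:
            """Hypothetical dispatch from a predicted intention to a response."""
            if intention == "overtaking":
                return "reduce_speed_or_change_lane"   # examples given in the text
            if intention in ("sudden_stop", "cut_in"):
                return "increase_following_distance"   # assumed analogous response
            return "warn_driver_only"                  # fall back to prompting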
  • the method 100 may further comprise determining a motion score of the motion intention based on the one or more features.
  • the motion score of the motion intention may be calculated based on a simple model constructed as below.
  • typical values of features indicating motion characteristics (e.g., speed, distance, acceleration, etc.) of the moving object involved in the scenario can be set.
  • these features can be normalized.
  • a vector can be constructed from the normalized features, thereby obtaining a typical feature vector for the overtaking scenario.
  • the correlation coefficient between the typical feature vector for the overtaking scenario and a vector constructed from the actual features, as extracted and then normalized, can be calculated.
  • the correlation coefficient as calculated can be used as the motion score.
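  • A minimal sketch of this correlation-based score, assuming the features have already been normalized to a common range and ordered consistently (the typical values shown are hypothetical):

        import numpy as np

        def motion_score(actual_features, typical_features):
            """Pearson correlation between the normalized actual feature vector and
            the typical feature vector for a scenario, used directly as the score."""
            return float(np.corrcoef(np.asarray(actual_features, dtype=float),
                                     np.asarray(typical_features, dtype=float))[0, 1])

        # Hypothetical normalized typical values for overtaking: (speed, gap, acceleration)
        TYPICAL_OVERTAKING = [0.9, 0.2, 0.8]
        score = motion_score([0.85, 0.25, 0.7], TYPICAL_OVERTAKING)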
  • the motion score can be calculated by a set of pre-trained prediction models, wherein each model in the set is dedicated to a motion intention.
  • for example, for the model dedicated to overtaking, the target value for samples whose real motion intention is overtaking will be set to 100, while the target value for samples whose real motion intention is not overtaking will be set to 0.
  • features indicating motion characteristics of vehicles in the samples may be extracted from the training video frames and may be used as inputs of the prediction model, while the output of the prediction model may be used as the predicted output value.
  • parameters of the prediction model are adjusted in order to reduce the difference between the predicted output value from the model and the target value as set.
  • a gradient descent method can be used to train the model, thereby obtaining a trained prediction model.
  • the one or more features indicating motion characteristics of the moving object as extracted from the one or more sets of video frames can then be input to the trained prediction model for a given motion intention, with the output of that prediction model being the motion score of the moving object for that motion intention.
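  • A sketch of these per-intention score models: each model regresses toward a target of 100 for samples whose real intention matches and 0 otherwise, trained with plain gradient descent (the linear model is an assumption; the disclosure does not fix the model family). Scoring every intention with its dedicated model also yields the sorted list used for prompting below:

        import numpy as np

        def train_score_model(X, matches_intention, lr=1e-3, epochs=1000):
            """Gradient descent toward target values of 100 (match) or 0 (no match)."""
            y = np.where(matches_intention, 100.0, 0.0)
            w, b = np.zeros(X.shape[1]), 0.0
            for _ in range(epochs):
                err = X @ w + b - y              # predicted output minus target value
                w -= lr * (X.T @ err) / len(y)   # reduce the squared difference
                b -= lr * err.mean()
            return w, b

        def motion_scores(models, features):
            """One dedicated model per intention; scores sorted in descending order."""
            scores = {name: float(features @ w + b) for name, (w, b) in models.items()}
            return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)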
  • the method 100 may further comprise prompting the driver of the vehicle with the motion score of the motion intention.
  • the driver of the vehicle can be informed of the motion scores visually, audibly, or haptically.
  • for example, the driver of the vehicle can be informed of the motion score of the motion intention by an image displayed on a screen in the vehicle, by voice played through speakers in the vehicle, or by haptic effects played by tactile elements embedded in the driver's seat, the safety belt, or the steering wheel.
  • the motion scores may be sorted and provided to the driver of the vehicle in ascending or descending order.
  • in descending order, the motion intention with the highest motion score will be provided first.
  • in ascending order, the motion intention with the lowest motion score will be provided first.
  • the motion scores can be calculated periodically. Accordingly, the driver can be prompted periodically. It is also noted that the period for calculating the motion scores can be set as required.
  • FIG. 3 illustrates an apparatus 300 for assisting driving in accordance with some embodiments of the present disclosure.
  • the apparatus 300 may comprise a camera 310 and a processor 320 .
  • the camera 310 may be configured to capture video regarding surrounding condition of a vehicle.
  • the processor 320 may be configured to: identify one or more sets of video frames from the video, wherein the one or more sets of video frames may comprise a moving object; extract one or more features indicating motion characteristics of the moving object from the one or more sets of video frames; and predict a motion intention of the moving object in the one or more sets of video frames based on the one or more features.
  • the processor 320 may be further configured to: determine a motion score of the motion intention based on the one or more features.
  • the processor 320 may be further configured to: predict the motion intention of the moving object in the one or more sets of video frames based on the one or more features by utilizing a pre-trained prediction model.
  • the pre-trained prediction model may be obtained through the following operations: identifying, from pre-recorded video fragments, one or more sets of training video frames comprising a training moving object; determining the real motion intention of the training moving object in the one or more sets of training video frames; extracting one or more training features indicating motion characteristics of the training moving object from the one or more sets of training video frames; predicting a motion intention of the training moving object based on the one or more training features extracted from the one or more sets of training video frames by utilizing a prediction model; and modifying parameters of the prediction model based on the real motion intention and the predicted motion intention.
  • the processor 320 may be further configured to: prompt the driver of the vehicle with the motion intention of the moving object.
  • the processor 320 may be further configured to: prompt the driver of the vehicle with the motion score of the motion intention.
  • the processor 320 may be further configured to: control the vehicle based on the predicted motion intention of the moving object, to alleviate or reduce the potential influence on the vehicle associated with the motion intention of the moving object.
  • FIG. 4 illustrates a vehicle 400 in accordance with some embodiments of the present disclosure.
  • the vehicle 400 may comprise the apparatus 300 .
  • the camera 310 of the apparatus 300 may be mounted on the top of the vehicle in order to capture video regarding the surrounding conditions of the vehicle 400.
  • the processor 320 may be embedded inside the vehicle 400.
  • embodiments of the present disclosure also provide a non-transitory computer readable medium comprising instructions stored thereon for performing the method 100 or the method 200.
  • FIG. 5 illustrates a block diagram of the computing device 500, which is an example of a hardware device that may be applied to aspects of the present disclosure, in accordance with some embodiments of the present disclosure.
  • the computing device 500 may be any machine configured to perform processing and/or calculations, and may be, but is not limited to, a work station, a server, a desktop computer, a laptop computer, a tablet computer, a personal digital assistant, a smart phone, an on-vehicle computer, or any combination thereof.
  • the aforementioned various apparatuses/server/client device may be wholly or at least partially implemented by the computing device 500 or a similar device or system.
  • the computing device 500 may comprise elements that are connected with or in communication with a bus 502 , possibly via one or more interfaces.
  • the computing device 500 may comprise the bus 502 , and one or more processors 504 , one or more input devices 506 and one or more output devices 508 .
  • the one or more processors 504 may be any kind of processor, and may comprise but are not limited to one or more general-purpose processors and/or one or more special-purpose processors (such as special processing chips).
  • the input devices 506 may be any kind of device that can input information to the computing device, and may comprise but are not limited to a mouse, a keyboard, a touch screen, a microphone, and/or a remote control.
  • the output devices 508 may be any kind of device that can present information, and may comprise but are not limited to a display, a speaker, a video/audio output terminal, a vibrator, and/or a printer.
  • the computing device 500 may also comprise or be connected with non-transitory storage devices 510, which may be any storage devices that are non-transitory and can implement data stores, and may comprise but are not limited to a disk drive, an optical storage device, a solid-state storage, a floppy disk, a flexible disk, a hard disk, a magnetic tape or any other magnetic medium, a compact disc or any other optical medium, a ROM (Read Only Memory), a RAM (Random Access Memory), a cache memory, and/or any other memory chip or cartridge, and/or any other medium from which a computer may read data, instructions, and/or code.
  • the non-transitory storage devices 510 may be detachable from an interface.
  • the non-transitory storage devices 510 may have data/instructions/code for implementing the methods and steps which are described above.
  • the computing device 500 may also comprise a communication device 512 .
  • the communication device 512 may be any kind of device or system that can enable communication with external apparatuses and/or with a network, and may comprise but is not limited to a modem, a network card, an infrared communication device, a wireless communication device, and/or a chipset such as a Bluetooth™ device, an 802.11 device, a Wi-Fi device, a WiMAX device, cellular communication facilities, and/or the like.
  • when the computing device 500 is used as an on-vehicle device, it may also be connected to external devices, for example, a GPS receiver, or sensors for sensing different environmental data, such as an acceleration sensor, a wheel speed sensor, a gyroscope, and so on. In this way, the computing device 500 may, for example, receive location data and sensor data indicating the travelling situation of the vehicle.
  • the computing device 500 may also be connected with other facilities, such as an engine system, a wiper, an anti-lock braking system, or the like.
  • the non-transitory storage devices 510 may have map information and software elements so that the processor 504 may perform route guidance processing.
  • the output devices 508 may comprise a display for displaying the map, the location mark of the vehicle, and also images indicating the travelling situation of the vehicle.
  • the output devices 508 may also comprise a speaker or an interface with an earphone for audio guidance.
  • the bus 502 may include but is not limited to Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus. Particularly, for an on-vehicle device, the bus 502 may also include a Controller Area Network (CAN) bus or other architectures designed for application on an automobile.
  • the computing device 500 may also comprise a working memory 514 , which may be any kind of working memory that may store instructions and/or data useful for the working of the processor 504 , and may comprise but is not limited to a random access memory and/or a read-only memory device.
  • Software elements may be located in the working memory 514, including but not limited to an operating system 516, one or more application programs 518, drivers, and/or other data and codes. Instructions for performing the methods and steps described above may be comprised in the one or more application programs 518, and the means/units/elements of the aforementioned various apparatuses/server/client device may be implemented by the processor 504 reading and executing the instructions of the one or more application programs 518.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Atmospheric Sciences (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Ecology (AREA)
  • Environmental & Geological Engineering (AREA)
  • Environmental Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Chemical & Material Sciences (AREA)
  • Electromagnetism (AREA)
  • Traffic Control Systems (AREA)
US16/756,607 2017-11-28 2017-11-28 Method and apparatus for assisting driving Active 2039-10-18 US11748974B2 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/113317 WO2019104471A1 (en) 2017-11-28 2017-11-28 Method and apparatus for assisting driving

Publications (2)

Publication Number Publication Date
US20200262419A1 (en) 2020-08-20
US11748974B2 (en) 2023-09-05

Family

ID=66665286

Family Applications (1)

Application Number Title Priority Date Filing Date
US16/756,607 Active 2039-10-18 US11748974B2 (en) 2017-11-28 2017-11-28 Method and apparatus for assisting driving

Country Status (4)

Country Link
US (1) US11748974B2 (zh)
CN (1) CN111278708B (zh)
DE (1) DE112017008236T5 (zh)
WO (1) WO2019104471A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112009470B (zh) * 2020-09-08 2022-01-14 iFLYTEK Co., Ltd. Vehicle driving control method, apparatus, device, and storage medium
CN112560995A (zh) * 2020-12-26 2021-03-26 Zhejiang Tianxingjian Intelligent Technology Co., Ltd. GM-HMM-based parking intention identification method
TWI786893B (zh) * 2021-10-19 2022-12-11 Automotive Research & Testing Center In-cabin monitoring and context understanding perception method and system


Patent Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008077624A (ja) 2005-12-07 2008-04-03 Nissan Motor Co Ltd Object detection device and object detection method
JP2008158640A (ja) 2006-12-21 2008-07-10 Fuji Heavy Ind Ltd Moving object detection device
US20100205132A1 (en) 2007-08-27 2010-08-12 Toyota Jidosha Kabushiki Kaisha Behavior predicting device
US20090087085A1 (en) 2007-09-27 2009-04-02 John Eric Eaton Tracker component for behavioral recognition system
DE102007049706A1 (de) 2007-10-17 2009-04-23 Robert Bosch Gmbh Method for estimating the relative motion of video objects and driver assistance system for motor vehicles
US20110313664A1 (en) 2009-02-09 2011-12-22 Toyota Jidosha Kabushiki Kaisha Apparatus for predicting the movement of a mobile body
CN102307769A (zh) 2012-01-04 Device for predicting the movement of a mobile body
JP2011170762A (ja) 2010-02-22 2011-09-01 Toyota Motor Corp Driving assistance device
JP2012033075A (ja) 2010-07-30 2012-02-16 Toyota Motor Corp Behavior prediction device, behavior prediction method, and driving assistance device
US9581997B1 (en) 2011-04-22 2017-02-28 Angel A. Penilla Method and system for cloud-based communication for automatic driverless movement
WO2013081984A1 (en) 2011-11-28 2013-06-06 Magna Electronics, Inc. Vision system for vehicle
JP2014006776A (ja) 2012-06-26 2014-01-16 Honda Motor Co Ltd Vehicle periphery monitoring device
CN104584097A (zh) 2012-08-09 2015-04-29 Toyota Jidosha Kabushiki Kaisha Object detection device and driving assistance device
US20150298621A1 (en) 2012-08-09 2015-10-22 Toyota Jidosha Kabushiki Kaisha Object detection apparatus and driving assistance apparatus
US20190236941A1 (en) * 2013-03-15 2019-08-01 John Lindsay Vehicular Communication System
US20170101056A1 (en) 2015-10-07 2017-04-13 Lg Electronics Inc. Vehicle and control method for the same
DE102016001772A1 (de) 2016-02-16 2016-08-11 Daimler Ag Method for predicting the motion and behavior of objects located in a vehicle's surroundings
CN106740864A (zh) 2017-01-12 2017-05-31 Beijing Jiaotong University Driving behavior intention judgment and prediction method
CN107369166A (zh) 2017-07-13 2017-11-21 Shenzhen University Target tracking method and system based on a multi-resolution neural network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Chinese-language Office Action issued in Chinese Application No. 201780096344.9 dated Sep. 2, 2022 with English translation (21 pages).
International Search Report (PCT/ISA/210) issued in PCT Application No. PCT/CN2017/113317 dated Sep. 4, 2018 (two (2) pages).
Written Opinion (PCT/ISA/237) issued in PCT Application No. PCT/CN2017/113317 dated Sep. 4, 2018 (three (3) pages).

Also Published As

Publication number Publication date
DE112017008236T5 (de) 2020-08-20
CN111278708A (zh) 2020-06-12
WO2019104471A1 (en) 2019-06-06
CN111278708B (zh) 2023-02-14
US20200262419A1 (en) 2020-08-20

Similar Documents

Publication Publication Date Title
CN109429518B (zh) Autonomous driving traffic prediction based on map images
US10336252B2 (en) Long term driving danger prediction system
US11315026B2 (en) Systems and methods for classifying driver behavior
US10817751B2 (en) Learning data creation method, learning method, risk prediction method, learning data creation device, learning device, risk prediction device, and recording medium
KR20190013689A (ko) Evaluation framework for trajectories predicted in autonomous vehicle traffic prediction
US11748974B2 (en) Method and apparatus for assisting driving
JP2021093162A (ja) Vehicle terminal device for providing driving-related guidance services, service providing server, method, computer program, and computer-readable recording medium
CN112581750B (zh) Vehicle driving control method and apparatus, readable storage medium, and electronic device
US11713046B2 (en) Driving assistance apparatus and data collection system
US11745745B2 (en) Systems and methods for improving driver attention awareness
CN111540191B (zh) Driving warning method, system, device, and storage medium based on the Internet of Vehicles
CN113205088A (zh) Obstacle image display method, electronic device, and computer-readable medium
JP7269694B2 (ja) Learning data generation method and program for event occurrence estimation, learning model, and event occurrence estimation device
CN108960160B (zh) Method and device for predicting a structured state quantity based on an unstructured prediction model
JP2022047580A (ja) Information processing device
WO2020019231A1 (en) Apparatus and method for use with vehicle
US20220105866A1 (en) System and method for adjusting a lead time of external audible signals of a vehicle to road users
KR20210128563A (ko) Method and device for providing driving information using road object recognition in the cloud
WO2020250574A1 (ja) Driving assistance device, driving assistance method, and program
US11867523B2 (en) Landmark based routing
US20220306148A1 (en) Method and Apparatus Applied in Autonomous Vehicle
JP2023066132A (ja) Information processing device, information processing method, and information processing program
CN117644863A (zh) Driving risk prediction method and apparatus, electronic device, and storage medium
CN115631550A (zh) Method and system for user feedback

Legal Events

Date Code Title Description
AS Assignment

Owner name: BAYERISCHE MOTOREN WERKE AKTIENGESELLSCHAFT, GERMANY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:KARABURUN, SINAN;REEL/FRAME:052419/0553

Effective date: 20200325

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STCF Information on status: patent grant

Free format text: PATENTED CASE