US20180043829A1 - Method and Apparatus for Providing Automatic Mirror Setting Via Inward Facing Cameras
- Publication number: US20180043829A1 (application US 15/672,897)
- Authority: United States (US)
- Prior art keywords: vehicle, data, mirror, driver, images
- Legal status: Abandoned (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06N20/00—Machine learning
- B60Q9/008—Arrangement or adaptation of signal devices for anti-collision purposes, e.g. haptic signalling
- B60R1/00—Optical or real-time viewing arrangements for drivers or passengers using optical image capturing systems
- B60R1/04—Rear-view mirror arrangements mounted inside vehicle
- B60R1/062—Rear-view mirror arrangements mounted on vehicle exterior with remote control for adjusting position
- G01C21/3602—Navigation input using image analysis, e.g. detection of road signs, lanes, buildings
- G01C21/3605—Destination input or retrieval
- G01S19/485—Determining position by combining a satellite positioning solution with an optical or imaging system
- G06F3/013—Eye tracking input arrangements
- G06K9/00281, G06K9/00604, G06K9/00791, G06K9/00845, G06K2209/27—(legacy G06K image-recognition codes)
- G06V10/95—Image- or video-understanding architectures structured as a network, e.g. client-server
- G06V20/56—Context or environment of the image exterior to a vehicle, using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians
- G06V20/586—Recognition of parking space
- G06V20/588—Recognition of the road, e.g. of lane markings
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. seat occupancy or driver state
- G06V20/597—Recognising the driver's state or behaviour, e.g. attention or drowsiness
- G06V40/171—Local facial features and components; occluding parts, e.g. glasses
- G06V40/19—Sensors for eye characteristics
- G08B21/06—Alarms indicating a condition of sleep, e.g. anti-dozing alarms
- G08G1/0112—Traffic parameters sourced from the vehicle, e.g. floating car data [FCD]
- G08G1/0116—Traffic parameters sourced from roadside infrastructure, e.g. beacons
- G08G1/0129—Traffic data processing for creating historical data or processing based on historical data
- G08G1/04—Detecting movement of traffic using optical or ultrasonic detectors
- G08G1/096811—Navigation instructions where the route is computed offboard
- G08G1/096861—Immediate route instructions output to the driver, e.g. arrow signs for next turn
- G08G1/096888—Navigation input obtained using learning systems, e.g. history databases
- G08G1/0969—Navigation output having a display in the form of a map
- G08G1/143—Indication of available parking spaces inside the vehicle
- G08G1/166—Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04N7/188—Closed-circuit television capturing images triggered by a predetermined event
- H04W4/02—Services making use of location information
- H04W4/44—Services for communication between vehicles and infrastructures, e.g. vehicle-to-cloud [V2C]
- B60R2300/105—In-vehicle viewing arrangements using multiple cameras
- B60R2300/8006—Viewing arrangements for monitoring and displaying scenes of the vehicle interior
- G05D1/0214—Land-vehicle trajectory control in accordance with safety or protection criteria
- G05D1/0246—Position control using a video camera in combination with image processing means
- G06N3/045—Neural networks: combinations of networks
- G06N3/08—Neural networks: learning methods
- G06V2201/10—Recognition assisted with metadata
Definitions
- The exemplary embodiments of the present invention relate to the field of communication networks. More specifically, they relate to operating an intelligent machine using a virtuous cycle between a cloud, machine learning, and containerized sensors.
- Machine learning is, in itself, an exploratory process that may involve trying different kinds of models, such as convolutional or recurrent neural networks (RNNs).
- Machine learning, or training, typically involves a wide variety of hyper-parameters that change the shape of the model and its training characteristics. Model training generally requires intensive computation, so real-time response via a machine learning model can be challenging.
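The exploratory, hyper-parameter-driven nature of training described above can be illustrated with a minimal sketch; the parameter names, grid, and scoring function below are hypothetical, not taken from the patent:

```python
from itertools import product

def sweep(train_fn, grid):
    """Train one candidate model per hyper-parameter combination and
    return the best (params, score) pair. Each call to train_fn stands
    in for an expensive offline training run."""
    best = None
    for values in product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        score = train_fn(**params)
        if best is None or score > best[1]:
            best = (params, score)
    return best

# Hypothetical grid for an AM-style model; real training would be far costlier.
grid = {"learning_rate": [1e-2, 1e-3], "hidden_units": [64, 128]}
```

Because every cell of such a grid is a full training run, sweeps like this are done offline, which is consistent with the patent pushing training to the cloud-side MLC rather than attempting it in real time on the vehicle.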
- A drawback associated with traditional automobiles is that most mirrors, especially the external mirrors on both sides of the vehicle, are typically set incorrectly, so the driver often has a large blind spot on either side of the car. Typically, the driver's head position differs between the time the mirror is set and the time the vehicle is driven.
- One embodiment of the presently claimed invention discloses a method and/or system capable of adaptively adjusting one or more mirrors mounted on a vehicle via an automatic mirror-setting ("AM") model managed by a virtuous cycle containing a machine learning center ("MLC") and a cloud-based network ("CBN").
- The system, or AM system, includes a set of mirrors, a set of inward facing cameras, a vehicle onboard computer ("VOC"), and an AM module.
- The mirrors, attached to the vehicle, are configured to capture at least a portion of the external environment in which the vehicle operates.
- The mirrors include a left exterior side mirror, a right exterior side mirror, and an interior center mirror.
- The external environment includes the road, nearby structures, pedestrians, traffic conditions, nearby cars, and traffic lights.
- The inward facing cameras, mounted inside the vehicle, are configured to collect internal images including operator facial features showing operator visual characteristics.
- The VOC, which is coupled to the CBN, is configured to determine operator vision metadata based on the internal images, operator visual characteristics, and historically stored data.
- The inward facing cameras include multiple exteriorly mounted image sensors capable of capturing internal images relating to the position of the driver relative to the driver seat and the interior of the vehicle.
- The operator visual characteristics include the number of eyes in the operator's facial features, as well as peripheral vision, vision boundary, and height of visual center.
- The AM module is able to adaptively set a mirror to an optimal orientation so that the area of the external blind spot is minimized.
- The AM module includes at least a portion of an AM model, which is able to dynamically adjust the orientation of at least one of the plurality of mirrors to show an event associated with the external environment, based on the external images and historical data from the CBN.
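One plausible way to compute an "optimal orientation" from the driver's head position is the mirror-reflection constraint: a flat mirror's surface normal must bisect the direction to the driver's eyes and the direction to the region the mirror should show. A minimal planar sketch, where the coordinate frame and function names are assumptions rather than the patent's method:

```python
import math

def mirror_normal(eye, mirror, target):
    """Unit normal a flat mirror at `mirror` needs so that a driver with
    eyes at `eye` sees the point `target`; all points are (x, y) in a
    top-down vehicle frame. By the law of reflection the normal is the
    bisector of the mirror-to-eye and mirror-to-target directions."""
    def unit(v):
        n = math.hypot(v[0], v[1])
        return (v[0] / n, v[1] / n)
    to_eye = unit((eye[0] - mirror[0], eye[1] - mirror[1]))
    to_target = unit((target[0] - mirror[0], target[1] - mirror[1]))
    return unit((to_eye[0] + to_target[0], to_eye[1] + to_target[1]))
```

When the inward facing cameras detect that the driver's head has moved, `eye` changes and the normal, hence the mirror actuator set-point, can be recomputed.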
- The AM model includes an abnormal tracking function, which is able to realign the orientation of at least one of the plurality of mirrors to continuously track an abnormal event in response to the external images and real-time cloud data submitted by other nearby vehicles.
- The AM model is trained by the MLC, which is coupled to the VOC. A function of the MLC is to train and improve the AM model based on the labeled data from the CBN.
- The AM system further includes outward facing cameras mounted on the vehicle for collecting external images representing the surrounding environment in which the vehicle operates.
- The CBN is wirelessly coupled to the VOC and configured to correlate and generate labeled data associated with AM data based on historical cloud data, internal images, and external images.
- The outward facing cameras are configured to capture real-time images as the vehicle moves across a geographic area.
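Correlating external images in the CBN presupposes that each capture carries position metadata. A minimal, hypothetical sketch of such tagging; the `Frame` record and its field names are illustrative, not from the patent:

```python
import time
from dataclasses import dataclass, field

@dataclass
class Frame:
    """Hypothetical record pairing one captured image with context metadata."""
    image: bytes
    lat: float
    lon: float
    timestamp: float = field(default_factory=time.time)

def tag_frames(images, positions):
    """Pair each captured external image with its GPS fix so the cloud
    side can correlate submissions from many vehicles by place and time."""
    return [Frame(img, lat, lon) for img, (lat, lon) in zip(images, positions)]
```
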
- The presently claimed invention also discloses a method or process for interactively setting a mirror mounted on a vehicle via metadata extraction, utilizing a virtuous cycle that includes sensors, the MLC, and the CBN.
- The process is capable of receiving a mirror-resetting signal indicating that at least one mirror mounted on the vehicle requires an adjustment.
- The process subsequently adjusts at least one mirror to an orientation with a minimal blind spot in accordance with the driver head position shown in the internal images and historical cloud data.
- The internal images are continuously obtained for a predefined wait period until the driver settles down, so that an accurate calculation of the driver head position can be computed.
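The "wait until the driver settles" step can be read as a simple stability test over recent head-position samples. A minimal sketch, where the window size and tolerance are illustrative assumptions:

```python
def settled_head_position(samples, window=5, tol=0.01):
    """Given a chronological list of head positions (e.g. lateral offset
    in metres from the seat centre), return the mean of the last `window`
    samples once they all lie within `tol` of that mean; return None
    while the driver is still moving or too few samples exist."""
    if len(samples) < window:
        return None
    recent = samples[-window:]
    mean = sum(recent) / window
    if max(abs(s - mean) for s in recent) < tol:
        return mean
    return None
```

Called once per captured internal image, this returns None during the wait period and a stable head position as soon as the driver stops shifting, at which point the mirror calculation can proceed.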
- The set of outward facing cameras mounted on the vehicle can be activated to record external surrounding images representing the geographic environment in which the vehicle operates.
- The AM model is capable of tracking a surrounding environmental event in accordance with the external surrounding images and historical data supplied by the virtuous cycle.
- The AM system is able to perform a process configured to utilize one of the external mirrors mounted on the vehicle to dynamically track an abnormal event, facilitated via the virtuous cycle.
- Images showing the driver head position, captured by a set of interior cameras, are obtained while the driver operates the moving vehicle.
- The process is capable of adaptively adjusting the orientation of at least one mirror to track the abnormal event based on its projected location according to the message, so that the driver is able to see the abnormal event.
- The AM model is able to issue a notice telling the driver to watch the abnormal event in the newly reoriented mirror. It should be noted that labeled data representing the driver's reaction to the abnormal event is uploaded back to the CBN to facilitate AM model training at the MLC.
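Continuously tracking an abnormal event implies repeatedly steering the mirror toward a moving set-point under an actuator rate limit. A minimal planar control-step sketch; the bearing-bisector approximation and the 2-degree-per-step slew limit are assumptions, not the patent's method:

```python
def track_yaw(mirror_yaw, eye_bearing, event_bearing, max_step=2.0):
    """One control step: move the mirror yaw (degrees) toward the
    set-point whose surface normal bisects the bearings of the driver's
    eyes and of the projected event, slewing at most `max_step` degrees
    per step to respect the actuator's rate limit."""
    desired = (eye_bearing + event_bearing) / 2.0
    delta = max(-max_step, min(max_step, desired - mirror_yaw))
    return mirror_yaw + delta
```

Invoked each time a new projected event location arrives from the cloud, the mirror converges on the event over a few steps and then follows it as the bearing changes.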
- FIGS. 1A-1B are block diagrams illustrating a virtuous cycle facilitating an automatic mirror-setting (“AM”) system capable of adaptively adjusting mirror(s) via a virtuous cycle in accordance with one embodiment of the present invention.
- FIGS. 1C-1E are diagrams illustrating the AM model providing mirror adjustment using inward and/or outward facing cameras via a virtuous cycle in accordance with one embodiment of the present invention.
- FIGS. 1F-1H are block diagrams illustrating a pipeline process of an outward facing camera capable of identifying and classifying detected object(s) using a virtuous cycle in accordance with one embodiment of the present invention.
- FIGS. 2A-2B are block diagrams illustrating a virtuous cycle capable of facilitating AM model detection in accordance with one embodiment of the present invention.
- FIG. 3 is a block diagram illustrating a cloud based network using a crowdsourcing approach to improve AM model(s) in accordance with one embodiment of the present invention.
- FIG. 4 is a block diagram illustrating an AM model or system using the virtuous cycle in accordance with one embodiment of the present invention.
- FIG. 5 is a block diagram illustrating an exemplary process of correlating AM data in accordance with one embodiment of the present invention.
- FIG. 6 is a block diagram illustrating an exemplary process of real-time data management for the AM model in accordance with one embodiment of the present invention.
- FIG. 7 is a block diagram illustrating a crowd sourced application model for AM in accordance with one embodiment of the present invention.
- FIG. 8 is a block diagram illustrating a method of storing AM related data using a geo-spatial objective storage in accordance with one embodiment of the present invention.
- FIG. 9 is a block diagram illustrating an exemplary approach of analysis engine analyzing AM data in accordance with one embodiment of the present invention.
- FIG. 10 is a block diagram illustrating an exemplary containerized sensor network used for sensing AM related information in accordance with one embodiment of the present invention.
- FIG. 11 is a block diagram illustrating a processing device or computer system which can be installed in a vehicle for facilitating the virtuous cycle in accordance with one embodiment of the present invention.
- FIG. 12 is a flowchart illustrating a process of AM model or system capable of providing driver rating in accordance with one embodiment of the present invention.
- Embodiments of the present invention are described herein in the context of a method and/or apparatus for facilitating automatic mirror adjustment based on images captured by inward facing cameras via an AM model continuously trained by a virtuous cycle containing a cloud based network, a containerized sensing device, and a machine learning center (“MLC”).
- the components, process steps, and/or data structures described herein may be implemented using various types of operating systems, computing platforms, computer programs, and/or general purpose machines.
- devices of a less general purpose nature such as hardware devices, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), or the like, may also be used without departing from the scope and spirit of the inventive concepts disclosed herein.
- Where a method comprising a series of process steps is implemented by a computer or a machine, those process steps can be stored as a series of instructions readable by the machine on a tangible medium such as a computer memory device (e.g., ROM (Read Only Memory), PROM (Programmable Read Only Memory), EEPROM (Electrically Erasable Programmable Read Only Memory), FLASH Memory, Jump Drive, and the like), magnetic storage medium (e.g., tape, magnetic disk drive, and the like), optical storage medium (e.g., CD-ROM, DVD-ROM, paper card and paper tape, and the like), or other known types of program memory.
- The term “system” or “device” is used generically herein to describe any number of components, elements, sub-systems, devices, packet switch elements, packet switches, access switches, routers, networks, computer and/or communication devices or mechanisms, or combinations of components thereof.
- The term “computer” includes a processor, memory, and buses capable of executing instructions, wherein the computer refers to one or a cluster of computers, personal computers, workstations, mainframes, or combinations thereof.
- One embodiment of the presently claimed invention discloses a method or system capable of adjusting an exterior mirror of a vehicle via an automatic mirror-setting (“AM”) model managed by a virtuous cycle containing machine learning center (“MLC”) and cloud based network (“CBN”).
- the system or AM system includes a set of mirrors, a set of inward facing cameras, a vehicle onboard computer (“VOC”), and an AM module.
- the mirrors, attached to the vehicle, are configured to capture at least a portion of the external environment in which the vehicle operates.
- the inward facing cameras, mounted in the vehicle, are configured to collect internal images including operator facial features showing operator visual characteristics.
- the VOC, which is coupled to the CBN, is configured to determine operator vision metadata based on the internal images, operator visual characteristics, and historical stored data.
- the AM module is able to adaptively set a mirror to an optimal orientation so that the area of the external blind spot is minimized.
- FIG. 1A is a block diagram 100 illustrating a virtuous cycle facilitating an automatic mirror-setting (“AM”) system capable of adaptively adjusting mirror(s) via a virtuous cycle in accordance with one embodiment of the present invention.
- Diagram 100 illustrates a virtuous cycle containing a vehicle 102 , CBN 104 , and MLC 106 .
- MLC 106 can be located remotely or in the cloud.
- MLC 106 can be a part of CBN 104. It should be noted that the underlying concept of the exemplary embodiment(s) of the present invention would not change if one or more blocks (circuit or elements) were added to or removed from diagram 100.
- Vehicle 102 in one example, can be a car, automobile, bus, train, drone, airplane, truck, and the like, and is capable of moving geographically from point A to point B.
- the terms “vehicle” and “car” are used interchangeably herein.
- Vehicle 102 includes wheels with ABS (anti-lock braking system), body, steering wheel 108, exterior or outward facing cameras 125, interior (or 360-degree) or inward facing camera 126, antenna 124, onboard controller or VOC 123, and operator (or driver) 109.
- outward facing cameras and/or inward facing cameras 125 - 126 can be installed at front, side-facing, stereo, and inside of vehicle 102 .
- vehicle 102 also includes various sensors which sense information related to vehicle state, vehicle status, driver actions, and the like.
- the sensors are able to collect information, such as audio, ABS, steering, braking, acceleration, traction control, windshield wipers, GPS (global positioning system), radar, ultrasound, lidar (Light Detection and Ranging), and the like.
- VOC or onboard controller 123 includes CPU (central processing unit), GPU (graphic processing unit), memory, and disk responsible for gathering data from outward facing or exterior cameras 125 , inward facing or interior cameras 126 , audio sensor, ABS, traction control, steering wheel, CAN-bus sensors, and the like.
- VOC 123 executes AM model received from MLC 106 , and interfaces with antenna 124 to communicate with CBN 104 via a wireless communication network 110 .
- the wireless communication network includes, but is not limited to, WIFI, cellular network, Bluetooth network, satellite network, or the like.
- a function of VOC 123 is to gather or capture real-time surrounding information as well as exterior information when vehicle 102 is moving.
- CBN 104 includes various digital computing systems, such as, but not limited to, server farm 120 , routers/switches 121 , cloud administrators 119 , connected computing devices 116 - 117 , and network elements 118 .
- a function of CBN 104 is to provide cloud computing which can be viewed as on-demand Internet based computing service with enormous computing power and resources.
- Another function of CBN 104 is to improve or refine AM labeled data via correlating captured real-time data with relevant cloud data. The refined AM labeled data is subsequently passed to MLC 106 for model training via a connection 112 .
- MLC 106, in one embodiment, provides, refines, trains, and/or distributes models 115 such as the AM model based on information or data, such as AM labeled data, provided from CBN 104. It should be noted that machine learning builds the AM model based on models generated and maintained by various computational algorithms using historical as well as current data. A function of MLC 106 is that it is capable of pushing information, such as a revised AM model, to vehicle 102 via a wireless communications network 114 in real-time.
- an onboard AM model which could reside inside of VOC 123 receives a triggering event or events from built-in sensors such as driver body language, external surrounding condition, internal detected images, ABS, wheel slippery, turning status, engine status, and the like.
- the triggering event or events may include, but are not limited to, activation of ABS, texting, drinking, smoking, arguing, playing, fighting, rapid steering, rapid braking, excessive wheel slip, activation of emergency stop, and so on.
- the recording or recorded images captured by the inward facing camera or 360 camera are rewound to an earlier time stamp preceding the receipt of the triggering event(s) for identifying, for example, AM labeled data which contains images of driver head position or abnormal events.
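The rewind behavior described above can be sketched with a rolling frame buffer. This is a minimal Python sketch under the editor's assumptions; the class name, buffer size, and lookback window are illustrative, not the patent's implementation:

```python
from collections import deque

class FrameBuffer:
    """Rolling buffer of (timestamp, frame) pairs from the inward facing
    camera; old frames drop off automatically via maxlen."""

    def __init__(self, max_frames=300):
        self.frames = deque(maxlen=max_frames)

    def record(self, timestamp, frame):
        self.frames.append((timestamp, frame))

    def rewind(self, trigger_time, lookback=5.0):
        # Return the frames captured in the window leading up to the
        # trigger, e.g. to extract AM labeled data around an abnormal event.
        return [f for t, f in self.frames
                if trigger_time - lookback <= t <= trigger_time]

buf = FrameBuffer()
for t in range(10):                       # simulated 1 Hz capture
    buf.record(float(t), f"frame{t}")
clip = buf.rewind(trigger_time=8.0, lookback=3.0)
print(clip)                               # frames from t=5.0 through t=8.0
```

A real system would store image arrays rather than strings, but the windowed lookup is the same.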
- the AM model is retrained and refined at MLC 106 .
- the retrained AM model is subsequently pushed back onto vehicle 102 .
- the system can automatically set the mirrors to the safest possible position. For example, by keeping track of the full range of positions the driver's head has been in, the system can determine the “center point” of their normal driving position. Note that this head position usually differs slightly from where drivers might place their head when performing a mirror adjustment.
- the AM model has a delay element that, during operation, upon pressing the button for “auto mirror set,” a delay and a tone are issued to allow the driver to position themselves as they will be when driving.
- inward facing camera 126 captures facial images of driver or operator 109 including the driver head position and eye level.
- a focal direction 107 of operator 109 is identified.
- a possible trajectory 105 toward the location being looked at is obtained.
- Trajectory 105 and focal direction 107 are subsequently processed and combined in accordance with stored data in the cloud.
- the object being looked at by operator 109 is identified.
- the object is a house 103 nearby the road.
- the eye level is determined wherein the eye level or head position will be used to adjust the mirrors to optimal orientations with minimal blind spots.
- An advantage of using AM system is to reduce blind spots whereby traffic accidents should be reduced.
- FIG. 1B illustrates a block diagram 140 showing an operator or driver monitored by AM system for adaptively adjusting mirrors via a virtuous cycle in accordance with one embodiment of the present invention.
- Diagram 140 illustrates a driver 148 , inward facing camera(s) 142 , right external side mirror 143 , and exterior camera 144 .
- camera 142, also known as an interior camera or 360 camera, monitors or captures the driver's facial expression 146 and/or driver (or operator) body language such as head position.
- the AM model can conclude that the driver is behaving normally or abnormally.
- the interior images captured by inward facing camera(s) 142 can show a location in which operator 148 is focusing based on relative eye positions of operator 148 .
- AM system obtains external images captured by outward facing camera(s) 144 .
- image 145 is recorded and processed.
- the AM model is able to identify the driver vision associated with the side mirrors. For example, the AM model can identify the optimal orientation for mirror 143 in view of driver vision 141 with minimal blind spots.
- FIG. 1C illustrates diagrams 180 and 198 showing the AM model containing inward facing cameras for automatically setting mirrors using a virtuous cycle in accordance with one embodiment of the present invention.
- Diagram 180 in one embodiment, includes interior car 189 , exterior car 187 , steering wheel 181 , dashboard 182 , driver head position 184 , inward facing camera 190 , and left mirror 188 .
- onboard vehicle computer can calculate driver head position 184 based on the observed images captured by inward facing camera 190 .
- left mirror 188 is automatically adjusted to view rear view 186 with the coverage with minimum blind spot as indicated by numeral 183 . It should be noted that the underlying concept of the exemplary embodiment(s) of the present invention would not change if one or more blocks (components or elements) were added to or removed from diagram 180 .
- Diagram 180 illustrates a heuristic that uses a stream of head position data to choose the center point of the eyes.
- the AM system employs one or more internal cameras to extract metadata regarding the head position of a driver, and uses that known position in space to automatically set the rearview mirrors to the optimal position for that driver.
- the AM model is, in one aspect, capable of calculating the optimal setting for rear mirrors based on obtained driver head position. For example, when drivers are sitting in their normal driving positions, their head may not stay in one place; instead they sweep through a range of space.
- the system is able to use extracted metadata about head position, and more importantly, the exact position of the driver's eyes. By combining information about the exact distance from the eyes to the rear mirrors, a vertical and horizontal angle can be calculated that will allow the driver to see what is happening behind them, while minimizing the size of the “blind spot” that can occur.
- left mirror 188 has a metric of horizontal distance to center of steering wheel and vertical distance to center of steering wheel.
- the metrics also define mirror width, length, and height.
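The calculation described above, combining the eye position with the distance from the eyes to the mirror to derive vertical and horizontal angles, can be sketched as follows. This Python sketch assumes a simple law-of-reflection model (the mirror normal bisects the eye-to-mirror and target-to-mirror directions) and an illustrative coordinate frame; all names and coordinates are the editor's assumptions, not the patent's method:

```python
import math

def mirror_angles(eye, mirror, target_behind):
    """Aim a side mirror so the driver's eye sees a point behind the car.
    Coordinates are (x=right, y=up, z=forward) in meters."""
    def sub(a, b):
        return tuple(a[i] - b[i] for i in range(3))
    def norm(v):
        m = math.sqrt(sum(c * c for c in v))
        return tuple(c / m for c in v)
    to_eye = norm(sub(eye, mirror))
    to_target = norm(sub(target_behind, mirror))
    # The mirror normal bisects the two unit directions (law of reflection).
    nx, ny, nz = norm(tuple(to_eye[i] + to_target[i] for i in range(3)))
    yaw = math.degrees(math.atan2(nx, nz))    # horizontal mirror angle
    pitch = math.degrees(math.asin(ny))       # vertical mirror angle
    return yaw, pitch

# Driver eye near center, left mirror out to the left, target 20 m behind-left
yaw, pitch = mirror_angles(eye=(0.4, 1.2, 0.0),
                           mirror=(-1.0, 1.0, 0.5),
                           target_behind=(-2.5, 1.0, -20.0))
print(round(yaw, 1), round(pitch, 1))
```

The yaw near 180 degrees simply means the mirror normal points mostly rearward, as a side mirror's does.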
- Diagram 198 shows a heuristic illustration used for filtered and weighted average of eye position.
- face and eye position is detected at block 192 .
- outlier data points are filtered out at block 194 .
- the weighted average of eye position over a predefined time interval (t1, t2) is calculated and/or obtained.
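The heuristic of diagram 198 (detect eye positions, filter outliers, take a weighted average over (t1, t2)) might look like the following Python sketch; the median-based outlier filter and recency weighting are the editor's assumptions about one reasonable choice:

```python
def eye_center(samples, t1, t2, max_dev=0.15):
    """samples: (time, x, y) eye positions in normalized image coords.
    Keep samples in [t1, t2], drop outliers far from the median, then
    average with more weight on recent samples."""
    window = [(t, x, y) for t, x, y in samples if t1 <= t <= t2]
    xs = sorted(x for _, x, _ in window)
    ys = sorted(y for _, _, y in window)
    mx, my = xs[len(xs) // 2], ys[len(ys) // 2]          # medians
    kept = [(t, x, y) for t, x, y in window
            if abs(x - mx) <= max_dev and abs(y - my) <= max_dev]
    wsum = sum(t - t1 + 1.0 for t, _, _ in kept)          # recency weights
    cx = sum((t - t1 + 1.0) * x for t, x, _ in kept) / wsum
    cy = sum((t - t1 + 1.0) * y for t, _, y in kept) / wsum
    return round(cx, 3), round(cy, 3)

samples = [(0, 0.50, 0.40), (1, 0.52, 0.41), (2, 0.90, 0.40),  # outlier at t=2
           (3, 0.51, 0.42), (4, 0.53, 0.43)]
print(eye_center(samples, t1=0, t2=4))    # (0.519, 0.421)
```

The outlier at t=2 (the driver glancing far sideways, say) is excluded before averaging, which is the point of the filtering stage at block 194.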
- FIG. 1D shows diagrams 1600 - 1602 illustrating real-time coverage of rear view mirror using AM model via a virtuous cycle in accordance with one embodiment of the present invention.
- Diagram 1600 includes vehicle 1606 , blind cars 1610 , and visible cars 1612 wherein the rear view mirrors of vehicle 1606 are set improperly whereby left side viewing coverage 1614 and right side viewing coverage 1618 miss blind cars 1610 .
- coverage 1614 - 1618 covers visible cars 1612
- blind cars 1610 are in blind spots.
- Diagram 1602 illustrates a scenario in which the rear view mirrors of vehicle 1606 are set properly, whereby blind cars 1610 are visible in new coverage 1624 - 1628 .
- vehicle 1606 displays real-time coverage of the rear view mirrors in which all cars 1610 - 1612 are observed, leaving minimal blind spots.
- FIG. 1E is a block diagram 1700 illustrating a dynamic tracking function of AM model containing inward and outward facing cameras via a virtuous cycle in accordance with one embodiment of the present invention.
- Diagram 1700 includes three lanes 1702 - 1706 , vehicle 1706 , cars 1708 - 1712 , wireless transmission towers 1711 - 1712 , and virtuous cycle 1708 .
- car 1712 is acting or driving recklessly which constitutes an abnormal event. It should be noted that the underlying concept of the exemplary embodiment(s) of the present invention would not change if one or more cars (components or elements) were added to or removed from diagram 1700 .
- the AM model is able to notify the driver of a nearby abnormal event and direct the driver to monitor the situation via one or more mirrors that dynamically track the abnormal event, facilitated by the virtuous cycle 1708 .
- the dynamic tracking function is able to move or turn mirror(s) to track the movement caused by the abnormal event. For example, when car 1712 is detected by car 1710 speeding and changing multiple lanes at once as indicated by numeral 1716 , car 1710 reports the reckless driving behavior as an abnormal event via wireless signal 1718 to virtuous cycle 1708 via wireless tower 1711 . After determining the abnormal event based on cloud data, virtuous cycle 1708 pushes the abnormal event to vehicle 1706 via wireless signals and connections 1722 - 1724 . The left external side mirror is automatically adjusted from original coverage 1728 to situational coverage 1730 , which will track the movement of car 1712 . The tracking allows the driver to monitor the abnormal situation more effectively.
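The mirror reorientation step in this scenario can be sketched as a small tracking update: nudge the mirror's yaw toward the bearing of the event's projected location, limited per update and clamped to the mirror's travel. The bearing computation, step size, and travel limits below are illustrative assumptions:

```python
import math

def track_event(mirror_yaw, event_xy, vehicle_xy,
                limits=(-60.0, 60.0), step=5.0):
    """One tracking update. Positions are (x=right, z=forward) in meters
    relative to a shared frame; angles in degrees."""
    dx = event_xy[0] - vehicle_xy[0]
    dz = event_xy[1] - vehicle_xy[1]
    bearing = math.degrees(math.atan2(dx, dz))       # angle to the event
    delta = max(-step, min(step, bearing - mirror_yaw))
    return max(limits[0], min(limits[1], mirror_yaw + delta))

yaw = -20.0
for _ in range(3):                                   # event behind-left
    yaw = track_event(yaw, event_xy=(-8.0, -10.0), vehicle_xy=(0.0, 0.0))
print(yaw)                                           # -35.0
```

Each update moves the mirror at most `step` degrees, so the coverage sweeps smoothly from the original orientation toward the event rather than jumping.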
- An advantage of using a dynamic tracking function of AM model is that it provides an additional vision to the driver via mirror operation or adjustment.
- FIG. 1F is a logic block diagram illustrating a pipeline process 150 of an outward facing camera capable of identifying and classifying detected object(s) using a virtuous cycle in accordance with one embodiment of the present invention.
- Outward facing camera 151 collects images and the images are stored in a queue 152 .
- the scaled image is forwarded to object detection 154 .
- Object detection 154 generates a collection of object information which is forwarded to queue 155 .
- the object information which includes bounding-box, object category, object orientation, and object distance is forwarded to component 156 and router 157 .
- the categorized data is forwarded to map 158 .
- the recognizer is forwarded to router 157 .
- the output images are forwarded to block 159 which uses classifier 130 - 131 to classify the images and/or objects.
- Pipeline process 150 illustrates a logic processing flow which is instantiated for the purpose of processing incoming data, extracting metadata on a frame by frame or data packet basis, and forwarding both frames and metadata packets forward through the pipeline.
- Each stage of the pipeline can contain software elements that perform operations upon the current audio or video or sensor data frame.
- the elements in the pipeline can be inserted or removed while the pipeline is running, which allows for an adaptive pipeline that can perform different operations depending on the applications.
- the pipeline process is configured to adapt to various system constraints that can be situationally present. Additionally, elements in the pipeline can have their internal settings updated in real-time, providing the ability to “turn off” or “turn on” elements, or to adjust their configuration settings on the fly.
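An adaptive pipeline with runtime insertion, removal, and on/off switching of elements might be sketched as follows; the stage interface and names are the editor's assumptions, not the patent's implementation:

```python
class Pipeline:
    """Ordered named stages applied to each frame; stages can be added,
    removed, or enabled/disabled while the pipeline is running."""

    def __init__(self):
        self.stages = []                       # list of [name, fn, enabled]

    def add(self, name, fn):
        self.stages.append([name, fn, True])

    def remove(self, name):
        self.stages = [s for s in self.stages if s[0] != name]

    def set_enabled(self, name, enabled):
        for s in self.stages:
            if s[0] == name:
                s[2] = enabled

    def process(self, frame):
        meta = {}                              # metadata packet per frame
        for name, fn, enabled in self.stages:
            if enabled:
                frame = fn(frame, meta)
        return frame, meta

p = Pipeline()
p.add("scale", lambda f, m: f / 2)
p.add("detect", lambda f, m: (m.setdefault("objects", ["car"]), f)[1])
out, meta = p.process(100)
p.set_enabled("detect", False)                 # "turn off" a stage on the fly
out2, meta2 = p.process(100)
print(out, meta, meta2)
```

Each stage both transforms the frame and may append to the frame's metadata packet, matching the description of frames and metadata flowing forward together.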
- Pipeline process 150 includes a metadata packet schema which includes name/value pairs with arbitrary nesting and basic primitive data types such as arrays and structures, and which is used to create a self-describing, both machine and human readable, form of the extracted real-time metadata flowing through the system.
- Such a generalized schema allows multiple software components to agree on how to describe the high level events that are being captured and analyzed and acted upon by the system.
- a schema is constructed to describe the individual locations within a video frame of a person's eyes, nose, mouth, chin line, etc.
- Such a data structure allows a downstream software component to infer even higher level events, such as “this person is looking up at 34 degrees above the horizon” or “this person is looking left 18 degrees left of center.”
- the process can subsequently construct additional metadata packets and insert them into the stream, resulting in higher level semantic metadata that the system is able to act upon.
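A metadata packet of the kind described, nested name/value pairs that a downstream component can turn into higher-level gaze events, might look like this Python sketch. The field names and the linear offset-to-angle mapping are illustrative assumptions, not the patent's schema:

```python
# A self-describing packet: name/value pairs, nesting, and arrays,
# here holding facial landmarks in normalized image coordinates.
packet = {
    "frame": 1024,
    "face": {
        "eyes": [{"x": 0.42, "y": 0.38}, {"x": 0.58, "y": 0.38}],
        "nose": {"x": 0.50, "y": 0.50},
        "chin_line": [{"x": 0.40, "y": 0.70}, {"x": 0.60, "y": 0.70}],
    },
}

def infer_gaze(pkt, fov_deg=90.0):
    """Downstream consumer: map the eye midpoint's offset from image
    center to rough angles, producing a higher-level semantic packet."""
    eyes = pkt["face"]["eyes"]
    cx = sum(e["x"] for e in eyes) / len(eyes)
    cy = sum(e["y"] for e in eyes) / len(eyes)
    yaw = (cx - 0.5) * fov_deg          # left/right of center
    pitch = (0.5 - cy) * fov_deg        # above/below horizon
    return {"event": "gaze",
            "yaw_deg": round(yaw, 1),
            "pitch_deg": round(pitch, 1)}

print(infer_gaze(packet))
```

The derived packet can be inserted back into the stream, which is how statements like "this person is looking up at 34 degrees above the horizon" get produced from raw landmark data.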
- FIG. 1G is a logic block diagram illustrating a pipeline process 160 capable of identifying and classifying face detection, head and gaze orientation, and mouth features using a virtuous cycle in accordance with one embodiment of the present invention.
- Inward facing camera 161 collects images and the images are stored in a queue 162 .
- the scaled image is forwarded to face and head detection 164 .
- the output of detection 164 is forwarded to image transform (“IT”) components 165 - 166 .
- After transformation, the transformed images are forwarded to blocks 169 - 170 .
- the feature map is forwarded to block 167 for pose normalization.
- Block 168 receives face images from IT component 165 and transformed images from block 167 ; the normalized face image is forwarded to block 172 .
- a face ID is identified.
- Block 170 extracts and generates mouth feature(s) of the driver.
- Block 171 processes head and gaze based on output of IT component 166 which receives information with both scaled and unscaled images.
- block 171 is capable of generating various features, such as gaze, head, number of eyes, glasses, and the like.
- FIG. 1H is a logic block diagram 175 illustrating a process of classifying detected object(s) using a virtuous cycle in accordance with one embodiment of the present invention.
- Block 176 is a software element used to classify a pedestrian based on collected external images captured by outward facing cameras. Based on collected data and historical data, pedestrian may be identified.
- Block 177 is a software element used to classify a vehicle based on collected external images captured by outward facing cameras. Based on collected data and historical data, vehicle information can be identified.
- the exemplary classification information includes the model of the vehicle, license plate, state of vehicle registration, and the like. In addition, information such as turn-signals, brake lights, and headlights can also be classified via facilitation of the virtuous cycle.
- Block 178 is a software element used to classify traffic signals or conditions according to collected external images captured by outward facing cameras. For example, according to collected data as well as historical data, the traffic signal can be classified.
- the exemplary classification includes sign, speed limit, stop sign, and the like.
- FIG. 2A is a block diagram 200 illustrating a virtuous cycle capable of detecting or monitoring AM system in accordance with one embodiment of the present invention.
- Diagram 200 which is similar to diagram 100 shown in FIG. 1A , includes a containerized sensor network 206 , real-world scale data 202 , and continuous machine learning 204 .
- continuous machine learning 204 pushes real-time models to containerized sensor network 206 as indicated by numeral 210 .
- Containerized sensor network 206 continuously feeds captured data or images to real-world scale data 202 with uploading in real-time or in a batched format.
- Real-world scale data 202 provides labeled data to continuous machine learning 204 for constant model training as indicated by numeral 212 . It should be noted that the underlying concept of the exemplary embodiment(s) of the present invention would not change if one or more blocks (or elements) were added to or removed from FIG. 2A .
- the virtuous cycle illustrated in diagram 200 is configured to implement the AM system wherein containerized sensor network 206 is similar to vehicle 102 as shown in FIG. 1A and real-world scale data 202 is similar to CBN 104 shown in FIG. 1A . Also, continuous machine learning 204 is similar to MLC 106 shown in FIG. 1A .
- containerized sensor network 206 , such as an automobile or car, contains a containerized sensing device capable of collecting surrounding information or images using onboard sensors or a sensor network when the car is in motion. Based on the AM model, the collected surrounding information is selectively recorded to a local storage or memory.
- Real-world scale data 202 , such as cloud or CBN, which is wirelessly coupled to the containerized sensing device, is able to correlate cloud data with recently obtained AM data for producing labeled data.
- real-world scale data 202 generates AM labeled data based on historical AM cloud data and the surrounding information sent from the containerized sensing device.
- Continuous machine learning 204 is configured to train and improve AM model based on the labeled data from real-world scale data 202 .
- With continuous data gathering and AM model training, the AM system will be able to learn, obtain, and/or collect all available data for the population samples.
- a virtuous cycle includes partition-able Machine Learning networks, training partitioned networks, partitioning a network using sub-modules, and composing partitioned networks.
- a virtuous cycle involves data gathering from a device, creating intelligent behaviors from the data, and deploying the intelligence.
- one partition idea uses knowledge of the age of a driver to partition “dangerous driving” into multiple models selectively deployed by an “age detector.” An advantage of using such partitioned models is that the models should be able to perform a better job of recognition with the same resources because the domain of discourse is now smaller. Note that, even if some behaviors overlap by age, the partitioned models can have common recognition components.
- “dangerous driving” can be further partitioned by weather condition, time of day, traffic conditions, et cetera.
- categories of dangerous driving can be partitioned into “inattention”, “aggressive driving”, “following too closely”, “swerving”, “driving too slowly”, “frequent braking”, deceleration, ABS event, et cetera.
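The age-based partitioning idea above can be sketched as a dispatcher that routes features to a smaller per-partition model; the age buckets, feature names, and thresholds are hypothetical, chosen only to make the routing concrete:

```python
class PartitionedModel:
    """Coarse detector (here, age) selects which partitioned
    "dangerous driving" model handles the features."""

    def __init__(self, partitions):
        self.partitions = partitions           # bucket name -> model fn

    def classify(self, age, features):
        bucket = "teen" if age < 25 else "senior" if age >= 65 else "adult"
        return self.partitions[bucket](features)

model = PartitionedModel({
    "teen":   lambda f: "aggressive driving" if f["speed"] > 80 else "normal",
    "adult":  lambda f: "following too closely" if f["gap_m"] < 10 else "normal",
    "senior": lambda f: "driving too slowly" if f["speed"] < 40 else "normal",
})
print(model.classify(19, {"speed": 95, "gap_m": 30}))   # teen partition
print(model.classify(40, {"speed": 95, "gap_m": 5}))    # adult partition
```

Because each partition sees a narrower domain, each lambda here stands in for a smaller model that can afford finer-grained recognition with the same resources.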
- For example, by resisting a steering behavior that is erratic, the car gives the driver direct feedback on their behavior. If the resistance is modest enough, then when the steering behavior is intentional (such as trying to avoid running over a small animal), the driver is still able to perform the irregular action. However, if the driver is texting or inebriated, then the correction may alert them to their behavior and get their attention. Similarly, someone engaged in “road rage” who is driving too close to another car may feel resistance on the gas pedal.
- a benefit of using AM system is to identify driver head position and adjust mirror(s) based on driver head position.
- a model such as AM model includes some individual blocks that are trained in isolation to the larger problem (e.g. weather detection, traffic detection, road type, etc.). Combining the blocks can produce a larger model.
- the sample data may include behaviors that are clearly bad (ABS event, rapid deceleration, midline crossing, being too close to the car in front, etc.).
- one or more sub-modules are built.
- the models include weather condition detection and traffic detection for additional modules intelligence, such as “correction vectors” for “dangerous driving.”
- An advantage of using a virtuous cycle is that it can learn and detect object such as AM in the real world.
- FIG. 2B is a block diagram 230 illustrating an alternative exemplary virtuous cycle capable of detecting AM in accordance with one embodiment of the present invention.
- Diagram 230 includes external data source 234 , sensors 238 , crowdsourcing 233 , and intelligent model 239 .
- components/activities above dotted line 231 are operated in cloud 232 , also known as in-cloud component.
- Components/activities below dotted line 231 are operated in car 236 , also known as in-device or in-car component. It should be noted that the underlying concept of the exemplary embodiment(s) of the present invention would not change if one or more blocks (or elements) were added to or removed from FIG. 2B .
- in-cloud components and in-device components coordinate to perform desirable user specific tasks. While the in-cloud component leverages massive scale to process incoming device information, cloud applications leverage crowd sourced data to produce applications. External data sources can be used to contextualize the applications to facilitate intelligent crowdsourcing. For example, the in-car (or in-phone or in-device) portion of the virtuous cycle pushes intelligent data gathering to the edge application. In one example, edge applications can perform intelligent data gathering as well as intelligent in-car processing. It should be noted that the amount of data gathering may rely on sensor data as well as intelligent models which can be loaded to the edge.
- FIG. 3 is a block diagram 300 illustrating a cloud based network using crowdsourcing approach to improve AM model(s) in accordance with one embodiment of the present invention.
- Diagram 300 includes population of vehicles 302 , sample population 304 , models deployment 306 , correlation component 308 , and cloud application 312 . It should be noted that the underlying concept of the exemplary embodiment(s) of the present invention would not change if one or more blocks (or samples) were added to or removed from FIG. 3 .
- Crowdsourcing is a process of using various sourcing or specific models generated or contributed from other cloud or Internet users to achieve needed services. For example, crowdsourcing relies on the availability of a large population of vehicles, phones, or other devices to source data 302 . For example, a subset of available devices such as sample 304 is chosen by some criterion, such as location, to perform data gathering tasks. To gather data more efficiently, intelligent models are deployed to a limited number of vehicles 306 , reducing the need to upload and process a great deal of data in the cloud. It should be noted that the chosen devices such as cars 306 monitor the environment with the intelligent model and create succinct data about what has been observed. The data generated by the intelligent models is uploaded to the correlated data store as indicated by numeral 308 . It should be noted that the uploading can be performed in real-time for certain information or at a later time for other types of information depending on the need as well as the condition of network traffic.
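The sampling-and-deployment flow of diagram 300 can be sketched as follows; the fleet records, region criterion, and the `model` callable are illustrative assumptions standing in for real device metadata and deployed intelligent models:

```python
def choose_sample(fleet, region, limit=2):
    """Choose a sample population by a location criterion."""
    return [v for v in fleet if v["region"] == region][:limit]

def deploy_and_collect(sample, model):
    """Each chosen vehicle runs the model locally and uploads only
    succinct observations instead of raw sensor streams."""
    return [{"id": v["id"], "observation": model(v)} for v in sample]

fleet = [{"id": 1, "region": "SF", "speed": 70},
         {"id": 2, "region": "LA", "speed": 50},
         {"id": 3, "region": "SF", "speed": 30}]
sample = choose_sample(fleet, "SF")
uploads = deploy_and_collect(sample,
                             lambda v: "fast" if v["speed"] > 60 else "slow")
print(uploads)
```

The point of the sketch is the data reduction: only the short `observation` strings would travel to the correlated data store, not the underlying sensor data.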
- Correlated component 308 includes correlated data storage capable of providing a mechanism for storing and querying uploaded data.
- Cloud applications 312 leverage the correlated data to produce new intelligent models, create crowd sourced applications, and other types of analysis.
- FIG. 4 is a block diagram 400 illustrating an AM system using the virtuous cycle in accordance with one embodiment of the present invention.
- Diagram 400 includes a correlated data store 402 , machine learning framework 404 , and sensor network 406 .
- Correlated data store 402 , machine learning framework 404 , and sensor network 406 are coupled by connections 410 - 416 to form a virtuous cycle as indicated by numeral 420 . It should be noted that the underlying concept of the exemplary embodiment(s) of the present invention would not change if one or more blocks (circuit or elements) were added to or removed from FIG. 4 .
- Correlated data store 402 manages real-time streams of data in such a way that correlations between the data are preserved.
- Sensor network 406 represents the collection of vehicles, phones, stationary sensors, and other devices, and is capable of uploading real-time events into correlated data store 402 via a wireless communication network 412 in real-time or in a batched format.
- Stationary sensors include, but are not limited to, municipal cameras, webcams in offices and buildings, parking lot cameras, security cameras, and traffic cams capable of collecting real-time images.
- The stationary cameras, such as municipal cameras and webcams in offices, are usually configured to point at streets, buildings, or parking lots, and the images captured by such stationary cameras can be used for accurate labeling.
- Fusing motion images captured by vehicles with still images captured by stationary cameras can track objects such as cars more accurately.
- Combining or fusing stationary sensors and vehicle sensors can provide both labeling data and historical stationary sampling data, also known as the stationary “fabric”. It should be noted that during crowdsourcing applications, fusing stationary data (e.g., stationary cameras can collect vehicle speed and position) with real-time moving images can improve the ML process.
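One plausible reading of this fusion step is a timestamp join: a stationary camera's speed/position measurement labels any vehicle-captured frame taken near the same instant. The record fields (`t`, `frame`, `speed`, `pos`) are assumptions for illustration:

```python
def fuse_labels(vehicle_events, stationary_obs, max_dt=1.0):
    """Attach the speed/position measured by a stationary camera to each
    moving image captured near the same instant, yielding labeled data."""
    labeled = []
    for ev in vehicle_events:
        best = min(stationary_obs, key=lambda o: abs(o["t"] - ev["t"]))
        if abs(best["t"] - ev["t"]) <= max_dt:
            labeled.append({**ev, "speed": best["speed"], "pos": best["pos"]})
    return labeled

vehicle_events = [{"t": 10.2, "frame": "img-001"}, {"t": 99.0, "frame": "img-002"}]
stationary_obs = [{"t": 10.0, "speed": 48.0, "pos": (12.5, 3.1)}]
result = fuse_labels(vehicle_events, stationary_obs)
# only img-001 gets a speed label; img-002 has no observation within 1 s
print(result)
```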
- Machine Learning (“ML”) framework 404 manages sensor network 406 and provides mechanisms for analysis and training of ML models.
- ML framework 404 draws data from correlated data store 402 via a communication network 410 for the purpose of training models and/or labeled data analysis.
- ML framework 404 can deploy data gathering modules to gather specific data as well as deploy ML models based on the previously gathered data.
- The data upload, training, and model deployment cycle can be continuous to enable continuous improvement of models.
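The continuous upload/train/deploy cycle can be sketched abstractly as a loop; the `gather`, `train`, and `deploy` callables are placeholders, not APIs from the patent:

```python
def virtuous_cycle(gather, train, deploy, rounds=3):
    """One way to express the continuous upload/train/deploy loop: each
    round, the currently deployed model gathers data, which trains its
    successor, which is then pushed back to the edge."""
    model = {"version": 0}
    for _ in range(rounds):
        data = gather(model)          # edge devices create succinct data
        model = train(model, data)    # ML framework improves the model
        deploy(model)                 # new model deployed to the edge
    return model

deployed = []
final = virtuous_cycle(
    gather=lambda m: [f"obs-v{m['version']}"],
    train=lambda m, d: {"version": m["version"] + 1, "trained_on": d},
    deploy=deployed.append,
)
print(final["version"], len(deployed))   # -> 3 3
```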
- FIG. 5 is a block diagram 500 illustrating an exemplary process of correlating AM data in accordance with one embodiment of the present invention.
- Diagram 500 includes source input 504 , real-time data management 508 , history store 510 , and crowd sourced applications 512 - 516 .
- Source input 504 includes cars, phones, tablets, watches, computers, and the like, capable of collecting a massive amount of data or images, which is passed onto real-time data management 508 as indicated by numeral 506 . It should be noted that the underlying concept of the exemplary embodiment(s) of the present invention would not change if one or more blocks (or elements) were added to or removed from FIG. 5 .
- A correlated system includes a real-time portion and a batch/historical portion.
- The real-time part aims to leverage new data in near or approximately real-time.
- Real-time component or management 508 is configured to manage a massive amount of influx data 506 coming from cars, phones, and other devices 504 .
- Real-time data management 508 transmits processed data in bulk to the batch/historical store 510 and routes the data to crowd sourced applications 512 - 516 in real-time.
- Crowd sourced applications 512 - 516 leverage real-time events to track, analyze, and store information that can be offered to users, clients, and/or subscribers.
- The batch/historical side of correlated data store 510 maintains a historical record of potentially all events consumed by the real-time framework.
- Historical data can be gathered from the real-time stream and stored in a history store 510 that provides high-performance, low-cost, and durable storage.
- Real-time data management 508 and history store 510 , coupled by a connection 502 , are configured to perform AM data correlation as indicated by the dotted line.
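A minimal sketch of the dual routing performed by real-time data management 508 (immediate delivery to crowd sourced applications, bulk transfer to history store 510) might look like this; the in-memory batching and the batch size are assumptions:

```python
class RealTimeManager:
    """Sketch of component 508: incoming events go to subscribed crowd
    sourced applications immediately, and are also accumulated and moved
    in bulk to the batch/historical store."""
    def __init__(self, batch_size=2):
        self.apps, self.history, self._batch = [], [], []
        self.batch_size = batch_size

    def subscribe(self, app):
        self.apps.append(app)

    def ingest(self, event):
        for app in self.apps:          # real-time routing to applications
            app(event)
        self._batch.append(event)      # bulk transfer to the historical side
        if len(self._batch) >= self.batch_size:
            self.history.extend(self._batch)
            self._batch = []

mgr = RealTimeManager()
seen = []
mgr.subscribe(seen.append)
for e in ["e1", "e2", "e3"]:
    mgr.ingest(e)
print(seen, mgr.history)   # -> ['e1', 'e2', 'e3'] ['e1', 'e2']
```

The third event has reached the application in real-time but still sits in the pending batch, illustrating that the historical record lags the stream slightly.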
- FIG. 6 is a block diagram 600 illustrating an exemplary process of real-time data for AM system in accordance with one embodiment of the present invention.
- Diagram 600 includes data input 602 , gateway 606 , normalizer 608 , queue 610 , dispatcher 616 , storage conversion 620 , and historical data storage 624 .
- The process of real-time data management further includes a component 614 for publish and subscribe. It should be noted that the underlying concept of the exemplary embodiment(s) of the present invention would not change if one or more blocks (circuit or elements) were added to or removed from FIG. 6 .
- The real-time data management, in one embodiment, is able to handle a large number (i.e., tens of millions) of reported events to the cloud as indicated by numeral 604 .
- API (application program interface) gateway 606 can handle multiple functions such as client authentication and load balancing of events pushed into the cloud.
- The real-time data management can leverage standard HTTP protocols.
- The events are routed to stateless servers for performing data scrubbing and normalization as indicated by numeral 608 .
- The events from multiple sources 602 are aggregated together into a scalable/durable/consistent queue as indicated by numeral 610 .
- An event dispatcher 616 provides a publish/subscribe model for crowd source applications 618 which enables each application to look at a small subset of the event types.
- The heterogeneous event stream, for example, is captured and converted to files for long-term storage as indicated by numeral 620 .
- Long-term storage 624 provides a scalable and durable repository for historical data.
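The gateway-normalize-queue-dispatch path of diagram 600 can be sketched end to end; the API key check, the event shapes, and the in-memory queue stand in for the real authentication, durable queue, and file-conversion components:

```python
from collections import deque

def gateway(event, api_keys=frozenset({"k1"})):
    # Client authentication at the API gateway (606).
    if event.get("key") not in api_keys:
        raise PermissionError("unauthenticated client")
    return event

def normalize(event):
    # Stateless scrubbing/normalization (608): drop the key, canonical type.
    return {"type": event["type"].lower(), "payload": event["payload"]}

queue = deque()      # scalable/durable/consistent queue (610), in memory here
subscribers = {}     # event dispatcher (616): event type -> handler list

def dispatch():
    # Publish/subscribe: each application sees only its subset of types.
    while queue:
        ev = queue.popleft()
        for handler in subscribers.get(ev["type"], []):
            handler(ev)

speeds = []
subscribers["speed"] = [lambda ev: speeds.append(ev["payload"])]
for raw in [{"key": "k1", "type": "SPEED", "payload": 42},
            {"key": "k1", "type": "GPS", "payload": (1.0, 2.0)}]:
    queue.append(normalize(gateway(raw)))
dispatch()
print(speeds)   # -> [42]
```

The GPS event passes through the queue but reaches no handler, showing how each crowd source application looks at only a small subset of the event types.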
- FIG. 7 is a block diagram 700 illustrating a crowd sourced application model for AM model in accordance with one embodiment of the present invention.
- Diagram 700 includes a gateway 702 , event handler 704 , state cache 706 , state store 708 , client request handler 710 , gateway 712 , and source input 714 .
- Gateway 702 receives an event stream from an event dispatcher, and API gateway 712 receives information/data from input source 714 . It should be noted that the underlying concept of the exemplary embodiment(s) of the present invention would not change if one or more blocks (or elements) were added to or removed from FIG. 7 .
- The crowd sourced application model facilitates events being routed to a crowd source application from a real-time data manager.
- The events enter gateway 702 using a simple push call.
- The events are converted into inserts or modifications to a common state store.
- State store 708 is able to hold data from multiple applications and is scalable and durable.
- State store 708 , besides historical data, is configured to store present data, information about “future data”, and/or data that can be shared across applications, such as predictive AI (artificial intelligence).
- State cache 706 , in one example, is used to provide fast access to commonly requested data stored in state store 708 .
- The application can then be used by clients.
- API gateway 712 provides authentication and load balancing.
- Client request handler 710 leverages state store 708 for providing client data.
- An onboard AM model is able to handle real-time AM detection based on triggering events. For example, after ML models or AM models for AM detection have been deployed to all or most of the vehicles, the deployed ML models report collected data to the AM system, facilitating the issuance of real-time warnings for dangerous events. The information or data relating to the real-time dangerous events or the AM system is stored in state store 708 . Vehicles 714 looking for AM detection can, for example, access the AM system using gateway 712 .
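The event-to-state-store conversion and the read-through state cache described for diagram 700 might be sketched as follows; the application/key naming scheme is an assumption:

```python
class StateStore:
    """Durable per-application state (708) with a read-through cache (706).
    Incoming events become inserts or modifications to the common store."""
    def __init__(self):
        self._store, self._cache = {}, {}

    def upsert(self, app, key, value):
        # An event from the gateway becomes an insert/modification.
        self._store[(app, key)] = value
        self._cache.pop((app, key), None)   # invalidate the stale cache entry

    def get(self, app, key):
        # Commonly requested data is served from the cache after first read.
        if (app, key) not in self._cache:
            self._cache[(app, key)] = self._store.get((app, key))
        return self._cache[(app, key)]

store = StateStore()
store.upsert("am-app", "vehicle-7:mirror", {"yaw_deg": -12.0})
store.upsert("am-app", "vehicle-7:mirror", {"yaw_deg": -9.5})
print(store.get("am-app", "vehicle-7:mirror"))   # -> {'yaw_deg': -9.5}
```

Keying the store by (application, key) pairs is one way to let a single store hold data from multiple applications, as the text describes.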
- FIG. 8 is a block diagram 800 illustrating a method of storing AM related data using a geo-spatial objective storage in accordance with one embodiment of the present invention.
- Diagram 800 includes gateway 802 , initial object 804 , put call 806 , find call 808 , get call 810 , SQL (Structured Query Language) 812 , non-SQL 814 , and geo-spatial object storage 820 .
- Geo-spatial object storage 820 stores or holds objects which may include time period, spatial extent, ancillary information, and optional linked file.
- geo-spatial object storage 820 includes UUID (universally unique identifier) 822 , version 824 , start and end time 826 , bounding 828 , properties 830 , data 832 , and file-path 834 .
- UUID 822 identifies an object.
- All objects have version(s) 824 that allow the schema to change in the future.
- Start and end time 826 indicates an optional time period with a start time and an end time.
- An optional bounding geometry 828 is used to specify spatial extent of an object.
- An optional set of properties 830 is used to specify name-value pairs.
- Data 832 can be binary data.
- An optional file path 834 may be used to associate the object with a file containing relevant information, such as an MPEG (Moving Picture Experts Group) stream.
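Fields 822-834 can be collected into a single record type; this sketch uses a Python dataclass and assumes concrete types (tuples for the time period and bounding geometry) that the text leaves open:

```python
import uuid
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class GeoSpatialObject:
    """Fields 822-834 of a stored object; the time period, bounding
    geometry, properties, and linked file are all optional."""
    uid: str = field(default_factory=lambda: str(uuid.uuid4()))   # UUID 822
    version: int = 1                                              # version 824
    start_end: Optional[tuple] = None        # optional time period 826
    bounding: Optional[tuple] = None         # optional spatial extent 828
    properties: dict = field(default_factory=dict)   # name-value pairs 830
    data: bytes = b""                        # binary data 832
    file_path: Optional[str] = None          # optional linked file 834

obj = GeoSpatialObject(start_end=(0.0, 5.0),
                       bounding=(37.7, -122.5, 37.8, -122.4),
                       properties={"source": "municipal-cam"})
print(len(obj.uid) == 36, obj.version)   # -> True 1
```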
- API gateway 802 is used to provide access to the service. Before an object can be added to the store, the object is assigned a UUID, which is provided by the initial object call. Once a UUID is established for a new object, the put call 806 stores the object state. The state is stored durably in Non-SQL store 814 along with the UUID. A portion of the UUID is used as a hash partition for scale-out. The indexable properties, including version, time duration, bounding, and properties, are inserted into a scalable SQL store 812 for indexing. Non-SQL store 814 is used to contain the full object state and is scaled out using the UUID as, for example, a partition key.
- SQL store 812 is used to create index tables that can be used to perform queries.
- SQL store 812 may include three tables 816 containing information, bounding, and properties. For example, the information table holds a primary key, the object's UUID, a creation timestamp, the state of the object, and the object properties “version” and “time duration.” The bounding table holds the bounding geometry from the object and the id of the associated information table entry. The properties table holds property name/value pairs from the object, stored as one name/value pair per row, along with the id of the associated information table entry.
- Find call 808 accepts a query, issues a SQL query to SQL store 812 , and returns a result set containing the UUIDs that match the query.
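The put/find flow across the SQL index and Non-SQL store can be sketched with plain containers standing in for the two stores; the overlap query on the time period is one possible find semantics, not the patent's definition:

```python
import uuid

sql_index = []   # indexable fields only (812): uid, time period, bounding
nosql = {}       # full object state keyed by UUID (814)

def put(obj):
    # The initial-object call assigns the UUID if the object lacks one.
    uid = obj.get("uid") or str(uuid.uuid4())
    obj["uid"] = uid
    nosql[uid] = obj                      # scale-out key: the UUID itself
    sql_index.append({"uid": uid,
                      "start": obj["start"], "end": obj["end"],
                      "bounding": obj["bounding"]})
    return uid

def find(t0, t1):
    """Issue an index query and return the UUIDs whose time period
    overlaps [t0, t1], mirroring the find call 808."""
    return [row["uid"] for row in sql_index
            if row["start"] <= t1 and row["end"] >= t0]

a = put({"start": 0, "end": 10, "bounding": None, "data": b"clip-a"})
b = put({"start": 50, "end": 60, "bounding": None, "data": b"clip-b"})
hits = find(5, 20)
print(hits == [a], nosql[hits[0]]["data"])   # -> True b'clip-a'
```

The index answers the query cheaply with UUIDs; the full object state, including any linked file path, is then fetched from the Non-SQL side by key.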
- FIG. 9 is a block diagram 900 illustrating an exemplary approach of analysis engine analyzing AM data in accordance with one embodiment of the present invention.
- Diagram 900 includes history store 902 , analysis engine 904 , and geo-spatial object store 906 . It should be noted that the underlying concept of the exemplary embodiment(s) of the present invention would not change if one or more blocks (circuit or elements) were added to or removed from FIG. 9 .
- Diagram 900 illustrates analysis engine 904 containing an ML training component capable of analyzing labeled data based on real-time captured AM data and historical data.
- The data transformation engine, in one example, interacts with geo-spatial object store 906 to locate relevant data and with the history store to process the data. Optionally, the transformed data may be stored.
- The virtuous cycle employs the ML training component to provide continuous model training using real-time data as well as historical samples, and delivers the AM detection model to one or more subscribers.
- A feature of the virtuous cycle is the ability to continuously train a model and provide a real-time or near real-time result. It should be noted that the virtuous cycle is applicable to various other fields, such as, but not limited to, business intelligence, law enforcement, medical services, military applications, and the like.
- FIG. 10 is a block diagram 1000 illustrating an exemplary containerized sensor network used for sensing AM system related information in accordance with one embodiment of the present invention.
- Diagram 1000 includes a sensor bus 1002 , streaming pipeline 1004 , and application layer 1006 wherein sensor bus 1002 is able to receive low-bandwidth sources and high-bandwidth sources.
- Streaming pipeline 1004 , in one embodiment, includes ML capable of generating a unique model, such as model 1008 . It should be noted that the underlying concept of the exemplary embodiment(s) of the present invention would not change if one or more blocks (circuit or elements) were added to or removed from FIG. 10 .
- FIG. 11 is a block diagram 1100 illustrating a processing device or computer system which can be installed in a vehicle to support onboard cameras, CAN (Controller Area Network) bus, Inertial Measurement Units, Lidar, et cetera for facilitating virtuous cycle in accordance with one embodiment of the present invention.
- Computer system or AM system 1100 can include a processing unit 1101 , an interface bus 1112 , and an input/output (“IO”) unit 1120 .
- Processing unit 1101 includes a processor 1102 , a main memory 1104 , a system bus 1111 , a static memory device 1106 , a bus control unit 1105 , I/O element 1130 , and AM element 1185 . It should be noted that the underlying concept of the exemplary embodiment(s) of the present invention would not change if one or more blocks (circuit or elements) were added to or removed from FIG. 11 .
- Bus 1111 is used to transmit information between various components and processor 1102 for data processing.
- Processor 1102 may be any of a wide variety of general-purpose processors, embedded processors, or microprocessors, such as ARM® embedded processors, Intel® Core™ Duo, Core™ Quad, Xeon®, Pentium™ microprocessors, Motorola™ 68040, AMD® family processors, or PowerPC™ microprocessors.
- Main memory 1104 which may include multiple levels of cache memories, stores frequently used data and instructions.
- Main memory 1104 may be RAM (random access memory), MRAM (magnetic RAM), or flash memory.
- Static memory 1106 may be a ROM (read-only memory), which is coupled to bus 1111 , for storing static information and/or instructions.
- Bus control unit 1105 is coupled to buses 1111 - 1112 and controls which component, such as main memory 1104 or processor 1102 , can use the bus.
- Bus control unit 1105 manages the communications between bus 1111 and bus 1112 .
- I/O unit 1120 in one embodiment, includes a display 1121 , keyboard 1122 , cursor control device 1123 , and communication device 1125 .
- Display device 1121 may be a liquid crystal device, cathode ray tube (“CRT”), touch-screen display, or other suitable display device.
- Display 1121 projects or displays images of a graphical planning board.
- Keyboard 1122 may be a conventional alphanumeric input device for communicating information between computer system 1100 and computer operator(s).
- Cursor control device 1123 is another type of user input device.
- AM element 1185 in one embodiment, is coupled to bus 1111 , and configured to interface with the virtuous cycle for facilitating AM performance. For example, if AM system 1100 is installed in a car, AM element 1185 is used to operate the AM model as well as interface with the cloud based network. If AM system 1100 is placed at the cloud based network, AM element 1185 can be configured to handle the correlating process for generating labeled data for AM data.
- Communication device 1125 is coupled to bus 1111 for accessing information from remote computers or servers, such as server 104 or other computers, through wide-area network 102 .
- Communication device 1125 may include a modem or a network interface device, or other similar devices that facilitate communication between computer 1100 and the network.
- Computer system 1100 may be coupled to a number of servers via a network infrastructure such as the Internet.
- The exemplary embodiment of the present invention includes various processing steps, which will be described below.
- The steps of the embodiment may be embodied in machine or computer executable instructions.
- The instructions can be used to cause a general purpose or special purpose system, which is programmed with the instructions, to perform the steps of the exemplary embodiment of the present invention.
- the steps of the exemplary embodiment of the present invention may be performed by specific hardware components that contain hard-wired logic for performing the steps, or by any combination of programmed computer components and custom hardware components.
- FIG. 12 is a flowchart 1200 illustrating a process of AM system capable of automatically setting mirrors with minimum blind spots in accordance with one embodiment of the present invention.
- The process, at block 1202 , is capable of receiving a mirror resetting signal indicating that at least one mirror mounted on a vehicle requires an adjustment.
- The historical cloud data associated with the vehicle and driver, at block 1206 , is obtained from the virtuous cycle.
- The process subsequently adjusts at least one mirror to an orientation with a minimal blind spot in accordance with the driver head position shown in the internal images and the historical cloud data.
- The internal images are continuously obtained for a predefined wait period until the driver settles down so that an accurate calculation of the driver head position can be computed.
- The set of outward facing cameras mounted on the vehicle can be activated for recording external surrounding images representing the geographic environment in which the vehicle operates.
- The AM model is capable of tracking surrounding environmental events in accordance with the external surrounding images and historical data supplied by the virtuous cycle.
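As a geometric aside, the dependency of mirror orientation on driver head position follows from the law of reflection: the mirror normal must bisect the mirror-to-eye direction and the direction toward the area to be shown. The 2-D top-down coordinates below are illustrative assumptions, not measurements from the disclosure:

```python
import math

def mirror_yaw_deg(head, mirror, target):
    """Law-of-reflection sketch in the horizontal plane: the mirror normal
    bisects the direction from the mirror back to the driver's eyes and the
    direction toward the area to be shown (e.g., the adjacent lane)."""
    def unit(v):
        n = math.hypot(*v)
        return (v[0] / n, v[1] / n)
    to_eye = unit((head[0] - mirror[0], head[1] - mirror[1]))
    to_target = unit((target[0] - mirror[0], target[1] - mirror[1]))
    normal = unit((to_eye[0] + to_target[0], to_eye[1] + to_target[1]))
    return math.degrees(math.atan2(normal[1], normal[0]))

# A driver seated centrally vs. leaning 10 cm leftward needs a different
# mirror yaw -- which is why the head position in the internal images matters.
centered = mirror_yaw_deg(head=(1.0, 0.4), mirror=(0.0, 0.9), target=(-8.0, 2.4))
leaning = mirror_yaw_deg(head=(1.0, 0.5), mirror=(0.0, 0.9), target=(-8.0, 2.4))
print(round(centered, 1), round(leaning, 1))
```

A few centimeters of head movement changes the required yaw by a couple of degrees, which is consistent with the process waiting for the driver to settle before computing the head position.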
Abstract
Description
- This application claims the benefit of priority based upon U.S. Provisional Patent Application having an application Ser. No. 62/372,999, filed on Aug. 10, 2016, and having a title of “Method and System for Providing Information Using Collected and Stored Metadata,” which is hereby incorporated by reference in its entirety.
- This application is related to the following co-pending application assigned to the Assignee of the present invention.
- a. Application Ser. No. 15/672,747, filed Aug. 9, 2017, entitled “Method and Apparatus for Providing Information via Collected and Stored Metadata Using Inferred Attentional Model,” invented by the same inventors with an Attorney's docket No. 1152.P0002US;
- b. Application Ser. No. 15/672,832, filed Aug. 9, 2017, entitled “Method and Apparatus for Providing Driver Information Via Audio and Video Metadata Extraction,” invented by the same inventors with an Attorney's docket No. 1152.P0006US; and
- c. Application No. ______, filed Aug. 10, 2017, entitled “Method and Apparatus for Providing Goal Oriented Navigational Directions,” invented by the same inventors with an Attorney's docket No. 1152.P0008US.
- The exemplary embodiment(s) of the present invention relates to the field of communication networks. More specifically, the exemplary embodiment(s) of the present invention relates to operating an intelligent machine using a virtuous cycle between cloud, machine learning, and containerized sensors.
- With the increasing popularity of automation and intelligent electronic devices, such as computerized machines, IoT (the Internet of Things), smart vehicles, smart phones, drones, mobile devices, airplanes, and artificial intelligence (“AI”), the demand for intelligent machines and faster real-time responses is increasing. To properly provide machine learning, a significant number of pieces, such as data management, model training, and data collection, need to be improved.
- A conventional type of machine learning is, in itself, an exploratory process which may involve trying different kinds of models, such as convolutional networks, RNNs (recurrent neural networks), et cetera. Machine learning or training typically concerns a wide variety of hyper-parameters that change the shape of the model and its training characteristics. Model training generally requires intensive computation. As such, real-time response via a machine learning model can be challenging.
- A drawback associated with a traditional automobile or vehicle is that most mirrors, especially the external mirrors on both sides of the vehicle, are typically set incorrectly, so that the driver often has a large blind spot on either side of the car. Typically, the driver head position differs between the time the mirror is set and the time the vehicle is driven.
- One embodiment of the presently claimed invention discloses a method and/or system capable of adaptively adjusting one or more mirrors mounted on a vehicle via an automatic mirror-setting (“AM”) model managed by a virtuous cycle containing a machine learning center (“MLC”) and a cloud based network (“CBN”). The system or AM system includes a set of mirrors, a set of inward facing cameras, a vehicle onboard computer (“VOC”), and an AM module. In one embodiment, the mirrors, attached to the vehicle, are configured to capture at least a portion of the external environment in which the vehicle operates. In one example, the mirrors include a left exterior side mirror, a right exterior side mirror, and an interior center mirror. The external environment includes roads, nearby structures, pedestrians, traffic conditions, nearby cars, and traffic lights.
- The inward facing cameras, mounted inside of the vehicle, are configured to collect internal images including operator facial features showing operator visual characteristics. The VOC, which is coupled to the CBN, is configured to determine operator vision metadata based on the internal images, operator visual characteristics, and historical stored data. In one example, the inward facing cameras include multiple interiorly mounted image sensors capable of capturing internal images relating to the position of the driver relative to the driver seat and the interior of the vehicle. The operator visual characteristics include the number of eyes in the operator facial features. The operator visual characteristics also include peripheral vision, vision boundary, and height of visual center.
- The AM module is able to adaptively set a mirror to an optimal orientation so that the area of the external blind spot is minimized. The AM module includes at least a portion of an AM model which is able to dynamically adjust the orientation of at least one of the plurality of mirrors to show an event associated with the external environment based on the external images and historical data from the CBN. In one embodiment, the AM model includes an abnormal tracking function which is able to realign the orientation of at least one of the plurality of mirrors to continuously track an abnormal event in response to the external images and real-time cloud data submitted by other nearby vehicles. It should be noted that the AM model is trained by the MLC, which is coupled to the VOC. A function of the MLC is to train and improve the AM model based on the labeled data from the CBN.
- In one aspect, the AM system further includes outward facing cameras mounted on the vehicle for collecting external images representing the surrounding environment in which the vehicle operates. The CBN is wirelessly coupled to the VOC and configured to correlate and generate labeled data associated with AM data based on historical cloud data, internal images, and external images. The outward facing cameras are configured to capture real-time images as the vehicle moves across a geographical area.
- In an alternative embodiment, the presently claimed invention discloses a method or process for interactively setting a mirror mounted on a vehicle via metadata extraction utilizing a virtuous cycle including sensors, the MLC, and the CBN. The process is capable of receiving a mirror resetting signal indicating that at least one mirror mounted on a vehicle requires an adjustment. Upon activating at least a portion of the inward facing cameras mounted in the vehicle for capturing internal images including the driver eye level with respect to the interior of the vehicle, historical cloud data associated with the vehicle and driver is obtained from the virtuous cycle. The process subsequently adjusts at least one mirror to an orientation with a minimal blind spot in accordance with the driver head position shown in the internal images and the historical cloud data. In one aspect, the internal images are continuously obtained for a predefined wait period until the driver settles down so that an accurate calculation of the driver head position can be computed. It should be noted that the set of outward facing cameras mounted on the vehicle can be activated for recording external surrounding images representing the geographic environment in which the vehicle operates. In one aspect, the AM model is capable of tracking surrounding environmental events in accordance with the external surrounding images and historical data supplied by the virtuous cycle.
- In an alternative embodiment, the AM system is able to perform a process configured to utilize one of the external mirrors mounted on a vehicle to dynamically track an abnormal event, facilitated via the virtuous cycle. After receiving a message, from cloud based data pushed by the MLC, of an abnormal event detected in the surrounding area in which the vehicle operates, images showing the driver head position captured by a set of interior cameras are obtained while the driver operates the moving vehicle. The process is capable of adaptively adjusting the orientation of at least one mirror to track the abnormal event based on its projected location according to the message, so that the driver is able to see the abnormal event. In one example, the AM model is able to issue a notice telling the driver to watch the abnormal event in the newly reoriented mirror. It should be noted that labeled data representing the driver reaction responding to the abnormal event is uploaded back to the CBN for facilitating AM model training at the MLC.
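The abnormal-event tracking behavior can be sketched as a bounded servo loop that turns the mirror until the event's bearing falls inside the mirror's field of view, then issues the notice to the driver; the bearing, field-of-view, and step values are hypothetical:

```python
def track_abnormal_event(mirror_yaw, event_bearing, fov_deg=20.0, step=5.0):
    """Turn the mirror in bounded steps until the reported abnormal event's
    bearing (degrees, relative to the vehicle) falls inside the mirror's
    field of view, then tell the driver where to look."""
    notices = []
    while abs(event_bearing - mirror_yaw) > fov_deg / 2:
        mirror_yaw += step if event_bearing > mirror_yaw else -step
    notices.append(f"abnormal event visible in mirror at {mirror_yaw:.0f} deg")
    return mirror_yaw, notices

# Event reported at bearing 25 deg while the mirror points at -10 deg.
yaw, notices = track_abnormal_event(mirror_yaw=-10.0, event_bearing=25.0)
print(yaw, notices[0])   # -> 15.0 abnormal event visible in mirror at 15 deg
```

Stepping rather than jumping models a physical mirror actuator; the notice corresponds to the AM model telling the driver to watch the newly reoriented mirror.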
- Additional features and benefits of the exemplary embodiment(s) of the present invention will become apparent from the detailed description, figures and claims set forth below.
- The exemplary embodiment(s) of the present invention will be understood more fully from the detailed description given below and from the accompanying drawings of various embodiments of the invention, which, however, should not be taken to limit the invention to the specific embodiments, but are for explanation and understanding only.
- FIGS. 1A-1B are block diagrams illustrating a virtuous cycle facilitating an automatic mirror-setting (“AM”) system capable of adaptively adjusting mirror(s) via a virtuous cycle in accordance with one embodiment of the present invention;
- FIGS. 1C-1E are diagrams illustrating an AM model providing mirror adjustment using inward and/or outward facing cameras via a virtuous cycle in accordance with one embodiment of the present invention;
- FIGS. 1F-1H are block diagrams illustrating a pipeline process of an outward facing camera capable of identifying and classifying detected object(s) using a virtuous cycle in accordance with one embodiment of the present invention;
- FIGS. 2A-2B are block diagrams illustrating a virtuous cycle capable of facilitating AM model detection in accordance with one embodiment of the present invention;
- FIG. 3 is a block diagram illustrating a cloud based network using a crowdsourcing approach to improve AM model(s) in accordance with one embodiment of the present invention;
- FIG. 4 is a block diagram illustrating an AM model or system using the virtuous cycle in accordance with one embodiment of the present invention;
- FIG. 5 is a block diagram illustrating an exemplary process of correlating AM data in accordance with one embodiment of the present invention;
- FIG. 6 is a block diagram illustrating an exemplary process of real-time data management for AM model in accordance with one embodiment of the present invention;
- FIG. 7 is a block diagram illustrating a crowd sourced application model for AM model in accordance with one embodiment of the present invention;
- FIG. 8 is a block diagram illustrating a method of storing AM related data using a geo-spatial objective storage in accordance with one embodiment of the present invention;
- FIG. 9 is a block diagram illustrating an exemplary approach of an analysis engine analyzing AM data in accordance with one embodiment of the present invention;
- FIG. 10 is a block diagram illustrating an exemplary containerized sensor network used for sensing AM related information in accordance with one embodiment of the present invention;
- FIG. 11 is a block diagram illustrating a processing device or computer system which can be installed in a vehicle for facilitating the virtuous cycle in accordance with one embodiment of the present invention; and
- FIG. 12 is a flowchart illustrating a process of an AM model or system capable of automatically setting mirrors with minimum blind spots in accordance with one embodiment of the present invention.
- Embodiments of the present invention are described herein in the context of a method and/or apparatus for facilitating automatic mirror adjustment based on images captured by inward facing cameras via an AM model continuously trained by a virtuous cycle containing a cloud based network, a containerized sensing device, and a machine learning center (“MLC”).
- The purpose of the following detailed description is to provide an understanding of one or more embodiments of the present invention. Those of ordinary skill in the art will realize that the following detailed description is illustrative only and is not intended to be in any way limiting. Other embodiments will readily suggest themselves to such skilled persons having the benefit of this disclosure and/or description.
- In the interest of clarity, not all of the routine features of the implementations described herein are shown and described. It will, of course, be understood that in the development of any such actual implementation, numerous implementation-specific decisions may be made in order to achieve the developer's specific goals, such as compliance with application- and business-related constraints, and that these specific goals will vary from one implementation to another and from one developer to another. Moreover, it will be understood that such a development effort might be complex and time-consuming, but would nevertheless be a routine undertaking of engineering for those of ordinary skill in the art having the benefit of this disclosure.
- Various embodiments of the present invention illustrated in the drawings may not be drawn to scale. Rather, the dimensions of the various features may be expanded or reduced for clarity. In addition, some of the drawings may be simplified for clarity. Thus, the drawings may not depict all of the components of a given apparatus (e.g., device) or method. The same reference indicators will be used throughout the drawings and the following detailed description to refer to the same or like parts.
- In accordance with the embodiment(s) of the present invention, the components, process steps, and/or data structures described herein may be implemented using various types of operating systems, computing platforms, computer programs, and/or general purpose machines. In addition, those of ordinary skill in the art will recognize that devices of a less general purpose nature, such as hardware devices, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), or the like, may also be used without departing from the scope and spirit of the inventive concepts disclosed herein. Where a method comprising a series of process steps is implemented by a computer or a machine and those process steps can be stored as a series of instructions readable by the machine, they may be stored on a tangible medium such as a computer memory device (e.g., ROM (Read Only Memory), PROM (Programmable Read Only Memory), EEPROM (Electrically Erasable Programmable Read Only Memory), FLASH Memory, Jump Drive, and the like), magnetic storage medium (e.g., tape, magnetic disk drive, and the like), optical storage medium (e.g., CD-ROM, DVD-ROM, paper card and paper tape, and the like), and other known types of program memory.
- The term “system” or “device” is used generically herein to describe any number of components, elements, sub-systems, devices, packet switch elements, packet switches, access switches, routers, networks, computer and/or communication devices or mechanisms, or combinations of components thereof. The term “computer” includes a processor, memory, and buses capable of executing instructions, wherein the computer refers to one or a cluster of computers, personal computers, workstations, mainframes, or combinations of computers thereof.
- One embodiment of the presently claimed invention discloses a method or system capable of adjusting an exterior mirror of a vehicle via an automatic mirror-setting (“AM”) model managed by a virtuous cycle containing a machine learning center (“MLC”) and a cloud based network (“CBN”). The system or AM system includes a set of mirrors, a set of inward facing cameras, a vehicle onboard computer (“VOC”), and an AM module. In one embodiment, the mirrors, attached to a vehicle, are configured to capture at least a portion of the external environment in which the vehicle operates. The inward facing cameras, mounted in the vehicle, are configured to collect internal images including operator facial features showing operator visual characteristics. The VOC, which is coupled to the CBN, is configured to determine operator vision metadata based on the internal images, operator visual characteristics, and historical stored data. The AM module is able to adaptively set a mirror to an optimal orientation so that the area of the external blind spot is minimized.
- In an alternative embodiment, the AM system is able to perform a process configured to utilize one of the external mirrors mounted on a vehicle to dynamically track an abnormal event, facilitated via the virtuous cycle. After receiving a message, pushed by the MLC as cloud based data, indicating detection of an abnormal event in the surrounding area in which the vehicle operates, images showing the driver head position are captured by a set of interior cameras while the driver operates the moving vehicle. The process is capable of adaptively adjusting the orientation of at least one mirror to track the abnormal event based on its projected location according to the message so that the driver is able to see the abnormal event. In one example, the AM model is able to issue a notice directing the driver to watch the abnormal event in the newly reoriented mirror. It should be noted that labeled data representing the driver's reaction to the abnormal event is uploaded back to the CBN to facilitate AM model training at the MLC.
-
FIG. 1A is a block diagram 100 illustrating a virtuous cycle facilitating an automatic mirror-setting (“AM”) system capable of adaptively adjusting mirror(s) via a virtuous cycle in accordance with one embodiment of the present invention. Diagram 100 illustrates a virtuous cycle containing a vehicle 102, CBN 104, and MLC 106. In one aspect, MLC 106 can be located remotely or in the cloud. Alternatively, MLC 106 can be a part of CBN 104. It should be noted that the underlying concept of the exemplary embodiment(s) of the present invention would not change if one or more blocks (circuit or elements) were added to or removed from diagram 100. -
Vehicle 102, in one example, can be a car, automobile, bus, train, drone, airplane, truck, and the like, and is capable of moving geographically from point A to point B. To simplify the foregoing discussion, the term “vehicle” or “car” is used. Vehicle 102 includes wheels with ABS (anti-lock braking system), body, steering wheel 108, exterior or outward facing cameras 125, interior (or 360° (degree)) or inward facing camera 126, antenna 124, onboard controller or VOC 123, and operator (or driver) 109. It should be noted that outward facing cameras and/or inward facing cameras 125-126 can be installed as front, side-facing, stereo, and inside cameras of vehicle 102. In one example, vehicle 102 also includes various sensors which sense information related to vehicle state, vehicle status, and driver actions. For example, the sensors, not shown in FIG. 1A, are able to collect information, such as audio, ABS, steering, braking, acceleration, traction control, windshield wipers, GPS (global positioning system), radar, ultrasound, lidar (Light Detection and Ranging), and the like. - VOC or
onboard controller 123 includes a CPU (central processing unit), GPU (graphics processing unit), memory, and disk responsible for gathering data from outward facing or exterior cameras 125, inward facing or interior cameras 126, audio sensors, ABS, traction control, steering wheel, CAN-bus sensors, and the like. In one aspect, VOC 123 executes the AM model received from MLC 106, and interfaces with antenna 124 to communicate with CBN 104 via a wireless communication network 110. Note that the wireless communication network includes, but is not limited to, WIFI, cellular network, Bluetooth network, satellite network, or the like. A function of VOC 123 is to gather or capture real-time surrounding information as well as exterior information when vehicle 102 is moving. -
CBN 104 includes various digital computing systems, such as, but not limited to, server farm 120, routers/switches 121, cloud administrators 119, connected computing devices 116-117, and network elements 118. A function of CBN 104 is to provide cloud computing, which can be viewed as an on-demand Internet based computing service with enormous computing power and resources. Another function of CBN 104 is to improve or refine AM labeled data by correlating captured real-time data with relevant cloud data. The refined AM labeled data is subsequently passed to MLC 106 for model training via a connection 112. -
MLC 106, in one embodiment, provides, refines, trains, and/or distributes models 115, such as the AM model, based on information or data such as AM labeled data provided from CBN 104. It should be noted that the machine learning builds the AM model based on models generated and maintained by various computational algorithms using historical data as well as current data. A function of MLC 106 is to push information, such as a revised AM model, to vehicle 102 via a wireless communications network 114 in real-time. - To identify or collect current operator driving style via
vehicle 102, an onboard AM model, which could reside inside of VOC 123, receives a triggering event or events from built-in sensors such as driver body language, external surrounding conditions, internal detected images, ABS, wheel slippage, turning status, engine status, and the like. The triggering event or events may include, but are not limited to, activation of ABS, texting, drinking, smoking, arguing, playing, fighting, rapid steering, rapid braking, excessive wheel slip, activation of emergency stop, and so on. Upon receiving triggering events via vehicular status signals, the recorded images captured by the inward facing camera or 360 camera are rewound from an earlier time stamp leading up to the receipt of the triggering event(s) to identify, for example, AM labeled data which contains images of driver head position or abnormal events. After correlation of the labeled data with historical sampling data at the CBN, the AM model is retrained and refined at MLC 106. The retrained AM model is subsequently pushed back onto vehicle 102. - It should be noted that by detecting the position of the driver's head, the system can automatically set the mirrors to the safest possible position. For example, by keeping track of the full range of positions the driver's head has been in, the system can determine the “center point” of their normal driving position. Note that the head position usually differs slightly from where drivers might place their heads when performing a mirror adjustment. In one aspect, the AM model has a delay element such that, during operation, upon pressing the “auto mirror set” button, a delay and a tone are issued to allow the driver to position themselves as they will be when driving.
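The rewind-on-trigger behavior described above can be sketched as a rolling buffer of timestamped frames. This is an illustrative sketch only, not the disclosed implementation; the class name, capacity, and timestamp handling are assumptions:

```python
from collections import deque
import time

class FrameRingBuffer:
    """Rolling window of timestamped camera frames. When a triggering
    event arrives (ABS activation, rapid braking, etc.), the window is
    "rewound" to recover footage leading up to the event for labeling."""

    def __init__(self, capacity=900):          # e.g. roughly 30 s at 30 fps
        self._frames = deque(maxlen=capacity)  # oldest frames drop off automatically

    def push(self, frame, timestamp=None):
        # store (timestamp, frame); default to wall-clock time
        self._frames.append((time.time() if timestamp is None else timestamp, frame))

    def rewind(self, trigger_time, lookback_seconds):
        """Return frames captured within lookback_seconds before the trigger."""
        start = trigger_time - lookback_seconds
        return [f for (t, f) in self._frames if start <= t <= trigger_time]
```

In this sketch the extracted clip would then be labeled and uploaded to the CBN; that upload path is outside the scope of the example.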
- During an operation, inward facing
camera 126 captures facial images of the driver or operator 109, including the driver head position and eye level. Upon verifying with CBN 104, a focal direction 107 of operator 109 is identified. After obtaining and processing external images relating to focal direction 107, a possible trajectory 105 toward the location being looked at is obtained. Trajectory 105 and focal direction 107 are subsequently processed and combined in accordance with stored data in the cloud. The object being looked at by operator 109 is identified. In this example, the object is a house 103 near the road. After identifying the driver vision scope and trajectory, the eye level is determined, wherein the eye level or head position will be used to adjust the mirrors to optimal orientations with minimal blind spots. - An advantage of using the AM system is that it reduces blind spots, whereby traffic accidents should be reduced.
-
FIG. 1B illustrates a block diagram 140 showing an operator or driver monitored by the AM system for adaptively adjusting mirrors via a virtuous cycle in accordance with one embodiment of the present invention. Diagram 140 illustrates a driver 148, inward facing camera(s) 142, right external side mirror 143, and exterior camera 144. In one aspect, camera 142, also known as an interior camera or 360 camera, monitors or captures the driver's facial expression 146 and/or driver (or operator) body language such as head position. Upon reading status 149, which indicates stable (accelerometer), looking ahead (gaze), and hands on the steering wheel (no texting), the AM model can conclude whether the driver is behaving normally or abnormally. - During an operation, the interior images captured by inward facing camera(s) 142 can show a location on which
operator 148 is focusing based on the relative eye positions of operator 148. Once the direction of the location, such as direction 145, is identified, the AM system obtains external images captured by outward facing camera(s) 144. After identifying that image 145 is where the operator pays attention based on direction 145, image 145 is recorded and processed. Based on detected trajectory 145, the AM model is able to identify the driver vision associated with the side mirrors. For example, the AM model can identify the optimal orientation for mirror 143 in view of driver vision 141 with minimum blind spots. -
FIG. 1C illustrates diagrams 180 and 198 showing an AM model containing inward facing cameras to automatically set mirrors using a virtuous cycle in accordance with one embodiment of the present invention. Diagram 180, in one embodiment, includes interior car 189, exterior car 187, steering wheel 181, dashboard 182, driver head position 184, inward facing camera 190, and left mirror 188. With assistance of the virtuous cycle, the onboard vehicle computer can calculate driver head position 184 based on the observed images captured by inward facing camera 190. After calculation of the driver head position and his peripheral vision, left mirror 188 is automatically adjusted to view rear view 186 with coverage having a minimum blind spot as indicated by numeral 183. It should be noted that the underlying concept of the exemplary embodiment(s) of the present invention would not change if one or more blocks (components or elements) were added to or removed from diagram 180. - Diagram 180 illustrates a heuristic using a stream of head position data to choose the center point of the eyes. In one embodiment, the AM system employs one or more internal cameras to extract metadata regarding the head position of a driver, and uses that known position in space in order to automatically set the rearview mirrors to the optimal position for that driver. The AM model is, in one aspect, capable of calculating the optimal setting for the rear mirrors based on the obtained driver head position. For example, when drivers are sitting in their normal driving positions, their heads may not stay in one place; instead they sweep through a range of space. The system is able to use extracted metadata about head position, and more importantly, the exact position of the driver's eyes.
By combining information about the exact distance from the eyes to the rear mirrors, a vertical and horizontal angle can be calculated that will allow the driver to see what is happening behind them, while minimizing the size of the “blind spot” that can occur.
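The angle calculation described above can be illustrated with a small geometric sketch. This is an assumption about one way such a calculation could work, not the claimed implementation: by the law of reflection, the mirror's normal must bisect the ray from the mirror to the driver's eyes and the desired rearward viewing direction, so aiming the normal along that bisector shows the driver the target direction. The function and coordinate convention (x forward, y left, z up) are hypothetical:

```python
import math

def _unit(v):
    # normalize a 3-vector to unit length
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def mirror_orientation(eye_pos, mirror_pos, target_dir):
    """Return (yaw_deg, pitch_deg) of the mirror normal.

    The normal is the bisector of the mirror-to-eye ray and the desired
    viewing direction behind the vehicle, per the law of reflection.
    """
    to_eye = _unit(tuple(e - m for e, m in zip(eye_pos, mirror_pos)))
    to_target = _unit(target_dir)
    # bisector of two unit vectors is their (normalized) sum
    normal = _unit(tuple(a + b for a, b in zip(to_eye, to_target)))
    yaw = math.degrees(math.atan2(normal[1], normal[0]))    # horizontal angle
    pitch = math.degrees(math.asin(normal[2]))              # vertical angle
    return yaw, pitch
```

For instance, with the eyes at 45° off the mirror and the target straight behind (180°), the normal lands on the 112.5° bisector, and the pitch term would absorb any difference in eye and mirror height.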
- To operate functions of the AM model, vehicle geometry metrics may be used. For example,
left mirror 188 has metrics of horizontal distance to the center of the steering wheel and vertical distance to the center of the steering wheel. The metrics also define mirror width, length, and height. An advantage of employing the AM model is that it is able to dynamically adjust external and/or internal mirrors to minimize blind spot(s). - Diagram 198 shows a heuristic illustration used for a filtered and weighted average of eye position. After capturing inward facing camera output at
block 191, face and eye position is detected at block 192. Upon generation of a time series of extracted eye positions at block 193, outlier data points are filtered at block 194. At block 195, a weighted average of eye position over a predefined time interval (t1, t2) is calculated and/or obtained. -
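The filter-and-average steps of blocks 193-195 could be sketched as follows. This is an illustrative assumption, not the disclosed algorithm: outliers are dropped using deviation from the median (robust to the spurious detections being removed), and the surviving samples are averaged with weights favoring more recent frames:

```python
import statistics

def _mad_keep_mask(values, thresh=3.0):
    """Keep-mask based on deviation from the median; the median absolute
    deviation is robust to the outliers being filtered. Falls back to
    mean absolute deviation when the MAD is zero."""
    med = statistics.median(values)
    devs = [abs(v - med) for v in values]
    scale = statistics.median(devs) or (sum(devs) / len(devs)) or 1.0
    return [d / scale <= thresh for d in devs]

def filtered_weighted_eye_position(samples, t1, t2, thresh=3.0):
    """samples: iterable of (timestamp, x, y) extracted eye positions.
    Windows to [t1, t2] (block 193), drops outliers (block 194), and
    returns a recency-weighted average position (block 195)."""
    window = [s for s in samples if t1 <= s[0] <= t2]
    if not window:
        return None
    keep_x = _mad_keep_mask([s[1] for s in window], thresh)
    keep_y = _mad_keep_mask([s[2] for s in window], thresh)
    kept = [s for s, kx, ky in zip(window, keep_x, keep_y) if kx and ky]
    if not kept:
        return None
    total = sum(t - t1 + 1e-9 for t, _, _ in kept)  # linear recency weights
    return (sum((t - t1 + 1e-9) * x for t, x, _ in kept) / total,
            sum((t - t1 + 1e-9) * y for t, _, y in kept) / total)
```

The recency weighting is one plausible choice; an exponential decay or a plain mean would slot into the same structure.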
FIG. 1D shows diagrams 1600-1602 illustrating real-time coverage of rear view mirrors using the AM model via a virtuous cycle in accordance with one embodiment of the present invention. Diagram 1600 includes vehicle 1606, blind cars 1610, and visible cars 1612, wherein the rear view mirrors of vehicle 1606 are set improperly, whereby left side viewing coverage 1614 and right side viewing coverage 1618 miss blind cars 1610. Although coverage 1614-1618 covers visible cars 1612, blind cars 1610 are in blind spots. Diagram 1602 illustrates a scenario in which the rear view mirrors of vehicle 1606 are set properly, whereby blind cars 1610 are visible by new coverage 1624-1628. In one example, vehicle 1606 displays real-time rear view mirror coverage such that all cars 1610-1612 are observed, leaving minimal blind spots. -
FIG. 1E is a block diagram 1700 illustrating a dynamic tracking function of the AM model containing inward and outward facing cameras via a virtuous cycle in accordance with one embodiment of the present invention. Diagram 1700 includes three lanes 1702-1706, vehicle 1706, cars 1708-1712, wireless transmission towers 1711-1712, and virtuous cycle 1708. In one example, car 1712 is acting or driving recklessly, which constitutes an abnormal event. It should be noted that the underlying concept of the exemplary embodiment(s) of the present invention would not change if one or more cars (components or elements) were added to or removed from diagram 1700. - In one embodiment, the AM model is able to notify the driver of a nearby abnormal event and direct the driver to monitor the situation via one or more mirrors that dynamically track the abnormal event, facilitated by the
virtuous cycle 1708. In one aspect, the dynamic tracking function is able to move or turn mirror(s) to track the movement caused by the abnormal event. For example, when car 1712 is detected by car 1710 speeding and changing multiple lanes at once, as indicated by numeral 1716, car 1710 reports the reckless driving behavior as an abnormal event via wireless signal 1718 to virtuous cycle 1708 via wireless tower 1711. After determining the abnormal event based on cloud data, virtuous cycle 1708 pushes the abnormal event to vehicle 1706 via wireless signals and connections 1722-1724. The left external side mirror is automatically adjusted from original coverage 1728 to situational coverage 1730, which will track the movement of car 1712. The tracking will allow the driver to monitor the abnormal situation more effectively. - An advantage of using the dynamic tracking function of the AM model is that it provides additional vision to the driver via mirror operation or adjustment.
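The re-aiming step of the dynamic tracking function can be illustrated in two dimensions. This is a toy sketch under assumed coordinates, not the patented mechanism: each time the event's projected location is updated, the mirror yaw is recomputed as the bisector of the mirror-to-eye ray and the mirror-to-event ray, so the event stays reflected toward the driver:

```python
import math

def tracking_yaw(eye_xy, mirror_xy, event_xy):
    """2-D sketch: yaw (degrees) of the mirror normal that keeps the
    reflection of a moving event aimed at the driver's eyes."""
    def unit(dx, dy):
        n = math.hypot(dx, dy)
        return dx / n, dy / n
    ex, ey = unit(eye_xy[0] - mirror_xy[0], eye_xy[1] - mirror_xy[1])
    tx, ty = unit(event_xy[0] - mirror_xy[0], event_xy[1] - mirror_xy[1])
    # normal = bisector of the two unit rays
    return math.degrees(math.atan2(ey + ty, ex + tx))

# Re-aim the mirror as the reported reckless car approaches from behind
# (hypothetical positions; mirror at the origin, x forward, y left):
eye, mirror = (1.0, -0.5), (0.0, 0.0)
path = [(-30.0, 3.0), (-20.0, 3.0), (-10.0, 3.0)]
angles = [tracking_yaw(eye, mirror, p) for p in path]
```

The yaw sweeps monotonically as the event closes in, which is the "situational coverage" motion described above; a real system would also clamp the angle to the mirror actuator's range.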
- FIG. 1F is a logic block diagram illustrating a
pipeline process 150 of an outward facing camera capable of identifying and classifying detected object(s) using a virtuous cycle in accordance with one embodiment of the present invention. Outward facing camera 151 collects images, and the images are stored in a queue 152. After the images are scaled by image scaling component 153, the scaled images are forwarded to object detection 154. Object detection 154 generates a collection of object information which is forwarded to queue 155. The object information, which includes bounding-box, object category, object orientation, and object distance, is forwarded to component 156 and router 157. Upon categorizing the object information at block 156, the categorized data is forwarded to map 158. After recognizing the object based on map 158, the recognition output is forwarded to router 157. After routing information at router 157, the output images are forwarded to block 159, which uses classifiers 130-131 to classify the images and/or objects. -
Pipeline process 150 illustrates a logic processing flow which is instantiated for the purpose of processing incoming data, extracting metadata on a frame-by-frame or data packet basis, and forwarding both frames and metadata packets forward through the pipeline. Each stage of the pipeline can contain software elements that perform operations upon the current audio, video, or sensor data frame. The elements in the pipeline can be inserted or removed while the pipeline is running, which allows for an adaptive pipeline that can perform different operations depending on the application. The pipeline process is configured to adapt to various system constraints that can be situationally present. Additionally, elements in the pipeline can have their internal settings updated in real-time, providing the ability to “turn off” or “turn on” elements, or to adjust their configuration settings on the fly. -
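The adaptive pipeline described above might be sketched as a list of stages that can be toggled, reconfigured, or removed while frames flow through. The class names and the (frame, metadata) convention are illustrative assumptions, not the disclosed design:

```python
class Stage:
    """A pipeline element transforming a (frame, metadata) pair.
    A stage can be disabled ("turned off") without removing it."""
    def __init__(self, name, fn, enabled=True):
        self.name, self.fn, self.enabled = name, fn, enabled

    def __call__(self, frame, meta):
        # disabled stages pass data through untouched
        return self.fn(frame, meta) if self.enabled else (frame, meta)

class Pipeline:
    """Ordered stages; elements may be inserted or removed at runtime."""
    def __init__(self, stages=None):
        self.stages = list(stages or [])

    def remove(self, name):
        self.stages = [s for s in self.stages if s.name != name]

    def process(self, frame):
        meta = {}  # metadata accumulates alongside the frame
        for stage in self.stages:
            frame, meta = stage(frame, meta)
        return frame, meta
```

A scaling stage followed by a detection stage, for instance, would mirror the queue-scale-detect flow of diagram 150; flipping `enabled` on a stage adjusts the pipeline on the fly without rebuilding it.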
Pipeline process 150 includes a metadata packet schema which includes name/value pairs with arbitrary nesting and basic primitive data types such as arrays and structures, and which is used to create a self-describing, machine and human readable form of the extracted real-time metadata flowing through the system. Such a generalized schema allows multiple software components to agree on how to describe the high level events that are being captured, analyzed, and acted upon by the system. For example, a schema is constructed to describe the individual locations within a video frame of a person's eyes, nose, mouth, chin line, etc. Such a data structure allows a downstream software component to infer even higher level events, such as “this person is looking up at 34 degrees above the horizon” or “this person is looking 18 degrees left of center.” The process can subsequently construct additional metadata packets and insert them into the stream, resulting in higher level semantic metadata that the system is able to act upon. -
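A packet following such a nested name/value schema might look like the following. The field and schema names here are invented for illustration; the disclosure does not specify them:

```python
import json

# Hypothetical low-level packet: facial landmark locations in a frame.
landmark_packet = {
    "schema": "face.landmarks/v1",
    "frame_id": 10342,
    "landmarks": {
        "left_eye": {"x": 412, "y": 230},
        "right_eye": {"x": 468, "y": 228},
        "nose": {"x": 440, "y": 265},
    },
}

def derive_gaze_packet(packet, yaw_deg, pitch_deg):
    """A downstream component emits a higher-level semantic packet that
    references its source schema, then inserts it back into the stream."""
    return {
        "schema": "gaze.estimate/v1",
        "frame_id": packet["frame_id"],
        "source": packet["schema"],
        "gaze": {"yaw_deg": yaw_deg, "pitch_deg": pitch_deg},
    }

# Self-describing, and both machine and human readable on the wire:
wire = json.dumps(derive_gaze_packet(landmark_packet, -18.0, 34.0))
```

Because every packet names its own schema, components can agree on event descriptions without sharing code, which is the property the paragraph above emphasizes.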
FIG. 1G is a logic block diagram illustrating a pipeline process 160 capable of identifying and classifying face detection, head and gaze orientation, and mouth features using a virtuous cycle in accordance with one embodiment of the present invention. Inward facing camera 161 collects images, and the images are stored in a queue 162. After the images are scaled by image scaling component 163, the scaled images are forwarded to face and head detection 164. The output of detection 164 is forwarded to image transform (“IT”) components 165-166. After transformation, the transformed images are forwarded to blocks 169-170. After facial feature extraction in block 169, the feature map is forwarded to block 167 for pose normalization. Block 168 receives face images from IT component 165 and transformed images from block 167; the normalized face image is forwarded to block 172. Upon processing the normalized face with an embedding network at block 172, a face ID is identified. - Block 170 extracts and generates mouth feature(s) of the driver.
Block 171 processes head and gaze based on the output of IT component 166, which receives information with both scaled and unscaled images. In one example, block 171 is capable of generating various features, such as gaze, head, number of eyes, glasses, and the like. -
FIG. 1H is a logic block diagram 175 illustrating a process of classifying detected object(s) using a virtuous cycle in accordance with one embodiment of the present invention. Block 176 is a software element used to classify a pedestrian based on collected external images captured by outward facing cameras. Based on collected data and historical data, a pedestrian may be identified. Block 177 is a software element used to classify a vehicle based on collected external images captured by outward facing cameras. Based on collected data and historical data, vehicle information can be identified. The exemplary classification information includes the model of the vehicle, license plate, state of vehicle registration, and the like. In addition, information such as turn signals, brake lights, and headlights can also be classified via facilitation of the virtuous cycle. Block 178 is a software element used to classify traffic signals or conditions according to collected external images captured by outward facing cameras. For example, according to collected data as well as historical data, the traffic signal can be classified. The exemplary classifications include sign, speed limit, stop sign, and the like. -
FIG. 2A is a block diagram 200 illustrating a virtuous cycle capable of detecting or monitoring the AM system in accordance with one embodiment of the present invention. Diagram 200, which is similar to diagram 100 shown in FIG. 1A, includes a containerized sensor network 206, real-world scale data 202, and continuous machine learning 204. In one embodiment, continuous machine learning 204 pushes real-time models to containerized sensor network 206 as indicated by numeral 210. Containerized sensor network 206 continuously feeds captured data or images to real-world scale data 202, uploading in real-time or in a batched format. Real-world scale data 202 provides labeled data to continuous machine learning 204 for constant model training as indicated by numeral 212. It should be noted that the underlying concept of the exemplary embodiment(s) of the present invention would not change if one or more blocks (or elements) were added to or removed from FIG. 2A. - The virtuous cycle illustrated in diagram 200, in one embodiment, is configured to implement the AM system wherein containerized
sensor network 206 is similar to vehicle 102 as shown in FIG. 1A and real-world scale data 202 is similar to CBN 104 shown in FIG. 1A. Also, continuous machine learning 204 is similar to MLC 106 shown in FIG. 1A. In one aspect, containerized sensor network 206, such as an automobile or car, contains a containerized sensing device capable of collecting surrounding information or images using onboard sensors or a sensor network when the car is in motion. Based on the AM model, the collected surrounding information is selectively recorded to a local storage or memory. - Real-
world scale data 202, such as cloud or CBN, which is wirelessly coupled to the containerized sensing device, is able to correlate cloud data with recently obtained AM data to produce labeled data. For example, real-world scale data 202 generates AM labeled data based on historical AM cloud data and the surrounding information sent from the containerized sensing device. -
Continuous machine learning 204, such as MLC or cloud, is configured to train and improve the AM model based on the labeled data from real-world scale data 202. By continuously gathering data and training AM model(s), the AM system will be able to learn, obtain, and/or collect all available data for the population samples. - In one embodiment, a virtuous cycle includes partition-able machine learning networks, training partitioned networks, partitioning a network using sub-modules, and composing partitioned networks. For example, a virtuous cycle involves data gathering from a device, creating intelligent behaviors from the data, and deploying the intelligence. In one example, the partition idea includes knowing the age of a driver, which could place or partition “dangerous driving” into multiple models selectively deployed by an “age detector.” An advantage of using such partitioned models is that the models should be able to perform a better job of recognition with the same resources because the domain of discourse is now smaller. Note that, even if some behaviors overlap by age, the partitioned models can have common recognition components.
- It should be noted that the more context information is collected, the better the recognition that can be generated. For example, “dangerous driving” can be further partitioned by weather condition, time of day, traffic conditions, et cetera. In the “dangerous driving” scenario, categories of dangerous driving can be partitioned into “inattention”, “aggressive driving”, “following too closely”, “swerving”, “driving too slowly”, “frequent braking”, deceleration, ABS event, et cetera.
- For example, by resisting a steering behavior that is erratic, the car gives the driver direct feedback on their behavior. If the resistance is modest enough and the steering behavior is intentional (such as trying to avoid running over a small animal), the driver is still able to perform the irregular action. However, if the driver is texting or inebriated, the correction may alert them to their behavior and get their attention. Similarly, someone engaged in “road rage” who is driving too close to another car may feel resistance on the gas pedal. A benefit of using the AM system is to identify the driver head position and adjust mirror(s) based on the driver head position.
- In one aspect, a model such as the AM model includes individual blocks that are trained in isolation from the larger problem (e.g., weather detection, traffic detection, road type, etc.). Combining the blocks can produce a larger model. Note that the sample data may include behaviors that are clearly bad (ABS event, rapid deceleration, midline crossing, being too close to the car in front, etc.). In one embodiment, one or more sub-modules are built. The models include weather condition detection and traffic detection for additional module intelligence, such as “correction vectors” for “dangerous driving.”
- An advantage of using a virtuous cycle is that it can learn and detect objects, such as AM-related events, in the real world.
-
FIG. 2B is a block diagram 230 illustrating an alternative exemplary virtuous cycle capable of detecting AM in accordance with one embodiment of the present invention. Diagram 230 includes external data source 234, sensors 238, crowdsourcing 233, and intelligent model 239. In one aspect, components/activities above dotted line 231 are operated in cloud 232, also known as the in-cloud component. Components/activities below dotted line 231 are operated in car 236, also known as the in-device or in-car component. It should be noted that the underlying concept of the exemplary embodiment(s) of the present invention would not change if one or more blocks (or elements) were added to or removed from FIG. 2B. - In one aspect, in-cloud components and in-device components coordinate to perform desirable user specific tasks. While the in-cloud component leverages massive scale to process incoming device information, cloud applications leverage crowd sourced data to produce applications. External data sources can be used to contextualize the applications to facilitate intellectual crowdsourcing. For example, the in-car (or in-phone or in-device) portion of the virtuous cycle pushes intelligent data gathering to the edge application. In one example, edge applications can perform intelligent data gathering as well as intelligent in-car processing. It should be noted that the amount of data gathering may rely on sensor data as well as intelligent models which can be loaded to the edge.
-
FIG. 3 is a block diagram 300 illustrating a cloud based network using a crowdsourcing approach to improve AM model(s) in accordance with one embodiment of the present invention. Diagram 300 includes a population of vehicles 302, sample population 304, models deployment 306, correlation component 308, and cloud application 312. It should be noted that the underlying concept of the exemplary embodiment(s) of the present invention would not change if one or more blocks (or samples) were added to or removed from FIG. 3. - Crowdsourcing is a process of using various sourcing or specific models generated or contributed by other cloud or Internet users to achieve needed services. For example, crowdsourcing relies on the availability of a large population of vehicles, phones, or other devices to source
data 302. For example, a subset of available devices, such as sample 304, is chosen by some criterion, such as location, to perform data gathering tasks. To gather data more efficiently, intelligent models are deployed to a limited number of vehicles 306, reducing the need to upload and process a great deal of data in the cloud. It should be noted that the chosen devices, such as cars 306, monitor the environment with the intelligent model and create succinct data about what has been observed. The data generated by the intelligent models is uploaded to the correlated data store as indicated by numeral 308. It should be noted that the uploading can be performed in real-time for certain information or at a later time for other types of information, depending on the need as well as the condition of network traffic. - Correlated
component 308 includes correlated data storage capable of providing a mechanism for storing and querying uploaded data. Cloud applications 312, in one embodiment, leverage the correlated data to produce new intelligent models, create crowd sourced applications, and perform other types of analysis. -
FIG. 4 is a block diagram 400 illustrating an AM system using the virtuous cycle in accordance with one embodiment of the present invention. Diagram 400 includes a correlated data store 402, machine learning framework 404, and sensor network 406. Correlated data store 402, machine learning framework 404, and sensor network 406 are coupled by connections 410-416 to form a virtuous cycle as indicated by numeral 420. It should be noted that the underlying concept of the exemplary embodiment(s) of the present invention would not change if one or more blocks (circuit or elements) were added to or removed from FIG. 4. - In one embodiment, correlated
data store 402 manages real-time streams of data in such a way that correlations between the data are preserved. Sensor network 406 represents the collection of vehicles, phones, stationary sensors, and other devices, and is capable of uploading real-time events into correlated data store 402 via a wireless communication network 412 in real-time or in a batched format. In one aspect, stationary sensors include, but are not limited to, municipal cameras, webcams in offices and buildings, parking lot cameras, security cameras, and traffic cams capable of collecting real-time images. - The stationary cameras, such as municipal cameras and webcams in offices, are usually configured to point at streets, buildings, or parking lots, wherein the images captured by such stationary cameras can be used for accurate labeling. Fusing motion images captured by vehicles with still images captured by stationary cameras can track object(s) such as car(s) more accurately. Combining or fusing stationary sensors and vehicle sensors can provide both labeling data and historical stationary sampling data, also known as stationary “fabric”. It should be noted that during the crowdsourcing applications, fusing stationary data (e.g., stationary cameras can collect vehicle speed and position) with real-time moving images can improve the ML process.
- Machine Learning (“ML”)
framework 404 manages sensor network 406 and provides mechanisms for analysis and training of ML models. ML framework 404 draws data from correlated data store 402 via a communication network 410 for the purpose of training models and/or labeled data analysis. ML framework 404 can deploy data-gathering modules to gather specific data as well as deploy ML models based on the previously gathered data. The data upload, training, and model deployment cycle can be continuous to enable continuous improvement of models. -
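The upload, train, and deploy cycle described above can be sketched as a simple loop. The class and method names below are illustrative assumptions for this sketch, not part of the disclosed system.

```python
# Minimal sketch of the continuous "virtuous cycle": the sensor network
# uploads events, the ML framework trains on the correlated data, and an
# updated model is deployed back out for the next round of gathering.

class VirtuousCycle:
    def __init__(self):
        self.correlated_store = []   # stands in for correlated data store 402
        self.model_version = 0

    def upload(self, events):
        """Sensor network 406 uploads real-time or batched events."""
        self.correlated_store.extend(events)

    def train(self):
        """ML framework 404 draws data from the store and trains a new model."""
        if self.correlated_store:
            self.model_version += 1
        return self.model_version

    def deploy(self):
        """Deploy the newest model back to the sensor network."""
        return {"model_version": self.model_version}

cycle = VirtuousCycle()
cycle.upload([{"speed": 42}, {"speed": 55}])
cycle.train()
print(cycle.deploy())  # {'model_version': 1}
```

Each pass through `upload`/`train`/`deploy` corresponds to one turn of the cycle, which is why the models can improve continuously as new data arrives.
-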
FIG. 5 is a block diagram 500 illustrating an exemplary process of correlating AM data in accordance with one embodiment of the present invention. Diagram 500 includes source input 504, real-time data management 508, history store 510, and crowd-sourced applications 512-516. In one example, source input 504 includes cars, phones, tablets, watches, computers, and the like, capable of collecting a massive amount of data or images which will be passed on to real-time data management 508 as indicated by numeral 506. It should be noted that the underlying concept of the exemplary embodiment(s) of the present invention would not change if one or more blocks (or elements) were added to or removed from FIG. 5. - In one aspect, a correlated system includes a real-time portion and a batch/historical portion. The real-time portion aims to leverage new data in near or approximately real time. Real-time component or
management 508 is configured to manage a massive amount of influx data 506 coming from cars, phones, and other devices 504. In one aspect, after ingesting data in real time, real-time data management 508 transmits processed data in bulk to the batch/historical store 510 as well as routes the data to crowd-sourced applications 512-516 in real time. - Crowd-sourced applications 512-516, in one embodiment, leverage real-time events to track, analyze, and store information that can be offered to users, clients, and/or subscribers. The batch/historical side of correlated
data store 510 maintains a historical record of potentially all events consumed by the real-time framework. In one example, historical data can be gathered from the real-time stream and stored in a history store 510 that provides high-performance, low-cost, and durable storage. In one aspect, real-time data management 508 and history store 510, coupled by a connection 502, are configured to perform AM data correlation as indicated by the dotted line. -
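The dual routing described above, where ingested events go in bulk to the historical store and simultaneously to subscribed crowd-sourced applications, can be sketched as follows. The function names and event shapes are assumptions for illustration only.

```python
# Sketch: real-time data management ingests events, forwards them to a
# batch/historical store, and routes each event to subscribed
# crowd-sourced applications in real time.

history_store = []   # stands in for batch/historical store 510
subscribers = []     # stands in for crowd-sourced applications 512-516

def subscribe(handler):
    """Register a crowd-sourced application's event handler."""
    subscribers.append(handler)

def ingest(events):
    """Real-time data management: bulk transmit + real-time routing."""
    history_store.extend(events)        # bulk transmit to the history store
    for event in events:
        for handler in subscribers:     # route to each application in real time
            handler(event)

seen = []
subscribe(seen.append)
ingest([{"type": "mirror_reset"}, {"type": "head_position"}])
```

Because routing and archiving happen in the same ingest step, the historical record stays a superset of everything the real-time applications have consumed.
-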
FIG. 6 is a block diagram 600 illustrating an exemplary process of real-time data for an AM system in accordance with one embodiment of the present invention. Diagram 600 includes data input 602, gateway 606, normalizer 608, queue 610, dispatcher 616, storage conversion 620, and historical data storage 624. The process of real-time data management further includes a component 614 for publish and subscribe. It should be noted that the underlying concept of the exemplary embodiment(s) of the present invention would not change if one or more blocks (circuit or elements) were added to or removed from FIG. 6. - The real-time data management, in one embodiment, is able to handle a large number (i.e., tens of millions) of reported events to the cloud as indicated by
numeral 604. API (application program interface) gateway 606 can handle multiple functions such as client authentication and load balancing of events pushed into the cloud. The real-time data management can leverage standard HTTP protocols. The events are routed to stateless servers for performing data scrubbing and normalization as indicated by numeral 608. The events from multiple sources 602 are aggregated together into a scalable/durable/consistent queue as indicated by numeral 610. An event dispatcher 616 provides a publish/subscribe model for crowd-source applications 618 which enables each application to look at a small subset of the event types. The heterogeneous event stream, for example, is captured and converted to files for long-term storage as indicated by numeral 620. Long-term storage 624 provides a scalable and durable repository for historical data. -
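The gateway, normalizer, queue, dispatcher, and long-term storage stages of FIG. 6 can be sketched end to end. All names, the lowercase-key normalization rule, and the topic-based subscription are illustrative assumptions, not the disclosed implementation.

```python
# Sketch of the FIG. 6 pipeline: events pass through an API gateway,
# are scrubbed/normalized, aggregated into a queue, dispatched by event
# type to subscribers, and archived for long-term storage.

from collections import deque, defaultdict

queue = deque()                          # scalable/durable queue 610
topic_subscribers = defaultdict(list)    # publish/subscribe component 614
long_term_storage = []                   # historical data storage 624

def normalize(event):
    """Data scrubbing/normalization (numeral 608): lowercase the keys."""
    return {k.lower(): v for k, v in event.items()}

def gateway(event, authenticated=True):
    """API gateway 606: client authentication before entering the queue."""
    if authenticated:
        queue.append(normalize(event))

def dispatch():
    """Event dispatcher 616: archive each event, then publish by type."""
    while queue:
        event = queue.popleft()
        long_term_storage.append(event)              # storage conversion 620
        for handler in topic_subscribers[event.get("type")]:
            handler(event)                           # subset of event types

alerts = []
topic_subscribers["speeding"].append(alerts.append)
gateway({"Type": "speeding", "Speed": 95})
dispatch()
```

Each subscriber sees only its own event type, which is how a crowd-source application can look at a small subset of the full heterogeneous stream.
-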
FIG. 7 is a block diagram 700 illustrating a crowd-sourced application model for the AM model in accordance with one embodiment of the present invention. Diagram 700 includes a gateway 702, event handler 704, state cache 706, state store 708, client request handler 710, gateway 712, and source input 714. In one example, gateway 702 receives an event stream from an event dispatcher and API gateway 712 receives information/data from input source 714. It should be noted that the underlying concept of the exemplary embodiment(s) of the present invention would not change if one or more blocks (or elements) were added to or removed from FIG. 7. - The crowd-sourced application model, in one embodiment, facilitates routing of events from a real-time data manager to a crowd-source application. In one example, the events enter
gateway 702 using a simple push call. Note that multiple events are handled by one or more servers. The events, in one aspect, are converted into inserts or modifications to a common state store. State store 708 is able to hold data from multiple applications and is scalable and durable. For example, state store 708, besides historical data, is configured to store present data, information about "future data", and/or data that can be shared across applications such as predictive AI (artificial intelligence). -
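The conversion of incoming events into inserts or modifications of a common, multi-application state store can be sketched minimally. The per-application namespacing and key names are assumptions for illustration.

```python
# Sketch: each event becomes an insert (new key) or a modification
# (existing key) in a common state store shared across applications.

state_store = {}   # stands in for common state store 708

def apply_event(app, key, value):
    """Convert an event into an insert or modification, namespaced per app."""
    state_store.setdefault(app, {})[key] = value

apply_event("mirror_setting", "driver_42", {"head_x": 0.1, "head_y": 0.3})
# A later event with the same key is a modification rather than an insert:
apply_event("mirror_setting", "driver_42", {"head_x": 0.2, "head_y": 0.3})
```

Because the store is keyed by application and then by entity, one durable store can hold state for many crowd-sourced applications at once.
-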
State cache 706, in one example, is used to provide fast access to commonly requested data stored in state store 708. Note that the application can be used by clients. API gateway 712 provides authentication and load balancing. Client request handler 710 leverages state store 708 for providing client data. - In an exemplary embodiment, an onboard AM model is able to handle real-time AM detection based on triggering events. For example, after ML models or AM models for AM detection have been deployed to all or most of the vehicles, the deployed ML models report collected data to the AM system to facilitate issuance of real-time warnings for dangerous event(s). The information or data relating to the real-time dangerous event(s) or the AM system is stored in
state store 708. Vehicles 714 looking for AM detection can, for example, access the AM system using gateway 712. -
FIG. 8 is a block diagram 800 illustrating a method of storing AM-related data using a geo-spatial object storage in accordance with one embodiment of the present invention. Diagram 800 includes gateway 802, initial object 804, put call 806, find call 808, get call 810, SQL (Structured Query Language) 812, non-SQL 814, and geo-spatial object storage 820. It should be noted that the underlying concept of the exemplary embodiment(s) of the present invention would not change if one or more blocks (circuit or elements) were added to or removed from FIG. 8. - Geo-
spatial object storage 820, in one aspect, stores or holds objects which may include a time period, spatial extent, ancillary information, and an optional linked file. In one embodiment, geo-spatial object storage 820 includes UUID (universally unique identifier) 822, version 824, start and end time 826, bounding 828, properties 830, data 832, and file path 834. For example, while UUID 822 identifies an object, all objects have version(s) 824 that allow the schema to change in the future. Start and end time 826 indicates an optional time period with a start time and an end time. An optional bounding geometry 828 is used to specify the spatial extent of an object. An optional set of properties 830 is used to specify name-value pairs. Data 832 can be binary data. An optional file path 834 may be used to associate the object with a file containing relevant information such as an MPEG (Moving Picture Experts Group) stream. - In one embodiment,
API gateway 802 is used to provide access to the service. Before an object can be added to the store, the object is assigned a UUID, which is provided by the initial object call. Once a UUID is established for a new object, put call 806 stores the object state. The state is stored durably in non-SQL store 814 along with the UUID. A portion of the UUID is used as a hash partition for scale-out. The indexable properties, which include version, time duration, bounding, and properties, are inserted into a scalable SQL store 812 for indexing. Non-SQL store 814 is used to contain the full object state. Non-SQL store 814 is scaled out using the UUID as, for example, a partition key. -
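The object structure described above, with a UUID, version, optional time period, bounding geometry, properties, binary data, and an optional linked file, can be sketched as a small record type. The field types and defaults are assumptions for illustration.

```python
# Sketch of a geo-spatial object with the fields of FIG. 8:
# UUID 822, version 824, start/end time 826, bounding 828,
# properties 830, binary data 832, and file path 834.

import uuid
from dataclasses import dataclass, field
from typing import Dict, Optional, Tuple

@dataclass
class GeoSpatialObject:
    object_id: str = field(default_factory=lambda: str(uuid.uuid4()))  # UUID 822
    version: int = 1                                    # version 824
    time_range: Optional[Tuple[float, float]] = None    # start/end time 826
    bounding: Optional[Tuple[float, float, float, float]] = None  # geometry 828
    properties: Dict[str, str] = field(default_factory=dict)      # properties 830
    data: bytes = b""                                   # binary data 832
    file_path: Optional[str] = None                     # optional linked file 834

obj = GeoSpatialObject(bounding=(47.6, -122.3, 47.7, -122.2),
                       properties={"source": "dashcam"})
```

Only `object_id` and `version` are mandatory here, matching the description in which the time period, bounding geometry, properties, and file path are all optional.
-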
SQL store 812 is used to create index tables that can be used to perform queries. SQL store 812 may include three tables 816 containing information, bounding, and properties. For example, the information table holds a primary key, the object's UUID, a creation timestamp, the state of the object, and the object properties "version" and "time duration." The bounding table holds the bounding geometry from the object and the ID of the associated information table entry. The properties table holds property name/value pairs from the object, stored as one name/value pair per row, along with the ID of the associated information table entry. - Find
call 808, in one embodiment, accepts a query, issues a SQL query to SQL store 812, and returns a result set containing the UUIDs that match the query. -
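The three index tables and a find-style query can be sketched with SQLite. The schema, column names, and example values are assumptions based on the description above, not the disclosed schema.

```python
# Sketch of the information/bounding/properties index tables (tables 816)
# and a find call that returns the UUIDs matching a spatial + property query.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE information (
    id INTEGER PRIMARY KEY,
    uuid TEXT, created_at TEXT, state TEXT, version INTEGER, time_duration TEXT
);
CREATE TABLE bounding (
    info_id INTEGER REFERENCES information(id),
    min_lat REAL, min_lon REAL, max_lat REAL, max_lon REAL
);
CREATE TABLE properties (
    info_id INTEGER REFERENCES information(id),
    name TEXT, value TEXT
);
""")

conn.execute("INSERT INTO information VALUES "
             "(1, 'abc-123', '2017-08-09', 'active', 1, 'PT1H')")
conn.execute("INSERT INTO bounding VALUES (1, 47.6, -122.3, 47.7, -122.2)")
conn.execute("INSERT INTO properties VALUES (1, 'source', 'dashcam')")

# Find call: join the index tables and return matching UUIDs.
rows = conn.execute("""
    SELECT i.uuid FROM information i
    JOIN bounding b ON b.info_id = i.id
    JOIN properties p ON p.info_id = i.id
    WHERE b.min_lat <= 47.65 AND b.max_lat >= 47.65 AND p.name = 'source'
""").fetchall()
print(rows)  # [('abc-123',)]
```

The full object state would then be fetched from the non-SQL store by UUID, which is why the SQL side only needs to return identifiers.
-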
FIG. 9 is a block diagram 900 illustrating an exemplary approach of an analysis engine analyzing AM data in accordance with one embodiment of the present invention. Diagram 900 includes history store 902, analysis engine 904, and geo-spatial object store 906. It should be noted that the underlying concept of the exemplary embodiment(s) of the present invention would not change if one or more blocks (circuit or elements) were added to or removed from FIG. 9. - In one aspect, diagram 900 illustrates
analysis engine 904 containing an ML training component capable of analyzing labeled data based on real-time captured AM data and historical data. The data transformation engine, in one example, interacts with geo-spatial object store 906 to locate relevant data and with history store 902 to process the data. Optionally, the transformed data may be stored. - It should be noted that the virtuous cycle employs an ML training component to provide continuous model training using real-time data as well as historical samples, and delivers an AM detection model to one or more subscribers. A feature of the virtuous cycle is that it can continuously train a model and provide a real-time or near real-time result. It should be noted that the virtuous cycle is applicable to various other fields, such as, but not limited to, business intelligence, law enforcement, medical services, military applications, and the like.
-
FIG. 10 is a block diagram 1000 illustrating an exemplary containerized sensor network used for sensing AM system-related information in accordance with one embodiment of the present invention. Diagram 1000 includes a sensor bus 1002, streaming pipeline 1004, and application layer 1006, wherein sensor bus 1002 is able to receive low-bandwidth sources and high-bandwidth sources. Streaming pipeline 1004, in one embodiment, includes ML capable of generating a unique model such as model 1008. It should be noted that the underlying concept of the exemplary embodiment(s) of the present invention would not change if one or more blocks (circuit or elements) were added to or removed from FIG. 10. -
FIG. 11 is a block diagram 1100 illustrating a processing device or computer system which can be installed in a vehicle to support onboard cameras, CAN (Controller Area Network) bus, Inertial Measurement Units, Lidar, et cetera for facilitating the virtuous cycle in accordance with one embodiment of the present invention. Computer system or AM system 1100 can include a processing unit 1101, an interface bus 1112, and an input/output ("IO") unit 1120. Processing unit 1101 includes a processor 1102, a main memory 1104, a system bus 1111, a static memory device 1106, a bus control unit 1105, I/O element 1130, and AM element 1185. It should be noted that the underlying concept of the exemplary embodiment(s) of the present invention would not change if one or more blocks (circuit or elements) were added to or removed from FIG. 11. -
Bus 1111 is used to transmit information between various components and processor 1102 for data processing. Processor 1102 may be any of a wide variety of general-purpose processors, embedded processors, or microprocessors such as ARM® embedded processors, Intel® Core™ Duo, Core™ Quad, Xeon®, Pentium™ microprocessors, Motorola™ 68040, AMD® family processors, or Power PC™ microprocessors. -
Main memory 1104, which may include multiple levels of cache memories, stores frequently used data and instructions. Main memory 1104 may be RAM (random access memory), MRAM (magnetic RAM), or flash memory. Static memory 1106 may be a ROM (read-only memory), which is coupled to bus 1111, for storing static information and/or instructions. Bus control unit 1105 is coupled to buses 1111-1112 and controls which component, such as main memory 1104 or processor 1102, can use the bus. Bus control unit 1105 manages the communications between bus 1111 and bus 1112. - I/
O unit 1120, in one embodiment, includes a display 1121, keyboard 1122, cursor control device 1123, and communication device 1125. Display device 1121 may be a liquid crystal device, cathode ray tube ("CRT"), touch-screen display, or other suitable display device. Display 1121 projects or displays images of a graphical planning board. Keyboard 1122 may be a conventional alphanumeric input device for communicating information between computer system 1100 and computer operator(s). Another type of user input device is cursor control device 1123, such as a conventional mouse, touch mouse, trackball, or other type of cursor for communicating information between system 1100 and user(s). -
AM element 1185, in one embodiment, is coupled to bus 1111 and configured to interface with the virtuous cycle for facilitating AM performance. For example, if AM system 1100 is installed in a car, AM element 1185 is used to operate the AM model as well as interface with the cloud-based network. If AM system 1100 is placed at the cloud-based network, AM element 1185 can be configured to handle the correlating process for generating labeled data for AM data. -
Communication device 1125 is coupled to bus 1111 for accessing information from remote computers or servers, such as server 104 or other computers, through wide-area network 102. Communication device 1125 may include a modem or a network interface device, or other similar devices that facilitate communication between computer 1100 and the network. Computer system 1100 may be coupled to a number of servers via a network infrastructure such as the Internet. - The exemplary embodiment of the present invention includes various processing steps, which will be described below. The steps of the embodiment may be embodied in machine- or computer-executable instructions. The instructions can be used to cause a general-purpose or special-purpose system, which is programmed with the instructions, to perform the steps of the exemplary embodiment of the present invention. Alternatively, the steps of the exemplary embodiment of the present invention may be performed by specific hardware components that contain hard-wired logic for performing the steps, or by any combination of programmed computer components and custom hardware components.
-
FIG. 12 is a flowchart 1200 illustrating a process of an AM system capable of automatically setting mirrors with minimal blind spots in accordance with one embodiment of the present invention. The process, at block 1202, is capable of receiving a mirror resetting signal indicating that at least one mirror mounted on a vehicle requires an adjustment. Upon activating at least a portion of the inward facing cameras mounted in the vehicle for capturing internal images, including the driver's eye level with respect to the interior of the vehicle, at block 1204, the historical cloud data associated with the vehicle and driver is obtained from a virtuous cycle at block 1206. At block 1208, the process subsequently adjusts at least one mirror to an orientation with a minimal blind spot in accordance with the driver's head position shown in the internal images and the historical cloud data. In one aspect, the internal images are continuously obtained for a predefined wait period until the driver settles down so that an accurate calculation of the driver's head position can be computed. It should be noted that a set of outward facing cameras mounted on the vehicle can be activated for recording external surrounding images representing a geographic environment in which the vehicle operates. In one aspect, the AM model is capable of tracking surrounding environmental events in accordance with the external surrounding images and historical data supplied by the virtuous cycle. - While particular embodiments of the present invention have been shown and described, it will be obvious to those of ordinary skill in the art that, based upon the teachings herein, changes and modifications may be made without departing from this exemplary embodiment(s) of the present invention and its broader aspects. Therefore, the appended claims are intended to encompass within their scope all such changes and modifications as are within the true spirit and scope of this exemplary embodiment(s) of the present invention.
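The mirror-setting process of FIG. 12 can be sketched as a short sequence: sample head positions until the driver settles, fetch the historical cloud setting, then compute an orientation. The function names, the fixed wait of three frames, and the 50/50 blending heuristic are illustrative assumptions only; the patent does not specify how the two inputs are combined.

```python
# Sketch of blocks 1202-1208: settle on a head position from the inward
# facing cameras, then combine it with historical cloud data to pick a
# mirror orientation with a minimal blind spot.

def wait_for_driver_to_settle(image_stream, wait_frames=3):
    """Blocks 1202-1204: sample head positions for a predefined wait
    period, then return the last (settled) head position."""
    positions = [frame["head_position"] for frame in image_stream[:wait_frames]]
    return positions[-1]

def adjust_mirror(head_position, historical_setting):
    """Block 1208: blend the observed head position with the historical
    cloud setting (assumed 50/50 here) to choose an orientation."""
    return {
        axis: 0.5 * (head_position[axis] + historical_setting[axis])
        for axis in ("pan", "tilt")
    }

frames = [{"head_position": {"pan": 10.0, "tilt": 4.0}},
          {"head_position": {"pan": 12.0, "tilt": 5.0}},
          {"head_position": {"pan": 12.0, "tilt": 5.0}}]   # driver settled
historical = {"pan": 14.0, "tilt": 7.0}                    # block 1206

orientation = adjust_mirror(wait_for_driver_to_settle(frames), historical)
print(orientation)  # {'pan': 13.0, 'tilt': 6.0}
```

Waiting for the head position to stabilize before adjusting mirrors matches the stated rationale: an accurate head-position calculation requires the driver to have settled.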
Claims (20)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/672,897 US20180043829A1 (en) | 2016-08-10 | 2017-08-09 | Method and Apparatus for Providing Automatic Mirror Setting Via Inward Facing Cameras |
US16/542,242 US20190370581A1 (en) | 2016-08-10 | 2019-08-15 | Method and apparatus for providing automatic mirror setting via inward facing cameras |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201662372999P | 2016-08-10 | 2016-08-10 | |
US15/672,897 US20180043829A1 (en) | 2016-08-10 | 2017-08-09 | Method and Apparatus for Providing Automatic Mirror Setting Via Inward Facing Cameras |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/542,242 Continuation US20190370581A1 (en) | 2016-08-10 | 2019-08-15 | Method and apparatus for providing automatic mirror setting via inward facing cameras |
Publications (1)
Publication Number | Publication Date |
---|---|
US20180043829A1 true US20180043829A1 (en) | 2018-02-15 |
Family
ID=61159300
Family Applications (8)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/672,897 Abandoned US20180043829A1 (en) | 2016-08-10 | 2017-08-09 | Method and Apparatus for Providing Automatic Mirror Setting Via Inward Facing Cameras |
US15/672,832 Active 2038-01-27 US10540557B2 (en) | 2016-08-10 | 2017-08-09 | Method and apparatus for providing driver information via audio and video metadata extraction |
US15/672,747 Abandoned US20180046869A1 (en) | 2016-08-10 | 2017-08-09 | Method and Apparatus for Providing Information Via Collected and Stored Metadata Using Inferred Attentional Model |
US15/673,909 Active 2038-03-01 US10503988B2 (en) | 2016-08-10 | 2017-08-10 | Method and apparatus for providing goal oriented navigational directions |
US16/542,242 Abandoned US20190370581A1 (en) | 2016-08-10 | 2019-08-15 | Method and apparatus for providing automatic mirror setting via inward facing cameras |
US16/708,123 Abandoned US20200110951A1 (en) | 2016-08-10 | 2019-12-09 | Method and apparatus for providing goal oriented navigational directions |
US16/746,667 Abandoned US20200151479A1 (en) | 2016-08-10 | 2020-01-17 | Method and apparatus for providing driver information via audio and video metadata extraction |
US16/827,635 Abandoned US20200226395A1 (en) | 2016-08-10 | 2020-03-23 | Methods and systems for determining whether an object is embedded in a tire of a vehicle |
Family Applications After (7)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/672,832 Active 2038-01-27 US10540557B2 (en) | 2016-08-10 | 2017-08-09 | Method and apparatus for providing driver information via audio and video metadata extraction |
US15/672,747 Abandoned US20180046869A1 (en) | 2016-08-10 | 2017-08-09 | Method and Apparatus for Providing Information Via Collected and Stored Metadata Using Inferred Attentional Model |
US15/673,909 Active 2038-03-01 US10503988B2 (en) | 2016-08-10 | 2017-08-10 | Method and apparatus for providing goal oriented navigational directions |
US16/542,242 Abandoned US20190370581A1 (en) | 2016-08-10 | 2019-08-15 | Method and apparatus for providing automatic mirror setting via inward facing cameras |
US16/708,123 Abandoned US20200110951A1 (en) | 2016-08-10 | 2019-12-09 | Method and apparatus for providing goal oriented navigational directions |
US16/746,667 Abandoned US20200151479A1 (en) | 2016-08-10 | 2020-01-17 | Method and apparatus for providing driver information via audio and video metadata extraction |
US16/827,635 Abandoned US20200226395A1 (en) | 2016-08-10 | 2020-03-23 | Methods and systems for determining whether an object is embedded in a tire of a vehicle |
Country Status (5)
Country | Link |
---|---|
US (8) | US20180043829A1 (en) |
EP (2) | EP3496969A4 (en) |
JP (2) | JP2019530061A (en) |
CN (2) | CN109906165A (en) |
WO (2) | WO2018031673A1 (en) |
CN111477030B (en) * | 2020-04-14 | 2022-01-21 | 北京汽车集团有限公司 | Vehicle collaborative risk avoiding method, vehicle end platform, cloud end platform and storage medium |
US11587180B2 (en) * | 2020-05-14 | 2023-02-21 | Ccc Information Services Inc. | Image processing system |
US11593678B2 (en) | 2020-05-26 | 2023-02-28 | Bank Of America Corporation | Green artificial intelligence implementation |
US11726459B2 (en) | 2020-06-18 | 2023-08-15 | Rockwell Automation Technologies, Inc. | Industrial automation control program generation from computer-aided design |
CN111932870B (en) * | 2020-06-23 | 2021-07-06 | 南京市公安局 | Road network and visual field based blind area detection method and system |
CN111832832B (en) * | 2020-07-21 | 2023-12-29 | 重庆现代建筑产业发展研究院 | District self-inspection system based on thing networking |
US10902290B1 (en) * | 2020-08-04 | 2021-01-26 | Superb Ai Co., Ltd. | Methods for training auto labeling device and performing auto labeling related to object detection while performing automatic verification by using uncertainty scores and devices using the same |
US20220058519A1 (en) * | 2020-08-24 | 2022-02-24 | International Business Machines Corporation | Open feature library management |
CN111968375B (en) * | 2020-08-27 | 2021-08-10 | 北京嘀嘀无限科技发展有限公司 | Traffic flow prediction method and device, readable storage medium and electronic equipment |
US11812245B2 (en) * | 2020-10-08 | 2023-11-07 | Valeo Telematik Und Akustik Gmbh | Method, apparatus, and computer-readable storage medium for providing three-dimensional stereo sound |
CN112319372A (en) * | 2020-11-27 | 2021-02-05 | 北京三快在线科技有限公司 | Image display method and device based on streaming media rearview mirror |
EP4009126A1 (en) * | 2020-12-04 | 2022-06-08 | United Grinding Group Management AG | Method of operating a machine for a production facility |
US11743334B2 (en) | 2021-03-31 | 2023-08-29 | Amazon Technologies, Inc. | In-vehicle distributed computing environment |
CN113119981B (en) * | 2021-04-09 | 2022-06-17 | 东风汽车集团股份有限公司 | Vehicle active safety control method, system and storage medium |
US11749032B2 (en) * | 2021-05-17 | 2023-09-05 | Toyota Research Institute, Inc. | Systems and methods for adapting notifications according to component monitoring |
US20240011789A1 (en) * | 2021-12-16 | 2024-01-11 | Google Llc | Incorporating Current And Anticipated Parking Locations Into Directions Suggestions |
US11794772B2 (en) | 2022-01-14 | 2023-10-24 | Toyota Motor Engineering & Manufacturing North America, Inc. | Systems and methods to increase driver awareness of exterior occurrences |
JPWO2023188130A1 (en) * | 2022-03-30 | 2023-10-05 | ||
EP4265479A1 (en) * | 2022-04-21 | 2023-10-25 | Bayerische Motoren Werke Aktiengesellschaft | Computing device and computer-implemented method for generating multi-view video streams |
CN114550465B (en) * | 2022-04-26 | 2022-07-08 | 四川北斗云联科技有限公司 | Highway management system for hazardous-goods transport vehicles |
CN115440246A (en) * | 2022-07-22 | 2022-12-06 | 富士康(昆山)电脑接插件有限公司 | Fault detection method, system, vehicle, electronic device and storage medium |
US11780458B1 (en) | 2022-12-14 | 2023-10-10 | Prince Mohammad Bin Fahd University | Automatic car side-view and rear-view mirrors adjustment and drowsy driver detection system |
CN117437382B (en) * | 2023-12-19 | 2024-03-19 | 成都电科星拓科技有限公司 | Updating method and system for data center component |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070182528A1 (en) * | 2000-05-08 | 2007-08-09 | Automotive Technologies International, Inc. | Vehicular Component Control Methods Based on Blind Spot Monitoring |
US20080239527A1 (en) * | 2007-03-26 | 2008-10-02 | Aisin Aw Co., Ltd. | Driving support method and driving support apparatus |
US20100080416A1 (en) * | 2008-10-01 | 2010-04-01 | Gm Global Technology Operations, Inc. | Eye detection system using a single camera |
US20120299344A1 (en) * | 1995-06-07 | 2012-11-29 | David S Breed | Arrangement for Sensing Weight of an Occupying Item in Vehicular Seat |
US20150092056A1 (en) * | 2013-09-30 | 2015-04-02 | Sackett Solutions & Innovations | Driving assistance systems and methods |
US9760827B1 (en) * | 2016-07-22 | 2017-09-12 | Alpine Electronics of Silicon Valley, Inc. | Neural network applications in resource constrained environments |
US20170311903A1 (en) * | 2016-05-02 | 2017-11-02 | Dexcom, Inc. | System and method for providing alerts optimized for a user |
Family Cites Families (44)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002243535A (en) * | 2001-02-20 | 2002-08-28 | Omron Corp | Road surface condition detecting device |
US7119832B2 (en) * | 2001-07-23 | 2006-10-10 | L-3 Communications Mobile-Vision, Inc. | Wireless microphone for use with an in-car video system |
JP2003123176A (en) * | 2001-10-15 | 2003-04-25 | L & F Plastics Co Ltd | Tire pressure display means for tubeless tire |
DE10163967A1 (en) * | 2001-12-24 | 2003-07-03 | Volkswagen Ag | Driver assistance system taking into account driver attentiveness state detects driver's state of attention from operation or non-operation of existing vehicle control elements |
JP4275507B2 (en) * | 2003-10-28 | 2009-06-10 | 富士通テン株式会社 | Driving assistance device |
WO2007105792A1 (en) * | 2006-03-15 | 2007-09-20 | Omron Corporation | Monitor and monitoring method, controller and control method, and program |
TW200834040A (en) | 2007-02-02 | 2008-08-16 | Mitac Int Corp | Dioramic navigation apparatus |
JP4861850B2 (en) * | 2007-02-13 | 2012-01-25 | アイシン・エィ・ダブリュ株式会社 | Lane determination device and lane determination method |
WO2009004749A1 (en) | 2007-07-04 | 2009-01-08 | Mitsubishi Electric Corporation | Navigation system |
JP4985166B2 (en) * | 2007-07-12 | 2012-07-25 | トヨタ自動車株式会社 | Self-position estimation device |
JP5018444B2 (en) * | 2007-12-13 | 2012-09-05 | 株式会社豊田中央研究所 | Vehicle fault diagnosis and prediction device |
CA2734436C (en) | 2010-04-27 | 2016-06-14 | Timothy Newman | Method and system for transmitting a warning message to a driver of a vehicle |
US20110313593A1 (en) | 2010-06-21 | 2011-12-22 | Cohen Meir S | Vehicle On Board Diagnostic Port Device with GPS Tracking, Auto-Upload, and Remote Manipulation |
US8863256B1 (en) | 2011-01-14 | 2014-10-14 | Cisco Technology, Inc. | System and method for enabling secure transactions using flexible identity management in a vehicular environment |
US9581997B1 (en) * | 2011-04-22 | 2017-02-28 | Angel A. Penilla | Method and system for cloud-based communication for automatic driverless movement |
KR20130005035A (en) | 2011-07-05 | 2013-01-15 | 가온미디어 주식회사 | Navigation system of cloud computing means and method for the same |
KR20130051797A (en) * | 2011-11-10 | 2013-05-21 | 딕스비전 주식회사 | Apparatus for detecting foreign substance of tire |
US9218698B2 (en) * | 2012-03-14 | 2015-12-22 | Autoconnect Holdings Llc | Vehicle damage detection and indication |
JP2013198065A (en) | 2012-03-22 | 2013-09-30 | Denso Corp | Sound presentation device |
KR20140002373A (en) | 2012-06-29 | 2014-01-08 | 현대자동차주식회사 | Apparatus and method for monitoring driver state by driving pattern learning |
US9311544B2 (en) * | 2012-08-24 | 2016-04-12 | Jeffrey T Haley | Teleproctor reports use of a vehicle and restricts functions of drivers phone |
JP5880360B2 (en) * | 2012-08-31 | 2016-03-09 | トヨタ自動車株式会社 | Driving support system and driving support method |
US9428052B1 (en) * | 2012-09-08 | 2016-08-30 | Towers Watson Software Limited | Automated distraction measurement of machine operator |
JP5622819B2 (en) | 2012-09-28 | 2014-11-12 | 富士重工業株式会社 | Gaze guidance system |
US8981942B2 (en) * | 2012-12-17 | 2015-03-17 | State Farm Mutual Automobile Insurance Company | System and method to monitor and reduce vehicle operator impairment |
US9149236B2 (en) * | 2013-02-04 | 2015-10-06 | Intel Corporation | Assessment and management of emotional state of a vehicle operator |
GB2518187A (en) | 2013-09-12 | 2015-03-18 | Ford Global Tech Llc | Collision warning for a driver controlled vehicle |
DE102013224962A1 (en) * | 2013-12-05 | 2015-06-11 | Robert Bosch Gmbh | Arrangement for creating an image of a scene |
US20150213555A1 (en) | 2014-01-27 | 2015-07-30 | Hti Ip, Llc | Predicting driver behavior based on user data and vehicle data |
DE102014002150B3 (en) * | 2014-02-15 | 2015-07-23 | Audi Ag | Method for determining the absolute position of a mobile unit and mobile unit |
CN103927848A (en) * | 2014-04-18 | 2014-07-16 | 南京通用电器有限公司 | Safe driving assisting system based on biological recognition technology |
KR102179579B1 (en) | 2014-05-29 | 2020-11-17 | 한국자동차연구원 | Car steering wheel of the integrated smart navigation |
WO2016029939A1 (en) * | 2014-08-27 | 2016-03-03 | Metaio Gmbh | Method and system for determining at least one image feature in at least one image |
JP6184923B2 (en) * | 2014-09-11 | 2017-08-23 | 日立オートモティブシステムズ株式会社 | Vehicle collision avoidance device |
US9373203B1 (en) | 2014-09-23 | 2016-06-21 | State Farm Mutual Automobile Insurance Company | Real-time driver monitoring and feedback reporting system |
EP3018448B1 (en) * | 2014-11-04 | 2021-01-06 | Volvo Car Corporation | Methods and systems for enabling improved positioning of a vehicle |
CN105674992A (en) * | 2014-11-20 | 2016-06-15 | 高德软件有限公司 | Navigation method and apparatus |
EP3232159A4 (en) * | 2014-12-08 | 2018-08-15 | Hitachi Automotive Systems, Ltd. | Host vehicle position estimation device |
EP3032221B1 (en) * | 2014-12-09 | 2022-03-30 | Volvo Car Corporation | Method and system for improving accuracy of digital map data utilized by a vehicle |
US10001376B1 (en) * | 2015-02-19 | 2018-06-19 | Rockwell Collins, Inc. | Aircraft position monitoring system and method |
KR20160139624A (en) | 2015-05-28 | 2016-12-07 | 자동차부품연구원 | Method and Apparatus for Management Driver |
US9889859B2 (en) * | 2015-12-21 | 2018-02-13 | Intel Corporation | Dynamic sensor range in advanced driver assistance systems |
US10460600B2 (en) * | 2016-01-11 | 2019-10-29 | NetraDyne, Inc. | Driver behavior monitoring |
JP2017138694A (en) * | 2016-02-02 | 2017-08-10 | ソニー株式会社 | Picture processing device and picture processing method |
- 2017
- 2017-08-09 EP EP17840218.6A patent/EP3496969A4/en not_active Withdrawn
- 2017-08-09 US US15/672,897 patent/US20180043829A1/en not_active Abandoned
- 2017-08-09 WO PCT/US2017/046122 patent/WO2018031673A1/en unknown
- 2017-08-09 US US15/672,832 patent/US10540557B2/en active Active
- 2017-08-09 JP JP2019507287A patent/JP2019530061A/en active Pending
- 2017-08-09 CN CN201780062496.7A patent/CN109906165A/en active Pending
- 2017-08-09 US US15/672,747 patent/US20180046869A1/en not_active Abandoned
- 2017-08-10 CN CN201780062518.XA patent/CN109964260A/en active Pending
- 2017-08-10 EP EP17840271.5A patent/EP3497685A4/en not_active Withdrawn
- 2017-08-10 WO PCT/US2017/046277 patent/WO2018031759A2/en unknown
- 2017-08-10 US US15/673,909 patent/US10503988B2/en active Active
- 2017-08-10 JP JP2019507305A patent/JP2019525185A/en not_active Ceased
- 2019
- 2019-08-15 US US16/542,242 patent/US20190370581A1/en not_active Abandoned
- 2019-12-09 US US16/708,123 patent/US20200110951A1/en not_active Abandoned
- 2020
- 2020-01-17 US US16/746,667 patent/US20200151479A1/en not_active Abandoned
- 2020-03-23 US US16/827,635 patent/US20200226395A1/en not_active Abandoned
Cited By (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10591922B2 (en) * | 2015-01-29 | 2020-03-17 | Toyota Motor Engineering & Manufacturing North America, Inc. | Autonomous vehicle operation in view-obstructed environments |
US20170185088A1 (en) * | 2015-01-29 | 2017-06-29 | Toyota Motor Engineering & Manufacturing North America, Inc. | Autonomous vehicle operation in view-obstructed environments |
US20170305418A1 (en) * | 2016-04-21 | 2017-10-26 | Lg Electronics Inc. | Driver assistance apparatus for vehicle |
US10611383B2 (en) * | 2016-04-21 | 2020-04-07 | Lg Electronics Inc. | Driver assistance apparatus for vehicle |
US11175145B2 (en) | 2016-08-09 | 2021-11-16 | Nauto, Inc. | System and method for precision localization and mapping |
US11485284B2 (en) | 2016-11-07 | 2022-11-01 | Nauto, Inc. | System and method for driver distraction determination |
US10703268B2 (en) | 2016-11-07 | 2020-07-07 | Nauto, Inc. | System and method for driver distraction determination |
US11170241B2 (en) * | 2017-03-03 | 2021-11-09 | Valeo Comfort And Driving Assistance | Device for determining the attentiveness of a driver of a vehicle, on-board system comprising such a device, and associated method |
US10453150B2 (en) | 2017-06-16 | 2019-10-22 | Nauto, Inc. | System and method for adverse vehicle event determination |
US11017479B2 (en) | 2017-06-16 | 2021-05-25 | Nauto, Inc. | System and method for adverse vehicle event determination |
US11281944B2 (en) | 2017-06-16 | 2022-03-22 | Nauto, Inc. | System and method for contextualized vehicle operation determination |
US11164259B2 (en) | 2017-06-16 | 2021-11-02 | Nauto, Inc. | System and method for adverse vehicle event determination |
US10430695B2 (en) | 2017-06-16 | 2019-10-01 | Nauto, Inc. | System and method for contextualized vehicle operation determination |
US11392131B2 (en) | 2018-02-27 | 2022-07-19 | Nauto, Inc. | Method for determining driving policy |
WO2019220436A3 (en) * | 2018-05-14 | 2019-12-26 | BrainVu Ltd. | Driver predictive mental response profile and application to automated vehicle brain interface control |
CN110505469A (en) * | 2018-05-17 | 2019-11-26 | 株式会社电装 | Surround-view monitoring system and method for vehicle |
CN110857057A (en) * | 2018-08-10 | 2020-03-03 | 丰田自动车株式会社 | Vehicle periphery display device |
US11460709B2 (en) * | 2019-03-14 | 2022-10-04 | Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. | Method and apparatus for adjusting on-vehicle projection |
CN110217271A (en) * | 2019-05-30 | 2019-09-10 | 成都希格玛光电科技有限公司 | Image-vision-based high-speed railway clearance-intrusion identification and monitoring system and method |
US11687778B2 (en) | 2020-01-06 | 2023-06-27 | The Research Foundation For The State University Of New York | Fakecatcher: detection of synthetic portrait videos using biological signals |
US11373447B2 (en) * | 2020-02-19 | 2022-06-28 | Toyota Motor Engineering & Manufacturing North America, Inc. | Systems including image detection to inhibit vehicle operation |
CN113548056A (en) * | 2020-04-17 | 2021-10-26 | 东北大学秦皇岛分校 | Automobile safety driving assisting system based on computer vision |
WO2022048051A1 (en) * | 2020-09-02 | 2022-03-10 | 厦门理工学院 | Beidou-based engineering vehicle exhaust emission monitoring and tracking system |
CN113140108A (en) * | 2021-04-16 | 2021-07-20 | 西北工业大学 | Cloud traffic situation prediction method in internet-connected intelligent traffic system |
CN114194115A (en) * | 2021-12-22 | 2022-03-18 | 数源科技股份有限公司 | Installation method of visual blind area camera device |
US20230211731A1 (en) * | 2022-01-05 | 2023-07-06 | GM Global Technology Operations LLC | Vehicle mirror selection based on head pose and gaze direction |
WO2023152729A1 (en) * | 2022-02-14 | 2023-08-17 | Gentex Corporation | Imaging system for a vehicle |
CN114579190A (en) * | 2022-02-17 | 2022-06-03 | 中国科学院计算机网络信息中心 | Cross-center cooperative computing arrangement method and system based on pipeline mechanism |
Also Published As
Publication number | Publication date |
---|---|
US20200151479A1 (en) | 2020-05-14 |
US10503988B2 (en) | 2019-12-10 |
EP3496969A4 (en) | 2020-09-16 |
US20180046869A1 (en) | 2018-02-15 |
US20200110951A1 (en) | 2020-04-09 |
CN109964260A (en) | 2019-07-02 |
WO2018031673A1 (en) | 2018-02-15 |
EP3496969A1 (en) | 2019-06-19 |
JP2019530061A (en) | 2019-10-17 |
JP2019525185A (en) | 2019-09-05 |
EP3497685A2 (en) | 2019-06-19 |
CN109906165A (en) | 2019-06-18 |
EP3497685A4 (en) | 2020-07-29 |
US10540557B2 (en) | 2020-01-21 |
US20180046870A1 (en) | 2018-02-15 |
US20180047288A1 (en) | 2018-02-15 |
US20190370581A1 (en) | 2019-12-05 |
US20200226395A1 (en) | 2020-07-16 |
WO2018031759A3 (en) | 2018-07-26 |
WO2018031759A2 (en) | 2018-02-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20190370581A1 (en) | Method and apparatus for providing automatic mirror setting via inward facing cameras | |
US11335200B2 (en) | Method and system for providing artificial intelligence analytic (AIA) services using operator fingerprints and cloud data | |
US11068728B2 (en) | Method and system for providing behavior of vehicle operator using virtuous cycle | |
US10834221B2 (en) | Method and system for providing predictions via artificial intelligence (AI) models using a distributed system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
AS | Assignment |
Owner name: SURROUND.IO CORPORATION, WASHINGTON
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CORDELL, JOHN P.;WELLAND, ROBERT V.;MCKELVIE, SAMUEL J.;AND OTHERS;SIGNING DATES FROM 20180305 TO 20180313;REEL/FRAME:045235/0915
|
AS | Assignment |
Owner name: XEVO INC., WASHINGTON
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SURROUND.IO CORPORATION;REEL/FRAME:045590/0011
Effective date: 20180417
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |