US20230316769A1 - Object information obtaining method and system for implementing - Google Patents
- Publication number
- US20230316769A1 (U.S. application Ser. No. 17/707,874)
- Authority
- US
- United States
- Prior art keywords
- occupant
- request
- vehicle
- information
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/02—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/08—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/08—Interaction between the driver and the control system
- B60W50/10—Interpretation of driver requests or demands
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/08—Interaction between the driver and the control system
- B60W50/14—Means for informing the driver, warning the driver or prompting a driver intervention
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/08—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
- B60W2040/0809—Driver authorisation; Driver identity check
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/08—Interaction between the driver and the control system
- B60W50/14—Means for informing the driver, warning the driver or prompting a driver intervention
- B60W2050/146—Display means
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2420/00—Indexing codes relating to the type of sensors based on the principle of their operation
- B60W2420/40—Photo, light or radio wave sensitive means, e.g. infrared sensors
- B60W2420/403—Image sensing, e.g. optical camera
-
- B60W2420/42—
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2540/00—Input parameters relating to occupants
- B60W2540/225—Direction of gaze
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/24—Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/22—Cropping
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Definitions
- Occupants of vehicles see objects of interest out of windows of the vehicle.
- the occupants wish to identify the object or learn more information about the object.
- the occupant will capture an image of the object using a mobile device, such as a smartphone, and then perform a search on the Internet to identify the object or learn more about the object.
- movement of the vehicle makes capturing the image of the object more difficult.
- obstructing objects pass between the vehicle and the object that inhibit the capturing of an image of the object.
- a driver is unable to safely remove their hands from the steering wheel to capture the image using the mobile device.
- the occupant looks at a map to attempt to identify the object.
- the occupant is then able to search the Internet to determine whether the object identified using the map is accurate and, if so, to find more information about the object. Identifying the object using the map relies on the occupant's best estimate of the location of the object relative to other known landmarks or objects.
- FIG. 1 is a block diagram of an object identification system in accordance with some embodiments.
- FIG. 2 is a flowchart of a method of identifying an object in accordance with some embodiments.
- FIG. 3 is a flowchart of a method of identifying an object in accordance with some embodiments.
- FIG. 4 is a view of a data structure of an occupant request in accordance with some embodiments.
- FIG. 5 is a view of a data structure of attention area data in accordance with some embodiments.
- FIG. 6 is a view of a data structure of attention area data in accordance with some embodiments.
- FIG. 7 is a view of a data structure of attention area data in accordance with some embodiments.
- FIG. 8 is a view of a user interface in accordance with some embodiments.
- FIG. 9 is a view of a user interface in accordance with some embodiments.
- FIG. 10 is a view of a user interface in accordance with some embodiments.
- FIG. 11 is a block diagram of a system for implementing object identification in accordance with some embodiments.
- first and second features are formed in direct contact
- additional features may be formed between the first and second features, such that the first and second features may not be in direct contact
- present disclosure may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed.
- Occupants within a moving vehicle often have difficulty with identifying objects of interest.
- the occupant is unable to accurately identify the object based on either a map or a captured image.
- the occupant, such as a driver, is unable to use a map or an image capturing device, such as a smartphone, to attempt to identify the object of interest.
- the object identification method of this description utilizes request initiation commands in combination with gaze data and vehicle sensor data to identify the object.
- information about the identified object is also provided, such as hours of operation, historical information, etc.
- the method is able to determine a direction that the occupant is looking.
- the gaze data is combined with map data and/or vehicle sensor data to determine what object the occupant is observing at the time a request is initiated.
- Utilizing a request initiation helps to reduce processing load and data transferred between the vehicle and an external device such as a server.
- the request initiation includes a key word received via an audio signal from the occupant.
- the request initiation includes detecting a predetermined gesture from the occupant.
- the request initiation includes receiving an input from a user interface (UI) accessible by the occupant.
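The three trigger channels above (key word, gesture, UI input) can be sketched as a small dispatcher. This is a minimal sketch: the trigger phrases, the gesture label, and all names below are illustrative assumptions, not the patent's own vocabulary.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class InitiationType(Enum):
    """The three request-initiation channels described above."""
    KEYWORD = auto()   # key word detected in an audio signal
    GESTURE = auto()   # predetermined gesture detected from the occupant
    UI_INPUT = auto()  # input received from the vehicle UI


@dataclass
class RequestInitiation:
    initiation_type: InitiationType
    timestamp: float  # seconds; used later to synchronize sensor data


# Hypothetical trigger phrases (not from the patent).
KEYWORDS = {"what is that", "identify that"}


def detect_initiation(audio_text: Optional[str],
                      gesture: Optional[str],
                      ui_event: bool,
                      now: float) -> Optional[RequestInitiation]:
    """Return a RequestInitiation only when one of the triggers fires.

    Gating data collection on an explicit trigger is what keeps
    processing load and vehicle-to-server traffic low.
    """
    if audio_text and audio_text.lower() in KEYWORDS:
        return RequestInitiation(InitiationType.KEYWORD, now)
    if gesture == "point_out_window":  # hypothetical gesture label
        return RequestInitiation(InitiationType.GESTURE, now)
    if ui_event:
        return RequestInitiation(InitiationType.UI_INPUT, now)
    return None
```

Returning `None` when no trigger fires means no log data is collected or transmitted at all in the common case, which matches the stated goal of reducing processing load.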
- UI user interface
- vehicle sensors and/or map data stored within the vehicle help to capture information related to the environment surrounding the vehicle without using a separate device, such as a smartphone, and without the occupant, such as the driver, removing their hands from a steering wheel. This helps to reduce distractions to the occupant and/or driver and allows occupants to identify the object without handling of an external device.
- Use of vehicle sensor and map data also helps to increase object identification accuracy in situations where objects, such as other vehicles, are obstructing the view of the object of interest; or when the object is initially visible and later obstructed by the time the external device is in a state ready to use.
- the object is displayed on a vehicle UI to help confirm the object identification.
- the occupant is able to request additional information related to the identified object. For example, in some embodiments, the occupant is able to request directions to the identified object, hours of operation for the identified object, historical information related to the identified object, or other suitable information.
- FIG. 1 is a block diagram of an object identification system 100 , in accordance with some embodiments.
- the description of the object identification system 100 focuses on an automobile controlled by a driver. However, one of ordinary skill in the art would recognize that other vehicles and operators are within the scope of this description, such as a train operated by an engineer or other mobile vehicles.
- the object identification system 100 includes a vehicle system 110 configured to capture information about an occupant of a vehicle and to generate gaze data.
- the vehicle system 110 also captures request initiation signals and occupant requests.
- the object identification system 100 further includes a server 140 configured to receive the generated gaze data as well as information collected from sensors of the vehicle as log data.
- the object identification system 100 further includes a mobile device 160 accessible by the occupant of the vehicle associated with the occupant request.
- some or all of the functionality of the mobile device 160 is incorporated into the vehicle system 110 . Incorporating the functionality of the mobile device 160 into the vehicle system 110 permits the occupant to utilize the object identification system 100 even if the occupant does not have access to a mobile device or if a battery of the mobile device is not sufficiently charged to permit use of the mobile device.
- the vehicle system 110 includes an electronic control unit (ECU) 120 configured to receive data from an occupant monitoring camera 112 , a front camera 114 , a global positioning system (GPS) 116 and a map 118 .
- the ECU 120 includes a gaze detector 122 configured to receive data from the occupant monitoring camera 112 and detect a gaze direction and/or a gaze depth based on the received data.
- the ECU 120 further includes an attention area recognizer 124 configured to determine a position of a gaze of the occupant.
- the ECU 120 further includes a localization unit 126 configured to receive data from the GPS 116 and the map 118 and determine a position of the vehicle and a pose and state of the vehicle relative to detected and/or known objects and/or road position.
- a pose is an orientation of the vehicle relative to a reference point, such as a roadway.
- the position of the vehicle also refers to a position vector of the vehicle.
- the pose and state of the vehicle refers to a speed and a heading of the vehicle.
- the pose and state of the vehicle also refers to a velocity vector, an acceleration vector and jerk vector of the vehicle.
- the position vector, the velocity vector, the acceleration vector and the jerk vector each include a corresponding angle vector.
- the state of the vehicle also refers to whether an engine or motor of the vehicle is running.
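The pose and state quantities enumerated above can be gathered into a single record. The sketch below uses illustrative field names (not the patent's schema) and adds a finite-difference helper of the kind that could estimate, for example, the jerk vector from successive acceleration samples.

```python
from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]


@dataclass
class VehicleState:
    """Pose and state of the vehicle; field names are illustrative."""
    position: Vec3        # position vector in world coordinates
    heading_deg: float    # heading of the vehicle
    speed_mps: float      # scalar speed
    velocity: Vec3        # velocity vector
    acceleration: Vec3    # acceleration vector
    jerk: Vec3            # jerk vector (rate of change of acceleration)
    engine_running: bool  # whether the engine or motor is running


def finite_difference(prev: Vec3, curr: Vec3, dt: float) -> Vec3:
    """Estimate a derivative from two samples dt seconds apart,
    e.g. jerk ≈ (a(t) − a(t − dt)) / dt."""
    return tuple((c - p) / dt for c, p in zip(curr, prev))
```

Keeping the full chain of derivatives (velocity, acceleration, jerk) lets the later log-correlation step reconstruct where the vehicle was, and how fast its viewpoint was changing, at the moment a request was initiated.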
- the ECU 120 further includes a log collector 128 configured to receive information from the front camera 114 , the localization unit 126 and a data collection requester 132 and to combine the data collection request from the occupant with the corresponding sensor data from the vehicle system 110 in order to compile log data usable by the server 140 to identify the object of interest.
- the ECU 120 further includes a request receiver 130 configured to receive a data request from the mobile device 160 . In some embodiments where the functionality of the mobile device 160 is integrated with the vehicle system 110 , the request receiver 130 is omitted.
- the ECU 120 further includes a data collection requester 132 configured to receive gaze data and area of interest information from the attention area recognizer 124 and occupant request information from the request receiver 130 .
- the data collection requester 132 is configured to correlate the received information to generate instructions for the log collector 128 to collect data relevant to the occupant request information from sensors, such as front camera 114 , of the vehicle.
- the ECU 120 further includes a log transmitter 134 configured to receive the log data from the log collector 128 and transmit the log data to the server 140 .
- the occupant monitoring camera 112 is configured to capture images of a driver, or other occupant, of the viewing vehicle.
- the occupant monitoring camera 112 is connected to the vehicle.
- the occupant monitoring camera 112 includes a visible light camera.
- the occupant monitoring camera 112 includes an infrared (IR) camera or another suitable sensor.
- the occupant monitoring camera 112 is movable relative to the vehicle in order to capture images of at least one eye of occupants of different sizes.
- While capturing images of both eyes of the occupant is preferred, some occupants have only a single eye, and in some instances where a head of the occupant is turned away from the occupant monitoring camera 112, only one of the occupant's eyes is capturable by the occupant monitoring camera 112. In some embodiments, the occupant monitoring camera 112 is adjusted automatically. In some embodiments, the occupant monitoring camera 112 is manually adjustable. In some embodiments, the captured image includes at least one eye of the occupant. In some embodiments, the captured image includes additional information about the occupant, such as approximate height, approximate weight, hair length, hair color, clothing or other suitable information. In some embodiments, the occupant monitoring camera 112 includes multiple image capturing devices for capturing images of different regions of the occupant.
- occupant monitoring cameras 112 are located at different locations within the vehicle. For example, in some embodiments, a first occupant monitoring camera 112 is located proximate a rear-view mirror in a central region of the vehicle; and a second occupant monitoring camera 112 is located proximate a driver-side door.
- the data from the occupant monitoring camera 112 includes a timestamp or other metadata to help with synchronization with other data.
- the vehicle system 110 includes additional cameras for monitoring multiple occupants.
- Each of the additional cameras are similar to the occupant monitoring camera 112 described above.
- one or more monitoring cameras are positioned in the vehicle for capturing images of at least one eye of a front-seat passenger.
- one or more monitoring cameras are positioned in the vehicle for capturing images of at least one eye of a rear-seat passenger.
- the additional cameras are only activated in response to the vehicle detecting a corresponding front-seat passenger or rear-seat passenger.
- an operator of the vehicle is able to selectively de-activate the additional cameras.
- the captured images are still sent to the gaze detector 122 ; and the gaze detector 122 is able to generate a gaze result for each of the monitored occupants of the vehicle.
- the front camera 114 is configured to capture images of an environment surrounding the vehicle.
- the front camera 114 includes a visible light camera or an IR camera.
- the front camera 114 is replaced with or is further accompanied by a light detection and ranging (LiDAR) sensor, a radio detection and ranging (RADAR) sensor, a sound navigation and ranging (SONAR) sensor or another suitable sensor.
- the front camera 114 includes additional cameras located at other locations on the vehicle. For example, in some embodiments, additional cameras are located on sides of the vehicle in order to detect a larger portion of the environment to the left and right of the viewing vehicle.
- additional cameras are located on a back side of the vehicle in order to detect a larger portion of the environment to a rear of the vehicle. This information helps to capture additional objects that vehicle occupants other than the driver are able to view out of a rear window.
- the front camera 114 is also able to capture images for determining whether any obstructions, such as medians or guard rails, are present between a location of an object and the occupants of the viewing vehicle.
- the data from the front camera 114 includes a timestamp or other metadata in order to help synchronize the data from the front camera 114 with the data from the occupant monitoring camera 112 .
- the GPS 116 is configured to determine a location of the vehicle. Knowing the location of the viewing vehicle helps to relate the object and the direction that drew the attention of the occupants with the objects and areas that are related to determined locations on the map 118 . Knowing the heading of the vehicle helps to predict which direction an occupant of the vehicle is looking in order to assist with generation of gaze data. Knowing a speed of the viewing vehicle helps to determine how long an occupant of the vehicle had an opportunity to view an object of interest. For example, in some embodiments, by the time the occupant initiates a request, the vehicle has moved past the object of interest or a position of the vehicle relative to the object of interest has changed. As a result, knowing the location of the vehicle at different times helps with correlating occupant requests with objects of interest.
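Correlating an occupant request with where the vehicle was at initiation time amounts to a timestamped position lookup, since the vehicle may have moved past the object by the time the request is received. A minimal sketch, assuming GPS fixes are buffered as they arrive (class and field names are illustrative):

```python
import bisect


class PositionHistory:
    """Buffer of timestamped GPS fixes; looks up where the vehicle was
    when the occupant initiated the request."""

    def __init__(self):
        self._times = []  # monotonically increasing timestamps
        self._fixes = []  # (lat, lon, heading_deg, speed_mps) per timestamp

    def record(self, t: float, fix: tuple) -> None:
        """Append a fix; assumes timestamps arrive in order."""
        self._times.append(t)
        self._fixes.append(fix)

    def at(self, t: float) -> tuple:
        """Return the most recent fix at or before time t."""
        i = bisect.bisect_right(self._times, t) - 1
        if i < 0:
            raise LookupError("no fix recorded before requested time")
        return self._fixes[i]
```

With the heading and speed stored alongside each fix, the same lookup supports estimating how long the occupant had an opportunity to view the object before initiating the request.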
- the map 118 includes information related to the roadway and known objects along the roadway. In some embodiments, the map 118 is usable in conjunction with the GPS 116 to determine a location and a heading of the vehicle. In some embodiments, the map 118 is received from an external device, such as the server 140 . In some embodiments, the map 118 is periodically updated based on information from the front camera 114 and/or the GPS 116 . In some embodiments, the map 118 is periodically updated based on information received from the external device. In some embodiments, the map 118 is generated from sensor data by simultaneous localization and mapping (SLAM) algorithm.
- SLAM simultaneous localization and mapping
- the gaze detector 122 is configured to receive data from the occupant monitoring camera 112 and generate a detected gaze result.
- the detected gaze result includes a direction that the eyes of the driver are looking.
- the direction includes an azimuth angle and an elevation angle. Including azimuth angle and elevation angle allows a determination of a direction that the driver is looking both parallel to a horizon and perpendicular to the horizon.
- the detected gaze result further includes depth information. Depth information is an estimated distance from the driver at which the visual axes of the driver's eyes converge. Including depth information allows a determination of a distance between the driver and an object on which the driver is focusing a gaze. Combining depth information along with azimuth angle and elevation angle increases a precision of the detected gaze result.
- determining depth information is difficult in some instances, so only the azimuth angle and the elevation angle are determined by the gaze detector 122 .
- the gaze detector 122 is further configured to receive data from the front camera 114 and to associate the detected gaze with a pixel location of an image from the front camera 114 based on the azimuth angle and elevation angle.
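The association between a gaze direction and a pixel location can be sketched with a pinhole-camera model. This is an illustrative sketch, not the patent's method: it assumes the azimuth and elevation angles have already been transformed into the front camera's frame, and the intrinsics (fx, fy, cx, cy) are hypothetical parameters.

```python
import math


def gaze_to_pixel(azimuth_deg: float, elevation_deg: float,
                  fx: float, fy: float, cx: float, cy: float):
    """Project a gaze direction onto the front-camera image plane.

    Pinhole model with the gaze expressed in the camera frame:
        u = cx + fx * tan(azimuth)
        v = cy - fy * tan(elevation)   # image v grows downward
    The rigid transform from the occupant's head pose to the camera
    frame is assumed to have been applied beforehand.
    """
    u = cx + fx * math.tan(math.radians(azimuth_deg))
    v = cy - fy * math.tan(math.radians(elevation_deg))
    return u, v
```

A gaze straight down the camera axis (azimuth and elevation both zero) maps to the principal point (cx, cy); positive azimuth moves the pixel to the right, positive elevation moves it up.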
- the gaze detector 122 is not attached to the vehicle. In some embodiments, the gaze detector 122 is attached to the occupant of the viewing vehicle.
- the gaze detector 122 includes smart glasses, another piece of smart clothing or other such device that is capable of determining gaze information of a wearer.
- gaze data is able to be collected from pedestrians, people riding bicycles or other people that are not in a vehicle. The object identification system 100 is able to utilize this gaze data in order to help identify objects of interest.
- the front camera 114 and the localization unit 126 are still used in combination with the gaze detector 122 .
- the attention area recognizer 124 is configured to receive gaze data from the gaze detector 122 and further refine the gaze data to identify an area of a visible field of the occupant that is a focus of the occupant. Based on the received gaze data, the attention area recognizer 124 identifies a position relative to the vehicle where the occupant's attention is directed. In some embodiments, the attention area recognizer 124 is further configured to receive information from the front camera 114 and identifies pixel regions from captured images of the front camera 114 where the attention of the occupant is directed. The attention area recognizer 124 helps to reduce an amount of data in the log data collected by the log collector 128 to reduce processing load on the ECU 120 .
- the localization unit 126 is configured to receive information from the GPS 116 and the map 118 and to determine a location of the vehicle in the world coordinate system or a location of the vehicle relative to the objects on the map 118 and other known objects. In some embodiments, the localization unit 126 is usable to determine a heading and a speed of the vehicle. The localization unit 126 is also configured to determine state information for the vehicle. In some embodiments, the state information includes a speed of the vehicle. In some embodiments, the state information includes a velocity vector of the vehicle. In some embodiments, the state information includes a heading of the vehicle. In some embodiments, the state information includes an acceleration vector of the vehicle. In some embodiments, the state information includes a jerk vector of the vehicle. In some embodiments, the state information includes whether an engine or motor of the vehicle is running. In some embodiments, the state information includes other status information related to the vehicle, such as operation of windshield wipers, etc.
- the log collector 128 is configured to receive an image from the front camera 114 , state information from the localization unit 126 and occupant request information from the data collection requester 132 .
- the log collector 128 is configured to correlate the received data to determine what portion of the image from the front camera 114 was being observed by the occupant at the time that the occupant request was initiated.
- the log collector 128 is also configured to determine what information is being sought by the occupant, such as object identification, directions to the object, or other suitable information.
- the log collector 128 determines the portion of the image captured by the front camera 114 based on the gaze data analyzed by the attention area recognizer 124 and the data collection requester 132 .
- Based on the analyzed gaze data, the log collector 128 is able to crop the image from the front camera 114 in order to reduce an amount of data to be transmitted to the server for analysis.
- the log collector 128 uses the state information from the localization unit 126 to complement the analyzed gaze data to help with precision in the image cropping.
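The cropping step above can be sketched as follows, assuming the attention area has already been resolved to a rectangular pixel region; the function name and region format are illustrative assumptions, not from the disclosure.

```python
def crop_to_attention(image, region):
    """Crop a row-major image (list of rows) to the attention region.

    `region` is (row_min, row_max, col_min, col_max) -- a hypothetical
    format; the disclosure does not specify one. Cropping reduces the
    amount of data transmitted to the server for analysis.
    """
    r0, r1, c0, c1 = region
    return [row[c0:c1] for row in image[r0:r1]]
```

In practice the region would come from the attention area recognizer, refined by the vehicle state information for precision.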
- the log collector 128 generates log data based on the received and correlated data, such as the cropped image and requested data.
- the log collector 128 also associates timestamp information with the log data in order to assist with synchronization of the collected data and for queue priority within the server 150 .
- the log collector 128 generates the log data to further include world coordinates associated with the cropped image.
- the log collector 128 generates the log data to further include a map location associated with the cropped image.
- the log collector 128 includes additional information to assist in increasing accuracy of responding to the occupant request.
- the log collector 128 is not limited solely to generating log data based on images.
- the log collector 128 is configured to generate log data based on information from other sensors attached to the vehicle, such as RADAR, LiDAR, or other suitable sensors.
- the log collector 128 can generate log data based on point cloud data received from LiDAR instead of the image data.
- point cloud data includes a set of data points in space that are usable to represent a three-dimensional shape or object based on a distance of each point from the detector.
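A minimal sketch of the point cloud representation described above, computing each point's distance from the detector placed at the origin; the function name is an assumption.

```python
import math

def point_distances(points):
    """Distance of each (x, y, z) point from the detector at the origin.

    Point cloud data represents a 3-D shape or object through the
    distance of each point from the detector, as described above.
    """
    return [math.sqrt(x * x + y * y + z * z) for x, y, z in points]
```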
- the log collector 128 is further configured to generate the log data based on information received from the smart glasses.
- the request receiver 130 is configured to receive a request from the mobile device 160 . In some embodiments where the functionality of the mobile device 160 is incorporated into the vehicle system 110 , the request receiver 130 is omitted and the request is transferred directly to the data collection requester 132 . In some embodiments, the request receiver 130 is configured to receive the request wirelessly. In some embodiments, the request receiver 130 is configured to receive the request via a wired connection. In some embodiments, the request receiver 130 is configured to receive a request initiation prior to receiving the request.
- the request receiver 130 in response to receiving a request initiation, is configured to notify the data collection requester to initiate data collection at the log collector 128 to help ensure that information from the vehicle sensors, such as the front camera 114 , is stored for generation of log data. In some embodiments, the request receiver 130 is further configured to receive the request including identification information for the occupant making the request and timestamp information for when the request was made. In some embodiments, the request receiver 130 is configured to receive information related to an identity of the occupant making the request.
- the data collection requester 132 is configured to correlate the occupant request with region of interest (ROI) information from the attention area recognizer 124 .
- the data collection requester 132 is configured to convert the occupant request and ROI information into instructions usable by the log collector 128 to collect information for satisfying the occupant request.
- the data collection requester 132 is configured to determine what sensors are available to capture information related to a certain region of the environment surrounding the vehicle.
- the data collection requester 132 is configured to identify what types of sensors the log collector 128 should use to satisfy the occupant request.
- the data collection requester 132 is further configured to identify a timestamp of the occupant request to allow the log collector 128 to accurately collect data from the relevant sensors on the vehicle.
- the log transmitter 134 is configured to receive log data from the log collector 128 and transmit the log data to the server 140 .
- the log transmitter 134 is configured to transmit the log data wirelessly.
- the log transmitter 134 is configured to transmit the log data via a wired connection.
- the log transmitter 134 is configured to transmit the log data to the mobile device 160 , which in turn is configured to transmit the log data to the server 140 .
- the log transmitter 134 is configured to transmit the log data to the mobile device 160 using Bluetooth® or another suitable wireless technology.
- the ECU 120 is configured to determine whether the data transfer rate from the mobile device 160 to the server 140 is higher than a transfer rate from the log transmitter 134 to the server 140 .
- In response to a determination that the data transfer rate from the mobile device 160 to the server 140 is higher, the log transmitter 134 is configured to transmit the log data to the mobile device 160 to be transmitted to the server 140 . In response to a determination that the data transfer rate from the mobile device 160 to the server 140 is not higher, the log transmitter 134 is configured to transmit the log data to the server 140 from the vehicle system 110 directly without transferring the log data to the mobile device 160 .
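The transfer-rate comparison described above reduces to a simple routing decision. This sketch uses hypothetical names and treats the two rates as already measured.

```python
def choose_uplink(rate_via_mobile_bps, rate_direct_bps):
    """Pick the path for the log data.

    Mirrors the comparison above: route through the mobile device only
    when its link to the server is strictly faster than the vehicle
    system's direct link.
    """
    return "via_mobile" if rate_via_mobile_bps > rate_direct_bps else "direct"
```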
- the vehicle system 110 further includes a memory configured to store sensor data from sensors attached to the vehicle.
- the memory is further configured to store information associated with previous occupant requests.
- in response to determining that the occupant request matches a previous occupant request, the data collection requester 132 is configured to provide results from the matching previous occupant request to the occupant 180 .
- the previous requests are stored as cache data.
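The cache of previous occupant requests might be sketched as below. Keying on normalized request text plus a coarsened location is one plausible design choice; the disclosure does not specify a keying scheme, so every name here is an assumption.

```python
class RequestCache:
    """Hypothetical cache of previous occupant requests and their results."""

    def __init__(self):
        self._cache = {}

    @staticmethod
    def _key(request_text, location):
        lat, lon = location
        # Round the location so nearby requests hit the same entry
        # (an assumption; ~100 m at 3 decimal places).
        return (request_text.strip().lower(), round(lat, 3), round(lon, 3))

    def lookup(self, request_text, location):
        """Return the cached result for a matching previous request, or None."""
        return self._cache.get(self._key(request_text, location))

    def store(self, request_text, location, result):
        self._cache[self._key(request_text, location)] = result
```

On a cache hit, the data collection requester could answer the occupant directly without generating new log data.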
- the server 140 includes a log data receiver 142 configured to receive the log data from the log transmitter 134 .
- the log data receiver 142 is configured to receive the log data from the mobile device 160 .
- the server 140 further includes a log storer 144 configured to store the received log data.
- the server 140 further includes a log analyzer 146 configured to receive the log data from the log storer 144 and information from a database 148 to identify an object of interest and/or provide information related to the object of interest.
- the server 140 further includes a database 148 configured to store information about objects.
- the server 140 further includes an analysis result transmitter 150 configured to transmit the results of the log analyzer 146 to the mobile device 160 .
- the server 140 further includes a log transmitter 152 configured to transmit log identification information to the mobile device 160 .
- the log data receiver 142 is configured to receive the log data from the log transmitter 134 . In some embodiments, the log data receiver 142 is configured to receive the log data from the mobile device 160 . In some embodiments, the log data receiver 142 is configured to receive the log data wirelessly. In some embodiments, the log data receiver 142 is configured to receive the log data via a wired connection. In some embodiments, the log data receiver 142 is configured to attach a timestamp for a time that the log data was received to the log data.
- the log storer 144 is configured to store the received log data for analysis.
- the log storer 144 includes a solid-state memory device.
- the log storer 144 includes a dynamic random-access memory (DRAM).
- the log storer 144 includes a non-volatile memory device.
- the log storer 144 includes cloud-based storage or another suitable storage structure.
- the log storer 144 is configured to store the log data in a queue based on priority. In some embodiments, the priority is based on a timestamp of when the server 140 received the log data. In some embodiments, the priority is based on a timestamp of when the occupant request was received.
- the priority is based on a size of the log data. In some embodiments, the priority is based on an identity of the occupant 180 . For example, in some embodiments, the occupant has an account with a service offered on the server 140 for prioritizing fulfillment of occupant requests. In some embodiments, other criteria are used to determine a priority of the log data in the queue. In some embodiments, log data is removed from the log storer 144 following analysis of the log data by the log analyzer 146 . In some embodiments, log data is not protected from over-writing in the log storer 144 following analysis of the log data by the log analyzer 146 .
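The priority-based queue in the log storer 144 can be sketched with a binary heap. How the numeric priority is derived (receipt timestamp, request timestamp, data size, occupant account tier) is left to the deployment; this sketch simply accepts a precomputed value.

```python
import heapq
import itertools

class LogQueue:
    """Priority queue for log data; lower priority value is analyzed first."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker preserves arrival order

    def push(self, log, priority):
        heapq.heappush(self._heap, (priority, next(self._counter), log))

    def pop(self):
        """Remove and return the highest-priority log data for analysis."""
        return heapq.heappop(self._heap)[2]
```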
- the log analyzer 146 is configured to receive log data from the log storer 144 and determine whether the occupant request of the log data matches any records stored in the database 148 .
- the log analyzer 146 includes a trained neural network (NN) to compare the log data with known objects from the database 148 . Once a match between the log data and a known object in the database 148 is found, then the log analyzer 146 determines the requested data from the log data, such as object identification, object hours of operation, historical information of the object, etc.
- the log analyzer 146 extracts information from the database 148 that satisfies the requested data and transfers the extracted information to the analysis result transmitter 150 . In some embodiments, the extracted information is transferred to the analysis result transmitter along with identification information for the log data.
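Matching log data against known objects in the database 148 could, for example, compare feature vectors produced by the trained NN. The cosine-similarity threshold and record format below are illustrative assumptions, not details from the disclosure.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def best_match(query_feature, database, threshold=0.8):
    """Return the database record best matching the query feature,
    or None when nothing clears the threshold (the no-match case)."""
    best = max(database, key=lambda rec: cosine(query_feature, rec["feature"]))
    return best if cosine(query_feature, best["feature"]) >= threshold else None
```

This also fits the variant above in which the database stores NN-generated feature maps rather than image data.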
- the database 148 is configured to store information related to objects in association with a location of the object and an image of the object.
- the database 148 includes a solid-state memory device.
- the database 148 includes a dynamic random-access memory (DRAM).
- the database 148 includes a non-volatile memory device.
- the database 148 includes a relational database (RDB).
- the database 148 includes a Key Value Store (KVS).
- the database 148 includes a NoSQL database.
- the database 148 includes cloud-based storage or another suitable storage structure.
- the database 148 is integral with the log storer 144 .
- the database 148 is separate from the log storer 144 . In some embodiments, the database 148 is configured to store information related to analysis results for previous occupant requests. In some embodiments, the log analyzer 146 is able to retrieve the results from the previous occupant requests in response to a determination that the log data matches a previous occupant request. In some embodiments, the database 148 stores a feature map that is generated by the NN instead of storing image data.
- the analysis result transmitter 150 is configured to receive the information satisfying the occupant request from the log analyzer 146 .
- the analysis result transmitter 150 is configured to transmit the information to the mobile device 160 .
- the analysis result transmitter 150 is configured to transmit the information to the vehicle system 110 instead of or in addition to the mobile device 160 .
- the server 140 is configured to determine whether the data transfer rate from the server 140 to the mobile device 160 is higher than a transfer rate from server 140 to the vehicle system 110 . In response to a determination that the data transfer rate from the server 140 to the mobile device 160 is higher, the analysis result transmitter 150 is configured to transmit the information to the mobile device 160 to be transmitted to the vehicle system 110 .
- the analysis result transmitter 150 is configured to transmit the information to the vehicle system 110 directly without the information going through the mobile device 160 .
- the analysis result transmitter 150 is configured to transfer the information wirelessly.
- the analysis result transmitter 150 is configured to transmit the information via a wired connection.
- the analysis result transmitter 150 is configured to transmit identification information for the log data associated with the information as well. Transmitting the identification information for the log data helps the mobile device 160 or the vehicle system 110 to display both the data request and the analysis result to the occupant.
- the log transmitter 152 is configured to transmit information related to the processing of the log data by the server 140 . In some embodiments, the log transmitter 152 transmits the information to the mobile device 160 . In some embodiments, the log transmitter 152 transmits the information to the vehicle system 110 . In some embodiments, the server 140 is configured to determine whether the data transfer rate from the server 140 to the mobile device 160 is higher than a transfer rate from server 140 to the vehicle system 110 . In response to a determination that the data transfer rate from the server 140 to the mobile device 160 is higher, the log transmitter 152 is configured to transmit the information to the mobile device 160 to be transmitted to the vehicle system 110 .
- the log transmitter 152 is configured to transmit the information to the vehicle system 110 directly without the information going through the mobile device 160 .
- the log transmitter 152 is configured to transmit the log data to the mobile device 160 and/or the vehicle system 110 for review by the occupant.
- the log transmitter 152 is configured to transmit identification information for the log data to the mobile device 160 and/or the vehicle system 110 in response to the log analyzer 146 taking the log data out of the queue in the log storer 144 .
- the log transmitter 152 transmits the information wirelessly.
- the log transmitter 152 transmits the information via a wired connection.
- the mobile device 160 includes a log receiver 162 configured to receive information from the log transmitter 152 .
- the mobile device further includes an analysis result receiver 164 configured to receive information from the analysis result transmitter 150 .
- the mobile device 160 further includes a UI 166 configured to convey information to the occupant 180 based on the information received from the log transmitter 152 and the analysis result transmitter 150 .
- the UI 166 is further configured to receive input information from the occupant 180 .
- the mobile device 160 further includes a microphone 168 configured to receive request initiation information and request data from the occupant 180 .
- the mobile device 160 further includes a voice recognizer 170 configured to analyze the data received by the microphone 168 and determine a content of the request initiation information and the request data.
- the mobile device 160 further includes a request transmitter 172 configured to transmit the request data to the request receiver 130 .
- the log receiver 162 is configured to receive information from the log transmitter 152 . In some embodiments, the log receiver 162 is configured to receive the information wirelessly. In some embodiments, the log receiver 162 is configured to receive the information via a wired connection.
- the analysis result receiver 164 is configured to receive information from the analysis result transmitter 150 . In some embodiments, the analysis result receiver 164 is configured to receive the information wirelessly. In some embodiments, the analysis result receiver 164 is configured to receive the information via a wired connection.
- the UI 166 is configured to receive information from the log receiver 162 and the analysis result receiver 164 .
- the UI 166 is configured to convey the received information to the occupant 180 .
- the UI 166 includes a touchscreen.
- the UI 166 is part of a smartphone.
- the UI 166 is integrated into a vehicle including the vehicle system 110 .
- the UI 166 is configured to receive input from the occupant 180 .
- the UI 166 is configured to receive an input indicating an identity of the occupant 180 .
- the UI 166 is configured to receive an input corresponding to a data request from the occupant 180 .
- the microphone 168 is configured to capture audio signals from the occupant 180 .
- the microphone 168 is part of a smartphone.
- the microphone 168 is integral with a vehicle including the vehicle system 110 .
- the microphone 168 includes a directional microphone.
- the microphone 168 is configured to capture a voice of the occupant 180 .
- the voice recognizer 170 is configured to receive an audio signal from the microphone 168 and determine a content of the audio signal. In some embodiments, the voice recognizer 170 is configured to determine whether the audio signal indicates a request initiation, such as a keyword or key phrase. In some embodiments, the voice recognizer 170 is configured to determine a type of data requested by the occupant 180 , such as identifying an object, information about an object, etc. In some embodiments, the voice recognizer 170 is further configured to determine an identity of the occupant 180 . In some embodiments, the voice recognizer 170 is configured to determine the identity of the occupant 180 based on voice recognition software.
- the voice recognizer 170 is configured to determine the identity of the occupant 180 based on an identifying keyword or key phrase, such as an occupant name or other identifying information. In some embodiments, the voice recognizer 170 is configured to determine the identity of the occupant 180 based on an input received at the UI 166 . In some embodiments, the voice recognizer 170 is configured to determine the identity of the occupant 180 based on an input from the vehicle system 110 , such as an image of the occupant that is speaking from the occupant monitoring camera 112 .
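Detection of a request initiation by keyword or key phrase can be sketched as a simple substring check over the recognized transcript. The phrases shown are placeholders, not taken from the disclosure, and a production voice recognizer would operate on audio rather than text.

```python
def detect_initiation(transcript, keywords=("hey car", "what is that")):
    """Return True if the transcribed audio contains a request-initiation
    keyword or key phrase. Phrases are illustrative placeholders."""
    text = transcript.lower()
    return any(keyword in text for keyword in keywords)
```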
- the request initiation includes an input received at the UI 166 .
- the request initiation includes a detected gesture, such as a gesture detected using occupant monitoring camera 112 .
- the request initiation includes a combination of different inputs, such as an input at the UI 166 and a verbal input, or a recognition result of a face of the occupant, or a recognition result of an iris of an eye of the occupant by the gaze detector 122 or other suitable combinations.
- Inclusion of a request initiation as part of an occupant request helps to minimize unnecessary processing and data transmission which helps to minimize processing load and power consumption for the vehicle including the vehicle system 110 .
- in embodiments where the vehicle is an electric vehicle (EV), minimizing power consumption becomes a greater concern in order to maintain battery charge and maximize a distance that the EV is able to travel without re-charging.
- the request transmitter 172 is configured to receive request information from the voice recognizer 170 and transmit information to the request receiver 130 . In some embodiments, the request transmitter 172 is configured to transmit a request initiation signal in response to the voice recognizer 170 identifying a request initiation. In some embodiments, the request transmitter 172 does not send a signal in response to the voice recognizer 170 identifying a request initiation. Sending a signal in response to a request initiation helps the vehicle system 110 to store sensor data to improve accuracy and precision of satisfying the occupant request. However, sending the signal in response to a request initiation increases an amount of data transmitted and processing load. The request transmitter 172 is configured to transmit the occupant request based on the analysis by the voice recognizer 170 . In some embodiments, the request transmitter 172 is configured to transmit the occupant request or other information wirelessly. In some embodiments, the request transmitter 172 is configured to transmit the occupant request or other information via a wired connection.
- the microphone 168 and the voice recognizer 170 are omitted and occupant requests, including request initiation, are received through the UI 166 .
- results of the analysis by the server 140 transmitted to the mobile device 160 cause an alert, such as an audio or visual alert, to automatically display on the mobile device 160 .
- FIG. 2 is a flowchart of a method 200 of identifying an object in accordance with some embodiments.
- the method 200 is implemented using system 100 ( FIG. 1 ).
- the method 200 is implemented using system 1100 ( FIG. 11 ).
- initiating the request helps to avoid unnecessary processing load on the mobile device 160 , the vehicle system 110 and the server 140 by avoiding processing inadvertently triggered occupant requests.
- initiating the request includes the occupant 180 speaking a keyword or a key phrase, e.g., detected by the microphone 168 ( FIG. 1 ).
- initiating the request includes the occupant touching a button, e.g., on UI 166 ( FIG. 1 ).
- initiating the request includes the mobile device 160 or the vehicle system 110 detecting, e.g., using the occupant monitoring camera 112 ( FIG. 1 ), a predetermined gesture by the occupant 180 .
- the mobile device 160 activates a request receiver in operation 220 , and the occupant 180 is able to input a request in operation 212 .
- the occupant 180 inputs the request.
- the request is the information that the occupant 180 would like to know about an object of interest.
- the request includes identifying information about the object.
- the request includes other information about the object, such as hours of operation, directions to the object, historical information about the object, or other suitable information.
- the occupant 180 inputs the request verbally, e.g., detected by the microphone 168 ( FIG. 1 ).
- the occupant 180 inputs the request using a UI, e.g., UI 166 ( FIG. 1 ).
- the occupant 180 inputs the request using a predetermined gesture, e.g., detected by occupant monitoring camera 112 ( FIG. 1 ).
- a manner of initiating a request and inputting the request are the same, e.g., both initiation and inputting are performed verbally by the occupant. In some embodiments, a manner of initiating the request and inputting the request are different, e.g., initiation is performed using a UI and inputting is performed verbally. Other combinations of initiation and inputting of requests are within the scope of this disclosure.
- the mobile device 160 activates a request receiver in operation 220 .
- Activating the request receiver in response to initiating the request helps the mobile device 160 to conserve power by avoiding having the request receiver be constantly monitoring for requests from the occupant 180 .
- activating the request receiver includes displaying an input screen on a UI, e.g., UI 166 ( FIG. 1 ).
- activating the request receiver includes initializing a microphone, e.g., microphone 168 ( FIG. 1 ).
- activating the request receiver includes activating circuitry within the mobile device 160 that will process a received request.
- the operation 220 is repeated until the mobile device 160 receives an input request in operation 212 .
- if no input request is received within a predetermined time period, e.g., 10 seconds to 30 seconds, the operation 220 is discontinued and the request receiver returns to a sleep or low power state. If the predetermined time period is too long, then power consumption is unnecessarily increased, in some instances. If the predetermined time period is too short, the occupant 180 will not have sufficient time to input the request in operation 212 , in some instances.
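The predetermined-time-period behavior above can be sketched as a polling loop with a deadline; the poll callback, the default timeout, and the interval are illustrative assumptions.

```python
import time

def await_request(poll, timeout_s=20.0, interval_s=0.1):
    """Poll for an input request until one is received or the predetermined
    time period (e.g., 10 s to 30 s) elapses; returns None on timeout so the
    request receiver can return to a sleep or low power state."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        request = poll()
        if request is not None:
            return request
        time.sleep(interval_s)
    return None
```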
- the operation 220 is discontinued in response to receipt of a cancellation signal, e.g., triggered by a keyword, key phrase, an input to the UI, or other suitable input.
- the mobile device 160 receives the request from operation 212 .
- the request is received directly from the occupant 180 .
- the request is received indirectly from the occupant 180 via an external device, such as a keyboard or another suitable external device.
- the operation 212 and the operation 222 are implemented using a same component of the mobile device 160 , e.g., the microphone 168 or the UI 166 ( FIG. 1 ).
- the request is analyzed and transmitted.
- the request is analyzed to determine the type of data requested by the occupant 180 .
- the request is analyzed using the voice recognizer 170 ( FIG. 1 ).
- the analyzed request is transmitted to the vehicle system 110 in order to collect log data for satisfying the request.
- the analyzed request is transmitted using the request transmitter 172 ( FIG. 1 ).
- the analyzed request is received by the vehicle system 110 .
- the analyzed request is received wirelessly.
- the analyzed request is received via a wired connection.
- the analyzed request is received using the request receiver 130 ( FIG. 1 ).
- one or more images of the occupant are captured.
- the captured images are associated with timestamp data to determine a time at which the one or more images were captured.
- the one or more images of the occupant capture at least one eye of the occupant.
- images of the occupant are captured at regular intervals.
- images of the occupant are captured in response to receiving a signal indicating that a request has been initiated, e.g., a signal from the mobile device 160 to the vehicle system 110 as part of operation 220 .
- the one or more images of the occupant are captured using the occupant monitoring camera 112 ( FIG. 1 ). In some embodiments, only images of the occupant associated with an occupant request are captured.
- images of more than one occupant of a vehicle are captured and only images of the occupant associated with the occupant request are used to generate request data later in method 200 .
- the operation 232 is performed in response to a signal generated in operation 220 . In some embodiments, operation 232 is performed independent of receipt of initiating a request.
- the occupant gaze is detected based on the one or more images captured in operation 232 .
- Detecting the gaze of the occupant includes identifying angles of the occupant's gaze relative to the vehicle. In some embodiments, the angles include the azimuth angle and the elevation angle. In some embodiments, detecting the gaze further includes determining a depth of the gaze relative to the vehicle position. In some embodiments, the operation 234 is implemented using the gaze detector 122 ( FIG. 1 ).
- an attention area is identified based on the detected gaze of the occupant from operation 234 .
- the attention area is identified to determine a ROI for the occupant 180 .
- the attention area is identified based on world coordinates.
- the attention area is identified based on pixel regions of an image captured by the vehicle, e.g., using front camera 114 ( FIG. 1 ).
- the attention area is identified based on relative coordinates with respect to the vehicle. Identifying the attention area helps to reduce an amount of data to be transmitted to the server 140 for processing.
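Identifying the attention area from the detected gaze can be sketched with a pinhole camera model that maps the azimuth and elevation angles (relative to the front camera axis) to a pixel location in the front-camera image. The resolution and field-of-view values are assumptions; a real system would use calibrated camera intrinsics.

```python
import math

def gaze_to_pixel(azimuth_deg, elevation_deg, width=1920, height=1080,
                  hfov_deg=90.0, vfov_deg=60.0):
    """Map gaze angles to a pixel under a simple pinhole model.

    Assumes the gaze angles are expressed relative to the front camera's
    optical axis; all camera parameters are illustrative.
    """
    fx = (width / 2) / math.tan(math.radians(hfov_deg / 2))
    fy = (height / 2) / math.tan(math.radians(vfov_deg / 2))
    u = width / 2 + fx * math.tan(math.radians(azimuth_deg))
    v = height / 2 - fy * math.tan(math.radians(elevation_deg))
    return int(round(u)), int(round(v))
```

The returned pixel could anchor the ROI that operation 242 later crops to.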
- the operation 236 is implemented using the attention area recognizer 124 ( FIG. 1 ).
- operations 232 - 236 are performed continually during operation of the vehicle.
- the information generated by the operations 232 - 236 is stored in a memory within the vehicle system 110 for analysis in response to receiving an occupant request.
- operations 232 - 236 are performed in response to receiving an initiate request signal as part of operation 220 .
- operations 232 - 236 are discontinued in response to a signal received indicating that operation 220 has been discontinued due to failure to receive a timely input request or in response to a cancellation input.
- a data collection request is generated based on the received analyzed request.
- the data collection request identifies information from the operations 232 - 236 that is usable to satisfy the received analyzed request.
- the data collection request identifies which sensors of the vehicle are usable to satisfy the received analyzed request.
- the data collection request also identifies a time period over which to collect the sensor data based on a timestamp of the received request in operation 222 .
- the operation 238 is implemented using the data collection requester 132 ( FIG. 1 ).
- sensor data is collected based on the data collection request.
- the sensor data is collected from a memory within the vehicle system 110 .
- the sensor data is collected from a single sensor.
- the sensor data is collected from multiple sensors.
- the sensor data is collected using log collector 128 ( FIG. 1 ).
- the sensor data collected in operation 240 is cropped. Cropping the sensor data reduces an amount of data to be transmitted to the server 140 .
- the term cropped here is used based on the sensor data being image data. However, one of ordinary skill in the art would understand that operation 242 is used to reduce superfluous data based on the identified attention area from operation 236 regardless of a type of sensor data being used.
- the operation 242 is implemented using log collector 128 ( FIG. 1 ).
- the cropped sensor data along with timestamp information is considered log data, in some embodiments.
- the log data is transmitted to the server 140 .
- the operation 244 is omitted and the results satisfying the received analyzed request are provided by the vehicle system 110 directly.
- the log data is transmitted wirelessly.
- the log data is transmitted via a wired connection.
- the operation 244 is implemented using the log transmitter 134 ( FIG. 1 ).
- the server 140 receives the log data.
- the operation 250 is implemented using log data receiver 142 ( FIG. 1 ).
- the log data is not transmitted to the server 140 and the operation 250 is omitted.
- log data is stored in the server 140 .
- the log data is stored for later processing by the server 140 .
- the log data is stored in a priority based queue.
- priority in the queue is based on a time that the log data is received by the server 140 .
- priority in the queue is based on a time that the occupant request was received, i.e., in operation 222 .
- priority in the queue is based on an identity of the occupant 180 .
- the log data is analyzed to determine a result that satisfies the occupant request in the log data.
- the log data is analyzed by comparing the data from the sensors of the vehicle with data in a database of the server 140 . Once a match between an object in the vehicle sensor data and the data in the database is found, the database is queried to retrieve information that satisfies the occupant request. For example, in some embodiments, the database is queried to determine identification information for the object, hours of operation for the object, a location of the object, etc. In some embodiments, the information from the database includes a web address for the occupant 180 to find information about the object. In some embodiments where no match between the vehicle sensor data and the data in the database is found, the operation 254 returns a result indicating that no match was found. In some embodiments, the operation 254 is implemented using the log analyzer 146 ( FIG. 1 ).
- the analysis result from operation 254 is transmitted.
- the analysis result is transmitted wirelessly.
- the analysis result is transmitted via a wired connection.
- the analysis result is transmitted to the mobile device 160 .
- the analysis result is transmitted to the vehicle system 110 instead of or in addition to the mobile device 160 .
- the operation 256 is implemented using the analysis result transmitter 150 ( FIG. 1 ).
- the mobile device 160 receives the analysis results.
- the analysis results include both the information from the database retrieved in operation 254 and log data identification information. Including the log data identification information along with the analysis results helps to expedite the analysis and provision of additional information about the object in a situation where the occupant requests more information about the object after receiving the analysis results.
- the operation 260 is implemented using analysis result receiver 164 ( FIG. 1 ).
- the occupant 180 is notified of the analysis results.
- the occupant is notified by providing the occupant 180 with a web address to access information about the object.
- the occupant is notified by providing the occupant 180 with the requested information about the object.
- the occupant 180 is notified using a visual notification.
- the occupant 180 is notified using an audio notification.
- the occupant is notified using UI 166 ( FIG. 1 ).
- the occupant 180 is notified by an alert, at least one of audio or visual, automatically appearing on the mobile device 160 in response to receiving the analysis results from the server 140 .
- the notification to the occupant 180 includes the vehicle sensor data, such as a cropped image, included as part of the log data to allow the occupant 180 to confirm that the received information corresponds to the intended object of interest.
- the notification to the occupant 180 includes a request for confirmation that the object of interest was correctly identified; and results of the request for confirmation are provided to the server 140 to help improve performance of log data analysis in operation 254 .
- the occupant gives feedback to at least one of the server 140 , the mobile device 160 , or the vehicle system 110 about whether the received results were actually relevant to the request that the occupant made or about whether the occupant liked the information.
- This feedback is used to train a neural network (NN) so that the log analyzer 146 , the attention area recognizer 124 , the data collection requester 132 , and the voice recognizer 170 are able to be tuned or trained to reduce false positives and false negatives over time.
- the method 200 includes updating of the database in the server 140 based on confirmation results from the occupant following notification of analysis results.
- at least one operation of the method 200 is omitted.
- the operation 242 is omitted if data transmission size is not a concern.
- an order of operations of the method 200 is changed.
- the operation 234 occurs after operation 230 to reduce processing load on the vehicle system 110 .
- FIG. 3 is a flowchart of a method 300 of identifying an object in accordance with some embodiments.
- the method 300 is implemented using system 100 ( FIG. 1 ).
- the method 300 is implemented using system 1100 ( FIG. 11 ).
- the method 300 is similar to the method 200 ( FIG. 2 ). Operations in method 300 that are similar to operations in method 200 have a same reference number. For the sake of brevity, only the operations of method 300 that are different from operations in method 200 are discussed below.
- the log data is analyzed and associated with object information for the object of interest.
- the log data is analyzed by comparing the data from the sensors of the vehicle with data in a database of the server 140 . Once a match between an object in the vehicle sensor data and the data in the database is found, a link to the object information in the database for the matching object is associated with the log data.
- the link allows the occupant 180 to access the database in the server 140 to obtain the requested information about the object.
- the link includes a Uniform Resource Locator (URL) that the occupant is able to open using the UI 166 (such as a web browser).
- the link permits the occupant 180 to obtain additional information about the object other than just the requested information.
- the log data is analyzed by comparing a feature map extracted by an NN from the vehicle sensor data with a feature map extracted by an NN from data in a database of the server 140 .
- the operation 305 is implemented using the log analyzer 146 ( FIG. 1 ).
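The feature-map comparison described for operation 305 can be illustrated with a cosine-similarity search over NN-extracted feature vectors. The threshold value, vector contents, and function names are assumptions for this sketch, not the disclosed implementation of the log analyzer 146.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def best_match(query_features, database_features, threshold=0.9):
    """Return the database key whose feature vector best matches the query,
    or None when no entry clears the similarity threshold (no-match case)."""
    best_key, best_score = None, threshold
    for key, features in database_features.items():
        score = cosine_similarity(query_features, features)
        if score > best_score:
            best_key, best_score = key, score
    return best_key

db = {"landmark_a": [0.9, 0.1, 0.0], "landmark_b": [0.0, 1.0, 0.2]}
assert best_match([0.92, 0.08, 0.01], db) == "landmark_a"
```

Once `best_match` returns a key, a link to that object's record in the database can be associated with the log data, as described above.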
- the link to access the log data and associated object information from operation 305 is transmitted.
- the link is transmitted wirelessly.
- the link is transmitted via a wired connection.
- the link is transmitted to the mobile device 160 .
- the link is transmitted to the vehicle system 110 instead of or in addition to the mobile device 160 .
- the operation 310 is implemented using the analysis result transmitter 150 ( FIG. 1 ).
- the mobile device 160 receives the link.
- the received transmission includes both the link for accessing the database and log data identification information. Including the log data identification information along with the link helps to expedite the analysis and provision of additional information about the object in a situation where the occupant requests more information about the object after receipt of the link and the link does not provide access to all information about the object stored in the database.
- the operation 320 is implemented using analysis result receiver 164 ( FIG. 1 ).
- the occupant 180 is notified of the link.
- the occupant is notified by providing the occupant 180 with a web address to access information about the object.
- the occupant is notified by providing the occupant 180 with a selectable icon for accessing the information about the object.
- the occupant 180 is notified using a visual notification.
- the occupant 180 is notified using an audio notification.
- the occupant is notified using UI 166 ( FIG. 1 ).
- the occupant 180 is notified by an alert, at least one of audio or visual, automatically appearing on the mobile device 160 in response to receiving the link from the server 140 .
- the notification to the occupant 180 includes the vehicle sensor data, such as a cropped image, included as part of the log data to allow the occupant 180 to confirm that the received information corresponds to the intended object of interest.
- the notification to the occupant 180 includes a request for confirmation that the object of interest was correctly identified; and results of the request for confirmation are provided to the server 140 to help improve performance of log data analysis in operation 305 .
- the method 300 includes updating of the database in the server 140 based on confirmation results from the occupant following notification of link.
- at least one operation of the method 300 is omitted.
- the operation 242 is omitted if data transmission size is not a concern.
- an order of operations of the method 300 is changed.
- the operation 234 occurs after operation 230 to reduce processing load on the vehicle system 110 .
- FIG. 4 is a view of a data structure 400 of an occupant request in accordance with some embodiments.
- the data structure 400 corresponds to the status of the occupant request received from the occupant 180 by the microphone 168 and processed by the voice recognizer 170 ( FIG. 1 ).
- the data structure 400 corresponds to occupant request received in operation 222 ( FIG. 2 ).
- the data structure 400 includes occupant identification information 405 .
- the occupant identification information 405 indicates an identity of the occupant that made the occupant request. In some embodiments, the occupant identification information 405 is determined based on analysis by the voice recognizer 170 ( FIG. 1 ). In some embodiments, the occupant identification information 405 is determined based on an input at the UI 116 ( FIG. 1 ). In some embodiments, the occupant identification information 405 is determined based on who has control of the mobile device 160 ( FIG. 1 ). In some embodiments, the occupant identification information 405 is determined based on a recognition result of an iris of an eye of the occupant recognized by a camera on the mobile device 160 .
- the occupant identification information 405 is determined based on a fingerprint of the occupant recognized by the mobile device 160 or by a sensor on a steering wheel of the vehicle.
- the data structure 400 further includes request data 410 .
- the request data 410 includes a content of the information requested by the occupant.
- the request data 410 includes a request for identification of an object.
- the request data 410 includes a request for information about the object in addition to or different from identification of the object.
- the data structure 400 further includes timestamp information 415 .
- the timestamp information 415 indicates a time corresponding to receipt of the request for information from the occupant.
- the data structure 400 is merely exemplary and one of ordinary skill in the art would understand that different information is able to be included in the occupant request data.
- at least one of the components is excluded from the data structure 400 .
- the occupant identification information 405 is excluded from the data structure 400 .
- additional information is included in the data structure 400 .
- the data structure 400 further includes information about a location of the occupant within the vehicle.
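The fields of data structure 400 can be sketched as a small record type. The class and attribute names are illustrative assumptions; the reference numerals in the comments tie each field back to the description above.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class OccupantRequest:
    """Sketch of data structure 400; field names are illustrative."""
    occupant_id: str          # occupant identification information 405
    request_data: str         # request data 410, e.g. "identify object"
    timestamp: float          # timestamp information 415
    seat_position: Optional[str] = None  # optional location within the vehicle

req = OccupantRequest(occupant_id="driver", request_data="identify object",
                      timestamp=1700000000.0)
assert req.request_data == "identify object"
```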
- FIG. 5 is a view of a data structure 500 of attention area data in accordance with some embodiments.
- the data structure 500 corresponds to an attention area determined by the attention area analyzer 124 ( FIG. 1 ).
- the data structure 500 corresponds to an attention area identified in operation 236 ( FIG. 2 ).
- the data structure 500 includes occupant identification information 505 .
- the occupant identification information 505 indicates an identity of the occupant that made the occupant request. In some embodiments, the occupant identification information 505 is determined based on analysis by the voice recognizer 170 ( FIG. 1 ). In some embodiments, the occupant identification information 505 is determined based on an input at the UI 116 ( FIG. 1 ). In some embodiments, the occupant identification information 505 is determined based on who has control of the mobile device 160 ( FIG. 1 ). In some embodiments, the occupant identification information 505 is determined based on a recognition result of the iris of the eye of the occupant recognized by the gaze detector 122 or a camera on the mobile device 160 .
- the occupant identification information 505 is determined based on the fingerprint of the occupant recognized by the mobile device 160 or by a sensor on the steering wheel of the vehicle.
- the data structure 500 further includes timestamp information 510 .
- the timestamp information 510 indicates a time corresponding to receipt of the request for information from the occupant.
- the timestamp information 510 includes information related to a time when data was captured by the vehicle sensors.
- the timestamp information 510 includes information related to a time when the attention area was determined.
- the data structure 500 further includes region of interest (ROI) information 515 .
- the ROI information 515 indicates a location, e.g., in an image, where the attention area is determined to be located.
- the ROI information 515 is determined based on a correlation between gaze data for the occupant associated with the occupant identification information 505 and sensor data from the vehicle.
- the ROI information 515 includes a first corner pixel position 520 .
- the first corner pixel position 520 indicates a location within an image of a top left corner of an attention area determined based on the gaze data for the occupant.
- the ROI information 515 further includes a second corner pixel position 525 .
- the second corner pixel position 525 indicates a location within the image of a bottom right corner of the attention area determined based on the gaze data for the occupant.
- the ROI information 515 is usable for cropping an image, e.g., using log collector 128 ( FIG. 1 ) or in operation 242 ( FIG. 2 ).
- the data structure 500 is merely exemplary and one of ordinary skill in the art would understand that different information is able to be included in the attention area data.
- at least one of the components is excluded from the data structure 500 .
- the occupant identification information 505 is excluded from the data structure 500 .
- additional information is included in the data structure 500 .
- the data structure 500 further includes additional corner pixel positions for the ROI information 515 .
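Data structure 500 and the image cropping it enables (e.g., using the log collector 128 or in operation 242) can be sketched as follows. The class name, the toy row-major image, and the corner-tuple layout are assumptions for the sketch.

```python
from dataclasses import dataclass

@dataclass
class AttentionArea:
    """Sketch of data structure 500; field names are illustrative."""
    occupant_id: str      # occupant identification information 505
    timestamp: float      # timestamp information 510
    top_left: tuple       # first corner pixel position 520, (x, y)
    bottom_right: tuple   # second corner pixel position 525, (x, y)

    def crop(self, image):
        """Crop a row-major image (list of rows) to the ROI, as in operation 242."""
        (x1, y1), (x2, y2) = self.top_left, self.bottom_right
        return [row[x1:x2] for row in image[y1:y2]]

area = AttentionArea("driver", 1700000000.0, (1, 0), (3, 2))
image = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]
assert area.crop(image) == [[1, 2], [5, 6]]
```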
- FIG. 6 is a view of a data structure 600 of attention area data in accordance with some embodiments.
- the data structure 600 corresponds to an attention area determined by the attention area analyzer 124 ( FIG. 1 ).
- the data structure 600 corresponds to an attention area identified in operation 236 ( FIG. 2 ).
- the data structure 600 is similar to the data structure 500 ( FIG. 5 ). Components of the data structure 600 that are similar to the data structure 500 have a same reference number. For the sake of brevity only components of the data structure 600 that are different from the data structure 500 are discussed below.
- the data structure 600 includes ROI information 615 that includes depth information 620 in addition to the first corner pixel position 520 and the second corner pixel position 525 .
- the depth information 620 is usable to determine a distance from the vehicle at which a gaze of the occupant is focused. In some embodiments, the depth information 620 is determined using the gaze detector 122 ( FIG. 1 ) or in operation 234 ( FIG. 2 ). Including the depth information 620 helps to increase precision of determining an object about which the occupant is requesting information.
- the data structure 600 is merely exemplary and one of ordinary skill in the art would understand that different information is able to be included in the attention area data.
- at least one of the components is excluded from the data structure 600 .
- the occupant identification information 505 is excluded from the data structure 600 .
- additional information is included in the data structure 600 .
- the data structure 600 further includes additional corner pixel positions for the ROI information 615 .
- FIG. 7 is a view of a data structure 700 of attention area data in accordance with some embodiments.
- the data structure 700 corresponds to an attention area determined by the attention area analyzer 124 ( FIG. 1 ).
- the data structure 700 corresponds to an attention area identified in operation 236 ( FIG. 2 ).
- the data structure 700 is similar to the data structure 500 ( FIG. 5 ). Components of the data structure 700 that are similar to the data structure 500 have a same reference number. For the sake of brevity only components of the data structure 700 that are different from the data structure 500 are discussed below.
- the data structure 700 includes ROI information 715 that includes world coordinate position information 720 in place of the first corner pixel position 520 and the second corner pixel position 525 .
- the world coordinate position information 720 is usable to determine a location of the object within the real world. In some embodiments, the world coordinate position information 720 is determined using the log collector 128 ( FIG. 1 ) or in operation 236 ( FIG. 2 ). Including the world coordinate position information 720 helps to increase precision of determining an object about which the occupant is requesting information.
- the data structure 700 is merely exemplary and one of ordinary skill in the art would understand that different information is able to be included in the attention area data.
- at least one of the components is excluded from the data structure 700 .
- the occupant identification information 505 is excluded from the data structure 700 .
- additional information is included in the data structure 700 .
- the data structure 700 further includes at least a partial image of the object.
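One hedged sketch of how the world coordinate position information 720 might be derived from the vehicle pose and the occupant's gaze. The flat metric frame, the heading convention, and all parameter names are assumptions for illustration, not the disclosed implementation.

```python
import math

def gaze_to_world(vehicle_x_m, vehicle_y_m, vehicle_heading_deg,
                  gaze_azimuth_deg, gaze_depth_m):
    """Project a gaze direction and depth to a world position (sketch).

    Uses a flat metric frame with +y = north, +x = east; heading 0 = north.
    A production system would use geodetic coordinates from the vehicle's
    localization stack.
    """
    bearing = math.radians(vehicle_heading_deg + gaze_azimuth_deg)
    world_x = vehicle_x_m + gaze_depth_m * math.sin(bearing)
    world_y = vehicle_y_m + gaze_depth_m * math.cos(bearing)
    return (world_x, world_y)

# A gaze 90 degrees right of a north-facing vehicle at the origin,
# focused 10 m away, lands 10 m east of the vehicle.
x, y = gaze_to_world(0.0, 0.0, 0.0, 90.0, 10.0)
assert abs(x - 10.0) < 1e-9 and abs(y) < 1e-9
```

Storing the resulting world position in the ROI information 715 lets the server look up the object by location even after the vehicle has moved on.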
- FIG. 8 is a view of a user interface 800 in accordance with some embodiments.
- the UI 800 corresponds to UI 116 ( FIG. 1 ).
- UI 800 is part of mobile device 160 ( FIG. 1 ).
- UI 800 is part of vehicle system 110 ( FIG. 1 ).
- the UI 800 includes a navigation UI 805 and an image UI 810 .
- the image UI 810 includes a captured image from a vehicle sensor 815 and a highlight of the identified object 820 .
- the UI 800 is usable to notify the occupant of the object that was identified as a source of the occupant request using image UI 810 .
- the UI 800 is further usable to notify the occupant of a travel path to the object using navigation UI 805 .
- the UI 800 is configured to receive information from the occupant as part of the occupant request, request initiation, confirmation of the identified object, or other such input information.
- the UI 800 is integrated into the vehicle.
- the UI 800 is separable from the vehicle.
- the navigation UI 805 is configured to receive GPS information, e.g., from GPS 116 ( FIG. 1 ), and display a map visible to the driver of the vehicle.
- the navigation UI 805 is further configured to display a travel path along the map that the vehicle is able to traverse to reach the identified object.
- the navigation UI 805 includes a touchscreen.
- the navigation UI 805 is configured to receive updates to the map and/or the travel path from an external device, such as the server 140 ( FIG. 1 ).
- the image UI 810 includes a captured image from the vehicle sensor 815 and a highlight of the identified object 820 .
- the highlight of the identified object 820 overlaps the image from the vehicle sensor 815 to identify the object within the image from the vehicle sensor.
- the image from the vehicle sensor 815 is a cropped image from the vehicle sensor.
- the image UI 810 is able to receive input from the occupant to confirm or deny the accuracy of the identified object.
- the image UI 810 includes a touchscreen.
- FIG. 8 shows the navigation UI 805 as separate from the image UI 810 .
- the image UI 810 is overlaid on the navigation UI 805 .
- the image UI 810 is hidden while the vehicle is in motion.
- FIG. 9 is a view of a user interface 900 in accordance with some embodiments.
- the UI 900 corresponds to UI 116 ( FIG. 1 ).
- UI 900 is part of mobile device 160 ( FIG. 1 ).
- UI 900 is part of vehicle system 110 ( FIG. 1 ).
- the UI 900 is similar to the UI 800 .
- Components of the UI 900 that are similar to the UI 800 have a same reference number. For the sake of brevity, only components of UI 900 that are different from UI 800 are discussed below.
- the UI 900 includes a link UI 910 configured to display a link to object information, e.g., a link received in operation 320 ( FIG. 3 ).
- the link UI 910 includes a selectable link and is configured to display the object information in response to retrieving the information following selection of the link by the occupant.
- the link UI 910 is configured to display an icon associated with the link.
- the link UI 910 includes a touchscreen.
- FIG. 9 shows the navigation UI 805 as separate from the image UI 810 and the link UI 910 .
- at least one of the image UI 810 or the link UI 910 is overlaid on the navigation UI 805 .
- at least one of the image UI 810 or the link UI 910 is hidden while the vehicle is in motion.
- FIG. 10 is a view of a user interface 1000 in accordance with some embodiments.
- the UI 1000 corresponds to UI 116 ( FIG. 1 ).
- UI 1000 is part of mobile device 160 ( FIG. 1 ).
- UI 1000 is part of vehicle system 110 ( FIG. 1 ).
- the UI 1000 is similar to the UI 800 .
- Components of the UI 1000 that are similar to the UI 800 have a same reference number. For the sake of brevity, only components of UI 1000 that are different from UI 800 are discussed below.
- the UI 1000 includes a request history UI 1010 configured to display information related to the occupant request and any subsequent requests for additional information about the object.
- the request history UI 1010 includes a dialog type display with the occupant request and object information provided in sequence.
- the request history UI 1010 is configured to provide a selectable list of previous occupant requests; and display the information provided in response to a corresponding occupant request in response to selection of that occupant request.
- the request history UI 1010 includes a touchscreen.
- FIG. 10 shows the navigation UI 805 as separate from the image UI 810 and the request history UI 1010 .
- at least one of the image UI 810 or the request history UI 1010 is overlaid on the navigation UI 805 .
- at least one of the image UI 810 or the request history UI 1010 is hidden while the vehicle is in motion.
- FIG. 11 is a block diagram of a system for implementing object identification in accordance with some embodiments.
- System 1100 includes a hardware processor 1102 and a non-transitory, computer readable storage medium 1104 encoded with, i.e., storing, the computer program code 1106 , i.e., a set of executable instructions.
- Computer readable storage medium 1104 is also encoded with instructions 1107 for interfacing with external devices.
- the processor 1102 is electrically coupled to the computer readable storage medium 1104 via a bus 1108 .
- the processor 1102 is also electrically coupled to an input/output (I/O) interface 1110 by bus 1108 .
- a network interface 1112 is also electrically connected to the processor 1102 via bus 1108 .
- Network interface 1112 is connected to a network 1114 , so that processor 1102 and computer readable storage medium 1104 are capable of connecting to external elements via network 1114 .
- the processor 1102 is configured to execute the computer program code 1106 encoded in the computer readable storage medium 1104 in order to cause system 1100 to be usable for performing a portion or all of the operations as described in object identification system 100 ( FIG. 1 ), method 200 ( FIG. 2 ) or method 300 ( FIG. 3 ).
- the processor 1102 is a central processing unit (CPU), a multi-processor, a distributed processing system, an application specific integrated circuit (ASIC), and/or a suitable processing unit.
- the computer readable storage medium 1104 includes an electronic, magnetic, optical, electromagnetic, infrared, and/or a semiconductor system (or apparatus or device).
- the computer readable storage medium 1104 includes a semiconductor or solid-state memory, a magnetic tape, a removable computer diskette, a random-access memory (RAM), a read-only memory (ROM), a rigid magnetic disk, and/or an optical disk.
- the computer readable storage medium 1104 includes a compact disk-read only memory (CD-ROM), a compact disk-read/write (CD-R/W), and/or a digital video disc (DVD).
- the storage medium 1104 stores the computer program code 1106 configured to cause system 1100 to perform a portion or all of the operations as described in object identification system 100 ( FIG. 1 ), method 200 ( FIG. 2 ) or method 300 ( FIG. 3 ). In some embodiments, the storage medium 1104 also stores information needed for performing, as well as information generated during, a portion or all of the operations as described in object identification system 100 ( FIG. 1 ), method 200 ( FIG. 2 ) or method 300 ( FIG. 3 ), such as a gaze data parameter 1116 , an object data parameter 1118 , a vehicle position parameter 1120 , a request content parameter 1122 , and/or a set of executable instructions to perform a portion or all of the operations as described in object identification system 100 ( FIG. 1 ), method 200 ( FIG. 2 ) or method 300 ( FIG. 3 ).
- the storage medium 1104 stores instructions 1107 for interfacing with external devices.
- the instructions 1107 enable processor 1102 to generate instructions readable by the external devices to effectively implement a portion or all of the operations as described in object identification system 100 ( FIG. 1 ), method 200 ( FIG. 2 ) or method 300 ( FIG. 3 ).
- System 1100 includes I/O interface 1110 .
- I/O interface 1110 is coupled to external circuitry.
- I/O interface 1110 includes a keyboard, keypad, mouse, trackball, trackpad, and/or cursor direction keys for communicating information and commands to processor 1102 .
- System 1100 also includes network interface 1112 coupled to the processor 1102 .
- Network interface 1112 allows system 1100 to communicate with network 1114 , to which one or more other computer systems are connected.
- Network interface 1112 includes wireless network interfaces such as BLUETOOTH, WIFI, WIMAX, GPRS, or WCDMA; or wired network interfaces such as ETHERNET, USB, or IEEE-1394.
- a portion or all of the operations as described in object identification system 100 ( FIG. 1 ), method 200 ( FIG. 2 ) or method 300 ( FIG. 3 ) is implemented in two or more systems 1100 , and information such as gaze data parameter 1116 , object data parameter 1118 , vehicle position parameter 1120 , or request content parameter 1122 is exchanged between different systems 1100 via network 1114 .
- An aspect of this description relates to a method of obtaining object information.
- the method includes receiving a request initiation from an occupant of a vehicle.
- the method includes receiving a request from the occupant after receiving the request initiation.
- the method further includes determining a content of the request from the occupant.
- the method further includes detecting a gaze location of the occupant.
- the method further includes receiving information related to an environment surrounding the vehicle based on data collected by a sensor attached to the vehicle.
- the method further includes identifying a region of interest (ROI) outside of the vehicle based on the detected gaze location and the information related to the environment surrounding the vehicle.
- the method further includes generating log data based on the ROI and the content of the request.
- the method further includes transmitting the log data to an external device.
- the method further includes receiving information related to an object within the ROI, wherein the information satisfies the content of the request.
- receiving the request initiation includes receiving the request initiation including a keyword, a key phrase, a predetermined gesture, or an input to a user interface (UI).
- receiving information related to the environment surrounding the vehicle includes receiving an image from a camera attached to the vehicle.
- the method further includes cropping the image based on the ROI, wherein generating the log data comprises generating the log data using the cropped image.
- receiving information related to the object includes receiving identifying information related to the object in response to the content of the request being a request for identification of the object.
- the method further includes determining an identity of the occupant, wherein generating the log data comprises generating the log data based on the identity of the occupant.
- detecting the gaze location of the occupant includes detecting an azimuth angle of a gaze of the occupant relative to the vehicle, and detecting an elevation angle of the gaze of the occupant relative to the vehicle.
- detecting the gaze location of the occupant further includes detecting a depth of the gaze of the occupant relative to the vehicle.
- detecting the gaze location of the occupant includes detecting a world coordinate of the gaze location, and generating the log data comprises generating the log data based on the world coordinate.
- detecting the gaze location of the occupant includes capturing an image of the occupant using a camera attached to the vehicle.
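The azimuth/elevation/depth decomposition of the gaze recited above can be illustrated by converting it to a point in a vehicle-fixed frame. The axis convention (x forward, y left, z up) and function names are assumptions for the sketch.

```python
import math

def gaze_vector(azimuth_deg, elevation_deg, depth_m=1.0):
    """Convert gaze azimuth/elevation (relative to the vehicle) and an
    optional depth into a 3D point in an assumed vehicle frame:
    x forward, y left, z up."""
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    x = depth_m * math.cos(el) * math.cos(az)
    y = depth_m * math.cos(el) * math.sin(az)
    z = depth_m * math.sin(el)
    return (x, y, z)

# A gaze straight ahead at 5 m lies entirely along the forward axis.
x, y, z = gaze_vector(0.0, 0.0, 5.0)
assert abs(x - 5.0) < 1e-9 and abs(y) < 1e-9 and abs(z) < 1e-9
```

Without the optional depth, the same angles define only a ray; intersecting that ray with the front-camera image is what yields the ROI described earlier.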
- An aspect of this description relates to a system for obtaining object information.
- the system includes an occupant monitoring camera; a front camera; a non-transitory computer readable medium configured to store instructions thereon; and a processor connected to the non-transitory computer readable medium.
- the processor is configured to execute the instructions for receiving a request initiation from an occupant of a vehicle.
- the processor is further configured to execute the instructions for receiving a request from the occupant after receiving the request initiation.
- the processor is further configured to execute the instructions for determining a content of the request from the occupant.
- the processor is further configured to execute the instructions for detecting a gaze location of the occupant based on information from the occupant monitoring camera.
- the processor is further configured to execute the instructions for receiving information related to an environment surrounding the vehicle based on the front camera attached to the vehicle.
- the processor is further configured to execute the instructions for identifying a region of interest (ROI) outside of the vehicle based on the detected gaze location and the information related to the environment surrounding the vehicle.
- the processor is further configured to execute the instructions for generating log data based on the ROI and the content of the request.
- the processor is further configured to execute the instructions for generating instructions for transmitting the log data to an external device.
- the processor is further configured to execute the instructions for receiving information related to an object within the ROI, wherein the information satisfies the content of the request.
- the processor is configured to execute the instructions for cropping an image from the front camera based on the ROI; and generating the log data using the cropped image.
- the processor is configured to execute the instructions for receiving information related to the object comprising identifying information related to the object in response to the content of the request being a request for identification of the object.
- the processor is configured to execute the instructions for determining an identity of the occupant; and generating the log data based on the identity of the occupant.
- the processor is configured to execute the instructions for detecting an azimuth angle of a gaze of the occupant relative to the vehicle, and detecting an elevation angle of the gaze of the occupant relative to the vehicle.
- the processor is configured to execute the instructions for detecting a depth of the gaze of the occupant relative to the vehicle. In some embodiments, the processor is configured to execute the instructions for detecting a world coordinate of the gaze location; and generating the log data based on the world coordinate.
- An aspect of this description relates to a method of obtaining object information.
- the method includes receiving a request initiation from an occupant of a vehicle using a microphone.
- the method further includes receiving a request from the occupant after receiving the request initiation using the microphone.
- the method further includes detecting a gaze location of the occupant.
- the method further includes receiving information related to an environment surrounding the vehicle using a camera attached to the vehicle.
- the method further includes generating log data based on the information related to the environment surrounding the vehicle and the received request.
- the method further includes transmitting the log data to an external device.
- the method further includes receiving information related to an object within the environment surrounding the vehicle.
- the method further includes automatically generating a notification viewable by the occupant in response to receiving the information related to the object.
- receiving information related to the object includes receiving a link for accessing the external device.
- automatically generating the notification includes displaying the link on a user interface viewable by the occupant.
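The sequence of method operations above can be sketched as a small, data-only flow. Everything in this sketch (function names, dictionary keys, notification format) is an illustrative assumption rather than the disclosed implementation:

```python
def handle_occupant_request(initiation_received, request, gaze, frame):
    """Data-only sketch of the claimed flow: with no request initiation, no
    log data is generated; otherwise the request, gaze location, and
    environment frame are combined into log data for the external device."""
    if not initiation_received:
        return None
    return {"request": request, "gaze": gaze, "frame": frame}

def build_notification(object_info):
    """Automatically generate an occupant-viewable notification; when the
    response includes a link for accessing the external device, the link
    is surfaced on the user interface."""
    note = "Identified: " + object_info["name"]
    if "link" in object_info:
        note += " (details: " + object_info["link"] + ")"
    return note
```

The two halves mirror the claims: the first generates and (conceptually) transmits the log data, the second turns the returned object information into the occupant-viewable notification.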
Description
- This application is related to U.S. application Ser. No. 17/497,846, filed Oct. 8, 2021, which is hereby incorporated by reference in its entirety.
- Occupants of vehicles see objects of interest out of windows of the vehicle. In some instances, the occupants wish to identify the object or learn more information about the object. In some instances, the occupant will capture an image of the object using a mobile device, such as a smartphone, and then perform a search on the Internet to identify the object or learn more about the object. In some instances, movement of the vehicle makes capturing the image of the object more difficult. In addition, in some instances, obstructing objects pass between the vehicle and the object that inhibit the capturing of an image of the object. In some instances, a driver is unable to safely remove their hands from the steering wheel to capture the image using the mobile device.
- In some approaches, the occupant looks at a map to attempt to identify the object. The occupant is then able to search the Internet to determine whether the object identified using the map is accurate and, if so, to learn more information about the object. Identifying the object using the map is done using the occupant's best estimate about the location of the object relative to other known landmarks or objects.
- Aspects of the present disclosure are best understood from the following detailed description when read with the accompanying figures. It is noted that, in accordance with the standard practice in the industry, various features are not drawn to scale. In fact, the dimensions of the various features may be arbitrarily increased or reduced for clarity of discussion.
- FIG. 1 is a block diagram of an object identification system in accordance with some embodiments.
- FIG. 2 is a flowchart of a method of identifying an object in accordance with some embodiments.
- FIG. 3 is a flowchart of a method of identifying an object in accordance with some embodiments.
- FIG. 4 is a view of a data structure of an occupant request in accordance with some embodiments.
- FIG. 5 is a view of a data structure of attention area data in accordance with some embodiments.
- FIG. 6 is a view of a data structure of attention area data in accordance with some embodiments.
- FIG. 7 is a view of a data structure of attention area data in accordance with some embodiments.
- FIG. 8 is a view of a user interface in accordance with some embodiments.
- FIG. 9 is a view of a user interface in accordance with some embodiments.
- FIG. 10 is a view of a user interface in accordance with some embodiments.
- FIG. 11 is a block diagram of a system for implementing object identification in accordance with some embodiments.
- The following disclosure provides many different embodiments, or examples, for implementing different features of the provided subject matter. Specific examples of components, values, operations, materials, arrangements, or the like, are described below to simplify the present disclosure. These are, of course, merely examples and are not intended to be limiting. Other components, values, operations, materials, arrangements, or the like, are contemplated. For example, the formation of a first feature over or on a second feature in the description that follows may include embodiments in which the first and second features are formed in direct contact, and may also include embodiments in which additional features may be formed between the first and second features, such that the first and second features may not be in direct contact. In addition, the present disclosure may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed.
- Occupants within a moving vehicle often have difficulty with identifying objects of interest. In some instances, the occupant is unable to accurately identify the object based on either a map or a captured image. In some instances, the occupant, such as a driver, is unable to use a map or an image capturing device, such as a smartphone, to attempt to identify the object of interest. In order to assist the occupant in accurately identifying an object of interest, the object identification method of this description utilizes request initiation commands in combination with gaze data and vehicle sensor data to identify the object. In some embodiments, information about the identified object is also provided, such as hours of operation, historical information, etc.
- By utilizing gaze data, the method is able to determine a direction that the occupant is looking. The gaze data is combined with map data and/or vehicle sensor data to determine what object the occupant is observing at the time a request is initiated. Utilizing a request initiation helps to reduce processing load and data transferred between the vehicle and an external device such as a server. In some embodiments, the request initiation includes a key word received via an audio signal from the occupant. In some embodiments, the request initiation includes detecting a predetermined gesture from the occupant. In some embodiments, the request initiation includes receiving an input from a user interface (UI) accessible by the occupant.
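The three initiation modes above (keyword, gesture, UI input) can be sketched as a small dispatcher. The event types, keywords, and gesture names below are hypothetical, chosen only to illustrate the branching:

```python
# Hypothetical initiation triggers; the phrases and names are illustrative,
# not taken from the disclosure.
KEYWORDS = {"hey car", "what is that"}
GESTURES = {"point", "wave"}

def is_request_initiation(event_type, payload):
    """Return True when an occupant event qualifies as a request initiation."""
    if event_type == "audio":
        # Key word received via an audio signal from the occupant.
        return any(k in payload.lower() for k in KEYWORDS)
    if event_type == "gesture":
        # A predetermined gesture detected by the occupant monitoring camera.
        return payload in GESTURES
    if event_type == "ui":
        # An input received from a user interface accessible by the occupant.
        return payload == "request_button"
    return False
```

Gating data collection on such a trigger is what reduces the processing load: sensor data is only compiled into log data after an initiation is recognized.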
- Using the vehicle sensors and/or map data stored within the vehicle helps to capture information related to the environment surrounding the vehicle without using a separate device, such as a smartphone, and without the occupant, such as the driver, removing their hands from a steering wheel. This helps to reduce distractions to the occupant and/or driver and allows occupants to identify the object without handling an external device. Use of vehicle sensor and map data also helps to increase object identification accuracy in situations where objects, such as other vehicles, obstruct the view of the object of interest, or when the object is initially visible and later obstructed by the time the external device is in a state ready to use.
- In some embodiments, the object is displayed on a vehicle UI to help confirm the object identification. Once the object of interest is identified, then the occupant is able to request additional information related to the identified object. For example, in some embodiments, the occupant is able to request directions to the identified object, hours of operation for the identified object, historical information related to the identified object, or other suitable information.
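Once the object is confirmed, the follow-up requests described above (directions, hours of operation, historical information) amount to selecting which stored attribute of the object to return. A minimal sketch, in which the trigger phrases and field names are assumptions for illustration:

```python
# Hypothetical mapping from follow-up phrases to stored object attributes.
FOLLOW_UPS = {
    "directions": "route_to_object",
    "hours": "hours_of_operation",
    "history": "historical_information",
}

def follow_up_field(utterance):
    """Pick which stored attribute of a confirmed object satisfies a
    follow-up request, defaulting to plain identification."""
    for phrase, field in FOLLOW_UPS.items():
        if phrase in utterance.lower():
            return field
    return "identification"
```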
- FIG. 1 is a block diagram of an object identification system 100, in accordance with some embodiments. The description of the object identification system 100 focuses on an automobile controlled by a driver. However, one of ordinary skill in the art would recognize that other vehicles and operators are within the scope of this description, such as a train operated by an engineer or other mobile vehicles. The object identification system 100 includes a vehicle system 110 configured to capture information about an occupant of a vehicle and to generate gaze data. The vehicle system 110 also captures request initiation signals and occupant requests. The object identification system 100 further includes a server 140 configured to receive the generated gaze data, as well as information collected from sensors of the vehicle, as log data. The object identification system 100 further includes a mobile device 160 accessible by the occupant of the vehicle associated with the occupant request. In some embodiments, some or all of the functionality of the mobile device 160 is incorporated into the vehicle system 110. Incorporating the functionality of the mobile device 160 into the vehicle system 110 permits the occupant to utilize the object identification system 100 even if the occupant does not have access to a mobile device or if a battery of the mobile device is not sufficiently charged to permit use of the mobile device. - The
vehicle system 110 includes an electronic control unit (ECU) 120 configured to receive data from an occupant monitoring camera 112, a front camera 114, a global positioning system (GPS) 116 and a map 118. The ECU 120 includes a gaze detector 122 configured to receive data from the occupant monitoring camera 112 and detect a gaze direction and/or a gaze depth based on the received data. The ECU 120 further includes an attention area recognizer 124 configured to determine a position of a gaze of the occupant. The ECU 120 further includes a localization unit 126 configured to receive data from the GPS 116 and the map 118 and determine a position of the vehicle and a pose and state of the vehicle relative to detected and/or known objects and/or road position. A pose is an orientation of the vehicle relative to a reference point, such as a roadway. In some embodiments, the position of the vehicle also refers to a position vector of the vehicle. The pose and state of the vehicle refers to a speed and a heading of the vehicle. In some embodiments, the pose and state of the vehicle also refers to a velocity vector, an acceleration vector and a jerk vector of the vehicle. In some embodiments, the position vector, the velocity vector, the acceleration vector and the jerk vector include angle vectors. In some embodiments, the state of the vehicle also refers to whether an engine or motor of the vehicle is running. The ECU 120 further includes a log collector 128 configured to receive information from the front camera 114, the localization unit 126 and a data collection requester 132 and to combine the data collection request from the occupant with the corresponding sensor data from the vehicle system 110 in order to compile log data usable by the server 140 to identify the object of interest. The ECU 120 further includes a request receiver 130 configured to receive a data request from the mobile device 160.
In some embodiments where the functionality of the mobile device 160 is integrated with the vehicle system 110, the request receiver 130 is omitted. The ECU 120 further includes the data collection requester 132 configured to receive gaze data and area of interest information from the attention area recognizer 124 and occupant request information from the request receiver 130. The data collection requester 132 is configured to correlate the received information to generate instructions for the log collector 128 to collect data relevant to the occupant request information from sensors, such as the front camera 114, of the vehicle. The ECU 120 further includes a log transmitter 134 configured to receive the log data from the log collector 128 and transmit the log data to the server 140. - The
occupant monitoring camera 112 is configured to capture images of a driver, or other occupant, of the viewing vehicle. The occupant monitoring camera 112 is connected to the vehicle. In some embodiments, the occupant monitoring camera 112 includes a visible light camera. In some embodiments, the occupant monitoring camera 112 includes an infrared (IR) camera or another suitable sensor. In some embodiments, the occupant monitoring camera 112 is movable relative to the vehicle in order to capture images, at different sizes, of at least one eye of an occupant. While capturing images of both eyes of the occupant is preferred, some occupants have only a single eye; and in some instances where a head of the occupant is turned away from the occupant monitoring camera 112, only one of the occupant's eyes is capturable by the occupant monitoring camera 112. In some embodiments, the occupant monitoring camera 112 is adjusted automatically. In some embodiments, the occupant monitoring camera 112 is manually adjustable. In some embodiments, the captured image includes at least one eye of the occupant. In some embodiments, the captured image includes additional information about the occupant, such as approximate height, approximate weight, hair length, hair color, clothing or other suitable information. In some embodiments, the occupant monitoring camera 112 includes multiple image capturing devices for capturing images of different regions of the occupant. In some embodiments, occupant monitoring cameras 112 are located at different locations within the vehicle. For example, in some embodiments, a first occupant monitoring camera 112 is located proximate a rear-view mirror in a central region of the vehicle; and a second occupant monitoring camera 112 is located proximate a driver-side door.
One of ordinary skill in the art would recognize that other locations for the occupant monitoring camera 112, which do not interfere with operation of the vehicle, are within the scope of this disclosure. In some embodiments, the data from the occupant monitoring camera 112 includes a timestamp or other metadata to help with synchronization with other data. - One of ordinary skill in the art would understand that in some embodiments the
vehicle system 110 includes additional cameras for monitoring multiple occupants. Each of the additional cameras is similar to the occupant monitoring camera 112 described above. For example, in some embodiments, one or more monitoring cameras are positioned in the vehicle for capturing images of at least one eye of a front-seat passenger. In some embodiments, one or more monitoring cameras are positioned in the vehicle for capturing images of at least one eye of a rear-seat passenger. In some embodiments, the additional cameras are only activated in response to the vehicle detecting a corresponding front-seat passenger or rear-seat passenger. In some embodiments, an operator of the vehicle is able to selectively de-activate the additional cameras. In embodiments including additional cameras, the captured images are still sent to the gaze detector 122; and the gaze detector 122 is able to generate a gaze result for each of the monitored occupants of the vehicle. - The
front camera 114 is configured to capture images of an environment surrounding the vehicle. In some embodiments, the front camera 114 includes a visible light camera or an IR camera. In some embodiments, the front camera 114 is replaced with, or is further accompanied by, a light detection and ranging (LiDAR) sensor, a radio detection and ranging (RADAR) sensor, a sound navigation and ranging (SONAR) sensor or another suitable sensor. In some embodiments, the front camera 114 includes additional cameras located at other locations on the vehicle. For example, in some embodiments, additional cameras are located on sides of the vehicle in order to detect a larger portion of the environment to the left and right of the viewing vehicle. Since vehicle occupants are able to look out of side windows of the vehicle, using additional cameras to detect a larger portion of the environment surrounding the vehicle helps to increase precision of determining objects being viewed by the occupants of the vehicle. For example, in some embodiments, additional cameras are located on a back side of the vehicle in order to detect a larger portion of the environment to a rear of the vehicle. This information helps to capture additional objects that vehicle occupants other than the driver are able to view out of a rear window. The front camera 114 is also able to capture images for determining whether any obstructions, such as medians or guard rails, are present between a location of an object and the occupants of the viewing vehicle. In some embodiments, the data from the front camera 114 includes a timestamp or other metadata in order to help synchronize the data from the front camera 114 with the data from the occupant monitoring camera 112. - The
GPS 116 is configured to determine a location of the vehicle. Knowing the location of the viewing vehicle helps to relate the object and the direction that drew the attention of the occupants with the objects and areas that are related to determined locations on the map 118. Knowing the heading of the vehicle helps to predict which direction an occupant of the vehicle is looking in order to assist with generation of gaze data. Knowing a speed of the viewing vehicle helps to determine how long an occupant of the vehicle had an opportunity to view an object of interest. For example, in some embodiments, by the time the occupant initiates a request, the vehicle has moved past the object of interest or a position of the vehicle relative to the object of interest has changed. As a result, knowing the location of the vehicle at different times helps with correlating occupant requests with objects of interest. - The
map 118 includes information related to the roadway and known objects along the roadway. In some embodiments, the map 118 is usable in conjunction with the GPS 116 to determine a location and a heading of the vehicle. In some embodiments, the map 118 is received from an external device, such as the server 140. In some embodiments, the map 118 is periodically updated based on information from the front camera 114 and/or the GPS 116. In some embodiments, the map 118 is periodically updated based on information received from the external device. In some embodiments, the map 118 is generated from sensor data by a simultaneous localization and mapping (SLAM) algorithm. - The following description will focus primarily on analysis of information related to the driver for the sake of brevity. One of ordinary skill in the art would understand that the description is applicable to other occupants, such as front-seat passengers or rear-seat passengers, of the vehicle as well.
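A localization step of the kind described, combining GPS fixes with map data, might estimate heading and speed from consecutive position fixes. The flat-earth sketch below is an illustration under assumed conventions (planar coordinates, heading measured counter-clockwise from east), not the disclosed implementation:

```python
import math

def heading_and_speed(fix_a, fix_b, dt_s):
    """Estimate heading (degrees, 0 = east, counter-clockwise) and speed
    from two planar position fixes taken dt_s seconds apart."""
    dx = fix_b[0] - fix_a[0]
    dy = fix_b[1] - fix_a[1]
    heading = math.degrees(math.atan2(dy, dx))
    speed = math.hypot(dx, dy) / dt_s  # distance over elapsed time
    return heading, speed
```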
- The
gaze detector 122 is configured to receive data from the occupant monitoring camera 112 and generate a detected gaze result. The detected gaze result includes a direction that the eyes of the driver are looking. In some embodiments, the direction includes an azimuth angle and an elevation angle. Including an azimuth angle and an elevation angle allows a determination of a direction that the driver is looking both parallel to a horizon and perpendicular to the horizon. In some embodiments, the detected gaze result further includes depth information. Depth information is an estimated distance from the driver at which the visual axes of the driver's eyes converge. Including depth information allows a determination of a distance between the driver and an object on which the driver is focusing a gaze. Combining depth information along with the azimuth angle and the elevation angle increases a precision of the detected gaze result. In some embodiments where the captured image includes only a single eye of the driver, determining depth information is difficult, so only the azimuth angle and the elevation angle are determined by the gaze detector 122. In some embodiments, the gaze detector 122 is further configured to receive data from the front camera 114 and to associate the detected gaze with a pixel location of an image from the front camera 114 based on the azimuth angle and elevation angle. - In some embodiments, the
gaze detector 122 is not attached to the vehicle. In some embodiments, the gaze detector 122 is attached to the occupant of the viewing vehicle. For example, in some embodiments, the gaze detector 122 includes smart glasses, another piece of smart clothing or another such device that is capable of determining gaze information of a wearer. In some embodiments that utilize smart glasses, gaze data is able to be collected from pedestrians, people riding bicycles or other people that are not in a vehicle. The object identification system 100 is able to utilize this gaze data in order to help identify objects of interest. In embodiments that include the use of the gaze detector 122 not attached to the vehicle, the front camera 114 and the localization unit 126 are still used in combination with the gaze detector 122. - The
attention area recognizer 124 is configured to receive gaze data from the gaze detector 122 and further refine the gaze data to identify an area of a visible field of the occupant that is a focus of the occupant. Based on the received gaze data, the attention area recognizer 124 identifies a position relative to the vehicle where the occupant's attention is directed. In some embodiments, the attention area recognizer 124 is further configured to receive information from the front camera 114 and to identify pixel regions from captured images of the front camera 114 where the attention of the occupant is directed. The attention area recognizer 124 helps to reduce an amount of data in the log data collected by the log collector 128 to reduce processing load on the ECU 120. - The
localization unit 126 is configured to receive information from the GPS 116 and the map 118 and determine a location of the vehicle in the world coordinate system or a location of the vehicle relative to the objects on the map 118 and known objects. In some embodiments, the localization unit 126 is usable to determine a heading and a speed of the vehicle. The localization unit 126 is also configured to determine state information for the vehicle. In some embodiments, the state information includes a speed of the vehicle. In some embodiments, the state information includes a velocity vector of the vehicle. In some embodiments, the state information includes a heading of the vehicle. In some embodiments, the state information includes an acceleration vector of the vehicle. In some embodiments, the state information includes a jerk vector of the vehicle. In some embodiments, the state information includes whether an engine or motor of the vehicle is running. In some embodiments, the state information includes other status information related to the vehicle, such as operation of windshield wipers, etc. - The
log collector 128 is configured to receive an image from the front camera 114, state information from the localization unit 126 and occupant request information from the data collection requester 132. The log collector 128 is configured to correlate the received data to determine what portion of the image from the front camera 114 was being observed by the occupant at the time that the occupant request was initiated. The log collector 128 is also configured to determine what information is being sought by the occupant, such as object identification, directions to the object, or other suitable information. The log collector 128 determines the portion of the image captured by the front camera 114 based on the gaze data analyzed by the attention area recognizer 124 and the data collection requester 132. Based on the analyzed gaze data, the log collector 128 is able to crop the image from the front camera 114 in order to reduce an amount of data to be transmitted to the server for analysis. The log collector 128 uses the state information from the localization unit 126 to complement the analyzed gaze data to help with precision in the image cropping. - The
log collector 128 generates log data based on the received and correlated data, such as the cropped image and requested data. The log collector 128 also associates timestamp information with the log data in order to assist with synchronization of the collected data and for queue priority within the server 140. In some embodiments, the log collector 128 generates the log data to further include world coordinates associated with the cropped image. In some embodiments, the log collector 128 generates the log data to further include a map location associated with the cropped image. In some embodiments, the log collector 128 includes additional information to assist in increasing accuracy of responding to the occupant request. - While the above description relates to generating log data based on an image from the
front camera 114, one of ordinary skill in the art would understand that the log collector 128 is not limited solely to generating log data based on images. In some embodiments, the log collector 128 is configured to generate log data based on information from other sensors attached to the vehicle, such as RADAR, LiDAR, or other suitable sensors. In some embodiments, the log collector 128 generates log data based on point cloud data received from LiDAR instead of the image data. One of ordinary skill in the art would recognize that point cloud data includes a set of data points in space that are usable to represent a three-dimensional shape or object based on a distance of each point from the detector. In some embodiments where the occupant is wearing smart glasses, the log collector 128 is further configured to generate the log data based on information received from the smart glasses. - The
request receiver 130 is configured to receive a request from the mobile device 160. In some embodiments where the functionality of the mobile device 160 is incorporated into the vehicle system 110, the request receiver 130 is omitted and the request is transferred directly to the data collection requester 132. In some embodiments, the request receiver 130 is configured to receive the request wirelessly. In some embodiments, the request receiver 130 is configured to receive the request via a wired connection. In some embodiments, the request receiver 130 is configured to receive a request initiation prior to receiving the request. In some embodiments, in response to receiving a request initiation, the request receiver 130 is configured to notify the data collection requester 132 to initiate data collection at the log collector 128 to help ensure that information from the vehicle sensors, such as the front camera 114, is stored for generation of log data. In some embodiments, the request receiver 130 is further configured to receive the request including identification information for the occupant making the request and timestamp information for when the request was made. In some embodiments, the request receiver 130 is configured to receive information related to an identity of the occupant making the request. - The
data collection requester 132 is configured to correlate the occupant request with region of interest (ROI) information from the attention area recognizer 124. The data collection requester 132 is configured to convert the occupant request and ROI information into instructions usable by the log collector 128 to collect information for satisfying the occupant request. In some embodiments, the data collection requester 132 is configured to determine what sensors are available to capture information related to a certain region of the environment surrounding the vehicle. In some embodiments, the data collection requester 132 is configured to identify what types of sensors the log collector 128 should use to satisfy the occupant request. The data collection requester 132 is further configured to identify a timestamp of the occupant request to allow the log collector 128 to accurately collect data from the relevant sensors on the vehicle. - The
log transmitter 134 is configured to receive log data from the log collector 128 and transmit the log data to the server 140. In some embodiments, the log transmitter 134 is configured to transmit the log data wirelessly. In some embodiments, the log transmitter 134 is configured to transmit the log data via a wired connection. In some embodiments, the log transmitter 134 is configured to transmit the log data to the mobile device 160, which in turn is configured to transmit the log data to the server 140. In some embodiments, the log transmitter 134 is configured to transmit the log data to the mobile device 160 using Bluetooth® or another suitable wireless technology. In some embodiments, the ECU 120 is configured to determine whether the data transfer rate from the mobile device 160 to the server 140 is higher than a transfer rate from the log transmitter 134 to the server 140. In response to a determination that the data transfer rate from the mobile device 160 to the server 140 is higher, the log transmitter 134 is configured to transmit the log data to the mobile device 160 to be transmitted to the server 140. In response to a determination that the data transfer rate from the mobile device 160 to the server 140 is not higher, the log transmitter 134 is configured to transmit the log data to the server 140 from the vehicle system 110 directly, without transferring the log data to the mobile device 160. - In some embodiments, the
vehicle system 110 further includes a memory configured to store sensor data from sensors attached to the vehicle. In some embodiments, the memory is further configured to store information associated with previous occupant requests. In some embodiments, in response to the data collection requester 132 determining that the occupant request matches a previous occupant request, the data collection requester 132 is configured to provide results from the matching previous occupant request to the occupant 180. In some embodiments, the previous requests are stored as cache data. One of ordinary skill in the art would understand caching as using hardware or software to store data so that future requests for that data are able to be served faster. - The
server 140 includes a log data receiver 142 configured to receive the log data from the log transmitter 134. In some embodiments, the log data receiver 142 is configured to receive the log data from the mobile device 160. The server 140 further includes a log storer 144 configured to store the received log data. The server 140 further includes a log analyzer 146 configured to receive the log data from the log storer 144 and information from a database 148 to identify an object of interest and/or provide information related to the object of interest. The server 140 further includes the database 148 configured to store information about objects. The server 140 further includes an analysis result transmitter 150 configured to transmit the results of the log analyzer 146 to the mobile device 160. The server 140 further includes a log transmitter 152 configured to transmit log identification information to the mobile device 160. - The
log data receiver 142 is configured to receive the log data from the log transmitter 134. In some embodiments, the log data receiver 142 is configured to receive the log data from the mobile device 160. In some embodiments, the log data receiver 142 is configured to receive the log data wirelessly. In some embodiments, the log data receiver 142 is configured to receive the log data via a wired connection. In some embodiments, the log data receiver 142 is configured to attach a timestamp, for a time that the log data was received, to the log data. - The
log storer 144 is configured to store the received log data for analysis. In some embodiments, the log storer 144 includes a solid-state memory device. In some embodiments, the log storer 144 includes a dynamic random-access memory (DRAM). In some embodiments, the log storer 144 includes a non-volatile memory device. In some embodiments, the log storer 144 includes cloud-based storage or another suitable storage structure. In some embodiments, the log storer 144 is configured to store the log data in a queue based on priority. In some embodiments, the priority is based on a timestamp of when the server 140 received the log data. In some embodiments, the priority is based on a timestamp of when the occupant request was received. In some embodiments, the priority is based on a size of the log data. In some embodiments, the priority is based on an identity of the occupant 180. For example, in some embodiments, the occupant has an account with a service offered on the server 140 for prioritizing fulfillment of occupant requests. In some embodiments, other criteria are used to determine a priority of the log data in the queue. In some embodiments, log data is removed from the log storer 144 following analysis of the log data by the log analyzer 146. In some embodiments, log data is not protected from over-writing in the log storer 144 following analysis of the log data by the log analyzer 146. - The
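priority queue described above can be sketched with Python's heapq module. Prioritizing by the timestamp at which the server 140 received the log data is just one of the options named in the text, and all names here are illustrative assumptions.

```python
import heapq
import itertools

# Sketch of a priority queue for the log storer 144; earlier-received log
# data is popped first. The counter breaks ties between equal timestamps.
class LogQueue:
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()

    def push(self, log_data, received_timestamp: float) -> None:
        heapq.heappush(self._heap,
                       (received_timestamp, next(self._counter), log_data))

    def pop(self):
        """Remove and return the highest-priority (earliest-received) entry."""
        return heapq.heappop(self._heap)[2]


queue = LogQueue()
queue.push({"id": "log-b"}, received_timestamp=200.0)
queue.push({"id": "log-a"}, received_timestamp=100.0)
first = queue.pop()  # the earlier-received entry is analyzed first
```

Other priority criteria from the text, such as log data size or occupant identity, would simply change the first element of the pushed tuple. - The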
log analyzer 146 is configured to receive log data from the log storer 144 and determine whether the occupant request of the log data matches any records stored in the database 148. In some embodiments, the log analyzer 146 includes a trained neural network (NN) to compare the log data with known objects from the database 148. Once a match between the log data and a known object in the database 148 is found, the log analyzer 146 determines the requested data from the log data, such as object identification, object hours of operation, historical information of the object, etc. The log analyzer 146 extracts information from the database 148 that satisfies the requested data and transfers the extracted information to the analysis result transmitter 150. In some embodiments, the extracted information is transferred to the analysis result transmitter 150 along with identification information for the log data. - The
database 148 is configured to store information related to objects in association with a location of the object and an image of the object. In some embodiments, the database 148 includes a solid-state memory device. In some embodiments, the database 148 includes a dynamic random-access memory (DRAM). In some embodiments, the database 148 includes a non-volatile memory device. In some embodiments, the database 148 includes a relational database (RDB). In some embodiments, the database 148 includes a key-value store (KVS). In some embodiments, the database 148 includes a NoSQL database. In some embodiments, the database 148 includes cloud-based storage or another suitable storage structure. In some embodiments, the database 148 is integral with the log storer 144. In some embodiments, the database 148 is separate from the log storer 144. In some embodiments, the database 148 is configured to store information related to analysis results for previous occupant requests. In some embodiments, the log analyzer 146 is able to retrieve the results from the previous occupant requests in response to a determination that the log data matches a previous occupant request. In some embodiments, the database 148 stores a feature map that is generated by a NN instead of storing image data. - The
analysis result transmitter 150 is configured to receive the information satisfying the occupant request from the log analyzer 146. The analysis result transmitter 150 is configured to transmit the information to the mobile device 160. In some embodiments, the analysis result transmitter 150 is configured to transmit the information to the vehicle system 110 instead of or in addition to the mobile device 160. In some embodiments, the server 140 is configured to determine whether the data transfer rate from the server 140 to the mobile device 160 is higher than the transfer rate from the server 140 to the vehicle system 110. In response to a determination that the data transfer rate from the server 140 to the mobile device 160 is higher, the analysis result transmitter 150 is configured to transmit the information to the mobile device 160 to be transmitted to the vehicle system 110. In response to a determination that the data transfer rate from the server 140 to the vehicle system 110 is higher, the analysis result transmitter 150 is configured to transmit the information to the vehicle system 110 directly without the information going through the mobile device 160. In some embodiments, the analysis result transmitter 150 is configured to transfer the information wirelessly. In some embodiments, the analysis result transmitter 150 is configured to transmit the information via a wired connection. In some embodiments, the analysis result transmitter 150 is configured to transmit identification information for the log data associated with the information as well. Transmitting the identification information for the log data helps the mobile device 160 or the vehicle system 110 to display both the data request and the analysis result to the occupant. - The
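routing decision described above reduces to a comparison of two measured transfer rates; the function name and the rate units below are illustrative assumptions, not part of the disclosure.

```python
# Sketch of choosing where the analysis result transmitter 150 sends its
# result, based on which downstream link is currently faster.
def choose_destination(rate_to_mobile: float, rate_to_vehicle: float) -> str:
    if rate_to_mobile > rate_to_vehicle:
        return "mobile_device"   # mobile device relays the result to the vehicle
    return "vehicle_system"      # send directly, bypassing the mobile device


dest_a = choose_destination(rate_to_mobile=50.0, rate_to_vehicle=10.0)
dest_b = choose_destination(rate_to_mobile=5.0, rate_to_vehicle=25.0)
```

- The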
log transmitter 152 is configured to transmit information related to the processing of the log data by the server 140. In some embodiments, the log transmitter 152 transmits the information to the mobile device 160. In some embodiments, the log transmitter 152 transmits the information to the vehicle system 110. In some embodiments, the server 140 is configured to determine whether the data transfer rate from the server 140 to the mobile device 160 is higher than the transfer rate from the server 140 to the vehicle system 110. In response to a determination that the data transfer rate from the server 140 to the mobile device 160 is higher, the log transmitter 152 is configured to transmit the information to the mobile device 160 to be transmitted to the vehicle system 110. In response to a determination that the data transfer rate from the server 140 to the vehicle system 110 is higher, the log transmitter 152 is configured to transmit the information to the vehicle system 110 directly without the information going through the mobile device 160. In some embodiments, the log transmitter 152 is configured to transmit the log data to the mobile device 160 and/or the vehicle system 110 for review by the occupant. In some embodiments, the log transmitter 152 is configured to transmit identification information for the log data to the mobile device 160 and/or the vehicle system 110 in response to the log analyzer 146 taking the log data out of the queue in the log storer 144. In some embodiments, the log transmitter 152 transmits the information wirelessly. In some embodiments, the log transmitter 152 transmits the information via a wired connection. - The
mobile device 160 includes a log receiver 162 configured to receive information from the log transmitter 152. The mobile device 160 further includes an analysis result receiver 164 configured to receive information from the analysis result transmitter 150. The mobile device 160 further includes a UI 166 configured to convey information to the occupant 180 based on the information received from the log transmitter 152 and the analysis result transmitter 150. The UI 166 is further configured to receive input information from the occupant 180. The mobile device 160 further includes a microphone 168 configured to receive request initiation information and request data from the occupant 180. The mobile device 160 further includes a voice recognizer 170 configured to analyze the data received by the microphone 168 and determine a content of the request initiation information and the request data. The mobile device 160 further includes a request transmitter 172 configured to transmit the request data to the request receiver 130. - The
log receiver 162 is configured to receive information from the log transmitter 152. In some embodiments, the log receiver 162 is configured to receive the information wirelessly. In some embodiments, the log receiver 162 is configured to receive the information via a wired connection. - The
analysis result receiver 164 is configured to receive information from the analysis result transmitter 150. In some embodiments, the analysis result receiver 164 is configured to receive the information wirelessly. In some embodiments, the analysis result receiver 164 is configured to receive the information via a wired connection. - The
UI 166 is configured to receive information from the log receiver 162 and the analysis result receiver 164. The UI 166 is configured to convey the received information to the occupant 180. In some embodiments, the UI 166 includes a touchscreen. In some embodiments, the UI 166 is part of a smartphone. In some embodiments, the UI 166 is integrated into a vehicle including the vehicle system 110. In some embodiments, the UI 166 is configured to receive input from the occupant 180. In some embodiments, the UI 166 is configured to receive an input indicating an identity of the occupant 180. In some embodiments, the UI 166 is configured to receive an input corresponding to a data request from the occupant 180. - The
microphone 168 is configured to capture audio signals from the occupant 180. In some embodiments, the microphone 168 is part of a smartphone. In some embodiments, the microphone 168 is integral with a vehicle including the vehicle system 110. In some embodiments, the microphone 168 includes a directional microphone. In some embodiments, the microphone 168 is configured to capture a voice of the occupant 180. - The
voice recognizer 170 is configured to receive an audio signal from the microphone 168 and determine a content of the audio signal. In some embodiments, the voice recognizer 170 is configured to determine whether the audio signal indicates a request initiation, such as a keyword or key phrase. In some embodiments, the voice recognizer 170 is configured to determine a type of data requested by the occupant 180, such as identifying an object, information about an object, etc. In some embodiments, the voice recognizer 170 is further configured to determine an identity of the occupant 180. In some embodiments, the voice recognizer 170 is configured to determine the identity of the occupant 180 based on voice recognition software. In some embodiments, the voice recognizer 170 is configured to determine the identity of the occupant 180 based on an identifying keyword or key phrase, such as an occupant name or other identifying information. In some embodiments, the voice recognizer 170 is configured to determine the identity of the occupant 180 based on an input received at the UI 166. In some embodiments, the voice recognizer 170 is configured to determine the identity of the occupant 180 based on an input from the vehicle system 110, such as an image from the occupant monitoring camera 112 of the occupant that is speaking. - The above description relates to a request initiation based on a verbal input. One of ordinary skill in the art will recognize that the current description is not limited to a verbal request initiation. In some embodiments, the request initiation includes an input received at the
UI 166. In some embodiments, the request initiation includes a detected gesture, such as a gesture detected using the occupant monitoring camera 112. In some embodiments, the request initiation includes a combination of different inputs, such as an input at the UI 166 and a verbal input, or a recognition result of a face of the occupant, or a recognition result of an iris of an eye of the occupant by the gaze detector 122, or other suitable combinations. Inclusion of a request initiation as part of an occupant request helps to minimize unnecessary processing and data transmission, which helps to minimize processing load and power consumption for the vehicle including the vehicle system 110. As more vehicles become electric vehicles (EVs), minimizing power consumption becomes a greater concern in order to maintain battery charge and maximize a distance that the EV is able to travel without re-charging. - The
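verbal request initiation described above can be sketched as a keyword check on a transcribed utterance. The wake phrases and function names are illustrative assumptions; a deployed voice recognizer 170 would use a trained model rather than substring matching.

```python
# Sketch of keyword/key-phrase detection for request initiation; the
# phrases below are made-up examples, not part of the disclosure.
WAKE_PHRASES = ("hey car", "ok vehicle")

def is_request_initiation(transcript: str) -> bool:
    """Return True if the transcribed audio contains a wake phrase."""
    text = transcript.lower()
    return any(phrase in text for phrase in WAKE_PHRASES)


initiated = is_request_initiation("Hey car, what is that building?")
ignored = is_request_initiation("Turn left at the next intersection")
```

- The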
request transmitter 172 is configured to receive request information from the voice recognizer 170 and transmit information to the request receiver 130. In some embodiments, the request transmitter 172 is configured to transmit a request initiation signal in response to the voice recognizer 170 identifying a request initiation. In some embodiments, the request transmitter 172 does not send a signal in response to the voice recognizer 170 identifying a request initiation. Sending a signal in response to a request initiation helps the vehicle system 110 to store sensor data to improve accuracy and precision of satisfying the occupant request. However, sending the signal in response to a request initiation increases an amount of data transmitted and processing load. The request transmitter 172 is configured to transmit the occupant request based on the analysis by the voice recognizer 170. In some embodiments, the request transmitter 172 is configured to transmit the occupant request or other information wirelessly. In some embodiments, the request transmitter 172 is configured to transmit the occupant request or other information via a wired connection. - One of ordinary skill in the art would understand that modifications to the
object identification system 100 are within the scope of this disclosure. For example, in some embodiments, the microphone 168 and the voice recognizer 170 are omitted and occupant requests, including request initiation, are received through the UI 166. In some embodiments, results of the analysis by the server 140 transmitted to the mobile device 160 cause an alert, such as an audio or visual alert, to automatically display on the mobile device 160. -
FIG. 2 is a flowchart of a method 200 of identifying an object in accordance with some embodiments. In some embodiments, the method 200 is implemented using system 100 (FIG. 1). In some embodiments, the method 200 is implemented using system 1100 (FIG. 11). - In
operation 210, the occupant 180 initiates a request. Initiating the request helps to avoid unnecessary processing load on the mobile device 160, the vehicle system 110 and the server 140 by avoiding processing inadvertently triggered occupant requests. In some embodiments, initiating the request includes the occupant 180 speaking a keyword or a key phrase, e.g., detected by the microphone 168 (FIG. 1). In some embodiments, initiating the request includes the occupant touching a button, e.g., on UI 166 (FIG. 1). In some embodiments, initiating the request includes the mobile device 160 or the vehicle system 110 detecting, e.g., using the occupant monitoring camera 112 (FIG. 1), a predetermined gesture by the occupant 180. Once the request is initiated, the mobile device activates a request receiver in operation 220; and the occupant 180 is able to input a request in operation 212. - In
operation 212, the occupant 180 inputs the request. The request is the information that the occupant 180 would like to know about an object of interest. In some embodiments, the request includes identifying information about the object. In some embodiments, the request includes other information about the object, such as hours of operation, directions to the object, historical information about the object, or other suitable information. In some embodiments, the occupant 180 inputs the request verbally, e.g., detected by the microphone 168 (FIG. 1). In some embodiments, the occupant 180 inputs the request using a UI, e.g., UI 166 (FIG. 1). In some embodiments, the occupant 180 inputs the request using a predetermined gesture, e.g., detected by the occupant monitoring camera 112 (FIG. 1). In some embodiments, a manner of initiating a request and inputting the request are the same, e.g., both initiation and inputting are performed verbally by the occupant. In some embodiments, a manner of initiating the request and inputting the request are different, e.g., initiation is performed using a UI and inputting is performed verbally. Other combinations of initiation and inputting of requests are within the scope of this disclosure. - In
operation 220, the mobile device 160 activates a request receiver. Activating the request receiver in response to initiating the request helps the mobile device 160 to conserve power by avoiding having the request receiver constantly monitoring for requests from the occupant 180. In some embodiments, activating the request receiver includes displaying an input screen on a UI, e.g., UI 166 (FIG. 1). In some embodiments, activating the request receiver includes initializing a microphone, e.g., microphone 168 (FIG. 1). In some embodiments, activating the request receiver includes activating circuitry within the mobile device 160 that will process a received request. - The
operation 220 is repeated until the mobile device 160 receives an input request in operation 212. In some embodiments, following a predetermined time period, e.g., 10 seconds to 30 seconds, without receiving the input request from operation 212, the operation 220 is discontinued and the request receiver returns to a sleep or low power state. If the predetermined time period is too long, then power consumption is unnecessarily increased, in some instances. If the predetermined time period is too short, the occupant 180 will not have sufficient time to input the request in operation 212, in some instances. In some embodiments, the operation 220 is discontinued in response to receipt of a cancellation signal, e.g., triggered by a keyword, key phrase, an input to the UI, or other suitable input. - In
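a minimal sketch, the listen window described above polls for an input request until a predetermined timeout expires; the injectable clock and all names are assumptions made so the logic is self-contained and testable.

```python
# Sketch of operation 220's listen window: poll until a request arrives or
# the window expires, then return to a sleep/low-power state (None).
def await_request(poll_fn, now_fn, timeout_s: float):
    deadline = now_fn() + timeout_s
    while now_fn() < deadline:
        request = poll_fn()
        if request is not None:
            return request
    return None  # discontinue; receiver goes back to sleep


# Simulated clock and input source for illustration.
ticks = iter(range(60))
inputs = iter([None, None, "what is that tower?"])
result = await_request(lambda: next(inputs, None), lambda: next(ticks),
                       timeout_s=10.0)

ticks2 = iter(range(60))
timed_out = await_request(lambda: None, lambda: next(ticks2), timeout_s=5.0)
```

- In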
operation 222, the mobile device 160 receives the request from operation 212. In some embodiments, the request is received directly from the occupant 180. In some embodiments, the request is received indirectly from the occupant 180 via an external device, such as a keyboard or another suitable external device. In some embodiments, the operation 212 and the operation 222 are implemented using a same component of the mobile device 160, e.g., the microphone 168 or the UI 166 (FIG. 1). - In
operation 224, the request is analyzed and transmitted. The request is analyzed to determine the type of data requested by the occupant 180. In some embodiments, the request is analyzed using the voice recognizer 170 (FIG. 1). The analyzed request is transmitted to the vehicle system 110 in order to collect log data for satisfying the request. In some embodiments, the analyzed request is transmitted using the request transmitter 172 (FIG. 1). - In
operation 230, the analyzed request is received by the vehicle system 110. In some embodiments, the analyzed request is received wirelessly. In some embodiments, the analyzed request is received via a wired connection. In some embodiments, the analyzed request is received using the request receiver 130 (FIG. 1). - In
operation 232, one or more images of the occupant are captured. The captured images are associated with timestamp data to determine a time at which the one or more images were captured. The one or more images of the occupant capture at least one eye of the occupant. In some embodiments, images of the occupant are captured at regular intervals. In some embodiments, images of the occupant are captured in response to receiving a signal indicating that a request has been initiated, e.g., a signal from the mobile device 160 to the vehicle system 110 as part of operation 220. In some embodiments, the one or more images of the occupant are captured using the occupant monitoring camera 112 (FIG. 1). In some embodiments, only images of the occupant associated with an occupant request are captured. In some embodiments, images of more than one occupant of a vehicle are captured and only images of the occupant associated with the occupant request are used to generate request data later in method 200. In some embodiments, the operation 232 is performed in response to a signal generated in operation 220. In some embodiments, operation 232 is performed independent of receipt of a request initiation. - In
operation 234, the occupant gaze is detected based on the one or more images captured in operation 232. Detecting the gaze of the occupant includes identifying angles of the occupant's gaze relative to the vehicle. In some embodiments, the angles include the azimuth angle and the elevation angle. In some embodiments, detecting the gaze further includes determining a depth of the gaze relative to the vehicle position. In some embodiments, the operation 234 is implemented using the gaze detector 122 (FIG. 1). - In
operation 236, an attention area is identified based on the detected gaze of the occupant from operation 234. The attention area is identified to determine a region of interest (ROI) for the occupant 180. In some embodiments, the attention area is identified based on world coordinates. In some embodiments, the attention area is identified based on pixel regions of an image captured by the vehicle, e.g., using the front camera 114 (FIG. 1). In some embodiments, the attention area is identified based on relative coordinates with respect to the vehicle. Identifying the attention area helps to reduce an amount of data to be transmitted to the server 140 for processing. In some embodiments, the operation 236 is implemented using the attention area recognizer 124 (FIG. 1). - In some embodiments, operations 232-236 are performed continually during operation of the vehicle. The information generated by the operations 232-236 is stored in a memory within the
vehicle system 110 for analysis in response to receiving an occupant request. In some embodiments, operations 232-236 are performed in response to receiving an initiate request signal as part of operation 220. In some embodiments, operations 232-236 are discontinued in response to a signal received indicating that operation 220 has been discontinued due to failure to receive a timely input request or in response to a cancellation input. - In
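one possible sketch, the gaze angles detected in operation 234 are mapped to a pixel attention area in a front-camera image. The field-of-view values, image size, box size, and the linear angle-to-pixel mapping are all simplifying assumptions; a real attention area recognizer 124 would use calibrated camera intrinsics and extrinsics.

```python
# Sketch: map gaze azimuth/elevation (degrees, 0 = straight ahead) to a
# fixed-size region of interest (left, top, right, bottom) in pixels.
def attention_area(azimuth_deg, elevation_deg, image_w=1920, image_h=1080,
                   hfov_deg=120.0, vfov_deg=60.0, box=300):
    cx = image_w / 2 + (azimuth_deg / (hfov_deg / 2)) * (image_w / 2)
    cy = image_h / 2 - (elevation_deg / (vfov_deg / 2)) * (image_h / 2)
    left = max(0, int(cx - box / 2))
    top = max(0, int(cy - box / 2))
    right = min(image_w, int(cx + box / 2))
    bottom = min(image_h, int(cy + box / 2))
    return left, top, right, bottom


roi = attention_area(azimuth_deg=0.0, elevation_deg=0.0)  # gaze straight ahead
```

- In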
operation 238, a data collection request is generated based on the received analyzed request. The data collection request identifies information from the operations 232-236 that is usable to satisfy the received analyzed request. The data collection request identifies which sensors of the vehicle are usable to satisfy the received analyzed request. The data collection request also identifies a time period over which to collect the sensor data based on a timestamp of the received request in operation 222. In some embodiments, the operation 238 is implemented using the data collection requester 132 (FIG. 1). - In
operation 240, sensor data is collected based on the data collection request. In some embodiments, the sensor data is collected from a memory within the vehicle system 110. In some embodiments, the sensor data is collected from a single sensor. In some embodiments, the sensor data is collected from multiple sensors. In some embodiments, the sensor data is collected using the log collector 128 (FIG. 1). - In
operation 242, the sensor data collected in operation 240 is cropped. Cropping the sensor data reduces an amount of data to be transmitted to the server 140. The term cropped here is used based on the sensor data being image data. However, one of ordinary skill in the art would understand that operation 242 is used to reduce superfluous data based on the identified attention area from operation 236 regardless of a type of sensor data being used. In some embodiments, the operation 242 is implemented using the log collector 128 (FIG. 1). The cropped sensor data along with timestamp information is considered log data, in some embodiments. - In
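one possible sketch, cropping to the attention area keeps only the pixels inside the region of interest; the nested-list image is a stand-in for real camera data, and all names are illustrative assumptions.

```python
# Sketch of operation 242: keep only the sensor data inside the attention
# area, roi = (left, top, right, bottom) in pixel coordinates.
def crop_to_attention_area(image, roi):
    left, top, right, bottom = roi
    return [row[left:right] for row in image[top:bottom]]


# A tiny 4x4 "image" of pixel labels, cropped to its lower-right 2x2 corner.
image = [[f"p{r}{c}" for c in range(4)] for r in range(4)]
cropped = crop_to_attention_area(image, roi=(2, 2, 4, 4))
```

- In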
operation 244, the log data is transmitted to the server 140. In some embodiments where a memory within the vehicle system 110 is able to compare the received analyzed request with previous occupant requests, the operation 244 is omitted and the results satisfying the received analyzed request are provided by the vehicle system 110 directly. In some embodiments, the log data is transmitted wirelessly. In some embodiments, the log data is transmitted via a wired connection. In some embodiments, the operation 244 is implemented using the log transmitter 134 (FIG. 1). - In
operation 250, the server 140 receives the log data. In some embodiments, the operation 250 is implemented using the log data receiver 142 (FIG. 1). In some embodiments where the vehicle system 110 is able to provide a result satisfying the occupant request, the log data is not transmitted to the server 140 and the operation 250 is omitted. - In
operation 252, log data is stored in the server 140. The log data is stored for later processing by the server 140. In some embodiments, the log data is stored in a priority-based queue. In some embodiments, priority in the queue is based on a time that the log data is received by the server 140. In some embodiments, priority in the queue is based on a time that the occupant request was received, i.e., in operation 222. In some embodiments, priority in the queue is based on an identity of the occupant 180. - In
operation 254, the log data is analyzed to determine a result that satisfies the occupant request in the log data. The log data is analyzed by comparing the data from the sensors of the vehicle with data in a database of the server 140. Once a match between an object in the vehicle sensor data and the data in the database is found, the database is queried to retrieve information that satisfies the occupant request. For example, in some embodiments, the database is queried to determine identification information for the object, hours of operation for the object, a location of the object, etc. In some embodiments, the information from the database includes a web address for the occupant 180 to find information about the object. In some embodiments where no match between the vehicle sensor data and the data in the database is found, the operation 254 returns a result indicating that no match was found. In some embodiments, the operation 254 is implemented using the log analyzer 146 (FIG. 1). - In
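one possible sketch, the comparison in operation 254 matches a feature vector extracted from the vehicle sensor data against feature vectors stored in the database; cosine similarity and the 0.9 threshold are illustrative assumptions, as are all names and the toy records.

```python
# Sketch of matching log data against database records by feature similarity.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)

def find_match(query_features, database, threshold=0.9):
    """Return the best-matching record, or None if nothing clears the threshold."""
    best = max(database, key=lambda rec: cosine(query_features, rec["features"]))
    if cosine(query_features, best["features"]) >= threshold:
        return best
    return None  # operation 254 reports that no match was found


db = [
    {"name": "City Museum", "features": [1.0, 0.0, 0.0]},
    {"name": "Clock Tower", "features": [0.0, 1.0, 0.0]},
]
match = find_match([0.98, 0.05, 0.0], db)
no_match = find_match([0.5, 0.5, 0.7], db)
```

- In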
operation 256, the analysis result from operation 254 is transmitted. In some embodiments, the analysis result is transmitted wirelessly. In some embodiments, the analysis result is transmitted via a wired connection. In the method 200, the analysis result is transmitted to the mobile device 160. In some embodiments, the analysis result is transmitted to the vehicle system 110 instead of or in addition to the mobile device 160. In some embodiments, the operation 256 is implemented using the analysis result transmitter 150 (FIG. 1). - In
operation 260, the mobile device 160 receives the analysis results. In some embodiments, the analysis results include both the information from the database retrieved in operation 254 as well as log data identification information. Including log data identification information along with the analysis results helps to expedite analysis and provision of additional information about the object in a situation where the occupant requests more information about the object following the receipt of the analysis results. In some embodiments, the operation 260 is implemented using the analysis result receiver 164 (FIG. 1). - In
operation 262, the occupant 180 is notified of the analysis results. In some embodiments, the occupant is notified by providing the occupant 180 with a web address to access information about the object. In some embodiments, the occupant is notified by providing the occupant 180 with the requested information about the object. In some embodiments, the occupant 180 is notified using a visual notification. In some embodiments, the occupant 180 is notified using an audio notification. In some embodiments, the occupant is notified using the UI 166 (FIG. 1). In some embodiments, the occupant 180 is notified by an alert, at least one of audio or visual, automatically appearing on the mobile device 160 in response to receiving the analysis results from the server 140. In some embodiments, the notification to the occupant 180 includes the vehicle sensor data, such as a cropped image, included as part of the log data to allow the occupant 180 to confirm that the received information corresponds to the intended object of interest. In some embodiments, the notification to the occupant 180 includes a request for confirmation that the object of interest was correctly identified; and results of the request for confirmation are provided to the server 140 to help improve performance of log data analysis in operation 254. In some embodiments, after the operation 262, the occupant gives feedback to at least one of the server 140, the mobile device 160, or the vehicle system 110 about whether the received results were actually relevant to the request that the occupant made or about whether the occupant liked the information. This feedback provides training data for a neural network (NN) so that the log analyzer 146, the attention area recognizer 124, the data collection requester 132, and the voice recognizer 170 are able to be tuned or trained so that false positives and false negatives are reduced over time. - One of ordinary skill in the art would recognize that modifications to the
method 200 are within the scope of this disclosure. In some embodiments, additional operations are included in the method 200. For example, in some embodiments, the method 200 includes updating of the database in the server 140 based on confirmation results from the occupant following notification of analysis results. In some embodiments, at least one operation of the method 200 is omitted. For example, in some embodiments, the operation 242 is omitted if data transmission size is not a concern. In some embodiments, an order of operations of the method 200 is changed. For example, in some embodiments, the operation 234 occurs after operation 230 to reduce processing load on the vehicle system 110. One of ordinary skill in the art would recognize that other modifications are within the scope of this disclosure. -
FIG. 3 is a flowchart of a method 300 of identifying an object in accordance with some embodiments. In some embodiments, the method 300 is implemented using system 100 (FIG. 1). In some embodiments, the method 300 is implemented using system 1100 (FIG. 11). The method 300 is similar to the method 200 (FIG. 2). Operations in method 300 that are similar to operations in method 200 have a same reference number. For the sake of brevity, only the operations of method 300 that are different from operations in method 200 are discussed below. - In
operation 305, the log data is analyzed and associated with object information for the object of interest. The log data is analyzed by comparing the data from the sensors of the vehicle with data in a database of the server 140. Once a match between an object in the vehicle sensor data and the data in the database is found, a link to the object information in the database for the matching object is associated with the log data. The link allows the occupant 180 to access the database in the server 140 to obtain the requested information about the object. In some embodiments, the link includes a Uniform Resource Locator (URL) which the occupant is able to open using the UI 166 (such as a web browser). In some embodiments, the link permits the occupant 180 to obtain additional information about the object other than just the requested information. In some embodiments, the log data is analyzed by comparing a feature map that is extracted by a NN from the data from the sensors of the vehicle with a feature map that is extracted by a NN from data in a database of the server 140. In some embodiments, the operation 305 is implemented using the log analyzer 146 (FIG. 1). - In
operation 310, the link to access the log data and associated object information from operation 305 is transmitted. In some embodiments, the link is transmitted wirelessly. In some embodiments, the link is transmitted via a wired connection. In the method 300, the link is transmitted to the mobile device 160. In some embodiments, the link is transmitted to the vehicle system 110 instead of or in addition to the mobile device 160. In some embodiments, the operation 310 is implemented using the analysis result transmitter 150 (FIG. 1). - In
operation 320, the mobile device 160 receives the link. In some embodiments, the link includes both the link for accessing the database as well as log data identification information. Including the log data identification information along with the analysis results helps to expedite the analysis and provision of additional information about the object when the occupant requests more information after receiving the link and the link does not provide access to all of the information about the object stored in the database. In some embodiments, the operation 320 is implemented using the analysis result receiver 164 (FIG. 1). - In
operation 322, the occupant 180 is notified of the link. In some embodiments, the occupant is notified by providing the occupant 180 with a web address to access information about the object. In some embodiments, the occupant is notified by providing the occupant 180 with a selectable icon for accessing the information about the object. In some embodiments, the occupant 180 is notified using a visual notification. In some embodiments, the occupant 180 is notified using an audio notification. In some embodiments, the occupant is notified using the UI 166 (FIG. 1). In some embodiments, the occupant 180 is notified by an alert, at least one of audio or visual, automatically appearing on the mobile device 160 in response to receiving the link from the server 140. In some embodiments, the notification to the occupant 180 includes the vehicle sensor data, such as a cropped image, included as part of the log data to allow the occupant 180 to confirm that the received information corresponds to the intended object of interest. In some embodiments, the notification to the occupant 180 includes a request for confirmation that the object of interest was correctly identified; and results of the request for confirmation are provided to the server 140 to help improve performance of the log data analysis in operation 305. - One of ordinary skill in the art would recognize that modifications to the
method 300 are within the scope of this disclosure. In some embodiments, additional operations are included in the method 300. For example, in some embodiments, the method 300 includes updating the database in the server 140 based on confirmation results from the occupant following notification of the link. In some embodiments, at least one operation of the method 300 is omitted. For example, in some embodiments, the operation 242 is omitted if data transmission size is not a concern. In some embodiments, an order of operations of the method 300 is changed. For example, in some embodiments, the operation 234 occurs after operation 230 to reduce processing load on the vehicle system 110. One of ordinary skill in the art would recognize that other modifications are within the scope of this disclosure. -
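For illustration only, the database matching described for operation 305 can be sketched as a nearest-neighbor search over NN feature vectors. The Python sketch below is a non-limiting assumption about one possible implementation; the function names, the cosine-similarity metric, and the 0.9 threshold are hypothetical rather than part of the disclosed method:

```python
import math

def cosine_similarity(a, b):
    # Similarity between two feature vectors extracted by a neural network.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def find_matching_object(query_features, database, threshold=0.9):
    # Return the link of the best-matching database entry, or None if no
    # entry is similar enough. Each entry: {"features": [...], "link": "..."}.
    best_link, best_score = None, threshold
    for entry in database:
        score = cosine_similarity(query_features, entry["features"])
        if score >= best_score:
            best_link, best_score = entry["link"], score
    return best_link
```

A production system would typically replace the linear scan with an approximate nearest-neighbor index over the database of the server 140.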
FIG. 4 is a view of a data structure 400 of an occupant request in accordance with some embodiments. In some embodiments, the data structure 400 corresponds to a status of the occupant request received from the occupant 180 by the microphone 168 and processed by the voice recognizer 170 (FIG. 1). In some embodiments, the data structure 400 corresponds to the occupant request received in operation 222 (FIG. 2). - The
data structure 400 includes occupant identification information 405. The occupant identification information 405 indicates an identity of the occupant that made the occupant request. In some embodiments, the occupant identification information 405 is determined based on analysis by the voice recognizer 170 (FIG. 1). In some embodiments, the occupant identification information 405 is determined based on an input at the UI 116 (FIG. 1). In some embodiments, the occupant identification information 405 is determined based on who has control of the mobile device 160 (FIG. 1). In some embodiments, the occupant identification information 405 is determined based on a recognition result of an iris of an eye of the occupant recognized by a camera on the mobile device 160. In some embodiments, the occupant identification information 405 is determined based on a fingerprint of the occupant recognized by the mobile device 160 or by a sensor on a steering wheel of the vehicle. The data structure 400 further includes request data 410. The request data 410 includes a content of the information requested by the occupant. In some embodiments, the request data 410 includes a request for identification of an object. In some embodiments, the request data 410 includes a request for information about the object in addition to or different from identification of the object. The data structure 400 further includes timestamp information 415. The timestamp information 415 indicates a time corresponding to receipt of the request from the occupant. - The
data structure 400 is merely exemplary and one of ordinary skill in the art would understand that different information is able to be included in the occupant request data. In some embodiments, at least one of the components is excluded from the data structure 400. For example, in some embodiments, the occupant identification information 405 is excluded from the data structure 400. In some embodiments, additional information is included in the data structure 400. For example, in some embodiments, the data structure 400 further includes information about a location of the occupant within the vehicle. -
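As a non-limiting illustration, the fields of the data structure 400 could be represented as a simple record; the Python class name and field names below are hypothetical and are not part of the disclosure:

```python
from dataclasses import dataclass

@dataclass
class OccupantRequest:
    occupant_id: str    # occupant identification information 405
    request_data: str   # content of the information requested (request data 410)
    timestamp: float    # timestamp information 415, e.g., seconds since the epoch

# Example record for a request to identify an object.
req = OccupantRequest(occupant_id="occupant-1",
                      request_data="identify object",
                      timestamp=1648512000.0)
```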
FIG. 5 is a view of a data structure 500 of attention area data in accordance with some embodiments. In some embodiments, the data structure 500 corresponds to an attention area determined by the attention area analyzer 124 (FIG. 1). In some embodiments, the data structure 500 corresponds to an attention area identified in operation 236 (FIG. 2). - The
data structure 500 includes occupant identification information 505. The occupant identification information 505 indicates an identity of the occupant that made the occupant request. In some embodiments, the occupant identification information 505 is determined based on analysis by the voice recognizer 170 (FIG. 1). In some embodiments, the occupant identification information 505 is determined based on an input at the UI 116 (FIG. 1). In some embodiments, the occupant identification information 505 is determined based on who has control of the mobile device 160 (FIG. 1). In some embodiments, the occupant identification information 505 is determined based on a recognition result of the iris of the eye of the occupant recognized by the gaze detector 122 or a camera on the mobile device 160. In some embodiments, the occupant identification information 505 is determined based on the fingerprint of the occupant recognized by the mobile device 160 or by a sensor on the steering wheel of the vehicle. The data structure 500 further includes timestamp information 510. In some embodiments, the timestamp information 510 indicates a time corresponding to receipt of the request from the occupant. In some embodiments, the timestamp information 510 includes information related to a time when data was captured by the vehicle sensors. In some embodiments, the timestamp information 510 includes information related to a time when the attention area was determined. The data structure 500 further includes region of interest (ROI) information 515. The ROI information 515 indicates a location, e.g., in an image, where the attention area is determined to be located. The ROI information 515 is determined based on a correlation between gaze data for the occupant associated with the occupant identification information 505 and sensor data from the vehicle. The ROI information 515 includes a first corner pixel position 520.
In some embodiments, the first corner pixel position 520 indicates a location within an image of a top left corner of an attention area determined based on the gaze data for the occupant. The ROI information 515 further includes a second corner pixel position 525. In some embodiments, the second corner pixel position 525 indicates a location within the image of a bottom right corner of the attention area determined based on the gaze data for the occupant. Using the first corner pixel position 520 and the second corner pixel position 525, boundaries of the determined attention area are able to be set using minimal position information. In some embodiments, the ROI information 515 is usable for cropping an image, e.g., using the log collector 128 (FIG. 1) or in operation 242 (FIG. 2). - The
data structure 500 is merely exemplary and one of ordinary skill in the art would understand that different information is able to be included in the attention area data. In some embodiments, at least one of the components is excluded from the data structure 500. For example, in some embodiments, the occupant identification information 505 is excluded from the data structure 500. In some embodiments, additional information is included in the data structure 500. For example, in some embodiments, the data structure 500 further includes additional corner pixel positions for the ROI information 515. -
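For illustration, cropping an image to the boundaries defined by the first corner pixel position 520 (top left) and the second corner pixel position 525 (bottom right) can be sketched as follows. The image is modeled as a nested list of pixel rows, and the inclusive-corner convention is an assumption:

```python
def crop_to_roi(image, top_left, bottom_right):
    # top_left = (x0, y0), bottom_right = (x1, y1); both corners inclusive.
    # Returns the sub-image bounded by the two corner pixel positions.
    x0, y0 = top_left
    x1, y1 = bottom_right
    return [row[x0:x1 + 1] for row in image[y0:y1 + 1]]
```

Transmitting only the cropped rows in the log data is one way a data transmission size reduction, such as that discussed for operation 242, could be realized.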
FIG. 6 is a view of a data structure 600 of attention area data in accordance with some embodiments. In some embodiments, the data structure 600 corresponds to an attention area determined by the attention area analyzer 124 (FIG. 1). In some embodiments, the data structure 600 corresponds to an attention area identified in operation 236 (FIG. 2). The data structure 600 is similar to the data structure 500 (FIG. 5). Components of the data structure 600 that are similar to the data structure 500 have a same reference number. For the sake of brevity, only components of the data structure 600 that are different from the data structure 500 are discussed below. - The
data structure 600 includes ROI information 615 that includes depth information 620 in addition to the first corner pixel position 520 and the second corner pixel position 525. The depth information 620 is usable to determine a distance from the vehicle at which a gaze of the occupant is focused. In some embodiments, the depth information 620 is determined using the gaze detector 122 (FIG. 1) or in operation 234 (FIG. 2). Including the depth information 620 helps to increase precision of determining an object about which the occupant is requesting information. - The
data structure 600 is merely exemplary and one of ordinary skill in the art would understand that different information is able to be included in the attention area data. In some embodiments, at least one of the components is excluded from the data structure 600. For example, in some embodiments, the occupant identification information 505 is excluded from the data structure 600. In some embodiments, additional information is included in the data structure 600. For example, in some embodiments, the data structure 600 further includes additional corner pixel positions for the ROI information 615. -
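One way depth information such as the depth information 620 could increase precision is by preferring the candidate object whose estimated distance from the vehicle is closest to the gaze depth. The following is a hypothetical sketch rather than the claimed method; candidate distances are assumed to come from the vehicle sensors:

```python
def select_by_depth(candidates, gaze_depth):
    # candidates: list of (object_label, distance_from_vehicle_m) pairs.
    # Returns the label of the candidate whose distance best matches the
    # depth at which the occupant's gaze is focused.
    return min(candidates, key=lambda c: abs(c[1] - gaze_depth))[0]
```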
FIG. 7 is a view of a data structure 700 of attention area data in accordance with some embodiments. In some embodiments, the data structure 700 corresponds to an attention area determined by the attention area analyzer 124 (FIG. 1). In some embodiments, the data structure 700 corresponds to an attention area identified in operation 236 (FIG. 2). The data structure 700 is similar to the data structure 500 (FIG. 5). Components of the data structure 700 that are similar to the data structure 500 have a same reference number. For the sake of brevity, only components of the data structure 700 that are different from the data structure 500 are discussed below. - The
data structure 700 includes ROI information 715 that includes world coordinate position information 720 in place of the first corner pixel position 520 and the second corner pixel position 525. The world coordinate position information 720 is usable to determine a location of the object within the real world. In some embodiments, the world coordinate position information 720 is determined using the log collector 128 (FIG. 1) or in operation 236 (FIG. 2). Including the world coordinate position information 720 helps to increase precision of determining an object about which the occupant is requesting information. - The
data structure 700 is merely exemplary and one of ordinary skill in the art would understand that different information is able to be included in the attention area data. In some embodiments, at least one of the components is excluded from the data structure 700. For example, in some embodiments, the occupant identification information 505 is excluded from the data structure 700. In some embodiments, additional information is included in the data structure 700. For example, in some embodiments, the data structure 700 further includes at least a partial image of the object. -
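One conventional way to derive a world coordinate such as the world coordinate position information 720 is to back-project a pixel and a depth through a pinhole camera model and then transform the result into a global frame using vehicle localization. The sketch below covers only the camera-frame step; the intrinsic parameters (fx, fy, cx, cy) are hypothetical calibration values, and the axis convention is an assumption:

```python
def pixel_to_camera_frame(u, v, depth, fx, fy, cx, cy):
    # Back-project pixel (u, v) at the given depth (meters) into 3D
    # camera-frame coordinates (x right, y down, z forward) using the
    # pinhole camera model.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```

A full implementation would further rotate and translate this camera-frame point by the vehicle's pose from GPS/localization to obtain the world coordinate.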
FIG. 8 is a view of a user interface 800 in accordance with some embodiments. In some embodiments, the UI 800 corresponds to the UI 116 (FIG. 1). In some embodiments, the UI 800 is part of the mobile device 160 (FIG. 1). In some embodiments, the UI 800 is part of the vehicle system 110 (FIG. 1). - The
UI 800 includes a navigation UI 805 and an image UI 810. The image UI 810 includes a captured image from a vehicle sensor 815 and a highlight of the identified object 820. The UI 800 is usable to notify the occupant of the object that was identified as a source of the occupant request using the image UI 810. The UI 800 is further usable to notify the occupant of a travel path to the object using the navigation UI 805. In some embodiments, the UI 800 is configured to receive information from the occupant as part of the occupant request, request initiation, confirmation of the identified object, or other such input information. In some embodiments, the UI 800 is integrated into the vehicle. In some embodiments, the UI 800 is separable from the vehicle. - The
navigation UI 805 is configured to receive GPS information, e.g., from the GPS 116 (FIG. 1), and display a map visible to the driver of the vehicle. The navigation UI 805 is further configured to display a travel path along the map that the vehicle is able to traverse to reach the identified object. In some embodiments, the navigation UI 805 includes a touchscreen. In some embodiments, the navigation UI 805 is configured to receive updates to the map and/or the travel path from an external device, such as the server 140 (FIG. 1). - The
image UI 810 includes a captured image from the vehicle sensor 815 and a highlight of the identified object 820. The highlight of the identified object 820 overlaps the image from the vehicle sensor 815 to identify the object within the image from the vehicle sensor. In some embodiments, the image from the vehicle sensor 815 is a cropped image from the vehicle sensor. In some embodiments, the image UI 810 is able to receive input from the occupant to confirm or deny the accuracy of the identified object. In some embodiments, the image UI 810 includes a touchscreen. -
FIG. 8 shows the navigation UI 805 as being separate from the image UI 810. In some embodiments, the image UI 810 is overlaid on the navigation UI 805. In some embodiments, the image UI 810 is hidden while the vehicle is in motion. -
FIG. 9 is a view of a user interface 900 in accordance with some embodiments. In some embodiments, the UI 900 corresponds to the UI 116 (FIG. 1). In some embodiments, the UI 900 is part of the mobile device 160 (FIG. 1). In some embodiments, the UI 900 is part of the vehicle system 110 (FIG. 1). The UI 900 is similar to the UI 800. Components of the UI 900 that are similar to the UI 800 have a same reference number. For the sake of brevity, only components of the UI 900 that are different from the UI 800 are discussed below. - The
UI 900 includes a link UI 910 configured to display a link to object information, e.g., a link received in operation 320 (FIG. 3). In some embodiments, the link UI 910 includes a selectable link and is configured to display the object information in response to retrieving the information following selection of the link by the occupant. In some embodiments, the link UI 910 is configured to display an icon associated with the link. In some embodiments, the link UI 910 includes a touchscreen. -
FIG. 9 shows the navigation UI 805 as being separate from the image UI 810 and the link UI 910. In some embodiments, at least one of the image UI 810 or the link UI 910 is overlaid on the navigation UI 805. In some embodiments, at least one of the image UI 810 or the link UI 910 is hidden while the vehicle is in motion. -
FIG. 10 is a view of a user interface 1000 in accordance with some embodiments. In some embodiments, the UI 1000 corresponds to the UI 116 (FIG. 1). In some embodiments, the UI 1000 is part of the mobile device 160 (FIG. 1). In some embodiments, the UI 1000 is part of the vehicle system 110 (FIG. 1). The UI 1000 is similar to the UI 800. Components of the UI 1000 that are similar to the UI 800 have a same reference number. For the sake of brevity, only components of the UI 1000 that are different from the UI 800 are discussed below. - The
UI 1000 includes a request history UI 1010 configured to display information related to the occupant request and any subsequent requests for additional information about the object. In some embodiments, the request history UI 1010 includes a dialog-type display with the occupant request and object information provided in sequence. In some embodiments, the request history UI 1010 is configured to provide a selectable list of previous occupant requests; and display the information provided in response to a corresponding occupant request in response to selection of that occupant request. In some embodiments, the request history UI 1010 includes a touchscreen. -
FIG. 10 shows the navigation UI 805 as being separate from the image UI 810 and the request history UI 1010. In some embodiments, at least one of the image UI 810 or the request history UI 1010 is overlaid on the navigation UI 805. In some embodiments, at least one of the image UI 810 or the request history UI 1010 is hidden while the vehicle is in motion. -
FIG. 11 is a block diagram of a system for implementing object identification in accordance with some embodiments. System 1100 includes a hardware processor 1102 and a non-transitory, computer readable storage medium 1104 encoded with, i.e., storing, the computer program code 1106, i.e., a set of executable instructions. Computer readable storage medium 1104 is also encoded with instructions 1107 for interfacing with external devices. The processor 1102 is electrically coupled to the computer readable storage medium 1104 via a bus 1108. The processor 1102 is also electrically coupled to an input/output (I/O) interface 1110 by bus 1108. A network interface 1112 is also electrically connected to the processor 1102 via bus 1108. Network interface 1112 is connected to a network 1114, so that processor 1102 and computer readable storage medium 1104 are capable of connecting to external elements via network 1114. The processor 1102 is configured to execute the computer program code 1106 encoded in the computer readable storage medium 1104 in order to cause system 1100 to be usable for performing a portion or all of the operations as described in object identification system 100 (FIG. 1), method 200 (FIG. 2) or method 300 (FIG. 3). - In some embodiments, the
processor 1102 is a central processing unit (CPU), a multi-processor, a distributed processing system, an application specific integrated circuit (ASIC), and/or a suitable processing unit. - In some embodiments, the computer
readable storage medium 1104 includes an electronic, magnetic, optical, electromagnetic, infrared, and/or a semiconductor system (or apparatus or device). For example, the computer readable storage medium 1104 includes a semiconductor or solid-state memory, a magnetic tape, a removable computer diskette, a random-access memory (RAM), a read-only memory (ROM), a rigid magnetic disk, and/or an optical disk. In some embodiments using optical disks, the computer readable storage medium 1104 includes a compact disk-read only memory (CD-ROM), a compact disk-read/write (CD-R/W), and/or a digital video disc (DVD). - In some embodiments, the
storage medium 1104 stores the computer program code 1106 configured to cause system 1100 to perform a portion or all of the operations as described in object identification system 100 (FIG. 1), method 200 (FIG. 2) or method 300 (FIG. 3). In some embodiments, the storage medium 1104 also stores information needed for performing a portion or all of the operations as described in object identification system 100 (FIG. 1), method 200 (FIG. 2) or method 300 (FIG. 3) as well as information generated during performing a portion or all of the operations as described in object identification system 100 (FIG. 1), method 200 (FIG. 2) or method 300 (FIG. 3), such as a gaze data parameter 1116, an object data parameter 1118, a vehicle position parameter 1120, a request content parameter 1122, and/or a set of executable instructions to perform a portion or all of the operations as described in object identification system 100 (FIG. 1), method 200 (FIG. 2) or method 300 (FIG. 3). - In some embodiments, the
storage medium 1104 stores instructions 1107 for interfacing with external devices. The instructions 1107 enable processor 1102 to generate instructions readable by the external devices to effectively implement a portion or all of the operations as described in object identification system 100 (FIG. 1), method 200 (FIG. 2) or method 300 (FIG. 3). -
System 1100 includes I/O interface 1110. I/O interface 1110 is coupled to external circuitry. In some embodiments, I/O interface 1110 includes a keyboard, keypad, mouse, trackball, trackpad, and/or cursor direction keys for communicating information and commands to processor 1102. -
System 1100 also includes network interface 1112 coupled to the processor 1102. Network interface 1112 allows system 1100 to communicate with network 1114, to which one or more other computer systems are connected. Network interface 1112 includes wireless network interfaces such as BLUETOOTH, WIFI, WIMAX, GPRS, or WCDMA; or wired network interfaces such as ETHERNET, USB, or IEEE-1394. In some embodiments, a portion or all of the operations as described in object identification system 100 (FIG. 1), method 200 (FIG. 2) or method 300 (FIG. 3) is implemented in two or more systems 1100, and information such as gaze data parameter 1116, object data parameter 1118, vehicle location parameter 1120, or request content parameter 1122 are exchanged between different systems 1100 via network 1114. - An aspect of this description relates to a method of obtaining object information. The method includes receiving a request initiation from an occupant of a vehicle. The method includes receiving a request from the occupant after receiving the request initiation. The method further includes determining a content of the request from the occupant. The method further includes detecting a gaze location of the occupant. The method further includes receiving information related to an environment surrounding the vehicle based on data collected by a sensor attached to the vehicle. The method further includes identifying a region of interest (ROI) outside of the vehicle based on the detected gaze location and the information related to the environment surrounding the vehicle. The method further includes generating log data based on the ROI and the content of the request. The method further includes transmitting the log data to an external device. The method further includes receiving information related to an object within the ROI, wherein the information satisfies the content of the request.
In some embodiments, receiving the request initiation includes receiving the request initiation including a keyword, a key phrase, a predetermined gesture, or an input to a user interface (UI). In some embodiments, receiving information related to the environment surrounding the vehicle includes receiving an image from a camera attached to the vehicle. In some embodiments, the method further includes cropping the image based on the ROI, wherein generating the log data comprises generating the log data using the cropped image. In some embodiments, receiving information related to the object includes receiving identifying information related to the object in response to the content of the request being a request for identification of the object. In some embodiments, the method further includes determining an identity of the occupant, wherein generating the log data comprises generating the log data based on the identity of the occupant. In some embodiments, detecting the gaze location of the occupant includes detecting an azimuth angle of a gaze of the occupant relative to the vehicle, and detecting an elevation angle of the gaze of the occupant relative to the vehicle. In some embodiments, detecting the gaze location of the occupant further includes detecting a depth of the gaze of the occupant relative to the vehicle. In some embodiments, detecting the gaze location of the occupant includes detecting a world coordinate of the gaze location, and generating the log data comprises generating the log data based on the world coordinate. In some embodiments, detecting the gaze location of the occupant includes capturing an image of the occupant using a camera attached to the vehicle.
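The azimuth/elevation/depth representation of a gaze described above can be converted into a 3D offset from the vehicle with standard spherical-to-Cartesian trigonometry. The following sketch assumes azimuth is measured from the vehicle's forward axis (positive to the right) and elevation from the horizontal plane (positive up); these conventions, and the function name, are assumptions for illustration only:

```python
import math

def gaze_to_offset(azimuth_deg, elevation_deg, depth_m):
    # Convert gaze angles (degrees) and gaze depth (meters) into a
    # (forward, right, up) offset from the vehicle, in meters.
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    forward = depth_m * math.cos(el) * math.cos(az)
    right = depth_m * math.cos(el) * math.sin(az)
    up = depth_m * math.sin(el)
    return (forward, right, up)
```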
- An aspect of this description relates to a system for obtaining object information. The system includes an occupant monitoring camera; a front camera; a non-transitory computer readable medium configured to store instructions thereon; and a processor connected to the non-transitory computer readable medium. The processor is configured to execute the instructions for receiving a request initiation from an occupant of a vehicle. The processor is further configured to execute the instructions for receiving a request from the occupant after receiving the request initiation. The processor is further configured to execute the instructions for determining a content of the request from the occupant. The processor is further configured to execute the instructions for detecting a gaze location of the occupant based on information from the occupant monitoring camera. The processor is further configured to execute the instructions for receiving information related to an environment surrounding the vehicle based on the front camera attached to the vehicle. The processor is further configured to execute the instructions for identifying a region of interest (ROI) outside of the vehicle based on the detected gaze location and the information related to the environment surrounding the vehicle. The processor is further configured to execute the instructions for generating log data based on the ROI and the content of the request. The processor is further configured to execute the instructions for generating instructions for transmitting the log data to an external device. The processor is further configured to execute the instructions for receiving information related to an object within the ROI, wherein the information satisfies the content of the request. In some embodiments, the processor is configured to execute the instructions for cropping an image from the front camera based on the ROI; and generating the log data using the cropped image. 
In some embodiments, the processor is configured to execute the instructions for receiving information related to the object comprising identifying information related to the object in response to the content of the request being a request for identification of the object. In some embodiments, the processor is configured to execute the instructions for determining an identity of the occupant; and generating the log data based on the identity of the occupant. In some embodiments, the processor is configured to execute the instructions for detecting an azimuth angle of a gaze of the occupant relative to the vehicle, and detecting an elevation angle of the gaze of the occupant relative to the vehicle. In some embodiments, the processor is configured to execute the instructions for detecting a depth of the gaze of the occupant relative to the vehicle. In some embodiments, the processor is configured to execute the instructions for detecting a world coordinate of the gaze location; and generating the log data based on the world coordinate.
- An aspect of this description relates to a method of obtaining object information. The method includes receiving a request initiation from an occupant of a vehicle using a microphone. The method further includes receiving a request from the occupant after receiving the request initiation using the microphone. The method further includes detecting a gaze location of the occupant. The method further includes receiving information related to an environment surrounding the vehicle using a camera attached to the vehicle. The method further includes generating log data based on the information related to the environment surrounding the vehicle and the received request. The method further includes transmitting the log data to an external device. The method further includes receiving information related to an object within the environment surrounding the vehicle. The method further includes automatically generating a notification viewable by the occupant in response to receiving the information related to the object. In some embodiments, receiving information related to the object includes receiving a link for accessing the external device. In some embodiments, automatically generating the notification includes displaying the link on a user interface viewable by the occupant.
- The foregoing outlines features of several embodiments so that those skilled in the art may better understand the aspects of the present disclosure. Those skilled in the art should appreciate that they may readily use the present disclosure as a basis for designing or modifying other processes and structures for carrying out the same purposes and/or achieving the same advantages of the embodiments introduced herein. Those skilled in the art should also realize that such equivalent constructions do not depart from the spirit and scope of the present disclosure, and that they may make various changes, substitutions, and alterations herein without departing from the spirit and scope of the present disclosure.
Claims (20)
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/707,874 US20230316769A1 (en) | 2022-03-29 | 2022-03-29 | Object information obtaining method and system for implementing |
JP2023026444A JP7526837B2 (en) | 2022-03-29 | 2023-02-22 | Object information acquisition method and system for implementing same |
CN202310310614.9A CN116895058A (en) | 2022-03-29 | 2023-03-28 | Object information acquisition method and system for implementing the method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/707,874 US20230316769A1 (en) | 2022-03-29 | 2022-03-29 | Object information obtaining method and system for implementing |
Publications (1)
Publication Number | Publication Date |
---|---|
US20230316769A1 (en) | 2023-10-05 |
Family
ID=88193235
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/707,874 Pending US20230316769A1 (en) | 2022-03-29 | 2022-03-29 | Object information obtaining method and system for implementing |
Country Status (3)
Country | Link |
---|---|
US (1) | US20230316769A1 (en) |
JP (1) | JP7526837B2 (en) |
CN (1) | CN116895058A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20230322264A1 (en) * | 2022-04-06 | 2023-10-12 | Ghost Autonomy Inc. | Process scheduling based on data arrival in an autonomous vehicle |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009257964A (en) | 2008-04-17 | 2009-11-05 | Toyota Motor Corp | Navigation device |
JP2015092302A (en) | 2012-01-30 | 2015-05-14 | 日本電気株式会社 | Video processing system, video processing method, video processing device, and control method and control program thereof |
JP2013255168A (en) | 2012-06-08 | 2013-12-19 | Toyota Infotechnology Center Co Ltd | Imaging apparatus and imaging method |
JP6480279B2 (en) | 2014-10-15 | 2019-03-06 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America | Information acquisition method, information acquisition system, and information acquisition program |
JP2018004325A (en) | 2016-06-28 | 2018-01-11 | 京セラ株式会社 | Gaze point detector, gaze point detection method, gaze point detection system, and vehicle |
JP2020144552A (en) | 2019-03-05 | 2020-09-10 | 株式会社デンソーテン | Information providing device and information providing method |
- 2022
- 2022-03-29 US US17/707,874 patent/US20230316769A1/en active Pending
- 2023
- 2023-02-22 JP JP2023026444A patent/JP7526837B2/en active Active
- 2023-03-28 CN CN202310310614.9A patent/CN116895058A/en active Pending
Also Published As
Publication number | Publication date |
---|---|
CN116895058A (en) | 2023-10-17 |
JP2023147206A (en) | 2023-10-12 |
JP7526837B2 (en) | 2024-08-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11392131B2 (en) | Method for determining driving policy | |
US10421436B2 (en) | Systems and methods for surveillance of a vehicle using camera images | |
JP6940612B2 (en) | Near crash judgment system and method | |
CN110268413B (en) | Low level sensor fusion | |
US11288860B2 (en) | Information processing apparatus, information processing method, program, and movable object | |
US7792328B2 (en) | Warning a vehicle operator of unsafe operation behavior based on a 3D captured image stream | |
KR101912703B1 (en) | 3-channel monitoring apparatus for state of vehicle and method thereof | |
KR20170137636A (en) | Information-attainment system based on monitoring an occupant | |
WO2019077999A1 (en) | Imaging device, image processing apparatus, and image processing method | |
US20180147986A1 (en) | Method and system for vehicle-based image-capturing | |
US20200160715A1 (en) | Information processing system, program, and information processing method | |
US10964137B2 (en) | Risk information collection device mounted on a vehicle | |
KR20210043566A (en) | Information processing device, moving object, information processing method and program | |
US11135987B2 (en) | Information processing device, information processing method, and vehicle | |
US20230316769A1 (en) | Object information obtaining method and system for implementing | |
EP4167193A1 (en) | Vehicle data collection system and method of using | |
CN110784523B (en) | Target object information pushing method and device | |
EP4083910A1 (en) | Information processing device, information processing system, information processing method and information processing program | |
JP7331929B2 (en) | Image data acquisition device, method, program and image data transmission device | |
CN110033631B (en) | Determination device, determination method, and non-transitory computer-readable storage medium | |
US20220281485A1 (en) | Control apparatus, system, vehicle, and control method | |
US12027050B2 (en) | Hazard notification method and system for implementing | |
US12052563B2 (en) | System for data communication using vehicle camera, method therefor and vehicle for the same | |
JP2020071594A (en) | History storage device and history storage program | |
CN115690719A (en) | System and method for object proximity monitoring around a vehicle |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| AS | Assignment | Owner name: WOVEN ALPHA, INC., JAPAN. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HASHIMOTO, DAISUKE;REEL/FRAME:062289/0153. Effective date: 20221220 |
| AS | Assignment | Owner name: WOVEN BY TOYOTA, INC., JAPAN. Free format text: MERGER AND CHANGE OF NAME;ASSIGNORS:WOVEN ALPHA, INC.;WOVEN BY TOYOTA, INC.;REEL/FRAME:063769/0707. Effective date: 20230401 |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
| STPP | Information on status: patent application and granting procedure in general | Free format text: ADVISORY ACTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | Free format text: ADVISORY ACTION MAILED |
| STCV | Information on status: appeal procedure | Free format text: NOTICE OF APPEAL FILED |