US20220189292A1 - Systems and methods for vehicle identification - Google Patents

Systems and methods for vehicle identification

Info

Publication number
US20220189292A1
Authority
US
United States
Prior art keywords
vehicle
identifications
vehicles
information
identifying
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US17/689,127
Inventor
Xiaoyong Yi
Liwei Ren
Jiang Zhang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Voyager Technology Co Ltd
Voyager HK Co Ltd
Original Assignee
Beijing Voyager Technology Co Ltd
Voyager HK Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Voyager Technology Co Ltd, Voyager HK Co Ltd filed Critical Beijing Voyager Technology Co Ltd
Priority to US17/689,127
Assigned to VOYAGER (HK) CO., LTD. (assignment of assignors interest; assignor: DIDI RESEARCH AMERICA, LLC)
Assigned to BEIJING VOYAGER TECHNOLOGY CO., LTD. (assignment of assignors interest; assignor: VOYAGER (HK) CO., LTD.)
Assigned to DIDI RESEARCH AMERICA, LLC (assignment of assignors interest; assignors: REN, LIWEI; YI, XIAOYONG; ZHANG, JIANG)
Publication of US20220189292A1
Legal status: Abandoned


Classifications

    • G: PHYSICS
        • G05: CONTROLLING; REGULATING
            • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
                • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
                    • G05D1/0088: characterized by the autonomous decision making process, e.g. artificial intelligence, predefined behaviours
                • G05D2201/0213
        • G06: COMPUTING; CALCULATING OR COUNTING
            • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
                • G06V20/00: Scenes; Scene-specific elements
                    • G06V20/50: Context or environment of the image
                        • G06V20/56: exterior to a vehicle by using sensors mounted on the vehicle
                            • G06V20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
                                • G06V20/584: of vehicle lights or traffic lights
        • G07: CHECKING-DEVICES
            • G07C: TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
                • G07C5/00: Registering or indicating the working of vehicles
                    • G07C5/008: communicating information to a remotely located station
                    • G07C5/08: Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time
                        • G07C5/0816: Indicating performance data, e.g. occurrence of a malfunction
        • G08: SIGNALLING
            • G08G: TRAFFIC CONTROL SYSTEMS
                • G08G1/00: Traffic control systems for road vehicles
                    • G08G1/01: Detecting movement of traffic to be counted or controlled
                        • G08G1/0104: Measuring and analyzing of parameters relative to traffic conditions
                            • G08G1/0108: based on the source of data
                                • G08G1/0112: from the vehicle, e.g. floating car data [FCD]
                            • G08G1/0137: for specific applications
                                • G08G1/0141: for traffic information dissemination
                        • G08G1/017: identifying vehicles
                            • G08G1/0175: by photographing vehicles, e.g. when violating traffic rules
                        • G08G1/04: using optical or ultrasonic detectors
                        • G08G1/052: with provision for determining speed or overspeed
                        • G08G1/056: with provision for distinguishing direction of travel
                    • G08G1/20: Monitoring the location of vehicles belonging to a group, e.g. fleet of vehicles, countable or determined number of vehicles

Definitions

  • the disclosure relates generally to vehicle identification.
  • ALPR: automated license plate reader
  • An autonomous vehicle may be equipped with a set of sensors configured to generate output signals conveying information about the surroundings of the autonomous vehicle.
  • a sensor may include an image sensor configured to generate output signals conveying image information defining images of a surrounding environment. The images may be used to identify vehicles present in the environment and their locations.
  • the autonomous vehicle may be part of a fleet of autonomous vehicles individually equipped with such sensors. The output of the sensors and/or information derived from the output of the sensors from the fleet of autonomous vehicles may facilitate a crowd-sourced technique for vehicle identification.
  • information derived from different autonomous vehicles may be compared to determine one or a combination of vehicle identity profiles, speed of travel, direction of travel, or trajectory.
  • the method may comprise: obtaining, from a plurality of autonomous vehicles, a set of vehicle identification information, wherein each of the set of vehicle identification information is obtained from a corresponding autonomous vehicle in the plurality of autonomous vehicles, and comprises image or video information of identifications and locations of one or more vehicles that are in an environment surrounding the corresponding autonomous vehicle; identifying a plurality of identifications in the set of vehicle identification information that belong to an individual vehicle, wherein the identifying comprises: identifying two or more identifications in the set of vehicle identification information being within a threshold degree of sameness; and identifying two or more identifications in the set of vehicle identification information comprising one or more overlapping parts of a same vehicle; determining vehicle context information of a plurality of individual vehicles based on the plurality of identifications belonging to each of the plurality of individual vehicles; receiving a request for identifying a target vehicle, wherein the request comprises an identification of the target vehicle; and identifying and returning vehicle context information of the target vehicle.
  • the system may comprise one or more processors and a memory storing instructions.
  • the instructions, when executed by the one or more processors, may cause the system to perform: obtaining, from a plurality of autonomous vehicles, a set of vehicle identification information, wherein each of the set of vehicle identification information is obtained from a corresponding autonomous vehicle in the plurality of autonomous vehicles, and comprises image or video information of identifications and locations of one or more vehicles that are in an environment surrounding the corresponding autonomous vehicle; identifying a plurality of identifications in the set of vehicle identification information that belong to an individual vehicle, wherein the identifying comprises: identifying two or more identifications in the set of vehicle identification information being within a threshold degree of sameness; and identifying two or more identifications in the set of vehicle identification information comprising one or more overlapping parts of a same vehicle; determining vehicle context information of a plurality of individual vehicles based on the plurality of identifications belonging to each of the plurality of individual vehicles; receiving a request for identifying a target vehicle, wherein the request comprises an identification of the target vehicle; and identifying and returning vehicle context information of the target vehicle.
  • the identifications may include one or a combination of a license plate number, a color, a make, a model, or a unique marking.
  • individual vehicle identification information may include the identifications of the one or more vehicles.
  • identifications of the one or more vehicles may be derived from the vehicle identification information.
  • individual vehicle identification information may include one or a combination of image information or video information.
  • the identifications of the one or more vehicles may be derived from the image information and/or video information through one or more image and/or video processing techniques.
  • the identifying the plurality of identifications in the set of vehicle identification information that belong to the individual vehicle comprises: stitching the image or video information comprising the two or more identifications that are within the threshold degree of sameness and comprise one or more overlapping parts of a same vehicle.
  • the stitching may be implemented with one or a combination of feature point detection, image registration, alignment, or composing.
  • the determining vehicle context information of the plurality of individual vehicles comprises: determining one or a combination of a speed of travel, a direction of travel, or a trajectory of a vehicle based on comparing locations of the vehicle in the stitched image or video information comprising the vehicle.
  • the identity profile of an individual vehicle may represent an identity of the individual vehicle as a whole.
  • the identity profile may be determined by combining multiple ones of the identifications determined to be for the same vehicle.
  • the trajectory may include a path followed by a vehicle.
  • the system may further perform tracking of a vehicle based on the set of vehicle identification information and/or vehicle context information.
  • FIG. 1 illustrates an example environment for vehicle identification, in accordance with various embodiments of the disclosure.
  • FIG. 2 illustrates an example flow chart of vehicle identification, in accordance with various embodiments of the disclosure.
  • FIG. 3 illustrates a block diagram of an example computer system in which any of the embodiments described herein may be implemented.
  • One or more techniques presented herein may perform vehicle identification using a fleet of autonomous vehicles.
  • the information collected from the autonomous vehicles may improve vehicle identification due to the distribution of the autonomous vehicles in an environment and due to the amount of information that may be retrieved from them, providing a dense dataset through which vehicles may be identified.
  • FIG. 1 illustrates an example system 100 for vehicle identification, in accordance with various embodiments.
  • the example system 100 may include one or a combination of a computing system 102 , an autonomous vehicle 116 , or one or more other autonomous vehicles 122 .
  • while some features and functions of the systems and methods presented herein may be directed to the autonomous vehicle 116, this is for illustrative purposes only and not to be considered limiting.
  • other autonomous vehicle(s) included in the one or more other autonomous vehicles 122 may be configured the same as or similar to autonomous vehicle 116 and may include the same or similar components, described herein.
  • the autonomous vehicle 116 and the one or more other autonomous vehicles 122 may represent a set of autonomous vehicles which may be part of a fleet of autonomous vehicles.
  • the autonomous vehicle 116 may include one or more processors and memory (e.g., permanent memory, temporary memory).
  • the processor(s) may be configured to perform various operations by interpreting machine-readable instructions stored in the memory.
  • the autonomous vehicle 116 may include other computing resources.
  • the autonomous vehicle 116 may have access (e.g., via one or more connections, via one or more networks 110 ) to other computing resources or other entities participating in the system 100 .
  • the autonomous vehicle 116 may include one or a combination of an identification component 118 or a set of sensors 120 .
  • the autonomous vehicle 116 may include other components.
  • the set of sensors 120 may include one or more sensors configured to generate output signals conveying vehicle identification information or other information.
  • the vehicle identification information may convey identifications, locations, or a combination of identifications and locations of one or more vehicles present in an environment surrounding autonomous vehicle 116.
  • the identifications of the one or more vehicles may include one or a combination of a license plate number, one or more vehicle colors, a vehicle make (e.g., manufacturer), a vehicle model, or a unique marking.
  • a license plate number may be comprised of one or a combination of alphanumeric characters or symbols.
  • a unique marking may refer to one or a combination of decals, writing, damage, or other marking upon a vehicle.
  • the identifications may be partial identifications.
  • a partial identification may include one or a combination of a part of a license plate number (less than all alphanumeric characters or symbols making up the license plate number), some of the colors of the vehicle (if the vehicle is multi-colored), or a make identification without a model.
  • the set of sensors 120 may include an image sensor, a set of image sensors, a location sensor, a set of location sensors, or a combination of image sensors, location sensors, and other sensors.
  • a set of sensors (e.g., a set of image sensors) may include one or more sensors (e.g., one or more image sensors).
  • An image sensor may be configured to generate output signals conveying image information and/or video information.
  • the image information may define visual content in the form of one or more images.
  • the video information may define visual content in the form of a sequence of images. Individual images may be defined by pixels and/or other information. Pixels may be characterized by one or a combination of pixel location, pixel color, or pixel transparency.
  • An image sensor may include one or more of a charge-coupled device sensor, an active pixel sensor, a complementary metal-oxide-semiconductor sensor, an N-type metal-oxide-semiconductor sensor, and/or other image sensors.
  • the identifications of the one or more vehicles may be derived from the image information or the video information through one or more image and/or video processing techniques.
  • Such techniques may include one or a combination of computer vision, Speeded Up Robust Features (SURF), Scale-invariant Feature Transform (SIFT), Oriented FAST and rotated BRIEF (ORB), deep learning (of neural networks), or Optical Character Recognition (OCR).
  • a location sensor may be configured to generate output signals conveying location information.
  • Location information derived from output signals of a location sensor may define one or a combination of a location of autonomous vehicle 116 , an elevation of autonomous vehicle 116 , a timestamp when a location was obtained, or other measurements.
  • a location sensor may include one or a combination of a GPS, an altimeter, or a pressure sensor.
  • individual vehicle identification information may include one or a combination of image information, video information, or identifications derived from the image information or video information.
  • the identification component 118 may determine the identifications of the one or more vehicles from one or a combination of the image information or the video information through one or more of the image or video processing techniques described herein.
  • the identifications may be included in the vehicle identification information, or the vehicle identification information may include one or a combination of the image information or the video information from which the identifications may be derived.
  • the identification component 118 may communicate the vehicle identification information to the computing system 102 via one or more networks 110 .
  • the one or more networks 110 may include the Internet or other networks.
  • the computing system 102 may obtain other vehicle identification information from other autonomous vehicle(s) 122 . Accordingly, the computing system 102 may obtain a set of vehicle identification information.
  • the computing system 102 may include one or more processors and memory (e.g., permanent memory, temporary memory).
  • the processor(s) may be configured to perform various operations by interpreting machine-readable instructions stored in the memory.
  • the computing system 102 may include other computing resources.
  • the computing system 102 may have access (e.g., via one or more connections, via one or more networks 110 ) to other computing resources or other entities participating in the system 100 .
  • the computing system 102 may include one or a combination of an identification component 104 , a context component 106 , or a tracking component 108 . While the computing system 102 is shown in FIG. 1 as a single entity, this is merely for ease of reference and is not meant to be limiting. One or more components or one or more functionalities of the computing system 102 described herein may be implemented in a single computing device or multiple computing devices. In some embodiments, one or more components or one or more functionalities of the computing system 102 described herein may be implemented in one or more networks 110 , one or more endpoints, one or more servers, or one or more clouds.
  • the identification component 104 may obtain, from a set of autonomous vehicles, a set of vehicle identification information.
  • the identification component 104 may obtain vehicle identification information from autonomous vehicle 116 and individual ones of one or more other autonomous vehicles 122 .
  • the individual vehicle identification information obtained from an individual autonomous vehicle may include identifications, locations, or combinations of the identifications and locations of one or more vehicles.
  • the vehicle identification information obtained by identification component 104 may include the identifications of vehicles.
  • autonomous vehicle 116 may determine the identifications of vehicles via identification component 118 and communicate the identifications to computing system 102 .
  • the identification component 104 may determine the identifications of vehicles from the vehicle identification information.
  • the identification component 104 may obtain vehicle identification information including one or a combination of image information or video information.
  • the identification component 104 may determine the identifications of vehicles using one or more image or video-based techniques described herein.
  • the context component 106 may determine, from the set of vehicle identification information, vehicle context information for individual vehicles of the one or more vehicles.
  • the context component 106 may determine vehicle context information for autonomous vehicle 116 based on the vehicle identification information obtained from autonomous vehicle 116 and/or other vehicle identification information from other autonomous vehicles.
  • the vehicle context information for an individual vehicle may describe a context of the individual vehicle.
  • the context of the individual vehicles may describe circumstances specific to the individual vehicles.
  • the context may include one or a combination of a speed of travel, a direction of travel, a trajectory, or an identity profile.
  • determining the context may be based on comparing individual ones of the identifications and the locations of individual vehicles to other ones of the identifications and the locations of the individual vehicles.
  • the comparisons of individual ones of the identifications of the individual vehicles to the other ones of the identifications of the individual vehicles may facilitate determining that multiple ones of the identifications are for the same vehicle. For example, based on the comparisons, it may be determined that multiple ones of the identifications (obtained from the same or different autonomous vehicles) match. A match may convey a logical inference that the multiple identifications are for the same vehicle. In some implementations, matching may mean that identifications are the same or complementary.
  • matching by being the same may mean the identifications are within a threshold degree of sameness.
  • for license plate identifications, two (or more) identifications may be determined to match if they depict the same series of alphanumeric characters or symbols, or depict the same 90% of the series of alphanumeric characters or symbols.
  • for color identifications, two (or more) identifications may be determined to match if they depict colors within a threshold range as observed on a color scale.
  • for vehicle make identifications, two (or more) identifications may be determined to match if they depict vehicles of the same make.
  • for vehicle model identifications, two (or more) identifications may be determined to match if they depict vehicles of the same model.
  • for unique marking identifications, two (or more) identifications may be determined to match if they depict the same visually distinct unique marking in the same location on the vehicle.
  • being a complementary match may mean that multiple identifications may be partial identifications which, when combined, form a complete identification.
  • the partial identifications may include depictions of different parts of the vehicle, and may also include depictions of one or more overlapping parts of the vehicle.
  • for license plate identifications, one identification may depict part of a series of alphanumeric characters or symbols and another identification may depict another part of the series of alphanumeric characters or symbols.
  • the identifications may be combined to depict the series of alphanumeric characters or symbols defining the license plate number as a whole.
  • for color identifications, one identification may depict a part of a vehicle having a color and another identification may depict another part of the vehicle having the same color.
  • the identifications may be combined to depict the vehicle as a whole having the color uniform throughout.
  • for unique marking identifications, one identification may depict part of a unique marking and another identification may depict another part of the unique marking.
  • the identifications may be combined to depict the unique marking as a whole. In some implementations, combining identifications may be accomplished through stitching images and/or video together.
  • stitching may include operations such as one or a combination of feature point detection, image registration, alignment, or composing.
  • Feature point detection may be accomplished through techniques such as SIFT and SURF.
  • Image registration may involve matching features in a set of images.
  • a method for image registration may include Random Sample Consensus (RANSAC) or other techniques.
  • Alignment may include transforming an image to match a view point of another image.
  • Composing may comprise the process whereby the images are aligned in such a way that they appear as a single shot of the vehicle.
  • deep learning (of neural networks) based approaches may also be used.
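  • By way of non-limiting illustration only, the stitching operations named above (feature point detection, image registration, alignment, composing) can be sketched with OpenCV as follows; the function name, the 50-match cap, and the RANSAC reprojection threshold of 5.0 are illustrative assumptions, not the claimed implementation:

```python
import cv2
import numpy as np

def stitch_pair(img_a, img_b):
    """Stitch two overlapping images of the same vehicle into one view."""
    # Feature point detection (SIFT, as named above).
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    # Image registration: match feature descriptors between the images.
    matches = sorted(cv2.BFMatcher().match(des_a, des_b),
                     key=lambda m: m.distance)[:50]
    src = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Alignment: estimate a homography with RANSAC and warp img_b into
    # img_a's viewpoint.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = img_a.shape[:2]
    canvas = cv2.warpPerspective(img_b, H, (w * 2, h))

    # Composing: overlay the reference image so the pair appears as a
    # single shot of the vehicle.
    canvas[0:h, 0:w] = img_a
    return canvas
```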
  • the identity profile of an individual vehicle may represent an identity of the individual vehicle as a whole.
  • the identity of the vehicle as a whole may comprise a representation of more than one identification which may have been obtained for the given vehicle. That is, the identity profile may be determined by combining multiple ones of the identifications (obtained from one or more autonomous vehicles) determined to be for the same vehicle.
  • the identity profile may be made through the stitching techniques described herein which may result in one or more images which depict more than one identification.
  • an identity profile of a vehicle may include an image or series of images from which two or more of a license plate number, a color, a make, a model, or a unique marking of the vehicle may be identifiable.
  • determining one or a combination of the speed of travel, the direction of travel, or the trajectory of a vehicle may be based on comparing the individual locations associated with the multiple ones of the identifications determined to be for the same vehicle, which may be obtained from the stitched images and/or videos capturing the vehicle.
  • a speed of travel may specify a vehicle was traveling at 110 kilometers an hour.
  • a direction of travel may be represented by cardinal directions.
  • a direction of travel may specify that a vehicle was traveling north.
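  • By way of non-limiting illustration, speed and cardinal direction might be derived from two timestamped GPS observations as sketched below; the two-point great-circle estimate and the function name are assumptions, not the claimed method:

```python
import math

def speed_and_heading(p1, p2):
    """Estimate speed (km/h) and cardinal direction from two observations.

    p1, p2: (latitude_deg, longitude_deg, timestamp_seconds), p2 later.
    """
    lat1, lon1, t1 = p1
    lat2, lon2, t2 = p2
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi, dlmb = math.radians(lat2 - lat1), math.radians(lon2 - lon1)

    # Haversine great-circle distance in kilometers.
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    dist_km = 2 * 6371.0 * math.asin(math.sqrt(a))
    speed_kmh = dist_km / ((t2 - t1) / 3600.0)

    # Initial bearing, bucketed into one of eight cardinal directions.
    y = math.sin(dlmb) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlmb))
    bearing = (math.degrees(math.atan2(y, x)) + 360) % 360
    cardinal = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"][round(bearing / 45) % 8]
    return speed_kmh, cardinal
```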
  • a trajectory of a vehicle may include a path followed by the vehicle.
  • the trajectory may be specified with respect to one or a combination of named roads, highways, freeways, intersections, neighborhoods, or cities.
  • locations may be referenced within a map of an environment including information about one or a combination of named roads, highways, freeways, intersections, neighborhoods, or cities.
  • a mapping service may be accessed and used to cross reference the determined locations with the information conveyed in the map.
  • a mapping service may include Google® Maps.
  • a trajectory may specify that a vehicle traveled for two miles on Main St., turned left, and proceeded for six blocks on 1st St., etc.
  • how a vehicle's trajectory changes (or doesn't change) over time may reflect a travel pattern of the vehicle.
  • a travel pattern may include a common trajectory which appears more than once.
  • the tracking component 108 may be configured to actively look for one or more vehicles using the set of vehicle identification information, vehicle context information, or a combination of vehicle identification information and vehicle context information.
  • tracking component 108 may obtain identification(s) of a vehicle, for example through user input by a user desiring to locate the vehicle.
  • the tracking component 108 may monitor the set of vehicle identification information obtained from a fleet of autonomous vehicles and the vehicle context information determined therefrom.
  • the tracking component 108 may perform such monitoring while looking for a match between the user-provided identification(s) and the identification conveyed by the set of vehicle identification information.
  • the tracking component 108 may, in response to finding a match, provide the vehicle context information for the matched vehicle to the user through one or more user interfaces.
  • the provided vehicle context information may include one or a combination of a speed of travel, a direction of travel, or a trajectory of the vehicle to allow the user to track the vehicle.
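  • By way of non-limiting illustration, the monitoring loop might look like the sketch below, where the stream item shape and the on_match callback are assumed stand-ins for the fleet reports and the user-interface hand-off:

```python
from difflib import SequenceMatcher

def monitor_for_target(identification_stream, target_plate,
                       threshold=0.9, on_match=print):
    # Watch (plate, context) reports from the fleet and surface the
    # vehicle context information whenever a match is found.
    for plate, context in identification_stream:
        if SequenceMatcher(None, plate, target_plate).ratio() >= threshold:
            on_match({"plate": plate, **context})

# Example: context carries speed of travel, direction, and trajectory.
reports = [("8XYZ900", {"speed_kmh": 65, "direction": "N"}),
           ("7ABC123", {"speed_kmh": 110, "direction": "N",
                        "trajectory": "Main St. northbound"})]
monitor_for_target(reports, "7ABC123")
```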
  • autonomous vehicles may exchange notifications of vehicle identification information, requests for tracking, or a combination of requests and notifications.
  • autonomous vehicle 116 may send requests via identification component 118 to other autonomous vehicles 122 to identify a particular vehicle.
  • the autonomous vehicle 116 may obtain requests via identification component 118 from other autonomous vehicles 122 to identify a particular vehicle.
  • the autonomous vehicle 116 may notify, via identification component 118 , other vehicles about an identified vehicle (e.g., by sending vehicle identification information).
  • the autonomous vehicle 116 may obtain notifications, via identification component 118 , from other vehicles about an identified vehicle (e.g., by receiving vehicle identification information).
  • the requests and notifications may be sent to computing system 102, which in turn may forward the requests or notifications to autonomous vehicles near a requesting or notifying vehicle's GPS location, or may be sent directly from autonomous vehicle 116 to one or more nearby autonomous vehicles.
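  • By way of non-limiting illustration, forwarding to nearby vehicles could be realized as sketched below; the 2 km radius, the equirectangular distance approximation, and the send callback are assumptions:

```python
import math

def nearby(fleet_positions, origin, radius_km=2.0):
    """Select fleet vehicle ids within radius_km of the origin point."""
    lat0, lon0 = origin
    close = []
    for vehicle_id, (lat, lon) in fleet_positions.items():
        # Equirectangular approximation is adequate at city scale.
        x = math.radians(lon - lon0) * math.cos(math.radians((lat + lat0) / 2))
        y = math.radians(lat - lat0)
        if 6371.0 * math.hypot(x, y) <= radius_km:
            close.append(vehicle_id)
    return close

def forward_request(fleet_positions, requester_pos, request, send):
    # The central system forwards an identification request only to
    # autonomous vehicles near the requester's GPS location.
    for vehicle_id in nearby(fleet_positions, requester_pos):
        send(vehicle_id, request)
```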
  • FIG. 2 illustrates an example flow chart 200 for vehicle identification, in accordance with various embodiments of the disclosure.
  • a set of vehicle identification information may be obtained from a set of autonomous vehicles.
  • the individual vehicle identification information may be obtained from an individual autonomous vehicle.
  • the individual vehicle identification information may convey identifications of one or more vehicles and locations of the one or more vehicles.
  • vehicle context information for individual vehicles may be determined from the set of vehicle identification information.
  • the vehicle context information for the individual vehicles may describe a context of the individual vehicles.
  • FIG. 3 is a block diagram that illustrates a computer system 300 upon which any of the embodiments described herein may be implemented.
  • the computer system 300 includes a bus 302 or other communication mechanism for communicating information, and one or more hardware processors 304 coupled with bus 302 for processing information.
  • Hardware processor(s) 304 may be, for example, one or more general purpose microprocessors.
  • the computer system 300 also includes a main memory 306, such as a random access memory (RAM), cache and/or other dynamic storage devices, coupled to bus 302 for storing information and instructions to be executed by processor(s) 304.
  • Main memory 306 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor(s) 304 .
  • Such instructions when stored in storage media accessible to processor(s) 304 , render computer system 300 into a special-purpose machine that is customized to perform the operations specified in the instructions.
  • Main memory 306 may include non-volatile media and/or volatile media. Non-volatile media may include, for example, optical or magnetic disks. Volatile media may include dynamic memory.
  • Common forms of media may include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a DRAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge, and networked versions of the same.
  • the computer system 300 may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs computer system 300 to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system 300 in response to processor(s) 304 executing one or more sequences of one or more instructions contained in main memory 306 . Such instructions may be read into main memory 306 from another storage medium, such as storage device 308 . Execution of the sequences of instructions contained in main memory 306 causes processor(s) 304 to perform the process steps described herein. For example, the process/method shown in FIG. 2 and described in connection with this figure can be implemented by computer program instructions stored in main memory 306 . When these instructions are executed by processor(s) 304 , they may perform the steps as shown in FIG. 2 and described above. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions.
  • the computer system 300 also includes a communication interface 310 coupled to bus 302 .
  • Communication interface 310 provides a two-way data communication coupling to one or more network links that are connected to one or more networks.
  • communication interface 310 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN (or a WAN component to communicate with a WAN).
  • Wireless links may also be implemented.
  • processors or processor-implemented engines may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other example embodiments, the processors or processor-implemented engines may be distributed across a number of geographic locations.
  • components may constitute either software components (e.g., code embodied on a machine-readable medium) or hardware components (e.g., a tangible unit capable of performing certain operations which may be configured or arranged in a certain physical manner).
  • components of the computing system 102 and autonomous vehicle 116 may be described as performing or configured for performing an operation; in such cases, the components may comprise instructions which program or configure the computing system 102 and autonomous vehicle 116 to perform the operation.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Business, Economics & Management (AREA)
  • Game Theory and Decision Science (AREA)
  • Medical Informatics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Traffic Control Systems (AREA)

Abstract

Systems and methods for vehicle identification are described herein. A set of vehicle identification information may be obtained from a set of autonomous vehicles. Individual vehicle identification information may convey identifications of one or more vehicles and locations of the one or more vehicles. Vehicle context information for individual vehicles may be determined from the set of vehicle identification information. The vehicle context information for the individual vehicles may describe a context of the individual vehicles. The context may include one or a combination of a speed of travel, a direction of travel, a trajectory, or an identity profile.

Description

    CROSS REFERENCE TO RELATED APPLICATION
  • This application is a continuation application of U.S. patent application Ser. No. 16/235,931, filed on Dec. 28, 2018. The entire content of the above referenced application is incorporated herein by reference.
  • TECHNICAL FIELD
  • The disclosure relates generally to vehicle identification.
  • BACKGROUND
  • Some technologies such as automated license plate reader (ALPR) systems have been used to automatically capture data such as license plate numbers of vehicles that come into view, location, date, time, and/or photographs of the vehicle. This data may be uploaded to a central repository for use with many applications. An application may include law enforcement use to find out where a vehicle has been in the past, to determine whether a specific vehicle was at the spot of a crime, or to discover travel patterns that may reveal additional criminal activity. Another application may include use with a “hotlist” of identifications of stolen vehicles. Law enforcement may load the hotlist into an ALPR system to actively look for those stolen vehicles and vehicles relevant to criminals.
  • SUMMARY
  • One or more implementations of the systems and methods relate to vehicle identification using autonomous vehicles. An autonomous vehicle may be equipped with a set of sensors configured to generate output signals conveying information about the surroundings of the autonomous vehicle. For example, a sensor may include an image sensor configured to generate output signals conveying image information defining images of a surrounding environment. The images may be used to identify vehicles present in the environment and their locations. The autonomous vehicle may be part of a fleet of autonomous vehicles individually equipped with such sensors. The output of the sensors and/or information derived from the output of the sensors from the fleet of autonomous vehicles may facilitate a crowd-sourced technique for vehicle identification.
  • For example, information derived from different autonomous vehicles may be compared to determine one or a combination of vehicle identity profiles, speed of travel, direction of travel, or trajectory.
  • One aspect of the present disclosure is directed to a method for vehicle identification. The method may comprise: obtaining, from a plurality of autonomous vehicles, a set of vehicle identification information, wherein each of the set of vehicle identification information is obtained from a corresponding autonomous vehicle in the plurality of autonomous vehicles, and comprises image or video information of identifications and locations of one or more vehicles that are in an environment surrounding the corresponding autonomous vehicle; identifying a plurality of identifications in the set of vehicle identification information that belong to an individual vehicle, wherein the identifying comprises: identifying two or more identifications in the set of vehicle identification information being within a threshold degree of sameness; and identifying two or more identifications in the set of vehicle identification information comprising one or more overlapping parts of a same vehicle; determining vehicle context information of a plurality of individual vehicles based on the plurality of identifications belonging to each of the plurality of individual vehicles; receiving a request for identifying a target vehicle, wherein the request comprises an identification of the target vehicle; and identifying and returning vehicle context information of the target vehicle.
  • Another aspect of the present disclosure is directed to a system for vehicle identification. The system may comprise one or more processors and a memory storing instructions. The instructions, when executed by the one or more processors, may cause the system to perform: obtaining, from a plurality of autonomous vehicles, a set of vehicle identification information, wherein each of the set of vehicle identification information is obtained from a corresponding autonomous vehicle in the plurality of autonomous vehicles, and comprises image or video information of identifications and locations of one or more vehicles that are in an environment surrounding the corresponding autonomous vehicle; identifying a plurality of identifications in the set of vehicle identification information that belong to an individual vehicle, wherein the identifying comprises: identifying two or more identifications in the set of vehicle identification information being within a threshold degree of sameness; and identifying two or more identifications in the set of vehicle identification information comprising one or more overlapping parts of a same vehicle; determining vehicle context information of a plurality of individual vehicles based on the plurality of identifications belonging to each of the plurality of individual vehicles; receiving a request for identifying a target vehicle, wherein the request comprises an identification of the target vehicle; and identifying and returning vehicle context information of the target vehicle.
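  • By way of non-limiting illustration only, the sketch below outlines the claimed flow in Python. The names (IdentificationRecord, group_identifications, find_target), the greedy grouping strategy, and the 0.9 sameness threshold are illustrative assumptions, not the claimed implementation.

```python
from dataclasses import dataclass
from difflib import SequenceMatcher

@dataclass
class IdentificationRecord:
    """One observation reported by one autonomous vehicle."""
    plate: str        # possibly partial license plate text
    location: tuple   # (latitude, longitude)
    timestamp: float  # seconds since epoch

def same_vehicle(a, b, threshold=0.9):
    # "Threshold degree of sameness": here, at least 90% character agreement.
    return SequenceMatcher(None, a.plate, b.plate).ratio() >= threshold

def group_identifications(records):
    # Greedily cluster observations that match an existing group.
    groups = []
    for rec in records:
        for group in groups:
            if same_vehicle(group[0], rec):
                group.append(rec)
                break
        else:
            groups.append([rec])
    return groups

def find_target(groups, target_plate):
    # Answer a request for a target vehicle: return its observations in
    # time order, from which context (speed, direction, trajectory) follows.
    for group in groups:
        if any(SequenceMatcher(None, r.plate, target_plate).ratio() >= 0.9
               for r in group):
            return sorted(group, key=lambda r: r.timestamp)
    return None
```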
  • In some embodiments, the identifications may include one or a combination of a license plate number, a color, a make, a model, or a unique marking.
  • In some embodiments, individual vehicle identification information may include the identifications of the one or more vehicles.
  • In some embodiments, identifications of the one or more vehicles may be derived from the vehicle identification information. By way of non-limiting illustration, individual vehicle identification information may include one or a combination of image information or video information. The identifications of the one or more vehicles may be derived from the image information and/or video information through one or more image and/or video processing techniques.
  • In some embodiments, the identifying the plurality of identifications in the set of vehicle identification information that belong to the individual vehicle comprises: stitching the image or video information comprising the two or more identifications that are within the threshold degree of sameness and comprise one or more overlapping parts of a same vehicle. The stitching may be implemented with one or a combination of feature point detection, image registration, alignment, or composing.
  • In some embodiments, the determining vehicle context information of the plurality of individual vehicles comprises: determining one or a combination of a speed of travel, a direction of travel, or a trajectory of a vehicle based on comparing locations of the vehicle in the stitched image or video information comprising the vehicle.
  • In some embodiments, the identity profile of an individual vehicle may represent an identity of the individual vehicle as a whole. The identity profile may be determined by combining multiple ones of the identifications determined to be for the same vehicle.
  • In some embodiments, the trajectory may include a path followed by a vehicle.
  • In some embodiments, the system may further perform tracking of a vehicle based on the set of vehicle identification information and/or vehicle context information.
  • These and other features of the systems, methods, and non-transitory computer readable media disclosed herein, as well as the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture, will become more apparent upon consideration of the following description and the appended claims with reference to the accompanying drawings, all of which form a part of this specification, wherein like reference numerals designate corresponding parts in the various figures. It is to be expressly understood, however, that the drawings are for purposes of illustration and description only and are not intended as a definition of the limits of the invention. It is to be understood that the foregoing general description and the following detailed description are exemplary and explanatory only, and are not restrictive of the invention, as claimed.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Preferred and non-limiting embodiments of the invention may be more readily understood by referring to the accompanying drawings in which:
  • FIG. 1 illustrates an example environment for vehicle identification, in accordance with various embodiments of the disclosure.
  • FIG. 2 illustrates an example flow chart of vehicle identification, in accordance with various embodiments of the disclosure.
  • FIG. 3 illustrates a block diagram of an example computer system in which any of the embodiments described herein may be implemented.
  • DETAILED DESCRIPTION
  • Specific, non-limiting embodiments of the present invention will now be described with reference to the drawings. It should be understood that particular features and aspects of any embodiment disclosed herein may be used and/or combined with particular features and aspects of any other embodiment disclosed herein. It should also be understood that such embodiments are by way of example and are merely illustrative of a small number of embodiments within the scope of the present invention. Various changes and modifications obvious to one skilled in the art to which the present invention pertains are deemed to be within the spirit, scope and contemplation of the present invention as further defined in the appended claims.
  • The approaches disclosed herein improve the functioning of computing systems that identify vehicles. One or more techniques presented herein may perform vehicle identification using a fleet of autonomous vehicles. The information collected from the autonomous vehicles may improve vehicle identification due to the distribution of the autonomous vehicles in an environment and due to the amount of information that may be retrieved from them, providing a dense dataset through which vehicles may be identified.
  • FIG. 1 illustrates an example system 100 for vehicle identification, in accordance with various embodiments. The example system 100 may include one or a combination of a computing system 102, an autonomous vehicle 116, or one or more other autonomous vehicles 122.
  • It is noted that while some features and functions of the systems and methods presented herein may be directed to the autonomous vehicle 116, this is for illustrative purposes only and not to be considered limiting. For example, it is to be understood that other autonomous vehicle(s) included in the one or more other autonomous vehicles 122 may be configured the same as or similar to autonomous vehicle 116 and may include the same or similar components, described herein. The autonomous vehicle 116 and the one or more other autonomous vehicles 122 may represent a set of autonomous vehicles which may be part of a fleet of autonomous vehicles.
  • The autonomous vehicle 116 may include one or more processors and memory (e.g., permanent memory, temporary memory). The processor(s) may be configured to perform various operations by interpreting machine-readable instructions stored in the memory. The autonomous vehicle 116 may include other computing resources. The autonomous vehicle 116 may have access (e.g., via one or more connections, via one or more networks 110) to other computing resources or other entities participating in the system 100.
  • The autonomous vehicle 116 may include one or a combination of an identification component 118 or a set of sensors 120. The autonomous vehicle 116 may include other components.
  • The set of sensors 120 may include one or more sensors configured to generate output signals conveying vehicle identification information or other information. The vehicle identification information may convey identifications, locations, or a combination of identifications and locations of one or more vehicles present in an environment surrounding autonomous vehicle 116. The identifications of the one or more vehicles may include one or a combination of a license plate number, one or more vehicle colors, a vehicle make (e.g., manufacturer), a vehicle model, or a unique marking. A license plate number may be comprised of one or a combination of alphanumeric characters or symbols. A unique marking may refer to one or a combination of decals, writing, damage, or other marking upon a vehicle. In some embodiments, the identifications may be partial identifications. By way of non-limiting illustration, a partial identification may include one or a combination of a part of a license plate number (less than all alphanumeric characters or symbols making up the license plate number), some of the colors of the vehicle (if the vehicle is multi-colored), or a make identification without a model.
  • The set of sensors 120 may include an image sensor, a set of image sensors, a location sensor, a set of location sensors, or a combination of image sensors, location sensors, and other sensors. A set of sensors (e.g., set of image sensors) may include one or more sensors (e.g., one or more image sensors).
  • An image sensor may be configured to generate output signals conveying image information and/or video information. The image information may define visual content in the form of one or more images. The video information may define visual content in the form of a sequence of images. Individual images may be defined by pixels and/or other information. Pixels may be characterized by one or a combination of pixel location, pixel color, or pixel transparency. An image sensor may include one or more of a charge-coupled device sensor, an active pixel sensor, a complementary metal-oxide-semiconductor sensor, an N-type metal-oxide-semiconductor sensor, and/or other image sensors. In some embodiments, the identifications of the one or more vehicles may be derived from the image information or the video information through one or more image and/or video processing techniques. Such techniques may include one or a combination of computer vision, Speeded Up Robust Features (SURF), Scale-invariant Feature Transform (SIFT), Oriented FAST and rotated BRIEF (ORB), deep learning (of neural networks), or Optical Character Recognition (OCR).
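  • By way of non-limiting illustration, one plausible way to derive a license plate identification from image information is OCR over a pre-cropped plate region, sketched below with OpenCV and the Tesseract bindings; the function name and preprocessing choices are assumptions, and a production pipeline would first localize the plate (e.g., with a feature-based or neural detector).

```python
import cv2          # OpenCV for image handling
import pytesseract  # Tesseract OCR bindings

def read_plate_text(image_path):
    """Derive a license-plate string from an image cropped to the plate."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Otsu binarization makes the plate characters stand out for OCR.
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # --psm 7 tells Tesseract to treat the crop as a single line of text.
    return pytesseract.image_to_string(binary, config="--psm 7").strip()
```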
  • In some implementations, a location sensor may be configured to generate output signals conveying location information. Location information derived from output signals of a location sensor may define one or a combination of a location of autonomous vehicle 116, an elevation of autonomous vehicle 116, a timestamp when a location was obtained, or other measurements. A location sensor may include one or a combination of a GPS, an altimeter, or a pressure sensor.
  • In some embodiments, individual vehicle identification information may include one or a combination of image information, video information, or identifications derived from the image information or video information. The identification component 118 may determine the identifications of the one or more vehicles from one or a combination of the image information or the video information through one or more of the image or video processing techniques described herein. In some implementations, the identifications may be included in the vehicle identification information, or the vehicle identification information may include one or a combination of the image information or the video information from which the identifications may be derived.
  • The identification component 118 may communicate the vehicle identification information to the computing system 102 via one or more networks 110. The one or more networks 110 may include the Internet or other networks. The computing system 102 may obtain other vehicle identification information from other autonomous vehicle(s) 122. Accordingly, the computing system 102 may obtain a set of vehicle identification information.
  • The computing system 102 may include one or more processors and memory (e.g., permanent memory, temporary memory). The processor(s) may be configured to perform various operations by interpreting machine-readable instructions stored in the memory. The computing system 102 may include other computing resources. The computing system 102 may have access (e.g., via one or more connections, via one or more networks 110) to other computing resources or other entities participating in the system 100.
  • The computing system 102 may include one or a combination of an identification component 104, a context component 106, or a tracking component 108. While the computing system 102 is shown in FIG. 1 as a single entity, this is merely for ease of reference and is not meant to be limiting. One or more components or one or more functionalities of the computing system 102 described herein may be implemented in a single computing device or multiple computing devices. In some embodiments, one or more components or one or more functionalities of the computing system 102 described herein may be implemented in one or more networks 110, one or more endpoints, one or more servers, or one or more clouds.
  • The identification component 104 may obtain, from a set of autonomous vehicles, a set of vehicle identification information. By way of non-limiting illustration, the identification component 104 may obtain vehicle identification information from autonomous vehicle 116 and individual ones of one or more other autonomous vehicles 122. The individual vehicle identification information obtained from an individual autonomous vehicle may include identifications, locations, or combinations of the identifications and locations of one or more vehicles.
  • In some embodiments, the vehicle identification information obtained by identification component 104 may include the identifications of vehicles. For example, autonomous vehicle 116 may determine the identifications of vehicles via identification component 118 and communicate the identifications to computing system 102.
  • In some embodiments, the identification component 104 may determine the identifications of vehicles from the vehicle identification information. By way of non-limiting illustration, the identification component 104 may obtain vehicle identification information including one or a combination of image information or video information. The identification component 104 may determine the identifications of vehicles using one or more image or video-based techniques described herein.
  • The context component 106 may determine, from the set of vehicle identification information, vehicle context information for individual vehicles of the one or more vehicles. By way of non-limiting illustration, the context component 106 may determine vehicle context information for autonomous vehicle 116 based on the vehicle identification information obtained from autonomous vehicle 116 and/or other vehicle identification information from other autonomous vehicles.
  • In some embodiments, the vehicle context information for an individual vehicle may describe a context of the individual vehicle. The context of the individual vehicle may describe circumstances specific to that vehicle. By way of non-limiting illustration, the context may include one or a combination of a speed of travel, a direction of travel, a trajectory, or an identity profile.
  • In some embodiments, determining the context may be based on comparing individual ones of the identifications and the locations of individual vehicles to other ones of the identifications and the locations of the individual vehicles.
  • The comparisons of individual ones of the identifications of the individual vehicles to the other ones of the identifications of the individual vehicles may facilitate determining that multiple ones of the identifications are for the same vehicle. For example, based on the comparisons, it may be determined that multiple ones of the identifications (obtained from the same or different autonomous vehicles) match. A match may convey a logical inference that the multiple identifications are for the same vehicle. In some implementations, matching may mean that identifications are the same or complementary.
  • In some implementations, identifications being the same may mean the identifications are within a threshold degree of sameness. By way of non-limiting illustration, for license plate identifications, two (or more) identifications may be determined to match if they depict the same series of alphanumerical characters or symbols, or at least 90% of the same series of alphanumerical characters or symbols. By way of non-limiting illustration, for color identifications, two (or more) identifications may be determined to match if they depict colors within a threshold range as observed on a color scale. By way of non-limiting illustration, for vehicle make identifications, two (or more) identifications may be determined to match if they depict the same make of vehicle. By way of non-limiting illustration, for vehicle model identifications, two (or more) identifications may be determined to match if they depict the same model of vehicle. By way of non-limiting illustration, for unique marking identifications, two (or more) identifications may be determined to match if they depict the same visually distinct unique marking in the same location on the vehicle.
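  • By way of non-limiting illustration only, thresholded matching of the kind described above might be sketched as follows. This is a minimal Python sketch, not part of the original disclosure; the 90% character threshold and the Euclidean RGB distance cutoff are assumptions drawn from the examples in the preceding paragraph:

```python
def plates_match(plate_a: str, plate_b: str, threshold: float = 0.9) -> bool:
    """Two license plate readings match if they agree on at least
    `threshold` of their character positions (same-length readings)."""
    if not plate_a or len(plate_a) != len(plate_b):
        return False
    same = sum(1 for a, b in zip(plate_a, plate_b) if a == b)
    return same / len(plate_a) >= threshold


def colors_match(rgb_a, rgb_b, max_distance: float = 40.0) -> bool:
    """Two color observations match if they fall within a threshold
    range on a simple Euclidean RGB color scale."""
    distance = sum((a - b) ** 2 for a, b in zip(rgb_a, rgb_b)) ** 0.5
    return distance <= max_distance


# Two readings agreeing on 9 of 10 characters (90%) are treated as a match.
print(plates_match("8ABC123456", "8ABC123450"))    # True
# Two slightly different observations of the same red paint also match.
print(colors_match((200, 30, 30), (210, 40, 25)))  # True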
  • In some implementations, being a complementary match may mean that multiple identifications may be partial identifications which, when combined, form a complete identification. The partial identifications may include depictions of different parts of the vehicle. The partial identifications may include depictions of one or more overlapping parts of the vehicle. By way of non-limiting illustration, for license plate identifications, one identification may depict part of a series of alphanumerical characters or symbols and another identification may depict another part of the series of alphanumerical characters or symbols. The identifications may be combined to depict the series of alphanumerical characters or symbols defining the license plate number as a whole. By way of non-limiting illustration, for color identifications, one identification may depict a part of a vehicle having a color and another identification may depict another part of the vehicle having the same color. The identifications may be combined to depict the vehicle as a whole having the color uniform throughout. By way of non-limiting illustration, for unique marking identifications, one identification may depict part of a unique marking and another identification may depict another part of the unique marking. The identifications may be combined to depict the unique marking as a whole. In some implementations, combining identifications may be accomplished through stitching images and/or video together.
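  • By way of non-limiting illustration only, combining partial license plate identifications that share an overlapping part might be sketched as follows (a minimal sketch assuming the partial readings have already been transcribed to character strings; the helper name merge_partial_plates is hypothetical):

```python
from typing import Optional


def merge_partial_plates(left: str, right: str, min_overlap: int = 1) -> Optional[str]:
    """Combine two partial plate readings into one complete reading when
    the end of `left` overlaps the start of `right`; prefer the longest
    overlap, and return None if the readings do not overlap at all."""
    for k in range(min(len(left), len(right)), min_overlap - 1, -1):
        if left[-k:] == right[:k]:
            return left + right[k:]
    return None


# One vehicle saw "8ABC1", another saw "C123"; together they depict "8ABC123".
print(merge_partial_plates("8ABC1", "C123"))
```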
  • In some implementations, stitching may include operations such as one or a combination of feature point detection, image registration, alignment, or composing. Feature point detection may be accomplished through techniques such as SIFT and SURF. Image registration may involve matching features in a set of images. A method for image registration may include Random Sample Consensus (RANSAC) or other techniques. Alignment may include transforming an image to match the view point of another image. Composing may comprise the process whereby the images are aligned in such a way that they appear as a single shot of the vehicle. In some embodiments, deep learning-based approaches (e.g., neural networks) may also be used.
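  • By way of non-limiting illustration only, the four stitching operations named above might be sketched with OpenCV as follows (a minimal sketch, not part of the original disclosure; the ratio-test constant and RANSAC reprojection threshold are conventional values, not values mandated by this paragraph):

```python
import cv2
import numpy as np


def stitch_pair(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Stitch two images of the same vehicle into a single composite."""
    # 1. Feature point detection (SIFT; SURF could be used similarly).
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    # 2. Image registration: match features across the images, keeping
    #    only matches that pass Lowe's ratio test.
    matcher = cv2.BFMatcher()
    good = [m for m, n in matcher.knnMatch(des_b, des_a, k=2)
            if m.distance < 0.75 * n.distance]

    # 3. Alignment: estimate a homography with RANSAC so that img_b is
    #    transformed to match the view point of img_a.
    src = np.float32([kp_b[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    homography, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # 4. Composing: warp img_b into img_a's frame and overlay img_a so the
    #    pair appears as a single shot of the vehicle.
    height, width = img_a.shape[:2]
    canvas = cv2.warpPerspective(img_b, homography, (width * 2, height))
    canvas[0:height, 0:width] = img_a
    return canvas
```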
  • In some implementations, the identity profile of an individual vehicle may represent an identity of the individual vehicle as a whole. The identity of the vehicle as a whole may comprise a representation of more than one identification which may have been obtained for the given vehicle. That is, the identity profile may be determined by combining multiple ones of the identifications (obtained from one or more autonomous vehicles) determined to be for the same vehicle. The identity profile may be constructed through the stitching techniques described herein, which may result in one or more images depicting more than one identification. By way of non-limiting illustration, an identity profile of a vehicle may include an image or series of images from which two or more of a license plate number, a color, a make, a model, or a unique marking of the vehicle may be identifiable.
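  • By way of non-limiting illustration only, folding multiple identifications determined to be for the same vehicle into an identity profile might be sketched as follows (a minimal sketch; the field names are hypothetical, and keeping the first observation of each field is a simplifying assumption):

```python
def build_identity_profile(identifications):
    """Fold multiple identifications determined to be for the same
    vehicle into a single whole-vehicle identity profile."""
    profile = {}
    for identification in identifications:
        for field, value in identification.items():
            profile.setdefault(field, value)  # keep the first observation
    return profile


observations = [
    {"license_plate": "8ABC123", "color": "red"},
    {"color": "red", "make": "Toyota", "model": "Corolla"},
    {"unique_marking": "dent on the rear-left door"},
]
print(build_identity_profile(observations))
```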
  • In some embodiments, determining one or a combination of the speed of travel, the direction of travel, or the trajectory of a vehicle may be based on comparing the individual locations associated with the multiple ones of the identifications determined to be for the same vehicle, which may be obtained from the stitched images and/or videos capturing the vehicle.
  • In some embodiments, a speed of travel may be represented by a distance traveled per unit time. Determining a speed of travel of a vehicle may be accomplished by one or a combination of: comparing locations of identifications of the vehicle, determining the distance between locations, determining the time span(s) between the locations, or dividing the distance by the time span to obtain a speed of travel (e.g., distance per unit time). By way of non-limiting illustration, a speed of travel may specify that a vehicle was traveling at 110 kilometers per hour.
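  • By way of non-limiting illustration only, the distance-over-time computation above might be sketched as follows (a minimal sketch assuming GPS latitude/longitude fixes and timestamps in seconds; the great-circle distance model is an assumption, as this paragraph does not specify one):

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude fixes."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))


def speed_kmh(loc1, t1_s, loc2, t2_s):
    """Divide the distance between two timestamped locations of the same
    vehicle by the elapsed time to obtain a speed of travel in km/h."""
    meters = haversine_m(*loc1, *loc2)
    return meters / (t2_s - t1_s) * 3.6


# Two identifications of the same vehicle ten seconds apart,
# about 300 m of northward travel: roughly 108 km/h.
print(speed_kmh((37.7749, -122.4194), 0.0, (37.7776, -122.4194), 10.0))
```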
  • In some embodiments, a direction of travel may be represented by cardinal directions. The cardinal directions may include north, south, east, and west. Determining a direction of travel may be accomplished by one or a combination of: comparing locations of identifications of the vehicle, determining which locations occurred before other ones of the locations, determining that the vehicle is traveling from a first location to a second location, determining a pointing direction from the first location to the second location, or associating the pointing direction with a cardinal direction. By way of non-limiting illustration, a direction of travel may specify that a vehicle was traveling north.
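  • By way of non-limiting illustration only, associating a pointing direction with a cardinal direction might be sketched as follows (a minimal sketch using the standard initial-bearing formula; this paragraph does not prescribe a particular bearing computation):

```python
import math


def cardinal_direction(lat1, lon1, lat2, lon2):
    """Return the cardinal direction (north, east, south, or west) closest
    to the pointing direction from the first location to the second."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlam = math.radians(lon2 - lon1)
    # Initial great-circle bearing: 0 degrees = north, increasing clockwise.
    y = math.sin(dlam) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlam)
    bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
    return ["north", "east", "south", "west"][round(bearing / 90.0) % 4]


# The vehicle moved to a higher latitude at the same longitude: north.
print(cardinal_direction(37.7749, -122.4194, 37.7776, -122.4194))
```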
  • In some embodiments, a trajectory of a vehicle may include a path followed by the vehicle. The trajectory may be specified with respect to one or a combination of named roads, highways, freeways, intersections, neighborhoods, or cities. In some implementations, locations may be referenced within a map of an environment including information about one or a combination of named roads, highways, freeways, intersections, neighborhoods, or cities. By way of non-limiting illustration, a mapping service may be accessed and used to cross-reference the determined locations with the information conveyed in the map. A mapping service may include Google® Maps. By way of non-limiting illustration, a trajectory may specify that a vehicle traveled for two miles on Main St., turned left, and proceeded for six blocks on 1st St., etc. In some implementations, how a vehicle's trajectory changes (or does not change) over time may reflect a travel pattern of the vehicle. A travel pattern may include a common trajectory which appears more than once.
  • The tracking component 108 may be configured to actively look for one or more vehicles using the set of vehicle identification information, the vehicle context information, or a combination of the two. By way of non-limiting illustration, tracking component 108 may obtain identification(s) of a vehicle, for example through user input by a user desiring to locate the vehicle. The tracking component 108 may monitor the set of vehicle identification information obtained from a fleet of autonomous vehicles and the vehicle context information determined therefrom. The tracking component 108 may perform such monitoring while looking for a match between the user-provided identification(s) and the identifications conveyed by the set of vehicle identification information. The tracking component 108 may, in response to finding a match, provide the vehicle context information for the matched vehicle to the user through one or more user interfaces. By way of non-limiting illustration, the provided vehicle context information may include one or a combination of a speed of travel, a direction of travel, or a trajectory of the vehicle to allow the user to track the vehicle.
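  • By way of non-limiting illustration only, the monitoring loop performed by a tracking component might be sketched as follows (a minimal sketch; the report layout and field names are hypothetical, and the exact-equality comparison could be replaced by a thresholded comparator such as the plates_match sketch above):

```python
from typing import Iterable, Optional


def track_target(target_plate: str, reports: Iterable[dict]) -> Optional[dict]:
    """Scan a stream of fleet reports for the first identification that
    matches the user-provided plate and return its context information."""
    for report in reports:
        if report.get("license_plate") == target_plate:
            return report.get("context")
    return None  # no match yet; keep monitoring as new reports arrive


fleet_reports = [
    {"license_plate": "7XYZ999",
     "context": {"speed_kmh": 65, "direction_of_travel": "east"}},
    {"license_plate": "8ABC123",
     "context": {"speed_kmh": 108, "direction_of_travel": "north"}},
]
print(track_target("8ABC123", fleet_reports))
```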
  • In some embodiments, autonomous vehicles may exchange notifications of vehicle identification information, requests for tracking, or a combination of requests and notifications. For example, autonomous vehicle 116 may send requests via identification component 118 to other autonomous vehicles 122 to identify a particular vehicle. The autonomous vehicle 116 may obtain requests via identification component 118 from other autonomous vehicles 122 to identify a particular vehicle. The autonomous vehicle 116 may notify, via identification component 118, other vehicles about an identified vehicle (e.g., by sending vehicle identification information). The autonomous vehicle 116 may obtain notifications, via identification component 118, from other vehicles about an identified vehicle (e.g., by receiving vehicle identification information). The requests and notifications may be sent to computing system 102, which in turn may forward them to autonomous vehicles near a requesting or notifying vehicle's GPS location, or may be sent directly from autonomous vehicle 116 to one or more nearby autonomous vehicles.
  • FIG. 2 illustrates an example flow chart 200 for vehicle identification, in accordance with various embodiments of the disclosure. At block 202, a set of vehicle identification information may be obtained from a set of autonomous vehicles. The individual vehicle identification information may be obtained from an individual autonomous vehicle. The individual vehicle identification information may convey identifications of one or more vehicles and locations of the one or more vehicles. At block 204, vehicle context information for individual vehicles may be determined from the set of vehicle identification information. The vehicle context information for the individual vehicles may describe a context of the individual vehicles.
  • FIG. 3 is a block diagram that illustrates a computer system 300 upon which any of the embodiments described herein may be implemented. The computer system 300 includes a bus 302 or other communication mechanism for communicating information, and one or more hardware processors 304 coupled with bus 302 for processing information. Hardware processor(s) 304 may be, for example, one or more general purpose microprocessors.
  • The computer system 300 also includes a main memory 306, such as a random access memory (RAM), cache, and/or other dynamic storage devices, coupled to bus 302 for storing information and instructions to be executed by processor(s) 304. Main memory 306 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor(s) 304. Such instructions, when stored in storage media accessible to processor(s) 304, render computer system 300 into a special-purpose machine that is customized to perform the operations specified in the instructions. Main memory 306 may include non-volatile media and/or volatile media. Non-volatile media may include, for example, optical or magnetic disks. Volatile media may include dynamic memory. Common forms of media may include, for example, a floppy disk, a flexible disk, a hard disk, a solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a DRAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge, and networked versions of the same.
  • The computer system 300 may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs computer system 300 to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system 300 in response to processor(s) 304 executing one or more sequences of one or more instructions contained in main memory 306. Such instructions may be read into main memory 306 from another storage medium, such as storage device 308. Execution of the sequences of instructions contained in main memory 306 causes processor(s) 304 to perform the process steps described herein. For example, the process/method shown in FIG. 2 and described in connection with this figure can be implemented by computer program instructions stored in main memory 306. When these instructions are executed by processor(s) 304, they may perform the steps as shown in FIG. 2 and described above. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions.
  • The computer system 300 also includes a communication interface 310 coupled to bus 302. Communication interface 310 provides a two-way data communication coupling to one or more network links that are connected to one or more networks. For example, communication interface 310 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN (or a WAN component to communicate with a WAN). Wireless links may also be implemented.
  • The performance of certain of the operations may be distributed among the processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the processors or processor-implemented engines may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other example embodiments, the processors or processor-implemented engines may be distributed across a number of geographic locations.
  • Certain embodiments are described herein as including logic or a number of components. Components may constitute either software components (e.g., code embodied on a machine-readable medium) or hardware components (e.g., a tangible unit capable of performing certain operations which may be configured or arranged in a certain physical manner). As used herein, for convenience, components of the computing system 102 and autonomous vehicle 116 may be described as performing or configured for performing an operation, when the components may comprise instructions which may program or configure the computing system 102 and autonomous vehicle 116 to perform the operation.
  • While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the spirit and scope of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
  • The embodiments illustrated herein are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed. Other embodiments may be used and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. The Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled.

Claims (20)

What is claimed is:
1. A system for vehicle identification, the system comprising:
one or more processors; and
a memory storing instructions that, when executed by the one or more processors, cause the system to perform:
obtaining, from a plurality of autonomous vehicles, a set of vehicle identification information, wherein each of the set of vehicle identification information is obtained from a corresponding autonomous vehicle in the plurality of autonomous vehicles, and comprises image or video information of identifications and locations of one or more vehicles that are in an environment surrounding the corresponding autonomous vehicle;
identifying a plurality of identifications in the set of vehicle identification information that belong to an individual vehicle, wherein the identifying comprises:
identifying two or more identifications in the set of vehicle identification information being within a threshold degree of sameness; and
identifying two or more identifications in the set of vehicle identification information comprising one or more overlapping parts of a same vehicle;
determining vehicle context information of a plurality of individual vehicles based on the plurality of identifications belonging to each of the plurality of individual vehicles;
receiving a request for identifying a target vehicle, wherein the request comprises an identification of the target vehicle; and
identifying and returning vehicle context information of the target vehicle.
2. The system of claim 1, wherein the identifications comprise one or a combination of a license plate number, a color, a make, a model, or a marking.
3. The system of claim 1, wherein the identifications of the one or more vehicles are derived from the image information or the video information through one or more image and/or video processing techniques.
4. The system of claim 1, wherein determining the vehicle context information is further based on comparing individual ones of the identifications and the locations of individual vehicles to other ones of the identifications and the locations of the individual vehicles.
5. The system of claim 1, wherein the obtaining the set of vehicle identification information from the plurality of autonomous vehicles comprises:
from one of the plurality of autonomous vehicles, obtaining location information of vehicles surrounding the one autonomous vehicle from location sensors of the vehicles, wherein the location sensors include at least one of a GPS, an altimeter, or a pressure sensor.
6. The system of claim 1, wherein the vehicle context information for the target vehicle describes a context of the target vehicle, the context including at least one of: a speed of travel, a direction of travel, or a trajectory.
7. The system of claim 1, wherein the vehicle context information for the target vehicle further comprises an identity profile of the target vehicle, the identity profile representing an identity of the target vehicle as a whole, and wherein the identity profile is determined by combining the two or more identifications determined to be for the same vehicle.
8. The system of claim 1, wherein the system further performs tracking the target vehicle based on the set of vehicle identification information or the vehicle context information.
9. The system of claim 1, wherein the identifying the plurality of identifications in the set of vehicle identification information that belong to the individual vehicle comprises:
stitching the image or video information comprising the two or more identifications that are within the threshold degree of sameness and comprise the one or more overlapping parts of a same vehicle.
10. The system of claim 9, wherein the stitching comprises one or a combination of feature point detection, image registration, alignment, or composing.
11. The system of claim 9, wherein the determining vehicle context information of the plurality of individual vehicles comprises:
determining one or a combination of a speed of travel, a direction of travel, or a trajectory of a vehicle based on comparing the locations of the vehicle in the stitched image or video information comprising the vehicle.
12. A method for vehicle identification, the method comprising:
obtaining, from a plurality of autonomous vehicles, a set of vehicle identification information, wherein each of the set of vehicle identification information is obtained from a corresponding autonomous vehicle in the plurality of autonomous vehicles, and comprises image or video information of identifications and locations of one or more vehicles that are in an environment surrounding the corresponding autonomous vehicle;
identifying a plurality of identifications in the set of vehicle identification information that belong to an individual vehicle, wherein the identifying comprises:
identifying two or more identifications in the set of vehicle identification information being within a threshold degree of sameness; and
identifying two or more identifications in the set of vehicle identification information comprising one or more overlapping parts of a same vehicle;
determining vehicle context information of a plurality of individual vehicles based on the plurality of identifications belonging to each of the plurality of individual vehicles;
receiving a request for identifying a target vehicle, wherein the request comprises an identification of the target vehicle; and
identifying and returning vehicle context information of the target vehicle.
13. The method of claim 12, wherein the identifications comprise one or a combination of a license plate number, a color, a make, a model, or a marking.
14. The method of claim 12, wherein the identifications of the one or more vehicles are derived from the image information and the video information through one or more image and/or video processing techniques.
15. The method of claim 12, wherein the obtaining the set of vehicle identification information from the plurality of autonomous vehicles comprises:
from one of the plurality of autonomous vehicles, obtaining location information of vehicles surrounding the one autonomous vehicle from location sensors of the vehicles, wherein the location sensors include at least one of a GPS, an altimeter, or a pressure sensor.
16. The method of claim 12, wherein the vehicle context information for the target vehicle describes a context of the target vehicle, the context including at least one of: a speed of travel, a direction of travel, or a trajectory.
17. The method of claim 12, wherein the identifying the plurality of identifications in the set of vehicle identification information that belong to the individual vehicle comprises:
stitching the image or video information comprising the two or more identifications that are within the threshold degree of sameness or comprise the one or more overlapping parts of a same vehicle.
18. The method of claim 17, wherein the determining vehicle context information of the plurality of individual vehicles comprises:
determining one or a combination of a speed of travel, a direction of travel, or a trajectory of a vehicle based on comparing the locations of the vehicle in the stitched image or video information comprising the vehicle.
19. A non-transitory computer-readable storage medium configured with instructions executable by one or more processors to cause the one or more processors to perform operations comprising:
obtaining, from a plurality of autonomous vehicles, a set of vehicle identification information, wherein each of the set of vehicle identification information is obtained from a corresponding autonomous vehicle in the plurality of autonomous vehicles, and comprises image or video information of identifications and locations of one or more vehicles that are in an environment surrounding the corresponding autonomous vehicle;
identifying a plurality of identifications in the set of vehicle identification information that belong to an individual vehicle, wherein the identifying comprises:
identifying two or more identifications in the set of vehicle identification information being within a threshold degree of sameness; and
identifying two or more identifications in the set of vehicle identification information comprising one or more overlapping parts of a same vehicle;
determining vehicle context information of a plurality of individual vehicles based on the plurality of identifications belonging to each of the plurality of individual vehicles;
receiving a request for identifying a target vehicle, wherein the request comprises an identification of the target vehicle; and
identifying and returning vehicle context information of the target vehicle.
20. The non-transitory computer-readable storage medium of claim 19, wherein the identifying the plurality of identifications in the set of vehicle identification information that belong to the individual vehicle comprises:
stitching the image or video information comprising the two or more identifications that are within the threshold degree of sameness or comprise the one or more overlapping parts of a same vehicle.
US17/689,127 2018-12-28 2022-03-08 Systems and methods for vehicle identification Abandoned US20220189292A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/689,127 US20220189292A1 (en) 2018-12-28 2022-03-08 Systems and methods for vehicle identification

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US16/235,931 US11302182B2 (en) 2018-12-28 2018-12-28 Systems and methods for vehicle identification
US17/689,127 US20220189292A1 (en) 2018-12-28 2022-03-08 Systems and methods for vehicle identification

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US16/235,931 Continuation US11302182B2 (en) 2018-12-28 2018-12-28 Systems and methods for vehicle identification

Publications (1)

Publication Number Publication Date
US20220189292A1 (en) 2022-06-16

Family

ID=71124427

Family Applications (2)

Application Number Title Priority Date Filing Date
US16/235,931 Active 2039-07-22 US11302182B2 (en) 2018-12-28 2018-12-28 Systems and methods for vehicle identification
US17/689,127 Abandoned US20220189292A1 (en) 2018-12-28 2022-03-08 Systems and methods for vehicle identification

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US16/235,931 Active 2039-07-22 US11302182B2 (en) 2018-12-28 2018-12-28 Systems and methods for vehicle identification

Country Status (1)

Country Link
US (2) US11302182B2 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160068156A1 (en) * 2014-09-10 2016-03-10 Volkswagen Ag Modifying autonomous vehicle driving by recognizing vehicle characteristics
US20160357188A1 (en) * 2015-06-05 2016-12-08 Arafat M.A. ANSARI Smart vehicle
US20180272944A1 (en) * 2017-03-22 2018-09-27 GM Global Technology Operations LLC System for and method of dynamically displaying images on a vehicle electronic display
US20180362031A1 (en) * 2017-06-20 2018-12-20 nuTonomy Inc. Risk processing for vehicles having autonomous driving capabilities
US20190347821A1 (en) * 2018-04-03 2019-11-14 Mobileye Vision Technologies Ltd. Determining lane position of a partially obscured target vehicle

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2395595B (en) * 2002-11-14 2005-01-05 Nathan Mendel Rau Automated license plate recognition system for use in law enforcement vehicles
US8191915B2 (en) * 2008-10-17 2012-06-05 GM Global Technology Operations LLC Vehicle docking assistance system
US8576069B2 (en) * 2009-10-22 2013-11-05 Siemens Corporation Mobile sensing for road safety, traffic management, and road maintenance
US20150338226A1 (en) * 2014-05-22 2015-11-26 Telogis, Inc. Context-based routing and access path selection
US9701239B2 (en) * 2015-11-04 2017-07-11 Zoox, Inc. System of configuring active lighting to indicate directionality of an autonomous vehicle
US10034066B2 (en) * 2016-05-02 2018-07-24 Bao Tran Smart device
US9952594B1 (en) 2017-04-07 2018-04-24 TuSimple System and method for traffic data collection using unmanned aerial vehicles (UAVs)
CN107481526A (en) 2017-09-07 2017-12-15 公安部第三研究所 System and method for drive a vehicle lane change detection record and lane change violating the regulations report control
US11900672B2 (en) * 2018-04-23 2024-02-13 Alpine Electronics of Silicon Valley, Inc. Integrated internal and external camera system in vehicles
JP7156011B2 (en) * 2018-12-26 2022-10-19 トヨタ自動車株式会社 Information presentation device


Also Published As

Publication number Publication date
US11302182B2 (en) 2022-04-12
US20200211369A1 (en) 2020-07-02

Legal Events

Date Code Title Description
AS Assignment

Owner name: VOYAGER (HK) CO., LTD., HONG KONG

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:DIDI RESEARCH AMERICA, LLC;REEL/FRAME:059347/0643

Effective date: 20200318

Owner name: BEIJING VOYAGER TECHNOLOGY CO., LTD., CHINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:VOYAGER (HK) CO., LTD.;REEL/FRAME:059194/0839

Effective date: 20200318

Owner name: DIDI RESEARCH AMERICA, LLC, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:YI, XIAOYONG;ZHANG, JIANG;REN, LIWEI;REEL/FRAME:059194/0679

Effective date: 20181227

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION