CN111837163A - System and method for vehicle wheel detection

Info

Publication number: CN111837163A (granted as CN111837163B)
Application number: CN201980017911.6A
Authority: CN (China)
Prior art keywords: image data, vehicle, vehicle wheel, data, training
Legal status: Granted; Active (current)
Inventors: 王泮渠 (Panqu Wang), 陈鹏飞 (Pengfei Chen)
Current assignee: Tusimple Inc
Original assignee: Tusimple Inc
Application filed by: Tusimple Inc
Priority claimed from: US 15/917,331 (granted as US10671873B2)
Priority claimed by: CN202210931196.0A (published as CN115331198A)
Other languages: Chinese (zh)
Other versions: CN111837163B (en)

Classifications

    • G06V 20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V 20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/70: Labelling scene content, e.g. deriving syntactic or semantic representations
    • G06V 10/26: Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/764: Image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V 10/82: Image or video recognition or understanding using pattern recognition or machine learning, using neural networks
    • G06F 18/2413: Classification techniques relating to the classification model, based on distances to training or reference patterns
    • G06N 3/045: Neural network architectures; combinations of networks
    • G06N 3/08: Neural network learning methods
    • G07C 5/0808: Registering or indicating the working of vehicles; diagnosing performance data

Abstract

A system and method for vehicle wheel detection is disclosed. Particular embodiments may be configured to: receive training image data from a training image data collection system; obtain ground truth data corresponding to the training image data; perform a training phase to train one or more classifiers to process images in the training image data to detect vehicle wheel objects in those images; receive operational image data from an image data collection system associated with an autonomous vehicle; and perform an operational phase comprising applying the trained one or more classifiers to extract vehicle wheel objects from the operational image data and generating vehicle wheel object data.

Description

System and method for vehicle wheel detection
Priority patent application
This application claims priority to U.S. non-provisional patent application No. 15/917,331, entitled "SYSTEM AND METHOD FOR VEHICLE WHEEL DETECTION," filed on March 9, 2018, which is incorporated herein by reference in its entirety. This application also claims priority to U.S. non-provisional patent application No. 15/456,219, entitled "SYSTEM AND METHOD FOR SEMANTIC SEGMENTATION USING DENSE UPSAMPLING CONVOLUTION (DUC)," filed on March 10, 2017, which is incorporated herein by reference in its entirety, and to U.S. non-provisional patent application No. 15/456,294, entitled "SYSTEM AND METHOD FOR SEMANTIC SEGMENTATION USING HYBRID DILATED CONVOLUTION (HDC)," filed on March 10, 2017, which is incorporated herein by reference in its entirety. The disclosures of the referenced patent applications are considered part of the disclosure of the present application and are incorporated by reference herein in their entirety.
Technical Field
This patent document relates generally to tools (systems, apparatus, methods, computer program products, etc.) for image processing, vehicle control systems, and autonomous driving systems, and more particularly, but not by way of limitation, to a system and method for vehicle wheel detection.
Background
In autonomous driving systems, successfully perceiving and predicting the surrounding driving environment and traffic participants is crucial to making proper and safe decisions for controlling an autonomous or host vehicle. In the existing literature and applications of visual perception, techniques such as object recognition, two-dimensional (2D) object detection, and 2D scene understanding (or semantic segmentation) have been widely studied and used. With the aid of rapidly evolving deep learning techniques and computing capabilities, such as graphics processing units (GPUs), these visual perception techniques have been successfully applied to autonomous or host vehicles. However, compared to these 2D perception methods, full three-dimensional (3D) perception techniques are less studied because of the difficulty in obtaining reliable ground truth data and in correctly training 3D models. For example, correctly annotating a 3D bounding box for 3D object detection requires accurate measurement of the extrinsic and intrinsic camera parameters and of the motion of the autonomous or host vehicle, which is often difficult or impossible to achieve. Even when ground truth data is available, 3D models are difficult to train because the amount of training data is limited and the measurements are inaccurate. Therefore, less expensive but lower-performing alternative solutions have been used in these visual perception applications.
Therefore, an efficient system for detecting and analyzing vehicle wheels is necessary.
Disclosure of Invention
Vehicle wheels are important features for determining the exact position and attitude of a moving vehicle. Vehicle attitude may include vehicle heading, orientation, speed, acceleration, and the like. However, in the existing literature and applications of computer vision and autonomous driving, the use of vehicle wheel characteristics for vehicle control has generally been ignored. In various example embodiments disclosed herein, a system and method for vehicle wheel detection using image segmentation is provided. In an example embodiment, the system includes three components: 1) data collection and annotation, 2) model training using deep convolutional neural networks, and 3) real-time model inference. To take advantage of the latest deep learning models and training strategies, various example embodiments disclosed herein formulate the wheel detection problem as a two-class segmentation task and train deep neural networks that excel at multi-class semantic segmentation problems. Test results show that the system disclosed herein can successfully detect vehicle wheel features in real time under complex driving scenarios. Various example embodiments disclosed herein may be used in various applications, such as 3D vehicle pose estimation and vehicle lane distance estimation, among others.
Vehicle wheels may be used as vehicle features for at least three reasons: 1) the perception and prediction of other traffic participants is mainly concerned with their trajectories on the road surface, for which the wheels may provide the best measurement, since they are the vehicle components closest to the road surface; 2) the wheels may provide a reliable estimate of vehicle pose, as vehicles typically have four or more wheels to serve as reference points; and 3) conceptually, a wheel is easy to detect because its shape and position on a vehicle are consistent. When an accurate wheel feature segmentation analysis is obtained for a given vehicle, valuable vehicle information such as pose, position, intent, and trajectory can be obtained or inferred. This vehicle information may provide a great benefit to the perception, localization, and planning systems for autonomous driving.
In one example aspect, a system comprises: a data processor; and an autonomous vehicle wheel detection system executable by the data processor, the autonomous vehicle wheel detection system being configured to perform an autonomous vehicle wheel detection operation. The autonomous vehicle wheel detection operation is configured to: receive training image data from a training image data collection system; obtain ground truth data corresponding to the training image data; perform a training phase to train one or more classifiers to process images in the training image data to detect vehicle wheel objects in the images in the training image data; receive operational image data from an image data collection system associated with an autonomous vehicle; and perform an operational phase that includes applying the trained one or more classifiers to extract vehicle wheel objects from the operational image data and generating vehicle wheel object data.
In another aspect, a method is disclosed that includes: receiving training image data from a training image data collection system; obtaining ground truth data corresponding to the training image data; performing a training phase to train one or more classifiers to process images in the training image data to detect vehicle wheel objects in the images in the training image data; receiving operational image data from an image data collection system associated with an autonomous vehicle; and performing an operational phase comprising applying the trained one or more classifiers to extract vehicle wheel objects from the operational image data and generating vehicle wheel object data.
In another aspect, a non-transitory machine-usable storage medium embodies instructions that, when executed by a machine, cause the machine to: receive training image data from a training image data collection system; obtain ground truth data corresponding to the training image data; perform a training phase to train one or more classifiers to process images in the training image data to detect vehicle wheel objects in the images in the training image data; receive operational image data from an image data collection system associated with an autonomous vehicle; and perform an operational phase that includes applying the trained one or more classifiers to extract vehicle wheel objects from the operational image data and generating vehicle wheel object data.
These and other aspects are disclosed herein.
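The aspects above describe the same two-phase flow: an offline training phase followed by an operational phase that applies the trained classifier(s) to live image data and generates vehicle wheel object data. The following minimal Python sketch illustrates that flow; the class and method names are illustrative assumptions and are not drawn from the patent itself.

```python
# Minimal sketch of the claimed two-phase pipeline. The names used here
# (WheelDetectionSystem, fit, predict) are hypothetical placeholders.

class WheelDetectionSystem:
    def __init__(self, classifier):
        # `classifier` is any trainable two-class (wheel vs. background) model.
        self.classifier = classifier

    def training_phase(self, training_images, ground_truth_masks):
        """Train the classifier(s) to detect vehicle wheel objects in images."""
        self.classifier.fit(training_images, ground_truth_masks)

    def operational_phase(self, operational_images):
        """Apply the trained classifier(s) to extract vehicle wheel objects and
        generate vehicle wheel object data (one label map per input image)."""
        return [self.classifier.predict(image) for image in operational_images]
```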
Drawings
Various embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
FIG. 1 illustrates a block diagram of an example ecosystem in which an on-board image processing module of an example embodiment can be implemented;
FIG. 2 illustrates an image acquired from a camera (top half image) and its corresponding wheel annotation result (bottom half image);
FIG. 3 illustrates an offline training phase (first phase) used in an example embodiment to configure or train an autonomous vehicle wheel detection system and classifiers therein;
FIG. 4 (bottom image) illustrates an example ground truth label map that may be used to train a segmentation model in accordance with an example embodiment; FIG. 4 (top half image) also illustrates a hybrid visualization of the original example image in combination with ground truth;
FIG. 5 illustrates a second phase of operational use or simulated use of the autonomous vehicle wheel detection system in an example embodiment;
FIG. 6 (bottom image) illustrates an example predictive label map using a trained segmentation model that was trained using the example image of FIG. 4 and other training images; FIG. 6 (top image) also illustrates a hybrid visualization of the original example image in combination with the prediction results;
FIG. 7 (bottom image) illustrates another example ground truth label map that may be used to train a segmentation model in accordance with an example embodiment; FIG. 7 (top half image) also illustrates a hybrid visualization of the original example image in combination with ground truth;
FIG. 8 (bottom image) illustrates an example predictive label map using a trained segmentation model that was trained using the example image of FIG. 7 and other training images; FIG. 8 (top half image) also illustrates a hybrid visualization of the original example image in combination with the prediction results;
FIG. 9 (bottom image) illustrates yet another example ground truth label map that may be used to train a segmentation model in accordance with an example embodiment; FIG. 9 (top half image) also illustrates a hybrid visualization of the original example image in combination with ground truth;
FIG. 10 (bottom image) illustrates an example predictive label map using a trained segmentation model that was trained using the example image of FIG. 9 and other training images; FIG. 10 (top half image) also illustrates a hybrid visualization of the original example image in combination with the prediction results;
FIG. 11 is a process flow diagram illustrating an example embodiment of a system and method for vehicle wheel detection; and
FIG. 12 shows a diagrammatic representation of a machine in the example form of a computer system within which a set of instructions, when executed, may cause the machine to perform any one or more of the methodologies discussed herein.
Identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements disclosed in one embodiment may be beneficially utilized on other embodiments without specific recitation.
Detailed Description
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the various embodiments. It should be understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the disclosed subject matter. Any combination of the following features and elements is contemplated to implement and practice the present disclosure.
In the present specification, common or similar features may be denoted by common reference numerals. As used herein, "exemplary" may indicate examples, embodiments or aspects, and should not be construed as limiting or indicating preferences or preferred embodiments.
Currently, autonomous vehicles face several technical limitations that hinder their interaction with and adaptability to the real world.

Current autonomous vehicle technology is typically reactive; that is, decisions are based on the current condition or state. For example, an autonomous vehicle may be programmed to stop abruptly when an object is detected in the middle of the road. However, current autonomous vehicle technology has a limited ability to determine the likelihood that rapid braking will cause a rear-end or chain collision.

Furthermore, current technology does not know how to make real-world judgment calls. Different objects on the road call for different judgments depending on the environment and the current situation. For example, swerving to avoid a cardboard box poses an unnecessary hazard to the autonomous vehicle and to other drivers. On the other hand, swerving is necessary to avoid hitting a person standing in the middle of the road. The judgment call changes based on road conditions, the trajectories of other vehicles, the speed of the autonomous vehicle, and the speed and heading of other vehicles.

Additionally, current techniques are not well suited to environments shared with human drivers. When reacting to changes in traffic patterns, an autonomous vehicle must be able to predict the behavior of other drivers or pedestrians. One goal for the acceptance of autonomous vehicles in real life is that they operate in a manner that allows proper interaction with other human drivers and vehicles. Human drivers often make traffic decisions based on predictable human responses, which do not necessarily conform to machine rules. In other words, autonomous vehicles present the following technical problem: current autonomous vehicles behave too much like machines. Such behavior may lead to accidents because other drivers cannot anticipate certain actions performed by the autonomous vehicle.
Technical solutions to the above problems, among others, are provided herein. For example, an efficient system for detecting and analyzing vehicle wheels, based on image data from an image data collection system, is used to generate vehicle wheel object data. Analyzing the vehicle wheels makes ego-vehicle trajectory decisions and other autonomous vehicle decisions more human-like, thereby reducing problems for passengers, surrounding vehicles, and pedestrians. Accordingly, the present disclosure provides, among other solutions, an efficient system for detecting and analyzing vehicle wheels as a solution to the above-mentioned problems.
Systems and methods for vehicle wheel detection are described herein in various example embodiments. The example embodiments disclosed herein may be used in the context of an on-board control system 150 in a vehicle ecosystem 101. In one example embodiment, the on-board control system 150 with the image processing module 200 residing in the vehicle 105 may be configured in accordance with the architecture and ecosystem 101 illustrated in FIG. 1. However, it will be clear to those skilled in the art that the image processing module 200 described and claimed herein may also be implemented, configured, and used in a variety of other applications and systems.
Referring now to FIG. 1, a block diagram illustrates an exemplary ecosystem 101 in which an on-board control system 150 and an image processing module 200 in exemplary embodiments can be implemented. These components are described in more detail below. Ecosystem 101 includes various systems and components that can generate and/or communicate one or more information/data sources and related services to in-vehicle control system 150 and image processing module 200, which in-vehicle control system 150 and image processing module 200 can be installed in vehicle 105. For example, a camera installed in vehicle 105 as one of the devices of vehicle subsystem 140 may generate image and timing data that may be received by in-vehicle control system 150. The in-vehicle control system 150 and the image processing module 200 executing therein may receive the image and timing data inputs. As described in more detail below, the image processing module 200 may process the image input and extract object features, which may be used by the autonomous vehicle control subsystem as another subsystem of the vehicle subsystem 140. For example, the autonomous vehicle control subsystem may use the real-time extracted object features to safely and efficiently navigate and control the vehicle 105 through the real-world driving environment while avoiding obstacles and safely controlling the vehicle.
In the example embodiment described herein, the onboard control system 150 may be in data communication with a plurality of vehicle subsystems 140, all of which vehicle subsystems 140 may reside in the user's vehicle 105. A vehicle subsystem interface 141 is provided to facilitate data communication between the on-board control system 150 and the plurality of vehicle subsystems 140. In-vehicle control system 150 may be configured to include a data processor 171 to execute an image processing module 200 to process image data received from one or more vehicle subsystems 140. The data processor 171 may be integrated with a data storage device 172, the data storage device 172 being part of the computing system 170 in the in-vehicle control system 150. The data storage device 172 may be used to store data, processing parameters, and data processing instructions. A processing module interface 165 may be provided to facilitate data communication between the data processor 171 and the image processing module 200. In various example embodiments, a plurality of processing modules configured similarly to the image processing module 200 may be provided for execution by the data processor 171. As shown by the dashed lines in fig. 1, the image processing module 200 may be integrated into the in-vehicle control system 150, optionally downloaded to the in-vehicle control system 150, or deployed separately from the in-vehicle control system 150.
The in-vehicle control system 150 may be configured to receive data from, or transmit data to, a wide area network 120 and the network resources 122 connected thereto. An in-vehicle network-enabled device 130 and/or a user mobile device 132 may be used to communicate via the network 120. An in-vehicle network-enabled device interface 131 may be used by the in-vehicle control system 150 to facilitate data communication between the in-vehicle control system 150 and the network 120 via the in-vehicle network-enabled device 130. Similarly, a user mobile device interface 133 may be used by the in-vehicle control system 150 to facilitate data communication between the in-vehicle control system 150 and the network 120 via the user mobile device 132. In this manner, the in-vehicle control system 150 may obtain real-time access to the network resources 122 via the network 120. The network resources 122 may be used to obtain processing modules to be executed by the data processor 171, data content for training internal neural networks, system parameters, or other data.
Ecosystem 101 can include a wide area data network 120. The network 120 represents one or more conventional wide area data networks, such as the Internet, a cellular telephone network, a satellite network, a pager network, a wireless broadcast network, a gaming network, a WiFi network, a peer-to-peer network, a voice over IP (VoIP) network, and so on. One or more of these networks 120 may be used to connect users or client systems with network resources 122, such as websites, servers, central control sites, and the like. The network resources 122 may generate and/or distribute data that may be received in the vehicle 105 via the in-vehicle network-enabled device 130 or the user mobile device 132. The network resources 122 may also host network cloud services that may support functionality used for computing or assisting in processing image input or image input analysis. An antenna may be used to connect the on-board control system 150 and the image processing module 200 to the data network 120 via cellular, satellite, radio, or other conventional signal-receiving mechanisms. Such cellular data networks are currently available (e.g., Verizon™, AT&T™, T-Mobile™, etc.). Such satellite-based data or content networks are also currently available (e.g., SiriusXM™, HughesNet™, etc.). Conventional broadcast networks, such as AM/FM radio networks, pager networks, UHF networks, gaming networks, WiFi networks, peer-to-peer networks, voice over IP (VoIP) networks, and the like, are also well known. Thus, as described in more detail below, the in-vehicle control system 150 and the image processing module 200 may receive network-based data or content via an in-vehicle network-enabled device interface 131, which may be used to connect with the in-vehicle network-enabled device 130 and the network 120. In this manner, the in-vehicle control system 150 and the image processing module 200 may support various network-enabled in-vehicle devices and systems from within the vehicle 105.
As shown in FIG. 1, the in-vehicle control system 150 and the image processing module 200 may also receive data, image processing control parameters, and training content from user mobile devices 132, which may be located inside or near the vehicle 105. The user mobile devices 132 may represent standard mobile devices such as cellular phones, smart phones, personal digital assistants (PDAs), MP3 players, tablet computing devices (e.g., an iPad™), laptop computers, CD players, and other mobile devices that may generate, receive, and/or communicate data, image processing control parameters, and content for the in-vehicle control system 150 and the image processing module 200. As shown in FIG. 1, the mobile devices 132 may also be in data communication with the network cloud 120. The mobile devices 132 may obtain data and content from internal memory components of the mobile devices 132 themselves or from the network resources 122 via the network 120. Additionally, the mobile devices 132 may themselves include a GPS data receiver, accelerometers, WiFi triangulation, or other geo-location sensors or components that may be used to determine the user's real-time geographic location (via the mobile device) at any time. In any case, the in-vehicle control system 150 and the image processing module 200 may receive data from the mobile devices 132 as shown in FIG. 1.
Still referring to fig. 1, an example embodiment of ecosystem 101 can include a vehicle operation subsystem 140. For embodiments implemented in vehicle 105, many standard vehicles include operating subsystems, such as Electronic Control Units (ECUs), that support monitoring/control subsystems for the engine, brakes, transmission, electrical systems, exhaust systems, internal environment, etc. For example, data signals transmitted from vehicle operations subsystem 140 (e.g., an ECU of vehicle 105) to on-board control system 150 via vehicle subsystem interface 141 may include information regarding the status of one or more components or subsystems of vehicle 105. In particular, data signals that may be communicated from the vehicle operations subsystem 140 to a Controller Area Network (CAN) bus of the vehicle 105 may be received and processed by the onboard control system 150 via the vehicle subsystem interface 141. Embodiments of the systems and methods described herein may be used with substantially any mechanized system (including, but not limited to, industrial equipment, boats, trucks, machinery, or automobiles) that uses a CAN bus or similar data communication bus as defined herein; thus, the term "vehicle" as used herein may include any such motorized system. Embodiments of the systems and methods described herein may also be used with any system that employs some form of network data communication; however, such network communication is not required.
Still referring to fig. 1, the example embodiment of ecosystem 101 and the vehicle operation subsystem 140 therein can include various vehicle subsystems that support the operation of vehicle 105. Generally, for example, the vehicle 105 may take the form of a car, truck, motorcycle, bus, boat, airplane, helicopter, lawn mower, bulldozer, snowmobile, aircraft, recreational vehicle, farm equipment, construction equipment, trolley, golf cart, train, and cart. Other vehicles are also possible. The vehicle 105 may be configured to operate in an autonomous mode, in whole or in part. For example, the vehicle 105 may control itself in an autonomous mode and may be operable to determine a current state of the vehicle and its environment, determine a predicted behavior of at least one other vehicle in the environment, determine a confidence that may correspond to a likelihood that the at least one other vehicle performs the predicted behavior, and control the vehicle 105 based on the determined information. When in the autonomous mode, the vehicle 105 may be configured to operate without human interaction.
The vehicle 105 may include various vehicle subsystems such as a vehicle drive subsystem 142, a vehicle sensor subsystem 144, a vehicle control subsystem 146, and an occupant interface subsystem 148. As described above, the vehicle 105 may also include the in-vehicle control system 150, the computing system 170, and the image processing module 200. The vehicle 105 may include more or fewer subsystems, and each subsystem may include multiple elements. Further, each subsystem and component of the vehicle 105 may be interconnected. Thus, one or more of the described functions of the vehicle 105 may be divided into additional functional or physical components, or combined into fewer functional or physical components. In some other examples, additional functional and physical components may be added to the example illustrated by FIG. 1.

The vehicle drive subsystem 142 may include components operable to provide powered motion to the vehicle 105. In an example embodiment, the vehicle drive subsystem 142 may include an engine or electric motor, wheels/tires, a transmission, an electrical subsystem, and a power source. The engine or electric motor may be any combination of an internal combustion engine, an electric motor, a steam engine, a fuel cell engine, a propane engine, or another type of engine or electric motor. In some example embodiments, the engine may be configured to convert the power source into mechanical energy. In some example embodiments, the vehicle drive subsystem 142 may include multiple types of engines or motors. For example, a gas-electric hybrid vehicle may include a gasoline engine and an electric motor. Other examples are possible.

The wheels of the vehicle 105 may be standard tires. The wheels of the vehicle 105 may be configured in various forms, including, for example, a unicycle, bicycle, tricycle, or four-wheel format (such as on a car or truck). Other wheel geometries are possible, such as those including six or more wheels. Any combination of the wheels of the vehicle 105 may be operable to rotate differently relative to the other wheels. The wheels may include at least one wheel fixedly attached to the transmission and at least one tire coupled to a rim of the wheel that may make contact with the drive surface. The wheels may include a combination of metal and rubber or another combination of materials.

The transmission may include elements operable to transmit mechanical power from the engine to the wheels. To this end, the transmission may include a gearbox, a clutch, a differential, and drive shafts. The transmission may also include other elements. The drive shafts may include one or more axles that may be coupled to one or more wheels.

The electrical system may include components operable to transmit and control electrical signals in the vehicle 105. These electrical signals may be used to activate lights, servos, motors, and other electrically driven or controlled devices of the vehicle 105.

The power source may represent an energy source that may fully or partially power the engine or electric motor. That is, the engine or electric motor may be configured to convert the power source into mechanical energy. Examples of power sources include gasoline, diesel, other petroleum-based fuels, propane, other compressed-gas-based fuels, ethanol, fuel cells, solar panels, batteries, and other sources of electrical power. Additionally or alternatively, the power source may include any combination of a fuel tank, a battery, a capacitor, or a flywheel. The power source may also provide power to other subsystems of the vehicle 105.
Vehicle sensor subsystem 144 may include a number of sensors configured to sense information about the environment or condition of vehicle 105. For example, the vehicle sensor subsystem 144 may include an Inertial Measurement Unit (IMU), a Global Positioning System (GPS) transceiver, a RADAR unit, a laser range finder/LIDAR unit, and one or more cameras or image capture devices. The vehicle sensor subsystem 144 may also include sensors configured to monitor internal systems of the vehicle 105 (e.g., O2 monitor, fuel gauge, engine oil temperature). Other sensors are also possible. One or more of the sensors included in vehicle sensor subsystem 144 may be configured to be actuated individually or collectively to modify a position, an orientation, or both of the one or more sensors.
The IMU may include any combination of sensors (e.g., accelerometers and gyroscopes) configured to sense changes in position and orientation of the vehicle 105 based on inertial acceleration. The GPS transceiver may be any sensor configured to estimate the geographic location of the vehicle 105. To this end, the GPS transceiver may include a receiver/transmitter operable to provide information regarding the position of the vehicle 105 relative to the earth. The RADAR unit may represent a system that utilizes radio signals to sense objects within the local environment of the vehicle 105. In some embodiments, in addition to sensing objects, the RADAR unit may be configured to sense the speed and heading of objects in the vicinity of the vehicle 105. The laser range finder or LIDAR unit may be any sensor configured to sense objects in the environment in which the vehicle 105 is located using laser light. In an example embodiment, a laser rangefinder/LIDAR unit may include one or more laser sources, a laser scanner, and one or more detectors, among other system components. The laser rangefinder/LIDAR unit may be configured to operate in either a coherent (e.g., using heterodyne detection) or non-coherent detection mode. The camera may include one or more devices configured to capture a plurality of images of the environment of the vehicle 105. The camera may be a still image camera or a motion video camera.
The vehicle control subsystem 146 may be configured to control operation of the vehicle 105 and its components. Accordingly, the vehicle control subsystem 146 may include various elements, such as a steering unit, a throttle, a brake unit, a navigation unit, and an autonomous control unit.

The steering unit may represent any combination of mechanisms operable to adjust the heading of the vehicle 105. The throttle may be configured to control, for example, the operating speed of the engine and, in turn, the speed of the vehicle 105. The brake unit may include any combination of mechanisms configured to decelerate the vehicle 105. The brake unit may use friction to slow the wheels in a standard manner. In other embodiments, the brake unit may convert the kinetic energy of the wheels into an electric current. The brake unit may take other forms as well. The navigation unit may be any system configured to determine a driving path or route for the vehicle 105. The navigation unit may also be configured to update the driving path dynamically while the vehicle 105 is in operation. In some embodiments, the navigation unit may be configured to combine data from the image processing module 200, the GPS transceiver, and one or more predetermined maps in order to determine the driving path of the vehicle 105. The autonomous control unit may represent a control system configured to identify, assess, and avoid or otherwise negotiate potential obstacles in the environment of the vehicle 105. In general, the autonomous control unit may be configured to control the vehicle 105 for operation without a driver, or to provide driver assistance in controlling the vehicle 105. In some embodiments, the autonomous control unit may be configured to combine data from the image processing module 200, the GPS transceiver, RADAR, LIDAR, cameras, and other vehicle subsystems to determine a driving path or trajectory for the vehicle 105. Additionally or alternatively, the vehicle control subsystem 146 may include components other than those shown and described.
The occupant interface subsystem 148 may be configured to allow interaction between the vehicle 105 and external sensors, other vehicles, other computer systems, and/or occupants or users of the vehicle 105. For example, the occupant interface subsystem 148 may include standard visual display devices (e.g., plasma displays, liquid crystal displays (LCDs), touch screen displays, heads-up displays, etc.), speakers or other audio output devices, microphones or other audio input devices, navigation interfaces, and interfaces for controlling the internal environment (e.g., temperature, fan, etc.) of the vehicle 105. In an example embodiment, the occupant interface subsystem 148 may provide, for example, a means for a user/occupant of the vehicle 105 to interact with the other vehicle subsystems. The visual display devices may provide information to a user of the vehicle 105. The user interface devices may also be operable to accept input from the user via a touch screen. The touch screen may be configured to sense at least one of a position and a movement of a user's finger via capacitive sensing, resistive sensing, or a surface acoustic wave process, among other possibilities. The touch screen may be capable of sensing finger movement in a direction parallel or planar to the touch screen surface, in a direction normal to the touch screen surface, or both, and may also be capable of sensing a level of pressure applied to the touch screen surface. The touch screen may be formed of one or more translucent or transparent insulating layers and one or more translucent or transparent conducting layers. The touch screen may take other forms as well. In other examples, the occupant interface subsystem 148 may provide a means for the vehicle 105 to communicate with devices within its environment. The microphone may be configured to receive audio (e.g., a voice command or other audio input) from a user of the vehicle 105. Similarly, the speakers may be configured to output audio to a user of the vehicle 105. In one example embodiment, the occupant interface subsystem 148 may be configured to wirelessly communicate with one or more devices directly or via a communication network. For example, the wireless communication system may use 3G cellular communication (such as CDMA, EVDO, GSM/GPRS) or 4G cellular communication (such as WiMAX or LTE). Alternatively, the wireless communication system may communicate with a wireless local area network (WLAN), for example using WiFi. In some embodiments, the wireless communication system may communicate directly with a device, for example using an infrared link, Bluetooth, or ZigBee. Other wireless protocols, such as various vehicle communication systems, are possible within the context of this disclosure. For example, the wireless communication system may include one or more dedicated short-range communications (DSRC) devices, which may include public or private data communications between vehicles and/or roadside stations.
Many or all of the functions of the vehicle 105 may be controlled by the computing system 170. The computing system 170 may include at least one data processor 171 (which may include at least one microprocessor) that executes processing instructions stored in a non-transitory computer-readable medium, such as the data storage device 172. The computing system 170 may also represent multiple computing devices that may be used to control individual components or subsystems of the vehicle 105 in a distributed manner. In some embodiments, the data storage device 172 may include processing instructions (e.g., program logic) that are executable by the data processor 171 to perform various functions of the vehicle 105, including those described herein in connection with the figures. The data storage device 172 may also include additional instructions, including instructions to transmit data to, receive data from, interact with, or control one or more of the vehicle drive subsystem 142, the vehicle sensor subsystem 144, the vehicle control subsystem 146, and the occupant interface subsystem 148.
In addition to processing instructions, the data storage device 172 may also store data such as image processing parameters, training data, road map and path information, and other information. The vehicle 105 and the computing system 170 may use this information during operation of the vehicle 105 in autonomous, semi-autonomous, and/or manual modes.
The vehicle 105 may include a user interface for providing information to or receiving information from a user or occupant of the vehicle 105. The user interface may control or enable control of the content and layout of the interactive images that may be displayed on the display device. Further, the user interface may include one or more input/output devices within the set of occupant interface subsystems 148, such as a display device, a speaker, a microphone, or a wireless communication system.
The computing system 170 may control the functions of the vehicle 105 based on inputs received from various vehicle subsystems, such as the vehicle drive subsystem 142, the vehicle sensor subsystem 144, and the vehicle control subsystem 146, as well as from the occupant interface subsystem 148. For example, the computing system 170 may use input from the vehicle control subsystem 146 to control the steering unit to avoid obstacles detected by the vehicle sensor subsystem 144 and the image processing module 200, to move in a controlled manner, or to follow a path or trajectory based on output generated by the image processing module 200. In an example embodiment, the computing system 170 may be operable to provide control of many aspects of the vehicle 105 and its subsystems.
Although fig. 1 shows various components of vehicle 105 (e.g., vehicle subsystem 140, computing system 170, data storage device 172, and image processing module 200) as being integrated into vehicle 105, one or more of these components may be mounted or associated separately from vehicle 105. For example, the data storage device 172 may exist partially or completely separate from the vehicle 105. Thus, the vehicle 105 may be provided in the form of device elements that may be located separately or together. The equipment elements that make up vehicle 105 may be communicatively coupled together in a wired or wireless manner.
Additionally, the in-vehicle control system 150 as described above may obtain other data and/or content (denoted herein as auxiliary data) from local and/or remote sources. As described herein, the assistance data may be used to enhance, modify, or train the operation of the image processing module 200 based on various factors, including: the environment in which the user is operating the vehicle (e.g., the location of the vehicle, a designated destination, direction of travel, speed, time of day, status of the vehicle, etc.), and various other data available from these various sources (local and remote).
In particular embodiments, the in-vehicle control system 150 and the image processing module 200 may be implemented as in-vehicle components of the vehicle 105. In various example embodiments, the in-vehicle control system 150 and the image processing module 200 in data communication therewith may be implemented as integrated components or as separate components. In an example embodiment, the software components of the in-vehicle control system 150 and/or the image processing module 200 may be dynamically upgraded, modified, and/or enhanced through the use of a data connection with the mobile devices 132 and/or the network resources 122 via the network 120. The in-vehicle control system 150 may periodically query a mobile device 132 or a network resource 122 for updates, or updates may be pushed to the in-vehicle control system 150.
System and method for vehicle wheel detection
In various example embodiments disclosed herein, a system and method for vehicle wheel detection using image segmentation is provided. In an example embodiment, the system includes three components: 1) data collection and annotation, 2) model training using deep convolutional neural networks, and 3) real-time model inference. To take advantage of the latest deep learning models and training strategies, various example embodiments disclosed herein formulate the wheel detection problem as a two-class segmentation task and train deep neural networks that excel at multi-class semantic segmentation problems. When the system obtains an accurate wheel feature segmentation analysis for a given vehicle, the system can obtain or infer valuable vehicle information, such as pose, position, intent, and trajectory. This vehicle information may provide a great benefit to the perception, localization, and planning systems for autonomous driving. The components of the vehicle wheel detection system in various example embodiments are described below.
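To make the two-class (background vs. wheel) segmentation formulation concrete, the following sketch defines a small fully convolutional network in PyTorch. The architecture, layer sizes, and choice of framework are illustrative assumptions only; the patent does not specify a particular network design.

```python
# A minimal two-class (background vs. wheel) fully convolutional segmentation
# network. Illustrative sketch only; not the architecture used by the patent.
import torch
import torch.nn as nn

class TinyWheelSegNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, num_classes, 4, stride=2, padding=1),
        )

    def forward(self, x):
        # Output has one score map per class at the input resolution;
        # an argmax over the class dimension yields the predicted label map.
        return self.decoder(self.encoder(x))

# Example: a batch of two 3-channel 256x256 images -> per-pixel class scores.
scores = TinyWheelSegNet()(torch.randn(2, 3, 256, 256))
print(scores.shape)  # torch.Size([2, 2, 256, 256])
```

In practice a deeper, pretrained backbone would replace this toy encoder; the two output channels and the per-pixel argmax are what make it a two-class segmentation model.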
Data collection and annotation
In various example embodiments, the wheel segmentation problem may be defined in different ways, such as: 1) a bounding box regression problem, which only requires the positions of the four corners of a rectangular bounding box; 2) a semantic segmentation problem, which requires pixel-level labeling of wheel regions; or 3) an instance segmentation problem, which requires assigning a different instance identifier number (ID) to each individual wheel. The example embodiments described herein provide an annotation paradigm that is efficient and suitable for all of these possible tasks. Since vehicle wheels typically share similar visible shapes, such as circles or ellipses, the processing performed by the example embodiments transforms the vehicle wheel annotation task into a contour annotation task. That is, example embodiments may be configured to identify and render a contour line or outline around each vehicle wheel detected in the input image. From a vehicle wheel contour, example embodiments may be configured to generate a corresponding detection bounding box by extracting the extreme values of the contour in all four directions (up, down, left, right) and generating the bounding box from these extreme values. Additionally, example embodiments may be configured to obtain a semantic segmentation label corresponding to a vehicle wheel contour by filling the interior region defined by the wheel contour. Finally, example embodiments may also obtain vehicle wheel instance labels by counting the number of closed vehicle wheel contours and generating a different instance identifier number (ID) for each instance of a vehicle wheel detected in the input image. Accordingly, example embodiments may generate various kinds of information based on the vehicle wheel contours identified in the input image. Importantly, it is very easy for human labelers to trace vehicle wheel contours, which helps to efficiently build large machine learning training datasets. Thus, machine learning techniques may be used to enable example embodiments to collect raw training image data as well as to train machine learning models to identify and annotate vehicle wheel contours in input images. Example embodiments may then generate the various kinds of information described above based on the identified vehicle wheel contours. A sample raw input image and the vehicle wheel contour marking results produced by an example embodiment are shown in FIG. 2.
FIG. 2 illustrates a raw input image (FIG. 2, top half image) acquired from a camera of an autonomous vehicle and the corresponding vehicle wheel contour marking or annotation results (FIG. 2, bottom half image, shown in inverted colors) produced by an example embodiment. The dashed arrows shown in FIG. 2 are added to highlight the association between each instance of the vehicle wheel contour annotation and the portion of the original input image from which the vehicle wheel contour annotation was derived. As described in more detail below, a trained machine learning model may be used to generate vehicle wheel contour annotations from raw input images. Such contour-level vehicle wheel annotation, enabled by the example embodiments disclosed herein, provides several important benefits, including allowing the transformation of detected vehicle wheel object information into any desired format.
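The contour-centric annotation described above can be converted mechanically into all three label types (bounding boxes, semantic masks, and instance maps). The sketch below shows one way to do this with NumPy and OpenCV; the function and variable names are illustrative assumptions rather than the patent's own.

```python
# Sketch of deriving the three label types from per-wheel contour annotations
# (one polygon of image coordinates per annotated wheel).
import numpy as np
import cv2

def labels_from_wheel_contours(contours, image_shape):
    """contours: list of (N_i, 2) arrays of (x, y) polygon points, one per wheel."""
    h, w = image_shape
    semantic_mask = np.zeros((h, w), dtype=np.uint8)   # 0 = background, 1 = wheel
    instance_map = np.zeros((h, w), dtype=np.int32)    # 0 = background, k = wheel k
    bounding_boxes = []

    for instance_id, contour in enumerate(contours, start=1):
        poly = np.round(contour).astype(np.int32)
        # Bounding box from the extreme values in the four directions.
        x_min, y_min = poly.min(axis=0)
        x_max, y_max = poly.max(axis=0)
        bounding_boxes.append((int(x_min), int(y_min), int(x_max), int(y_max)))
        # Semantic label by filling the interior region defined by the contour.
        cv2.fillPoly(semantic_mask, [poly], 1)
        # Instance label: a distinct identifier for each closed wheel contour.
        cv2.fillPoly(instance_map, [poly], instance_id)

    return bounding_boxes, semantic_mask, instance_map
```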
Model training
In the example embodiments described herein, supervised learning methods may be used for the classification of objects, object features, and object relationships captured in a set of input images. Supervised learning approaches include processes that use a set of training or test data to train a classifier or model in an offline training phase. Using predefined features and manually annotated labels for each object (e.g., vehicle wheel) in the input images, example embodiments may train one or more machine learning classifiers on a large number of static training images. Additionally, example embodiments may train a machine learning classifier on a sequence of training images. After the training phase, the trained machine learning classifier may be used in a second phase (an operational or inference phase) to receive real-time images and effectively and efficiently detect the wheel features of each wheel in the received images. The training and operational use of the machine learning classifiers in example embodiments is described in more detail below.
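The offline training phase described above amounts to standard supervised learning over annotated images. A minimal sketch of such a training loop is shown below, again using PyTorch as an assumed framework; the batch size, optimizer, and learning rate are illustrative choices.

```python
# Sketch of the offline training phase: supervised learning of a two-class
# wheel segmentation model on annotated images. Illustrative only.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def train_wheel_classifier(model, images, label_maps, epochs=10, lr=1e-3):
    """images: float tensor (N, 3, H, W); label_maps: long tensor (N, H, W)
    with 0 = background and 1 = wheel (the annotated ground truth)."""
    loader = DataLoader(TensorDataset(images, label_maps), batch_size=4, shuffle=True)
    criterion = nn.CrossEntropyLoss()            # per-pixel two-class loss
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)

    model.train()
    for _ in range(epochs):
        for batch_images, batch_labels in loader:
            scores = model(batch_images)         # (B, 2, H, W) class scores
            loss = criterion(scores, batch_labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```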
Referring now to fig. 3, the example embodiments disclosed herein may be used in the context of an autonomous vehicle wheel detection system 210 for an autonomous vehicle. As described above, the autonomous vehicle wheel detection system 210 may be included in or executed by the image processing module 200 as described above. The autonomous vehicle wheel detection system 210 may include one or more vehicle wheel object contour classifiers 211, which one or more vehicle wheel object contour classifiers 211 may correspond to the machine learning classifiers described herein. Other types of classifiers or models may be equivalently used. Fig. 3 illustrates an offline training phase (first phase) used to configure or train the autonomous vehicle wheel detection system 210 and the classifier 211 therein in an example embodiment based on the training image data collection system 201 and the manual annotation data collection system 203 representing ground truth. In an example embodiment, the training image data collection system 201 may be used to collect perception data to train or configure processing parameters for the autonomous vehicle wheel detection system 210 with the training image data. As described in more detail below for example embodiments, after the initial training phase, the autonomous vehicle wheel detection system 210 may be used in an operation, inference, or simulation phase (second phase) to generate image feature predictions and wheel contour feature detections based on image data received by the autonomous vehicle wheel detection system 210 and based on training received by the autonomous vehicle wheel detection system 210 during the initial offline training phase.
Referring again to FIG. 3, the training image data collection system 201 may include an array of perception information collection devices or sensors, which may include image generation devices (e.g., cameras), laser devices, light detection and ranging (LIDAR) devices, global positioning system (GPS) devices, sound navigation and ranging (sonar) devices, radio detection and ranging (radar) devices, and so on. The perception information collected by the information collection devices at various traffic locations may include traffic or vehicle image data, road data, environmental data, range data from LIDAR or radar devices, and other sensor information received from the information collection devices of the data collection system 201 positioned near a particular roadway (e.g., a monitored location). Additionally, the data collection system 201 may include information collection devices installed in moving test vehicles navigating through predetermined routes in an environment or location of interest. Portions of the ground truth data may also be collected by the data collection system 201.
To expand the size and improve the diversity of the training image dataset, the data collection system 201 may collect images from a wide-angle camera and a telephoto camera mounted on the vehicle under a broad range of driving scenarios: local roads, highways, sunny days, cloudy days, city, countryside, bridges, desert, and so on. The training image dataset may be divided into a training dataset for model training and a test dataset for model evaluation.
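One simple way to realize the training/test split mentioned above is a seeded random shuffle of the collected image paths; the split fraction, seed, and file names below are illustrative assumptions.

```python
# Sketch of splitting the collected image set into a training set for model
# training and a held-out test set for model evaluation.
import random

def split_dataset(image_paths, train_fraction=0.8, seed=42):
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)
    cut = int(len(paths) * train_fraction)
    return paths[:cut], paths[cut:]   # (training set, test set)

train_set, test_set = split_dataset([f"frame_{i:06d}.png" for i in range(1000)])
print(len(train_set), len(test_set))  # 800 200
```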
The image data collection system 201 may collect actual images of vehicles, moving or static objects, road features, environmental features, and corresponding ground truth data in different scenarios. Different scenes may correspond to different locations, different modes of transportation, different environmental conditions, etc. The image data and other perception data collected by the data collection system 201 and the ground truth data reflect real-world traffic information related to the location or route, scene, monitored vehicle or object. By using the standard capabilities of well-known data collection devices, collected traffic and vehicle image data and other sensory or sensor data may be wirelessly communicated (or otherwise communicated) to a data processor of a standard computing system, on which the image data collection system 201 may be executed. Alternatively, the collected traffic and vehicle image data and other sensory or sensor data may be stored in a memory device located at the monitored location or in the test vehicle and subsequently transferred to a data processor of a standard computing system.
As shown in fig. 3, a manual annotation data collection system 203 is provided to apply labels to features found in training images collected by the data collection system 201. These training images may be analyzed by human labelers or automated processes to define a label or classification for each of the features identified in the training images. The manually applied annotation data may also include object relationship information that includes a state of each object in the training image data frame. For example, a manual labeler may trace the contours of vehicle wheel objects detected in the training image dataset. In this way, the manually annotated image tags and object relationship information may represent ground truth data corresponding to training images from the image data collection system 201. These feature tags or ground truth data may be provided to the autonomous vehicle wheel detection system 210 as part of the offline training phase described in more detail below.
Traffic and vehicle image data and other sensory or sensor data, feature tag data and ground truth data collected or calculated by the training image data collection system 201 for training, as well as object or feature tags produced by the manual annotation data collection system 203, may be used to generate training data that may be processed by the autonomous vehicle wheel detection system 210 during an offline training phase. For example, as is well known, classifiers, models, neural networks, and other machine learning systems can be trained in a training phase to produce configured outputs based on training data provided to the classifiers, models, neural networks, or other machine learning systems. As described in more detail below, the training data provided by the image data collection system 201 and the manual annotation data collection system 203 may be used to train the autonomous vehicle wheel detection system 210 and the classifiers 211 therein to determine vehicle wheel contour features corresponding to objects (e.g., wheels) identified in the training images. The off-line training phase of the autonomous vehicle wheel inspection system 210 is described in more detail below.
Example embodiments may train and use machine learning classifiers in a vehicle wheel detection process. These machine learning classifiers are represented in fig. 3 as vehicle wheel object contour classifiers 211. In an example embodiment, the vehicle wheel object contour classifier 211 may be trained using images from a training image dataset. In this way, the vehicle wheel object contour classifier 211 can efficiently and effectively detect the vehicle wheel feature of each vehicle from a set of input images. The training of the vehicle wheel object contour classifier 211 in an example embodiment is described in more detail below.
Reference is now made to fig. 4 (top half image), which illustrates a hybrid visualization of an original example training image combined with ground truth. FIG. 4 illustrates a sample training image that may be used by example embodiments to train the vehicle wheel object contour classifier 211. The raw training image may be one of the training images provided to the autonomous vehicle wheel detection system 210 by the training image data collection system 201 as described above. Training image data from the raw training images may be collected and provided to the autonomous vehicle wheel detection system 210, where features of the raw training images may be extracted. Semantic segmentation or similar processes may be used for feature extraction. As is well known, feature extraction may provide pixel-level object labels and bounding boxes for each feature or object identified in the image data. In many cases, the features or objects identified in the image data will correspond to vehicle wheel objects. Likewise, vehicle wheel objects in the input training image may be extracted and represented with labels and bounding boxes. A bounding box may be represented as a rectangular box having a size corresponding to the extracted contour of the vehicle wheel object. Additionally, object-level contour detection of each vehicle wheel object may also be performed using known techniques. Thus, for each received training image, the autonomous vehicle wheel detection system 210 may obtain or generate vehicle wheel object detection data represented by labels and bounding boxes, as well as object-level contour detection for each vehicle wheel object instance in the training image. Reference is now made to fig. 4 (bottom half image), which illustrates an example ground truth label map that may be used to train a segmentation model in accordance with an example embodiment.
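For illustration only, recovering per-instance labels, bounding boxes, and object-level contours from a binary wheel label map might look like the following sketch; OpenCV is assumed to be available, and the function name and output format are hypothetical rather than part of the described embodiment:

```python
import cv2
import numpy as np

def wheel_objects_from_mask(label_map):
    """Given a binary label map (1 = wheel pixel, 0 = background), recover
    per-instance contours and bounding boxes, analogous to the label-and-
    bounding-box representation of extracted wheel objects described above."""
    mask = label_map.astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    objects = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)  # rectangular box sized to the contour
        objects.append({"label": "vehicle_wheel",
                        "bounding_box": (x, y, w, h),
                        "contour": contour.squeeze(1)})
    return objects
```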
Because the exact shape and exact location of the vehicle wheel provide more information than a bounding box alone, example embodiments may employ a semantic segmentation framework to perform the vehicle wheel detection task, which is not overly complex compared to instance segmentation tasks. The problem can be formally defined as follows: given an original input RGB (red/green/blue) image I, output a label map R having the same size as I, wherein each vehicle wheel pixel is labeled 1 and each background pixel is labeled 0.
By filling the interior region defined by the vehicle wheel contour and performing dilation to obtain more positive training samples (e.g., wheel objects), example embodiments may generate ground truth to mitigate potential data imbalance issues.
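A minimal sketch of this ground-truth generation step is shown below; OpenCV is assumed to be available, and the 3-pixel dilation kernel is an assumed value rather than one specified by the embodiment:

```python
import cv2
import numpy as np

def make_ground_truth(image_shape, wheel_contours, dilate_px=3):
    """Build a binary ground-truth label map: fill the interior region of
    each annotated wheel contour, then dilate to obtain more positive
    samples and ease the foreground/background imbalance."""
    h, w = image_shape[:2]
    label_map = np.zeros((h, w), dtype=np.uint8)
    polygons = [np.asarray(c, dtype=np.int32) for c in wheel_contours]
    cv2.fillPoly(label_map, polygons, 1)          # wheel pixels -> 1
    kernel = np.ones((dilate_px, dilate_px), np.uint8)
    return cv2.dilate(label_map, kernel, iterations=1)
```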
Example embodiments may use a fully convolutional neural network (FCN) as a machine learning model trained for the vehicle wheel object contour detection task as described herein. The general form of the FCN has been widely applied to pixel-level image-to-image learning tasks. In an example embodiment, the FCN trained for the vehicle wheel object contour detection task (e.g., the machine learning model) may be customized to include semantic segmentation using Dense Upsampling Convolution (DUC) and semantic segmentation using Hybrid Dilation Convolution (HDC) as described in the related patent applications referenced above. The FCN for the vehicle wheel object contour detection task can be pre-trained on more complex multi-class scene parsing tasks, and the learned features can therefore speed up the training process. Because the image background includes many more pixels than the image foreground (e.g., vehicle wheels), example embodiments may train the machine learning model using a weighted multinomial logistic loss function to ensure proper training and mitigate overfitting. Example embodiments may train the entire machine learning model using stochastic gradient descent (SGD), running enough iterations to ensure convergence.
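The weighted pixel-wise loss and SGD training described above can be illustrated with the following PyTorch sketch. The small stand-in backbone here is not the FCN with DUC/HDC of the referenced applications, and the class weights, learning rate, and momentum are assumed values; weighting the rare foreground (wheel) class more heavily compensates for the pixel imbalance noted above.

```python
import torch
import torch.nn as nn

class TinySegmenter(nn.Module):
    """Stand-in pixel-level model; the embodiment's FCN with DUC/HDC
    layers would replace this backbone."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=2, dilation=2), nn.ReLU(),  # dilated conv
            nn.Conv2d(16, num_classes, 1))
    def forward(self, x):
        return self.features(x)

model = TinySegmenter()
class_weights = torch.tensor([0.1, 0.9])          # assumed background/wheel weights
criterion = nn.CrossEntropyLoss(weight=class_weights)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

def train_step(images, label_maps):
    """images: (N, 3, H, W) float tensor; label_maps: (N, H, W) long tensor
    with 1 for wheel pixels and 0 for background."""
    optimizer.zero_grad()
    logits = model(images)                         # (N, 2, H, W) per-pixel scores
    loss = criterion(logits, label_maps)
    loss.backward()
    optimizer.step()
    return loss.item()
```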
At this point, the offline training process is complete and the parameters associated with the one or more classifiers 211 have been appropriately adjusted so that the one or more classifiers 211 adequately detect the vehicle wheel object features corresponding to the input image data. After being trained by the offline training process described above, the one or more classifiers 211, with their appropriately adjusted parameters, can be deployed in the operation, inference, or simulation phase (second phase) described below in connection with fig. 5.
Inference
After FCN training converges, example embodiments may use the pre-trained FCN to perform model inference in a second or operational phase. FIG. 5 illustrates a second phase of operational use or simulated use of the autonomous vehicle wheel detection system 210 in an example embodiment. As shown in fig. 5, the autonomous vehicle wheel detection system 210 may receive real world operational image data (including still images and image sequences) from the image data collection system 205. The image data collection system 205 may include an array of sensory information collection devices, sensors, and/or image generation devices on or associated with the autonomous vehicle, similar to the sensory information collection devices of the image data collection system 201, except that the image data collection system 205 collects real-world operational image data instead of training image data. As described in more detail herein, by applying one or more trained vehicle wheel object contour classifiers 211, the autonomous vehicle wheel detection system 210 may process input real world operational image data to generate vehicle wheel object data 220, which vehicle wheel object data 220 may be used by other autonomous vehicle subsystems to configure or control operation of the autonomous vehicle. Also as described above, semantic segmentation or similar processes may be used for vehicle wheel object extraction from real world image data.
To obtain a tradeoff between inference speed and model precision, example embodiments may resize all input images to a width of 512 and a height of 288, achieving real-time (50 Hz) performance while maintaining high accuracy (recall > 0.9). Examples of ground truth images and corresponding predictions are illustrated in figs. 6-10. It can be seen that the trained models of the example embodiments achieve superior results under various conditions, such as different vehicle types (e.g., cars, trucks, etc.), different distances (near and far), and different lighting conditions (e.g., sunny days, shady places, etc.).
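The resizing and inference step might be sketched as follows; OpenCV and PyTorch are assumed, and the function name and the BGR camera input format are assumptions rather than requirements of the embodiment:

```python
import cv2
import numpy as np
import torch

def detect_wheels(model, bgr_image):
    """Resize the operational image to 512x288 (the speed/accuracy trade-off
    noted above), run the trained model, and return a binary wheel label map
    at the resized resolution."""
    resized = cv2.resize(bgr_image, (512, 288))            # (width, height)
    tensor = torch.from_numpy(resized[:, :, ::-1].copy())  # BGR -> RGB
    tensor = tensor.permute(2, 0, 1).float().unsqueeze(0) / 255.0
    with torch.no_grad():
        logits = model(tensor)                              # (1, 2, 288, 512)
    return logits.argmax(dim=1).squeeze(0).numpy().astype(np.uint8)
```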
FIG. 6 (bottom half image) illustrates an example predictive label map produced by a trained segmentation model that was trained using the example image of FIG. 4 and other training images. FIG. 6 (top half image) also illustrates a hybrid visualization of the original example image combined with the prediction results.
FIG. 7 (bottom half image) illustrates another example ground truth label map that may be used to train a segmentation model according to an example embodiment. FIG. 7 (top half image) also illustrates a hybrid visualization of the original example image combined with ground truth.
FIG. 8 (bottom half image) illustrates an example predictive label map produced by a trained segmentation model that was trained using the example image of FIG. 7 and other training images. FIG. 8 (top half image) also illustrates a hybrid visualization of the original example image combined with the prediction results.
FIG. 9 (bottom half image) illustrates yet another example ground truth label map that may be used to train a segmentation model according to an example embodiment. FIG. 9 (top half image) also illustrates a hybrid visualization of the original example image combined with ground truth.
FIG. 10 (bottom half image) illustrates an example predictive label map produced by a trained segmentation model that was trained using the example image of FIG. 9 and other training images. FIG. 10 (top half image) also illustrates a hybrid visualization of the original example image combined with the prediction results.
Using one or more trained classifiers 211, the autonomous vehicle wheel detection system 210 may process input image data to generate vehicle wheel object data 220, which vehicle wheel object data 220 may be used by other autonomous vehicle subsystems to configure or control operation of the autonomous vehicle. Accordingly, a system and method for vehicle wheel detection for autonomous vehicle control is disclosed.
Referring now to FIG. 11, a flowchart illustrates an example embodiment of a system and method 1000 for vehicle wheel detection. The example embodiment may be configured to: receiving training image data from a training image data collection system (processing block 1010); obtaining ground truth data corresponding to the training image data (processing block 1020); performing a training phase to train one or more classifiers to process images in the training image data to detect vehicle wheel objects in the images in the training image data (processing block 1030); receiving operational image data from an image data collection system associated with the autonomous vehicle (processing block 1040); and performing an operational phase that includes applying the trained one or more classifiers to extract vehicle wheel objects from the operational image data and generating vehicle wheel object data (processing block 1050).
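The two-phase flow of FIG. 11 can be summarized with the following skeleton; the callables are hypothetical placeholders for the subsystems described above and would be supplied by the caller:

```python
from typing import Any, Callable, Iterable, Iterator

def run_method_1000(receive_training_data: Callable[[], Any],
                    obtain_ground_truth: Callable[[Any], Any],
                    train_classifiers: Callable[[Any, Any], Any],
                    operational_images: Iterable[Any],
                    apply_classifiers: Callable[[Any, Any], Any]) -> Iterator[Any]:
    """Skeleton of the two-phase flow of FIG. 11; each callable stands in
    for the corresponding processing block."""
    training_images = receive_training_data()                        # block 1010
    ground_truth = obtain_ground_truth(training_images)              # block 1020
    classifiers = train_classifiers(training_images, ground_truth)   # block 1030
    for image in operational_images:                                 # block 1040
        yield apply_classifiers(classifiers, image)                  # block 1050
```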
As used herein and unless otherwise specified, the term "mobile device" includes any computing or communication device that can communicate with the in-vehicle control system 150 and/or the image processing module 200 described herein to obtain read or write permission for data signals, messages, or content transmitted via any mode of data communication. In many cases, the mobile device 132 is a handheld portable device, such as a smart phone, mobile phone, cellular phone, tablet computer, laptop computer, display pager, Radio Frequency (RF) device, Infrared (IR) device, global positioning device (GPS), Personal Digital Assistant (PDA), handheld computer, wearable computer, portable game player, other mobile communication and/or computing device, or an integrated device combining one or more of the preceding devices, or the like. Additionally, the mobile device 132 may be a computing device, a Personal Computer (PC), a multiprocessor system, a microprocessor-based or programmable consumer electronics device, a network PC, a diagnostic device, a system operated by a vehicle 119 manufacturer or service technician, or the like, and is not limited to portable devices. The mobile device 132 may receive and process data in any of a variety of data formats. The data format may include or be configured to operate in any programming format, protocol, or language, including but not limited to: JavaScript, C++, iOS, Android, and the like. As used herein and unless otherwise specified, the term "network resource" includes any device, system, or service that can communicate with the in-vehicle control system 150 and/or the image processing module 200 described herein to obtain read or write permission for data signals, messages, or content transmitted via any mode of interprocess or networked data communication. In many cases, the network resource 122 is a data network accessible computing platform, including: a client or server computer, a website, a mobile device, a peer-to-peer (P2P) network node, and the like. Additionally, the network resource 122 may be a network application, a network router, switch, bridge, gateway, diagnostic device, a system operated by a vehicle 119 manufacturer or service technician, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. The network resources 122 may include any of a variety of providers or processors of digital content that may be transmitted over the network. Typically, the file format employed is extensible markup language (XML); however, the various embodiments are not so limited, and other file formats may be used. For example, various embodiments may support data formats other than hypertext markup language (HTML)/XML or formats other than open/standard data formats. Various embodiments described herein may support any electronic file format, such as Portable Document Format (PDF), audio (e.g., Moving Picture Experts Group Audio Layer 3 (MP3), etc.), video (e.g., MP4, etc.), and any proprietary interchange format defined by a particular content site.
A wide area data network 120 (also referred to as a network cloud) used with network resources 122 may be configured to couple one computing or communication device with another computing or communication device. The network may be enabled to employ any form of computer-readable data or media for communicating information from one electronic device to another. In addition to other Wide Area Networks (WANs), cellular telephone networks, satellite networks, over-the-air broadcast networks, AM/FM radio networks, pager networks, UHF networks, other broadcast networks, gaming networks, WiFi networks, peer-to-peer networks, voice over IP (VoIP) networks, metropolitan area networks, Local Area Networks (LANs), other packet switched networks, circuit switched networks, and direct data connections, the network 120 may include the internet, accessed for example through a Universal Serial Bus (USB) or Ethernet port, other forms of computer-readable media, or any combination thereof. On an interconnected set of networks (including those based on different architectures and protocols), a router or gateway may serve as a link between the networks, enabling messages to be sent between computing devices on different networks. Similarly, communication links within a network may typically include twisted pair cable, USB, FireWire, Ethernet, or coaxial cable, while communication links between networks may utilize analog or digital telephone lines, full or partial dedicated digital lines including T1, T2, T3, and T4, Integrated Services Digital Network (ISDN), Digital Subscriber Lines (DSL), wireless links including satellite links, cellular telephone links, or other communication links known to those of ordinary skill in the art. In addition, remote computers and other related electronic devices may be remotely connected to the network via a modem and temporary telephone link.
Network 120 may also include any of a variety of wireless sub-networks that may further overlay stand-alone ad-hoc networks, and the like, to provide infrastructure-oriented connections. Such sub-networks may include mesh networks, wireless LAN (WLAN) networks, cellular networks, and the like. The network may also include an autonomous system of terminals, gateways, routers, and the like connected by wireless radio links or wireless transceivers. These connections may be configured to move freely and randomly and organize themselves arbitrarily, so that the topology of the network may change quickly. Network 120 may also employ one or more of a variety of standard wireless and/or cellular protocols or access technologies, including those set forth herein in connection with the network interface 712 and network 714 depicted in the figures.
In particular embodiments, the mobile device 132 and/or the network resource 122 may function as a client device that enables a user to access and use the in-vehicle control system 150 and/or the image processing module 200 to interact with one or more components of a vehicle subsystem. Indeed, these client devices 132 or 122 may comprise any computing device configured to send and receive information over a network, such as the network 120 described herein. Such client devices may include mobile devices such as cellular telephones, smart phones, tablet computers, display pagers, Radio Frequency (RF) devices, Infrared (IR) devices, global positioning devices (GPS), Personal Digital Assistants (PDAs), handheld computers, wearable computers, gaming consoles, integrated devices combining one or more of the preceding devices, and the like. Client devices may also include other computing devices, such as Personal Computers (PCs), multiprocessor systems, microprocessor-based or programmable consumer electronics, network PCs, and the like. Similarly, client devices may vary widely in capabilities and features. For example, a client device configured as a cell phone may have a numeric keypad and a few lines of monochrome LCD display on which only text can be displayed. In another example, a network-enabled client device may have a touch-sensitive screen, a stylus, and a color LCD display on which text and graphics may be displayed. Further, the network-enabled client device may include a browser application enabled to receive and send wireless application protocol (WAP) messages, wired application messages, and/or the like. In one embodiment, the browser application is enabled to employ Hypertext Markup Language (HTML), Dynamic HTML, Handheld Device Markup Language (HDML), Wireless Markup Language (WML), WMLScript, JavaScript™, Extensible HTML (XHTML), Compact HTML (cHTML), and the like, to display and send messages with related information.
The client device may also include at least one client application configured to receive content or messages from another computing device via network transmission. The client application may include the ability to provide and receive textual content, graphical content, video content, audio content, alerts, messages, notifications, and the like. Further, a client device may also be configured to transmit and/or receive messages to and from another computing device, such as through Short Message Service (SMS), direct messaging (e.g., Twitter), email, Multimedia Messaging Service (MMS), Instant Messaging (IM), Internet Relay Chat (IRC), mIRC, Jabber, Enhanced Messaging Service (EMS), text messaging, smart messaging, over-the-air (OTA) messaging, and so forth. The client device may also include a wireless application device on which the client application is configured to enable a user of the device to wirelessly transmit information to or receive information from a network resource via a network.
In-vehicle control system 150 and/or image processing module 200 may be implemented using a system that enhances the security of the execution environment, thereby increasing security and reducing the likelihood that in-vehicle control system 150 and/or image processing module 200 and related services may be corrupted by viruses or malware. For example, the in-vehicle control system 150 and/or the image processing module 200 may be implemented using a trusted execution environment, which may ensure that sensitive data is stored, processed, and transferred in a secure manner.
Fig. 12 shows a diagrammatic representation of a machine in the example form of a computing system 700 within which a set of instructions, when executed, and/or processing logic, when activated, may cause the machine to perform any one or more of the methodologies described and/or claimed herein. In alternative embodiments, the machine operates as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine may operate in the capacity of a server or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a Personal Computer (PC), a laptop computer, a tablet computing system, a Personal Digital Assistant (PDA), a cellular telephone, a smart phone, a web appliance, a set-top box (STB), a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) or activating processing logic that specifies actions to be taken by that machine. Further, while only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions or processing logic to perform any one or more of the methodologies described and/or claimed herein.
The example computing system 700 may include a data processor 702 (e.g., a system on a chip (SoC), a general purpose processing core, a graphics core, and optionally other processing logic) and a memory 704, which data processor 702 and memory 704 may communicate with each other via a bus or other data transfer system 706. The mobile computing system 700 and/or the mobile communication system 700 may also include various input/output (I/O) devices and/or interfaces 710, such as a touch screen display, an audio jack, a voice interface, and optionally a network interface 712. In an example embodiment, the network interface 712 may include one or more radio transceivers configured for compatibility with any one or more standard wireless and/or cellular protocols or access technologies (e.g., second generation (2G), 2.5 generation, third generation (3G), fourth generation (4G), and future generation wireless access for cellular systems, Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Enhanced Data GSM Environment (EDGE), Wideband Code Division Multiple Access (WCDMA), LTE, CDMA2000, WLAN, Wireless Router (WR) mesh, etc.). The network interface 712 may also be configured for use with various other wired and/or wireless communication protocols, including TCP/IP, UDP, SIP, SMS, RTP, WAP, CDMA, TDMA, UMTS, UWB, WiFi, WiMax, IEEE 802.11x, and the like. In essence, the network interface 712 may include or support virtually any wired and/or wireless communication and data processing mechanism by which information/data may be communicated between the computing system 700 and another computing or communication system via the network 714.
The memory 704 may represent a machine-readable medium having stored thereon one or more sets of instructions, software, firmware, or other processing logic (e.g., logic 708) embodying any one or more of the methodologies or functions described and/or claimed herein. The logic 708, or portions thereof, may also reside, completely or at least partially, within the processor 702 during execution thereof by the mobile computing and/or communication system 700. Likewise, the memory 704 and the processor 702 may also constitute machine-readable media. The logic 708, or portions thereof, may also be implemented at least partially in hardware. The logic 708, or portions thereof, may also be transmitted or received over a network 714 via the network interface 712. While the machine-readable medium in the example embodiments may be a single medium, the term "machine-readable medium" should be taken to include a single non-transitory medium or multiple non-transitory media (e.g., a centralized or distributed database, and/or associated caches and computing systems) that store the one or more sets of instructions. The term "machine-readable medium" shall also be taken to include any non-transitory medium that is capable of storing, encoding, or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the various embodiments, or that is capable of storing, encoding, or carrying data structures utilized by or associated with such a set of instructions. The term "machine-readable medium" may accordingly be taken to include, but is not limited to: solid-state memory, optical media, and magnetic media.
Some embodiments described herein may be captured using the following clause-based description.
1. A system includes a data processor and an autonomous vehicle wheel detection system executable by the data processor and configured to perform an autonomous vehicle wheel detection operation. The autonomous vehicle wheel detection operation is configured to: receiving training image data from a training image data collection system; obtaining ground truth data corresponding to the training image data; performing a training phase to train one or more classifiers to process images in the training image data to detect vehicle wheel objects in the images in the training image data; receiving operational image data from an image data collection system associated with an autonomous vehicle; and performing an operational phase that includes applying the trained one or more classifiers to extract vehicle wheel objects from the operational image data and generating vehicle wheel object data.
2. The system according to clause 1, wherein the training phase is configured to obtain the ground truth data from a manual image annotation or tagging process.
3. The system according to clause 1, further configured to generate a blended visualization of the original image in combination with the ground truth.
4. The system according to clause 1, further configured to generate the ground truth by filling an interior region defined by the extracted contour of the vehicle wheel object.
5. The system of clause 1, further configured to use a fully convolutional neural network (FCN) as the machine learning model.
6. The system according to clause 1, further configured to use a fully convolutional neural network (FCN) as a machine learning model having semantic segmentation using Dense Upsampling Convolution (DUC) and semantic segmentation using Hybrid Dilated Convolution (HDC).
7. The system according to clause 1, configured to generate an object-level contour detection for each extracted vehicle wheel object in the operational image data.
8. A method is disclosed for: receiving training image data from a training image data collection system; obtaining ground truth data corresponding to the training image data; performing a training phase to train one or more classifiers to process images in the training image data to detect vehicle wheel objects in the images in the training image data; receiving operational image data from an image data collection system associated with an autonomous vehicle; and performing an operational phase comprising applying the trained one or more classifiers to extract vehicle wheel objects from the operational image data and generating vehicle wheel object data.
9. The method according to clause 8, wherein the training phase comprises obtaining ground truth data from a manual image annotation or tagging process.
10. The method of clause 8, comprising: generating a blended visualization of the original image combined with the ground truth.
11. The method of clause 8, comprising: generating ground truth by filling an interior region defined by the contour of the extracted vehicle wheel object.
12. The method of clause 8, comprising: using a fully convolutional neural network (FCN) as the machine learning model.
13. The method of clause 8, comprising: using a fully convolutional neural network (FCN) as a machine learning model with semantic segmentation using Dense Upsampling Convolution (DUC) and semantic segmentation using Hybrid Dilation Convolution (HDC).
14. The method of clause 8, comprising: generating object-level contour detection for each extracted vehicle wheel object in the operational image data.
15. A non-transitory machine-usable storage medium embodying instructions that, when executed by a machine, cause the machine to: receiving training image data from a training image data collection system; obtaining ground truth data corresponding to the training image data; performing a training phase to train one or more classifiers to process images in the training image data to detect vehicle wheel objects in the images in the training image data; receiving operational image data from an image data collection system associated with an autonomous vehicle; and performing an operational phase that includes applying the trained one or more classifiers to extract vehicle wheel objects from the operational image data and generating vehicle wheel object data.
16. The non-transitory machine-usable storage medium of clause 15, wherein the training phase is configured to obtain the ground truth data from a manual image annotation or tagging process.
17. The non-transitory machine-usable storage medium of clause 15, further configured to generate a blended visualization of the original image in combination with the ground truth.
18. The non-transitory machine-usable storage medium of clause 15, further configured to generate the ground truth by filling an interior region defined by the contour of the extracted vehicle wheel object.
19. The non-transitory machine-usable storage medium of clause 15, further configured to use a fully convolutional neural network (FCN) as the machine learning model.
20. The non-transitory machine-usable storage medium of clause 15, further configured to generate an object-level contour detection for each extracted vehicle wheel object in the operational image data.
The disclosed and other embodiments, modules, and functional operations described herein may be implemented in digital electronic circuitry, or in computer software, firmware, or hardware (including the structures disclosed herein and their structural equivalents), or in combinations of one or more of them. The disclosed and other embodiments may be implemented as one or more computer program products, that is, one or more modules of computer program instructions encoded on a computer-readable medium for execution by, or to control the operation of, data processing apparatus. The machine-readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term "data processing apparatus" encompasses all apparatus, devices, and machines for processing data, including, for example, a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them. A propagated signal is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus.
A computer program (also known as a program, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described herein can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical, or optical disks. However, a computer need not have such devices. Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media, and memory devices, including, by way of example, semiconductor memory devices (e.g., EPROM, EEPROM, and flash memory devices); magnetic disks (e.g., internal hard disks or removable disks); magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
While this patent document contains many specifics, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or a variation of a subcombination.
Similarly, while operations are shown in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components in the embodiments described in this patent document should not be understood as requiring such separation in all embodiments.
Only a few embodiments and examples have been described and other embodiments, enhancements and variations can be made based on what is described and illustrated in this patent document.
The illustrations of the embodiments described herein are intended to provide a general understanding of the structure of various embodiments, and they are not intended to serve as a complete description of all the elements and features of the components and systems that might make use of the structures described herein. Many other embodiments will be apparent to those of ordinary skill in the art upon review of the description provided above. Other embodiments may be utilized and derived, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. The drawings herein are merely representational and may not be drawn to scale. Certain proportions of the figures may be exaggerated, while other proportions may be minimized. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
Some embodiments implement functions in two or more specific interconnected hardware modules or devices with related control and data signals communicated between and through the modules, or as portions of an application-specific integrated circuit. Thus, the example system is applicable to software implementations, firmware implementations, and hardware implementations.
The Abstract of the Disclosure is provided to enable the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure should not be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the detailed description, with each claim standing on its own as a separate embodiment.
While the foregoing is directed to embodiments of the present disclosure, other and further embodiments of the disclosure may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.

Claims (20)

1. A system, comprising:
a data processor; and
an autonomous vehicle wheel detection system executable by the data processor, the autonomous vehicle wheel detection system configured to perform an autonomous vehicle wheel detection operation configured to:
receiving training image data from a training image data collection system;
obtaining ground truth data corresponding to the training image data;
performing a training phase to train one or more classifiers to process images in the training image data to detect vehicle wheel objects in the images in the training image data;
receiving operational image data from an image data collection system associated with an autonomous vehicle; and
performing an operational phase comprising applying the trained one or more classifiers to extract vehicle wheel objects from the operational image data and generating vehicle wheel object data.
2. The system of claim 1, wherein the training phase is configured to obtain ground truth data from a manual image annotation or tagging process.
3. The system of claim 1, further configured to generate a blended visualization of raw images combined with the ground truth.
4. The system of claim 1, further configured to generate the ground truth by filling an interior region defined by the extracted contours of the vehicle wheel object.
5. The system of claim 1, further configured to use a fully convolutional neural network (FCN) as a machine learning model.
6. The system of claim 1, further configured to use a fully convolutional neural network (FCN) as a machine learning model having semantic segmentation using Dense Upsampling Convolution (DUC) and semantic segmentation using Hybrid Dilation Convolution (HDC).
7. The system of claim 1, configured to generate an object-level contour detection for each extracted vehicle wheel object in the operational image data.
8. A method, comprising:
receiving training image data from a training image data collection system;
obtaining ground truth data corresponding to the training image data;
performing a training phase to train one or more classifiers to process images in the training image data to detect vehicle wheel objects in the images of the training image data;
receiving operational image data from an image data collection system associated with an autonomous vehicle; and
performing an operational phase comprising applying the trained one or more classifiers to extract vehicle wheel objects from the operational image data and generating vehicle wheel object data.
9. The method of claim 8, wherein the training phase comprises obtaining ground truth data from a manual image annotation or tagging process.
10. The method of claim 8, comprising: generating a blended visualization of the original image combined with the ground truth.
11. The method of claim 8, comprising: generating the ground truth by filling an interior region defined by the extracted contour of the vehicle wheel object.
12. The method of claim 8, comprising: using a fully convolutional neural network (FCN) as the machine learning model.
13. The method of claim 8, comprising: using a fully convolutional neural network (FCN) as a machine learning model with semantic segmentation using Dense Upsampling Convolution (DUC) and semantic segmentation using Hybrid Dilation Convolution (HDC).
14. The method of claim 8, comprising: generating object-level contour detection for each extracted vehicle wheel object in the operational image data.
15. A non-transitory machine-usable storage medium embodying instructions that, when executed by a machine, cause the machine to:
receiving training image data from a training image data collection system;
obtaining ground truth data corresponding to the training image data;
performing a training phase to train one or more classifiers to process images in the training image data to detect vehicle wheel objects in the images in the training image data;
receiving operational image data from an image data collection system associated with an autonomous vehicle; and
performing an operational phase comprising applying the trained one or more classifiers to extract vehicle wheel objects from the operational image data and generating vehicle wheel object data.
16. The non-transitory machine-usable storage medium of claim 15, wherein the training phase is configured to obtain ground truth data from a manual image annotation or tagging process.
17. The non-transitory machine-usable storage medium of claim 15, further configured to generate a blended visualization of raw images combined with the ground truth.
18. The non-transitory machine-usable storage medium of claim 15, further configured to generate the ground truth by filling an interior region defined by the extracted contour of the vehicle wheel object.
19. The non-transitory machine-usable storage medium of claim 15, further configured to use a fully convolutional neural network (FCN) as a machine learning model.
20. The non-transitory machine-usable storage medium of claim 15, further configured to generate an object-level contour detection for each extracted vehicle wheel object in the operational image data.
CN201980017911.6A 2018-03-09 2019-03-08 System and method for vehicle wheel detection Active CN111837163B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210931196.0A CN115331198A (en) 2018-03-09 2019-03-08 System and method for vehicle wheel detection

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US15/917,331 2018-03-09
US15/917,331 US10671873B2 (en) 2017-03-10 2018-03-09 System and method for vehicle wheel detection
PCT/US2019/021478 WO2019190726A1 (en) 2018-03-09 2019-03-08 System and method for vehicle wheel detection

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202210931196.0A Division CN115331198A (en) 2018-03-09 2019-03-08 System and method for vehicle wheel detection

Publications (2)

Publication Number Publication Date
CN111837163A true CN111837163A (en) 2020-10-27
CN111837163B CN111837163B (en) 2022-08-23

Family

ID=68060692

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201980017911.6A Active CN111837163B (en) 2018-03-09 2019-03-08 System and method for vehicle wheel detection
CN202210931196.0A Pending CN115331198A (en) 2018-03-09 2019-03-08 System and method for vehicle wheel detection

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202210931196.0A Pending CN115331198A (en) 2018-03-09 2019-03-08 System and method for vehicle wheel detection

Country Status (4)

Country Link
EP (1) EP3762903A4 (en)
CN (2) CN111837163B (en)
AU (1) AU2019241892B2 (en)
WO (1) WO2019190726A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11797277B2 (en) * 2019-10-22 2023-10-24 Shenzhen Corerain Technologies Co., Ltd. Neural network model conversion method server, and storage medium

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103794056A (en) * 2014-03-06 2014-05-14 北京卓视智通科技有限责任公司 Vehicle type accurate classification system and method based on real-time double-line video stream
US20170053538A1 (en) * 2014-03-18 2017-02-23 Sri International Real-time system for multi-modal 3d geospatial mapping, object recognition, scene annotation and analytics
US20170206418A1 (en) * 2014-12-16 2017-07-20 Irobot Corporation Systems and Methods for Capturing Images and Annotating the Captured Images with Information
CN105118044A (en) * 2015-06-16 2015-12-02 华南理工大学 Method for automatically detecting defects of wheel-shaped cast product
US20170351261A1 (en) * 2015-11-04 2017-12-07 Zoox, Inc. Sensor-Based Object-Detection Optimization For Autonomous Vehicles
CN105976392A (en) * 2016-05-30 2016-09-28 北京智芯原动科技有限公司 Maximum-output-probability-based vehicle tyre detection method and apparatus
CN106250838A (en) * 2016-07-27 2016-12-21 乐视控股(北京)有限公司 vehicle identification method and system
CN107346437A (en) * 2017-07-03 2017-11-14 大连理工大学 The extraction method of body side view parameter model
CN107292291A (en) * 2017-07-19 2017-10-24 北京智芯原动科技有限公司 A kind of vehicle identification method and system
CN107577988A (en) * 2017-08-03 2018-01-12 东软集团股份有限公司 Realize the method, apparatus and storage medium, program product of side vehicle location

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SUN WEIGUANG: "Research on an Image-Based Vehicle Wheel Detection Method", Science & Technology Information *
HU GUFEI ET AL.: "Research and Implementation of Vehicle Tire Detection Based on the Adaboost Algorithm", Computer Technology and Development *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113590980A (en) * 2021-07-20 2021-11-02 北京地平线机器人技术研发有限公司 Method for training neural network, method and device for predicting track
CN113590980B (en) * 2021-07-20 2024-03-29 北京地平线机器人技术研发有限公司 Method for training neural network, method and device for predicting track

Also Published As

Publication number Publication date
CN111837163B (en) 2022-08-23
WO2019190726A1 (en) 2019-10-03
CN115331198A (en) 2022-11-11
AU2019241892A1 (en) 2020-10-08
EP3762903A1 (en) 2021-01-13
EP3762903A4 (en) 2021-12-08
AU2019241892B2 (en) 2024-03-28

Similar Documents

Publication Publication Date Title
US11967140B2 (en) System and method for vehicle wheel detection
US11745736B2 (en) System and method for vehicle occlusion detection
CN110753934B (en) System and method for actively selecting and tagging images for semantic segmentation
US10311312B2 (en) System and method for vehicle occlusion detection
US11734563B2 (en) System and method for vehicle taillight state recognition
CN111344646B (en) System and method for data-driven prediction for trajectory planning of autonomous vehicles
US20200242373A1 (en) System and method for large-scale lane marking detection using multimodal sensor data
US10528823B2 (en) System and method for large-scale lane marking detection using multimodal sensor data
US10387736B2 (en) System and method for detecting taillight signals of a vehicle
CN110914778A (en) System and method for image localization based on semantic segmentation
CN111373458B (en) Prediction-based system and method for trajectory planning for autonomous vehicles
CN112272844B (en) Systems and methods for neighboring vehicle intent prediction for autonomous vehicles
CN111837163B (en) System and method for vehicle wheel detection
CN111108505B (en) System and method for detecting a tail light signal of a vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant