US20220292289A1 - Systems and methods for depth estimation in a vehicle - Google Patents

Systems and methods for depth estimation in a vehicle

Info

Publication number
US20220292289A1
Authority
US
United States
Prior art keywords
data
vehicle
loss
cameras
disparity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US17/198,954
Inventor
Albert Shalumov
Michael Slutsky
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GM Global Technology Operations LLC
Original Assignee
GM Global Technology Operations LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by GM Global Technology Operations LLC filed Critical GM Global Technology Operations LLC
Priority to US17/198,954 (published as US20220292289A1)
Assigned to GM GLOBAL TECHNOLOGY OPERATIONS. Assignment of assignors interest (see document for details). Assignors: SHALUMOV, ALBERT; SLUTSKY, MICHAEL
Priority to DE102021129544.0A (published as DE102021129544A1)
Priority to CN202111528067.9A (published as CN115082874A)
Publication of US20220292289A1
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06K9/00805
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • G06K9/6215
    • G06K9/6256
    • G06K9/726
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/09Supervised learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/096Transfer learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4038Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4046Scaling the whole image or part thereof using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/001Image restoration
    • G06T5/002Denoising; Smoothing
    • G06T5/70
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/16Image acquisition using multiple overlapping images; Image stitching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/778Active pattern-learning, e.g. online learning of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/26Techniques for post-processing, e.g. correcting the recognition result
    • G06V30/262Techniques for post-processing, e.g. correcting the recognition result using context analysis, e.g. lexical, syntactic or semantic context
    • G06V30/274Syntactic or semantic context, e.g. balancing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/698Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • H04N5/23238
    • H04N5/247
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/16Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using electromagnetic waves other than radio waves
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261Obstacle
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074Stereoscopic image analysis
    • H04N2013/0081Depth or disparity estimation from stereoscopic image signals


Abstract

Methods and systems for training a neural network for depth estimation in a vehicle. The methods and systems receive respective training image data from at least two cameras. Fields of view of adjacent cameras of the at least two cameras partially overlap. The respective training image data is processed through a neural network providing depth data and semantic segmentation data as outputs. The neural network is trained based on a loss function. The loss function combines a plurality of loss terms including at least a semantic segmentation loss term and a panoramic loss term. The panoramic loss term includes a similarity measure regarding overlapping image patches of the respective image data that each correspond to a region of overlapping fields of view of the adjacent cameras. The semantic segmentation loss term quantifies a difference between ground truth semantic segmentation data and the semantic segmentation data output from the neural network.

Description

    INTRODUCTION
  • The present disclosure generally relates to depth estimation based on image data from cameras mounted to a vehicle, and more particularly relates to methods and systems for estimating depth based on surround view image data.
  • Accurate depth data is important for many vehicle systems, both existing and future. From obstacle prediction to informative user interfaces, depth data facilitates vehicle usage. One method for obtaining depth data is by adding LiDARs to a vehicle sensor suite. Another method employed is to use a pair of closely placed front-facing cameras and solve for depth.
  • LiDARs add hardware and maintenance costs to the vehicle and have significant power requirements. Further, LiDAR provides sparse depth measurements such that additional processing is still required to convert it to dense data. Stereo cameras also require additional sensors to be fitted to the vehicle.
  • Accordingly, it is desirable to provide systems and methods that can supply dense depth data with minimal hardware costs added to vehicles that already include cameras covering the vehicle surroundings. Furthermore, other desirable features and characteristics of the present invention will be apparent from the subsequent detailed description and the appended claims, taken in conjunction with the accompanying drawings and the foregoing technical field and background.
  • SUMMARY
  • In one aspect, a method of controlling a vehicle includes receiving respective image data from at least two cameras mounted to the vehicle, where fields of view of adjacent cameras partially overlap; processing the respective image data through a neural network trained to provide depth data and semantic segmentation data as outputs; and controlling a function of the vehicle based on the depth data. The neural network is trained using a loss function that combines a plurality of loss terms, including at least a semantic segmentation loss term and a panoramic loss term based on a similarity measure over overlapping image patches corresponding to regions of overlapping fields of view. Further aspects provide a corresponding method of training the neural network and a vehicle configured to perform the method.
  • DESCRIPTION OF THE DRAWINGS
  • The present disclosure will hereinafter be described in conjunction with the following drawing figures, wherein like numerals denote like elements, and wherein:
  • FIG. 1 is a functional block diagram of a system for depth estimation at an inference stage, in accordance with an exemplary embodiment;
  • FIG. 2 is a functional block diagram of a system for depth estimation at a training stage, in accordance with an exemplary embodiment;
  • FIGS. 3A and 3B provide diagrams related to image projection, in accordance with an exemplary embodiment;
  • FIG. 4 is a flow chart of a method for training a neural network, in accordance with an exemplary embodiment; and
  • FIG. 5 is a flow chart of a method of controlling a vehicle using outputs from a trained neural network, in accordance with an exemplary embodiment.
  • DETAILED DESCRIPTION
  • The following detailed description is merely exemplary in nature and is not intended to limit the disclosure or the application and uses thereof. Furthermore, there is no intention to be bound by any theory presented in the preceding background or the following detailed description.
  • As used herein, the term module refers to any hardware, software, firmware, electronic control component, processing logic, and/or processor device, individually or in any combination, including without limitation: application specific integrated circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and memory that executes one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.
  • Embodiments of the present disclosure may be described herein in terms of functional and/or logical block components and various processing steps. It should be appreciated that such block components may be realized by any number of hardware, software, and/or firmware components configured to perform the specified functions. For example, an embodiment of the present disclosure may employ various integrated circuit components, e.g., memory elements, digital signal processing elements, logic elements, look-up tables, or the like, which may carry out a variety of functions under the control of one or more microprocessors or other control devices. In addition, those skilled in the art will appreciate that embodiments of the present disclosure may be practiced in conjunction with any number of systems, and that the systems described herein are merely exemplary embodiments of the present disclosure.
  • For the sake of brevity, conventional techniques related to signal processing, data transmission, signaling, control, and other functional aspects of the systems (and the individual operating components of the systems) may not be described in detail herein. Furthermore, the connecting lines shown in the various figures contained herein are intended to represent example functional relationships and/or physical couplings between the various elements. It should be noted that many alternative or additional functional relationships or physical connections may be present in an embodiment of the present disclosure.
  • Described herein are systems and methods to estimate a dense depth map from input images from a plurality of cameras mounted to a vehicle. The dense depth map is estimated by processing the images using a Deep Neural Network (DNN). One possible DNN includes an encoder-decoder architecture to generate depth data and semantic segmentation data. In one embodiment, the DNN is trained based on a loss function that combines loss terms including disparity (1/depth) loss, disparity smoothness loss, semantic segmentation loss, and panoramic loss. The loss function is a single multi-task learning loss function. The depth data output by the trained DNN can be used in a variety of vehicular applications including image splicing or stitching, estimating distance from obstacles and controlling the vehicle to avoid the obstacles, dense depth prediction, view perspective change, and surround view generation.
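  • The disclosure does not specify the network beyond an encoder-decoder (or GAN) that co-produces depth and semantic segmentation. Purely as an illustrative sketch under that assumption, a minimal PyTorch encoder-decoder with two output heads might look as follows; every layer size, the class count, and the positivity trick on the depth head are placeholder assumptions, not part of the disclosure.

```python
import torch
import torch.nn as nn

class DepthSemSegNet(nn.Module):
    """Toy shared encoder-decoder with a depth head and a semantic
    segmentation head (hypothetical stand-in for neural network 18)."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.depth_head = nn.Conv2d(16, 1, 3, padding=1)              # dense depth map
        self.semseg_head = nn.Conv2d(16, num_classes, 3, padding=1)   # per-pixel class logits

    def forward(self, x):
        features = self.decoder(self.encoder(x))
        depth = torch.relu(self.depth_head(features)) + 1e-3   # keep depth strictly positive
        return depth, self.semseg_head(features)
```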
  • In some embodiments, image data from surround cameras is processed by the DNN to force consistency in depth estimation. The surround cameras may be wide lens cameras. The panoramic loss term uses reprojection to a common viewpoint of the images of the surround cameras as part of the loss function. In particular, a similarity measure is taken that compares overlapping image patches from adjacent surround view cameras.
  • Systems and methods described herein generate dense depth estimation from vehicle surround cameras using the DNN. The DNN employs multi-task learning and co-learns both depth and semantic segmentation. As part of evaluating the panoramic loss term, known camera extrinsic and intrinsic parameters as well as inferred depth are used to generate a 3D point cloud. The 3D point cloud is projected to a common plane to provide a panoramic image. The panoramic loss term assesses similarity of overlapping regions of the panoramic image as part of the loss function. The loss function may combine disparity, its smoothness, semantic segmentation and panoramic loss in a single loss function.
  • FIG. 1 illustrates a system 30 for estimating depth and performing semantic segmentation on image data 16 received from a plurality of cameras 14 a to 14 d mounted to a vehicle 12. The depth estimation system 30 is included in the vehicle 12 and includes a processor 24, a memory 26 and a neural network 18. The cameras 14 a to 14 d are mounted to the vehicle 12 at a plurality of different locations to provide surround images. In the illustrated embodiment, there are four cameras 14 a to 14 d. A first camera 14 a is a front facing camera. A second camera 14 b is a left facing camera. A third camera 14 c is a rear facing camera. A fourth camera 14 d is a right facing camera. Fewer or greater numbers of cameras 14 a to 14 d can be included in the vehicle 12. The cameras 14 a to 14 d are wide angle cameras in one embodiment, also known as fisheye lens cameras. The first/front camera 14 a has a partly overlapping field of view with the second/left camera 14 b and with the fourth/right camera 14 d. The second/left camera 14 b has a partly overlapping field of view with the third/rear camera 14 c and with the first/front camera 14 a. The fourth/right camera 14 d has a partly overlapping field of view with the third/rear camera 14 c and with the first/front camera 14 a. In this way, image data 16 output from the cameras 14 a to 14 d provides a 360° surround view around the vehicle 12. The cameras 14 a to 14 d can provide color image data 16 such as Red, Green, Blue (RGB) image data 16.
  • The system 30 is shown in the context of (e.g. included within) a vehicle 12, specifically an automobile. The system 30, however, is useful in other vehicular contexts such as aircraft, sea vessels, etc. In various embodiments, the vehicle 12 is an autonomous vehicle and the system 30 is incorporated into the autonomous vehicle 12. However, the system 30 is useful in any kind of vehicle (autonomous or otherwise) that includes surround cameras 14 a to 14 d that produce image data 16 that can be combined and processed by the neural network 18 to infer depth data 20 and semantic segmentation data 22. The autonomous vehicle 12 is, for example, a vehicle that is automatically controlled to carry passengers from one location to another. The vehicle 12 is depicted in the illustrated embodiment as a passenger car, but it should be appreciated that any other vehicle including motorcycles, trucks, sport utility vehicles (SUVs), recreational vehicles (RVs), marine vessels, aircraft, etc., can also be used. In an exemplary embodiment, the autonomous vehicle 12 is a so-called Level Four or Level Five automation system. A Level Four system indicates “high automation”, referring to the driving mode-specific performance by an automated driving system of all aspects of the dynamic driving task, even if a human driver does not respond appropriately to a request to intervene. A Level Five system indicates “full automation”, referring to the full-time performance by an automated driving system of all aspects of the dynamic driving task under all roadway and environmental conditions that can be managed by a human driver.
  • In embodiments, the vehicle 12 includes a vehicle controller 40 that controls one or more vehicular functions based on at least depth data 20 and optionally also the semantic segmentation data 22. The vehicle controller 40 may include one or more advanced driver-assistance systems providing electronic driver assistance based on the outputs from the neural network 18. The vehicle controller 40 may include an autonomous driver or semi-autonomous driver controlling the vehicle 12 through one or more vehicle actuators (e.g. actuators of propulsion, braking and steering systems) based on the depth data 20 and the semantic segmentation data 22. In embodiments, the vehicle controller 40 includes control modules receiving depth data 20 and the semantic segmentation data 22 in order to determine control instructions to be applied to the vehicle actuators. The control modules of the vehicle controller 40 may run localization and environmental perception algorithms that process the depth data 20 and the semantic segmentation data 22 in order to determine the control instructions. The control modules can include an obstacle detection and avoidance module that processes the depth data 20 and the semantic segmentation data 22 to evaluate the type of obstacles and the three dimensional location of the obstacles. The obstacles are tracked and their trajectory in three dimensions can be predicted. The vehicle controller 40 can responsively control the vehicle 12 to avoid collisions with the tracked obstacles.
  • Continuing to refer to FIG. 1, the depth estimation system 30 includes at least one processor 24, a memory 26, and the like. The processor 24 may execute program instructions 28 stored in the memory 26. The processor 24 may refer to a central processing unit (CPU), a graphics processing unit (GPU), or a dedicated processor on which the methods and functions according to the present disclosure are performed. The memory 26 may be composed of a volatile storage medium and/or a non-volatile storage medium. For example, the memory 26 may be comprised of a read only memory (ROM) and/or a random access memory (RAM). The memory 26 stores computer program instructions 28 executed by the processor 24 in order to implement blocks, modules and method steps described herein. The methods implemented by the depth estimation system 30 include receiving the image data 16, passing the image data 16 through the neural network 18 and outputting depth data 20 and semantic segmentation data 22 at the inference stage. At the training stage, which is illustrated in FIG. 2, a loss function 52 is evaluated that includes a plurality of conceptually different loss terms, as will be described further below. The neural network 18 can be one of a variety of kinds of neural network including DNNs such as Convolutional Neural Networks (CNNs). In one embodiment, the neural network 18 is implemented with an encoder-decoder neural network architecture or as a Generative Adversarial Network (GAN).
  • In FIG. 2, a depth estimation system 50 at the training stage is shown. After the neural network 18 has been trained, it can be deployed in the depth estimation system 30 in the vehicle 12 at an inference stage, as shown in FIG. 1. The depth estimation system 50 at the training stage has a processor 62 and memory 64 storing computer program instructions 66. The processor 62, the memory 64 and the computer program instructions 66 can be implemented according to that described above with respect to the processor 24, the memory 26 and the computer program instructions 28 of FIG. 1. The computer program instructions 66 are executed by the processor 62 to control a training process. In the training process, the loss function 52 is calculated and optimized by iteratively adjusting weights of the neural network 18 until a maximum or minimum solution is achieved. In one embodiment, the loss function L is:

  • $L = \lambda_1 \cdot L_1 + \lambda_2 \cdot \text{Smoothness} + \lambda_3 \cdot \text{SemSeg} + \lambda_4 \cdot \text{Panoramic}$  (equation 1)
  • In equation 1, L1 is disparity loss, which measures a difference between a disparity map derived from the depth data 20 output from the neural network 18 and ground truth depth data included in the ground truth data 58. Smoothness or disparity smoothness loss is a measure of how smooth the disparity map is at image regions outside of image edges. Lack of disparity smoothness at image edges is given little weight, which is controlled by a smoothness control function. SemSeg or semantic segmentation loss is a measure of similarity between the semantic segmentation data 22 output from the neural network 18 and ground truth semantic segmentation data included in the ground truth data 58. Panoramic or panoramic loss is a measure of dissimilarity of image patches at overlapping regions of the input images. The overlapping regions of the input images are determined by first placing the image data from each different camera into a common global three dimensional coordinate system and then projecting the image data into an equirectangular panorama. λ1, λ2, λ3 and λ4 are tunable weighting factors. Disparity is defined as the inverse of depth.
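  • As a small illustrative helper (not from the disclosure), the weighted combination of equation 1 might be expressed as below; the default λ values are arbitrary placeholders, since the patent only describes them as tunable weighting factors.

```python
def combined_loss(disparity_loss, smoothness_loss, semseg_loss, panoramic_loss,
                  lambdas=(1.0, 0.1, 1.0, 0.5)):
    """Single multi-task loss per equation 1: a weighted sum of the disparity,
    disparity smoothness, semantic segmentation and panoramic loss terms."""
    l1, l2, l3, l4 = lambdas
    return (l1 * disparity_loss + l2 * smoothness_loss
            + l3 * semseg_loss + l4 * panoramic_loss)
```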
  • Ground truth data 58 and training image data 54 are provided as inputs to the depth estimation system 50 at the training stage. The ground truth data 58 includes ground truth depth data and ground truth semantic segmentation data. The training image data 54 can be taken from a database of surround image data that has been recorded by cameras 14 a to 14 d mounted to a vehicle 12 as described above with respect to FIG. 1. The training image data 54 can be simulated in other embodiments. The ground truth data 58 can be output from the simulation when a simulation is used to generate the training image data 54. The ground truth depth data can be generated based on depth sensing capabilities of stereo camera systems, LiDAR devices, RADAR devices, or any other depth sensing device mounted to the vehicle 12 and working in tandem with the cameras 14 a to 14 d when recording the training image data 54. The semantic segmentation ground truth data can be generated by automated or human labelling of objects based on the training image data 54. All manner of types of obstacles and objects may be labelled as part of the semantic segmentation output by the neural network 18 and as part of the ground truth data 58, such as other vehicles, type of vehicle, pedestrians, trees, lane markings, curbsides, cyclists, etc.
  • As discussed with respect to equation 1, the loss function 52 includes a plurality of loss terms, which will be described in further detail in accordance with one exemplary embodiment. Given a set of predicted disparity maps $\{d_i^p\}$ and a set of ground truth disparity maps $\{d_i^{gt}\}$, the L1 loss term can be calculated as:
  • $\frac{1}{N}\sum_{i=1}^{N} \left| d_i^p - d_i^{gt} \right|$  (equation 2)
  • Equation 2 represents a measure of the error between the depth data 20 output by the neural network 18 and the ground truth depth data. That is, equation 2 quantifies how correctly the system is inferring the depth data 20 as compared to the ground truth. Other measures of quantifying the disparity loss term can be used by the loss function 52. The set of predicted disparity maps can be derived based on an inverse of the depth maps output by the neural network 18 as part of the depth data 20. The set of ground truth disparity maps can be derived based on an inverse of the ground truth depth maps included in the ground truth depth data.
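  • A minimal NumPy sketch of equation 2 follows, assuming disparity is obtained as the inverse of depth as defined above; the epsilon guard against division by zero is an added assumption.

```python
import numpy as np

def disparity_l1_loss(pred_depth, gt_depth, eps=1e-6):
    """Mean absolute difference between predicted and ground-truth disparity
    maps (equation 2), with disparity taken as 1/depth."""
    d_pred = 1.0 / (pred_depth + eps)   # predicted disparity map d_i^p
    d_gt = 1.0 / (gt_depth + eps)       # ground-truth disparity map d_i^gt
    return float(np.mean(np.abs(d_pred - d_gt)))
```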
  • The disparity smoothness loss term of the loss function 52 is calculated by a process including first approximating input image edges Δx, Δy. The image edges can be detected in a variety of ways. One method converts the training image data 54, which is color image data, into grayscale intensity maps. An absolute intensity difference is determined along the x and y axes to determine intensity delta maps Δx, Δy. The absolute difference in disparity values is calculated for neighboring pixels to provide Dx, Dy. The disparity delta maps and the intensity delta maps are combined using the expression:

  • $D_x e^{-\alpha_s \Delta_x} + D_y e^{-\alpha_s \Delta_y}$  (equation 3)
  • In equation 3, $\alpha_s$ is a smoothness control factor and $e^{-\alpha_s \Delta_x}$ and $e^{-\alpha_s \Delta_y}$ are smoothness control functions. Equation 3 provides a measure of smoothness of the depth data 20 output by the neural network 18, weighted so that it carries less weight at detected image edges. That is, the smoothness control functions are variable so that at image edges (where $\Delta_x$, $\Delta_y$ are relatively high), the disparity smoothness is multiplied by a smaller smoothness control value than at image areas away from image edges. Image edges are defined in equation 3 as those areas with relatively large steps in image intensity. In this way, the depth smoothness constraint is applied more strictly within non-image-edge regions, whereas local depth variations are more tolerated at image edges, where depth changes would be expected. In another embodiment, the image edges could be determined in an alternative way. For example, machine learning techniques have proven to accurately locate image edges, and a map of non-image edges could be generated based thereon so that the depth smoothness constraint is applied only (or with greater weight) in non-image-edge regions.
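  • The edge-aware smoothness term of equation 3 could be sketched as follows; the grayscale conversion weights and the value of the smoothness control factor are assumptions for illustration only.

```python
import numpy as np

def disparity_smoothness_loss(disparity, rgb_image, alpha_s=10.0):
    """Edge-aware disparity smoothness (equation 3): differences between
    neighbouring disparity values are down-weighted where the grayscale
    intensity changes sharply, i.e. at approximate image edges."""
    # grayscale intensity map (ITU-R BT.601 weights; an assumed choice)
    gray = (0.299 * rgb_image[..., 0] + 0.587 * rgb_image[..., 1]
            + 0.114 * rgb_image[..., 2])
    # absolute disparity differences of neighbouring pixels (D_x, D_y)
    D_x = np.abs(disparity[:, 1:] - disparity[:, :-1])
    D_y = np.abs(disparity[1:, :] - disparity[:-1, :])
    # absolute intensity differences approximating image edges (Delta_x, Delta_y)
    delta_x = np.abs(gray[:, 1:] - gray[:, :-1])
    delta_y = np.abs(gray[1:, :] - gray[:-1, :])
    # smoothness control functions reduce the penalty at image edges
    return float(np.mean(D_x * np.exp(-alpha_s * delta_x))
                 + np.mean(D_y * np.exp(-alpha_s * delta_y)))
```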
  • The semantic segmentation loss term of the loss function 52 is calculated using a categorical cross entropy function. Given a predicted class $\hat{y}_i$ included in the semantic segmentation data 22 output from the neural network 18 and the ground truth class $y_i$ included in the ground truth data 58, the semantic segmentation loss term is defined as:
  • $-\sum_{i=1}^{N} y_i \log \hat{y}_i$  (equation 4)
  • Equation 4 is a measure of correctness of the semantic segmentation data 22 output by the neural network 18 with respect to the ground truth classification labels. Although a cross-entropy calculation is proposed, other methods are possible for quantifying the similarity between the classification prediction from the neural network 18 and the ground truth.
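  • A direct NumPy rendering of equation 4, assuming the network outputs per-pixel class probabilities and the ground truth labels are one-hot encoded; the small epsilon for numerical stability is an added assumption.

```python
import numpy as np

def semantic_segmentation_loss(pred_probs, gt_onehot, eps=1e-9):
    """Categorical cross entropy (equation 4), averaged over pixels.

    pred_probs: (..., num_classes) predicted class probabilities.
    gt_onehot:  (..., num_classes) one-hot ground-truth class labels.
    """
    return float(-np.mean(np.sum(gt_onehot * np.log(pred_probs + eps), axis=-1)))
```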
  • The panoramic loss term of the loss function 52 is computed by first creating a panoramic image from the training image data 54 in a common global coordinate frame using an image projection methodology. Overlapping image patches from the panoramic image are then extracted and an image dissimilarity measure between overlapping image patches is taken to quantify the panoramic loss term. In one exemplary embodiment, and with reference to FIG. 3A, each pixel 60 in an image 65 from one of the cameras is unprojected into a three dimensional cartesian coordinate in a local coordinate frame of the camera using camera intrinsic parameters included in the intrinsic and extrinsic camera data 56 and the depth data 20. That is, for every u/v coordinate of a pixel 60 in the image 65, there is a corresponding coordinate in XYZ cartesian coordinates based on the camera intrinsic parameters, providing a direction vector. The direction vector is multiplied by depth included in the depth data 20 predicted by the neural network 18 to provide a full local 3D coordinate in a local cartesian coordinate frame of the camera. The 3D local coordinates derived in the previous step are transformed to global 3D coordinates by rotating and translating them using the extrinsic parameters of the camera according to the intrinsic and extrinsic camera data 56. A 3D point cloud is thus produced that is a combination of the surround images in a global, common coordinate frame. An equirectangular panorama is derived from the global 3D coordinates of each image in the surround training image data 54, by computing:
  • $\varphi = \operatorname{asin}\!\left(\frac{z}{d}\right), \quad \theta = \operatorname{atan}\!\left(\frac{x}{y}\right)$  (equation 5)
  • In equation 5, the distance d and the angles φ, θ can be understood with reference to FIG. 3B, which also shows one point 63 in the panorama corresponding to the pixel 60 in the image of FIG. 3A. The x, y and z coordinates are the global 3D coordinates determined in the prior step. The equirectangular coordinates are normalized to panorama dimensions to provide a combined panoramic image in which the image data from each camera partly overlaps spatially with image data from adjacent cameras. The overlapping image data in the constructed panoramic projection is extracted as overlapping image patches and a similarity measure is performed to quantify a degree of image similarity.
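  • To make the projection pipeline concrete, the NumPy sketch below unprojects one camera's pixels to 3D with the predicted depth, transforms them to the global frame using the camera extrinsics, and maps them to equirectangular panorama coordinates per equation 5. A pinhole intrinsic matrix and the normalization convention are assumptions made for brevity; the disclosed cameras are wide-angle, so a fisheye camera model would be substituted in practice.

```python
import numpy as np

def pixels_to_panorama(depth, K, R, t, pano_height, pano_width):
    """Map every pixel of one camera into equirectangular panorama coordinates.

    depth : HxW depth map predicted by the network for this camera
    K     : 3x3 intrinsic matrix (pinhole approximation, an assumption)
    R, t  : camera-to-global rotation (3x3) and translation (3,) extrinsics
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u.ravel(), v.ravel(), np.ones(H * W)])   # homogeneous pixel coords
    rays = np.linalg.inv(K) @ pix                            # direction vectors
    pts_local = rays * depth.ravel()                         # scale by predicted depth
    pts_global = R @ pts_local + t[:, None]                  # rotate/translate to global frame
    x, y, z = pts_global
    d = np.linalg.norm(pts_global, axis=0) + 1e-9
    phi = np.arcsin(z / d)                                   # equation 5
    theta = np.arctan2(x, y)                                 # equation 5
    # normalize angles to panorama pixel indices
    row = ((np.pi / 2 - phi) / np.pi * (pano_height - 1)).astype(int)
    col = ((theta + np.pi) / (2 * np.pi) * (pano_width - 1)).astype(int)
    return row.reshape(H, W), col.reshape(H, W)
```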
  • In one embodiment, the training image data 54 includes training image data from (or simulated as though from) pairs of adjacent cameras mounted to a vehicle 12 as shown in FIG. 1, such as a front and left camera pair, a front and right camera pair, a right and rear camera pair, and a left and rear camera pair. For the training image data 54 from each pair of adjacent cameras, a mask of overlapping regions in the panorama is created based on intersecting regions with valid data in the two cameras, and a Structural Similarity Index Measure (SSIM), for example, is computed to quantify the similarity of overlapping image patches in the constructed panorama. The SSIM function can be defined as:
  • $\mathrm{SSIM}(p_1, p_2) = \dfrac{(2\,\mu_{p_1}\mu_{p_2} + c_1)(2\,\sigma_{p_1 p_2} + c_2)}{(\mu_{p_1}^2 + \mu_{p_2}^2 + c_1)(\sigma_{p_1}^2 + \sigma_{p_2}^2 + c_2)}$  (equation 6)
  • Equation 6 measures the similarity between two overlapping image patches $p_1$ and $p_2$ extracted from the constructed panorama, where $\mu_{p_1}$ and $\mu_{p_2}$ are the patch means, $\sigma_{p_1}^2$ and $\sigma_{p_2}^2$ the patch variances, $\sigma_{p_1 p_2}$ the covariance between the patches, and $c_1$, $c_2$ are constants.
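  • For illustration, a single-window (global) SSIM over two overlapping patches can be computed as below; the constants follow the values commonly used for images scaled to [0, 1], which the patent does not specify. One plausible choice for the panoramic loss term is then the average of 1 − SSIM over all adjacent-camera patch pairs, so that dissimilarity is penalized.

```python
import numpy as np

def ssim_patch(patch1, patch2, c1=0.01 ** 2, c2=0.03 ** 2):
    """Structural Similarity Index between two overlapping panorama patches
    (equation 6), computed globally rather than with a sliding window."""
    mu1, mu2 = patch1.mean(), patch2.mean()
    var1, var2 = patch1.var(), patch2.var()
    cov = ((patch1 - mu1) * (patch2 - mu2)).mean()
    return ((2 * mu1 * mu2 + c1) * (2 * cov + c2)) / (
        (mu1 ** 2 + mu2 ** 2 + c1) * (var1 + var2 + c2))

def panoramic_loss(patch_pairs):
    """Average dissimilarity (1 - SSIM) over overlapping patch pairs; one
    plausible realization of the panoramic loss term."""
    return float(np.mean([1.0 - ssim_patch(p1, p2) for p1, p2 in patch_pairs]))
```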
  • Referring now to FIGS. 4 and 5, and with continued reference to FIGS. 1 and 2, the flowcharts illustrate methods 100, 200 that can be performed by the depth estimation system 30 at the inference stage of FIG. 1 and the depth estimation system 50 at the training stage of FIG. 2, in accordance with the present disclosure. As can be appreciated in light of the disclosure, the order of operation within the methods is not limited to the sequential execution as illustrated in FIGS. 4 and 5, but may be performed in one or more varying orders as applicable and in accordance with the present disclosure. In various embodiments, the method 200 of FIG. 5 can be scheduled to run based on one or more predetermined events, and/or can run continuously during operation of the vehicle 12.
  • The exemplary method 100 of FIG. 4 is a method of training the neural network 18. The method 100 may be implemented by the processor 62 executing the programming instructions 66 as exemplified in FIG. 2. Method 100 includes a step 110 of receiving the training image data 54. The training image data 54 represents image data received from a plurality of cameras mounted to a vehicle that provide a surround view of the vehicle with adjacent cameras partially overlapping in fields of view. The training image data 54 can be simulated or obtained from an operational vehicular system. Further, the ground truth data 58 is received, which includes ground truth depth data and ground truth semantic segmentation data. The ground truth semantic segmentation data can be simulated, obtained from human labelling, or obtained from artificially generated labelling. The ground truth depth data can be simulated or derived from depth sensors associated with the operational vehicular system that are registered with the fields of view of the plurality of cameras.
  • In step 130, a panorama image is generated from each frame of the training image data 54. The panorama image can be generated in a number of ways. In one embodiment, as described above, the panorama image is generated by transforming the image from each camera into a three dimensional point cloud in a local coordinate frame of the camera using the depth data 20 output from the neural network 18. The three dimensional image or point cloud in the local coordinate frame is transformed to three dimensional coordinates in a global coordinate frame using extrinsic parameters for the camera included in the intrinsic and extrinsic camera data 56. The 3D point cloud in the global coordinate frame is projected into a combined panorama image having overlapping image patches. In step 140, overlapping image patches are extracted from the panorama.
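  • Steps 130 and 140 might be organized as in the sketch below, which renders each camera separately into the common equirectangular frame and keeps a per-camera validity mask; pixels where two masks intersect form the overlapping image patches. The pixels_to_panorama helper is the hypothetical one sketched earlier, and the nearest-pixel splatting without blending is an assumed simplification.

```python
import numpy as np

def render_per_camera_panoramas(images, depths, intrinsics, extrinsics, pano_shape):
    """Render each surround camera into the common equirectangular frame and
    record which panorama pixels received valid data from that camera."""
    H, W = pano_shape
    panoramas, valid_masks = [], []
    for img, depth, K, (R, t) in zip(images, depths, intrinsics, extrinsics):
        rows, cols = pixels_to_panorama(depth, K, R, t, H, W)
        pano = np.zeros((H, W, 3))
        mask = np.zeros((H, W), dtype=bool)
        pano[rows, cols] = img        # nearest-pixel splat, no blending
        mask[rows, cols] = True
        panoramas.append(pano)
        valid_masks.append(mask)
    return panoramas, valid_masks

# Overlapping patches of an adjacent camera pair (a, b) are the pixels where
# both validity masks are true: overlap = valid_masks[a] & valid_masks[b]
```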
  • In step 150, the loss function 52 is computed. The loss function 52 includes the plurality of loss terms as described above. In one embodiment, the loss terms include the panoramic loss term in which image similarity of the overlapping image patches from step 140 is quantified. The loss terms include the semantic segmentation loss term that quantifies a similarity between the semantic segmentation data 22 output by the neural network and the ground truth semantic segmentation data. The loss terms may additionally include the smoothness loss term that quantifies smoothness of the depth data 20 in a way that is variable so as to carry greater weight in regions that do not correspond to edges within the image. The loss terms may additionally include the disparity loss term that quantifies a similarity of the depth data 20 output from the neural network 18 and the ground truth depth data. The various loss terms are combined in a weighted sum in the loss function 52.
  • In step 160, the neural network 18 is adapted to optimize an evaluation of the loss function 52, thereby training the neural network 18. The optimization algorithm may be an iterative process.
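  • Step 160's iterative optimization could be driven by an ordinary gradient-based training loop, as in the hedged PyTorch sketch below; model, optimizer and loss_fn are stand-ins, with loss_fn assumed to assemble the combined loss of equation 1 from the individual terms sketched above.

```python
import torch

def training_step(model, optimizer, images, ground_truth, loss_fn):
    """One illustrative optimization iteration (step 160)."""
    optimizer.zero_grad()
    depth, semseg_logits = model(images)                          # forward pass
    loss = loss_fn(depth, semseg_logits, images, ground_truth)    # equation 1
    loss.backward()                                               # back-propagate gradients
    optimizer.step()                                              # adjust network weights
    return loss.item()
```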
  • Referring to FIG. 5, a method 200 of controlling the vehicle 12 is described. In step 210, frames of respective image data 16 are received from the plurality of cameras 14 a to 14 d that are arranged to provide a surround view of the vehicle 12. In step 220, the image data 16, which may be a concatenation of the respective image data received from each camera 14 a to 14 d, is passed through the neural network 18, which has been trained according to the exemplary method 100 of FIG. 4. The neural network 18 outputs depth data 20 and semantic segmentation data 22. In step 230, an application of the vehicle 12 is controlled based on the outputs from the neural network 18. The depth data 20 and the semantic segmentation data 22 are useful in obstacle identification, localization and tracking applications, and in automated control of the motion of the vehicle 12 to avoid any obstacles. Depth data 20 can also be used in stitching or splicing images from respective cameras 14 a to 14 d, or in providing surround views or changes of perspective of images from the cameras 14 a to 14 d. The resulting image may be displayed on an internal display screen of the vehicle 12 in a user interface control application.
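  • At the inference stage of method 200, the trained network is simply evaluated on the current surround frames. The sketch below stacks the per-camera frames along the batch dimension, which is one possible reading of the concatenation mentioned above; it is an assumption rather than the disclosed implementation.

```python
import torch

def infer_surround(model, camera_frames):
    """Run the trained network on one set of surround-camera frames
    (steps 210-220) and return depth and semantic segmentation outputs."""
    batch = torch.stack(camera_frames)   # one CxHxW tensor per camera
    with torch.no_grad():                # no gradients needed at inference
        depth, semseg = model(batch)
    return depth, semseg
```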
  • It will be appreciated that the disclosed methods, systems, and vehicles may vary from those depicted in the Figures and described herein. For example, the vehicle 12, the depth estimation system 30 at inference, the depth estimation system 50 at training and/or various components thereof may vary from that depicted in FIGS. 1 and 2 and described in connection therewith. In addition, it will be appreciated that certain steps of the methods may vary from those depicted in FIGS. 4 and 5. It will similarly be appreciated that certain steps of the method described above may occur simultaneously or in a different order than that depicted in FIGS. 4 and 5.
  • While at least one exemplary embodiment has been presented in the foregoing detailed description, it should be appreciated that a vast number of variations exist. It should also be appreciated that the exemplary embodiment or exemplary embodiments are only examples, and are not intended to limit the scope, applicability, or configuration of the disclosure in any way. Rather, the foregoing detailed description will provide those skilled in the art with a convenient road map for implementing the exemplary embodiment or exemplary embodiments. It should be understood that various changes can be made in the function and arrangement of elements without departing from the scope of the appended claims and the legal equivalents thereof.

Claims (20)

What is claimed is:
1. A method of controlling a vehicle, the method comprising:
receiving, via at least one processor, respective image data from at least two cameras, wherein fields of view of adjacent cameras of the at least two cameras partially overlap and wherein the at least two cameras are mounted to the vehicle;
processing the respective image data through a neural network, wherein the neural network is trained to provide depth data and semantic segmentation data as outputs and wherein the neural network is trained using a loss function that combines a plurality of loss terms including at least a semantic segmentation loss term and a panoramic loss term, wherein the panoramic loss term includes a similarity measure regarding overlapping patches of the respective image data that each correspond to a region of overlapping fields of view of the adjacent cameras;
controlling, via the at least one processor, a function of the vehicle based on the depth data.
2. The method of claim 1, wherein at least four cameras are mounted to the vehicle providing a surround view of the vehicle.
3. The method of claim 1, wherein the plurality of loss terms includes a disparity loss term, wherein disparity is inverse of depth.
4. The method of claim 1, wherein the plurality of loss terms includes a disparity smoothness loss term, wherein disparity is inverse of depth.
5. The method of claim 4, wherein the disparity smoothness loss term is variable so as to be lower at image edges.
6. The method of claim 1, comprising controlling, via the at least one processor, the function of the vehicle based on the depth data and the semantic segmentation data.
7. The method of claim 1, comprising performing obstacle detection and avoidance processing based on the depth data and the semantic segmentation data and controlling at least one of steering, braking and propulsion of the vehicle based on the output of the obstacle detection and avoidance processing.
8. A method of training a neural network and using the trained neural network to control a vehicle, the method comprising:
receiving, via at least one processor, respective training image data representing image data received from at least two cameras, wherein fields of view of adjacent cameras of the at least two cameras partially overlap;
processing the respective training image data through a neural network providing depth data and semantic segmentation data as outputs;
receiving, via the at least one processor, ground truth data including ground truth semantic segmentation data;
training, via the at least one processor, the neural network based on a loss function, wherein the loss function combines a plurality of loss terms including at least a semantic segmentation loss term and a panoramic loss term;
wherein the panoramic loss term includes a similarity measure regarding overlapping image patches of the respective training image data that each correspond to a region of overlapping fields of view of the adjacent cameras; and
wherein the semantic segmentation loss term quantifies a difference between the ground truth semantic segmentation data and the semantic segmentation data output from the neural network;
using the trained neural network to process image data received from at least two vehicle cameras mounted to the vehicle, wherein fields of view of adjacent vehicle cameras of the at least two vehicle cameras partially overlap, thereby providing live depth data and live semantic segmentation data as outputs;
controlling, via the at least one processor, a function of the vehicle based on the live depth data.
9. The method of claim 8, wherein at least four vehicle cameras are mounted to the vehicle providing a surround view of the vehicle.
10. The method of claim 8, wherein the plurality of loss terms includes a disparity loss term, wherein disparity is inverse of depth, wherein the ground truth data includes ground truth disparity data, and wherein the disparity loss term quantifies a difference between the ground truth disparity data and disparity data derived from the depth data output from the neural network.
11. The method of claim 8, wherein the plurality of loss terms includes a disparity smoothness loss term, wherein disparity is inverse of depth, and wherein the disparity smoothness term quantifies a smoothness of disparity data that is derived from the depth data output from the neural network.
12. The method of claim 11, wherein the disparity smoothness loss term is variable so as to have lower values at image edges.
13. The method of claim 11, wherein the disparity smoothness loss term is calculated by steps including:
finding image edges; and
applying a smoothness control function that varies based on the found image edges.
14. The method of claim 13, wherein the image edges are determined based on evaluating local image intensity changes.
15. The method of claim 8, wherein the panoramic loss term is calculated by steps including:
projecting the respective image data into a combined equirectangular panoramic image;
applying a mask to the combined equirectangular panoramic image so as to extract the overlapping image patches; and
performing the similarity measure on the overlapping image patches to quantify the similarity of the overlapping image patches.
16. The method of claim 8, comprising controlling, via the at least one processor, the function of the vehicle based on the depth data and the semantic segmentation data.
17. A vehicle, the vehicle comprising:
at least two cameras, wherein fields of view of adjacent cameras of the at least two cameras partially overlap and wherein the at least two cameras are mounted to the vehicle;
at least one processor in operable communication with the at least two cameras, the at least one processor configured to execute program instructions, wherein the program instructions are configured to cause the at least one processor to:
receive respective image data from the at least two cameras;
process the respective image data through a neural network, wherein the neural network is trained to provide depth data and semantic segmentation data as outputs and wherein the neural network is trained using a loss function that combines a plurality of loss terms including at least a semantic segmentation loss term and a panoramic loss term, wherein the panoramic loss term includes a similarity measure regarding overlapping patches of the respective image data that each correspond to a region of overlapping fields of view of the adjacent cameras; and
control a function of the vehicle based on the depth data.
18. The vehicle of claim 17, wherein at least four cameras are mounted to the vehicle providing a surround view of the vehicle.
19. The vehicle of claim 17, wherein the plurality of loss terms includes a disparity loss term, wherein disparity is inverse of depth.
20. The vehicle of claim 17, wherein the plurality of loss terms includes a disparity smoothness loss term, wherein disparity is inverse of depth, and wherein the disparity smoothness loss term is variable so as to have lower values at image edges.
US17/198,954 2021-03-11 2021-03-11 Systems and methods for depth estimation in a vehicle Pending US20220292289A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US17/198,954 US20220292289A1 (en) 2021-03-11 2021-03-11 Systems and methods for depth estimation in a vehicle
DE102021129544.0A DE102021129544A1 (en) 2021-03-11 2021-11-12 SYSTEMS AND METHODS FOR DEPTH ESTIMATION IN A VEHICLE
CN202111528067.9A CN115082874A (en) 2021-03-11 2021-12-14 System and method for depth estimation in a vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US17/198,954 US20220292289A1 (en) 2021-03-11 2021-03-11 Systems and methods for depth estimation in a vehicle

Publications (1)

Publication Number Publication Date
US20220292289A1 true US20220292289A1 (en) 2022-09-15

Family

ID=83005146

Family Applications (1)

Application Number Title Priority Date Filing Date
US17/198,954 Pending US20220292289A1 (en) 2021-03-11 2021-03-11 Systems and methods for depth estimation in a vehicle

Country Status (3)

Country Link
US (1) US20220292289A1 (en)
CN (1) CN115082874A (en)
DE (1) DE102021129544A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220114298A1 (en) * 2020-10-13 2022-04-14 Flyreel, Inc. Generating measurements of physical structures and environments through automated analysis of sensor data
US20220206510A1 (en) * 2020-12-28 2022-06-30 Bear Robotics, Inc. Method, system, and non-transitory computer-readable recording medium for generating a map for a robot

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200160533A1 (en) * 2018-11-15 2020-05-21 Samsung Electronics Co., Ltd. Foreground-background-aware atrous multiscale network for disparity estimation
US20210118184A1 (en) * 2019-10-17 2021-04-22 Toyota Research Institute, Inc. Systems and methods for self-supervised scale-aware training of a model for monocular depth estimation
US20220237866A1 (en) * 2019-05-30 2022-07-28 MobileyeVisionTechnologies Ltd. Vehicle environment modeling with cameras

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200160533A1 (en) * 2018-11-15 2020-05-21 Samsung Electronics Co., Ltd. Foreground-background-aware atrous multiscale network for disparity estimation
US20220237866A1 (en) * 2019-05-30 2022-07-28 MobileyeVisionTechnologies Ltd. Vehicle environment modeling with cameras
US20210118184A1 (en) * 2019-10-17 2021-04-22 Toyota Research Institute, Inc. Systems and methods for self-supervised scale-aware training of a model for monocular depth estimation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Godard et al. "Unsupervised Monocular Depth Estimation with Left-Right Consistency", arXiv:1609.03677v3 [cs.CV] 12 Apr 2017 (Year: 2017) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220114298A1 (en) * 2020-10-13 2022-04-14 Flyreel, Inc. Generating measurements of physical structures and environments through automated analysis of sensor data
US11699001B2 (en) * 2020-10-13 2023-07-11 Flyreel, Inc. Generating measurements of physical structures and environments through automated analysis of sensor data
US20230259667A1 (en) * 2020-10-13 2023-08-17 Flyreel, Inc. Generating measurements of physical structures and environments through automated analysis of sensor data
US11960799B2 (en) * 2020-10-13 2024-04-16 Flyreel, Inc. Generating measurements of physical structures and environments through automated analysis of sensor data
US20220206510A1 (en) * 2020-12-28 2022-06-30 Bear Robotics, Inc. Method, system, and non-transitory computer-readable recording medium for generating a map for a robot
US11885638B2 (en) * 2020-12-28 2024-01-30 Bear Robotics, Inc. Method, system, and non-transitory computer-readable recording medium for generating a map for a robot

Also Published As

Publication number Publication date
DE102021129544A1 (en) 2022-09-15
CN115082874A (en) 2022-09-20

Similar Documents

Publication Publication Date Title
CN111507460B (en) Method and apparatus for detecting parking space in order to provide automatic parking system
US11482014B2 (en) 3D auto-labeling with structural and physical constraints
US9286524B1 (en) Multi-task deep convolutional neural networks for efficient and robust traffic lane detection
US10861176B2 (en) Systems and methods for enhanced distance estimation by a mono-camera using radar and motion data
US11436743B2 (en) Systems and methods for semi-supervised depth estimation according to an arbitrary camera
JP7239703B2 (en) Object classification using extraterritorial context
Yao et al. Estimating drivable collision-free space from monocular video
EP3822852B1 (en) Method, apparatus, computer storage medium and program for training a trajectory planning model
US11398095B2 (en) Monocular depth supervision from 3D bounding boxes
CN112912920A (en) Point cloud data conversion method and system for 2D convolutional neural network
US11107228B1 (en) Realistic image perspective transformation using neural networks
Prophet et al. Semantic segmentation on automotive radar maps
US20220156483A1 (en) Efficient three-dimensional object detection from point clouds
US20220292289A1 (en) Systems and methods for depth estimation in a vehicle
KR20190131207A (en) Robust camera and lidar sensor fusion method and system
US20230213643A1 (en) Camera-radar sensor fusion using local attention mechanism
US20230342960A1 (en) Depth estimation based on ego-motion estimation and residual flow estimation
US11321859B2 (en) Pixel-wise residual pose estimation for monocular depth estimation
Alkhorshid et al. Road detection through supervised classification
Danapal et al. Sensor fusion of camera and LiDAR raw data for vehicle detection
KR102270827B1 (en) Generating Joint Cameraand LiDAR Features Using Cross-View Spatial Feature Mapping for 3D Object Detection
CN116129234A (en) Attention-based 4D millimeter wave radar and vision fusion method
WO2023149990A1 (en) Depth map completion in visual content using semantic and three-dimensional information
US11380110B1 (en) Three dimensional traffic sign detection
WO2023277722A1 (en) Multimodal method and apparatus for segmentation and depht estimation

Legal Events

Date Code Title Description
AS Assignment

Owner name: GM GLOBAL TECHNOLOGY OPERATIONS, MICHIGAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SHALUMOV, ALBERT;SLUTSKY, MICHAEL;SIGNING DATES FROM 20210225 TO 20210228;REEL/FRAME:055566/0792

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS