US10726304B2 - Refining synthetic data with a generative adversarial network using auxiliary inputs - Google Patents

Refining synthetic data with a generative adversarial network using auxiliary inputs

Info

Publication number
US10726304B2
US10726304B2
Authority
US
United States
Prior art keywords
data
image
synthetic
synthetic image
accessing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US15/699,653
Other versions
US20190080206A1 (en)
Inventor
Guy Hotson
Gintaras Vincent Puskorius
Vidya Nariyambut Murali
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ford Global Technologies LLC
Original Assignee
Ford Global Technologies LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ford Global Technologies LLC
Priority to US15/699,653
Assigned to FORD GLOBAL TECHNOLOGIES, LLC. Assignors: PUSKORIUS, GINTARAS VINCENT; HOTSON, GUY; NARIYAMBUT MURALI, VIDYA
Priority to DE102018121808.7A
Priority to CN201811035739.0A
Publication of US20190080206A1
Application granted
Publication of US10726304B2
Legal status: Active
Expiration: Adjusted

Classifications

    • G06K9/6264
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06T11/60 Editing figures and text; Combining figures or text
    • G06F18/2185 Validation; Performance evaluation; Active pattern learning techniques based on feedback of a supervisor, the supervisor being an automated module, e.g. intelligent oracle
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06K9/00791
    • G06K9/00798
    • G06T7/11 Region-based segmentation
    • G06T7/13 Edge detection
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G06T2207/10016 Video; Image sequence
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/20081 Training; Learning
    • G06T2207/30256 Lane; Road marking

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention extends to methods, systems, and computer program products for refining synthetic data with a Generative Adversarial Network (GAN) using auxiliary inputs. Refined synthetic data can be rendered more realistically than the original synthetic data. Refined synthetic data also retains annotation metadata and labeling metadata used for training of machine learning models. GANs can be extended to use auxiliary channels as inputs to a refiner network to provide hints about increasing the realism of synthetic data. Refinement of synthetic data enhances the use of synthetic data for additional applications.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS
Not applicable.
BACKGROUND
1. Field of the Invention
This invention relates generally to the field of formulating realistic training data for training machine learning models, and, more particularly, to refining synthetic data with a generative adversarial network using auxiliary inputs.
2. Related Art
The process of annotating and labeling relevant portions of image training data (e.g., still images or video) for training machine learning models can be tedious, time-consuming, and expensive. To reduce these annotating and labeling burdens, synthetic data (e.g., virtual images generated by gaming or other graphical engines) can be used. Annotating synthetic data is more straightforward as annotation is a direct by-product of generating the synthetic data.
BRIEF DESCRIPTION OF THE DRAWINGS
The specific features, aspects and advantages of the present invention will become better understood with regard to the following description and accompanying drawings where:
FIG. 1 illustrates an example block diagram of a computing device.
FIG. 2 illustrates an example generative adversarial network that facilitates refining synthetic data using auxiliary inputs.
FIG. 3 illustrates a flow chart of an example method for refining synthetic data with a generative adversarial network using auxiliary inputs.
FIG. 4 illustrates an example data flow for refining synthetic data with a generative adversarial network using auxiliary inputs.
DETAILED DESCRIPTION
The present invention extends to methods, systems, and computer program products for refining synthetic data with a Generative Adversarial Network using auxiliary inputs.
Aspects of the invention include using Generative Adversarial Networks (“GANs”) to refine synthetic data. Refined synthetic data can be rendered more realistically than the original synthetic data. Refined synthetic data also retains annotation metadata and labeling metadata used for training of machine learning models. GANs can be extended to use auxiliary channels as inputs to a refiner network to provide hints about increasing the realism of synthetic data. Refinement of synthetic data enhances the use of synthetic data for additional applications.
In one aspect, a GAN is used to refine a synthetic (or virtual) image, for example, an image generated by a gaming engine, into a more realistic refined synthetic (or virtual) image. The more realistic refined synthetic image retains annotation metadata and labeling metadata of the synthetic image used for training of machine learning models. Auxiliary inputs are provided to a refiner network as hints about how the more realistic refined synthetic image is to look. Auxiliary inputs can facilitate applying correct textures to different regions of a synthetic image. Auxiliary inputs can include semantic maps (e.g., facilitating image segmentation), depth maps, edges between objects, etc. Refinement of synthetic images enhances the use of synthetic images for solving problems in computer vision, including applications related to autonomous driving, such as, image segmentation, identifying drivable paths, object tracking, and object three-dimensional (3D) pose estimation.
Semantic maps, depth maps, and object edges ensure that correct textures are applied to different regions of synthetic images. For example, a semantic map can segment a synthetic image into multiple regions and identify the content of each region, such as, foliage from a tree or the side of a green building. A depth map can differentiate how each image region in a synthetic image is to appear, such as, varying levels of detail/texture based on the distance of the object from the camera. Object edges can define transitions between different objects in a synthetic image.
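For illustration only, the sketch below shows one way such auxiliary maps could be derived from a synthetic render's per-pixel label image and depth buffer; NumPy, the function names, and the simple boundary rule for edges are assumptions, not part of the disclosure.

```python
# Illustrative sketch (NumPy assumed): derive auxiliary maps from a synthetic
# render's per-pixel semantic labels and depth buffer. Names are hypothetical.
import numpy as np

def one_hot_semantic_map(labels: np.ndarray, num_classes: int) -> np.ndarray:
    """Encode an (H, W) integer label image as an (H, W, C) one-hot semantic map."""
    return np.eye(num_classes, dtype=np.float32)[labels]

def normalized_depth_map(depth: np.ndarray, max_depth: float = 100.0) -> np.ndarray:
    """Clip and scale an (H, W) depth buffer (e.g., meters) into [0, 1]."""
    return np.clip(depth, 0.0, max_depth).astype(np.float32) / max_depth

def object_edge_map(labels: np.ndarray) -> np.ndarray:
    """Mark pixels where the semantic label changes, i.e., transitions between objects."""
    edges = np.zeros(labels.shape, dtype=np.float32)
    edges[:, 1:] = np.maximum(edges[:, 1:], (labels[:, 1:] != labels[:, :-1]).astype(np.float32))
    edges[1:, :] = np.maximum(edges[1:, :], (labels[1:, :] != labels[:-1, :]).astype(np.float32))
    return edges
```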
Accordingly, aspects of the invention include an image processing system that refines synthetic (or virtual) images to improve the appearance of the synthetic (or virtual) images and provide higher quality (e.g., more realistic) synthetic (or virtual) images for training machine learning models. When training of a machine learning model is complete, the model can be used with autonomous vehicles and driver-assisted vehicles to accurately process and identify objects within images captured by vehicle cameras and sensors.
A Generative Adversarial Network (GAN) can use machine learning to train two networks, a discriminator network and a generator network, that essentially play a game against (i.e., are adversarial to) one another. The discriminator network is trained to differentiate between real data instances (e.g., real images) and synthetic data instances (e.g., virtual images) and classify data instances as either real or synthetic. The generator network is trained to produce synthetic data instances the discriminator network classifies as real data instances. A strategic equilibrium is reached when the discriminator network is incapable of assessing whether any data instance is synthetic or real. It may be that the generator network never directly observes real data instances. Instead, the generator network receives information about real data instances indirectly as seen through the parameters of the discriminator network.
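The adversarial objective can be summarized by a minimal training step, sketched below under the assumption of a PyTorch implementation; the disclosure does not prescribe a framework, and the loss and optimizers are illustrative.

```python
# Minimal adversarial training step (PyTorch assumed; loss, optimizers, and
# shapes are illustrative). The generator never sees real images directly; it
# only receives information about them through the discriminator's parameters.
import torch
import torch.nn.functional as F

def gan_step(generator, discriminator, g_opt, d_opt, real_images, noise):
    # Discriminator update: push real images toward "real" (1), generated toward "synthetic" (0).
    fake_images = generator(noise).detach()
    real_scores = discriminator(real_images)
    fake_scores = discriminator(fake_images)
    d_loss = (F.binary_cross_entropy_with_logits(real_scores, torch.ones_like(real_scores)) +
              F.binary_cross_entropy_with_logits(fake_scores, torch.zeros_like(fake_scores)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator update: produce images the discriminator classifies as real.
    gen_scores = discriminator(generator(noise))
    g_loss = F.binary_cross_entropy_with_logits(gen_scores, torch.ones_like(gen_scores))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```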
In one aspect, a discriminator network differentiates between real images and synthetic (or virtual) images and classifies images as either real or synthetic (or virtual). In this aspect, the generator network is trained to produce synthetic (or virtual) images. A GAN can be extended to include a refiner network (which may or may not replace the generator network). The refiner network observes a synthetic (or virtual) image and generates a variation of the synthetic (or virtual) image. The variation of the synthetic (or virtual) image is intended to exhibit characteristics having increased similarity to real images, while retaining annotation metadata and labeling metadata. The refiner network attempts to refine synthetic (or virtual) images so that the discriminator network classifies refined synthetic (or virtual) images as real images. The refiner network also attempts to maintain similarities (e.g., regularize characteristics) between an input synthetic (or virtual) image and a refined synthetic (or virtual) image.
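One way to express the refiner's two competing goals, fooling the discriminator while staying close to its input, is an adversarial term plus a self-regularization term, as in the sketch below; the L1 penalty and its weight are assumptions made for illustration.

```python
# Sketch of a refiner objective (PyTorch assumed): an adversarial realism term
# plus an L1 self-regularization term that keeps the refined image close to the
# synthetic input, so annotations made against that input remain valid.
import torch
import torch.nn.functional as F

def refiner_loss(refiner, discriminator, synthetic_images, reg_weight=0.1):
    refined = refiner(synthetic_images)
    scores = discriminator(refined)
    adversarial = F.binary_cross_entropy_with_logits(scores, torch.ones_like(scores))
    self_regularization = F.l1_loss(refined, synthetic_images)
    return adversarial + reg_weight * self_regularization
```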
The refiner network can be extended to receive additional information, which can be generated as part of a synthesis process. For example, the refiner network can receive one or more of: semantic maps (e.g., facilitating image segmentation), depth maps, edges between objects, etc. In one aspect, the refiner network receives an auxiliary image that encodes a pixel-level semantic segmentation of the synthetic (or virtual) image as input. In another aspect, the refiner network receives an auxiliary image that encodes a depth map of the contents of the synthetic (or virtual) image as input. In a further aspect, the refiner network can receive an auxiliary image that encodes edges between objects in the synthetic (or virtual) image.
For example, a synthetic (or virtual) image may include foliage from a tree. The semantic segmentation can indicate that the part of the synthetic (or virtual) image including the foliage is in fact foliage (and not for example, the side of a green building). A depth map can be used to differentiate how the foliage is to appear as a function of distance from the camera. The edges can be used to differentiate between different objects in the synthetic (or virtual) image.
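A straightforward way to supply these auxiliary images to the refiner is to concatenate them with the synthetic image along the channel dimension, as sketched below; the network depth, channel counts, and residual formulation are illustrative assumptions rather than the disclosed architecture.

```python
# Sketch (PyTorch assumed) of a refiner conditioned on auxiliary channels: the
# semantic, depth, and edge images are concatenated with the synthetic RGB
# image, and the network predicts a residual correction.
import torch
import torch.nn as nn

class AuxiliaryRefiner(nn.Module):
    def __init__(self, rgb_channels=3, aux_channels=5):  # e.g., 3 semantic + 1 depth + 1 edge
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(rgb_channels + aux_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, rgb_channels, kernel_size=3, padding=1),
        )

    def forward(self, synthetic_rgb, semantic_map, depth_map, edge_map):
        aux = torch.cat([semantic_map, depth_map, edge_map], dim=1)  # auxiliary "hint" channels
        x = torch.cat([synthetic_rgb, aux], dim=1)
        # Predict a residual so the refined image stays close to the synthetic input.
        return synthetic_rgb + self.net(x)
```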
Auxiliary data can be extracted from a dataset of real images used during training of the discriminator network. Extracting auxiliary data from a dataset of real images can include using sensors, such as, LIDAR, that are synchronized with a camera data stream. For auxiliary data representing semantic segmentation, segmentation can be performed either by hand or by a semantic segmentation model. The GAN can then be formulated as a conditional GAN, where the discriminator network is conditioned on the supplied auxiliary data.
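Conditioning the discriminator can be sketched by scoring the image jointly with its auxiliary channels; the architecture below is one illustrative way to implement such conditioning and is not taken from the disclosure.

```python
# Sketch (PyTorch assumed) of a discriminator conditioned on auxiliary data:
# the image under test is scored together with its auxiliary channels (e.g.,
# semantic segmentation and depth), forming a conditional GAN discriminator.
import torch
import torch.nn as nn

class ConditionalDiscriminator(nn.Module):
    def __init__(self, image_channels=3, aux_channels=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(image_channels + aux_channels, 64, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(128, 1),  # real/synthetic logit, conditioned on the auxiliary channels
        )

    def forward(self, image, aux):
        return self.net(torch.cat([image, aux], dim=1))
```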
Accordingly, a GAN can leverage auxiliary data streams such as semantic maps and depth maps to help ensure that correct textures are applied to different regions of a synthetic (or virtual) image. The GAN can generate refined synthetic (or virtual) images having increased realism while retaining annotations and/or labels for use in training additional models (e.g., computer vision, autonomous driving, etc.).
FIG. 1 illustrates an example block diagram of a computing device 100. Computing device 100 can be used to perform various procedures, such as those discussed herein. Computing device 100 can function as a server, a client, or any other computing entity. Computing device 100 can perform various communication and data transfer functions as described herein and can execute one or more application programs, such as the application programs described herein. Computing device 100 can be any of a wide variety of computing devices, such as a mobile telephone or other mobile device, a desktop computer, a notebook computer, a server computer, a handheld computer, tablet computer and the like.
Computing device 100 includes one or more processor(s) 102, one or more memory device(s) 104, one or more interface(s) 106, one or more mass storage device(s) 108, one or more Input/Output (I/O) device(s) 110, and a display device 130 all of which are coupled to a bus 112. Processor(s) 102 include one or more processors or controllers that execute instructions stored in memory device(s) 104 and/or mass storage device(s) 108. Processor(s) 102 may also include various types of computer storage media, such as cache memory.
Memory device(s) 104 include various computer storage media, such as volatile memory (e.g., random access memory (RAM) 114) and/or nonvolatile memory (e.g., read-only memory (ROM) 116). Memory device(s) 104 may also include rewritable ROM, such as Flash memory.
Mass storage device(s) 108 include various computer storage media, such as magnetic tapes, magnetic disks, optical disks, solid state memory (e.g., Flash memory), and so forth. As depicted in FIG. 1, a particular mass storage device is a hard disk drive 124. Various drives may also be included in mass storage device(s) 108 to enable reading from and/or writing to the various computer readable media. Mass storage device(s) 108 include removable media 126 and/or non-removable media.
I/O device(s) 110 include various devices that allow data and/or other information to be input to or retrieved from computing device 100. Example I/O device(s) 110 include cursor control devices, keyboards, keypads, barcode scanners, microphones, monitors or other display devices, speakers, printers, network interface cards, modems, cameras, lenses, radars, CCDs or other image capture devices, and the like.
Display device 130 includes any type of device capable of displaying information to one or more users of computing device 100. Examples of display device 130 include a monitor, display terminal, video projection device, and the like.
Interface(s) 106 include various interfaces that allow computing device 100 to interact with other systems, devices, or computing environments as well as humans. Example interface(s) 106 can include any number of different network interfaces 120, such as interfaces to personal area networks (PANs), local area networks (LANs), wide area networks (WANs), wireless networks (e.g., near field communication (NFC), Bluetooth, Wi-Fi, etc., networks), and the Internet. Other interfaces include user interface 118 and peripheral device interface 122.
Bus 112 allows processor(s) 102, memory device(s) 104, interface(s) 106, mass storage device(s) 108, and I/O device(s) 110 to communicate with one another, as well as other devices or components coupled to bus 112. Bus 112 represents one or more of several types of bus structures, such as a system bus, PCI bus, IEEE 1394 bus, USB bus, and so forth.
FIG. 2 illustrates an example generative adversarial network (GAN) 200 that facilitates refining synthetic data using auxiliary inputs. Generative adversarial network (GAN) 200 can be implemented using components of computing device 100.
As depicted, GAN 200 includes generator 201, refiner 202, and discriminator 203. Generator 201 can generate and output virtual images including synthetic image data and annotations. The synthetic image data can represent an image of a roadway scene. The annotations annotate the synthetic image data with ground truth data for the roadway scene. The annotations can be a by-product of generating the synthetic image data. In one aspect, generator 201 is a gaming engine.
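As an illustration of the generator's output, the sketch below bundles rendered pixels with the ground truth annotations that fall out of the render; the dataclass and the render_scene call are hypothetical and do not refer to any particular engine's API.

```python
# Hypothetical sketch of a generator output bundle: a graphics/gaming engine
# renders a roadway scene, and the annotations are a by-product of the render.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class VirtualImage:
    synthetic_image_data: np.ndarray                   # (H, W, 3) rendered pixel values
    annotations: dict = field(default_factory=dict)    # ground truth: lane labels, boxes, etc.

def generate_virtual_image(engine, scene_description) -> VirtualImage:
    pixels, ground_truth = engine.render_scene(scene_description)  # hypothetical engine call
    return VirtualImage(synthetic_image_data=pixels, annotations=ground_truth)
```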
However, synthetic image data can lack sufficient realism, especially for higher resolution images and/or for images containing more complex objects. A human observer can typically differentiate a real image from a virtual image generated by a gaming engine.
As such, refiner 202 can access virtual images and refine virtual images to improve realism. Refiner 202 can receive a virtual image from generator 201. Refiner 202 can access auxiliary data, such as, for example, image segmentation, a depth map, object edges, etc. Refiner 202 can refine (transform) a virtual image into a refined virtual image based on the auxiliary data. For example, refiner 202 can use the content of auxiliary data as hints to improve the realism of the virtual image without altering annotations. Refiner 202 can output refined virtual images.
Discriminator 203 can receive a refined virtual image from refiner 202. Discriminator 203 can classify a refined virtual image as “real” or “synthetic”. When an image is classified as “real”, discriminator 203 can make the refined virtual image available for use in training other neural networks. For example, discriminator 203 can make refined virtual images classified as “real” available for training computer vision neural networks, including those related to autonomous driving.
When an image is classified as “synthetic”, discriminator 203 can generate feedback parameters for further improving the realism of the refined virtual image. Discriminator 203 can send the feedback parameters to refiner 202 and/or to generator 201. Refiner 202 and/or generator 201 can use the feedback parameters to further improve the realism of previously refined virtual images (possibly with further reference to auxiliary data). Further refined virtual images can be sent to discriminator 203. A virtual image can be further refined (transformed) based on auxiliary data and/or feedback parameters until discriminator 203 classifies the virtual image as “real” (or until no further improvements to realism are possible, after performing a specified number of refinements, etc.).
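The refine/classify/feedback cycle described above can be organized as a loop, sketched below with assumed helper methods; refine and classify are hypothetical interfaces standing in for refiner 202 and discriminator 203.

```python
# Sketch of the refine/classify loop: refine, let the discriminator judge, and
# keep refining with its feedback until the image is classified "real" or a
# refinement budget is exhausted. The refine/classify methods are hypothetical.
def refine_until_real(refiner, discriminator, virtual_image, aux_data, max_refinements=10):
    refined = refiner.refine(virtual_image, aux_data)
    for _ in range(max_refinements):
        classification, feedback = discriminator.classify(refined)
        if classification == "real":
            break  # usable for training downstream computer-vision networks
        refined = refiner.refine(refined, aux_data, feedback)  # feedback parameters guide further refinement
    return refined
```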
FIG. 3 illustrates a flow chart of an example method 300 for refining synthetic data with GAN 200 using auxiliary inputs. Method 300 will be described with respect to the components and data of GAN 200.
Generator 201 can generate virtual image 211 representing an image of a roadway scene (e.g., an image of a road, an image of a highway, an image of an interstate, an image of a parking lot, an image of an intersection, etc.). Virtual image 211 includes synthetic image data 212 and annotations 213. Synthetic image data 212 can include pixel values for pixels in virtual image 211. Annotations 213 annotate the synthetic image data with ground truth data for the roadway scene. Generator 201 can output virtual image 211.
Method 300 includes accessing synthetic image data representing an image of a roadway scene, the synthetic image data including annotations, the annotations annotating the synthetic image data with ground truth data for the roadway scene (301). For example, refiner 202 can access virtual image 211, including synthetic image data 212 and annotations 213, for a scene that may be encountered during driving (e.g., intersection, road, parking lot, etc.). Method 300 includes accessing one or more auxiliary data streams for the image (302). For example, refiner 202 can access one or more of: image segmentation 222, depth map 223, and object edges 224 from auxiliary data 221.
Image segmentation 222 can segment virtual image 211 into multiple regions and identify the content of each region, such as, foliage from a tree or the side of a green building. Depth map 223 can differentiate how each image region in virtual image 211 is to appear, such as, varying levels of detail/texture based on the distance of the object from the camera. Object edges 224 can define transitions between different objects in virtual image 211.
Method 300 includes using contents of the one or more auxiliary data streams as hints to refine the synthetic image data, refining the synthetic image data improving realism of the image without altering the annotations (303). For example, refiner 202 can use the contents of one or more of: image segmentation 222, depth map 223, and object edges 224 as hints to refine (transform) synthetic image data 212 into refined synthetic image data 216. Refiner 202 can refine synthetic image data 212 into refined synthetic image data 216 without altering annotations 213. Refined synthetic image data 216 can improve the realism of the scene relative to synthetic image data 212.
In one aspect, image segmentation 222 is included in an auxiliary image that encodes a pixel-level semantic segmentation of virtual image 211. In another aspect, depth map 223 is included in another auxiliary image that encodes a depth map of the contents of virtual image 211. In a further aspect, object edges 224 are included in a further auxiliary image that encodes edges between objects in virtual image 211. Thus, refiner 202 can use one or more auxiliary images to refine synthetic image data 212 into refined synthetic image data 216.
In one aspect, generator 201 generates auxiliary data 221 as a by-product of generating virtual image 211. In another aspect, auxiliary data 221 is extracted from a dataset of real images used to train discriminator 203.
Method 300 includes outputting the refined synthetic image data, the refined synthetic image data representing a refined image of the roadway scene (304). For example, refiner 202 can output refined virtual image 214 for the scene that may be encountered during driving. Refined virtual image 214 includes refined synthetic image data 216 and annotations 213.
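For illustration, the four steps of method 300 can be connected as in the sketch below, reusing the hypothetical VirtualImage bundle sketched earlier; the names are illustrative and refiner_model stands in for refiner 202. Note that the annotations pass through unchanged.

```python
# Sketch of method 300 (steps 301-304): access synthetic image data and its
# annotations, access the auxiliary streams, refine using the streams as hints,
# and output refined data with the original annotations unaltered.
def method_300(virtual_image, auxiliary_data, refiner_model):
    synthetic_data = virtual_image.synthetic_image_data           # step 301
    annotations = virtual_image.annotations                       # step 301 (ground truth)
    segmentation = auxiliary_data.get("image_segmentation")       # step 302
    depth_map = auxiliary_data.get("depth_map")                   # step 302
    object_edges = auxiliary_data.get("object_edges")             # step 302
    refined_data = refiner_model(synthetic_data, segmentation,    # step 303: hints improve realism
                                 depth_map, object_edges)
    return {"refined_synthetic_image_data": refined_data,         # step 304
            "annotations": annotations}                           # annotations are not altered
```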
Discriminator 203 can access refined virtual image 214. Discriminator 203 can use refined synthetic image data 216 and annotations 213 to make image type classification 217 for refined virtual image 214. Image type classification 217 classifies refined virtual image 214 as “real” or “synthetic”. If discriminator 203 classifies refined virtual image 214 as “real”, discriminator 203 can make refined virtual image 214 available for use in training other neural networks, such as, computer vision neural networks, including those related to autonomous driving.
On the other hand, if discriminator 203 classifies refined virtual image 214 as “synthetic”, discriminator 203 can generate image feedback parameters 218 for further improving the realism of refined virtual image 214. Discriminator 203 can send image feedback parameters 218 to refiner 202 and/or to generator 201. Refiner 202 and/or generator 201 can use image feedback parameters 218 to further improve the realism of refined virtual image 214 (possibly with further reference to auxiliary data 221). Further refined virtual images can be sent to discriminator 203. Refiner 202 and/or generator 201 can further refine (transform) refined virtual image 214 based on auxiliary data 221 and/or image feedback parameters 218 (or other additional feedback parameters). Image refinement can continue until discriminator 203 classifies a further refined virtual image (further refined from refined virtual image 214) as “real” (or until no further improvements to realism are possible, after performing a specified number of refinements, etc.).
FIG. 4 illustrates an example data flow 400 for refining synthetic data with a generative adversarial network using auxiliary inputs. Generator 401 generates virtual image 411, image segmentation image 433, and depth map image 423. Refiner 402 uses the contents of image segmentation image 433 and depth map image 423 (e.g., as hints) to refine (transform) virtual image 411 into refined virtual image 414. The realism of refined virtual image 414 can be improved relative to virtual image 411. Discriminator 403 classifies refined virtual image 414 as “real” or “synthetic”.
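The data flow of FIG. 4 reduces to three calls in sequence, as in this sketch with assumed interfaces.

```python
# Sketch of data flow 400 (interfaces assumed): generator 401 emits the virtual
# image plus segmentation and depth images, refiner 402 uses them as hints, and
# discriminator 403 classifies the result as "real" or "synthetic".
def data_flow_400(generator, refiner, discriminator, scene_description):
    virtual_image, segmentation_image, depth_image = generator(scene_description)
    refined_virtual_image = refiner(virtual_image, segmentation_image, depth_image)
    classification = discriminator(refined_virtual_image)  # "real" or "synthetic"
    return refined_virtual_image, classification
```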
In one aspect, one or more processors are configured to execute instructions (e.g., computer-readable instructions, computer-executable instructions, etc.) to perform any of a plurality of described operations. The one or more processors can access information from system memory and/or store information in system memory. The one or more processors can transform information between different formats, such as, for example, virtual images, synthetic image data, annotations, auxiliary data, auxiliary images, image segmentation, depth maps, object edges, refined virtual images, refined synthetic data, image type classifications, image feedback parameters, etc.
System memory can be coupled to the one or more processors and can store instructions (e.g., computer-readable instructions, computer-executable instructions, etc.) executed by the one or more processors. The system memory can also be configured to store any of a plurality of other types of data generated by the described components, such as, for example, virtual images, synthetic image data, annotations, auxiliary data, auxiliary images, image segmentation, depth maps, object edges, refined virtual images, refined synthetic data, image type classifications, image feedback parameters, etc.
In the above disclosure, reference has been made to the accompanying drawings, which form a part hereof, and in which is shown by way of illustration specific implementations in which the disclosure may be practiced. It is understood that other implementations may be utilized and structural changes may be made without departing from the scope of the present disclosure. References in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to affect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
Implementations of the systems, devices, and methods disclosed herein may comprise or utilize a special purpose or general-purpose computer including computer hardware, such as, for example, one or more processors and system memory, as discussed herein. Implementations within the scope of the present disclosure may also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer system. Computer-readable media that store computer-executable instructions are computer storage media (devices). Computer-readable media that carry computer-executable instructions are transmission media. Thus, by way of example, and not limitation, implementations of the disclosure can comprise at least two distinctly different kinds of computer-readable media: computer storage media (devices) and transmission media.
Computer storage media (devices) includes RAM, ROM, EEPROM, CD-ROM, solid state drives (“SSDs”) (e.g., based on RAM), Flash memory, phase-change memory (“PCM”), other types of memory, other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer.
An implementation of the devices, systems, and methods disclosed herein may communicate over a computer network. A “network” is defined as one or more data links that enable the transport of electronic data between computer systems and/or modules and/or other electronic devices. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computer, the computer properly views the connection as a transmission medium. Transmission media can include a network and/or data links, which can be used to carry desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. Combinations of the above should also be included within the scope of computer-readable media.
Computer-executable instructions comprise, for example, instructions and data which, when executed at a processor, cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, or even source code. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the described features or acts described above. Rather, the described features and acts are disclosed as example forms of implementing the claims.
Those skilled in the art will appreciate that the disclosure may be practiced in network computing environments with many types of computer system configurations, including, an in-dash or other vehicle computer, personal computers, desktop computers, laptop computers, message processors, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, tablets, pagers, routers, switches, various storage devices, and the like. The disclosure may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network, both perform tasks. In a distributed system environment, program modules may be located in both local and remote memory storage devices.
Further, where appropriate, functions described herein can be performed in one or more of: hardware, software, firmware, digital components, or analog components. For example, one or more application specific integrated circuits (ASICs) can be programmed to carry out one or more of the systems and procedures described herein. Certain terms are used throughout the description and claims to refer to particular system components. As one skilled in the art will appreciate, components may be referred to by different names. This document does not intend to distinguish between components that differ in name, but not function.
It should be noted that the sensor embodiments discussed above may comprise computer hardware, software, firmware, or any combination thereof to perform at least a portion of their functions. For example, a sensor may include computer code configured to be executed in one or more processors, and may include hardware logic/electrical circuitry controlled by the computer code. These example devices are provided herein for purposes of illustration, and are not intended to be limiting. Embodiments of the present disclosure may be implemented in further types of devices, as would be known to persons skilled in the relevant art(s).
At least some embodiments of the disclosure have been directed to computer program products comprising such logic (e.g., in the form of software) stored on any computer useable medium. Such software, when executed in one or more data processing devices, causes a device to operate as described herein.
While various embodiments of the present disclosure have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be apparent to persons skilled in the relevant art that various changes in form and detail can be made therein without departing from the spirit and scope of the disclosure. Thus, the breadth and scope of the present disclosure should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents. The foregoing description has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. Further, it should be noted that any or all of the aforementioned alternate implementations may be used in any combination desired to form additional hybrid implementations of the disclosure.

Claims (20)

What is claimed:
1. A method comprising:
accessing synthetic image data representing an image of a roadway scene and including ground truth data annotations;
accessing auxiliary data including image segmentation data, depth map data, and object edge data, the image segmentation data segmenting the synthetic image into multiple regions and indicating an object in each of the multiple regions, the depth map differentiating how each of the multiple regions is to appear based on object distance from a camera, the object edge data defining transitions between a plurality of objects in the synthetic image data;
generating refined synthetic image data using the image segmentation data, the depth map data, and the object edge data, as hints, including refining the synthetic image data by applying textures to the synthetic image data considering that the transitions between different objects, from among the plurality of objects, in different regions, from among the multiple regions, have different object distances from the camera and without altering the ground truth data annotations; and
outputting the refined synthetic image data.
2. The method of claim 1, wherein accessing the auxiliary data comprises accessing one or more auxiliary data streams corresponding to the image of the roadway scene.
3. The method of claim 2, wherein accessing the auxiliary data comprises accessing a pixel level semantic segmentation of the image of the roadway scene.
4. The method of claim 2, wherein accessing the synthetic image data comprises accessing pixel values for pixels in the image of the roadway scene.
5. The method of claim 1, wherein accessing the auxiliary data comprises accessing a depth map image and an image segmentation image.
6. The method of claim 1, wherein accessing the auxiliary data including the image segmentation data comprises accessing image segmentation data indicating one of foliage or a side of a building in a region.
7. A method for refining machine learning model training data, the method comprising:
accessing synthetic image data representing an image of a roadway scene and including annotations annotating the synthetic image data with ground truth data for the roadway scene;
accessing one or more auxiliary data streams corresponding to the image including image segmentation data, depth map data, and object edge data, the image segmentation data segmenting the synthetic image into multiple regions and indicating an object in each of the multiple regions, the depth map differentiating how each of the multiple regions is to appear based on object distance from a camera, the object edge data defining transitions between a plurality of objects in the synthetic image data;
refining the synthetic image data using contents of the image segmentation data, the depth map data, and the object edge data, as hints, including applying correct textures to the synthetic image data considering that transitions between different objects, from among the plurality of objects, in different regions, from among the multiple regions, have different object distances from the camera and without altering the annotations; and
outputting the refined synthetic image data representing a refined image of the roadway scene.
8. The method of claim 7, wherein accessing the synthetic image data representing the image of a roadway scene comprises accessing previously refined synthetic image data representing the image of the roadway scene;
further comprising receiving one or more feedback parameters associated with a discriminator decision classifying the previously refined synthetic data; and
wherein refining the synthetic image data comprises using the one or more feedback parameters to further refine the previously refined synthetic image data without altering the annotations.
9. The method of claim 7, wherein accessing the one or more auxiliary data streams corresponding to the image of the roadway scene comprises accessing a pixel level semantic segmentation of the image of the roadway scene.
10. The method of claim 7, wherein accessing the one or more auxiliary data streams corresponding to the image of the roadway scene comprises accessing the depth map data that defines varying levels of detail for objects based on distance of the objects from a camera.
11. The method of claim 7, wherein accessing the synthetic image data comprises accessing pixel values for pixels in the image of the roadway scene.
12. The method of claim 7, further comprising extracting an auxiliary data stream from other image data.
13. The method of claim 12, wherein extracting the auxiliary data stream from the other image data comprises extracting the auxiliary data stream from a sensor that is synchronized with a camera data stream.
14. The method of claim 7, further comprising using the refined synthetic image data to train a machine learning module associated with autonomous driving of a vehicle.
15. The method of claim 7, wherein accessing the one or more auxiliary data streams comprises accessing image segmentation data indicating one of foliage or a side of a building in a region.
16. A computer system comprising:
system memory storing instructions; and
one or more processors executing the instructions stored in the system memory to perform the following:
access synthetic image data representing an image of a roadway scene and including annotations annotating the synthetic image data with ground truth data for the roadway scene;
access auxiliary data streams corresponding to the image including image segmentation data, depth map data, and object edge data, the image segmentation data segmenting the synthetic image into multiple regions and indicating an object in each of the multiple regions, the depth map differentiating how each of the multiple regions is to appear based on object distance from a camera, the object edge data defining transitions between different objects in the synthetic image data;
refine the synthetic image data using contents of the image segmentation data, the depth map data, and the object edge data, as hints, including applying textures to the synthetic image data considering that transitions between different objects in different regions, from among the multiple regions, have different object distances from the camera and without altering the annotations; and
output the refined synthetic image data.
17. The computer system of claim 16, wherein the instructions configured to access the synthetic image data representing the image of the roadway scene comprise instructions configured to access previously refined synthetic image data representing the image of the roadway scene;
further comprising instructions configured to receive feedback parameters associated with a discriminator decision classifying the previously refined synthetic data; and
wherein the instructions configured to refine the synthetic image data comprise instructions configured to use the feedback parameters to further refine the previously refined synthetic image data without altering the annotations.
18. The computer system of claim 16, further comprising instructions configured to extract an auxiliary data stream, from among the auxiliary data streams, from a sensor that is synchronized with a camera data stream.
19. The computer system of claim 16, further comprising instructions configured to use the refined synthetic image data to train a machine learning module associated with autonomous driving of a vehicle.
20. The computer system of claim 16, wherein the instructions configured to access the auxiliary data streams comprise instructions configured to access image segmentation data indicating one of foliage or a side of a building in a region.
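Claims 8 and 17 recite receiving feedback parameters associated with a discriminator decision and using them to further refine previously refined synthetic image data. A minimal adversarial-feedback loop in the same hypothetical Python (PyTorch) style as the earlier sketch, with placeholder networks, optimizers, and losses that are assumptions rather than anything taken from the claims, could look like the following.

import torch
import torch.nn as nn

# Placeholder refiner and discriminator; shapes, losses, and learning rates are illustrative only.
refiner = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, kernel_size=3, padding=1),
)
discriminator = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(16, 1, kernel_size=4, stride=2, padding=1),
)
opt_r = torch.optim.Adam(refiner.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

previously_refined = torch.rand(4, 3, 64, 64)  # previously refined synthetic image data (placeholder)
real_images = torch.rand(4, 3, 64, 64)         # real roadway imagery (placeholder)

for step in range(2):  # a couple of illustrative iterations
    # Discriminator decision: classify real images versus further-refined synthetic data.
    further_refined = previously_refined + refiner(previously_refined)
    d_real = discriminator(real_images)
    d_fake = discriminator(further_refined.detach())
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Feedback from the discriminator decision drives further refinement of the
    # previously refined data, while its annotations are left unchanged.
    d_fake = discriminator(further_refined)
    loss_r = bce(d_fake, torch.ones_like(d_fake))
    opt_r.zero_grad()
    loss_r.backward()
    opt_r.step()

Iterating such a loop is one way the classification feedback from a discriminator could be used to refine previously refined synthetic image data again, as the claims describe.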
US15/699,653 2017-09-08 2017-09-08 Refining synthetic data with a generative adversarial network using auxiliary inputs Active 2038-02-08 US10726304B2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US15/699,653 US10726304B2 (en) 2017-09-08 2017-09-08 Refining synthetic data with a generative adversarial network using auxiliary inputs
DE102018121808.7A DE102018121808A1 (en) 2017-09-08 2018-09-06 REFINING SYNTHETIC DATA WITH A GENERATIVE ADVERSARIAL NETWORK USING AUXILIARY INPUTS
CN201811035739.0A CN109472365A (en) 2017-09-08 2018-09-06 Network is fought to refine generated data by production using auxiliary input

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US15/699,653 US10726304B2 (en) 2017-09-08 2017-09-08 Refining synthetic data with a generative adversarial network using auxiliary inputs

Publications (2)

Publication Number Publication Date
US20190080206A1 US20190080206A1 (en) 2019-03-14
US10726304B2 true US10726304B2 (en) 2020-07-28

Family

ID=65441336

Family Applications (1)

Application Number Title Priority Date Filing Date
US15/699,653 Active 2038-02-08 US10726304B2 (en) 2017-09-08 2017-09-08 Refining synthetic data with a generative adversarial network using auxiliary inputs

Country Status (3)

Country Link
US (1) US10726304B2 (en)
CN (1) CN109472365A (en)
DE (1) DE102018121808A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210374947A1 (en) * 2020-05-26 2021-12-02 Nvidia Corporation Contextual image translation using neural networks
WO2022050937A1 (en) * 2020-09-02 2022-03-10 Google Llc Condition-aware generation of panoramic imagery
WO2024054576A1 (en) 2022-09-08 2024-03-14 Booz Allen Hamilton Inc. System and method synthetic data generation

Families Citing this family (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10643320B2 (en) * 2017-11-15 2020-05-05 Toyota Research Institute, Inc. Adversarial learning of photorealistic post-processing of simulation with privileged information
US11321938B2 (en) * 2017-12-21 2022-05-03 Siemens Aktiengesellschaft Color adaptation using adversarial training networks
EP3525508B1 (en) * 2018-02-07 2020-11-11 Rohde & Schwarz GmbH & Co. KG Method and test system for mobile network testing
US10713569B2 (en) * 2018-05-31 2020-07-14 Toyota Research Institute, Inc. System and method for generating improved synthetic images
US11422259B2 (en) * 2018-06-28 2022-08-23 Zoox, Inc. Multi-resolution maps for localization
US10890663B2 (en) 2018-06-28 2021-01-12 Zoox, Inc. Loading multi-resolution maps for localization
US11518382B2 (en) * 2018-09-26 2022-12-06 Nec Corporation Learning to simulate
US11448518B2 (en) 2018-09-27 2022-09-20 Phiar Technologies, Inc. Augmented reality navigational overlay
US10495476B1 (en) 2018-09-27 2019-12-03 Phiar Technologies, Inc. Augmented reality navigation systems and methods
US11475248B2 (en) * 2018-10-30 2022-10-18 Toyota Research Institute, Inc. Auto-labeling of driving logs using analysis-by-synthesis and unsupervised domain adaptation
US11092966B2 (en) * 2018-12-14 2021-08-17 The Boeing Company Building an artificial-intelligence system for an autonomous vehicle
CN109858369A (en) * 2018-12-29 2019-06-07 百度在线网络技术(北京)有限公司 Automatic Pilot method and apparatus
US10380724B1 (en) * 2019-01-28 2019-08-13 StradVision, Inc. Learning method and learning device for reducing distortion occurred in warped image generated in process of stabilizing jittered image by using GAN to enhance fault tolerance and fluctuation robustness in extreme situations
US10373023B1 (en) * 2019-01-28 2019-08-06 StradVision, Inc. Learning method and learning device for runtime input transformation of real image on real world into virtual image on virtual world, to be used for object detection on real images, by using cycle GAN capable of being applied to domain adaptation
US10373026B1 (en) * 2019-01-28 2019-08-06 StradVision, Inc. Learning method and learning device for generation of virtual feature maps whose characteristics are same as or similar to those of real feature maps by using GAN capable of being applied to domain adaptation to be used in virtual driving environments
US10395392B1 (en) * 2019-01-31 2019-08-27 StradVision, Inc. Learning method and learning device for strategic transforming RGB training image sets into non-RGB training image sets, to be used for learning object detection on objects of images in non-RGB format, by using cycle GAN, resulting in significantly reducing computational load and reusing data
US10916046B2 (en) * 2019-02-28 2021-02-09 Disney Enterprises, Inc. Joint estimation from images
US11010642B2 (en) 2019-03-28 2021-05-18 General Electric Company Concurrent image and corresponding multi-channel auxiliary data generation for a generative model
CN110147830B (en) * 2019-05-07 2022-02-11 东软集团股份有限公司 Method for training image data generation network, image data classification method and device
CN110427799B (en) * 2019-06-12 2022-05-06 中国地质大学(武汉) Human hand depth image data enhancement method based on generation of countermeasure network
US11386496B2 (en) * 2019-07-26 2022-07-12 International Business Machines Corporation Generative network based probabilistic portfolio management
CN110415288B (en) * 2019-07-31 2022-04-08 达闼科技(北京)有限公司 Depth image generation method and device and computer readable storage medium
CN110503654B (en) * 2019-08-01 2022-04-26 中国科学院深圳先进技术研究院 Medical image segmentation method and system based on generation countermeasure network and electronic equipment
US20210125036A1 (en) * 2019-10-29 2021-04-29 Nvidia Corporation Determining object orientation from an image with machine learning
CN110998663B (en) * 2019-11-22 2023-12-01 驭势(上海)汽车科技有限公司 Image generation method of simulation scene, electronic equipment and storage medium
CN110933104B (en) * 2019-12-11 2022-05-17 成都卫士通信息产业股份有限公司 Malicious command detection method, device, equipment and medium
CN111191654B (en) * 2019-12-30 2023-03-24 重庆紫光华山智安科技有限公司 Road data generation method and device, electronic equipment and storage medium
US20210264284A1 (en) * 2020-02-25 2021-08-26 Ford Global Technologies, Llc Dynamically routed patch discriminator
US11250279B2 (en) * 2020-03-31 2022-02-15 Robert Bosch Gmbh Generative adversarial network models for small roadway object detection
US11599745B2 (en) * 2020-06-24 2023-03-07 Denso International America, Inc. System and method for generating synthetic training data
US11270164B1 (en) * 2020-09-24 2022-03-08 Ford Global Technologies, Llc Vehicle neural network
US11868439B2 (en) 2020-11-13 2024-01-09 Toyota Research Institute, Inc. Mixed-batch training of a multi-task network
US20230004760A1 (en) * 2021-06-28 2023-01-05 Nvidia Corporation Training object detection systems with generated images
CN113435509B (en) * 2021-06-28 2022-03-25 山东力聚机器人科技股份有限公司 Small sample scene classification and identification method and system based on meta-learning
US11983238B2 (en) * 2021-12-03 2024-05-14 International Business Machines Corporation Generating task-specific training data
DE102022112622A1 (en) 2022-05-19 2023-11-23 Cariad Se Method and processor circuit for determining training data sets for a machine learning model of an automated driving function and storage medium for the processor circuit
DE102022003091A1 (en) 2022-08-23 2024-02-29 Mercedes-Benz Group AG System for generating information or interaction elements
CN117233520B (en) * 2023-11-16 2024-01-26 青岛澎湃海洋探索技术有限公司 AUV propulsion system fault detection and evaluation method based on improved Sim-GAN

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2412471A1 (en) * 2002-12-17 2004-06-17 Concordia University A framework and a system for semantic content extraction in video sequences
CN108475330B (en) * 2015-11-09 2022-04-08 港大科桥有限公司 Auxiliary data for artifact aware view synthesis
US10043261B2 (en) * 2016-01-11 2018-08-07 Kla-Tencor Corp. Generating simulated output for a specimen

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7764293B2 (en) 2006-04-06 2010-07-27 Canon Kabushiki Kaisha Image processing apparatus, control method thereof, and program
US8224127B2 (en) 2007-05-02 2012-07-17 The Mitre Corporation Synthesis of databases of realistic, biologically-based 2-D images
US20100271367A1 (en) 2009-04-22 2010-10-28 Sony Computer Entertainment America Inc. Method and apparatus for combining a real world event and a computer simulation
US9633282B2 (en) 2015-07-30 2017-04-25 Xerox Corporation Cross-trained convolutional neural networks using multimodal images
US20170098152A1 (en) 2015-10-02 2017-04-06 Adobe Systems Incorporated Modifying at least one attribute of an image with at least one attribute extracted from another image
US20180349526A1 (en) * 2016-06-28 2018-12-06 Cognata Ltd. Method and system for creating and simulating a realistic 3d virtual world
US20180275658A1 (en) * 2017-03-23 2018-09-27 DeepScale, Inc. Data synthesis for autonomous control systems
US20190072978A1 (en) * 2017-09-01 2019-03-07 GM Global Technology Operations LLC Methods and systems for generating realtime map information

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Generative Adversarial Networks, Apr. 7, 2017.
Isola, Phillip, et al. "Image-to-image translation with conditional adversarial networks." arXiv preprint arXiv:1611.07004 (2016).
Reed, Scott, et al. Generative Adversarial Text to Image Synthesis (2016).
Shrivastava, Ashish, et al. "Learning from Simulated and Unsupervised Images through Adversarial Training." arXiv preprint arXiv:1612.07828 (2016).

Also Published As

Publication number Publication date
US20190080206A1 (en) 2019-03-14
DE102018121808A1 (en) 2019-03-14
CN109472365A (en) 2019-03-15

Similar Documents

Publication Publication Date Title
US10726304B2 (en) Refining synthetic data with a generative adversarial network using auxiliary inputs
CN110458918B (en) Method and device for outputting information
CN111062871B (en) Image processing method and device, computer equipment and readable storage medium
JP5782404B2 (en) Image quality evaluation
US20170213112A1 (en) Utilizing deep learning for automatic digital image segmentation and stylization
GB2573849A (en) Utilizing a deep neural network-based model to identify visually similar digital images based on user-selected visual attributes
US11810326B2 (en) Determining camera parameters from a single digital image
CN111027563A (en) Text detection method, device and recognition system
US11538096B2 (en) Method, medium, and system for live preview via machine learning models
CN112215171B (en) Target detection method, device, equipment and computer readable storage medium
CN113704531A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
WO2022227218A1 (en) Drug name recognition method and apparatus, and computer device and storage medium
US11798181B2 (en) Method and system for location detection of photographs using topographic techniques
JP2017059090A (en) Generation device, generation method, and generation program
WO2023207778A1 (en) Data recovery method and device, computer, and storage medium
US11823433B1 (en) Shadow removal for local feature detector and descriptor learning using a camera sensor sensitivity model
US10991085B2 (en) Classifying panoramic images
CN117252947A (en) Image processing method, image processing apparatus, computer, storage medium, and program product
WO2022226744A1 (en) Texture completion
US11423308B1 (en) Classification for image creation
Orhei Urban landmark detection using computer vision
CN111914850B (en) Picture feature extraction method, device, server and medium
JP2020534590A (en) Processing of visual input
CN117830601B (en) Three-dimensional visual positioning method, device, equipment and medium based on weak supervision
US20210224652A1 (en) Methods and systems for performing tasks on media using attribute specific joint learning

Legal Events

Date Code Title Description
AS Assignment

Owner name: FORD GLOBAL TECHNOLOGIES, LLC, MICHIGAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HOTSON, GUY;PUSKORIUS, GINTARAS VINCENT;NARIYAMBUT MURALI, VIDYA;SIGNING DATES FROM 20170821 TO 20170901;REEL/FRAME:043536/0515

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4