US10057593B2 - Apparatus and methods for distance estimation using stereo imagery - Google Patents

Apparatus and methods for distance estimation using stereo imagery

Info

Publication number
US10057593B2
US10057593B2 (US 10,057,593 B2); application US14/326,374 (US201414326374A)
Authority
US
United States
Prior art keywords
image
images
sequence
composite
motion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US14/326,374
Other versions
US20160014426A1 (en)
Inventor
Micah Richert
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Brain Corp
Original Assignee
Brain Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Brain Corp filed Critical Brain Corp
Priority to US14/326,374
Assigned to BRAIN CORPORATION (assignment of assignors interest; see document for details). Assignors: RICHERT, MICAH
Publication of US20160014426A1
Priority to US16/104,646 (US10820009B2)
Application granted
Publication of US10057593B2
Assigned to HERCULES CAPITAL, INC. (security interest; see document for details). Assignors: BRAIN CORPORATION
Legal status: Active
Adjusted expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/50: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N 19/503: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N 19/51: Motion estimation or motion compensation
    • G06K 9/00355
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 7/285: Analysis of motion using a sequence of stereo image pairs
    • G06T 7/50: Depth or shape recovery
    • G06T 7/55: Depth or shape recovery from multiple images
    • G06T 7/579: Depth or shape recovery from multiple images from motion
    • G06T 7/593: Depth or shape recovery from multiple images from stereo images
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20: Movements or behaviour, e.g. gesture recognition
    • G06V 40/28: Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence
    • G06T 2207/10021: Stereoscopic video; Stereoscopic image sequence
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30248: Vehicle exterior or interior
    • G06T 2207/30252: Vehicle exterior; Vicinity of vehicle

Definitions

  • The present disclosure relates to, inter alia, computerized apparatus and methods for processing imagery from multiple sources for determining motion and distance to objects.
  • Object recognition in the context of computer vision relates to finding a given object in an image or a sequence of frames in a video segment.
  • video frames may contain multiple objects moving in one or more directions on a still or moving background.
  • Object representations, also referred to as the “view”, may change from frame to frame due to a variety of object or environment transformations, such as for example rotation, movement, translation, change in lighting, background, noise, appearance of other objects, partial blocking and/or unblocking of the object, and/or other object transformations.
  • Robotic devices often employ video for navigation, target selection, and/or obstacle avoidance. Determining the motion of objects from a moving robotic platform may require implementation of differential motion detection in an energy-efficient manner. The depth of a visual scene (e.g., distance to one or more objects) may be useful for operation of mobile robots as well.
  • One aspect of the disclosure relates to a non-transitory computer-readable storage medium having instructions embodied thereon, the instructions being executable to perform a method of determining a distance to an object.
  • a method of determining motion information within a visual scene includes: producing a first composite frame and a second composite frame by combining images from a first plurality of images and a second plurality of images of the visual scene; producing an interleaved sequence of composite frames comprising the first and the second composite frames; and evaluating the interleaved sequence to determine the motion information.
  • individual images of the first and second pluralities of images are provided by first and second sensing apparatus, respectively, the second sensing apparatus being separated spatially from the first sensing apparatus.
  • the first composite frame is characterized by a first placement configuration of (i) an image from the first plurality of images, and (ii) an image from the second plurality of images; and the second composite frame is characterized by a second placement configuration of (i) an image from the first plurality of images and (ii) an image from the second plurality of images.
  • the second placement may be different from the first placement.
  • the method further includes evaluating the motion information to determine a distance to an object disposed within the visual scene.
  • the evaluation of the interleaved sequence of frames may include for instance encoding the interleaved sequence of frames using an encoder process comprising a motion estimation process configured to provide data related to motion of a pixel group within the second composite frame relative to a pixel group within the first composite frame.
  • the pixel group within the second composite frame corresponds to a first representation of the object; the pixel group within the first composite frame corresponds to a second representation of the object; and the motion data are based at least on a displacement of the first representation relative to the second representation.
  • a non-transitory computer-readable storage medium has instructions embodied thereon, the instructions being executable to produce a sequence of combined images from first and second sequences of images of a sensory scene by at least: producing a sequence of composite images, individual composite images comprising an image from the first sequence and/or an image from the second sequence; and evaluating the sequence of composite images to determine a depth parameter of the scene.
  • the first and second image sequences are provided by first and second image sensors, respectively, the first image sensor being disposed spatially separated from the second image sensor; and the first image sensor and the second image sensor are configured to provide images of the sensory scene.
  • the spatial separation is configured to produce a binocular disparity between images from the first sequence relative to images from the second sequence; and the depth parameter determination comprises determination of the disparity.
  • individual images of the first image sequence and the second image sequence comprise a plurality of pixels; and the evaluating comprises encoding the sequence of composite images using a motion estimation encoder.
  • In another aspect of the disclosure, an image processing apparatus includes: an input interface configured to receive a stereo representation of a visual scene, the stereo representation comprising a first portion and a second portion; a logic component in communication with the interface and configured to arrange the first portion with the second portion into a concatenated frame, and form a sequence of concatenated frames by arranging first portions and second portions within a given concatenated frame in an alternate order relative to a preceding concatenated frame within the sequence; a video encoder component in data communication with the logic component and configured to encode the sequence of concatenated frames to produce a sequence of compressed frames; and a processing component in data communication with the video encoder and configured to obtain motion information based on an evaluation of the compressed frames.
  • the sequence of compressed frames comprises a keyframe characterized by an absence of the motion information; and the processing component is configured to not utilize the keyframe during the evaluation of the compressed frames.
  • the processing component is further configured to determine, based on the motion information, a depth parameter associated with the visual scene; and the encoder is configurable to produce the sequence of compressed frames at least at a frame rate selected between 1 and 100 frames per second inclusive; and the apparatus is embedded in an integrated circuit.
  • the image processing apparatus includes: computerized logic configured to: receive a plurality of stereo representations, each of the plurality being representative of a corresponding visual scene and comprising a first portion and a second portion; combine a first portion of a given stereo representation with a second portion thereof into a first frame in a first order; combine a first portion of another stereo representation with a second portion thereof into a second frame in a second order different from the first order; and form a sequence comprising at least the first and second frames; a video encoder in data communication with the computerized logic and configured to encode the sequence of frames to produce a sequence of encoded frames; and processing logic in data communication with the video encoder and configured to evaluate the sequence of encoded frames to determine motion information.
  • FIG. 1A is a graphical illustration depicting a top view of robotic apparatus configured to acquire stereo imagery, in accordance with one or more implementations.
  • FIG. 1B is a graphical illustration depicting a side view of a robotic apparatus comprising an adaptive controller apparatus of the disclosure, configured for autonomous navigation, in accordance with one or more implementations.
  • FIG. 2A is a graphical illustration depicting stereo imagery input obtained with two spatially displaced cameras for use with the disparity determination methodology, according to one or more implementations.
  • FIG. 2B is a graphical illustration depicting disparity between representations of objects corresponding to the frames of stereo imagery shown in FIG. 2A , according to one or more implementations.
  • FIG. 2C is a graphical illustration depicting input frames comprising a plurality of moving objects for use with the motion extraction, according to one or more implementations.
  • FIG. 3A is a logical block diagram depicting a determination of an input stream for motion processing using an alternating interleaver of stereo imagery input, according to one or more implementations.
  • FIG. 3B is a logical block diagram depicting a determination of an input stream for motion processing using an alternating interleaver of stereo imagery input, according to one or more implementations.
  • FIG. 3C is a logical block diagram depicting concatenation of two frames into a single frame for video encoding, according to one or more implementations.
  • FIG. 3D is a logical block diagram depicting a configuration of a sequence of frames for motion and disparity processing using interleaving of concatenated input from multiple sources.
  • FIG. 4A is a functional block diagram depicting a processing apparatus configured to determine disparity from multiple image sources, according to one or more implementations.
  • FIG. 4B is a functional block diagram depicting a processing apparatus configured to determine disparity from multiple image sources, according to one or more implementations.
  • FIG. 4C is a functional block diagram depicting a processing apparatus configured to determine disparity and/or motion using concatenation and/or interleaving of images from multiple sources, according to one or more implementations.
  • FIG. 5A is a graphical illustration depicting triple camera configuration used for disparity determination using image interleaving, according to one or more implementations.
  • FIG. 5B is a graphical illustration depicting quad camera configuration used for disparity determination using image interleaving, according to one or more implementations.
  • FIG. 5C is a graphical illustration depicting a linear multiple camera configuration useful for determining multiple depth scales using image interleaving, according to one or more implementations.
  • FIG. 6A is a graphical illustration depicting an alternating interleaving of triple image input for use with the motion extraction, according to one or more implementations.
  • FIG. 6B is a graphical illustration depicting an alternating interleaving of quad image input for use with the motion extraction, according to one or more implementations.
  • FIG. 7 is a functional block diagram depicting a motion extraction apparatus, according to one or more implementations.
  • FIG. 8 is a functional block diagram depicting a video processing system, comprising a differential motion extraction apparatus, according to one or more implementations.
  • FIG. 9A is a graphical illustration depicting an encoded object for use with the motion extraction, according to one or more implementations.
  • FIG. 9B is a graphical illustration depicting motion of an encoded object for use with the motion extraction methodology, according to one or more implementations.
  • FIG. 9C is a graphical illustration depicting spatial distribution of motion extracted from encoded video, according to one or more implementations.
  • FIG. 10 is a logical flow diagram illustrating a method of determining a salient feature using encoded video motion information, in accordance with one or more implementations.
  • FIG. 11 is a logical flow diagram illustrating a method of data processing useful for determining features, in accordance with one or more implementations.
  • FIG. 12 is a logical flow diagram illustrating a method of executing an action configured based on a gesture detected using motion information, in accordance with one or more implementations.
  • FIG. 13 is a logical flow diagram illustrating a method of determining a depth of visual scene using encoded interleaved concatenated images, in accordance with one or more implementations.
  • FIG. 14 is a logical flow diagram illustrating a method of determining distance to objects using motion of interleaved concatenated image sequence, in accordance with one or more implementations.
  • FIG. 15 is a logical flow diagram illustrating a method of executing an action configured based on detecting an object in motion information, in accordance with one or more implementations.
  • FIGS. 16A-16D illustrate gestures of a human operator used for communicating control indications to a robotic device (such as one comprising a distance determination apparatus as described herein), in accordance with one or more implementations.
  • FIG. 17 is a graphical illustration depicting an exemplary unmanned robotic apparatus comprising distance determination apparatus of the disclosure configured for autonomous navigation, in accordance with one or more implementations.
  • As used herein, the term “bus” is meant generally to denote all types of interconnection or communication architecture that is used to access the synaptic and neuron memory.
  • the “bus” could be optical, wireless, infrared or another type of communication medium.
  • the exact topology of the bus could be for example standard “bus”, hierarchical bus, network-on-chip, address-event-representation (AER) connection, or other type of communication topology used for accessing, e.g., different memories in pulse-based system.
  • the terms “computer”, “computing device”, and “computerized device” include, but are not limited to, personal computers (PCs) and minicomputers, whether desktop, laptop, or otherwise, mainframe computers, workstations, servers, personal digital assistants (PDAs), handheld computers, embedded computers, programmable logic device, personal communicators, tablet or “phablet” computers, portable navigation aids, J2ME equipped devices, smart TVs, cellular telephones, smart phones, personal integrated communication or entertainment devices, or literally any other device capable of executing a set of instructions and processing an incoming data signal.
  • As used herein, the term “computer program” or “software” is meant to include any sequence of human or machine cognizable steps which perform a function.
  • Such program may be rendered in virtually any programming language or environment including, for example, C/C++, C#, Fortran, COBOL, MATLAB™, PASCAL, Python, assembly language, markup languages (e.g., HTML, SGML, XML, VoXML), and the like, as well as object-oriented environments such as the Common Object Request Broker Architecture (CORBA), Java™ (including J2ME, Java Beans), Binary Runtime Environment (e.g., BREW), and other languages.
  • As used herein, the terms “connection”, “link”, “synaptic channel”, “transmission channel”, “delay line”, are meant generally to denote a causal link between any two or more entities (whether physical or logical/virtual), which enables information exchange between the entities.
  • As used herein, the term “feature” may refer to a representation of an object edge, determined by change in color, luminance, brightness, transparency, texture, and/or curvature.
  • object features may comprise, inter alia, individual edges, intersections of edges (such as corners), orifices, and/or curvature.
  • As used herein, the term “memory” includes any type of integrated circuit or other storage device adapted for storing digital data including, without limitation, ROM, PROM, EEPROM, DRAM, Mobile DRAM, SDRAM, DDR/2 SDRAM, EDO/FPMS, RLDRAM, SRAM, “flash” memory (e.g., NAND/NOR), memristor memory, and PSRAM.
  • As used herein, the terms “processor”, “microprocessor” and “digital processor” are meant generally to include digital processing devices such as, without limitation, digital signal processors (DSPs), reduced instruction set computers (RISC), general-purpose (CISC) processors, microprocessors, gate arrays (e.g., field programmable gate arrays (FPGAs)), PLDs, reconfigurable computer fabrics, array processors, secure microprocessors, and application-specific integrated circuits (ASICs).
  • the term “network interface” refers to any signal, data, or software interface with a component, network or process including, without limitation, those of the FireWire (e.g., FW400, FW800, and/or other FireWire implementation.), USB (e.g., USB2), Ethernet (e.g., 10/100, 10/100/1000 (Gigabit Ethernet), 10-Gig-E, etc.), MoCA, Coaxsys (e.g., TVnetTM), radio frequency tuner (e.g., in-band or OOB, cable modem, etc.), Wi-Fi (802.11), WiMAX (802.16), PAN (e.g., 802.15), cellular (e.g., 3G, LTE/LTE-A/TD-LTE, GSM, and/or other cellular interface implementation) or IrDA families.
  • As used herein, the terms “pulse”, “spike”, “burst of spikes”, and “pulse train” are meant generally to refer to, without limitation, any type of a pulsed signal, e.g., a rapid change in some characteristic of a signal, e.g., amplitude, intensity, phase or frequency, from a baseline value to a higher or lower value, followed by a rapid return to the baseline value, and may refer to any of a single spike, a burst of spikes, an electronic pulse, a pulse in voltage, a pulse in electrical current, a software representation of a pulse and/or burst of pulses, a software message representing a discrete pulsed event, and any other pulse or pulse type associated with a discrete information transmission system or mechanism.
  • the term “receptive field” is used to describe sets of weighted inputs from filtered input elements, where the weights may be adjusted.
  • As used herein, the term “Wi-Fi” refers to, without limitation, any of the variants of IEEE-Std. 802.11 or related standards including 802.11a/b/g/n/s/v and 802.11-2012.
  • As used herein, the term “wireless” means any wireless signal, data, communication, or other interface including without limitation Wi-Fi, Bluetooth, 3G (3GPP/3GPP2), HSDPA/HSUPA, TDMA, CDMA (e.g., IS-95A, WCDMA, and/or other wireless interface implementations), FHSS, DSSS, GSM, PAN/802.15, WiMAX (802.16), 802.20, narrowband/FDMA, OFDM, PCS/DCS, LTE/LTE-A/TD-LTE, analog cellular, CDPD, RFID or NFC (e.g., EPC Global Gen. 2, ISO 14443, ISO 18000-3), satellite systems, millimeter wave or microwave systems, acoustic, and infrared (e.g., IrDA).
  • The present disclosure provides, among other things, apparatus and methods for determining depth of field of a scene based on processing information from multiple sources, and for detecting motion of objects and/or features in video in real time.
  • the video information may comprise for example multiple streams of frames received from a plurality of cameras disposed separate from one another.
  • Individual cameras may comprise an image sensor (e.g., charge-coupled device (CCD), CMOS device, and/or an active-pixel sensor (APS), photodiode arrays, and/or other sensors).
  • the stream of frames may comprise a pixel stream downloaded from a file.
  • An example of such a file may include a stream of two-dimensional matrices of red, green, and blue (RGB) values (e.g., refreshed at a 25 Hz or other suitable frame rate).
  • Many other image representations (e.g., bitmap, luminance-chrominance (YUV, YCbCr), cyan-magenta-yellow and key (CMYK), grayscale, and/or other image representations) may be equally applicable to and useful with the present disclosure.
  • data frames corresponding to other (non-visual) signal modalities such as sonograms, infrared (IR), radar or tomography images may be equally compatible with the processing methodology of the disclosure, or yet other configurations.
  • the video processing methodology described herein may enable a robotic controller to obtain motion and/or distance information using a specialized hardware video encoder.
  • Use of dedicated video encoders provides a computationally efficient way to determine motion and/or distance using video signals, compared to processing techniques that employ general-purpose processors for performing computations (e.g., optical flow, block matching, phase correlation, and/or other techniques).
  • Computational efficiency of hardware video encoders may be leveraged to reduce energy use, complexity, size, and/or cost of the processing component, increase autonomy of a robotic device using the computationally efficient controller, and/or increase processing performance (e.g., image resolution, frame rate, number of cameras) for a given hardware specification compared to the prior art.
  • Processing data from multiple spatially distributed sources may enable depth of field determination using a disparity methodology.
  • distance d to an object may be determined using binocular disparity D as follows:
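The body of Eqn. 1 is not reproduced in this extract. For reference, a standard pinhole-stereo relation consistent with the surrounding text (disparity inversely proportional to distance) is shown below; the symbols f and b are not defined in the excerpt and are introduced here only for illustration.

```latex
% Assumed pinhole-stereo relation (the body of Eqn. 1 is omitted in this extract):
\[
  d \;=\; \frac{f\,b}{D} \qquad\Longrightarrow\qquad d \;\propto\; \frac{1}{D}
\]
% d : distance to the object
% f : focal length of the cameras (in pixels)
% b : baseline, i.e. the camera separation (102 in FIG. 1A)
% D : binocular disparity (in pixels)
```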
  • FIG. 1A depicts a top view of mobile robotic apparatus comprising two cameras configured to provide sensory information for determining distance based on the disparity.
  • the apparatus 100 may comprise for instance a robotic vehicle outfitted with a motion and/or distance detection apparatus configured in accordance with one or more implementations, e.g., such as illustrated in FIGS. 4A-4C , below.
  • the robotic apparatus 100 may comprise left and right cameras 106 , 108 disposed at a distance 102 from one another.
  • the robotic apparatus 100 may navigate in a direction 104 .
  • One or more obstacles may be present in the path of the apparatus 100 , e.g., a ball 112 and a box 122 , disposed at distances 110 , 120 , respectively, from the apparatus 100 .
  • Travel paths from a given object to the two cameras may be unequal to one another. As shown in FIG. 1A , the path 114 is longer than the path 116 , and the path 126 is longer than the path 124 .
  • The difference in path lengths may cause a difference in the apparent position of the objects 112 , 122 in image frame(s) provided by the camera 106 relative to the image frame provided by the camera 108 .
  • FIG. 2A depicts a typical stereo imagery input for use with the disparity determination methodology, according to one or more implementations.
  • the frames 200 , 210 in FIG. 2A may be acquired by the two spatially displaced cameras 106 , 108 of the apparatus 100 in FIG. 1A .
  • Object representations 216 , 212 of the frame 210 may be displaced horizontally relative to object representations 206 , 202 , respectively, of the frame 200 .
  • Object representations 202 , 212 and 206 , 216 may correspond to objects 112 , 122 , respectively, in FIG. 1A .
  • FIG. 2B illustrates the disparity between representations of objects corresponding to the frames of stereo imagery shown in FIG. 2A .
  • Object representations 236 , 246 in frame 230 may be characterized by a horizontal disparity 234 ; similarly, object representations 222 , 232 may be characterized by a corresponding horizontal disparity 224 .
  • Disparity 224 , 234 may be inversely proportional to distance between the camera and the respective object (e.g., the distance 110 , 120 in FIG. 1A ); i.e., the shorter the distance, the greater the disparity, due to the greater subtended arc.
  • FIG. 1B depicts a mobile robotic apparatus comprising a motion detection apparatus configured, e.g., in accordance with the exemplary implementations illustrated in FIGS. 7-8 , infra.
  • the robotic apparatus 160 may comprise a camera 166 .
  • the camera 166 may be characterized by a field of view 168 (e.g., an extent of the observable world that may be captured by the camera lens at a given moment).
  • the camera 166 may provide information associated with objects within the field of view 168 .
  • the camera 166 may provide frames of pixels of luminance and/or color, refreshed at 25 Hz frame rate.
  • other frame rates may be used (whether constant or variable), as may other types of information provided by the camera(s) 166 .
  • One or more objects may be present in the camera field of view.
  • the motion of the objects may result in a displacement of pixels representing the objects within successive frames, such as is described in U.S. patent application Ser. No. 13/689,717 filed on Nov. 29, 2012 and entitled “APPARATUS AND METHODS FOR OBJECT DETECTION VIA OPTICAL FLOW CANCELLATION”, issued as U.S. Pat. No. 9,193,075, incorporated herein by reference in its entirety.
  • Motion of the objects within the camera 166 field of view 168 may comprise the self-motion component and the object motion component.
  • Motion of objects in FIG. 1B may comprise (i) apparent motion 180 of the stationary background 176 and the boundary (e.g., the component 172 associated with the floor boundary); (ii) a component 178 associated with the moving ball 174 that comprises a superposition of the ball displacement and the motion of the camera; and/or other components.
  • Determination of the ball 174 motion may be particularly challenging when the camera 166 is in motion (e.g., during panning) and/or when the field of view is changing (e.g., when zooming in/out).
  • FIG. 2C depicts two exemplary frames (e.g., provided by the camera 166 in FIG. 1B ) comprising multiple moving objects useful with the motion estimation methodology described herein.
  • the frames 240 , 250 may comprise an object 246 , 256 that may move in a given direction (e.g., 288 ).
  • the frames 240 , 250 may comprise an object 242 , 252 that may move back and forth in a direction indicated by arrow 244 .
  • Motion along curved trajectories may be resolved by using linear piece-wise approximation, wherein motion between successive frames may be interpreted as linear.
  • An increased frame rate and/or image resolution may be employed with complex motion trajectories.
  • the object 242 may comprise a target (e.g., ball) that may be moved back and forth in order to indicate to a controller of, e.g., the robotic vehicle 160 in FIG. 1B , a target to follow.
  • Frames 240 , 250 may represent positions of objects at two time instances. Due to the presence of multiple motions, detection of the object 242 , 252 may not be straightforward due to, for example, portions of the frames 250 being characterized by differential motion.
  • In the presence of differential motion, background (and/or self-motion) may be determined using a statistical analysis of the motion distribution within a given encoded frame.
  • Various statistical parameters may be determined, e.g., median, mean plus/minus n standard deviations, and/or others, in order to determine one or more prevailing (dominant) motion vectors for the frame.
  • the prevailing motion may be removed (e.g., via a vector subtraction) from the frame motion distribution in order to determine residual motion.
  • the residual motion may be analyzed (e.g., using a threshold technique) in order to detect one or more features that may be moving differently from the prevailing motion.
  • Prevailing motion may be determined using a clustering technique. For example, a motion field within the frame may be partitioned into a plurality of clusters based on analysis of the motion distribution. The largest-area cluster may be associated with the prevailing (dominant) motion, or may be removed (masked off) from the frame to obtain the residual motion distribution. The residual motion may be analyzed in order to determine the presence of one or more features based on the remaining clusters of motion.
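The statistical approach outlined above (estimate the prevailing motion, subtract it, and threshold the residual) can be sketched as follows. This is a minimal illustration, not the patented implementation; the motion field is assumed to be available as an (N, 2) array of per-macroblock (dx, dy) vectors, and the function and parameter names are hypothetical.

```python
import numpy as np

def remove_dominant_motion(motion_field, residual_threshold=2.0):
    """Estimate the prevailing (dominant) motion of a frame as the median of
    the per-macroblock motion vectors, subtract it, and flag blocks whose
    residual motion exceeds a threshold (candidate differently-moving features).

    motion_field: (N, 2) array of (dx, dy) vectors, one per macroblock.
    """
    dominant = np.median(motion_field, axis=0)       # prevailing (background) motion
    residual = motion_field - dominant               # vector subtraction
    magnitude = np.linalg.norm(residual, axis=1)
    feature_mask = magnitude > residual_threshold    # blocks moving differently
    return dominant, residual, feature_mask

# Example: mostly uniform background motion plus one faster-moving block.
field = np.array([[1.0, 0.0]] * 20 + [[6.0, 3.0]])
dominant, residual, mask = remove_dominant_motion(field)
print(dominant, mask.nonzero()[0])   # ~[1, 0] and the index of the outlier block
```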
  • image frames provided by a plurality of cameras may be utilized in order to determine depth of field and/or distance to objects using the disparity information.
  • the disparity may be considered as object motion occurring between the capture of frame 200 and the capture of frame 210 in FIG. 2A .
  • The disparity (e.g., apparent motion) 224 , 234 in FIG. 2B may be obtained using, in the exemplary implementation, motion estimation.
  • Various motion estimation algorithms exist (e.g., optical flow methodology, such as that described in U.S. patent application Ser. No.
  • the apparent motion due to disparity may be determined using motion estimation information provided by a video encoder.
  • In some implementations, frames provided by individual cameras (e.g., 106 , 108 in FIG. 1A ) may be interleaved in order to form an input stream for the encoder, as described below.
  • FIG. 3A illustrates determination of an input stream for motion processing using an alternating interleaver of stereo frame input, according to one or more implementations.
  • the frame sequences 300 , 310 (also referred to as channel A, B) may correspond to data provided by two cameras (e.g., left/right cameras 106 , 108 in FIG. 1A ), and/or data loaded from a disc or other source, in one or more implementations.
  • The frame sequences 300 , 310 may be processed by an alternating interleaver process 320 configured to produce an interleaved frame sequence 309 .
  • the sequence 309 may comprise alternating frames (e.g., 302 , 312 ) from left/right cameras, in some implementations.
  • FIG. 3B illustrates determination of an input stream for motion processing using an alternating interleaver of stereo frame input, according to one or more implementations.
  • The A, B frame sequences 300 , 310 , comprising frames (e.g., 302 , 312 ), may be processed by the alternating interleaver process 320 configured to produce an interleaved frame sequence 322 .
  • The sequence 322 may be configured to comprise alternating pairs of frames from a given channel. As shown in FIG. 3B , frames B1, B2 from channel B acquired at times t1, t2 may be followed by frames A1, A2 from channel A acquired at times t1, t2, followed by frames B3, B4 from channel B acquired at times t3, t4, wherein t4>t3>t2>t1.
  • the frame A1 may be repeated and/or preceded by a blank frame in the interleaved sequence 322 .
  • Use of an alternating approach may provide, inter alia, both motion and disparity information within a single encoded stream.
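The two interleaving patterns of FIGS. 3A-3B can be sketched as follows. The function names are hypothetical and the elements stand in for frames of any type (strings here); the second pattern reproduces the B1, B2, A1, A2, B3, B4, ... ordering described above, so that within-source transitions carry motion and cross-source transitions carry disparity.

```python
def interleave_alternating(seq_a, seq_b):
    """ABAB... interleaving (cf. FIG. 3A): A1, B1, A2, B2, ..."""
    out = []
    for a, b in zip(seq_a, seq_b):
        out += [a, b]
    return out

def interleave_alternating_pairs(seq_a, seq_b):
    """Alternating pairs (cf. FIG. 3B): B1, B2, A1, A2, B3, B4, A3, A4, ..."""
    out = []
    for i in range(0, min(len(seq_a), len(seq_b)) - 1, 2):
        out += [seq_b[i], seq_b[i + 1], seq_a[i], seq_a[i + 1]]
    return out

a = ['A1', 'A2', 'A3', 'A4']
b = ['B1', 'B2', 'B3', 'B4']
print(interleave_alternating(a, b))        # ['A1', 'B1', 'A2', 'B2', ...]
print(interleave_alternating_pairs(a, b))  # ['B1', 'B2', 'A1', 'A2', 'B3', 'B4', 'A3', 'A4']
```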
  • Sequences of interleaved frames may be utilized in order to determine depth of field of view and/or distance to objects using motion encoding, as described in detail below with respect to FIGS. 4A-5C .
  • FIG. 4A illustrates a processing apparatus configured to determine disparity from two image sources, according to one or more implementations.
  • The apparatus 400 may comprise two image sources 404 , 405 configured to provide information related to the environment 402 .
  • the sources 404 , 405 may comprise digital and/or analog cameras disposed separate from one another.
  • Individual cameras may comprise an image sensor (CCD, CMOS device, and/or an APS, photodiode arrays, and/or other sensors). It will be appreciated that in some implementations, such separation between the image sensors may be achieved even when the sensors are disposed on the same substrate or “chip” (e.g., two sensors placed at opposite ends of the same substrate/chip).
  • The image sources 404 , 405 may comprise video files on a storage device.
  • An example of such a file may include a stream of two-dimensional matrices of red, green, and blue (RGB) values (e.g., refreshed at a 25 Hz or other suitable frame rate). Other image representations (e.g., bitmap, luminance-chrominance (YUV, YCbCr), cyan-magenta-yellow and key (CMYK), grayscale, and/or other image representations) may be used as well.
  • data frames corresponding to other (non-visual) signal modalities such as sonograms, IR, radar, or tomography images may be equally compatible with the processing methodology of the disclosure, or yet other configurations.
  • Image frames 406 , 407 provided by the sources 404 , 405 may be interleaved by the interleaver apparatus 410 .
  • The interleaver apparatus 410 may comprise a 2×1 multiplexer configured to provide one of the input channels 406 , 407 at its output 412 at a given time.
  • the output 412 may comprise an alternating interleaved stream (e.g., 309 ), an alternating interleaved stream of frames (e.g., 322 in FIG. 3B ), or yet other option, in accordance with one or more implementations.
  • the interleaved output 412 may be provided to a motion estimation component 420 .
  • the motion estimation component may comprise a video encoder comprising one or more motion estimation operations.
  • The component 420 may comprise, for instance, a dedicated integrated circuit (IC) disposed on a single or multiple dies, a component of a processing system (e.g., the video encoder block of a Qualcomm® system on a chip), an ASIC, an FPGA with a video encoder intellectual property (IP) core, an OEM printed circuit board, and/or other.
  • Video encoding effectuated by the component 420 may comprise any applicable standard comprising motion estimation between one or more current images and one or more preceding images.
  • Some exemplary encoding implementations include H.264/MPEG-4 Advanced Video Coding, described, e.g., in ISO/IEC 14496-10 (2009), MPEG-4 Part 10, Advanced Video Coding; the H.263 standard, described, e.g., in ITU-T H.263, Telecommunication Standardization Sector of ITU (January 2005), Series H: Audiovisual and Multimedia Systems, Infrastructure of audiovisual services - Coding of moving video, Video coding for low bit rate communication; and H.262/MPEG-2, described, e.g., in ISO/IEC 13818-2, 2013 Oct.
  • the motion estimation component 420 may comprise logic configured to determine motion using optical flow, and/or other motion estimation algorithms such as but not limited to: block-matching algorithm, phase correlation, as well as determining locations of one or more features and estimating the motion of individual detected features.
  • Output 422 of the motion estimation component may be provided to a processing component 430 configured to determine one or more parameters of interest, including e.g., depth of the scene 402 and/or distance to objects that may be present in the scene, using motion based disparity determination methodology.
  • the component 420 may comprise MPEG-4/H.264 encoder configured to produce the encoded stream 315 .
  • the encoding of frame pair 302 , 312 may produce motion information for the encoded frame 317 .
  • The motion information of the frame 317 may comprise, e.g., horizontal and/or vertical displacement (dx, dy) of blocks of pixels (macroblocks) and may be interpreted as caused by a disparity between scene representations of frame 302 and frame 312 .
  • analysis of motion information for the frame 317 may produce the disparity D between left and right image frames.
  • the encoded frames 317 , 319 may provide disparity estimates D1, D2 associated with frames acquired at times t1, t2.
  • Frames 316 , 318 may be ignored (skipped) for the purposes of the disparity determination.
  • Disparity estimates D1, D2 may be used to determine distance to one or more objects that may be present within frames of sequences 300 , 310 , 315 .
  • the component 430 may be configured to parse the compressed video stream 422 in order to obtain motion information (e.g., map of vectors 916 in FIG. 9A ).
  • the motion information may comprise a macroblock location L (e.g., index), x-component, and y-component of motion of pixels associated with the macroblock location L.
  • The extracted motion information may be used for disparity and/or distance determination.
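A sketch of how the parsed per-macroblock records (location index, dx, dy) might be converted into distance estimates, using the pinhole relation assumed earlier. The record layout, function name, and the focal length and baseline values are illustrative assumptions, not details from the patent.

```python
from collections import namedtuple

# One record per macroblock, as extracted by parsing the encoded stream:
# block location index plus horizontal/vertical displacement in pixels.
MotionVector = namedtuple('MotionVector', ['location', 'dx', 'dy'])

def disparity_to_distance(vectors, focal_px=700.0, baseline_m=0.10):
    """Treat the horizontal component of motion vectors from a left/right
    frame pair as binocular disparity and convert each block's disparity to
    a distance via d = f * b / D (focal_px, baseline_m are example values)."""
    distances = {}
    for v in vectors:
        disparity = abs(v.dx)
        if disparity > 0:
            distances[v.location] = focal_px * baseline_m / disparity
    return distances

mvs = [MotionVector(0, -8.0, 0.5), MotionVector(1, -2.0, 0.0)]
print(disparity_to_distance(mvs))   # larger |dx| (nearer object) -> smaller distance
```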
  • Output 432 of the component 430 may be provided to another component (e.g., a controller of a robot).
  • Various uses of the depth information are contemplated such as, for example, object detection, object localization, distance estimation, trajectory planning, gesture detection, and/or others that will be recognized by those of ordinary skill when provided the present disclosure.
  • the component 420 may comprise MPEG-4/H.264 encoder configured to produce encoded stream 330 .
  • The encoding of frame pair 302 , 312 may produce motion information accessible via the encoded frame 332 . The motion information of the frame 332 may comprise horizontal and/or vertical displacement (dx, dy) of blocks of pixels and may be interpreted as caused by a disparity between scene representations of frame 302 and frame 312 in the context of stereo vision (e.g., as described above with respect to FIGS. 2A-2B ).
  • the encoded frames 332 , 336 may provide disparity estimates D1, D3 associated with frames acquired at times t1, t3.
  • the encoded frame 334 may provide negative disparity estimate ( ⁇ D2) associated with frames acquired at time t2.
  • Frames 333 , 338 may provide motion information associated with the frame sequence 310 .
  • Frame 335 may provide motion information associated with the frame sequence 300 .
  • Disparity estimates D1, D2, D3 may be used to determine distance to one or more objects that may be present within frames of sequences 300 , 310 , 330 using, e.g., Eqn. 1.
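For the alternating-pairs stream of FIG. 3B, whether a given encoded frame carries motion or disparity follows directly from whether its reference frame comes from the same source. A hypothetical labeling helper is sketched below; it is illustrative only.

```python
def label_encoded_frames(interleaved_sources):
    """Label each encoded frame of an interleaved stream by what its motion
    vectors represent: 'motion' when the reference frame comes from the same
    source, 'disparity' when it comes from the other source. The first frame
    is a keyframe carrying no motion information."""
    labels = ['keyframe']
    for prev, cur in zip(interleaved_sources, interleaved_sources[1:]):
        if cur == prev:
            labels.append('motion (%s)' % cur)
        else:
            labels.append('disparity (%s->%s)' % (prev, cur))
    return labels

# Alternating-pairs stream of FIG. 3B: B1 B2 A1 A2 B3 B4 A3 A4 ...
print(label_encoded_frames(['B', 'B', 'A', 'A', 'B', 'B', 'A', 'A']))
# ['keyframe', 'motion (B)', 'disparity (B->A)', 'motion (A)', 'disparity (A->B)', ...]
```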
  • FIGS. 3A-3C Although interleaving and/or concatenation of frames from two sources is illustrated in FIGS. 3A-3C , the methodology described herein may be employed for any practical number of sources (e.g., three, four as shown and described with respect to FIGS. 5A-5C below, and/or a greater number of sources).
  • FIG. 4B illustrates a processing apparatus configured to determine disparity from two or more image sources, according to one or more implementations.
  • the apparatus 440 may comprise a plurality of image sources (e.g., 443 , 445 ) configured to provide information related to the environment 402 .
  • the sources 443 , 445 may comprise sources described with respect to FIG. 4A above.
  • Image frames 446 , 448 provided by the sources 443 , 445 may be interleaved by the interleaver apparatus 450 .
  • The interleaver apparatus 450 may comprise an N×1 multiplexer configured to provide data from one of the input channels 446 , 448 at its output 452 at a given time.
  • the output 452 may comprise an alternating interleaved stream (e.g., 600 in FIG. 6A ), an alternating interleaved stream of frames (e.g., constructed similar to the stream 322 in FIG. 3B ) in accordance with one or more implementations.
  • the interleaved output 452 may be provided to a processing component 456 .
  • the component 456 may comprise motion estimation logic.
  • the motion estimation logic may comprise a video encoder comprising motion estimation operation.
  • The component 456 may comprise a dedicated integrated circuit (IC) disposed on a single or multiple dies, a component of a processing system (e.g., the video encoder block of a Qualcomm® system on a chip), an ASIC, an FPGA with a video encoder intellectual property (IP) core, an OEM printed circuit board, and/or other.
  • Video encoding effectuated by the component 456 may comprise any applicable standard comprising motion estimation between current frame and preceding frame.
  • the motion estimation component 456 may comprise logic configured to determine motion using optical flow, and/or other motion estimation algorithms such as but not limited to: block-matching algorithm, phase correlation, as well as determining locations of features and estimating the motion of those features.
  • the input 452 may be encoded using a video encoder (e.g., MPEG-4, H.265)
  • the component 456 may be configured to parse the encoded video stream in order to obtain motion information (e.g., map of vectors 916 in FIG. 9A ).
  • The component 456 may comprise logic configured to determine depth of the scene 402 and/or distance to objects that may be present in the scene using the motion-based disparity determination methodology.
  • The extracted motion information may be used for disparity and/or distance determination.
  • Output 458 of the component 456 may be provided to another component (e.g., a controller of a robot).
  • Various uses of the depth information may be contemplated such as, for example, object detection, object localization, distance estimation, trajectory planning, gesture detection, and/or others. Determining motion disparity and/or the distance may be performed for example using operations described above with respect to FIGS. 3A-4A .
  • data from multiple sources may be concatenated or otherwise combined/aggregated prior to encoding.
  • In some implementations, source images from left and right cameras (e.g., 106 , 108 in FIG. 1A ) may be combined into one image, also referred to as a combined and/or composite image.
  • the combined or concatenated image may comprise twice as many pixels (e.g., twice the width or twice the height) as any one of the source images.
  • the source images may be trimmed, compressed, and/or subsampled so that the concatenated image may comprise the same data size as the source image(s).
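A sketch of this combination step, assuming NumPy and OpenCV are available. The function name, the choice to subsample each source vertically by two (so the composite keeps roughly the data size of one source), and the top/bottom ordering are illustrative; the disclosure contemplates several layouts.

```python
import numpy as np
import cv2  # used here only for resizing

def concatenate_pair(left, right, preserve_size=True):
    """Stack a left/right image pair into one composite frame (cf. FIG. 3C).
    With preserve_size=True, each source is subsampled by 2 vertically so the
    composite has approximately the same pixel count as a single source."""
    if preserve_size:
        h, w = left.shape[:2]
        left = cv2.resize(left, (w, h // 2))
        right = cv2.resize(right, (w, h // 2))
    return np.vstack([right, left])   # one possible layout: right on top, left below

left = np.zeros((480, 640, 3), dtype=np.uint8)
right = np.full((480, 640, 3), 255, dtype=np.uint8)
print(concatenate_pair(left, right).shape)   # (480, 640, 3) with preserve_size=True
```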
  • FIG. 3C depicts combining two images into a single image for video encoding, according to one or more implementations.
  • the frame sequences 300 , 310 may correspond to data provided by two cameras (e.g., left/right cameras 106 , 108 in FIG. 1A ), and/or data loaded from a disc or other source, in one or more implementations.
  • frame 361 from the sequence 310 and frame 362 from the sequence 300 may be disposed on top of one another so as to form the combined frame 364 .
  • The frame layout shown in FIG. 3C is exemplary and is aimed at illustrating one possible implementation of frame combination; various other layouts (e.g., side by side, reversed order) may be utilized.
  • the combined (e.g., concatenated) frame output 366 may be provided to an encoder (e.g., the encoder component 420 and/or processor component 470 , described below with respect to FIGS. 4A, 4C , respectively).
  • dimensions of the combined frame may be selected to match dimension of one or more input frames (e.g., frames of sequences 300 , 310 ).
  • the frames 361 , 362 from the frames of sequences 300 , 310 may be adjusted (e.g., resampled, compressed and/or cropped) to fit into the frame 364 .
  • a dimension (e.g., 368 ) of the combined frame 364 may be selected to comprise a combination of the respective dimensions of the source frames (e.g., the dimensions 367 , 369 of frames 361 , 362 ).
  • The dimension of the combined frame (e.g., 364 ) may be configured based on a dimension of one of the source frames (e.g., the dimension 367 or 369 ).
  • Concatenation of a frame of 100×100 pixels (resolution 1), a frame of 50×50 pixels (resolution 2), and a frame of 25×25 pixels (resolution 3) may correspond to a concatenated frame of 150×100 pixel dimension.
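As a concrete illustration of the dimension arithmetic (100 + 50 = 150 columns, with the two smaller frames fitting within the extra 50-pixel-wide column), one possible packing is sketched below; the excerpt states only the resulting 150×100 dimension, so the exact placement is an assumption.

```python
import numpy as np

def concatenate_multires(f100, f50, f25):
    """Pack 100x100, 50x50 and 25x25 frames into a single 150x100 canvas:
    the 100x100 frame on the left, the smaller frames (plus padding) in a
    50-pixel-wide column on the right."""
    canvas = np.zeros((100, 150), dtype=f100.dtype)
    canvas[:100, :100] = f100
    canvas[:50, 100:150] = f50
    canvas[50:75, 100:125] = f25
    return canvas

out = concatenate_multires(np.ones((100, 100), np.uint8),
                           np.ones((50, 50), np.uint8),
                           np.ones((25, 25), np.uint8))
print(out.shape)   # (100, 150), i.e. a 150x100-pixel frame
```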
  • FIG. 3D illustrates a configuration of a sequence of frames for motion and disparity processing using combining input from multiple sources, according to one or more implementations.
  • The A, B frame sequences 300 , 310 , comprising frames (e.g., 302 , 312 ), may be processed by the alternating interleaver process 340 configured to produce a sequence of concatenated frames 350 .
  • the sequence 350 may comprise a plurality of frames (e.g., 342 , 344 in FIG. 3D ). Individual frames of the sequence 350 may be configured using a combination of frames from the sequences 300 , 310 .
  • The process of configuring frames of the sequence 350 may be referred to as interleaving of concatenated frames, as a given portion of one combined frame (e.g., the top portion of the frame 342 ) may be configured to contain a frame from a source (e.g., source A) that differs from the source of the content within the respective portion of another combined frame (e.g., a frame from source B in the combined frame 344 ).
  • the relative position of the source frames (e.g., 302 , 312 ) within the concatenated frame(s) (e.g., 342 , 344 ) may be configured in accordance with particular implementations.
  • Vertical placement of the source frames within the combined frame may be switched every two frames (e.g., the frames 342 , 347 , 348 may comprise a frame from one source ( 300 ) in the top portion and a frame from the other source ( 310 ) in the bottom portion).
  • the frame source may be altered every n (e.g., two) frames.
  • The frames 342 , 347 , 348 may comprise a frame from one source ( 300 ) in the top portion, while frames 344 , 346 , 349 may comprise a frame from the other source ( 310 ) in the top portion.
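The FIG. 3D-style construction (concatenate one frame from each source, and alternate which source sits on top) can be sketched as follows; the function name and the swap_every parameter (the "every n frames" above) are hypothetical.

```python
import numpy as np

def interleaved_concatenation(seq_a, seq_b, swap_every=1):
    """Build a sequence of composite frames in which the top/bottom placement
    of the two sources is swapped every `swap_every` composites, so a single
    encoded stream yields both within-source motion and cross-source disparity."""
    out = []
    for i, (a, b) in enumerate(zip(seq_a, seq_b)):
        a_on_top = (i // swap_every) % 2 == 0
        top, bottom = (a, b) if a_on_top else (b, a)
        out.append(np.vstack([top, bottom]))
    return out

seq_a = [np.full((2, 4), i, np.uint8) for i in range(4)]        # stand-in A frames
seq_b = [np.full((2, 4), 10 + i, np.uint8) for i in range(4)]   # stand-in B frames
frames = interleaved_concatenation(seq_a, seq_b)
print(frames[0][:, 0], frames[1][:, 0])   # placement flips between composites
```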
  • the frame sequence 350 may be encoded by an encoder process denoted by arrow 352 .
  • the encoder process 352 may comprise H.264, H.265, MPEG-4, and/or other applicable encoder characterized by motion estimation operation.
  • The encoding process 352 may produce the encoded frame sequence 354 . Use of concatenation and interleaving to construct the sequence 350 may enable obtaining, inter alia, both motion and disparity information within a single encoded frame sequence 354 .
  • the encoded frames 353 , 356 , 359 may provide disparity estimates DXn.
  • disparity estimates DA1, DA2, DA3 denote a disparity between frames of A/B sequences occurring at times t1, t2, t3, with the A sequence frame being the reference frame.
  • Disparity estimates DB1, DB2, DB3 denote disparity between frames of B/A sequences occurring at times t1, t2, t3, with the B sequence frame being the reference frame.
  • Motion estimates mA1, mA2 denote motion estimates between frames of the A sequence occurring at times t1, t2 and t2, t3, respectively. Motion estimates mB1, mB2 denote motion estimates between frames of the B sequence occurring at times t1, t2 and t2, t3, respectively.
  • Motion and/or disparity estimates may be obtained by parsing the sequence of encoded frames (e.g., 354 in FIG. 3D ).
  • the disparity information may be used to, inter alia, determine distance to one or more objects within the input sequence.
  • Concatenation and interleaving of frames from multiple image sources as shown in FIG. 3D represents one exemplary implementation.
  • the implementation of FIG. 3D may be configured to provide two estimates of motion (e.g., the encoded frame 355 ) or two estimates of disparity (e.g., the frame 353 ).
  • a variety of implementations may be employed for concatenation and interleaving of frames from multiple image sources, e.g., such as presented in Tables 1 through 7 below.
  • Concatenation and interleaving implementation of inputs A,B shown in Table 1 may enable determination of motion for the source A and source B from a given encoded concatenated frame.
  • Concatenation and interleaving implementations of inputs A, B shown in Tables 2, 3 may enable determination of motion for the source B, A, respectively, from a given encoded concatenated frame.
  • the disparity between frames of the sources A,B may be determined from every other encoded concatenated frame of the implementation of Tables 2, 3.
  • The concatenation and interleaving implementation of inputs A, B shown in Table 4 may enable determination of the disparity d between frames of the sources A, B from a given encoded concatenated frame. Motion information may be unavailable when using the frame configuration implementation shown in Table 4.
  • a combined frame may comprise data from three or more source frames.
  • Table 5 illustrates one such implementation of concatenation and interleaving of inputs A, B.
  • a given encoded combined frame may enable determination of motion (for one of the sources A or B) and disparity D between frames of the sources A,B.
  • the combination operation may comprise replication of a given image (e.g., image B1 in Table 2 and/or image A1 in Table 5) and concatenating multiple copies of a given image within a concatenated image.
  • input concatenation and/or interleaving may use input frames characterized by two or more resolutions.
  • A given concatenated frame may comprise a left portion comprising an input frame (e.g., A/B) at a first resolution, Res I (e.g., 1280×720 pixels).
  • The right portion may comprise a frame at another resolution, Res II, in this example characterized by one quarter the pixel count (e.g., 640×360 pixels) compared to the Res I frame, and a frame at a third resolution, Res III, characterized by 1/16th the pixel count (e.g., 320×180 pixels) compared to the Res I frame.
  • Table 7 illustrates one such implementation, wherein frames from sources A, B may be concatenated using two frame resolutions (denoted by Res I and Res II).
  • The Res I resolution may correspond to 1280×720 pixels, 640×360 pixels, or another resolution; the Res II resolution may correspond to 640×360 pixels, 320×180 pixels, or another resolution.
  • the frame configuration of Table 7 may enable obtaining motion m and disparity D from the left and right portions, respectively, of a given encoded concatenated frame.
  • FIG. 4C depicts a processing apparatus configured to determine disparity and/or motion using concatenation and/or interleaving of images from multiple sources, according to one or more implementations.
  • the apparatus 460 may be configured to receive input from a plurality of image sources (e.g., 404 , 405 ) configured to provide information related to the environment 402 .
  • the sources 404 , 405 may comprise the sources described with respect to FIG. 4A above.
  • Image sequences 466 , 468 provided by the sources 404 , 405 may be converted using the illustrated component 462 .
  • The component 462 may comprise an N×1 converter configured to place data from one of the input sequences 466 , 468 into its output frame at a given time.
  • the component 462 may be configured to implement frame concatenation, resampling, trimming, and/or other manipulations of input frames 466 , 468 .
  • operation of the component 462 may be configured using approaches described with respect to FIGS. 3C-3D herein, and/or Tables 1-7, above.
  • the output 464 may comprise, e.g., a sequence of concatenated frames (e.g., the sequence 350 in FIG. 3D ).
  • the output 464 may be provided to a processing component 470 .
  • the component 470 may comprise for example motion estimation logic.
  • the motion estimation logic-enabled component may comprise a video encoder comprising one or more motion estimation operations or algorithms.
  • the component 470 may comprise for instance an integrated circuit (IC) disposed on a single or multiple die, a component of a processing system (e.g., video encoder block of a Qualcomm® system on a chip), an ASIC, an FPGA with a video encoder intellectual property (IP) core, an OEM printed circuit board, and/or other configuration.
  • Video encoding effectuated by the component 470 may comprise any applicable standard comprising motion estimation between one or more current images and one or more preceding images.
  • Some exemplary encoding implementations include H.264/MPEG-4 Advanced Video Coding, described, e.g., in ISO/IEC 14496-10 (2009), MPEG-4 Part 10, Advanced Video Coding; the H.263 standard, described, e.g., in ITU-T H.263, Telecommunication Standardization Sector of ITU (January 2005), Series H: Audiovisual and Multimedia Systems, Infrastructure of audiovisual services - Coding of moving video, Video coding for low bit rate communication; and H.262/MPEG-2, described, e.g., in ISO/IEC 13818-2, 2013 Oct.
  • the component 470 may comprise logic configured to determine depth of the scene 402 , and/or distance to objects that may be present in the scene 402 , using motion based disparity determination methodology.
  • The extracted motion information may be used for disparity and/or distance determination.
  • the output 472 of the component 470 may be provided to another component (e.g., a controller of a robot).
  • Various uses of the depth information are contemplated herein, such as for example object detection, object localization, distance estimation, trajectory planning, and gesture detection. Determining motion disparity and/or the distance may be performed, for example, using the operations described above with respect to FIGS. 3A-4A .
  • the motion estimation logic referenced above may comprise a video encoder comprising one or more motion estimation operations.
  • the component 456 may comprise, for instance, a dedicated integrated circuit (IC) disposed on a single or multiple die, a component of a processing system (e.g., video encoder block of a Qualcomm® system on a chip), an ASIC, an FPGA with a video encoder intellectual property (IP) core, and/or an OEM printed circuit board.
  • Video encoding effectuated by the component 456 may comprise any applicable standard comprising motion estimation between current frame and preceding frame, such as e.g., those disclosed supra.
  • the motion estimation component 456 may comprise logic configured to determine motion using optical flow and/or other motion estimation algorithms, such as, but not limited to, block matching, phase correlation, and feature-based approaches that determine locations of features and estimate the motion of those features.
  • the component 456 may be configured to parse the encoded video stream in order to obtain motion information (e.g., map of vectors 916 in FIG. 9A ).
  • the encoded frame sequences 315 , 330 , 354 in FIGS. 3A-3C and/or output 422 in FIG. 4A may comprise one or more frames (also referred to as keyframes) that may not contain motion information.
  • the processing component 430 , 456 and/or 470 of FIGS. 4A-4C may be configured to detect and ignore (e.g., skip) frames that do not convey motion information.
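Keyframe handling may be sketched as follows (a hypothetical, simplified representation of the parsed stream; the EncodedFrame type and its field names are illustrative and not taken from any particular encoder API):

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class EncodedFrame:
        is_keyframe: bool
        # (macroblock_index, dx, dy) triplets parsed from the encoded stream
        motion_vectors: List[Tuple[int, int, int]] = field(default_factory=list)

    def collect_motion(frames):
        """Accumulate motion vectors, skipping keyframes that carry none."""
        motion = []
        for f in frames:
            if f.is_keyframe:        # keyframes convey no motion information
                continue
            motion.append(f.motion_vectors)
        return motion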
  • FIG. 5A illustrates a triple-camera configuration useful with disparity and/or motion determination using image interleaving and/or concatenation, according to one or more implementations.
  • the camera configuration 500 may comprise three cameras 510 , 520 , 530 , denoted A, B, C, respectively.
  • the configuration 500 may comprise a pair of horizontally spaced cameras (e.g., left 510 , right 520 ) and a vertically spaced camera ( 530 ).
  • the cameras 510, 520, 530 may be disposed in a linear array and/or another configuration. Frames provided by the cameras 510, 520, 530 may be interleaved and/or concatenated using any applicable methodologies, including those described with respect to FIGS. 3A-3C and/or 6A herein.
  • interleaving sequences may be employed when processing frames provided by the cameras 510 , 520 , 530 .
  • encoding an interleaved frame stream ABCA . . . (e.g., the stream 600 shown in FIG. 6A) comprising frames provided by the cameras 510, 520, 530 using a motion estimation encoder may provide motion due to disparity, shown by arrows 504, 514, 524 in FIG. 5A.
  • Encoding interleaved frame stream ACBA . . . , (e.g., 610 in FIG. 6A ) using a motion estimation encoder may provide motion due to disparity shown by arrows 526 , 516 , 506 in FIG. 5A .
  • Various other interleaving sequences may be utilized, such as, for example, ABBCCAABBCCAA . . . , BACA . . . , and/or others; a sketch of the interleaving operation is provided below.
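The interleaving of multiple camera streams according to a repeating camera-order pattern (e.g., ABC for the stream ABCA . . . , or ACB for ACBA . . . ) may be sketched as follows; names are illustrative only:

    import numpy as np

    def interleave(streams, pattern):
        """streams: dict mapping camera label -> list of frames; pattern: e.g. 'ABC'."""
        n = min(len(frames) for frames in streams.values())
        out = []
        for i in range(n):
            for label in pattern:
                out.append(streams[label][i])
        return out

    cams = {label: [np.zeros((360, 640), np.uint8)] * 3 for label in "ABC"}
    abca_stream = interleave(cams, "ABC")   # A, B, C, A, B, C, ... ordering
    acba_stream = interleave(cams, "ACB")   # reverse traversal of the cameras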
  • FIG. 5B illustrates a quad-camera configuration useful with disparity determination using image interleaving, according to one or more implementations.
  • the camera configuration 550 may comprise four cameras 552, 554, 556, 558, denoted A, B, C, D, respectively.
  • the configuration 550 may comprise two pairs of horizontally spaced cameras (e.g., left top 552, right top 554 and left bottom 556, right bottom 558) vertically spaced from one another.
  • Various other spatial camera configurations may be utilized as well.
  • Frames provided by the cameras 552, 554, 556, 558 may be interleaved and/or combined (e.g., concatenated) using any applicable methodologies, including those described with respect to FIGS. 3A-3C and/or 6B.
  • interleaving sequences may be employed when processing frames provided by the cameras 552 , 554 , 556 , 558 .
  • encoding an interleaved frame stream ABCDA . . . (e.g., the stream 620 shown in FIG. 6B) comprising frames provided by the cameras 552, 554, 556, 558 using a motion estimation encoder may provide motion due to disparity, shown by arrows 562, 564, 566, 568 in FIG. 5B.
  • encoding an interleaved frame stream that traverses the cameras in the reverse order (e.g., another of the streams shown in FIG. 6B) may provide motion due to disparity, shown by arrows 569, 567, 565, 563 in FIG. 5B.
  • Various other interleaving sequences may be utilized, such as, for example, the sequences 624, 628, 630, 634 illustrated in FIG. 6B.
  • Sequences comprising transitions between diagonally opposing cameras in FIG. 5B (e.g., AC, CA, BD, DB, and/or other) may be utilized as well.
  • FIG. 5C illustrates a linear multiple sensor element configuration useful with determining multiple depths scales using image interleaving, according to one or more implementations.
  • the configuration 570 may comprise sensor elements 572 , 574 , 576 , 578 disposed in a linear array.
  • individual sensor elements may comprise cameras or camera sensors. Spacing between the elements 572 , 574 , 576 , 578 may be the same (uniform linear array) and/or varying (e.g., a power law, random, and/or other). In some implementations, non-uniform spacing may be used in order to implement e.g., a Vernier scale.
  • interleaving sequences may be employed when processing frames provided by the elements 572, 574, 576, 578, such as, for example, the sequences 620, 624, 628, 630, 634, 638 illustrated in FIG. 6B and/or other sequences (e.g., ABBCCDDAA . . . ).
  • Use of multiple elements of the array 570 may enable determination of multiple disparity estimations, e.g., shown by arrows 580 , 582 , 584 , 586 , 588 , 590 .
  • the frames from individual sensor elements shown and described above with respect to FIGS. 5A-5C may be acquired simultaneously with one another using, e.g., multiple camera synchronization.
  • the disparity estimations corresponding to different sensor spacing may be characterized by different dynamic range, different resolution, and/or precision, e.g., in accordance with Eqn. 1.
  • Wide spaced elements (e.g., 572-578) may be capable of determining distance to objects with greater precision (e.g., lower uncertainty) as compared to estimates produced by closely spaced sensing elements (e.g., 572, 574).
  • multiple elements may be disposed in a non-linear array (e.g., rectangular and/or concave) thereby providing multiple perspectives and/or views of the scene to the processing component. Some views/perspectives may, e.g., reveal objects that may be hidden and/or partially obscured in other perspectives, thereby enabling more robust determination of object distance and/or object detection.
  • individual distance estimates (associated with individual camera pairs) may be combined using any appropriate methodologies (e.g., averaging, thresholding, median filtering, and/or other techniques) to obtain a resultant distance estimate characterized by greater precision and/or accuracy compared to individual estimates.
  • a distance estimate associated with one camera pair may be selected as the resultant distance estimate, thereby enabling robust distance determination in presence of occlusions that may (at least partly) block the object in a given set of frames.
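One possible fusion of per-pair estimates (a sketch; the median is merely one of the combining techniques mentioned above) is:

    import statistics

    def fuse_estimates(estimates_m):
        """Median fusion is robust to an outlier produced by an occluded camera pair."""
        return statistics.median(estimates_m)

    fuse_estimates([4.8, 5.1, 5.0, 9.7])   # -> 5.05; the occluded pair (9.7 m) has little effect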
  • the concatenation approaches may comprise placing two or more frames from two or more sequences into a single frame, e.g., as described above with respect to FIGS. 3C-3D , and/or Tables 1 through 4.
  • a frame from a given source may be repeated within the concatenated frame, e.g., as shown in Tables 2 and 7 above.
  • the concatenation operation may comprise resampling and/or resizing of the input frames, e.g., as described with respect to Tables 6-7 above.
  • FIG. 7 depicts a motion extraction apparatus, according to one or more implementations.
  • the apparatus 700 may comprise an encoder component 706 configured to encode input video stream 702 .
  • the input 702 may comprise one or more frames received from an image sensor (e.g., charge-coupled device (CCD), CMOS device, and/or an active-pixel sensor (APS), photodiode arrays, and/or other image sensors).
  • the input may comprise a pixel stream downloaded from a file.
  • An example of such a file may include a stream of two-dimensional matrices of red-green-blue (RGB) values (e.g., refreshed at a 25 Hz or other suitable frame rate).
  • Many other image representations (e.g., bitmap, luminance-chrominance (YUV, YCbCr), cyan-magenta-yellow and key (CMYK), grayscale, and/or other image representations) may be applicable to and useful with various implementations.
  • Data frames corresponding to other (non-visual) signal modalities, such as sonograms, IR, radar, or tomography images, are equally compatible with the processing methodology of the disclosure, as are yet other configurations.
  • the component 706 may comprise a specialized video encoder configured to implement video encoding comprising a motion estimation operation.
  • the component 706 may comprise a dedicated integrated circuit (IC) disposed on a single or multiple die, a component of a processing system (e.g., video encoder block of a Qualcomm® system on a chip), an ASIC, an FPGA with a video encoder intellectual property (IP) core, an OEM printed circuit board, and/or other.
  • Video encoding effectuated by the component 706 may comprise any applicable standard comprising motion estimation between current frame and preceding frame.
  • Some encoding implementations may comprise MPEG-4, H.262, H.263, H.264, H.265 video encoder such as described above with respect to FIG. 4A supra.
  • the component 706 may provide encoded video output 708 .
  • the output 708 may be characterized by a lower data rate (e.g., as represented by fewer bits per frame) as compared to the input video signal 702 .
  • the output 708 may comprise pixel luminance and/or chromaticity data.
  • the output 708 may comprise motion information, e.g., as illustrated in FIG. 9A which depicts output of a video encoder useful with the motion extraction methodology.
  • the output illustrated in FIG. 9A may correspond to an occurrence of an object (e.g., a moving ball, represented by the hashed circle 900 in FIG. 9A) in the input 702 of FIG. 7.
  • the encoded output 708 may comprise a luminance component (also referred to as “luma”) depicted by area 902 in FIG. 9A .
  • the encoded luminance may be comprised of a plurality of macroblocks 904. The macroblock size may be configured in accordance with specifications of an application (e.g., encoding standard, video frame size, resolution, quality, refresh rate, bit depth, channel (e.g., luma, chroma), and/or other) and may be selected, for example, as 16×16 for the luma channel and 8×8 for the chroma channel for an H.264 encoder.
  • the encoded output 708 may comprise motion information, denoted by area 910 in FIG. 9A .
  • Motion information may comprise one or more vectors (e.g., 916) associated with one or more macroblocks (e.g., 914).
  • Compressed video 708 in FIG. 7 may be provided to a processing component 710 .
  • the component 710 may be configured to parse the compressed video stream 708 in order to obtain motion information (e.g., map of vectors 916 in FIG. 9A ).
  • the motion information may comprise a macroblock location L (e.g., index), x-component, and y-component of motion of pixels associated with the macroblock location L.
  • the extracted motion information 712 may be provided to another component.
  • Various uses of the motion information are contemplated, such as, for example, object detection (e.g., by recognizing the shape of the surface of the object and/or by using depth to segment the scene), gesture detection (e.g., by determining the orientation of the hands or other body parts), and/or other uses.
  • the compressed video may be provided via a pathway 714 to a target destination (e.g., general purpose processor for streaming to a display and/or other components).
  • FIG. 8 depicts a video processing system, comprising a differential motion extraction apparatus, according to one or more implementations.
  • the system 800 of FIG. 8 may be configured to receive sensory input 802 .
  • the input 802 may comprise the input 702 described above with respect to FIG. 7 .
  • the input 802 may be encoded by a video encoder component 806 .
  • the component 806 may comprise the component 706 described above with respect to FIG. 7 .
  • the component 806 may be configured to encode the input 802 using one or more encoding formats (e.g., H.264).
  • the encoded signal 808 may be provided to component 810 .
  • the component 810 may be configured to parse the encoded signal 808 to extract motion information 812 by, e.g., extracting from the compressed video data the P slice (P-frame) data, which contains the motion information (x and y components) of the macroblock motion for all macroblocks covering the current frame.
  • the extracted motion information may be used in controlling a robotic device.
  • the extracted motion information may comprise horizontal and/or vertical displacement (e.g., the motion vector components (dx, dy)) of a pixel group (e.g., a macroblock) between the current frame and a preceding frame.
  • the parsed motion information may be represented using the YUV color model.
  • the (U,V) channels may represent the (dx,dy) displacement, and the Y channel may be used for representing additional information (e.g., an indication as to whether the current frame is a keyframe, the macroblock size (e.g., 16×16, 8×8, and/or other size), and/or other information).
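A sketch of such a packing is given below (illustrative; the bias applied to the signed displacements and the flag layout in the Y channel are assumptions, not part of the disclosure):

    import numpy as np

    def pack_motion_yuv(dx, dy, is_keyframe, block_size):
        """Pack per-macroblock motion into a YUV-style image.

        dx, dy: integer arrays (one entry per macroblock) with values in [-128, 127].
        """
        y = np.full(dx.shape, (1 if is_keyframe else 0) | (block_size << 1), np.uint8)
        u = (dx + 128).astype(np.uint8)   # bias signed displacement into 0..255
        v = (dy + 128).astype(np.uint8)
        return np.stack([y, u, v], axis=-1)

    mv_image = pack_motion_yuv(np.array([[3, -2]]), np.array([[0, 5]]), False, 16)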
  • the input 802 may be processed by a processing component 820 .
  • the component 820 may comprise an artificial neuron network (ANN) comprising a plurality of nodes.
  • Individual nodes of the component 820 network may comprise neuron units characterized by a receptive field, e.g., region of space in which a presence of a stimulus may affect response of the neuron.
  • the units may comprise spiking neurons, and the ANN may comprise a spiking neuron network (SNN).
  • Receptive fields of the network 820 units may be configured to span several pixels within the input 802 frames so as to effectuate a sparse transformation of the input 802.
  • Various applicable methodologies may be utilized in order to effectuate the sparse transformation, including, for example, those described in co-pending and co-owned U.S. patent application Ser. No. 13/540,429, entitled “SENSORY PROCESSING APPARATUS AND METHODS”, filed Jul. 2, 2012, and U.S. patent application Ser. No. 13/623,820, entitled “APPARATUS AND METHODS FOR ENCODING OF SENSORY DATA USING ARTIFICIAL SPIKING NEURONS”, filed on Sep.
  • the encoding may comprise a sparse transformation, described in, e.g., U.S. patent application Ser. No. 14/191,383, entitled “APPARATUS AND METHODS FOR TEMPORAL PROXIMITY DETECTION”, filed on Feb. 26, 2014, the foregoing being incorporated herein by reference in its entirety.
  • the output 812 of the component 810 may be provided to the processing component 820.
  • the component 820 may use the motion information 812 in order to determine characteristics (e.g., location, dimension, shape, and/or other) of one or more objects in sensory input 802 .
  • the component 820 may comprise an adaptive predictor component configured to determine a control output 826 for a robotic device (e.g., the vehicle 100 , 160 in FIGS. 1A-1B ) based on the input 812 and/or inputs 802 , 812 .
  • the input 812 and/or 802 may be used by the component 820 in order to predict a control signal configured to cause the vehicle 160 in FIG. 1B to execute a given task.
  • FIG. 9B illustrates motion of an object obtained from encoded video, according to one or more implementations.
  • Hashed area 922 in FIG. 9B may represent the luminance component of an image of a ball (e.g., 900 in FIG. 9A).
  • the encoded output may comprise motion information, denoted by area 920 in FIG. 9B.
  • Motion information may comprise one or more vectors (e.g., 926) associated with one or more macroblocks (e.g., 924).
  • Encoded representations of FIGS. 9A-9B may be used to determine temporal distribution of motion associated with the ball 900 .
  • A motion pattern comprising alternating opposing motion vectors 916, 926 may be employed to communicate an action indication to a robotic device.
  • a user may shake an object from left to right in front of a camera of an autonomous vehicle in order to indicate a target to be followed.
  • FIG. 9C illustrates spatial distribution of motion extracted from encoded video, according to one or more implementations.
  • the representation shown in FIG. 9C may comprise portion 930 comprising a first plurality of macroblocks 932 characterized by first motion direction 936 .
  • the representation shown in FIG. 9C may comprise portion 940 comprising a second plurality of macroblocks 942 characterized by second motion direction 946 .
  • the spatial motion map illustrated in FIG. 9C may be employed to communicate an action indication to a robotic device.
  • a user may wave arms (in a crisscross manner) in order to indicate a stop and/or other command to a robotic device.
  • motion information for a given frame may be characterized by a plurality of different motion vectors due to, e.g., motion of different objects, camera pan/zoom operation, and/or video acquisition from a moving platform.
  • video signal obtained by the camera 166 may comprise a representation of a human making gestures superimposed on a moving background. Detection of the motion associated with the gestures against the background motion may be referred to as differential motion detection.
  • the background may be characterized by spatially coherent (uniform) motion. Background motion for a given frame may be estimated and removed.
  • the resultant motion field may be analyzed in order to determine, e.g., hand gesture(s) and/or objects.
  • a sequence of frames may be characterized by background motion that is temporally coherent over the timescale associated with the frame sequence. Background motion for the sequence of frames may be estimated and removed from individual frames within the sequence.
  • the resultant motion field may be analyzed in order to determine, e.g., hand gesture(s) and/or objects.
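A minimal sketch of the background-removal step (assuming spatially coherent background motion that is well approximated by the per-frame median vector; names are illustrative) is:

    import numpy as np

    def remove_background_motion(mv_field):
        """mv_field: H x W x 2 array of (dx, dy) macroblock motion vectors."""
        background = np.median(mv_field.reshape(-1, 2), axis=0)
        return mv_field - background    # residual field: gestures and/or moving objects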
  • FIGS. 10-15 illustrate methods 1000 , 1100 , 1200 , 1300 , 1400 , 1500 for determining and using motion information from encoded video.
  • the operations of methods 1000 , 1100 , 1200 , 1300 , 1400 , 1500 presented below are intended to be illustrative. In some implementations, method 1000 , 1100 , 1200 , 1300 , 1400 , 1500 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of method 1000 , 1100 , 1200 , 1300 , 1400 , 1500 are illustrated in FIGS. 10-15 and described below is not intended to be limiting.
  • methods 1000 , 1100 , 1200 , 1300 , 1400 , 1500 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information).
  • the one or more processing devices may include one or more devices executing some or all of the operations of methods 1000 , 1100 , 1200 , 1300 , 1400 , 1500 in response to instructions stored electronically on an electronic storage medium.
  • the one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of methods 1000 , 1100 , 1200 , 1300 , 1400 , 1500 .
  • FIG. 10 illustrates a method of determining a salient feature using encoded video motion information, in accordance with one or more implementations.
  • Operations of method 1000 may be applied to processing of sensory data (e.g., audio, video, RADAR imagery, SONAR imagery, and/or other imagery), observation data, motor command activity in a robotic system, and/or other systems or data.
  • one or more consecutive input video frames may be encoded.
  • the frames may be provided by an image sensor (e.g., CCD, CMOS device, and/or APS, photodiode arrays, and/or other image sensors).
  • the input may comprise a pixel stream downloaded from a file, such as a stream of two-dimensional matrices of red-green-blue (RGB) values (e.g., refreshed at a 25 Hz or other suitable frame rate).
  • Other image representations (e.g., bitmap, luminance-chrominance (YUV, YCbCr), CMYK, grayscale, and/or other image representations) may be applicable as well.
  • Data frames corresponding to other (non-visual) signal modalities such as sonograms, IR, radar or tomography images may be compatible with the processing methodology of the disclosure, and/or other configurations.
  • the frames may form real-time (live) video.
  • the encoding may comprise operations performed in accordance with any applicable encoding standard comprising motion estimation between successive frames (e.g., H.263, H.264, and/or other).
  • encoded video may be parsed in order to obtain motion information.
  • the motion information may comprise a plurality of motion vectors and their locations as associated with one or more macroblocks within the encoded frame (e.g., the vector 916 of macroblock 914 in FIG. 9A ).
  • a salient feature may be determined using motion information.
  • the feature determination may be based on analysis of the spatial motion map within a given frame (e.g., the motion map comprising the areas 930, 940 in FIG. 9C).
  • the feature determination may be configured based on analysis of motion temporal characteristics (e.g., persistence of motion features in a given location over multiple frames, comparing motion at a given location between two or more frames, and/or other).
  • FIG. 11 illustrates a method of data processing useful for determining features, in accordance with one or more implementations.
  • live video may be obtained during execution of a task.
  • the video may be obtained with a video camera disposed on the vehicle.
  • the video stream may be encoded using any applicable standard comprising motion estimation operation (e.g., H.263, H.264, and/or other).
  • motion information may be determined from the encoded video stream.
  • the encoded video stream may be parsed in order to obtain motion information.
  • the motion information may comprise a plurality of motion vectors and their locations as associated with one or more macroblocks within the encoded frame (e.g., the vector 916 of macroblock 914 in FIG. 9A ).
  • a location of an object within a video frame may be determined using the motion information obtained at operation 1104.
  • the location determination may be based on temporal and/or spatial persistence (coherence) of motion over a given area and/or over several frames.
  • occurrence of a plurality of macroblocks characterized by motion vectors within a given margin from one another (e.g., 5-20% in one implementation) in a given frame may indicate a moving object.
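A sketch of this coherence test is given below (illustrative; the 10% margin and the use of the median magnitude as the reference are assumptions within the 5-20% range mentioned above):

    import numpy as np

    def coherent_motion_mask(mv_field, margin=0.10):
        """Flag macroblocks whose motion magnitude lies within the margin of the dominant motion."""
        mags = np.linalg.norm(mv_field, axis=-1)
        if not np.any(mags > 0):
            return np.zeros(mags.shape, bool)
        dominant = np.median(mags[mags > 0])
        return np.abs(mags - dominant) <= margin * dominant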
  • the object associated with the location determined at operation 806 may be related to a task action. Based on the action determination, a control signal may be provided.
  • the control signal provision may be configured based on operation of an adaptive predictor, e.g., such as described in U.S. patent application Ser. No. 13/842,530, entitled “ADAPTIVE PREDICTOR APPARATUS AND METHODS”, filed on Mar. 15, 2013, incorporated supra.
  • the action may be executed.
  • By way of an illustration, the object may comprise the ball 174 in FIG. 1A, the motion information may indicate the ball moving to the left of the vehicle, the task may comprise target pursuit, and the action may comprise a left turn by the vehicle.
  • FIG. 12 is a logical flow diagram illustrating a method of executing an action configured based on a gesture detected using motion information, in accordance with one or more implementations.
  • motion information may be determined using one or more encoded frames in a sequence of frames.
  • the motion information may comprise motion vectors due to gestures of a human (e.g., vectors 936, 946 in FIG. 9C).
  • the frame sequence may be configured using interleaving and/or combination operation, e.g., such as the exemplary concatenation shown and described with respect to FIGS. 3A-3D .
  • a spatio-temporal distribution of the motion information may be determined.
  • the motion map may comprise one or more areas of macroblocks (e.g., the area 910 in FIG. 9A and/or 940 in FIG. 9C) characterized by similar motion vector components (e.g., components of vector 946 in FIG. 9C).
  • temporal motion distribution may be determined by analyzing motion associated with a portion of the frame (e.g., the area 940 in FIG. 9C ) over a plurality of consecutive frames.
  • disparity-related information may be determined based on the motion in the encoded frame sequence.
  • the frame sequence used for encoding may comprise interleaved and/or concatenated frames from left/right cameras (e.g., the cameras 106 , 108 in FIG. 1 )
  • the motion information in the encoded frame sequence may be due to disparity between left/right views of a scene.
  • the disparity determination of operation 1206 may be effectuated based on interleaving and/or combination of images from multiple sources using any applicable methodologies, such as those described herein (e.g., with respect to FIGS. 3A-3D , and/or FIGS. 6A-6B ).
  • a spatio-temporal distribution map of distance may be determined based on the disparity information obtained at operation 1206 .
  • the distance determination may be effectuated using Eqn. 1.
  • a gesture may be determined based on a spatio-temporal pattern that may occur within the motion distribution determined at operation 1204 and/or the distance distribution determined at operation 1207 .
  • a pattern of alternating motion vectors of a rectangular area within the frame may correspond to a crisscross motion of arms by the user indicating an alert (e.g., a stop) command to the robotic device.
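Detection of such an alternating pattern may be sketched as follows (illustrative only; the per-frame mean horizontal displacement of the region and the sign-flip count are assumptions):

    import numpy as np

    def is_alternating(mean_dx_per_frame, min_cycles=2):
        """Return True when the region's horizontal motion reverses direction repeatedly."""
        signs = np.sign(mean_dx_per_frame)
        flips = np.count_nonzero(signs[1:] * signs[:-1] < 0)
        return flips >= 2 * min_cycles

    is_alternating([4.0, -3.5, 4.2, -3.8, 4.1, -3.9])   # -> True (crisscross-like shake)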
  • motion information for a given frame may be characterized by a plurality of different motion vectors due to, e.g., motion of different objects, camera pan/zoom operation, and/or video acquisition from a moving platform.
  • video signal obtained by the camera 166 may comprise a representation of a human making gestures superimposed on a moving background.
  • a distance to one part of the human body (e.g., the head and/or the torso) may be used as a reference when determining the gesture.
  • an action may be executed in accordance with the gesture determined at operation 1206 .
  • the robotic device may stop trajectory navigation.
  • gesture detection may be effectuated by a spoofing controller, e.g., such as described in U.S. patent application Ser. No. 14/244,892, entitled “SPOOFING REMOTE CONTROL APPARATUS AND METHODS”, filed on Apr. 3, 2014, incorporated herein by reference in its entirety.
  • the spoofing controller may be trained to develop associations between the detected gestures and one or more remote control commands (by e.g., an IR remote operating a home appliance (TV)).
  • the developed associations may enable the spoofing controller to operate the TV in accordance with gestures of a user in lieu of the remote controller commands.
  • a commercially available off-the shelf hardware video encoder (e.g., 1006 in FIG. 10 ) may be used to provide a compressed video stream.
  • hardware encoders may be utilized in order to reduce video data rate in order to reduce storage, and/or bandwidth load associated with manipulation of video information.
  • The motion extraction methodology described herein may advantageously enable determination of motion information by an application device using compressed video that may already be available, albeit generated for other purposes (e.g., reduction in storage and/or bandwidth).
  • Use of available compressed video comprising motion estimation data may substantially reduce the computational load associated with motion determination, compared to existing techniques such as optic flow and/or motion estimation algorithms such as, but not limited to, block matching, phase correlation, and feature-based approaches that determine locations of features and estimate the motion of those features.
  • FIG. 13 illustrates a method of determining a depth of visual scene using encoded interleaved stereo image information, in accordance with one or more implementations.
  • a monocular frame configuration may be obtained using a stereo image of a visual scene.
  • the monocular frame configuration may comprise an interleaved frame sequence 309 , 322 described above with respect to FIGS. 3A-3B and/or an interleaved concatenated frame sequence described above with respect to FIG. 3C .
  • the monocular frame configuration may comprise an interleaved frame sequence such as shown in FIGS. 6A and/or 6B .
  • the monocular frame sequence may be encoded using a motion estimation encoder.
  • the encoding may be performed by a specialized video encoder comprising a motion estimation operation (e.g., MPEG-4, H.264, or other).
  • the depth of the visual scene may be determined using motion information of the encoded data obtained at operation 1304.
  • the motion information may be obtained by parsing the compressed video stream (e.g., 422 in FIG. 4A).
  • the motion information may comprise a macroblock location L (e.g., index), x-component, and y-component of motion of pixels associated with the macroblock location L.
  • the extracted motion information may be used for disparity and/or distance determination.
  • Various uses of the depth information may be contemplated such as, for example, object detection, object localization, distance estimation, trajectory planning, gesture detection, and/or other.
  • FIG. 14 illustrates a method of determining distance to objects using motion of interleaved concatenated image sequence, in accordance with one or more implementations.
  • frames from multiple cameras may be combined to produce, e.g., a concatenated image.
  • the concatenated image may be configured in accordance with the approach described above with respect to FIGS. 3C-3D and/or Tables 1-7.
  • concatenated frames determined at operation 1401 may be interleaved to produce an interleaved frame sequence.
  • the interleaved frame sequence may comprise a frame sequence such as shown in FIGS. 3A-3B , FIGS. 6A and/or 6B .
  • the interleaved concatenated frame sequence may be encoded using a motion estimation encoder.
  • the encoding may be performed by a specialized video encoder comprising a motion estimation operation (e.g., MPEG-4, H.264, or other).
  • an object may be detected based on a spatio-temporal pattern within the motion information of the encoded data.
  • the motion information may be obtained by parsing the compressed video stream (e.g., 422 in FIG. 4A comprising, e.g., encoded frames 356, 360, 364 shown and described with respect to FIG. 3B).
  • Object detection may be effectuated using any applicable methodologies, including those described above with respect to FIGS. 9A-9C.
  • distance to the object identified at operation 1406 may be determined.
  • the distance determination may be configured based on the disparity data that may be obtained from the motion information of the encoded data (e.g., the frames 332 , 358 , 362 in FIG. 3B ).
  • Various uses of the distance information may be contemplated such as, for example, object detection, trajectory planning, gesture detection, obstacle avoidance, and/or other.
  • FIG. 15 illustrates a method of executing an action configured based on detecting an object in motion information, in accordance with one or more implementations.
  • an interleaved and/or concatenated frame sequence may be encoded using a motion estimation encoder.
  • the concatenation and/or interleaving may be configured based on frames from multiple sources, and comprise for example the operations described above with respect to FIGS. 3A-3D , and/or 6 A, 6 B.
  • the encoding may be performed by a specialized video encoder comprising a motion estimation operation (e.g., MPEG-4, H.264, or other).
  • distance to the object may be determined using disparity determined from the motion information of the encoded data.
  • the distance determination may be configured based on the disparity data that may be obtained from the motion information of the encoded data (e.g., the frames 332 , 358 , 362 in FIG. 3B ).
  • an action may be associated with the object parameters determined at operation 1504 .
  • the object parameters may comprise object features (e.g., shape, color, identity), location, distance, speed, and/or other.
  • the object may comprise a ball 112 in FIG. 1A rolling across the path of the vehicle 100 .
  • the distance to the ball 112 and the ball motion data may indicate that the vehicle 100 may collide with the ball 112 .
  • the action may comprise a left/right turn and/or reducing the speed of the vehicle 100.
  • Action execution may be configured based on output of an adaptive predictor apparatus configured to predict a control signal for the robotic vehicle 100 in FIG. 1.
  • the predictor may be operated in accordance with a learning process such as described, for example, in U.S. patent application Ser. No. 13/842,530, entitled “ADAPTIVE PREDICTOR APPARATUS AND METHODS”, filed on Mar. 15, 2013, the foregoing being incorporated supra.
  • FIGS. 16A-16D illustrate gestures of a human operator used for communicating control indications to a robotic device comprising distance determination apparatus described herein, in accordance with one or more implementations.
  • FIG. 16A is a top view of a user and may illustrate a base posture of the user.
  • FIG. 16B may depict user gestures 1600 communicating a right turn action to a robotic device (e.g., the vehicle 100 in FIG. 1A).
  • the robotic device 100 may utilize stereo images provided by the cameras 106, 108 in order to detect the position of the user arms 1604, 1608.
  • the arm 1604, 1608 position may be determined using the distance determination methodology configured based on encoding interleaved left/right portions of the stereo imagery.
  • the gesture in FIG. 16B may be determined based on a comparison of the distance between the robot and the user arms in positions 1604, 1608 in FIG. 16B relative to the arm positions in the base posture of FIG. 16A.
  • the gesture in FIG. 16B may be determined based on a comparison of the distance between the robot and the user arms in positions 1604, 1608 relative to the user head 1602 in FIG. 16B.
  • FIG. 16C is a side view of the user and may depict user gesture 1610 communicating a stop action to a robotic device (e.g., the vehicle 100 in FIG. 1A ).
  • the robotic device 100 may utilize stereo images provided by the cameras 106, 108 in order to detect the position of the user arms, head 1612, 1642, and/or hands 1614, 1644.
  • the hand 1614, 1644 position may be determined using the distance determination methodology configured based on encoding interleaved left/right portions of the stereo imagery.
  • the gesture in FIG. 16C may be obtained based on a comparison of the distance between the robot and the user hand in position 1614 in FIG. 16C relative to the user hand in position 1644 in FIG. 16D.
  • the gesture in FIG. 16C may be determined based on a comparison of the distance between the robot and the user hand in position 1614 relative to the user head 1612 in FIG. 16C.
  • the user may communicate an indication to the robotic device by, e.g., appearing in view of the camera.
  • the user stepping in front of the vehicle may indicate a stop action to the vehicle.
  • the present disclosure also contemplates a computerized controller apparatus for implementing, inter alia, motion and/or distance determination methodology in accordance with one or more implementations.
  • the controller apparatus may comprise a processing module configured to receive sensory input from sensory block (e.g., cameras 106 , 108 in FIG. 1A ).
  • the sensory module may comprise audio input/output portion.
  • the processing module may be configured to implement signal processing functionality (e.g., distance estimation, object detection based on motion maps, and/or other).
  • the controller apparatus may comprise memory configured to store executable instructions (e.g., operating system and/or application code, raw and/or processed data such as raw image frames and/or object views, teaching input, information related to one or more detected objects, and/or other information).
  • the processing module may interface with one or more of the mechanical, sensory, electrical, power components, communications interface, and/or other components via driver interfaces, software abstraction layers, and/or other interfacing techniques.
  • additional processing and memory capacity may be used to support these processes.
  • these components may be fully controlled by the processing module.
  • the memory and processing capacity may aid in processing code management for the controller apparatus (e.g. loading, replacement, initial startup and/or other operations).
  • the various components of the device may be remotely disposed from one another, and/or aggregated.
  • the instructions operating the haptic learning process may be executed on a server apparatus that may control the mechanical components via network or radio connection.
  • multiple mechanical, sensory, electrical units, and/or other components may be controlled by a single robotic controller via network/radio connectivity.
  • the mechanical components of the controller apparatus may include virtually any type of device capable of motion and/or performance of a desired function or task.
  • Examples of such devices may include one or more of motors, servos, pumps, hydraulics, pneumatics, stepper motors, rotational plates, micro-electro-mechanical devices (MEMS), electroactive polymers, shape memory alloy (SMA) activation, and/or other devices.
  • the sensor devices may interface with the processing module, and/or enable physical interaction and/or manipulation of the device.
  • the sensory devices may enable the controller apparatus to accept stimulus from external entities.
  • external entities may include one or more of video, audio, haptic, capacitive, radio, vibrational, ultrasonic, infrared, motion, and temperature sensors, radar, lidar, and/or sonar, and/or other external entities.
  • the module may implement logic configured to process user commands (e.g., gestures) and/or provide responses and/or acknowledgment to the user.
  • the electrical components may include virtually any electrical device for interaction and manipulation of the outside world.
  • Examples of such electrical devices may include one or more of light/radiation generating devices (e.g. LEDs, IR sources, light bulbs, and/or other devices), audio devices, monitors/displays, switches, heaters, coolers, ultrasound transducers, lasers, and/or other electrical devices. These devices may enable a wide array of applications for the apparatus in industrial, hobbyist, building management, medical device, military/intelligence, and/or other fields.
  • the communications interface may include one or more connections to external computerized devices to allow for, inter alia, management of the controller apparatus.
  • the connections may include one or more of the wireless or wireline interfaces discussed above, and may include customized or proprietary connections for specific applications.
  • the communications interface may be configured to receive sensory input from an external camera, a user interface (e.g., a headset microphone, a button, a touchpad, and/or other user interface), and/or provide sensory output (e.g., voice commands to a headset, visual feedback, and/or other sensory output).
  • the power system may be tailored to the needs of the application of the device. For example, for a small hobbyist robot or aid device, a wireless power solution (e.g. battery, solar cell, inductive (contactless) power source, rectification, and/or other wireless power solution) may be appropriate. However, for building management applications, battery backup/direct wall power may be superior, in some implementations.
  • the power system may be adaptable with respect to the training of the apparatus 1800 .
  • the controller apparatus may improve its efficiency (to include power consumption efficiency) through learned management techniques specifically tailored to the tasks performed by the controller apparatus.
  • Various aspects of the disclosure may advantageously be applied to design and operation of apparatus configured to process sensory data.
  • Implementations of the principles of the disclosure may be applicable to detecting objects by a wide variety of stationary and portable video devices, such as, for example, smart phones, portable communication devices, notebook, netbook and tablet computers, surveillance camera systems, and practically any other computerized device configured to process vision data.
  • the motion information may be used as a proxy for optic flow (estimated motion (dx,dy) on a grid across the frame of the video).
  • Use of available hardware encoders to obtain motion data may reduce energy use by portable devices, enable motion and/or distance detection on higher resolution video (e.g., resolutions greater than 320×240), and/or improve motion detection resolution (e.g., in order to detect gestures) compared to optic flow detection techniques.
  • Interleaving of frames from multiple spatially displaced cameras may enable determination of binocular disparity between pairs of camera images using motion estimation.
  • Use of an off-the-shelf, commercially available hardware video encoder (e.g., MPEG-4, H.265, and/or other encoder) may reduce the computational and/or energy cost of the motion estimation component compared to use of optical flow for determining motion.
  • Encoded video may be parsed to obtain motion information.
  • Motion corresponding to a pair of frames from displaced cameras may be interpreted as a measure of disparity. The disparity may be utilized in order to determine depth of the visual scene and/or distance to objects within the visual scene.
  • embodying a motion and/or distance determination component and/or a distance determination component of the disclosure in a robotic vehicle may extend duration of autonomous operation of the robotic apparatus due to, in part, lower energy use that may be associated with motion/distance detection based on video encoded using hardware encoder, as compared to using video processing in a CPU (e.g., optical flow, and/or pixel block matching).
  • the increased autonomy may be characterized by the robotic device capability to perform a given action (e.g., a flight route and/or surveillance route) an additional number of times without recharging, and/or being capable of completing longer routes on a given charge as compared to the prior art solutions.
  • the reduced energy use may be leveraged for producing a smaller, lighter and/or less costly robot that may be capable of performing the action (e.g., navigate a given route) compared to the comparable device of the prior art.
  • An autonomous robotic device comprising a hardware video encoder may be capable of performing motion estimation for obstacle avoidance, tracking moving objects, platform stabilization, and/or enabling the robot to learn its own self motion.
  • a robotic device may be configured to follow a target (e.g., a person, the ball 112 in FIG. 1A, and/or other object) at a distance (e.g., 110 in FIG. 1A).
  • the robotic device may be configured to maintain distance to the target within a range (e.g., not to exceed 50 meters and not to approach closer than 2 meters when following a vehicle, and/or not to exceed 5 meters and not to approach closer than 0.25 meters when following the ball 112 in FIG. 1A).
  • controller of the robotic device may be configured to determine distance to the target and motion of the target using, e.g., the alternating interleaving methodology shown and described with respect to FIG. 3B .
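A sketch of a follow-at-distance rule under the assumed bounds is given below (illustrative only; the proportional gain and the zero-action band are not part of the disclosure):

    def follow_speed(distance_m, near=2.0, far=50.0, gain=0.5):
        """Speed command: back away when too close, close the gap when too far, idle in between."""
        if distance_m < near:
            return -gain * (near - distance_m)
        if distance_m > far:
            return gain * (distance_m - far)
        return 0.0

    follow_speed(60.0)   # -> 5.0 (speed up toward the target)
    follow_speed(1.0)    # -> -0.5 (back away)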
  • FIG. 17 illustrates use of distance determination methodology by an unmanned robotic apparatus configured for autonomous navigation, in accordance with one or more implementations.
  • the unmanned autonomous vehicle (AUV) 1700 may comprise a plurality of cameras 1702 disposed spatially from one another. Video stream provided by the cameras 1702 may be interleaved and/or concatenated and encoded using any applicable methodology described herein (e.g., with respect to FIGS. 3A-3C, 6A-6B , and/or 9 A- 9 C).
  • the encoding may enable a controller of the vehicle 1700 (e.g., the apparatus 700, 800 of FIGS. 7-8) to determine distance to objects and/or motion of the vehicle.
  • the vehicle controller may utilize the distance and/or vehicle motion information to control actuators 1704 when landing, during take-off, and/or when navigating around obstacles.
  • the motion and/or distance detection methodology described herein may be employed for detecting salient objects in video input.
  • the saliency of an item (such as an object, a person, a pixel, and/or other) may be described by a characteristic by which the item may stand out relative to its neighbors.
  • a salient vehicle may comprise a vehicle that may be moving differently (e.g., going slower/faster than the rest of the traffic, weaving from lane to lane) compared to the rest of the traffic.
  • a salient object for target approach may comprise a stationary and/or moving ball on a moving background due to self-motion by the vehicle.
  • Implementations of the principles of the disclosure may be further applicable to a wide assortment of applications, including computer-human interaction (e.g., recognition of gestures, voice, posture, face, and/or other interactions), controlling processes (e.g., processes associated with an industrial robot, autonomous and other vehicles, and/or other processes), augmented reality applications, access control (e.g., opening a door based on a gesture, opening an access way based on detection of an authorized person), and detecting events (e.g., for visual surveillance, people or animal counting, and/or tracking).
  • a video processing system of the disclosure may be implemented in a variety of ways, such as, for example, a software library, an IP core configured for implementation in a programmable logic device (e.g., FPGA), an ASIC, a remote server, and/or a computer readable apparatus storing computer executable instructions configured to perform feature detection.

Abstract

Frame sequences from multiple image sensors may be combined in order to form, for example, an interleaved frame sequence. Individual frames of the combined sequence may be configured by a combination (e.g., concatenation) of frames from one or more source sequences. The interleaved/concatenated frame sequence may be encoded using a motion estimation encoder. Output of the video encoder may be processed (e.g., parsed) in order to extract motion information present in the encoded video. The motion information may be utilized in order to determine a depth of the visual scene, such as by using binocular disparity between two or more images by an adaptive controller in order to detect one or more objects salient to a given task. In one variant, depth information is utilized during control and operation of mobile robotic devices.

Description

CROSS-REFERENCE AND RELATED APPLICATIONS
This application is related to co-pending and co-owned U.S. patent application Ser. No. 14/285,414 entitled “APPARATUS AND METHODS FOR DISTANCE ESTIMATION USING MULTIPLE IMAGE SENSORS” filed on May 22, 2014, Ser. No. 14/285,466 entitled “APPARATUS AND METHODS FOR ROBOTIC OPERATION USING VIDEO IMAGERY” filed on May 22, 2014, and Ser. No. 14/285,385 entitled “APPARATUS AND METHODS FOR REAL TIME ESTIMATION OF DIFFERENTIAL MOTION IN LIVE VIDEO” filed on May 22, 2014, each of the foregoing incorporated herein by reference in its entirety.
COPYRIGHT
A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright rights whatsoever.
BACKGROUND
Field of the Disclosure
The present disclosure relates to, inter alia, computerized apparatus and methods for processing imagery from multiple sources for determining motion and distance to objects.
Description of Related Art
Object recognition in the context of computer vision relates to finding a given object in an image or a sequence of frames in a video segment. Typically, video frames may contain multiple objects moving in one or more directions on a still or moving background. Object representations, also referred to as the “view”, may change from frame to frame due to a variety of object or environment transformations, such as for example rotation, movement, translation, change in lighting, background, noise, appearance of other objects, partial blocking and/or unblocking of the object, and/or other object transformations. Robotic devices often employ video for navigation, target selection and/or obstacle avoidance. Determining motion of an object from a moving robotic platform may require implementation of differential motion detection in an energy efficient manner. Depth of a visual scene (e.g., distance to one or more objects) may be useful for operation of mobile robots as well.
SUMMARY
One aspect of the disclosure relates to a non-transitory computer-readable storage medium having instructions embodied thereon, the instructions being executable to perform a method of determining a distance to an object.
In another aspect, a method of determining motion information within a visual scene is disclosed. In one implementation, the method includes: producing a first composite frame and a second composite frame by combining images from a first plurality of images and a second plurality of images of the visual scene; producing an interleaved sequence of composite frames comprising the first and the second composite frames; and evaluating the interleaved sequence to determine the motion information. In one variant, individual images of the first and second pluralities of images are provided by first and second sensing apparatus, respectively, the second sensing apparatus being separated spatially from the first sensing apparatus.
In another variant, the first composite frame is characterized by a first placement configuration of (i) an image from the first plurality of images, and (ii) an image from the second plurality of images; and the second composite frame is characterized by a second placement configuration of (i) an image from the first plurality of images and (ii) an image from the second plurality of images. The second placement may be different from the first placement.
In another variant, the method further includes evaluating the motion information to determine a distance to an object disposed within the visual scene. The evaluation of the interleaved sequence of frames may include for instance encoding the interleaved sequence of frames using an encoder process comprising a motion estimation process configured to provide data related to motion of a pixel group within the second composite frame relative to a pixel group within the first composite frame.
In yet another variant of the method, the pixel group within the second composite frame corresponds to a first representation of the object; the pixel group within the first composite frame corresponds to a second representation of the object; and the motion data are based at least on a displacement of the first representation relative to the second representation.
In another aspect, a non-transitory computer-readable storage medium is disclosed. In one implementation, the medium has instructions embodied thereon, the instructions being executable to produce a sequence of combined images from first and second sequences of images of a sensory scene by at least: producing a sequence of composite images, individual composite images comprising an image from the first sequence and/or an image from the second sequence; and evaluating the sequence of composite images to determine a depth parameter of the scene.
In one variant, the first and second image sequences are provided by first and second image sensors, respectively, the first image sensor being disposed spatially separated from the second image sensor; and the first image sensor and the second image sensor are configured to provide images of the sensory scene. The spatial separation is configured to produce a binocular disparity between images from the first sequence relative to images from the second sequence; and the depth parameter determination comprises determination of the disparity.
In another variant, individual images of the first image sequence and the second image sequence comprise a plurality of pixels; and the evaluating comprises encoding the sequence of composite images using a motion estimation encoder.
In another aspect of the disclosure, an image processing apparatus is disclosed. In one implementation, the apparatus includes: an input interface configured to receive a stereo representation of a visual scene, the stereo representation comprising a first portion and a second portion; a logic component in communication with the interface and configured to arrange the first portion with the second portion into a concatenated frame, and form a sequence of concatenated frames by arranging first portions and second portions within a given concatenated frame in an alternate order relative to a preceding concatenated frame within the sequence; a video encoder component in data communication with the logic component and configured to encode the sequence of concatenated frames to produce a sequence of compressed frames; and a processing component in data communication with the video encoder and configured to obtain motion information based on an evaluation of the compressed frames.
In one variant, the sequence of compressed frames comprises a keyframe characterized by an absence of the motion information; and the processing component is configured to not utilize the keyframe during the evaluation of the compressed frames.
In another variant, the processing component is further configured to determine, based on the motion information, a depth parameter associated with the visual scene; and the encoder is configurable to produce the sequence of compressed frames at least at a frame rate selected between 1 and 100 frames per second inclusive; and the apparatus is embedded in an integrated circuit.
In another implementation, the image processing apparatus includes: computerized logic configured to: receive a plurality of stereo representations, each of the plurality being representative of a corresponding visual scene and comprising a first portion and a second portion; combine a first portion of a given stereo representation with a second portion thereof into a first frame in a first order; combine a first portion of another stereo representation with a second portion thereof into a second frame in a second order different from the first order; and form a sequence comprising at least the first and second frames; a video encoder in data communication with the computerized logic and configured to encode the sequence of frames to produce a sequence of encoded frames; and processing logic in data communication with the video encoder and configured to evaluate the sequence of encoded frames to determine motion information.
These and other objects, features, and characteristics of the system and/or method disclosed herein, as well as the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture, will become more apparent upon consideration of the following description and the appended claims with reference to the accompanying drawings, all of which form a part of this specification, wherein like reference numerals designate corresponding parts in the various figures. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended as a definition of the limits of the disclosure. As used in the specification and in the claims, the singular form of “a”, “an”, and “the” include plural referents unless the context clearly dictates otherwise.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1A is a graphical illustration depicting a top view of robotic apparatus configured to acquire stereo imagery, in accordance with one or more implementations.
FIG. 1B is a graphical illustration depicting a side view of a robotic apparatus comprising an adaptive controller apparatus of the disclosure, configured for autonomous navigation, in accordance with one or more implementations.
FIG. 2A is a graphical illustration depicting stereo imagery input obtained with two spatially displaced cameras for use with the disparity determination methodology, according to one or more implementations.
FIG. 2B is a graphical illustration depicting disparity between representations of objects corresponding to the frames of stereo imagery shown in FIG. 2A, according to one or more implementations.
FIG. 2C is a graphical illustration depicting input frames comprising a plurality of moving objects for use with the motion extraction, according to one or more implementations.
FIG. 3A is a logical block diagram depicting a determination of an input stream for motion processing using an alternating interleaver of stereo imagery input, according to one or more implementations.
FIG. 3B is a logical block diagram depicting a determination of an input stream for motion processing using an alternating interleaver of stereo imagery input, according to one or more implementations.
FIG. 3C is a logical block diagram depicting concatenation of two frames into a single frame for video encoding, according to one or more implementations.
FIG. 3D is a logical block diagram depicting a configuration of a sequence of frames for motion and disparity processing using interleaving of concatenated input from multiple sources.
FIG. 4A is a functional block diagram depicting a processing apparatus configured to determine disparity from multiple image sources, according to one or more implementations.
FIG. 4B is a functional block diagram depicting a processing apparatus configured to determine disparity from multiple image sources, according to one or more implementations.
FIG. 4C is a functional block diagram depicting a processing apparatus configured to determine disparity and/or motion using concatenation and/or interleaving of images from multiple sources, according to one or more implementations.
FIG. 5A is a graphical illustration depicting a triple-camera configuration used for disparity determination using image interleaving, according to one or more implementations.
FIG. 5B is a graphical illustration depicting a quad-camera configuration used for disparity determination using image interleaving, according to one or more implementations.
FIG. 5C is a graphical illustration depicting a linear multiple-camera configuration useful for determining multiple depth scales using image interleaving, according to one or more implementations.
FIG. 6A is a graphical illustration depicting an alternating interleaving of triple image input for use with the motion extraction, according to one or more implementations.
FIG. 6B is a graphical illustration depicting an alternating interleaving of quad image input for use with the motion extraction, according to one or more implementations.
FIG. 7 is a functional block diagram depicting a motion extraction apparatus, according to one or more implementations.
FIG. 8 is a functional block diagram depicting a video processing system, comprising a differential motion extraction apparatus, according to one or more implementations.
FIG. 9A is a graphical illustration depicting an encoded object for use with the motion extraction, according to one or more implementations.
FIG. 9B is a graphical illustration depicting motion of an encoded object for use with the motion extraction methodology, according to one or more implementations.
FIG. 9C is a graphical illustration depicting spatial distribution of motion extracted from encoded video, according to one or more implementations.
FIG. 10 is a logical flow diagram illustrating a method of determining a salient feature using encoded video motion information, in accordance with one or more implementations.
FIG. 11 is a logical flow diagram illustrating a method of data processing useful for determining features, in accordance with one or more implementations.
FIG. 12 is a logical flow diagram illustrating a method of executing an action configured based on a gesture detected using motion information, in accordance with one or more implementations.
FIG. 13 is a logical flow diagram illustrating a method of determining a depth of a visual scene using encoded interleaved concatenated images, in accordance with one or more implementations.
FIG. 14 is a logical flow diagram illustrating a method of determining distance to objects using motion of an interleaved concatenated image sequence, in accordance with one or more implementations.
FIG. 15 is a logical flow diagram illustrating a method of executing an action configured based on detecting an object in motion information, in accordance with one or more implementations.
FIGS. 16A-16D illustrate gestures of a human operator used for communicating control indications to a robotic device (such as one comprising a distance determination apparatus as described herein), in accordance with one or more implementations.
FIG. 17 is a graphical illustration depicting an exemplary unmanned robotic apparatus comprising distance determination apparatus of the disclosure configured for autonomous navigation, in accordance with one or more implementations.
All Figures disclosed herein are © Copyright 2014 Brain Corporation. All rights reserved.
DETAILED DESCRIPTION
Implementations of the present disclosure will now be described in detail with reference to the drawings, which are provided as illustrative examples so as to enable those skilled in the art to practice the present technology. Notably, the figures and examples below are not meant to limit the scope of the present disclosure to a single implementation, but other implementations are possible by way of interchange of or combination with some or all of the described or illustrated elements. Wherever convenient, the same reference numbers will be used throughout the drawings to refer to same or like parts.
Although the system(s) and/or method(s) of this disclosure have been described in detail for the purpose of illustration based on what is currently considered to be the most practical and preferred implementations, it is to be understood that such detail is solely for that purpose and that the disclosure is not limited to the disclosed implementations, but, on the contrary, is intended to cover modifications and equivalent arrangements that are within the spirit and scope of the appended claims. For example, it is to be understood that the present disclosure contemplates that, to the extent possible, one or more features of any implementation may be combined with one or more features of any other implementation.
In the present disclosure, an implementation showing a singular component should not be considered limiting; rather, the disclosure is intended to encompass other implementations including a plurality of the same component, and vice-versa, unless explicitly stated otherwise herein.
Further, the present disclosure encompasses present and future known equivalents to the components referred to herein by way of illustration.
As used herein, the term “bus” is meant generally to denote all types of interconnection or communication architecture that is used to access the synaptic and neuron memory. The “bus” could be optical, wireless, infrared or another type of communication medium. The exact topology of the bus could be, for example, a standard “bus”, hierarchical bus, network-on-chip, address-event-representation (AER) connection, or other type of communication topology used for accessing, e.g., different memories in a pulse-based system.
As used herein, the terms “computer”, “computing device”, and “computerized device” include, but are not limited to, personal computers (PCs) and minicomputers, whether desktop, laptop, or otherwise, mainframe computers, workstations, servers, personal digital assistants (PDAs), handheld computers, embedded computers, programmable logic device, personal communicators, tablet or “phablet” computers, portable navigation aids, J2ME equipped devices, smart TVs, cellular telephones, smart phones, personal integrated communication or entertainment devices, or literally any other device capable of executing a set of instructions and processing an incoming data signal.
As used herein, the term “computer program” or “software” is meant to include any sequence of human or machine cognizable steps which perform a function. Such program may be rendered in virtually any programming language or environment including, for example, C/C++, C#, Fortran, COBOL, MATLAB™, PASCAL, Python, assembly language, markup languages (e.g., HTML, SGML, XML, VoXML), and the like, as well as object-oriented environments such as the Common Object Request Broker Architecture (CORBA), Java™ (including J2ME, Java Beans), Binary Runtime Environment (e.g., BREW), and other languages.
As used herein, the terms “connection”, “link”, “synaptic channel”, “transmission channel”, “delay line”, are meant generally to denote a causal link between any two or more entities (whether physical or logical/virtual), which enables information exchange between the entities.
As used herein, the term “feature” may refer to a representation of an object edge, determined by change in color, luminance, brightness, transparency, texture, and/or curvature. The object features may comprise, inter alia, individual edges, intersections of edges (such as corners), orifices, and/or curvature.
As used herein, the term “memory” includes any type of integrated circuit or other storage device adapted for storing digital data including, without limitation, ROM, PROM, EEPROM, DRAM, Mobile DRAM, SDRAM, DDR/2 SDRAM, EDO/FPMS, RLDRAM, SRAM, “flash” memory (e.g., NAND/NOR), memristor memory, and PSRAM.
As used herein, the terms “processor”, “microprocessor” and “digital processor” are meant generally to include all types of digital processing devices including, without limitation, digital signal processors (DSPs), reduced instruction set computers (RISC), general-purpose complex instruction set computing (CISC) processors, microprocessors, gate arrays (e.g., field programmable gate arrays (FPGAs)), PLDs, reconfigurable computer fabrics (RCFs), array processors, secure microprocessors, and application-specific integrated circuits (ASICs). Such digital processors may be contained on a single unitary IC die, or distributed across multiple components.
As used herein, the term “network interface” refers to any signal, data, or software interface with a component, network or process including, without limitation, those of the FireWire (e.g., FW400, FW800, and/or other FireWire implementation.), USB (e.g., USB2), Ethernet (e.g., 10/100, 10/100/1000 (Gigabit Ethernet), 10-Gig-E, etc.), MoCA, Coaxsys (e.g., TVnet™), radio frequency tuner (e.g., in-band or OOB, cable modem, etc.), Wi-Fi (802.11), WiMAX (802.16), PAN (e.g., 802.15), cellular (e.g., 3G, LTE/LTE-A/TD-LTE, GSM, and/or other cellular interface implementation) or IrDA families.
As used herein, the terms “pulse”, “spike”, “burst of spikes”, and “pulse train” are meant generally to refer to, without limitation, any type of a pulsed signal, e.g., a rapid change in some characteristic of a signal, e.g., amplitude, intensity, phase or frequency, from a baseline value to a higher or lower value, followed by a rapid return to the baseline value and may refer to any of a single spike, a burst of spikes, an electronic pulse, a pulse in voltage, a pulse in electrical current, a software representation of a pulse and/or burst of pulses, a software message representing a discrete pulsed event, and any other pulse or pulse type associated with a discrete information transmission system or mechanism.
As used herein, the term “receptive field” is used to describe sets of weighted inputs from filtered input elements, where the weights may be adjusted.
As used herein, the term “Wi-Fi” refers to, without limitation, any of the variants of IEEE-Std. 802.11 or related standards including 802.11 a/b/g/n/s/v and 802.11-2012.
As used herein, the term “wireless” means any wireless signal, data, communication, or other interface including without limitation Wi-Fi, Bluetooth, 3G (3GPP/3GPP2), HSDPA/HSUPA, TDMA, CDMA (e.g., IS-95A, WCDMA, and/or other wireless interface implementations), FHSS, DSSS, GSM, PAN/802.15, WiMAX (802.16), 802.20, narrowband/FDMA, OFDM, PCS/DCS, LTE/LTE-A/TD-LTE, analog cellular, CDPD, RFID or NFC (e.g., EPC Global Gen. 2, ISO 14443, ISO 18000-3), satellite systems, millimeter wave or microwave systems, acoustic, and infrared (e.g., IrDA).
The present disclosure provides, among other things, apparatus and methods for determining depth of field of a scene based on processing information from multiple sources and detecting motion of objects and/or features in video in real time. The video information may comprise for example multiple streams of frames received from a plurality of cameras disposed separate from one another. Individual cameras may comprise an image sensor (e.g., a charge-coupled device (CCD), CMOS device, and/or an active-pixel sensor (APS), photodiode arrays, and/or other sensors). In one or more implementations, the stream of frames may comprise a pixel stream downloaded from a file. An example of such a file may include a stream of two-dimensional matrices of red, green, and blue (RGB) values (e.g., refreshed at a 25 Hz or other suitable frame rate). It will be appreciated by those skilled in the art when given this disclosure that the above-referenced image parameters are merely exemplary, and many other image representations (e.g., bitmap, luminance-chrominance (YUV, YCbCr), cyan-magenta-yellow and key (CMYK), grayscale, and/or other image representations) are equally applicable to and useful with the various aspects of the present disclosure. Furthermore, data frames corresponding to other (non-visual) signal modalities such as sonograms, infrared (IR), radar or tomography images may be equally compatible with the processing methodology of the disclosure, or yet other configurations.
The video processing methodology described herein may enable a robotic controller to obtain motion and/or distance information using a specialized hardware video encoder. Use of dedicated video encoders provides a computationally efficient way to determine motion and/or distance from video signals compared to processing techniques that employ general purpose processors for performing computations (e.g., optical flow, block matching, phase correlation, and/or other techniques). Computational efficiency of hardware video encoders may be leveraged to reduce energy use, complexity, size, and/or cost of the processing component, increase autonomy of a robotic device using the computationally efficient controller, and/or increase processing performance (e.g., image resolution, frame rate, number of cameras) for given hardware specifications compared to the prior art.
Processing data from multiple spatially distributed sources may enable depth of field determination using a disparity methodology. In some implementations of stereo vision, distance d to an object may be determined using binocular disparity D as follows:
d ∝ 1/D    (Eqn. 1)
FIG. 1A depicts a top view of a mobile robotic apparatus comprising two cameras configured to provide sensory information for determining distance based on the disparity. The apparatus 100 may comprise for instance a robotic vehicle outfitted with a motion and/or distance detection apparatus configured in accordance with one or more implementations, e.g., such as illustrated in FIGS. 4A-4C, below. The robotic apparatus 100 may comprise left and right cameras 106, 108 disposed at a distance 102 from one another. The robotic apparatus 100 may navigate in a direction 104. One or more obstacles may be present in the path of the apparatus 100, e.g., a ball 112 and a box 122, disposed at distances 110, 120, respectively, from the apparatus 100. Due to the spatial separation 102 between the cameras 106, 108, travel paths from a given object (e.g., 114, 116 for the ball 112, and 124, 126 for the box 122) may be unequal to one another. As shown in FIG. 1A, the path 114 is longer than the path 116, and the path 126 is longer than the path 124.
Difference in path lengths may cause a difference in the apparent position of the objects 112, 122 in the image frame(s) provided by the camera 106 relative to the image frame provided by the camera 108.
FIG. 2A depicts a typical stereo imagery input for use with the disparity determination methodology, according to one or more implementations. The frames 200, 210 in FIG. 2A may be acquired by the two spatially displaced cameras 106, 108 of the apparatus 100 in FIG. 1A. Object representations 216, 212 of the frame 210 may be displaced horizontally relative to object representations 206, 202, respectively, of the frame 200. Object representations 202, 212 and 206, 216 may correspond to the objects 112, 122, respectively, in FIG. 1A.
FIG. 2B illustrates the disparity between representations of objects corresponding to the frames of stereo imagery shown in FIG. 2A. Object representations 236, 246 may be characterized by a horizontal disparity 234, and similarly object representations 222, 232 may be characterized by a corresponding horizontal disparity 224. Disparity 224, 234 may be inversely proportional to the distance between the camera and the respective object (e.g., the distance 110, 120 in FIG. 1A); i.e., the shorter the distance, the greater the disparity, due to the greater subtended arc.
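A minimal sketch (in Python) of converting a measured pixel disparity into a distance estimate is given below. The explicit scale factor (camera focal length in pixels multiplied by the camera baseline, i.e., the separation 102) is a standard pinhole stereo-model assumption introduced here for illustration; the disclosure itself relies only on the inverse proportionality of Eqn. 1, and the numerical values are hypothetical.

    def distance_from_disparity(disparity_px, focal_length_px, baseline_m):
        # Pinhole-model assumption: d = f * B / D; Eqn. 1 gives d ~ 1/D.
        if disparity_px <= 0:
            return float('inf')  # zero disparity corresponds to an object at infinity
        return focal_length_px * baseline_m / disparity_px

    # Example with hypothetical numbers: a 6 cm baseline and a 700-pixel focal
    # length map a 20-pixel disparity to ~2.1 m and a 40-pixel disparity to
    # ~1.05 m, consistent with "the shorter the distance, the greater the disparity".
    print(distance_from_disparity(20.0, 700.0, 0.06))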
FIG. 1B depicts a mobile robotic apparatus comprising a motion detection apparatus configured, e.g., in accordance with the exemplary implementations illustrated in FIGS. 7-8, infra. The robotic apparatus 160 may comprise a camera 166. The camera 166 may be characterized by a field of view 168 (e.g., an extent of the observable world that may be captured by the camera lens at a given moment). The camera 166 may provide information associated with objects within the field of view 168. In some implementations, the camera 166 may provide frames of pixels of luminance and/or color, refreshed at 25 Hz frame rate. However, it will be appreciated that, in some implementations, other frame rates may be used (whether constant or variable), as may other types of information provided by the camera(s) 166.
One or more objects (e.g., a floor 170, a stationary object 176, a moving object (e.g., ball 174), and/or other objects) may be present in the camera field of view. The motion of the objects may result in a displacement of pixels representing the objects within successive frames, such as is described in U.S. patent application Ser. No. 13/689,717 filed on Nov. 29, 2012 and entitled “APPARATUS AND METHODS FOR OBJECT DETECTION VIA OPTICAL FLOW CANCELLATION”, issued as U.S. Pat. No. 9,193,075, incorporated herein by reference in its entirety.
When the robotic apparatus 160 is in motion, such as shown by arrow 164 in FIG. 1B, motion of the objects within the camera 166 field of view 168 (e.g., denoted by arrows 172, 178, 180 in FIG. 1B) may comprise a self-motion component and an object motion component. By way of a non-limiting example, motion of objects in FIG. 1B may comprise (i) apparent motion 180 of the stationary background 176 and of the boundary (e.g., the component 172 associated with the floor boundary); (ii) the component 178 associated with the moving ball 174, which comprises a superposition of the ball displacement and the motion of the camera; and/or other components. As noted previously, determination of the ball 174 motion may be particularly challenging when the camera 166 is in motion (e.g., during panning) and/or when the field of view is changing (e.g., when zooming in/out).
FIG. 2C depicts two exemplary frames (e.g., provided by the camera 166 in FIG. 1B) comprising multiple moving objects useful with the motion estimation methodology described herein. The frames 240, 250 may comprise an object 246, 256 that may move in a given direction (e.g., 288). The frames 240, 250 may comprise an object 242, 252 that may move back and forth in a direction indicated by arrow 244. Motion along curved trajectories may be resolved using a linear piece-wise approximation, wherein motion between successive frames may be interpreted as linear. An increased frame rate and/or image resolution may be employed with complex motion trajectories. In some implementations of target approach by a robotic device, the object 242 may comprise a target (e.g., a ball) that may be moved back and forth in order to indicate to a controller of, e.g., the robotic vehicle 160 in FIG. 1B, a target to follow. Frames 240, 250 may represent positions of objects at two time instances. In the presence of multiple motions, detection of the object 242, 252 may not be straightforward because, for example, portions of the frame 250 are characterized by differential motion.
In some implementations of object detection in the presence of differential motion, background (and/or self-motion) may be determined using a statistical analysis of motion distribution within a given encoded frame. Various statistical parameters may be determined, e.g., median, mean plus/minus n standard deviations, and/or others, in order to determine one or more prevailing (dominant) motion vectors for the frame. The prevailing motion may be removed (e.g., via a vector subtraction) from the frame motion distribution in order to determine residual motion. The residual motion may be analyzed (e.g., using a threshold technique) in order to detect one or more features that may be moving differently from the prevailing motion.
In one or more implementations of object detection in the presence of differential motion, prevailing motion may be determined using a clustering technique. For example, a motion field within the frame may be partitioned into a plurality of clusters based on analysis of the motion distribution. The largest-area cluster may be associated with the prevailing (dominant) motion, and/or may be removed (masked off) from the frame to obtain the residual motion distribution. The residual motion may be analyzed in order to determine the presence of one or more features based on the remaining clusters of motion.
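A minimal sketch of the statistical variant described above is given below, assuming the per-frame motion information has already been parsed into a (rows, cols, 2) array of macroblock motion vectors; the array shape and the residual-magnitude threshold are illustrative assumptions.

    import numpy as np

    def residual_motion_mask(motion_field, threshold=2.0):
        # Prevailing (dominant) motion estimated as the median motion vector.
        dominant = np.median(motion_field.reshape(-1, 2), axis=0)
        # Remove the prevailing motion via vector subtraction.
        residual = motion_field - dominant
        # Macroblocks whose residual magnitude exceeds the threshold are
        # candidates for features moving differently from the background.
        magnitude = np.linalg.norm(residual, axis=-1)
        return magnitude > threshold, dominant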
In some implementations, image frames provided by a plurality of cameras (e.g., the cameras 106, 108 in FIG. 1A) may be utilized in order to determine depth of field and/or distance to objects using the disparity information. Comparing object representations 206, 202 of frame 200 to object representations 216, 212 of frame 210, the disparity may be considered as object motion occurring between the capture of frame 200 and the capture of frame 210 in FIG. 2A. The disparity (e.g., apparent motion) 224, 234 in FIG. 2B may be obtained using, in the exemplary implementation, motion estimation. Various motion estimation algorithms exist (e.g., the optical flow methodology, such as that described in U.S. patent application Ser. No. 13/689,717 filed on Nov. 29, 2012 and entitled “APPARATUS AND METHODS FOR OBJECT DETECTION VIA OPTICAL FLOW CANCELLATION”, issued as U.S. Pat. No. 9,193,075, incorporated herein by reference in its entirety), each of which may be used consistent with the various aspects of the present disclosure.
In some implementations, the apparent motion due to disparity may be determined using motion estimation information provided by a video encoder. In order to enable motion estimation by an encoder, frames provided by individual cameras (e.g., 106, 108 in FIG. 1A) may be combined to form a common video stream. FIG. 3A illustrates determination of an input stream for motion processing using an alternating interleaver of stereo frame input, according to one or more implementations. In FIG. 3A, the frame sequences 300, 310 (also referred to as channels A, B) may correspond to data provided by two cameras (e.g., the left/right cameras 106, 108 in FIG. 1A), and/or data loaded from a disc or other source, in one or more implementations. The frame sequences 300, 310, comprising frames (e.g., 302, 312), may be processed by an alternating interleaver process 320 configured to produce an interleaved frame sequence 309. The sequence 309 may comprise alternating frames (e.g., 302, 312) from the left/right cameras, in some implementations. In some implementations, the frames from the left/right cameras (e.g., 302, 312) may be acquired simultaneously with one another using, e.g., multiple camera synchronization.
FIG. 3B illustrates determination of an input stream for motion processing using an alternating interleaver of stereo frame input, according to one or more implementations. The A, B frame sequences 300, 310, comprising frames (e.g., 302, 312), may be processed by the alternating interleaver process 320 configured to produce an interleaved frame sequence 322. The sequence 322 may be configured to comprise alternating pairs of frames from a given channel. As shown in FIG. 3B, frames B1, B2 from channel B acquired at times t1, t2 may be followed by frames A1, A2 from channel A acquired at times t1, t2, followed by frames B3, B4 from channel B acquired at times t3, t4, wherein t4>t3>t2>t1. In some implementations (not shown), the frame A1 may be repeated and/or preceded by a blank frame in the interleaved sequence 322. Use of an alternating approach may provide, inter alia, both motion and disparity information within a single encoded stream.
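A minimal sketch of the two interleaving schemes discussed above is given below, assuming the two channels are already synchronized lists of frames of equal length; the function names are illustrative.

    def interleave_alternating(channel_a, channel_b):
        # A1, B1, A2, B2, ... (e.g., the sequence 309 of FIG. 3A).
        out = []
        for a, b in zip(channel_a, channel_b):
            out.extend([a, b])
        return out

    def interleave_alternating_pairs(channel_a, channel_b):
        # B1, B2, A1, A2, B3, B4, A3, A4, ... (e.g., the sequence 322 of FIG. 3B).
        out = []
        for i in range(0, len(channel_a) - 1, 2):
            out.extend([channel_b[i], channel_b[i + 1],
                        channel_a[i], channel_a[i + 1]])
        return out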
Sequences of interleaved frames (e.g., 309, 322 FIGS. 3A-3B, and/or shown in FIGS. 6A-6B, below) may be utilized in order to determine depth of field of view and/or distance to objects using motion encoding, as described in detail below with respect to FIGS. 4A-5C.
FIG. 4A illustrates a processing apparatus configured to determine disparity from two image sources, according to one or more implementations. The apparatus 400 may comprise two image sources 404, 405 configured to provide information related to the environment 402. In some implementations of visual data processing, the sources 404, 405 may comprise digital and/or analog cameras disposed separate from one another. Individual cameras may comprise an image sensor (CCD, CMOS device, and/or an APS, photodiode arrays, and/or other sensors). It will be appreciated that in some implementations, such separation between the image sensors may be achieved even when the sensors are disposed on the same substrate or “chip” (e.g., two sensors placed at opposite ends of the same substrate/chip). In one or more implementations, the image sources 404, 405 may comprise video files on a storage device. An example of such a file may include a stream of two-dimensional matrices of red, green, and blue (RGB) values (e.g., refreshed at a 25 Hz or other suitable frame rate). It will be appreciated by those skilled in the art when given this disclosure that the above-referenced image parameters are merely exemplary, and many other image representations (e.g., bitmap, luminance-chrominance (YUV, YCbCr), cyan-magenta-yellow and key (CMYK), grayscale, and/or other image representations) are equally applicable to and useful with the various aspects of the present disclosure. Furthermore, data frames corresponding to other (non-visual) signal modalities such as sonograms, IR, radar, or tomography images may be equally compatible with the processing methodology of the disclosure, or yet other configurations.
Image frames 406, 407 provided by the sources 404, 405 may be interleaved by the interleaver apparatus 410. In some implementations, the interleaver apparatus 410 may comprise a 2×1 multiplexer configured to provide one of the input channels 406, 407 at its output 412 at a given time. The output 412 may comprise an alternating interleaved stream (e.g., 309 in FIG. 3A), an alternating interleaved stream of frame pairs (e.g., 322 in FIG. 3B), or yet another option, in accordance with one or more implementations.
The interleaved output 412 may be provided to a motion estimation component 420. In one or more implementations, the motion estimation component may comprise a video encoder comprising one or more motion estimation operations. The component 420 may comprise for instance a dedicated integrated circuit (IC) disposed on a single die or multiple dies, a component of a processing system (e.g., the video encoder block of a Snapdragon® system on a chip), an ASIC, an FPGA with a video encoder intellectual property (IP) core, an OEM printed circuit board, and/or other. Video encoding effectuated by the component 420 may comprise any applicable standard comprising motion estimation between one or more current images and one or more preceding images. Some exemplary encoding implementations include H.264/MPEG-4 advanced video coding described, e.g., in ISO/IEC 14496-10, 2009—MPEG-4 Part 10, Advanced Video Coding; the H.263 standard described in, e.g., ITU-T H.263, TELECOMMUNICATION STANDARDIZATION SECTOR OF ITU (January/2005), SERIES H: AUDIOVISUAL AND MULTIMEDIA SYSTEMS, Infrastructure of audiovisual services—Coding of moving video, Video coding for low bit rate communication; H.262/MPEG-2, described in, e.g., ISO/IEC 13818-2, 2013 Oct. 1, Information technology—Generic coding of moving pictures and associated audio information—Part 2: Video; and the H.265 standard described in, e.g., ITU-T H.265, TELECOMMUNICATION STANDARDIZATION SECTOR OF ITU (April/2013), SERIES H: AUDIOVISUAL AND MULTIMEDIA SYSTEMS, Infrastructure of audiovisual services—Coding of moving video, High efficiency video coding; each of the foregoing being incorporated herein by reference in its entirety. See also Exhibit I hereto, which contains exemplary computer code useful for processing image data consistent with, e.g., the ISO/IEC 14496-10 and H.265 Standards referenced above.
In some implementations, the motion estimation component 420 may comprise logic configured to determine motion using optical flow, and/or other motion estimation algorithms such as but not limited to: block-matching algorithm, phase correlation, as well as determining locations of one or more features and estimating the motion of individual detected features.
Output 422 of the motion estimation component may be provided to a processing component 430 configured to determine one or more parameters of interest, including e.g., depth of the scene 402 and/or distance to objects that may be present in the scene, using motion based disparity determination methodology.
Returning now to FIG. 3A, the encoding of the interleaved frame sequence 309 by the motion estimation component 420 of FIG. 4A is depicted by arrow 314 in FIG. 3A. In some implementations, the component 420 may comprise an MPEG-4/H.264 encoder configured to produce the encoded stream 315. The encoding of the frame pair 302, 312 may produce motion information for the encoded frame 317. The motion information of the frame 317 may comprise, e.g., horizontal and/or vertical displacement (dx, dy) of blocks of pixels (macroblocks) and may be interpreted as caused by a disparity between the scene representations of frame 302 and frame 312. In some implementations of stereo vision (e.g., as described above with respect to FIG. 1A), analysis of the motion information for the frame 317 (performed by the component 430 in FIG. 4A) may produce the disparity D between the left and right image frames. As shown in FIG. 3A, the encoded frames 317, 319 may provide disparity estimates D1, D2 associated with frames acquired at times t1, t2. Frames 316, 318 may be ignored (skipped) for the purposes of the disparity determination. Disparity estimates D1, D2 may be used to determine distance to one or more objects that may be present within frames of the sequences 300, 310, 315.
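The selection of every other encoded frame for disparity determination may be sketched as follows; the list of per-frame motion fields is assumed to have been parsed from the encoder output already, and the choice of even indices (as well as the prior removal of any keyframes) is an illustrative assumption that depends on the encoder configuration.

    def disparities_from_alternating_stream(encoded_motion_fields):
        # For an A1, B1, A2, B2, ... interleave (sequence 309), every other
        # encoded frame (e.g., frames 317, 319 in FIG. 3A) carries A-to-B motion
        # interpreted as disparity; the intervening frames (e.g., 316, 318)
        # carry B-to-A(next) motion and are skipped here.
        return [motion for index, motion in enumerate(encoded_motion_fields)
                if index % 2 == 0]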
In one or more implementations, the component 430 may be configured to parse the compressed video stream 422 in order to obtain motion information (e.g., the map of vectors 916 in FIG. 9A). By way of an illustration, the motion information may comprise a macroblock location L (e.g., an index), and the x-component and y-component of motion of pixels associated with the macroblock location L. The extracted motion information may be used for disparity and/or distance determination. Output 432 of the component 430 may be provided to another component (e.g., a controller of a robot). Various uses of the depth information are contemplated such as, for example, object detection, object localization, distance estimation, trajectory planning, gesture detection, and/or others that will be recognized by those of ordinary skill when provided the present disclosure.
Returning now to FIG. 3B, encoding of the interleaved frame sequence 322 by the motion estimation component 420 of FIG. 4A is depicted by an arrow 328 in FIG. 3B. In some implementations, the component 420 may comprise an MPEG-4/H.264 encoder configured to produce the encoded stream 330. Encoding of the frame pair 302, 312 may produce motion information accessible using the encoded frame 332. The motion information of the frame 332 may comprise horizontal and/or vertical displacement (dx, dy) of blocks of pixels and may be interpreted as caused by a disparity between the scene representations of frame 302 and frame 312. In some implementations of stereo vision (e.g., as described above with respect to FIG. 1A), analysis of the motion information for the frame 332 (performed by the component 430 in FIG. 4A) may produce the disparity D between the left and right image frames. As shown in FIG. 3B, the encoded frames 332, 336 may provide disparity estimates D1, D3 associated with frames acquired at times t1, t3. The encoded frame 334 may provide a negative disparity estimate (−D2) associated with frames acquired at time t2. Frames 333, 338 may provide motion information associated with the frame sequence 310. Frame 335 may provide motion information associated with the frame sequence 300. Disparity estimates D1, D2, D3 may be used to determine distance to one or more objects that may be present within frames of the sequences 300, 310, 330 using, e.g., Eqn. 1.
Although interleaving and/or concatenation of frames from two sources is illustrated in FIGS. 3A-3C, the methodology described herein may be employed for any practical number of sources (e.g., three, four as shown and described with respect to FIGS. 5A-5C below, and/or a greater number of sources).
FIG. 4B illustrates a processing apparatus configured to determine disparity from two or more image sources, according to one or more implementations. The apparatus 440 may comprise a plurality of image sources (e.g., 443, 445) configured to provide information related to the environment 402. In some implementations of visual data processing, the sources 443, 445 may comprise sources described with respect to FIG. 4A above.
Image frames 446, 448 provided by the sources 443, 445 may be interleaved by the interleaver apparatus 450. In some implementations, the interleaver apparatus 450 may comprise an N×1 multiplexer configured to provide data from one of the input channels 446, 448 at its output 452 at a given time. The output 452 may comprise an alternating interleaved stream (e.g., 600 in FIG. 6A) and/or an alternating interleaved stream of frame pairs (e.g., constructed similarly to the stream 322 in FIG. 3B), in accordance with one or more implementations.
The interleaved output 452 may be provided to a processing component 456. The component 456 may comprise motion estimation logic. In one or more implementations, the motion estimation logic may comprise a video encoder comprising a motion estimation operation. The component 456 may comprise a dedicated integrated circuit (IC) disposed on a single die or multiple dies, a component of a processing system (e.g., the video encoder block of a Snapdragon® system on a chip), an ASIC, an FPGA with a video encoder intellectual property (IP) core, an OEM printed circuit board, and/or other. Video encoding effectuated by the component 456 may comprise any applicable standard comprising motion estimation between a current frame and a preceding frame. In some implementations, the motion estimation component 456 may comprise logic configured to determine motion using optical flow and/or other motion estimation algorithms such as, but not limited to, block-matching algorithms and phase correlation, as well as determining locations of features and estimating the motion of those features. In one or more implementations wherein the input 452 may be encoded using a video encoder (e.g., MPEG-4, H.265), the component 456 may be configured to parse the encoded video stream in order to obtain motion information (e.g., the map of vectors 916 in FIG. 9A).
The component 456 may comprise logic configured to determine the depth of the scene 402 and/or distance to objects that may be present in the scene using the motion-based disparity determination methodology. The extracted motion information may be used for disparity and/or distance determination. Output 458 of the component 456 may be provided to another component (e.g., a controller of a robot). Various uses of the depth information are contemplated such as, for example, object detection, object localization, distance estimation, trajectory planning, gesture detection, and/or others. Determining motion, disparity, and/or distance may be performed for example using the operations described above with respect to FIGS. 3A-4A.
In some implementations, data from multiple sources may be concatenated or otherwise combined/aggregated prior to encoding. For example, source images from left and right cameras (e.g., 106, 108 in FIG. 1A) may be combined or concatenated into one image (also referred to as a combined and/or composite image). Various combinational approaches may be utilized: for example the combined or concatenated image may comprise twice as many pixels (e.g., twice the width or twice the height) as any one of the source images. In some implementations, the source images may be trimmed, compressed, and/or subsampled so that the concatenated image may comprise the same data size as the source image(s).
FIG. 3C depicts combining two images into a single image for video encoding, according to one or more implementations. The frame sequences 300, 310 may correspond to data provided by two cameras (e.g., the left/right cameras 106, 108 in FIG. 1A), and/or data loaded from a disc or other source, in one or more implementations. As shown in FIG. 3C, frame 361 from the sequence 310 and frame 362 from the sequence 300 may be disposed on top of one another so as to form the combined frame 364. It will be recognized by those skilled in the arts that the frame layout shown in FIG. 3C is exemplary and is aimed at illustrating one possible implementation of frame combination; various other layouts (e.g., side by side, reversed order) may be utilized. The combined (e.g., concatenated) frame output 366 may be provided to an encoder (e.g., the encoder component 420 and/or processor component 470, described below with respect to FIGS. 4A, 4C, respectively).
In some implementations, dimensions of the combined frame (e.g., 364 in FIG. 3C) may be selected to match the dimensions of one or more input frames (e.g., frames of the sequences 300, 310). The frames 361, 362 from the sequences 300, 310 may be adjusted (e.g., resampled, compressed, and/or cropped) to fit into the frame 364. In one or more implementations, a dimension (e.g., 368) of the combined frame 364 may be selected to comprise a combination of the respective dimensions of the source frames (e.g., the dimensions 367, 369 of frames 361, 362). In some implementations comprising concatenation of frames of two or more resolutions, e.g., such as described below with respect to Tables 6-7, the dimension of the combined frame (e.g., 364) may be configured based on a dimension of one of the source frames (e.g., the dimension 367 or 369). For example, concatenation of a frame at 100×100 pixels (resolution 1), a frame at 50×50 pixels (resolution 2), and a frame at 25×25 pixels (resolution 3) may correspond to a concatenated frame of 150×100 pixel dimension.
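A minimal sketch of vertical concatenation along the lines of FIG. 3C is shown below; the use of NumPy arrays for frames and of OpenCV for the optional resampling step are illustrative tooling choices, not requirements of the disclosure, and equal source widths are assumed when no resampling is performed.

    import numpy as np
    import cv2  # used here only for optional resampling of the source frames

    def concatenate_frames(top, bottom, target_width=None):
        # Optionally resample both source frames to a common width (e.g., to
        # match the dimension 368 of the combined frame 364 in FIG. 3C).
        if target_width is not None:
            top = cv2.resize(top, (target_width, top.shape[0]))
            bottom = cv2.resize(bottom, (target_width, bottom.shape[0]))
        # Stack vertically; the combined height is the sum of the source heights.
        return np.vstack([top, bottom])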
FIG. 3D illustrates a configuration of a sequence of frames for motion and disparity processing using interleaving of concatenated input from multiple sources, according to one or more implementations. The A, B frame sequences 300, 310, comprising frames (e.g., 302, 312), may be processed by the alternating interleaver process 340 configured to produce a sequence of concatenated frames 350. The sequence 350 may comprise a plurality of frames (e.g., 342, 344 in FIG. 3D). Individual frames of the sequence 350 may be configured using a combination of frames from the sequences 300, 310. The process of configuring frames of the sequence 350 may be referred to as interleaving of concatenated frames, as a given portion of one combined frame (e.g., the top portion of the frame 342) may be configured to contain a frame from a source (e.g., source A) that may differ from the source of the content within the respective portion of another combined frame (e.g., a frame from source B in the combined frame 344). In some implementations, such as illustrated in FIG. 3D, individual frames of the sequence 350 (e.g., the frames 342, 344) may comprise a frame from the sequence 300 (e.g., the frame 302) and a frame from the sequence 310 (e.g., the frame 312). The relative position of the source frames (e.g., 302, 312) within the concatenated frame(s) (e.g., 342, 344) may be configured in accordance with particular implementations. By way of an illustration, as shown in FIG. 3D, the vertical placement of the source frames within the combined frame may be switched every two frames (e.g., the frames 342, 347, 348 may comprise a frame from one source (300) in the top portion and a frame from the other source (310) in the bottom portion). For a given portion of the combined frame (e.g., the top/bottom portion as shown in FIG. 3D) the frame source may be alternated every n (e.g., two) frames. For example, the frames 342, 347, 348 may comprise a frame from one source (300) in the top portion, while frames 344, 346, 349 may comprise a frame from the other source (310) in the top portion.
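A minimal sketch of building such an interleaved-concatenated sequence is given below, assuming equal-sized, synchronized frame lists; the simple period-based placement schedule is one possible variant of the switching described above, not the only one contemplated.

    import numpy as np

    def build_interleaved_concatenated(channel_a, channel_b, period=2):
        # Alternate the top/bottom placement of the two sources every `period`
        # combined frames (cf. sequence 350 in FIG. 3D), so that the encoder's
        # motion estimation yields disparity across placement switches and
        # within-source motion otherwise.
        combined = []
        for i, (a, b) in enumerate(zip(channel_a, channel_b)):
            if (i // period) % 2 == 0:
                combined.append(np.vstack([a, b]))  # source A on top, B on bottom
            else:
                combined.append(np.vstack([b, a]))  # source B on top, A on bottom
        return combined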
The frame sequence 350 may be encoded by an encoder process denoted by arrow 352. In one or more implementations, the encoder process 352 may comprise H.264, H.265, MPEG-4, and/or another applicable encoder characterized by a motion estimation operation. The encoding process 352 may produce the encoded frame sequence 354. Use of concatenation and interleaving to construct the sequence 350 may enable obtaining, inter alia, motion and disparity information within a single encoded frame sequence 354.
As shown in FIG. 3D, the encoded frames 353, 356, 359 may provide disparity estimates DXn. The notation in FIG. 3D is as follows: X={A,B} denotes the frame source sequence; n={1, 2, . . . , N} denotes a frame index within the source sequence occurring at time tn={t1, t2, . . . , tN}. For example, disparity estimates DA1, DA2, DA3 denote a disparity between frames of the A/B sequences occurring at times t1, t2, t3, with the A sequence frame being the reference frame. Disparity estimates DB1, DB2, DB3 denote a disparity between frames of the B/A sequences occurring at times t1, t2, t3, with the B sequence frame being the reference frame. Motion estimates mA1, mA2 denote motion estimates between frames of the A sequence occurring at times t1, t2 and t2, t3, respectively. Motion estimates mB1, mB2 denote motion estimates between frames of the B sequence occurring at times t1, t2 and t2, t3, respectively.
Motion and/or disparity estimates may be obtained by parsing the sequence of encoded frames (e.g., 354 in FIG. 3D). The disparity information may be used to, inter alia, determine distance to one or more objects within the input sequence.
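One way to interpret the parsed motion field of a single encoded concatenated frame is sketched below, under the simple period-based placement schedule sketched earlier: when the top/bottom placement has switched relative to the preceding combined frame, the encoder's macroblock motion measures disparity between the sources; otherwise it measures temporal motion within a source. The dictionary layout, function name, and schedule are illustrative assumptions.

    def label_encoded_halves(motion_field, frame_index, period=2):
        # motion_field: (rows, cols, 2) macroblock motion parsed for this frame;
        # frame_index 0 is typically a keyframe and would be skipped beforehand.
        half = motion_field.shape[0] // 2
        top, bottom = motion_field[:half], motion_field[half:]
        # Did the top/bottom placement switch relative to the preceding frame?
        switched = ((frame_index // period) % 2) != (((frame_index - 1) // period) % 2)
        kind = "disparity" if switched else "motion"
        return {"top": (kind, top), "bottom": (kind, bottom)}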
Concatenation and interleaving of frames from multiple image sources as shown in FIG. 3D represents one exemplary implementation. For a given encoded concatenated frame, the implementation of FIG. 3D may be configured to provide two estimates of motion (e.g., the encoded frame 355) or two estimates of disparity (e.g., the frame 353). A variety of implementations may be employed for concatenation and interleaving of frames from multiple image sources, e.g., such as presented in Tables 1 through 7 below.
Concatenation and interleaving implementation of inputs A, B shown in Table 1 may enable determination of motion for the source A and source B from a given encoded concatenated frame.
Concatenation and interleaving implementation of inputs A, B shown in Tables 2, 3 may enable determination of motion for the source B, A, respectively, from a given encoded concatenated frame. The disparity between frames of the sources A, B may be determined from every other encoded concatenated frame of the implementation of Tables 2, 3.
Concatenation and interleaving implementation of inputs A, B shown in Table 4 may enable determination of the disparity D between frames of the sources A, B from a given encoded concatenated frame. Motion information may be unavailable when using the frame configuration implementation shown in Table 4.
TABLE 1
Source      Output
A1 B1
A2 B2       mA1 mB1
A3 B3       mA2 mB2
TABLE 2
Source      Output
B1 B1
B2 A1       mB1 D1
B3 B3       mB2
B4 A3       mB3 D3
TABLE 3
Source      Output
A1 B1
A2 A1       mA1 D1
A3 B3       mA2
A4 A3       mA3 D3
TABLE 4
Source      Output
A1 A1/B1
B1 A2       D1
A3 B2       D2
B3 A4       D3
In some implementations, a combined frame may comprise data from three or more source frames. Table 5 illustrates one such implementation of concatenation and interleaving of inputs A, B. As shown in Table 5, a given encoded combined frame may enable determination of motion m (for one of the sources A or B) and of the disparity D between frames of the sources A, B. Portions of the encoded frame that do not carry usable motion/disparity information may be ignored during processing.
TABLE 5
Source            Output
A1 A1/B1 A1
B1 A2 A2          D1 m1
A3 B2 A3          D2 m2
B3 A4 A4          D3 m3
In some implementations of processing of images from multiple sources, the combination operation may comprise replication of a given image (e.g., image B1 in Table 2 and/or image A1 in Table 5) and concatenation of multiple copies of the given image within a concatenated image.
In some implementations, input concatenation and/or interleaving may use input frames characterized by two or more resolutions. By way of an illustration (shown in Table 6 below), a given concatenated frame may comprise a left portion comprising an input frame (e.g., A/B) at a first resolution, Res I (e.g., 1280×720 pixels). The right portion may comprise a frame at another resolution, Res II, in this example characterized by a quarter pixel count (e.g., 640×360 pixels) compared to the Res I frame, and a frame at a third resolution, Res III, characterized by a 1/16th pixel count (e.g., 320×180 pixels) compared to the Res I frame.
Various other implementations may be utilized wherein frames of multiple resolutions (or where some frames have the same resolution and some do not) from one or more sources may be concatenated into a given frame. Table 7 below illustrates one such implementation, wherein frames from sources A, B may be concatenated using two frame resolutions (denoted by Res I and Res II). In some implementations, the Res I resolution may correspond to 1280×720 pixels, 640×360 pixels, or another resolution; the Res II resolution may correspond to 640×360 pixels, 320×180 pixels, or another resolution. The frame configuration of Table 7 may enable obtaining motion m and disparity D from the left and right portions, respectively, of a given encoded concatenated frame.
TABLE 6
Left       Right
Res I      Res II
           Res III
           Ignored
TABLE 7
Left Source    Right Source    Output 1    Output 2
A1 Res I       A1 Res II
               Any
A2 Res I       B1 Res II       m1          D1
               A2 Res II
A3 Res I       A3 Res II       m2
               B2 Res II                   D2
A4 Res I       B3 Res II       m3          D3
               A4 Res II
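A minimal sketch of assembling a multi-resolution concatenated frame along the lines of Table 6 is shown below; the specific resolutions are the examples given above, a three-channel image is assumed, the unused remainder of the right portion is zero-filled, and OpenCV is an illustrative choice for the resampling.

    import numpy as np
    import cv2  # illustrative choice of resampling tooling

    def build_multires_frame(frame, res1=(1280, 720), res2=(640, 360), res3=(320, 180)):
        # Left portion: the input frame at Res I.
        left = cv2.resize(frame, res1)
        # Right portion: Res II on top of Res III; the remainder is ignored (zero-filled).
        right = np.zeros((res1[1], res2[0], 3), dtype=frame.dtype)
        right[:res2[1], :res2[0]] = cv2.resize(frame, res2)
        right[res2[1]:res2[1] + res3[1], :res3[0]] = cv2.resize(frame, res3)
        return np.hstack([left, right])  # e.g., a 1920x720 combined frame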
FIG. 4C depicts a processing apparatus configured to determine disparity and/or motion using concatenation and/or interleaving of images from multiple sources, according to one or more implementations.
The apparatus 460 may be configured to receive input from a plurality of image sources (e.g., 404, 405) configured to provide information related to the environment 402. In some implementations of visual data processing, the sources 404, 405 may comprise the sources described with respect to FIG. 4A above.
Image sequences 446, 448 provided by the sources 404, 405 may be converted using the illustrated component 462. In some implementations, the component 462 may comprise an N×1 converter configured to place data from one of the input sequences 466, 468 into its output frame at a given time. In one or more implementations, the component 462 may be configured to implement frame concatenation, resampling, trimming, and/or other manipulations of the input frames 466, 468. In some implementations, operation of the component 462 may be configured using approaches described with respect to FIGS. 3C-3D herein, and/or Tables 1-7, above. The output 464 may comprise, e.g., a sequence of concatenated frames (e.g., the sequence 350 in FIG. 3D).
The output 464 may be provided to a processing component 470. The component 470 may comprise for example motion estimation logic. In one or more implementations, the motion estimation logic-enabled component may comprise a video encoder comprising one or more motion estimation operations or algorithms. The component 470 may comprise for instance an integrated circuit (IC) disposed on a single die or multiple dies, a component of a processing system (e.g., the video encoder block of a Snapdragon® system on a chip), an ASIC, an FPGA with a video encoder intellectual property (IP) core, an OEM printed circuit board, and/or other configuration. Video encoding effectuated by the component 470 may comprise any applicable standard comprising motion estimation between one or more current images and one or more preceding images. Some exemplary encoding implementations include H.264/MPEG-4 advanced video coding described, e.g., in ISO/IEC 14496-10, 2009—MPEG-4 Part 10, Advanced Video Coding; the H.263 standard described in, e.g., ITU-T H.263, TELECOMMUNICATION STANDARDIZATION SECTOR OF ITU (January/2005), SERIES H: AUDIOVISUAL AND MULTIMEDIA SYSTEMS, Infrastructure of audiovisual services—Coding of moving video, Video coding for low bit rate communication; H.262/MPEG-2, described in, e.g., ISO/IEC 13818-2, 2013 Oct. 1, Information technology—Generic coding of moving pictures and associated audio information—Part 2: Video; and the H.265 standard described in, e.g., ITU-T H.265, TELECOMMUNICATION STANDARDIZATION SECTOR OF ITU (April/2013), SERIES H: AUDIOVISUAL AND MULTIMEDIA SYSTEMS, Infrastructure of audiovisual services—Coding of moving video, High efficiency video coding; each of the foregoing being incorporated herein by reference in its entirety. See also Exhibit I hereto, which contains exemplary computer code useful for processing image data consistent with, e.g., the ISO/IEC 14496-10 and H.265 Standards referenced above.
The component 470 may comprise logic configured to determine the depth of the scene 402, and/or distance to objects that may be present in the scene 402, using the motion-based disparity determination methodology. The extracted motion information may be used for disparity and/or distance determination. The output 472 of the component 470 may be provided to another component (e.g., a controller of a robot). Various uses of the depth information are contemplated herein, such as for example object detection, object localization, distance estimation, trajectory planning, and gesture detection. Determining motion, disparity, and/or distance may be performed, for example, using the operations described above with respect to FIGS. 3A-4A.
In one or more implementations, the motion estimation logic referenced above may comprise a video encoder comprising one or more motion estimation operations. The component 456 may comprise for instance a dedicated integrated circuit (IC) disposed on a single die or multiple dies, a component of a processing system (e.g., the video encoder block of a Snapdragon® system on a chip), an ASIC, an FPGA with a video encoder intellectual property (IP) core, and/or an OEM printed circuit board. Video encoding effectuated by the component 456 may comprise any applicable standard comprising motion estimation between a current frame and a preceding frame, such as, e.g., those disclosed supra. In some implementations, the motion estimation component 456 may comprise logic configured to determine motion using optical flow and/or other motion estimation algorithms such as, but not limited to, block-matching algorithms and phase correlation, as well as determining locations of features and estimating the motion of those features. In one or more implementations (wherein the input 452 may be encoded using a video encoder (e.g., MPEG-4, H.265)), the component 456 may be configured to parse the encoded video stream in order to obtain motion information (e.g., the map of vectors 916 in FIG. 9A).
In some implementations of an encoder with motion estimation (e.g., MPEG-4/H.264), the encoded frame sequences 315, 330, 354 in FIGS. 3A-3D and/or the output 422 in FIG. 4A may comprise one or more frames (also referred to as keyframes) that may not contain motion information. The processing component 430, 456, and/or 470 of FIGS. 4A-4C may be configured to detect and ignore (e.g., skip) frames that do not convey motion information.
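A minimal sketch of such keyframe skipping is shown below; the encoded-frame records with is_keyframe and motion_field attributes are assumed to come from an already-implemented parser rather than from any particular codec API.

    def motion_fields_excluding_keyframes(encoded_frames):
        # Keyframes (intra-coded frames) carry no motion information and are
        # therefore skipped during motion/disparity processing.
        return [f.motion_field for f in encoded_frames if not f.is_keyframe]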
The apparatus 440 of FIG. 4B and/or 460 of FIG. 4C may be utilized with multi-camera configurations, e.g., such as described below with respect to FIGS. 5A-5C. FIG. 5A illustrates a triple-camera configuration useful with disparity and/or motion determination using image interleaving and/or concatenation, according to one or more implementations. The camera configuration 500 may comprise three cameras 510, 520, 530, denoted A, B, C, respectively. In one or more implementations, the configuration 500 may comprise a pair of horizontally spaced cameras (e.g., left 510, right 520) and a vertically spaced camera (530). In some implementations (e.g., such as illustrated in FIG. 5C), the cameras 510, 520, 530 may be disposed in a linear array, and/or another configuration. Frames provided by the cameras 510, 520, 530 may be interleaved and/or concatenated using any applicable methodologies, including those described with respect to FIGS. 3A-3C and/or 6A herein.
Various interleaving sequences may be employed when processing frames provided by the cameras 510, 520, 530. By way of illustration, encoding the interleaved frame stream ABCA . . . (e.g., the stream 600 shown in FIG. 6A), comprising frames provided by the cameras 510, 520, 530, using a motion estimation encoder may provide motion due to disparity shown by arrows 504, 514, 524 in FIG. 5A. Encoding the interleaved frame stream ACBA . . . (e.g., 610 in FIG. 6A) using a motion estimation encoder may provide motion due to disparity shown by arrows 526, 516, 506 in FIG. 5A. Various other interleaving sequences may be utilized, such as, for example, ABBCCAABBCCAA . . . , BACA . . . , and/or other sequences.
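A minimal sketch of building such multi-camera interleaves is given below; it simply cycles through the synchronized channels in the supplied order, so passing the channels as (A, B, C) yields the ABCA . . . stream while passing them as (A, C, B) yields the ACBA . . . stream.

    def interleave_round_robin(*channels):
        # A1, B1, C1, A2, B2, C2, ... for channels supplied as (A, B, C);
        # encoding this order yields pairwise (A->B, B->C, C->A) disparities.
        out = []
        for frames in zip(*channels):
            out.extend(frames)
        return out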
FIG. 5B illustrates a quad-camera configuration useful with disparity determination using image interleaving, according to one or more implementations. The camera configuration 550 may comprise four cameras 552, 554, 556, 558, denoted A, B, C, D, respectively. In one or more implementations, the configuration 550 may comprise two pairs of horizontally spaced cameras (e.g., left top 552, right top 554 and left bottom 556, right bottom 558) vertically spaced from one another. Various other spatial camera configurations may be utilized as well. Frames provided by the cameras 552, 554, 556, 558 may be interleaved and/or combined (e.g., concatenated) using any applicable methodologies, including those described with respect to FIGS. 3A-3C and/or 6B.
Various interleaving sequences may be employed when processing frames provided by the cameras 552, 554, 556, 558. By way of illustration, encoding the interleaved frame stream ABCDA . . . (e.g., the stream 620 shown in FIG. 6B), comprising frames provided by the cameras 552, 554, 556, 558, using a motion estimation encoder may provide motion due to disparity shown by arrows 562, 564, 566, 568 in FIG. 5B. Encoding the interleaved frame stream ADCBA . . . (e.g., 638 in FIG. 6B) using a motion estimation encoder may provide motion due to disparity shown by arrows 569, 567, 565, 563 in FIG. 5B. Various other interleaving sequences, such as, for example, the sequences 624, 628, 630, 634 illustrated in FIG. 6B, may be utilized. Sequences comprising transitions between diagonally opposing cameras in FIG. 5B (e.g., AC, CA, BD, DB, and/or other) may be used to, inter alia, determine disparity shown by arrows 544, 546.
FIG. 5C illustrates a linear multiple sensor element configuration useful with determining multiple depth scales using image interleaving, according to one or more implementations. The configuration 570 may comprise sensor elements 572, 574, 576, 578 disposed in a linear array. In one or more implementations, individual sensor elements may comprise cameras or camera sensors. Spacing between the elements 572, 574, 576, 578 may be the same (a uniform linear array) and/or varying (e.g., according to a power law, random, and/or other). In some implementations, non-uniform spacing may be used in order to implement, e.g., a Vernier scale.
Various interleaving sequences may be employed when processing frames provided by the elements 572, 574, 576, 578, such as, for example, the sequences 620, 624, 628, 630, 634, 638 illustrated in FIG. 6B and/or other sequences (e.g., ABBCCDDAA . . . ). Use of multiple elements of the array 570 may enable determination of multiple disparity estimates, e.g., shown by arrows 580, 582, 584, 586, 588, 590. In some implementations, the frames from individual sensor elements shown and described above with respect to FIGS. 5A-5C (e.g., 510, 520, 530) may be acquired simultaneously with one another using, e.g., multiple camera synchronization. The disparity estimates corresponding to different sensor spacing (e.g., shown by arrows 580, 582, 584, 586, 588, 590 in FIG. 5C) may be characterized by different dynamic range, resolution, and/or precision, e.g., in accordance with Eqn. 1. By way of an illustration, closely spaced sensing elements (e.g., 572, 574) may be capable of determining distance to objects disposed farther from the array as compared to widely spaced elements (e.g., 572-578). Widely spaced elements (e.g., 572-578) may be capable of determining distance to objects with greater precision (e.g., lower uncertainty) as compared to estimates produced by closely spaced sensing elements (e.g., 572, 574).
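The baseline/precision trade-off referenced above may be illustrated with a minimal C sketch, assuming Eqn. 1 takes the familiar pinhole-stereo form Z = f*B/d (Z: distance, f: focal length in pixels, B: element spacing, d: disparity in pixels); the focal length, baselines, and one-pixel disparity quantization below are assumed values used for illustration only.

/* Sketch of the spacing/precision trade-off, assuming Eqn. 1 has the
 * pinhole-stereo form Z = f * B / d. */
#include <stdio.h>

int main(void) {
    const double f = 600.0;                  /* focal length, pixels (assumed)  */
    const double baselines[] = {0.02, 0.12}; /* narrow pair vs. wide pair, m    */
    const double delta_d = 1.0;              /* disparity quantization, pixels  */
    const double Z = 2.0;                    /* object distance, m (assumed)    */

    for (int i = 0; i < 2; i++) {
        double B = baselines[i];
        double d = f * B / Z;                /* disparity produced at range Z   */
        /* A one-pixel disparity error maps to a range error of about Z*Z/(f*B). */
        double dZ = (Z * Z / (f * B)) * delta_d;
        printf("B=%.2f m: disparity=%.1f px, range uncertainty=+/-%.2f m\n",
               B, d, dZ);
    }
    return 0;
}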
In some implementations, multiple elements (e.g., 572, 574, 576, 578) may be disposed in a non-linear array (e.g., rectangular and/or concave), thereby providing multiple perspectives and/or views of the scene to the processing component. Some views/perspectives may, e.g., reveal objects that may be hidden and/or partially obscured in other perspectives, thereby enabling more robust determination of object distance and/or object detection. In some implementations, individual distance estimates (associated with individual camera pairs) may be combined using any appropriate methodologies (e.g., averaging, thresholding, median filtering) and/or other techniques to obtain a resultant distance estimate characterized by greater precision and/or accuracy compared to the individual estimates. In one or more implementations, a distance estimate associated with one camera pair may be selected as the resultant distance estimate, thereby enabling robust distance determination in the presence of occlusions that may (at least partly) block the object in a given set of frames.
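A minimal sketch of one of the combination strategies mentioned above (median filtering of per-pair distance estimates) is given below; the estimate values, including the outlier meant to mimic an occluded camera pair, are hypothetical.

/* Combine per-pair distance estimates with a median; the median is robust to
 * a single outlier (e.g., a pair whose view of the object is occluded). */
#include <stdio.h>
#include <stdlib.h>

static int cmp_double(const void *a, const void *b) {
    double da = *(const double *)a, db = *(const double *)b;
    return (da > db) - (da < db);
}

static double median(double *v, size_t n) {
    qsort(v, n, sizeof(double), cmp_double);
    return (n % 2) ? v[n / 2] : 0.5 * (v[n / 2 - 1] + v[n / 2]);
}

int main(void) {
    double estimates[] = {1.95, 2.05, 2.00, 7.50};  /* last pair occluded */
    size_t n = sizeof(estimates) / sizeof(estimates[0]);
    printf("combined distance: %.2f m\n", median(estimates, n));
    return 0;
}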
Various concatenation approaches may be employed when processing frames from multiple sources, such as is described above with respect to FIGS. 5A-6B. In one or more implementations, the concatenation approaches may comprise placing two or more frames from two or more sequences into a single frame, e.g., as described above with respect to FIGS. 3C-3D, and/or Tables 1 through 4. In some implementations, a frame from a given source may be repeated within the concatenated frame, e.g., as shown in Tables 2 and 7 above. In one or more implementations, the concatenation operation may comprise resampling and/or resizing of the input frames, e.g., as described with respect to Tables 6-7 above.
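The following sketch shows one possible side-by-side concatenation of two equally sized grayscale frames into a single composite frame, in the spirit of the concatenation approaches described above; the frame dimensions and the use of bare byte buffers are illustrative simplifications.

/* Side-by-side concatenation of two grayscale frames into one composite
 * frame whose width is 2*w: the left frame occupies columns [0, w), the
 * right frame occupies columns [w, 2w). */
#include <stdlib.h>
#include <string.h>

static void concat_lr(const unsigned char *left, const unsigned char *right,
                      unsigned char *out, int w, int h) {
    for (int row = 0; row < h; row++) {
        memcpy(out + (size_t)row * 2 * w,     left  + (size_t)row * w, w);
        memcpy(out + (size_t)row * 2 * w + w, right + (size_t)row * w, w);
    }
}

int main(void) {
    int w = 320, h = 240;
    unsigned char *left  = calloc((size_t)w * h, 1);
    unsigned char *right = calloc((size_t)w * h, 1);
    unsigned char *comp  = malloc((size_t)2 * w * h);
    concat_lr(left, right, comp, w, h);   /* comp now holds one composite frame */
    free(left); free(right); free(comp);
    return 0;
}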
FIG. 7 depicts a motion extraction apparatus, according to one or more implementations. The apparatus 700 may comprise an encoder component 706 configured to encode input video stream 702. The input 702 may comprise one or more frames received from an image sensor (e.g., charge-coupled device (CCD), CMOS device, and/or an active-pixel sensor (APS), photodiode arrays, and/or other image sensors). In one or more implementations, the input may comprise a pixel stream downloaded from a file. An example of such a file may include a stream of two-dimensional matrices of red, green, and blue (RGB) values (e.g., refreshed at a 25 Hz or other suitable frame rate). It will be appreciated by those skilled in the art when given this disclosure that the above-referenced image parameters are merely exemplary, and many other image representations (e.g., bitmap, luminance-chrominance (YUV, YCbCr), cyan-magenta-yellow and key (CMYK), grayscale, and/or other image representations) are equally applicable to and useful with the various aspects of the present disclosure. Furthermore, data frames corresponding to other (non-visual) signal modalities such as sonograms, IR, radar, or tomography images are equally compatible with the processing methodology of the disclosure, as are yet other configurations.
The component 706 may comprise a specialized video encoder configured to implement video encoding comprising a motion estimation operation. In one or more implementations, the component 706 may comprise a dedicated integrated circuit (IC) disposed on a single die or multiple dies, a component of a processing system (e.g., a video encoder block of a Snapdragon® system on a chip), an ASIC, an FPGA with a video encoder intellectual property (IP) core, an OEM printed circuit board, and/or other. Video encoding effectuated by the component 706 may comprise any applicable standard comprising motion estimation between a current frame and a preceding frame. Some encoding implementations may comprise an MPEG-4, H.262, H.263, H.264, and/or H.265 video encoder, such as described above with respect to FIG. 4A supra.
The component 706 may provide encoded video output 708. The output 708 may be characterized by a lower data rate (e.g., as represented by fewer bits per frame) as compared to the input video signal 702. The output 708 may comprise pixel luminance and/or chromaticity data. The output 708 may comprise motion information, e.g., as illustrated in FIG. 9A, which depicts output of a video encoder useful with the motion extraction methodology. In one or more implementations, the output illustrated in FIG. 9A may correspond to occurrence of an object (e.g., a moving ball represented by a hashed circle 900 in FIG. 9A) in the input 702 of FIG. 7. The encoded output 708 may comprise a luminance component (also referred to as "luma") depicted by area 902 in FIG. 9A. The encoded luminance may be comprised of a plurality of macroblocks 904. The size of the macroblock may be configured in accordance with specifications of an application (e.g., encoding standard, video frame size, resolution, quality, refresh rate, bit depth, channel (e.g., luma, chroma), and/or other) and may be selected, for example, at 16×16 for the luma channel and 8×8 for the chroma channel for an H.264 encoder.
The encoded output 708 (which may also be referred to as the compressed video) may comprise motion information, denoted by area 910 in FIG. 9A. The motion information may comprise one or more vectors (e.g., 916) associated with one or more macroblocks (e.g., 914).
Compressed video 708 in FIG. 7 may be provided to a processing component 710. The component 710 may be configured to parse the compressed video stream 708 in order to obtain motion information (e.g., the map of vectors 916 in FIG. 9A). By way of an illustration, the motion information may comprise a macroblock location L (e.g., an index), and the x-component and y-component of motion of pixels associated with the macroblock location L. The extracted motion information 712 may be provided to another component. Various uses of the motion information are contemplated, such as, for example, object detection (by recognizing the shape of the surface of the object and/or by using depth to segment the scene), gesture detection (by determining the orientation of the hands or other body parts), and/or other uses. In some implementations, the compressed video may be provided via a pathway 714 to a target destination (e.g., a general purpose processor for streaming to a display and/or other components).
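The parsed motion information described above may be represented, for example, as one (dx, dy) vector per macroblock location; the following C sketch uses a hypothetical mb_motion structure and made-up values (a real implementation would populate the map from the compressed stream rather than assigning values directly).

/* Hypothetical per-macroblock motion map and a simple scan for the block
 * with the largest motion magnitude. */
#include <stdio.h>

typedef struct { int dx, dy; } mb_motion;

int main(void) {
    enum { MB_COLS = 20, MB_ROWS = 15 };          /* 320x240 at 16x16 blocks  */
    mb_motion map[MB_ROWS][MB_COLS] = {{{0, 0}}}; /* all zero-motion initially */
    map[7][10] = (mb_motion){4, -1};              /* pretend an object moved   */

    int best_r = 0, best_c = 0, best_mag2 = -1;
    for (int r = 0; r < MB_ROWS; r++)
        for (int c = 0; c < MB_COLS; c++) {
            int m2 = map[r][c].dx * map[r][c].dx + map[r][c].dy * map[r][c].dy;
            if (m2 > best_mag2) { best_mag2 = m2; best_r = r; best_c = c; }
        }
    printf("strongest motion at macroblock (%d,%d): dx=%d dy=%d\n",
           best_r, best_c, map[best_r][best_c].dx, map[best_r][best_c].dy);
    return 0;
}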
FIG. 8 depicts a video processing system comprising a differential motion extraction apparatus, according to one or more implementations. The system 800 of FIG. 8 may be configured to receive sensory input 802. In some implementations, the input 802 may comprise the input 702 described above with respect to FIG. 7. The input 802 may be encoded by a video encoder component 806. In one or more implementations, the component 806 may comprise the component 706 described above with respect to FIG. 7. The component 806 may be configured to encode the input 802 using one or more encoding formats (e.g., H.264). The encoded signal 808 may be provided to the component 810. In some implementations, the component 810 may be configured to parse the encoded signal 808 to extract motion information 812 by, e.g., extracting from the compressed video data the P slice (P-frame) data, which contains the motion information (x and y components) of the macroblock motion for all macroblocks covering the current frame. The extracted motion information may be used in controlling a robotic device.
The extracted motion information (e.g., 712, 812 in FIGS. 7-8, respectively) may comprise the horizontal and/or vertical displacement (e.g., the motion vector components (dx, dy)) of a pixel group (e.g., a macroblock) between the current frame and a preceding frame. In some implementations of video encoding useful with a pipeline-based multimedia framework (see, e.g., the GStreamer framework, http://gstreamer.freedesktop.org/), the parsed motion information may be represented using the YUV color model. In one such implementation, the (U,V) channels may represent the (dx,dy) displacement, and the Y channel may be used for representing additional information (e.g., an indication as to whether the current frame is a keyframe, the macroblock size (e.g., 16×16, 8×8, and/or other size), and/or other information). Using the (Y,U,V) model to represent motion information may advantageously reduce computational load on, e.g., the component 820, and enable access to motion information without necessitating further decoding/encoding operations in order to extract the motion vector components.
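A minimal sketch of the (Y,U,V) packing described above is given below: the U and V planes carry dx and dy offset by 128, and the Y plane carries side information such as a keyframe flag, similar in spirit to what the debug path in Exhibit I writes into the destination planes. The motion_planes structure and the one-byte-per-macroblock layout are assumptions made for the sketch.

/* Pack one macroblock's (dx, dy) and a keyframe flag into planar buffers. */
#include <string.h>

typedef struct {
    unsigned char *y, *u, *v;   /* planar buffers, one byte per macroblock */
    int mb_cols, mb_rows;
} motion_planes;

static void pack_motion(motion_planes *p, int mb_x, int mb_y,
                        int dx, int dy, int is_keyframe) {
    int idx = mb_y * p->mb_cols + mb_x;
    p->u[idx] = (unsigned char)(128 + dx);    /* horizontal displacement     */
    p->v[idx] = (unsigned char)(128 + dy);    /* vertical displacement       */
    p->y[idx] = is_keyframe ? 255 : 0;        /* side info: keyframe flag    */
}

int main(void) {
    enum { COLS = 20, ROWS = 15 };
    unsigned char y[ROWS * COLS], u[ROWS * COLS], v[ROWS * COLS];
    motion_planes p = { y, u, v, COLS, ROWS };
    memset(y, 0, sizeof y); memset(u, 128, sizeof u); memset(v, 128, sizeof v);
    pack_motion(&p, 10, 7, 4, -1, 0);         /* store one macroblock's motion */
    return 0;
}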
The input 802 may be processed by a processing component 820. The component 820 may comprise an artificial neuron network (ANN) comprising a plurality of nodes. Individual nodes of the component 820 network may comprise neuron units characterized by a receptive field, e.g., region of space in which a presence of a stimulus may affect response of the neuron. In some implementations, the units may comprise spiking neurons and the ANN may comprise a spiking neuron network, (SNN). Various implementations of SNNs may be utilized consistent with the disclosure, such as, for example, those described in co-owned, and co-pending U.S. patent application Ser. No. 13/774,934, entitled “APPARATUS AND METHODS FOR RATE-MODULATED PLASTICITY IN A SPIKING NEURON NETWORK” filed Feb. 22, 2013, Ser. No. 13/763,005, entitled “SPIKING NETWORK APPARATUS AND METHOD WITH BIMODAL SPIKE-TIMING DEPENDENT PLASTICITY” filed Feb. 8, 2013, Ser. No. 13/152,105, filed Jun. 2, 2011 and entitled “APPARATUS AND METHODS FOR TEMPORALLY PROXIMATE OBJECT RECOGNITION”, Ser. No. 13/487,533, filed Jun. 4, 2012 and entitled “STOCHASTIC SPIKING NETWORK LEARNING SYSTEMS AND APPARATUS FOR IMPLEMENTING TASK-SPECIFIC LEARNING USING SPIKING NEURONS AND METHODS”, Ser. No. 14/020,376, filed Sep. 6, 2013 and entitled “APPARATUS AND METHODS FOR EVENT-BASED PLASTICITY IN SPIKING NEURON NETWORKS”, Ser. No. 13/548,071, filed Jul. 12, 2012 and entitled “SPIKING NEURON NETWORK SENSORY PROCESSING APPARATUS AND METHODS”, commonly owned U.S. patent application Ser. No. 13/152,119, filed Jun. 2, 2011, entitled “SENSORY INPUT PROCESSING APPARATUS AND METHODS”, Ser. No. 13/540,429, filed Jun. 2, 2012 and entitled “SENSORY PROCESSING APPARATUS AND METHODS”, Ser. No. 13/623,820, filed Sep. 20, 2012 and entitled “APPARATUS AND METHODS FOR ENCODING OF SENSORY DATA USING ARTIFICIAL SPIKING NEURONS”, Ser. No. 13/623,838, filed Sep. 20, 2012 and entitled “SPIKING NEURON NETWORK APPARATUS AND METHODS FOR ENCODING OF SENSORY DATA”, Ser. No. 12/869,573, filed Aug. 26, 2010 and entitled “SYSTEMS AND METHODS FOR INVARIANT PULSE LATENCY CODING”, Ser. No. 12/869,583, filed Aug. 26, 2010, entitled “INVARIANT PULSE LATENCY CODING SYSTEMS AND METHODS”, Ser. No. 13/117,048, filed May 26, 2011 and entitled “APPARATUS AND METHODS FOR POLYCHRONOUS ENCODING AND MULTIPLEXING IN NEURONAL PROSTHETIC DEVICES”, Ser. No. 13/152,084, filed Jun. 2, 2011, entitled “APPARATUS AND METHODS FOR PULSE-CODE INVARIANT OBJECT RECOGNITION”, Ser. No. 13/239,255 filed Sep. 21, 2011, entitled “APPARATUS AND METHODS FOR SYNAPTIC UPDATE IN A PULSE-CODED NETWORK”, Ser. No. 13/487,576 entitled “DYNAMICALLY RECONFIGURABLE STOCHASTIC LEARNING APPARATUS AND METHODS”, filed Jun. 4, 2012, and U.S. Pat. No. 8,315,305, entitled “SYSTEMS AND METHODS FOR INVARIANT PULSE LATENCY CODING” issued Nov. 20, 2012, each of the foregoing being incorporated herein by reference in its entirety.
Receptive fields of the network 820 units may be configured to span several pixels with the input 802 frames so as to effectuate sparse transformation of the input 802. Various applicable methodologies may be utilized in order to effectuate the sparse transformation, including, for example, those described in co-pending and co-owned U.S. patent application Ser. No. 13/540,429, entitled “SENSORY PROCESSING APPARATUS AND METHODS”, filed Jul. 2, 2012, and U.S. patent application Ser. No. 13/623,820, entitled “APPARATUS AND METHODS FOR ENCODING OF SENSORY DATA USING ARTIFICIAL SPIKING NEURONS”, filed on Sep. 20, 2012, each of the foregoing being incorporated herein by reference in its entirety. In some implementations, the encoding may comprise a sparse transformation, described in, e.g., U.S. patent application Ser. No. 14/191,383, entitled “APPARATUS AND METHODS FOR TEMPORAL PROXIMITY DETECTION”, filed on Feb. 26, 2014, the foregoing being incorporated herein by reference in its entirety.
The output 812 of the component 810 may be provided to the processing component 820. In some implementations, the component 820 may use the motion information 812 in order to determine characteristics (e.g., location, dimension, shape, and/or other) of one or more objects in the sensory input 802. In one or more implementations, the component 820 may comprise an adaptive predictor component configured to determine a control output 826 for a robotic device (e.g., the vehicle 100, 160 in FIGS. 1A-1B) based on the input 812 and/or the inputs 802, 812. In some implementations of autonomous vehicle navigation, the input 812 and/or 802 may be used by the component 820 in order to predict a control signal configured to cause the vehicle 160 in FIG. 1B to execute an obstacle avoidance action. Various implementations of predictors may be employed with the motion and/or distance detection approach described herein, including, e.g., U.S. patent application Ser. No. 13/842,530, entitled "ADAPTIVE PREDICTOR APPARATUS AND METHODS", filed on Mar. 15, 2013, the foregoing being incorporated herein by reference in its entirety.
FIG. 9B illustrates motion of an object obtained from encoded video, according to one or more implementations. Hashed area 922 in FIG. 9B may represent the luminance component of an image of a ball (e.g., 900 in FIG. 9A). The encoded output of FIG. 9B may comprise motion information, denoted by area 920 in FIG. 9B. The motion information may comprise one or more vectors (e.g., 926) associated with one or more macroblocks (e.g., 924). The encoded representations of FIGS. 9A-9B may be used to determine the temporal distribution of motion associated with the ball 900. A motion pattern comprising alternating opposing motion vectors 916, 926 may be employed to communicate an action indication to a robotic device. In some implementations, a user may shake an object from left to right in front of a camera of an autonomous vehicle in order to indicate a target to be followed.
FIG. 9C illustrates a spatial distribution of motion extracted from encoded video, according to one or more implementations. The representation shown in FIG. 9C may comprise a portion 930 comprising a first plurality of macroblocks 932 characterized by a first motion direction 936. The representation shown in FIG. 9C may comprise a portion 940 comprising a second plurality of macroblocks 942 characterized by a second motion direction 946. The spatial motion map illustrated in FIG. 9C may be employed to communicate an action indication to a robotic device. In some implementations, a user may wave arms in a crisscross manner in order to indicate a stop and/or other command to a robotic device.
In some implementations (not shown), motion information for a given frame may be characterized by a plurality of different motion vectors due to, e.g., motion of different objects, camera pan/zoom operation, and/or video acquisition from a moving platform. By way of an illustration of operation of the robotic vehicle 160 of FIG. 1B, the video signal obtained by the camera 166 may comprise a representation of a human making gestures superimposed on a moving background. Detection of the motion associated with the gestures superimposed on the background motion may be referred to as differential motion detection. In some implementations, the background may be characterized by spatially coherent (uniform) motion. Background motion for a given frame may be estimated and removed. The resultant motion field may be analyzed in order to determine, e.g., hand gesture(s) and/or objects. In one or more implementations, a sequence of frames may be characterized by background motion that is temporally coherent over a timescale associated with the frame sequence. Background motion for the sequence of frames may be estimated and removed from individual frames within the sequence. The resultant motion field may be analyzed in order to determine, e.g., hand gesture(s) and/or objects.
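A minimal sketch of the differential motion idea, assuming the background motion may be approximated by the mean macroblock vector of the frame, is given below; the motion map contents and the residual threshold are hypothetical.

/* Estimate spatially coherent background motion as the mean macroblock
 * vector and report macroblocks whose residual motion exceeds a threshold. */
#include <stdio.h>

#define MB_N 12

int main(void) {
    /* (dx, dy) per macroblock; most blocks share the camera's ego-motion.    */
    int dx[MB_N] = {3,3,3,3,3,3,3,3, 9,10, 3,3};
    int dy[MB_N] = {1,1,1,1,1,1,1,1,-5,-6, 1,1};

    double bg_dx = 0, bg_dy = 0;
    for (int i = 0; i < MB_N; i++) { bg_dx += dx[i]; bg_dy += dy[i]; }
    bg_dx /= MB_N; bg_dy /= MB_N;

    /* Residual motion after background removal highlights, e.g., a gesture.  */
    for (int i = 0; i < MB_N; i++) {
        double rx = dx[i] - bg_dx, ry = dy[i] - bg_dy;
        if (rx * rx + ry * ry > 9.0)   /* 3-pixel threshold, illustrative only */
            printf("macroblock %d: residual motion (%.1f, %.1f)\n", i, rx, ry);
    }
    return 0;
}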
FIGS. 10-15 illustrate methods 1000, 1100, 1200, 1300, 1400, 1500 for determining and using motion information from encoded video. The operations of methods 1000, 1100, 1200, 1300, 1400, 1500 presented below are intended to be illustrative. In some implementations, method 1000, 1100, 1200, 1300, 1400, 1500 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of method 1000, 1100, 1200, 1300, 1400, 1500 are illustrated in FIGS. 10-15 and described below is not intended to be limiting.
In some implementations, methods 1000, 1100, 1200, 1300, 1400, 1500 may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information). The one or more processing devices may include one or more devices executing some or all of the operations of methods 1000, 1100, 1200, 1300, 1400, 1500 in response to instructions stored electronically on an electronic storage medium. The one or more processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of methods 1000, 1100, 1200, 1300, 1400, 1500.
FIG. 10 illustrates a method of determining a salient feature using encoded video motion information, in accordance with one or more implementations.
Operations of method 1000 may be applied to processing of sensory data (e.g., audio, video, RADAR imagery, SONAR imagery, and/or other imagery), observation data, motor command activity in a robotic system, and/or other systems or data.
At operation 1002 of method 1000, one or more consecutive input video frames may be encoded. In one or more implementations, the frames may be provided by an image sensor (e.g., CCD, CMOS device, and/or APS, photodiode arrays, and/or other image sensors). In some implementations, the input may comprise a pixel stream downloaded from a file, such as a stream of two-dimensional matrices of red, green, and blue (RGB) values (e.g., refreshed at a 25 Hz or other suitable frame rate). It will be appreciated by those skilled in the art when given this disclosure that the above-referenced image parameters are merely exemplary, and many other image representations (e.g., bitmap, luminance-chrominance YUV, YCbCr, CMYK, grayscale, and/or other image representations) may be applicable to and useful with the various implementations. Data frames corresponding to other (non-visual) signal modalities such as sonograms, IR, radar, or tomography images may be compatible with the processing methodology of the disclosure, and/or other configurations. The frames may form real-time (live) video. In one or more implementations, the encoding may comprise operations performed in accordance with any applicable encoding standard comprising motion estimation between successive frames (e.g., H.263, H.264, and/or other).
At operation 1004 encoded video may be parsed in order to obtain motion information. In some implementations, the motion information may comprise a plurality of motion vectors and their locations as associated with one or more macroblocks within the encoded frame (e.g., the vector 916 of macroblock 914 in FIG. 9A).
At operation 1006 a salient feature may be determined using the motion information. In one or more implementations, the feature determination may be based on analysis of a motion spatial map within a given frame (e.g., the motion map comprising the areas 930, 940 in FIG. 9C). In one or more implementations, the feature determination may be configured based on analysis of motion temporal characteristics (e.g., persistence of motion features at a given location over multiple frames, comparing motion at a given location between two or more frames, and/or other).
FIG. 11 illustrates a method of data processing useful for determining features, in accordance with one or more implementations.
At operation 1102 live video may be obtained during execution of a task. In some implementations of robotic vehicle navigation, the video may be obtained with a video camera disposed on the vehicle. The video stream may be encoded using any applicable standard comprising motion estimation operation (e.g., H.263, H.264, and/or other).
At operation 1104 motion information may be determined from the encoded video stream. In some implementations, the encoded video stream may be parsed in order to obtain motion information. In some implementations, the motion information may comprise a plurality of motion vectors and their locations as associated with one or more macroblocks within the encoded frame (e.g., the vector 916 of macroblock 914 in FIG. 9A).
At operation 1106 a location of an object within a video frame may be determined using the motion information obtained at operation 1104. In one or more implementations, the location determination may be based on temporal and/or spatial persistence (coherence) of motion over a given area and/or over several frames. By way of an illustration, occurrence in a given frame of a plurality of macroblocks characterized by motion vectors within a given margin of one another (e.g., 5-20% in one implementation) may indicate a moving object.
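The coherence criterion mentioned above may be sketched as follows: a group of macroblocks is treated as a single moving object when their motion vectors agree with the group mean within a relative margin (20% is used here purely for illustration, and the vector values are made up).

/* Check whether a group of macroblock motion vectors is coherent enough to
 * be treated as one moving object. */
#include <math.h>
#include <stdio.h>

int main(void) {
    double dx[] = {4.0, 4.2, 3.9, 4.1}, dy[] = {-1.0, -1.1, -0.9, -1.0};
    int n = 4;

    double mx = 0, my = 0;
    for (int i = 0; i < n; i++) { mx += dx[i]; my += dy[i]; }
    mx /= n; my /= n;
    double mean_mag = sqrt(mx * mx + my * my);

    int coherent = 1;
    for (int i = 0; i < n; i++) {
        double dev = sqrt((dx[i] - mx) * (dx[i] - mx) + (dy[i] - my) * (dy[i] - my));
        if (dev > 0.20 * mean_mag) { coherent = 0; break; }   /* 20% margin */
    }
    printf("macroblock group %s a single moving object\n",
           coherent ? "looks like" : "does not look like");
    return 0;
}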
At operation 1108 the object associated with the location determined at operation 1106 may be related to a task action. Based on the action determination, a control signal may be provided. In some implementations, the control signal provision may be configured based on operation of an adaptive predictor, e.g., such as described in U.S. patent application Ser. No. 13/842,530, entitled "ADAPTIVE PREDICTOR APPARATUS AND METHODS", filed on Mar. 15, 2013, incorporated supra.
At operation 1110, the action may be executed. By way of an illustration, the object may comprise a ball 174 in FIG. 1A, the motion information may indicate the ball moving to the left of the vehicle, the task may comprise target pursuit, and the action may comprise a left turn by the vehicle.
FIG. 12 is a logical flow diagram illustrating a method of executing an action configured based on a gesture detected using motion information, in accordance with one or more implementations.
At operation 1202, motion information may be determined using one or more encoded frames in a sequence of frames. The motion information may comprise motion vectors due to gestures of a human (e.g., the vectors 936, 946 in FIG. 9C). In some implementations, the frame sequence may be configured using an interleaving and/or combination operation, e.g., such as the exemplary concatenation shown and described with respect to FIGS. 3A-3D.
At operation 1204 a spatio-temporal distribution of the motion information may be determined. In some implementations of spatial motion distribution, the motion map may comprise one or more areas of macroblocks (e.g., the area 910 in FIG. 9A and/or 940 in FIG. 9C) characterized by similar motion vector components (e.g., the components of vector 946 in FIG. 9C). In some implementations, the temporal motion distribution may be determined by analyzing motion associated with a portion of the frame (e.g., the area 940 in FIG. 9C) over a plurality of consecutive frames.
At operation 1206 disparity-related information (such as, e.g., a spatio-temporal distribution map of disparity) may be determined based on the motion in the encoded frame sequence. In some implementations (wherein the frame sequence used for encoding may comprise interleaved and/or concatenated frames from left/right cameras (e.g., the cameras 106, 108 in FIG. 1A)), the motion information in the encoded frame sequence may be due to disparity between left/right views of a scene. The disparity determination of operation 1206 may be effectuated based on interleaving and/or combination of images from multiple sources using any applicable methodologies, such as those described herein (e.g., with respect to FIGS. 3A-3D and/or FIGS. 6A-6B).
At operation 1207 a spatio-temporal distribution map of distance may be determined based on the disparity information obtained at operation 1206. In some implementations, the distance determination may be effectuated using Eqn. 1.
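By way of a non-limiting illustration, and again assuming Eqn. 1 has the pinhole-stereo form Z = f*B/d, the following sketch converts a per-macroblock disparity map (e.g., the dx components recovered from an interleaved left/right pair) into a distance map; the disparity values, focal length, and baseline are assumed.

/* Convert a small per-macroblock disparity map into distances, assuming
 * Z = f * B / d; zero disparity is reported as "inf" (at infinity or
 * unreliable). */
#include <stdio.h>
#include <math.h>

int main(void) {
    const double f = 600.0;   /* focal length in pixels (assumed) */
    const double B = 0.10;    /* camera baseline in meters (assumed) */
    enum { COLS = 4, ROWS = 2 };
    double disparity[ROWS][COLS] = {{12, 12, 30, 30}, {12, 6, 30, 0}};

    for (int r = 0; r < ROWS; r++) {
        for (int c = 0; c < COLS; c++) {
            double d = fabs(disparity[r][c]);
            if (d < 1e-6) printf("  inf ");
            else          printf("%5.2f ", f * B / d);   /* distance in meters */
        }
        printf("\n");
    }
    return 0;
}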
At operation 1208 a gesture may be determined based on a spatio-temporal pattern that may occur within the motion distribution determined at operation 1204 and/or the distance distribution determined at operation 1207. By way of an illustration, a pattern of alternating motion vectors within a rectangular area of the frame may correspond to a crisscross motion of the arms by the user, indicating an alert (e.g., a stop) command to the robotic device. In some implementations, motion information for a given frame may be characterized by a plurality of different motion vectors due to, e.g., motion of different objects, camera pan/zoom operation, and/or video acquisition from a moving platform. By way of an illustration of operation of the robotic vehicle 160 of FIG. 1B, the video signal obtained by the camera 166 may comprise a representation of a human making gestures superimposed on a moving background. A distance to one part of the human body (e.g., the head and/or the torso) may be used in order to determine a stop hand gesture, e.g., as described below with respect to FIG. 16C.
At operation 1210, an action may be executed in accordance with the gesture determined at operation 1208. For example, upon detecting the crisscross arm motion and/or the hand 1614 position illustrated in FIG. 16C, the robotic device may stop trajectory navigation.
The motion-based gesture detection methodology described herein may be employed for operation of a robotic appliance and/or remotely operated device. In some implementations, gesture detection may be effectuated by a spoofing controller, e.g., such as described in U.S. patent application Ser. No. 14/244,892, entitled "SPOOFING REMOTE CONTROL APPARATUS AND METHODS", filed on Apr. 3, 2014, incorporated herein by reference in its entirety. The spoofing controller may be trained to develop associations between the detected gestures and one or more remote control commands (e.g., of an IR remote operating a home appliance such as a TV). The developed associations may enable the spoofing controller to operate the TV in accordance with gestures of a user in lieu of the remote controller commands.
A commercially available off-the-shelf hardware video encoder (e.g., 1006 in FIG. 10) may be used to provide a compressed video stream. Typically, hardware encoders are utilized to reduce the video data rate in order to reduce the storage and/or bandwidth load associated with manipulation of video information. The motion extraction methodology described herein may advantageously enable determination of motion information by an application device using compressed video that is already available, albeit for other purposes (e.g., reduction in storage and/or bandwidth). Use of available compressed video comprising motion estimation data (e.g., MPEG-4) may substantially reduce the computational load associated with motion determination, compared to existing techniques such as optic flow and/or motion estimation algorithms such as, but not limited to, block-matching algorithms and phase correlation, as well as determining locations of features and estimating the motion of those features.
FIG. 13 illustrates a method of determining a depth of visual scene using encoded interleaved stereo image information, in accordance with one or more implementations.
At operation 1302 of method 1300, a monocular frame configuration may be obtained using a stereo image of a visual scene. In some implementations, the monocular frame configuration may comprise an interleaved frame sequence 309, 322 described above with respect to FIGS. 3A-3B and/or an interleaved concatenated frame sequence described above with respect to FIG. 3C. In one or more implementations of multi-camera image acquisition, the monocular frame configuration may comprise an interleaved frame sequence such as shown in FIGS. 6A and/or 6B.
At operation 1304 the monocular frame sequence may be encoded using a motion estimation encoder. In some implementations, the encoding may be performed by a specialized video encoder comprising a motion estimation operation (e.g., MPEG-4, H.264, or other).
At operation 1306 the depth of the visual scene may be determined using the motion information of the encoded data obtained at operation 1304. In one or more implementations, the motion information may be obtained by parsing the compressed video stream (e.g., 422 in FIG. 4A). By way of an illustration, the motion information may comprise a macroblock location L (e.g., an index), and the x-component and y-component of motion of pixels associated with the macroblock location L. The extracted motion information may be used for disparity and/or distance determination. Various uses of the depth information are contemplated, such as, for example, object detection, object localization, distance estimation, trajectory planning, gesture detection, and/or other.
FIG. 14 illustrates a method of determining distance to objects using motion of interleaved concatenated image sequence, in accordance with one or more implementations.
At operation 1401 of method 1400, frames from multiple cameras may be combined to produce, e.g., a concatenated image. In one or more implementations, the concatenated image may be configured in accordance with the approach described above with respect to FIGS. 3C-3D and/or Tables 1-7.
At operation 1402 of method 1400, concatenated frames determined at operation 1401 may be interleaved to produce an interleaved frame sequence. In one or more implementations the interleaved frame sequence may comprise a frame sequence such as shown in FIGS. 3A-3B, FIGS. 6A and/or 6B.
At operation 1404 the interleaved concatenated frame sequence may be encoded using a motion estimation encoder. In some implementations, the encoding may be performed by a specialized video encoder comprising a motion estimation operation (e.g., MPEG-4, H.264, or other).
At operation 1406 an object may be detected based on a spatio-temporal pattern within the motion information of the encoded data. In one or more implementations, the motion information may be obtained by parsing the compressed video stream (e.g., 422 in FIG. 4A comprising, e.g., the encoded frames 356, 360, 364 shown and described with respect to FIG. 3B). Object detection may be effectuated using any applicable methodologies, including those described above with respect to FIGS. 9A-9C.
At operation 1408, distance to the object identified at operation 1406 may be determined. The distance determination may be configured based on the disparity data that may be obtained from the motion information of the encoded data (e.g., the frames 332, 358, 362 in FIG. 3B). Various uses of the distance information may be contemplated such as, for example, object detection, trajectory planning, gesture detection, obstacle avoidance, and/or other.
FIG. 15 illustrates a method of executing an action configured based on detecting an object in motion information, in accordance with one or more implementations.
At operation 1502 of method 1500 an interleaved and/or concatenated frame sequence may be encoded using a motion estimation encoder. The concatenation and/or interleaving may be configured based on frames from multiple sources, and comprise for example the operations described above with respect to FIGS. 3A-3D, and/or 6A, 6B. In some implementations, the encoding may be performed by a specialized video encoder comprising a motion estimation operation (e.g., MPEG-4, H.264, or other).
At operation 1504 distance to the object may be determined using disparity determined from the motion information of the encoded data. The distance determination may be configured based on the disparity data that may be obtained from the motion information of the encoded data (e.g., the frames 332, 358, 362 in FIG. 3B).
At operation 1506 an action may be associated with the object parameters determined at operation 1504. In some implementations, the object parameters may comprise object features (e.g., shape, color, identity), location, distance, speed, and/or other. By way of an illustration, the object may comprise a ball 112 in FIG. 1A rolling across the path of the vehicle 100. The distance to the ball 112 and the ball motion data may indicate that the vehicle 100 may collide with the ball 112. The action may comprise a turn left/right and/or reducing the speed of the vehicle 100.
At operation 1510 the action may be executed. Action execution may be configured based on output of an adaptive predictor apparatus configured to predict control signal for the robotic vehicle 100 in FIG. 1. In some implementations, the predictor may be operated in accordance with a learning process such as described, for example, in U.S. patent application Ser. No. 13/842,530, entitled “ADAPTIVE PREDICTOR APPARATUS AND METHODS”, filed on Mar. 15, 2013, the foregoing being incorporated supra.
FIGS. 16A-16D illustrate gestures of a human operator used for communicating control indications to a robotic device comprising distance determination apparatus described herein, in accordance with one or more implementations.
FIG. 16A is a top view of a user and may illustrate a base posture of the user. FIG. 16B may depict user gestures 1600 communicating a right turn action to a robotic device (e.g., the vehicle 100 in FIG. 1A). The robotic device 100 may utilize stereo images provided by the cameras 106, 108 in order to detect the position of the user arms 1604, 1608. In some implementations, the arm 1604, 1608 position may be determined using the distance determination methodology configured based on encoding interleaved left/right portions of the stereo imagery. By way of an illustration, the gesture in FIG. 16B may be determined based on a comparison of the distance between the robot and the user arms in positions 1604, 1608 in FIG. 16B relative to the user arms in position 1624 in FIG. 16A. In one or more implementations, the gesture in FIG. 16B may be determined based on a comparison of the distance between the robot and the user arms in positions 1604, 1608 relative to the user head 1602 in FIG. 16B.
FIG. 16C is a side view of the user and may depict a user gesture 1610 communicating a stop action to a robotic device (e.g., the vehicle 100 in FIG. 1A). The robotic device 100 may utilize stereo images provided by the cameras 106, 108 in order to detect the position of the user arms, head 1642, 1612, and/or hands 1614, 1644. In some implementations, the hand 1614, 1644 position may be determined using the distance determination methodology configured based on encoding interleaved left/right portions of the stereo imagery. By way of an illustration, the gesture in FIG. 16C may be obtained based on a comparison of the distance between the robot and the user hand in position 1614 in FIG. 16C relative to the user hand in position 1644 in FIG. 16D. In one or more implementations, the gesture in FIG. 16C may be determined based on a comparison of the distance between the robot and the user hand in position 1614 relative to the user head 1612 in FIG. 16C. In some implementations (not shown), the user may communicate an indication to the robotic device by, e.g., appearing in view of the camera. By way of an illustration, the user stepping in front of the vehicle may indicate a stop action to the vehicle.
The present disclosure also contemplates a computerized controller apparatus for implementing, inter alia, motion and/or distance determination methodology in accordance with one or more implementations.
The controller apparatus (not shown) may comprise a processing module configured to receive sensory input from sensory block (e.g., cameras 106, 108 in FIG. 1A). In some implementations, the sensory module may comprise audio input/output portion. The processing module may be configured to implement signal processing functionality (e.g., distance estimation, object detection based on motion maps, and/or other).
The controller apparatus may comprise memory configured to store executable instructions (e.g., operating system and/or application code, raw and/or processed data such as raw image fames and/or object views, teaching input, information related to one or more detected objects, and/or other information).
In some implementations, the processing module may interface with one or more of the mechanical, sensory, electrical, power components, communications interface, and/or other components via driver interfaces, software abstraction layers, and/or other interfacing techniques. Thus, additional processing and memory capacity may be used to support these processes. However, it will be appreciated that these components may be fully controlled by the processing module. The memory and processing capacity may aid in processing code management for the controller apparatus (e.g. loading, replacement, initial startup and/or other operations). Consistent with the present disclosure, the various components of the device may be remotely disposed from one another, and/or aggregated. For example, the instructions operating the haptic learning process may be executed on a server apparatus that may control the mechanical components via network or radio connection. In some implementations, multiple mechanical, sensory, electrical units, and/or other components may be controlled by a single robotic controller via network/radio connectivity.
The mechanical components of the controller apparatus may include virtually any type of device capable of motion and/or performance of a desired function or task. Examples of such devices may include one or more of motors, servos, pumps, hydraulics, pneumatics, stepper motors, rotational plates, micro-electro-mechanical devices (MEMS), electroactive polymers, shape memory alloy (SMA) activation, and/or other devices. The sensor devices may interface with the processing module, and/or enable physical interaction and/or manipulation of the device.
The sensory devices may enable the controller apparatus to accept stimulus from external entities. Examples of such external entities may include one or more of video, audio, haptic, capacitive, radio, vibrational, ultrasonic, infrared, motion, and temperature sensors, radar, lidar, and/or sonar, and/or other external entities. The module may implement logic configured to process user commands (e.g., gestures) and/or provide responses and/or acknowledgment to the user.
The electrical components may include virtually any electrical device for interaction and manipulation of the outside world. Examples of such electrical devices may include one or more of light/radiation generating devices (e.g. LEDs, IR sources, light bulbs, and/or other devices), audio devices, monitors/displays, switches, heaters, coolers, ultrasound transducers, lasers, and/or other electrical devices. These devices may enable a wide array of applications for the apparatus in industrial, hobbyist, building management, medical device, military/intelligence, and/or other fields.
The communications interface may include one or more connections to external computerized devices to allow for, inter alia, management of the controller apparatus. The connections may include one or more of the wireless or wireline interfaces discussed above, and may include customized or proprietary connections for specific applications. The communications interface may be configured to receive sensory input from an external camera, a user interface (e.g., a headset microphone, a button, a touchpad, and/or other user interface), and/or provide sensory output (e.g., voice commands to a headset, visual feedback, and/or other sensory output).
The power system may be tailored to the needs of the application of the device. For example, for a small hobbyist robot or aid device, a wireless power solution (e.g. battery, solar cell, inductive (contactless) power source, rectification, and/or other wireless power solution) may be appropriate. However, for building management applications, battery backup/direct wall power may be superior, in some implementations. In addition, in some implementations, the power system may be adaptable with respect to the training of the apparatus 1800. Thus, the controller apparatus may improve its efficiency (to include power consumption efficiency) through learned management techniques specifically tailored to the tasks performed by the controller apparatus.
Various aspects of the disclosure may advantageously be applied to design and operation of apparatus configured to process sensory data. Implementations of the principles of the disclosure may be applicable to detecting objects by a wide variety of stationary and portable video devices, such as, for example, smart phones, portable communication devices, notebook, netbook and tablet computers, surveillance camera systems, and practically any other computerized device configured to process vision data. The motion information may be used as a proxy for optic flow (estimated motion (dx,dy) on a grid across the frame of the video). Use of available hardware encoders to obtain motion data may reduce energy use by portable devices, enable motion and/or distance detection on higher resolution video (e.g., resolutions greater than 320×240), improve motion detection resolution in order to, e.g., detect gestures, compared to optic flow detection techniques.
Interleaving of frames from multiple spatially displaced cameras may enable determination of binocular disparity between pairs of camera images using motion estimation. Use of an off-the-shelf commercially available hardware video encoder (e.g., MPEG-4, H.265, and/or other encoder) comprising motion estimation may substantially reduce the cost, size, and energy use of a motion estimation component, compared to use of optical flow for determining motion. Encoded video may be parsed to obtain motion information. Motion corresponding to a pair of frames from displaced cameras may be interpreted as a measure of disparity. The disparity may be utilized in order to determine the depth of the visual scene and/or distance to objects within the visual scene. By way of an illustration, embodying a motion determination component and/or a distance determination component of the disclosure in a robotic vehicle (e.g., 100, 1700 in FIGS. 1A, 17) may extend the duration of autonomous operation of the robotic apparatus due, in part, to the lower energy use that may be associated with motion/distance detection based on video encoded using a hardware encoder, as compared to using video processing in a CPU (e.g., optical flow and/or pixel block matching). The increased autonomy may be characterized by the robotic device being capable of performing a given action (e.g., a flight route and/or surveillance route) an additional number of times without recharging, and/or being capable of completing longer routes on a given charge, as compared to prior art solutions. In one or more implementations, the reduced energy use may be leveraged for producing a smaller, lighter, and/or less costly robot that may be capable of performing the action (e.g., navigating a given route) compared to a comparable device of the prior art.
An autonomous robotic device comprising a hardware video encoder may be capable of performing motion estimation for obstacle avoidance, tracking of moving objects, platform stabilization, and/or enabling the robot to learn its own self-motion. By way of an illustration, a robotic device may be configured to follow a target (e.g., a person, a ball 112 in FIG. 1A, and/or other object) at a distance (e.g., 110 in FIG. 1A). In one or more implementations of tracking, the robotic device may be configured to maintain the distance from the target within a range (e.g., not to exceed 50 meters and not to approach closer than 2 meters when following a vehicle, and/or not to exceed 5 meters and not to approach closer than 0.25 meters when following the ball 112 in FIG. 1A). In one or more implementations of object tracking, approach, avoidance, and/or other behaviors, the controller of the robotic device may be configured to determine the distance to the target and the motion of the target using, e.g., the alternating interleaving methodology shown and described with respect to FIG. 3B. In some implementations, the distance may be determined using the interleaving methodology, and the motion may be determined using the video encoding methodology.
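A toy sketch of the range-keeping behavior described above is given below: the forward speed command is chosen so that the measured distance to the target stays within [d_min, d_max]; the bounds, gain, and distance readings are illustrative and not values from the disclosure.

/* Command forward speed to keep the measured target distance inside a band. */
#include <stdio.h>

static double follow_speed(double distance, double d_min, double d_max) {
    const double gain = 0.5;                     /* m/s of command per meter  */
    if (distance > d_max) return  gain * (distance - d_max);  /* catch up     */
    if (distance < d_min) return -gain * (d_min - distance);  /* back off     */
    return 0.0;                                  /* inside the band: hold     */
}

int main(void) {
    double readings[] = {6.0, 4.0, 1.5};         /* distances from disparity  */
    for (int i = 0; i < 3; i++)
        printf("distance %.1f m -> speed command %+.2f m/s\n",
               readings[i], follow_speed(readings[i], 2.0, 5.0));
    return 0;
}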
FIG. 17 illustrates use of the distance determination methodology by an unmanned robotic apparatus configured for autonomous navigation, in accordance with one or more implementations. The unmanned autonomous vehicle (AUV) 1700 may comprise a plurality of cameras 1702 disposed spatially from one another. Video streams provided by the cameras 1702 may be interleaved and/or concatenated and encoded using any applicable methodology described herein (e.g., with respect to FIGS. 3A-3C, 6A-6B, and/or 9A-9C). The encoding may enable the controller of the vehicle 1700 (e.g., 700, 800 in FIGS. 7-8) to determine the distance 1706 between the vehicle 1700 and the landing location 1712, and/or the distance 1718 to obstacles (e.g., 1710), using, e.g., the output 432, 458, 472 described above with respect to FIGS. 4A-4C. The vehicle controller may utilize the distance and/or vehicle motion information to control the actuators 1704 when landing, during take-off, and/or when navigating around obstacles.
In some implementations, the motion and/or distance detection methodology described herein may be employed for detecting salient objects in video input. The saliency of an item (such as an object, a person, a pixel, and/or other) may be described by a characteristic by which the item stands out relative to its neighbors. For example, a salient vehicle may comprise a vehicle that is moving differently (e.g., going slower/faster than the rest of the traffic, weaving from lane to lane) compared to the rest of the traffic. A salient object for target approach may comprise a stationary and/or moving ball on a moving background due to self-motion by the vehicle.
Implementations of the principles of the disclosure may be further applicable to a wide assortment of applications including computer-human interaction (e.g., recognition of gestures, voice, posture, face, and/or other interactions), controlling processes (e.g., processes associated with an industrial robot, autonomous and other vehicles, and/or other processes), augmented reality applications, access control (e.g., opening a door based on a gesture, opening an access way based on detection of an authorized person), and detecting events (e.g., for visual surveillance, people or animal counting, and/or tracking).
A video processing system of the disclosure may be implemented in a variety of ways such as, for example, a software library, an IP core configured for implementation in a programmable logic device (e.g., FPGA), an ASIC, a remote server, comprising a computer readable apparatus storing computer executable instructions configured to perform feature detection. Myriad other applications exist that will be recognized by those of ordinary skill given the present disclosure.
Although the system(s) and/or method(s) of this disclosure have been described in detail for the purpose of illustration based on what is currently considered to be the most practical and preferred implementations, it is to be understood that such detail is solely for that purpose and that the disclosure is not limited to the disclosed implementations, but, on the contrary, is intended to cover modifications and equivalent arrangements that are within the spirit and scope of the appended claims. For example, it is to be understood that the present disclosure contemplates that, to the extent possible, one or more features of any implementation can be combined with one or more features of any other implementation.
EXHIBIT I - EXEMPLARY COMPUTER CODE
© Copyright 2014 Brain Corporation. All rights reserved
void ff_h263_update_motion_val(MpegEncContext * s){
    const int mb_xy = s->mb_y * s->mb_stride + s->mb_x;
    //FIXME a lot of that is only needed for !low_delay
    const int wrap = s->b8_stride;
    const int xy = s->block_index[0];
    int motion_x=0, motion_y=0;
    const int block_size= 8>>s->avctx->lowres;

    s->current_picture.mbskip_table[mb_xy]= s->mb_skipped;

    if(s->mv_type != MV_TYPE_8X8){
        if (s->mb_intra) {
            motion_x = 0;
            motion_y = 0;
        } else if (s->mv_type == MV_TYPE_16X16) {
            motion_x = s->mv[0][0][0];
            motion_y = s->mv[0][0][1];
        } else /*if (s->mv_type == MV_TYPE_FIELD)*/ {
            int i;
            motion_x = s->mv[0][0][0] + s->mv[0][1][0];
            motion_y = s->mv[0][0][1] + s->mv[0][1][1];
            motion_x = (motion_x>>1) | (motion_x&1);
            for(i=0; i<2; i++){
                s->p_field_mv_table[i][0][mb_xy][0]= s->mv[0][i][0];
                s->p_field_mv_table[i][0][mb_xy][1]= s->mv[0][i][1];
            }
            s->current_picture.ref_index[0][4*mb_xy    ]=
            s->current_picture.ref_index[0][4*mb_xy + 1]= s->field_select[0][0];
            s->current_picture.ref_index[0][4*mb_xy + 2]=
            s->current_picture.ref_index[0][4*mb_xy + 3]= s->field_select[0][1];
        }

        /* no update if 8X8 because it has been done during parsing */
        s->current_picture.motion_val[0][xy][0] = motion_x;
        s->current_picture.motion_val[0][xy][1] = motion_y;
        s->current_picture.motion_val[0][xy + 1][0] = motion_x;
        s->current_picture.motion_val[0][xy + 1][1] = motion_y;
        s->current_picture.motion_val[0][xy + wrap][0] = motion_x;
        s->current_picture.motion_val[0][xy + wrap][1] = motion_y;
        s->current_picture.motion_val[0][xy + 1 + wrap][0] = motion_x;
        s->current_picture.motion_val[0][xy + 1 + wrap][1] = motion_y;

        if(s->avctx->debug_mv) {
            for (int i=0;i<2*block_size;i++) memset(s->dest[0] + i * s->linesize, 120 + s->current_picture.key_frame * 5, 2*block_size);
            for (int i=0;i<block_size;i++) memset(s->dest[1] + i * s->uvlinesize, 128 + motion_x, block_size);
            for (int i=0;i<block_size;i++) memset(s->dest[2] + i * s->uvlinesize, 128 + motion_y, block_size);
        }
    } else {
        if(s->avctx->debug_mv) {
            for (int i=0;i<block_size*2;i++) memset(s->dest[0] + i * s->linesize, 130, block_size*2);
            for (int ywrap=0, y=0;y<2;ywrap+=wrap,y++) {
                for (int x=0;x<2;x++) {
                    motion_x = s->current_picture.motion_val[0][xy + x + ywrap][0];
                    motion_y = s->current_picture.motion_val[0][xy + x + ywrap][1];
                    for (int i=0;i<block_size/2;i++) memset(s->dest[1] + x*block_size/2 + (i + y*block_size/2) * s->uvlinesize, 128 + motion_x, block_size/2);
                    for (int i=0;i<block_size/2;i++) memset(s->dest[2] + x*block_size/2 + (i + y*block_size/2) * s->uvlinesize, 128 + motion_y, block_size/2);
                }
            }
        }
    }

    if(s->encoding){ //FIXME encoding MUST be cleaned up
        if (s->mv_type == MV_TYPE_8X8)
            s->current_picture.mb_type[mb_xy]= MB_TYPE_L0 | MB_TYPE_8x8;
        else if(s->mb_intra)
            s->current_picture.mb_type[mb_xy]= MB_TYPE_INTRA;
        else
            s->current_picture.mb_type[mb_xy]= MB_TYPE_L0 | MB_TYPE_16x16;
    }
}

Claims (23)

What is claimed:
1. A non-transitory computer-readable storage medium having computer readable instructions stored thereon, that when executed by at least one processor causes the at least one processor to,
produce a sequence of composite images, each one of a respective sequence of composite images comprising a first image from a first sequence of images and a second image from a second sequence of images, the first image and the second image being joined adjacent to each other; and evaluate the sequence of composite images to determine a depth parameter of a scene, the evaluating of the sequence of composite images comprising encoding the sequence of composite images into an encoded frame sequence, the encoded frame sequence comprising disparity estimates and motion estimates obtained based on (i) a first set of images from the first sequence of images occurring at one or more time frames and (ii) a second set of images from the second sequence of images occurring at the one or more time frames,
wherein the sequence of composite images includes a first composite image and a second composite image, the first composite image is based on a combination of the second image from the second sequence of images and a plurality of replicas of the first image from the first sequence of images, the second composite image subsequent to the first composite image within the sequence of composite images is based on a combination of a third image from the second sequence of images and a plurality of replicas of a fourth image from the first sequence of images, the third image from the second sequence of images is acquired contemporaneously with the first image from the first sequence of images and subsequent to the second image from the second sequence of images, and the fourth image from the first sequence of images is acquired subsequent to the first image from the first sequence of images.
2. The computer readable storage medium of claim 1, wherein the at least one processor is further configured to execute the computer readable instructions to,
generate the first sequence of images based on input received of the scene by a first image sensor, and
generate the second sequence of images based on input received of the scene by a second image sensor, respectively, the first image sensor is spatially separated from the second image sensor such that a binocular disparity in the first sequence of images and the second sequence of images is achieved, the binocular disparity corresponds to the depth parameter.
3. The computer readable storage medium of claim 2, wherein, the first image from the first sequence of images comprises a first set of plurality of pixels, the second image from the second sequence of images comprises a second set of plurality of pixels, and the at least one processor is further configured to execute the computer readable instructions to encode the sequence of composite images using a motion estimation encoder.
4. The computer readable storage medium of claim 3, wherein, the first composite image of the sequence of composite images comprises the first image from the first sequence of images; and the second composite image occurring subsequent to the first composite image within the sequence of composite images is based on a combination of a fifth image from the first sequence of images with a sixth image from the second sequence of images, the first image from the first sequence of images is acquired contemporaneously with the second image from the second sequence of images and prior to the fifth image from the first sequence of images.
5. The computer readable storage medium of claim 3, wherein: the motion estimation encoder is configured to, based on encoding the first composite image and the second composite image of the sequence of composite images, determine: a first displacement of at least one pixel in the first set of plurality of pixels within the first image from the first sequence of images relative to the second set of plurality of pixels within the second image from the second sequence of images; and a second displacement of at least one pixel in the first set of plurality of pixels within the fourth image from the first sequence of images relative to the second set of plurality of pixels within the first image from the first sequence of images; the first displacement of pixels corresponds to determination of the disparity; and the second displacement of pixels corresponds to determination of motion associated with one or more objects in the scene.
6. The computer readable storage medium of claim 3, wherein
the first composite image of the sequence of composite images is based on a combination of the first image from the first sequence of images and a fifth transformed image from the first sequence of images, the fifth transformed image includes a reduced pixel count compared to a pixel count in the first image from the first sequence of images.
7. The computer readable storage medium of claim 4, wherein, the motion estimation encoder is configured to determine a first displacement of at least one pixel in the first set of plurality of pixels within the first image from the first sequence of images relative to the second set of plurality of pixels within the second image from the second sequence of images; and the binocular disparity is determined based on the first displacement.
8. The computer readable storage medium of claim 4, wherein, the first set of plurality of pixels within the first image from the first sequence of images and the second set of plurality of pixels within the second image from the second sequence of images correspond to at least one object within the scene; and the at least one processor is further configured to execute the computer readable instructions to determine distance to the object based on the depth parameter.
9. The computer readable storage medium of claim 8,
wherein, the motion estimation encoder is further configured to determine a second displacement of a first pixel in the first set of plurality of pixels within the first image from the first sequence of images and a second pixel in the first set of plurality of pixels within the fourth image from the first sequence of images; and
the second displacement corresponds to information related to motion of the at least one object.
10. An image processing system, comprising: an input interface configured to receive a stereo representation of a visual scene, the stereo representation comprising a first portion and a second portion; a logic component in communication with the input interface and configured to: arrange the first portion of the stereo representation with the second portion of the stereo representation into a concatenated frame; and form a sequence of concatenated frames by arranging first portions of the stereo representation and second portions of the stereo representation within a first concatenated frame in an alternate order relative to a preceding concatenated frame within the sequence, the first concatenated frame comprising a different size from either the first portion or the second portion of the stereo representation; wherein the sequence of concatenated frames further includes a first composite image and a second composite image; the first composite image of the sequence of concatenated frames is based on a combination of an image from the second sequence of images and a plurality of replicas of an image from the first sequence of images; the second composite image subsequent to the first composite image within the sequence of concatenated frames, the second composite image is based on a combination of another image from the second sequence of images and a plurality of replicas of another image from the first sequence of images;
the another image from the second sequence of images is acquired contemporaneously with the image from the first sequence of images, and subsequent to the image from the second sequence of images; and the another image from the first sequence of images is acquired subsequent to the image from the first sequence of images;
a video encoder in data communication with the logic component and configured to encode the sequence of concatenated frames to produce a sequence of compressed frames; and a processor in data communication with the video encoder and configured to execute computer readable instructions to obtain motion information based on an evaluation of the compressed frames.
11. The system of claim 10, wherein,
the sequence of compressed frames comprises a keyframe characterized by an absence of the motion information; and the processor is further configured to execute the computer readable instructions to not utilize the keyframe during the evaluation of the compressed frames.
12. The system of claim 10, wherein,
the processor is further configured to execute computer readable instructions to determine a depth parameter associated with the visual scene based on the motion information; and the video encoder is configurable to produce the sequence of compressed frames at least at a frame rate selected between 1 and 100 frames per second inclusive; and the image processing system is embedded in an integrated circuit.
13. The system of claim 10, wherein, the video encoder is further configured to: estimate a displacement of a block of pixels within the first concatenated frame relative to a block of pixels within the preceding frame; and wherein the motion information is based on the displacement.
14. The system of claim 10, wherein,
individual ones of the first and the second portion each comprise a frame of pixels that are characterized by a first number of pixels; and
the arranging of the first portion with the second portion into the concatenated frame further comprises converting at least one of the first and the second portion into a converted frame that is characterized by a second number of pixels smaller than the first number of pixels.
15. An image processing system, comprising: computerized logic configured to:
receive a plurality of stereo representations, each of the plurality being representative of a corresponding visual scene and comprising a first portion and a second portion; combine a first portion of a given first stereo representation at a first time point with a second portion of the given first stereo representation into a first frame in a first relative arrangement;
combine a first portion of another stereo representation at a second time point with a second portion of the other stereo representation into a second frame in a second relative arrangement different from the first relative arrangement; and form a sequence comprising at least the first and second frames; produce a sequence of composite images, the sequence of composite images including a first composite image and a second composite image, wherein, the first composite image of the sequence of composite images is based on a combination of an image from the second sequence of images and a plurality of replicas of an image from the first sequence of images, the second composite image subsequent to the first composite image within the sequence of composite images, the second composite image is based on a combination of another image from the second sequence of images and a plurality of replicas of another image from the first sequence of images, the another image from the second sequence of images is acquired contemporaneously with the image from the first sequence of images, and subsequent to the image from the second sequence of images, and the another image from the first sequence of images is acquired subsequent to the image from the first sequence of images; and a video encoder in data communication with the computerized logic and configured to encode the sequence of frames to produce a sequence of encoded frames; and processing logic in data communication with the video encoder and configured to evaluate the sequence of encoded frames to determine motion information.
16. A method of determining motion information within a visual scene, the method comprising: producing a first composite frame and a second composite frame by combining images from a first plurality of images and a second plurality of images of the visual scene; producing an interleaved sequence of composite frames comprising the first and the second composite frames; and evaluating the interleaved sequence of composite frames to determine a stream of encoded frames comprising the motion information, the motion information comprising (i) information associated with a comparison of an image from the first plurality of images and an image from the second plurality of images, (ii) information associated with a comparison of two images from the first plurality of images, and (iii) information associated with a comparison of two images from the second plurality of images;
wherein individual images of the first and second pluralities of images are provided by first and second sensing apparatus, respectively, the second sensing apparatus being separated spatially from the first sensing apparatus, and wherein the sequence of composite images includes a first composite image and a second composite image, the first composite image of the sequence of composite images is based on a combination of an image from the second sequence of images and a plurality of replicas of an image from the first sequence of images, the second composite image subsequent to the first composite image within the sequence of composite images, the second composite image is based on a combination of another image from the second sequence of images and a plurality of replicas of another image from the first sequence of images, the another image from the second sequence of images is acquired contemporaneously with the image from the first sequence of images, and subsequent to the image from the second sequence of images, and the another image from the first sequence of images is acquired subsequent to the image from the first sequence of images.
17. The method of claim 16, wherein, the first composite frame is characterized by a first placement configuration of (i) the image from the first plurality of images, and (ii) the image from the second plurality of images; and the second composite frame is characterized by a second placement configuration of (i) the image from the first plurality of images and (ii) the image from the second plurality of images; wherein the second placement configuration is different from the first placement configuration.
18. The method of claim 17, further comprising evaluating the motion information to determine a distance to an object disposed within the visual scene; wherein the evaluating of the interleaved sequence of frames comprises encoding the interleaved sequence of frames using an encoder process comprising a motion estimation process configured to provide data related to motion of a pixel group within the second composite frame relative to a pixel group within the first composite frame.
19. The method of claim 18, wherein, the pixel group within the second composite frame corresponds to a first representation of the object; the pixel group within the first composite frame corresponds to a second representation of the object; and the data related to motion are based at least on a displacement of the first representation of the object relative to the second representation of the object.
20. The method of claim 18, wherein, the first composite frame comprises a first image from the first plurality of images, and a second image from the second plurality of images acquired contemporaneously with the first image from the first plurality of images; and the second composite frame comprises the first image from the first plurality of images and a third image of from the first plurality of images, the third image being acquired subsequent to the first image.
21. The method of claim 18, wherein, the first composite frame comprises a first image from the first plurality of images; the first composite frame further comprises a second image from the first plurality of images and a third image from the second plurality of images; the second image is acquired subsequent to the first image; and the third image is acquired contemporaneously with the first image.
22. The method of claim 19, wherein, the evaluating of the motion information comprises determining a binocular disparity between the image from the second plurality of images and the image from the first plurality of images, the binocular disparity being related to the distance of the object and the spatial separation; and the determination of the distance to the object is based at least on the determining of the binocular disparity.
23. The method of claim 22, wherein, individual images of the first and second pluralities of images comprise a plurality of pixels; the encoder process is selected from the group consisting of MPEG-4, H.262, H.263, H.264, and H.265 encoders; the first representation of the object corresponds to a first block of pixels within the image of the first plurality of images; the second representation of the object corresponds to a second block of pixels within the image of the second plurality of images; and the displacement is configured to relate a displacement of the first block of pixels relative to the second block of pixels.
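
Claims 18 and 22 recite determining the distance to an object from the binocular disparity and the spatial separation of the sensing apparatus. For a rectified stereo pair the conventional pinhole relation is Z = f*B/d, where f is the focal length in pixels, B the baseline between the sensors, and d the disparity in pixels. The routine below is a minimal sketch of that relation only; the focal length, baseline, and disparity values are assumptions chosen for illustration and are not taken from the claims.

#include <stdio.h>

/* Distance from binocular disparity for a rectified stereo pair:
 *   Z = f * B / d
 * with f the focal length in pixels, B the baseline (sensor separation)
 * in meters, and d the disparity in pixels.
 * Returns a negative value when the disparity is not usable. */
static double depth_from_disparity(double focal_px, double baseline_m,
                                   double disparity_px)
{
    if (disparity_px <= 0.0)
        return -1.0;   /* zero disparity corresponds to an object at infinity */
    return focal_px * baseline_m / disparity_px;
}

int main(void)
{
    /* Example values (assumptions): 700-pixel focal length, 10 cm baseline. */
    const double f = 700.0, B = 0.10;
    for (int d = 1; d <= 8; d *= 2)
        printf("disparity %d px -> distance %.2f m\n",
               d, depth_from_disparity(f, B, (double)d));
    return 0;
}

With the assumed 700-pixel focal length and 0.10 m baseline, a disparity of 8 pixels corresponds to an object roughly 8.75 m away, and the estimated distance grows as the disparity shrinks toward zero.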
US14/326,374 2014-07-08 2014-07-08 Apparatus and methods for distance estimation using stereo imagery Active 2036-08-06 US10057593B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US14/326,374 US10057593B2 (en) 2014-07-08 2014-07-08 Apparatus and methods for distance estimation using stereo imagery
US16/104,646 US10820009B2 (en) 2014-07-08 2018-08-17 Apparatus and methods for distance estimation using stereo imagery

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US14/326,374 US10057593B2 (en) 2014-07-08 2014-07-08 Apparatus and methods for distance estimation using stereo imagery

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/104,646 Continuation US10820009B2 (en) 2014-07-08 2018-08-17 Apparatus and methods for distance estimation using stereo imagery

Publications (2)

Publication Number Publication Date
US20160014426A1 US20160014426A1 (en) 2016-01-14
US10057593B2 true US10057593B2 (en) 2018-08-21

Family

ID=55068541

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/326,374 Active 2036-08-06 US10057593B2 (en) 2014-07-08 2014-07-08 Apparatus and methods for distance estimation using stereo imagery
US16/104,646 Active US10820009B2 (en) 2014-07-08 2018-08-17 Apparatus and methods for distance estimation using stereo imagery

Family Applications After (1)

Application Number Title Priority Date Filing Date
US16/104,646 Active US10820009B2 (en) 2014-07-08 2018-08-17 Apparatus and methods for distance estimation using stereo imagery

Country Status (1)

Country Link
US (2) US10057593B2 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190178631A1 (en) * 2014-05-22 2019-06-13 Brain Corporation Apparatus and methods for distance estimation using multiple image sensors
CN112203095A (en) * 2020-12-04 2021-01-08 腾讯科技(深圳)有限公司 Video motion estimation method, device, equipment and computer readable storage medium
US10986325B2 (en) * 2018-09-12 2021-04-20 Nvidia Corporation Scene flow estimation using shared features
US11514298B2 (en) * 2018-10-31 2022-11-29 International Business Machines Corporation High-frame-rate real-time multiscale spatiotemporal disparity on distributed low-power event-based neuromorphic hardware

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9566710B2 (en) 2011-06-02 2017-02-14 Brain Corporation Apparatus and methods for operating robotic devices using selective state space training
US9764468B2 (en) 2013-03-15 2017-09-19 Brain Corporation Adaptive predictor apparatus and methods
US9242372B2 (en) 2013-05-31 2016-01-26 Brain Corporation Adaptive robotic interface apparatus and methods
US9792546B2 (en) 2013-06-14 2017-10-17 Brain Corporation Hierarchical robotic controller apparatus and methods
US9314924B1 (en) 2013-06-14 2016-04-19 Brain Corporation Predictive robotic controller apparatus and methods
US9579789B2 (en) 2013-09-27 2017-02-28 Brain Corporation Apparatus and methods for training of robotic control arbitration
US9597797B2 (en) 2013-11-01 2017-03-21 Brain Corporation Apparatus and methods for haptic training of robots
US9463571B2 (en) 2013-11-01 2016-10-11 Brain Corporation Apparatus and methods for online training of robots
US9358685B2 (en) 2014-02-03 2016-06-07 Brain Corporation Apparatus and methods for control of robot actions based on corrective user inputs
US9630318B2 (en) 2014-10-02 2017-04-25 Brain Corporation Feature detection apparatus and methods for training of robotic navigation
KR102326062B1 (en) * 2014-11-12 2021-11-12 현대모비스 주식회사 System and method for avoiding obstacle for autonomous vehicle
US9717387B1 (en) 2015-02-26 2017-08-01 Brain Corporation Apparatus and methods for programming and training of robotic household appliances
EP3099030A1 (en) * 2015-05-26 2016-11-30 Thomson Licensing Method and device for encoding/decoding a packet comprising data representative of a haptic effect
US10277904B2 (en) * 2015-08-28 2019-04-30 Qualcomm Incorporated Channel line buffer data packing scheme for video codecs
CN105869166B (en) * 2016-03-29 2018-07-10 北方工业大学 A kind of human motion recognition method and system based on binocular vision
WO2018157460A1 (en) * 2017-03-03 2018-09-07 华为技术有限公司 Method and device for counting human motions
CN107545247B (en) * 2017-08-23 2020-05-12 北京伟景智能科技有限公司 Stereo cognition method based on binocular recognition
US20190187721A1 (en) * 2017-12-15 2019-06-20 Ankobot (Shanghai) Smart Technologies Co., Ltd. Control method and system, and mobile robot using the same
CN108076341B (en) * 2017-12-19 2019-10-25 武汉大学 In-orbit real-time digital image stabilization method and system is imaged in a kind of video satellite
EP3918437A4 (en) * 2019-01-31 2022-11-02 Brain Corporation Systems and methods for laser and imaging odometry for autonomous robots
US11346941B2 (en) * 2019-03-26 2022-05-31 Texas Instruments Incorporated Shaped ultrasonic transmission and echo processing with coding
US11050938B2 (en) 2019-07-03 2021-06-29 Gopro, Inc. Apparatus and methods for pre-processing and stabilization of captured image data

Citations (186)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5063603A (en) 1989-11-06 1991-11-05 David Sarnoff Research Center, Inc. Dynamic method for recognizing objects and image processing system therefor
JPH0487423A (en) 1990-07-31 1992-03-19 Toshiba Corp Decoding circuit
US5138447A (en) 1991-02-11 1992-08-11 General Instrument Corporation Method and apparatus for communicating compressed digital video signals using multiple processors
US5216752A (en) 1990-12-19 1993-06-01 Baylor College Of Medicine Interspike interval decoding neural network
US5272535A (en) 1991-06-13 1993-12-21 Loral Fairchild Corporation Image sensor with exposure control, selectable interlaced, pseudo interlaced or non-interlaced readout and video compression
US5355435A (en) 1992-05-18 1994-10-11 New Mexico State University Technology Transfer Corp. Asynchronous temporal neural processing element
US5638359A (en) 1992-12-14 1997-06-10 Nokia Telecommunications Oy Method for congestion management in a frame relay network and a node in a frame relay network
US5673367A (en) 1992-10-01 1997-09-30 Buckley; Theresa M. Method for neural network control of motion using real-time environmental feedback
RU2108612C1 (en) 1994-09-14 1998-04-10 Круглов Сергей Петрович Adaptive control system with identifier and implicit reference model
US5875108A (en) 1991-12-23 1999-02-23 Hoffberg; Steven M. Ergonomic man-machine interface incorporating adaptive pattern recognition based control system
US6009418A (en) 1996-05-02 1999-12-28 Cooper; David L. Method and apparatus for neural networking using semantic attractor architecture
US6014653A (en) 1996-01-26 2000-01-11 Thaler; Stephen L. Non-algorithmically implemented artificial neural networks and components thereof
US6035389A (en) 1998-08-11 2000-03-07 Intel Corporation Scheduling instructions with different latencies
US20020038294A1 (en) 2000-06-16 2002-03-28 Masakazu Matsugu Apparatus and method for detecting or recognizing pattern by employing a plurality of feature detecting elements
US6418424B1 (en) 1991-12-23 2002-07-09 Steven M. Hoffberg Ergonomic man-machine interface incorporating adaptive pattern recognition based control system
US6458157B1 (en) 1997-08-04 2002-10-01 Suaning Gregg Joergen Retinal stimulator
US20020176025A1 (en) 2001-03-05 2002-11-28 Chang-Su Kim Systems and methods for encoding redundant motion vectors in compressed video bitstreams
US6501794B1 (en) 2000-05-22 2002-12-31 Microsoft Corporate System and related methods for analyzing compressed media content
US6509854B1 (en) 1997-03-16 2003-01-21 Hitachi, Ltd. DA conversion circuit
US20030050903A1 (en) 1997-06-11 2003-03-13 Jim-Shih Liaw Dynamic synapse for signal processing in neural networks
US6545705B1 (en) 1998-04-10 2003-04-08 Lynx System Developers, Inc. Camera with object recognition/data output
US6545708B1 (en) 1997-07-11 2003-04-08 Sony Corporation Camera controlling device and method for predicted viewing
US6546291B2 (en) 2000-02-16 2003-04-08 Massachusetts Eye & Ear Infirmary Balance prosthesis
US6556610B1 (en) 2001-04-12 2003-04-29 E20 Communications, Inc. Semiconductor lasers
US6581046B1 (en) 1997-10-10 2003-06-17 Yeda Research And Development Co. Ltd. Neuronal phase-locked loops
US6625317B1 (en) 1995-09-12 2003-09-23 Art Gaffin Visual imaging system and method
US20030216919A1 (en) 2002-05-13 2003-11-20 Roushar Joseph C. Multi-dimensional method and apparatus for automated language interpretation
US6678590B1 (en) * 2000-10-17 2004-01-13 Bbnt Solutions Llc Vehicle navigation system with vision system preprocessor using MPEG encoder
US20040054964A1 (en) 1999-03-18 2004-03-18 Xerox Corporation. Methods and systems for real-time storyboarding with a web page and graphical user interface for automatic video parsing and browsing
US20040136439A1 (en) 2002-11-15 2004-07-15 Brandon Dewberry Methods and systems acquiring impulse signals
US20040170330A1 (en) 1998-08-12 2004-09-02 Pixonics, Inc. Video coding reconstruction apparatus and methods
US20040193670A1 (en) 2001-05-21 2004-09-30 Langan John D. Spatio-temporal filter and method
US20040233987A1 (en) 2003-05-21 2004-11-25 Porikli Fatih M. Method for segmenting 3D objects from compressed videos
US20050015351A1 (en) 2003-07-18 2005-01-20 Alex Nugent Nanotechnology neural network methods and systems
US20050036649A1 (en) 2001-08-23 2005-02-17 Jun Yokono Robot apparatus, face recognition method, and face recognition apparatus
US20050047647A1 (en) 2003-06-10 2005-03-03 Ueli Rutishauser System and method for attentional selection
US20050096539A1 (en) 2003-10-31 2005-05-05 Siemens Medical Solutions Usa, Inc. Intelligent ultrasound examination storage system
US20050283450A1 (en) 2004-06-11 2005-12-22 Masakazu Matsugu Information processing apparatus, information processing method, pattern recognition apparatus, and pattern recognition method
US7016783B2 (en) 2003-03-28 2006-03-21 Delphi Technologies, Inc. Collision avoidance with active steering and braking
US20060088191A1 (en) 2004-10-25 2006-04-27 Tong Zhang Video content understanding through real time video motion analysis
US20060094001A1 (en) 2002-11-29 2006-05-04 Torre Vicent E Method and device for image processing and learning with neuronal cultures
US20060127042A1 (en) 2004-12-09 2006-06-15 Park Sung W Structure of navigation information for video data recorded on a recording medium and recording and reproducing methods and apparatuses using the structure
US20060129728A1 (en) 2004-12-09 2006-06-15 Hampel Craig E Memory interface with workload adaptive encode/decode
US20060161218A1 (en) 2003-11-26 2006-07-20 Wicab, Inc. Systems and methods for treating traumatic brain injury
US20060188168A1 (en) 2005-02-22 2006-08-24 Sheraizin Vitaly S Enhancement of decompressed video
US7113867B1 (en) 2000-11-26 2006-09-26 Mobileye Technologies Limited System and method for detecting obstacles to vehicle motion and determining time to contact therewith using sequences of images
US20070022068A1 (en) 2005-07-01 2007-01-25 Ralph Linsker Neural networks for prediction and control
US20070071100A1 (en) 2005-09-27 2007-03-29 Fang Shi Encoder assisted frame rate up conversion using various motion models
US20070109409A1 (en) * 2004-12-17 2007-05-17 Sehoon Yea Method and System for Processing Multiview Videos for View Synthesis using Skip and Direct Modes
US20070176643A1 (en) 2005-06-17 2007-08-02 Alex Nugent Universal logic gate utilizing nanotechnology
US20070208678A1 (en) 2004-03-17 2007-09-06 Canon Kabushiki Kaisha Parallel Pulse Signal Processing Apparatus, Pattern Recognition Apparatus, And Image Input Apparatus
US20080043848A1 (en) 1999-11-29 2008-02-21 Kuhn Peter M Video/audio signal processing method and video/audio signal processing apparatus
US20080100482A1 (en) 2003-05-27 2008-05-01 Lazar Aurel A Multichannel Time Encoding And Decoding Of A Signal
US20080152236A1 (en) 2006-12-22 2008-06-26 Canon Kabushiki Kaisha Image processing method and apparatus
WO2008083335A2 (en) 2006-12-29 2008-07-10 Neurosciences Research Foundation, Inc. Solving the distal reward problem through linkage of stdp and dopamine signaling
US20080174700A1 (en) 2005-03-10 2008-07-24 Nobumichi Takaba Display Device, Contrast Adjusting Method and Contrast Adjusting Program
US20080199072A1 (en) 2003-02-27 2008-08-21 Sony Corporation Image processing device and method, learning device and method, recording medium, and program
US20080205764A1 (en) 2007-02-26 2008-08-28 Yoshiaki Iwai Information processing apparatus, method, and program
US7430682B2 (en) 2005-09-30 2008-09-30 Symbol Technologies, Inc. Processing image data from multiple sources
US20080237446A1 (en) 2007-02-16 2008-10-02 Texas Instruments Incorporated Solid-state image pickup device and method
US20080252723A1 (en) 2007-02-23 2008-10-16 Johnson Controls Technology Company Video processing systems and methods
US20080267458A1 (en) 2007-04-27 2008-10-30 University Of Ottawa Face image log creation
WO2008132066A1 (en) 2007-04-27 2008-11-06 Siemens Aktiengesellschaft A method for computer-assisted learning of one or more neural networks
US20090028384A1 (en) 2005-04-18 2009-01-29 Alexander Bovyrin Three-dimensional road map estimation from video sequences by tracking pedestrians
US20090043722A1 (en) 2003-03-27 2009-02-12 Alex Nugent Adaptive neural network utilizing nanotechnology-based components
US20090096863A1 (en) * 2007-10-10 2009-04-16 Samsung Electronics Co., Ltd. Method and apparatus for reducing fatigue resulting from viewing three-dimensional image display, and method and apparatus for generating data stream of low visual fatigue three-dimensional image
US20090096927A1 (en) 2007-10-15 2009-04-16 Camp Jr William O System and method for video coding using variable compression and object motion tracking
US20090106030A1 (en) 2004-11-09 2009-04-23 Koninklijke Philips Electronics, N.V. Method of signal encoding
US20090141938A1 (en) 2007-11-08 2009-06-04 Electronics And Telecommunications Research Institute Robot vision system and detection method
US20090195640A1 (en) * 2008-01-31 2009-08-06 Samsung Electronics Co., Ltd. Method and apparatus for generating stereoscopic image data stream for temporally partial three-dimensional (3d) data, and method and apparatus for displaying temporally partial 3d data of stereoscopic image
US20090202114A1 (en) 2008-02-13 2009-08-13 Sebastien Morin Live-Action Image Capture
US7580907B1 (en) 2004-01-14 2009-08-25 Evolved Machines, Inc. Invariant object recognition
US20090287624A1 (en) 2005-12-23 2009-11-19 Societe De Commercialisation De Produits De La Recherche Applique-Socpra-Sciences Et Genie S.E.C. Spatio-temporal pattern recognition using a spiking neural network and processing thereof on a portable and/or distributed computer
US20090304231A1 (en) 2008-06-09 2009-12-10 Arcsoft, Inc. Method of automatically detecting and tracking successive frames in a region of interesting by an electronic imaging device
US20090312985A1 (en) 2008-06-12 2009-12-17 Eliazar Austin I D Multiple hypothesis tracking
US20090323809A1 (en) 2008-06-25 2009-12-31 Qualcomm Incorporated Fragmented reference in temporal compression for video coding
US7653255B2 (en) 2004-06-02 2010-01-26 Adobe Systems Incorporated Image region of interest encoding
US20100036457A1 (en) 2008-08-07 2010-02-11 Massachusetts Institute Of Technology Coding for visual prostheses
US20100073371A1 (en) 2008-09-25 2010-03-25 Pixia Corp. Large format video archival, storage, and retrieval system and method
US20100080297A1 (en) 2008-09-30 2010-04-01 Microsoft Corporation Techniques to perform fast motion estimation
US20100081958A1 (en) 2006-10-02 2010-04-01 She Christy L Pulse-based feature extraction for neural recordings
US20100086171A1 (en) 2008-10-02 2010-04-08 Silverbrook Research Pty Ltd Method of imaging coding pattern having merged data symbols
US20100100482A1 (en) 2007-01-23 2010-04-22 Sxip Identity Corp. Intermediate Data Generation For Transaction Processing
US7737933B2 (en) 2000-09-26 2010-06-15 Toshiba Matsushita Display Technology Co., Ltd. Display unit and drive system thereof and an information display unit
US20100166320A1 (en) 2008-12-26 2010-07-01 Paquier Williams J F Multi-stage image pattern recognizer
US20100225824A1 (en) 2007-06-28 2010-09-09 The Trustees Of Columbia University In The City Of New York Multi-Input Multi-Output Time Encoding And Decoding Machines
US20100235310A1 (en) 2009-01-27 2010-09-16 Gage Fred H Temporally dynamic artificial neural networks
US20100271511A1 (en) 2009-04-24 2010-10-28 Canon Kabushiki Kaisha Processing multi-view digital images
US20100290530A1 (en) 2009-05-14 2010-11-18 Qualcomm Incorporated Motion vector processing
US20100299296A1 (en) 2009-05-21 2010-11-25 International Business Machines Corporation Electronic learning synapse with spike-timing dependent plasticity using unipolar memory-switching elements
US7849030B2 (en) 2006-05-31 2010-12-07 Hartford Fire Insurance Company Method and system for classifying documents
RU2406105C2 (en) 2006-06-13 2010-12-10 Филипп Геннадьевич Нестерук Method of processing information in neural networks
US20110002191A1 (en) 2006-12-07 2011-01-06 Alion Science & Technology Active sonar apparatuses and methods
US20110016071A1 (en) 2009-07-20 2011-01-20 Guillen Marcos E Method for efficiently simulating the information processing in cells and tissues of the nervous system with a temporal series compressed encoding neural network
US20110063409A1 (en) 2009-09-11 2011-03-17 Nokia Corporation Encoding and decoding a multi-view video signal
US20110103480A1 (en) 2009-10-30 2011-05-05 Qualcomm Incorporated Global motion parameter estimation using block-based motion vectors
US20110119215A1 (en) 2009-11-13 2011-05-19 International Business Machines Corporation Hardware analog-digital neural networks
US20110119214A1 (en) 2009-11-18 2011-05-19 International Business Machines Corporation Area efficient neuromorphic circuits
US20110137843A1 (en) 2008-08-28 2011-06-09 Massachusetts Institute Of Technology Circuits and Methods Representative of Spike Timing Dependent Plasticity of Neurons
US20110134242A1 (en) 2008-07-08 2011-06-09 Gerrit Jacobus Loubser Apparatus and method of classifying movement of objects in a monitoring zone
US20110160741A1 (en) 2008-06-09 2011-06-30 Hiroyuki Asano Medical treatment tool for tubular organ
US20110164110A1 (en) * 2010-01-03 2011-07-07 Sensio Technologies Inc. Method and system for detecting compressed stereoscopic frames in a digital video signal
US20110170792A1 (en) 2008-09-23 2011-07-14 Dolby Laboratories Licensing Corporation Encoding and Decoding Architecture of Checkerboard Multiplexed Image Data
RU2424561C2 (en) 2005-08-31 2011-07-20 Майкрософт Корпорейшн Training convolutional neural network on graphics processing units
US8000967B2 (en) 2005-03-09 2011-08-16 Telefonaktiebolaget Lm Ericsson (Publ) Low-complexity code excited linear prediction encoding
US20110206122A1 (en) 2010-02-25 2011-08-25 International Business Machines Corporation Method and Apparatus for Encoding Surveillance Video
US20110222603A1 (en) 2008-06-03 2011-09-15 Thales Method and System Making It Possible to Protect A Compressed Video Stream Against Errors Arising During a Transmission
US20110228092A1 (en) 2010-03-19 2011-09-22 University-Industry Cooperation Group Of Kyung Hee University Surveillance system
US20110242341A1 (en) 2010-03-31 2011-10-06 Agrawal Amit K Method and System for Generating High Temporal Resolution Video from Low Temporal Resolution Videos
CN102226740A (en) 2011-04-18 2011-10-26 中国计量学院 Bearing fault detection method based on manner of controlling stochastic resonance by external periodic signal
US20110280300A1 (en) * 2009-01-29 2011-11-17 Dolby Laboratories Licensing Corporation Methods and Devices for Sub-Sampling and Interleaving Multiple Images, EG Stereoscopic
US20120011090A1 (en) 2010-07-07 2012-01-12 Qualcomm Incorporated Methods and systems for three-memristor synapse with stdp and dopamine signaling
US8108147B1 (en) 2009-02-06 2012-01-31 The United States Of America As Represented By The Secretary Of The Navy Apparatus and method for automatic omni-directional visual motion-based collision avoidance
US20120026304A1 (en) * 2010-07-27 2012-02-02 Kabushiki Kaisha Toshiba Stereoscopic video output device and backlight control method
US20120057634A1 (en) 2010-09-02 2012-03-08 Fang Shi Systems and Methods for Video Content Analysis
US20120072189A1 (en) 2006-06-30 2012-03-22 Bae Systems Plc Sensor systems for estimating field
US20120083982A1 (en) 2010-10-05 2012-04-05 Zachary Thomas Bonefas System and method for governing a speed of an autonomous vehicle
US20120084240A1 (en) 2010-09-30 2012-04-05 International Business Machines Corporation Phase change memory synaptronic circuit for spiking computation, association and recall
US20120109866A1 (en) 2010-10-29 2012-05-03 International Business Machines Corporation Compact cognitive synaptic computing circuits
US20120130566A1 (en) 2008-12-15 2012-05-24 Saab Ab Method and system for facilitating autonomous landing of aerial vehicles on a surface
US20120140032A1 (en) * 2010-11-23 2012-06-07 Circa3D, Llc Formatting 3d content for low frame-rate displays
US20120162450A1 (en) 2010-12-23 2012-06-28 Sungsoo Park Digital image stabilization device and method
US20120212579A1 (en) 2009-10-20 2012-08-23 Telefonaktiebolaget Lm Ericsson (Publ) Method and Arrangement for Multi-View Video Compression
US20120236114A1 (en) * 2011-03-18 2012-09-20 Te-Hao Chang Depth information generator for generating depth information output by only processing part of received images having different views, and related depth information generating method and depth adjusting apparatus thereof
US20120243733A1 (en) 2011-03-22 2012-09-27 Morpho, Inc. Moving object detecting device, moving object detecting method, moving object detection program, moving object tracking device, moving object tracking method, and moving object tracking program
US20120256941A1 (en) 2011-04-08 2012-10-11 Dolby Laboratories Licensing Corporation Local Definition of Global Image Transformations
US8315305B2 (en) 2010-03-26 2012-11-20 Brain Corporation Systems and methods for invariant pulse latency coding
US20120294486A1 (en) 2011-05-17 2012-11-22 Snell Limited Detecting stereoscopic images
US20120303091A1 (en) 2010-03-26 2012-11-29 Izhikevich Eugene M Apparatus and methods for polychronous encoding and multiplexing in neuronal prosthetic devices
US20120300843A1 (en) * 2010-01-29 2012-11-29 Thomson Licensing Block-based interleaving
US20120308076A1 (en) 2010-03-26 2012-12-06 Filip Lukasz Piekniewski Apparatus and methods for temporally proximate object recognition
US20120308136A1 (en) 2010-03-26 2012-12-06 Izhikevich Eugene M Apparatus and methods for pulse-code invariant object recognition
US20120330447A1 (en) 2010-11-16 2012-12-27 Gerlach Adam R Surface data acquisition, storage, and assessment system
US20130022111A1 (en) * 2011-07-22 2013-01-24 Qualcomm Incorporated Coding motion depth maps with depth range variation
US20130051680A1 (en) 2011-08-31 2013-02-28 Olympus Corporation Image processing device, image processing method, and computer readable recording device
US20130050574A1 (en) 2011-08-29 2013-02-28 Futurewei Technologies Inc. System and Method for Retargeting Video Sequences
US8390707B2 (en) 2008-02-28 2013-03-05 Kabushiki Kaisha Toshiba Solid-state imaging device and manufacturing method thereof
US20130073498A1 (en) 2011-09-21 2013-03-21 Eugene M. Izhikevich Elementary network description for efficient link between neuronal models and neuromorphic systems
US20130073500A1 (en) 2011-09-21 2013-03-21 Botond Szatmary High level neuromorphic network description apparatus and methods
US20130073492A1 (en) 2011-09-21 2013-03-21 Eugene M. Izhikevich Elementary network description for efficient implementation of event-triggered plasticity rules in neuromorphic systems
US20130073491A1 (en) 2011-09-21 2013-03-21 Eugene M. Izhikevich Apparatus and methods for synaptic update in a pulse-coded network
US20130073499A1 (en) 2011-09-21 2013-03-21 Eugene M. Izhikevich Apparatus and method for partial evaluation of synaptic updates based on system events
US20130073484A1 (en) 2011-09-21 2013-03-21 Eugene M. Izhikevich Elementary network description for efficient memory management in neuromorphic systems
US20130073496A1 (en) 2011-09-21 2013-03-21 Botond Szatmary Tag-based apparatus and methods for neural networks
US20130073495A1 (en) 2011-09-21 2013-03-21 Eugene M. Izhikevich Elementary network description for neuromorphic systems
US8416847B2 (en) 1998-12-21 2013-04-09 Zin Stai Pte. In, Llc Separate plane compression using plurality of compression methods including ZLN and ZLD methods
US20130148882A1 (en) 2011-12-13 2013-06-13 Morris Lee Detecting objects in images using color histograms
US20130151450A1 (en) 2011-12-07 2013-06-13 Filip Ponulak Neural network apparatus and methods for signal conversion
US20130176430A1 (en) 2012-01-06 2013-07-11 Pelco, Inc. Context aware moving object detection
US20130218821A1 (en) 2011-09-21 2013-08-22 Botond Szatmary Round-trip engineering apparatus and methods for neural networks
US8542875B2 (en) 2010-09-17 2013-09-24 Honeywell International Inc. Image processing based on visual attention and reduced search based generated regions of interest
US20130297542A1 (en) 2012-05-07 2013-11-07 Filip Piekniewski Sensory input processing apparatus in a spiking neural network
US20130297539A1 (en) 2012-05-07 2013-11-07 Filip Piekniewski Spiking neural network object recognition apparatus and methods
US20130297541A1 (en) 2012-05-07 2013-11-07 Filip Piekniewski Spiking neural network feedback apparatus and methods
US20130325777A1 (en) 2012-06-04 2013-12-05 Csaba Petre Spiking neuron network apparatus and methods
US20130325775A1 (en) 2012-06-04 2013-12-05 Brain Corporation Dynamically reconfigurable stochastic learning apparatus and methods
US20130325773A1 (en) 2012-06-04 2013-12-05 Brain Corporation Stochastic apparatus and methods for implementing generalized learning rules
US20130325766A1 (en) 2012-06-04 2013-12-05 Csaba Petre Spiking neuron network apparatus and methods
US20130325774A1 (en) 2012-06-04 2013-12-05 Brain Corporation Learning stochastic apparatus and methods
US20130325768A1 (en) 2012-06-04 2013-12-05 Brain Corporation Stochastic spiking network learning apparatus and methods
US20140012788A1 (en) 2012-07-03 2014-01-09 Filip Piekniewski Conditional plasticity spiking neuron network apparatus and methods
US20140016858A1 (en) 2012-07-12 2014-01-16 Micah Richert Spiking neuron network sensory processing apparatus and methods
US20140032459A1 (en) 2012-07-27 2014-01-30 Brain Corporation Apparatus and methods for generalized state-dependent learning in spiking neuron networks
US20140032458A1 (en) 2012-07-27 2014-01-30 Oleg Sinyavskiy Apparatus and methods for efficient updates in spiking neuron network
US20140052679A1 (en) 2011-09-21 2014-02-20 Oleg Sinyavskiy Apparatus and methods for implementing event-based updates in spiking neuron networks
US20140064609A1 (en) 2010-08-26 2014-03-06 Csaba Petre Sensory input processing apparatus and methods
US20140122398A1 (en) 2012-10-25 2014-05-01 Brain Corporation Modulated plasticity apparatus and methods for spiking neuron network
US20140122399A1 (en) 2012-10-25 2014-05-01 Brain Corporation Apparatus and methods for activity-based plasticity in a spiking neuron network
US20140119654A1 (en) 2012-10-30 2014-05-01 Canon Kabushiki Kaisha Method, apparatus and system for determining a boundary of an obstacle which occludes an object in an image
US20140122397A1 (en) 2012-10-25 2014-05-01 Brain Corporation Adaptive plasticity apparatus and methods for spiking neuron network
US20140125773A1 (en) * 2012-11-05 2014-05-08 Kabushiki Kaisha Toshiba Image processing methods and apparatus
US20140156574A1 (en) 2012-11-30 2014-06-05 Brain Corporation Rate stabilization through plasticity in spiking neuron network
US20140201126A1 (en) 2012-09-15 2014-07-17 Lotfi A. Zadeh Methods and Systems for Applications for Z-numbers
US20140241612A1 (en) * 2013-02-23 2014-08-28 Microsoft Corporation Real time stereo matching
US20140379179A1 (en) 2013-06-24 2014-12-25 Honeywell International Inc. Systems and methods for autonomous landing using a three dimensional evidence grid
US20150077639A1 (en) 2013-09-16 2015-03-19 Thomson Licensing Color video processing system and method, and corresponding computer program
US20150127154A1 (en) 2011-06-02 2015-05-07 Brain Corporation Reduced degree of freedom robotic controller apparatus and methods
US20150127155A1 (en) 2011-06-02 2015-05-07 Brain Corporation Apparatus and methods for operating robotic devices using selective state space training
US20150181168A1 (en) 2013-12-20 2015-06-25 DDD IP Ventures, Ltd. Interactive quality improvement for video conferencing
US20150217449A1 (en) 2014-02-03 2015-08-06 Brain Corporation Apparatus and methods for control of robot actions based on corrective user inputs
US20150281715A1 (en) 2014-03-25 2015-10-01 Sean J. Lawrence Object tracking in encoded video streams
US9150220B2 (en) 2013-12-04 2015-10-06 Mobileye Vision Technologies Ltd. Systems and methods for mimicking a leading vehicle
US20150304634A1 (en) 2011-08-04 2015-10-22 John George Karvounis Mapping and tracking system
US20150312547A1 (en) * 2012-12-13 2015-10-29 Rai Radiotelevisione Italiana S.P.A. Apparatus and method for generating and rebuilding a video stream
US20160003946A1 (en) 2014-07-03 2016-01-07 Advanced Scientific Concepts, Inc. Ladar sensor for a dense environment
US20160009413A1 (en) 2013-08-23 2016-01-14 Korea Aerospace Research Institute Apparatus and method of charging and housing of unmanned vertical take-off and landing (vtol) aircraft

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8187097B1 (en) * 2008-06-04 2012-05-29 Zhang Evan Y W Measurement and segment of participant's motion in game play
WO2011078883A1 (en) * 2009-12-24 2011-06-30 Trumbull Ventures Llc Method and apparatus for photographing and projecting moving images in three dimensions
KR102063385B1 (en) * 2013-01-30 2020-01-07 인텔 코포레이션 Content adaptive entropy coding for next generation video
US9552633B2 (en) * 2014-03-07 2017-01-24 Qualcomm Incorporated Depth aware enhancement for stereo video
US10613642B2 (en) * 2014-03-12 2020-04-07 Microsoft Technology Licensing, Llc Gesture parameter tuning

Patent Citations (196)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5063603A (en) 1989-11-06 1991-11-05 David Sarnoff Research Center, Inc. Dynamic method for recognizing objects and image processing system therefor
JPH0487423A (en) 1990-07-31 1992-03-19 Toshiba Corp Decoding circuit
US5216752A (en) 1990-12-19 1993-06-01 Baylor College Of Medicine Interspike interval decoding neural network
US5138447A (en) 1991-02-11 1992-08-11 General Instrument Corporation Method and apparatus for communicating compressed digital video signals using multiple processors
US5272535A (en) 1991-06-13 1993-12-21 Loral Fairchild Corporation Image sensor with exposure control, selectable interlaced, pseudo interlaced or non-interlaced readout and video compression
US6418424B1 (en) 1991-12-23 2002-07-09 Steven M. Hoffberg Ergonomic man-machine interface incorporating adaptive pattern recognition based control system
US5875108A (en) 1991-12-23 1999-02-23 Hoffberg; Steven M. Ergonomic man-machine interface incorporating adaptive pattern recognition based control system
US5355435A (en) 1992-05-18 1994-10-11 New Mexico State University Technology Transfer Corp. Asynchronous temporal neural processing element
US5673367A (en) 1992-10-01 1997-09-30 Buckley; Theresa M. Method for neural network control of motion using real-time environmental feedback
US5638359A (en) 1992-12-14 1997-06-10 Nokia Telecommunications Oy Method for congestion management in a frame relay network and a node in a frame relay network
RU2108612C1 (en) 1994-09-14 1998-04-10 Круглов Сергей Петрович Adaptive control system with identifier and implicit reference model
US6625317B1 (en) 1995-09-12 2003-09-23 Art Gaffin Visual imaging system and method
US6014653A (en) 1996-01-26 2000-01-11 Thaler; Stephen L. Non-algorithmically implemented artificial neural networks and components thereof
US6009418A (en) 1996-05-02 1999-12-28 Cooper; David L. Method and apparatus for neural networking using semantic attractor architecture
US6509854B1 (en) 1997-03-16 2003-01-21 Hitachi, Ltd. DA conversion circuit
US20030050903A1 (en) 1997-06-11 2003-03-13 Jim-Shih Liaw Dynamic synapse for signal processing in neural networks
US6545708B1 (en) 1997-07-11 2003-04-08 Sony Corporation Camera controlling device and method for predicted viewing
US6458157B1 (en) 1997-08-04 2002-10-01 Suaning Gregg Joergen Retinal stimulator
US6581046B1 (en) 1997-10-10 2003-06-17 Yeda Research And Development Co. Ltd. Neuronal phase-locked loops
US6545705B1 (en) 1998-04-10 2003-04-08 Lynx System Developers, Inc. Camera with object recognition/data output
US6035389A (en) 1998-08-11 2000-03-07 Intel Corporation Scheduling instructions with different latencies
US20040170330A1 (en) 1998-08-12 2004-09-02 Pixonics, Inc. Video coding reconstruction apparatus and methods
US8416847B2 (en) 1998-12-21 2013-04-09 Zin Stai Pte. In, Llc Separate plane compression using plurality of compression methods including ZLN and ZLD methods
US20040054964A1 (en) 1999-03-18 2004-03-18 Xerox Corporation. Methods and systems for real-time storyboarding with a web page and graphical user interface for automatic video parsing and browsing
US20080043848A1 (en) 1999-11-29 2008-02-21 Kuhn Peter M Video/audio signal processing method and video/audio signal processing apparatus
US6546291B2 (en) 2000-02-16 2003-04-08 Massachusetts Eye & Ear Infirmary Balance prosthesis
US6501794B1 (en) 2000-05-22 2002-12-31 Microsoft Corporate System and related methods for analyzing compressed media content
US20020038294A1 (en) 2000-06-16 2002-03-28 Masakazu Matsugu Apparatus and method for detecting or recognizing pattern by employing a plurality of feature detecting elements
US7737933B2 (en) 2000-09-26 2010-06-15 Toshiba Matsushita Display Technology Co., Ltd. Display unit and drive system thereof and an information display unit
US6678590B1 (en) * 2000-10-17 2004-01-13 Bbnt Solutions Llc Vehicle navigation system with vision system preprocessor using MPEG encoder
US7113867B1 (en) 2000-11-26 2006-09-26 Mobileye Technologies Limited System and method for detecting obstacles to vehicle motion and determining time to contact therewith using sequences of images
US20020176025A1 (en) 2001-03-05 2002-11-28 Chang-Su Kim Systems and methods for encoding redundant motion vectors in compressed video bitstreams
US6556610B1 (en) 2001-04-12 2003-04-29 E20 Communications, Inc. Semiconductor lasers
US20040193670A1 (en) 2001-05-21 2004-09-30 Langan John D. Spatio-temporal filter and method
US20050036649A1 (en) 2001-08-23 2005-02-17 Jun Yokono Robot apparatus, face recognition method, and face recognition apparatus
US20030216919A1 (en) 2002-05-13 2003-11-20 Roushar Joseph C. Multi-dimensional method and apparatus for automated language interpretation
US20040136439A1 (en) 2002-11-15 2004-07-15 Brandon Dewberry Methods and systems acquiring impulse signals
US20060094001A1 (en) 2002-11-29 2006-05-04 Torre Vicent E Method and device for image processing and learning with neuronal cultures
US20080199072A1 (en) 2003-02-27 2008-08-21 Sony Corporation Image processing device and method, learning device and method, recording medium, and program
US20090043722A1 (en) 2003-03-27 2009-02-12 Alex Nugent Adaptive neural network utilizing nanotechnology-based components
US7016783B2 (en) 2003-03-28 2006-03-21 Delphi Technologies, Inc. Collision avoidance with active steering and braking
US20040233987A1 (en) 2003-05-21 2004-11-25 Porikli Fatih M. Method for segmenting 3D objects from compressed videos
US7142602B2 (en) 2003-05-21 2006-11-28 Mitsubishi Electric Research Laboratories, Inc. Method for segmenting 3D objects from compressed videos
US20080100482A1 (en) 2003-05-27 2008-05-01 Lazar Aurel A Multichannel Time Encoding And Decoding Of A Signal
US20050047647A1 (en) 2003-06-10 2005-03-03 Ueli Rutishauser System and method for attentional selection
US20050015351A1 (en) 2003-07-18 2005-01-20 Alex Nugent Nanotechnology neural network methods and systems
US20050096539A1 (en) 2003-10-31 2005-05-05 Siemens Medical Solutions Usa, Inc. Intelligent ultrasound examination storage system
US20060161218A1 (en) 2003-11-26 2006-07-20 Wicab, Inc. Systems and methods for treating traumatic brain injury
US7580907B1 (en) 2004-01-14 2009-08-25 Evolved Machines, Inc. Invariant object recognition
US20070208678A1 (en) 2004-03-17 2007-09-06 Canon Kabushiki Kaisha Parallel Pulse Signal Processing Apparatus, Pattern Recognition Apparatus, And Image Input Apparatus
US7653255B2 (en) 2004-06-02 2010-01-26 Adobe Systems Incorporated Image region of interest encoding
US8015130B2 (en) 2004-06-11 2011-09-06 Canon Kabushiki Kaisha Information processing apparatus, information processing method, pattern recognition apparatus, and pattern recognition method
US20050283450A1 (en) 2004-06-11 2005-12-22 Masakazu Matsugu Information processing apparatus, information processing method, pattern recognition apparatus, and pattern recognition method
US20060088191A1 (en) 2004-10-25 2006-04-27 Tong Zhang Video content understanding through real time video motion analysis
US7447337B2 (en) 2004-10-25 2008-11-04 Hewlett-Packard Development Company, L.P. Video content understanding through real time video motion analysis
US20090106030A1 (en) 2004-11-09 2009-04-23 Koninklijke Philips Electronics, N.V. Method of signal encoding
US20060129728A1 (en) 2004-12-09 2006-06-15 Hampel Craig E Memory interface with workload adaptive encode/decode
US20060127042A1 (en) 2004-12-09 2006-06-15 Park Sung W Structure of navigation information for video data recorded on a recording medium and recording and reproducing methods and apparatuses using the structure
US20070109409A1 (en) * 2004-12-17 2007-05-17 Sehoon Yea Method and System for Processing Multiview Videos for View Synthesis using Skip and Direct Modes
US20060188168A1 (en) 2005-02-22 2006-08-24 Sheraizin Vitaly S Enhancement of decompressed video
US8000967B2 (en) 2005-03-09 2011-08-16 Telefonaktiebolaget Lm Ericsson (Publ) Low-complexity code excited linear prediction encoding
US20080174700A1 (en) 2005-03-10 2008-07-24 Nobumichi Takaba Display Device, Contrast Adjusting Method and Contrast Adjusting Program
US20090028384A1 (en) 2005-04-18 2009-01-29 Alexander Bovyrin Three-dimensional road map estimation from video sequences by tracking pedestrians
US20070176643A1 (en) 2005-06-17 2007-08-02 Alex Nugent Universal logic gate utilizing nanotechnology
US20070022068A1 (en) 2005-07-01 2007-01-25 Ralph Linsker Neural networks for prediction and control
RU2424561C2 (en) 2005-08-31 2011-07-20 Майкрософт Корпорейшн Training convolutional neural network on graphics processing units
US20070071100A1 (en) 2005-09-27 2007-03-29 Fang Shi Encoder assisted frame rate up conversion using various motion models
US7430682B2 (en) 2005-09-30 2008-09-30 Symbol Technologies, Inc. Processing image data from multiple sources
US20090287624A1 (en) 2005-12-23 2009-11-19 Societe De Commercialisation De Produits De La Recherche Applique-Socpra-Sciences Et Genie S.E.C. Spatio-temporal pattern recognition using a spiking neural network and processing thereof on a portable and/or distributed computer
US7849030B2 (en) 2006-05-31 2010-12-07 Hartford Fire Insurance Company Method and system for classifying documents
RU2406105C2 (en) 2006-06-13 2010-12-10 Филипп Геннадьевич Нестерук Method of processing information in neural networks
US20120072189A1 (en) 2006-06-30 2012-03-22 Bae Systems Plc Sensor systems for estimating field
US20100081958A1 (en) 2006-10-02 2010-04-01 She Christy L Pulse-based feature extraction for neural recordings
US20110002191A1 (en) 2006-12-07 2011-01-06 Alion Science & Technology Active sonar apparatuses and methods
US20080152236A1 (en) 2006-12-22 2008-06-26 Canon Kabushiki Kaisha Image processing method and apparatus
US8103602B2 (en) 2006-12-29 2012-01-24 Neurosciences Research Foundation, Inc. Solving the distal reward problem through linkage of STDP and dopamine signaling
WO2008083335A2 (en) 2006-12-29 2008-07-10 Neurosciences Research Foundation, Inc. Solving the distal reward problem through linkage of stdp and dopamine signaling
US20100100482A1 (en) 2007-01-23 2010-04-22 Sxip Identity Corp. Intermediate Data Generation For Transaction Processing
US20080237446A1 (en) 2007-02-16 2008-10-02 Texas Instruments Incorporated Solid-state image pickup device and method
US20080252723A1 (en) 2007-02-23 2008-10-16 Johnson Controls Technology Company Video processing systems and methods
US20080205764A1 (en) 2007-02-26 2008-08-28 Yoshiaki Iwai Information processing apparatus, method, and program
WO2008132066A1 (en) 2007-04-27 2008-11-06 Siemens Aktiengesellschaft A method for computer-assisted learning of one or more neural networks
US20080267458A1 (en) 2007-04-27 2008-10-30 University Of Ottawa Face image log creation
US20100225824A1 (en) 2007-06-28 2010-09-09 The Trustees Of Columbia University In The City Of New York Multi-Input Multi-Output Time Encoding And Decoding Machines
US20090096863A1 (en) * 2007-10-10 2009-04-16 Samsung Electronics Co., Ltd. Method and apparatus for reducing fatigue resulting from viewing three-dimensional image display, and method and apparatus for generating data stream of low visual fatigue three-dimensional image
US20090096927A1 (en) 2007-10-15 2009-04-16 Camp Jr William O System and method for video coding using variable compression and object motion tracking
US20090141938A1 (en) 2007-11-08 2009-06-04 Electronics and Telecommunications Research Institute Robot vision system and detection method
US20090195640A1 (en) * 2008-01-31 2009-08-06 Samsung Electronics Co., Ltd. Method and apparatus for generating stereoscopic image data stream for temporally partial three-dimensional (3d) data, and method and apparatus for displaying temporally partial 3d data of stereoscopic image
US20090202114A1 (en) 2008-02-13 2009-08-13 Sebastien Morin Live-Action Image Capture
US8390707B2 (en) 2008-02-28 2013-03-05 Kabushiki Kaisha Toshiba Solid-state imaging device and manufacturing method thereof
US20110222603A1 (en) 2008-06-03 2011-09-15 Thales Method and System Making It Possible to Protect A Compressed Video Stream Against Errors Arising During a Transmission
US20090304231A1 (en) 2008-06-09 2009-12-10 Arcsoft, Inc. Method of automatically detecting and tracking successive frames in a region of interesting by an electronic imaging device
US20110160741A1 (en) 2008-06-09 2011-06-30 Hiroyuki Asano Medical treatment tool for tubular organ
US20090312985A1 (en) 2008-06-12 2009-12-17 Eliazar Austin I D Multiple hypothesis tracking
US20090323809A1 (en) 2008-06-25 2009-12-31 Qualcomm Incorporated Fragmented reference in temporal compression for video coding
US20110134242A1 (en) 2008-07-08 2011-06-09 Gerrit Jacobus Loubser Apparatus and method of classifying movement of objects in a monitoring zone
US20100036457A1 (en) 2008-08-07 2010-02-11 Massachusetts Institute Of Technology Coding for visual prostheses
US20110137843A1 (en) 2008-08-28 2011-06-09 Massachusetts Institute Of Technology Circuits and Methods Representative of Spike Timing Dependent Plasticity of Neurons
US20110170792A1 (en) 2008-09-23 2011-07-14 Dolby Laboratories Licensing Corporation Encoding and Decoding Architecture of Checkerboard Multiplexed Image Data
US20100073371A1 (en) 2008-09-25 2010-03-25 Pixia Corp. Large format video archival, storage, and retrieval system and method
US20100080297A1 (en) 2008-09-30 2010-04-01 Microsoft Corporation Techniques to perform fast motion estimation
US20100086171A1 (en) 2008-10-02 2010-04-08 Silverbrook Research Pty Ltd Method of imaging coding pattern having merged data symbols
US20120130566A1 (en) 2008-12-15 2012-05-24 Saab Ab Method and system for facilitating autonomous landing of aerial vehicles on a surface
US8160354B2 (en) 2008-12-26 2012-04-17 Five Apes, Inc. Multi-stage image pattern recognizer
US20100166320A1 (en) 2008-12-26 2010-07-01 Paquier Williams J F Multi-stage image pattern recognizer
US20100235310A1 (en) 2009-01-27 2010-09-16 Gage Fred H Temporally dynamic artificial neural networks
US20110280300A1 (en) * 2009-01-29 2011-11-17 Dolby Laboratories Licensing Corporation Methods and Devices for Sub-Sampling and Interleaving Multiple Images, EG Stereoscopic
US8108147B1 (en) 2009-02-06 2012-01-31 The United States Of America As Represented By The Secretary Of The Navy Apparatus and method for automatic omni-directional visual motion-based collision avoidance
US20100271511A1 (en) 2009-04-24 2010-10-28 Canon Kabushiki Kaisha Processing multi-view digital images
US20100290530A1 (en) 2009-05-14 2010-11-18 Qualcomm Incorporated Motion vector processing
US20100299296A1 (en) 2009-05-21 2010-11-25 International Business Machines Corporation Electronic learning synapse with spike-timing dependent plasticity using unipolar memory-switching elements
US8200593B2 (en) 2009-07-20 2012-06-12 Corticaldb Inc Method for efficiently simulating the information processing in cells and tissues of the nervous system with a temporal series compressed encoding neural network
US20110016071A1 (en) 2009-07-20 2011-01-20 Guillen Marcos E Method for efficiently simulating the information processing in cells and tissues of the nervous system with a temporal series compressed encoding neural network
US20110063409A1 (en) 2009-09-11 2011-03-17 Nokia Corporation Encoding and decoding a multi-view video signal
US20120212579A1 (en) 2009-10-20 2012-08-23 Telefonaktiebolaget Lm Ericsson (Publ) Method and Arrangement for Multi-View Video Compression
US20110103480A1 (en) 2009-10-30 2011-05-05 Qualcomm Incorporated Global motion parameter estimation using block-based motion vectors
US20110119215A1 (en) 2009-11-13 2011-05-19 International Business Machines Corporation Hardware analog-digital neural networks
US20110119214A1 (en) 2009-11-18 2011-05-19 International Business Machines Corporation Area efficient neuromorphic circuits
US8311965B2 (en) 2009-11-18 2012-11-13 International Business Machines Corporation Area efficient neuromorphic circuits using field effect transistors (FET) and variable resistance material
US20110164110A1 (en) * 2010-01-03 2011-07-07 Sensio Technologies Inc. Method and system for detecting compressed stereoscopic frames in a digital video signal
US20120300843A1 (en) * 2010-01-29 2012-11-29 Thomson Licensing Block-based interleaving
US20110206122A1 (en) 2010-02-25 2011-08-25 International Business Machines Corporation Method and Apparatus for Encoding Surveillance Video
US20110228092A1 (en) 2010-03-19 2011-09-22 University-Industry Cooperation Group Of Kyung Hee University Surveillance system
US8467623B2 (en) 2010-03-26 2013-06-18 Brain Corporation Invariant pulse latency coding systems and methods systems and methods
US8315305B2 (en) 2010-03-26 2012-11-20 Brain Corporation Systems and methods for invariant pulse latency coding
US20120308136A1 (en) 2010-03-26 2012-12-06 Izhikevich Eugene M Apparatus and methods for pulse-code invariant object recognition
US20120308076A1 (en) 2010-03-26 2012-12-06 Filip Lukasz Piekniewski Apparatus and methods for temporally proximate object recognition
US20120303091A1 (en) 2010-03-26 2012-11-29 Izhikevich Eugene M Apparatus and methods for polychronous encoding and multiplexing in neuronal prosthetic devices
US20130251278A1 (en) 2010-03-26 2013-09-26 Eugene M. Izhikevich Invariant pulse latency coding systems and methods
US20110242341A1 (en) 2010-03-31 2011-10-06 Agrawal Amit K Method and System for Generating High Temporal Resolution Video from Low Temporal Resolution Videos
US20120011090A1 (en) 2010-07-07 2012-01-12 Qualcomm Incorporated Methods and systems for three-memristor synapse with stdp and dopamine signaling
US20120026304A1 (en) * 2010-07-27 2012-02-02 Kabushiki Kaisha Toshiba Stereoscopic video output device and backlight control method
US20140064609A1 (en) 2010-08-26 2014-03-06 Csaba Petre Sensory input processing apparatus and methods
US20120057634A1 (en) 2010-09-02 2012-03-08 Fang Shi Systems and Methods for Video Content Analysis
US8542875B2 (en) 2010-09-17 2013-09-24 Honeywell International Inc. Image processing based on visual attention and reduced search based generated regions of interest
US20120084240A1 (en) 2010-09-30 2012-04-05 International Business Machines Corporation Phase change memory synaptronic circuit for spiking computation, association and recall
US20120083982A1 (en) 2010-10-05 2012-04-05 Zachary Thomas Bonefas System and method for governing a speed of an autonomous vehicle
US20120109866A1 (en) 2010-10-29 2012-05-03 International Business Machines Corporation Compact cognitive synaptic computing circuits
US20120330447A1 (en) 2010-11-16 2012-12-27 Gerlach Adam R Surface data acquisition, storage, and assessment system
US20120140032A1 (en) * 2010-11-23 2012-06-07 Circa3D, Llc Formatting 3d content for low frame-rate displays
US20120162450A1 (en) 2010-12-23 2012-06-28 Sungsoo Park Digital image stabilization device and method
US20120236114A1 (en) * 2011-03-18 2012-09-20 Te-Hao Chang Depth information generator for generating depth information output by only processing part of received images having different views, and related depth information generating method and depth adjusting apparatus thereof
US20120243733A1 (en) 2011-03-22 2012-09-27 Morpho, Inc. Moving object detecting device, moving object detecting method, moving object detection program, moving object tracking device, moving object tracking method, and moving object tracking program
US20120256941A1 (en) 2011-04-08 2012-10-11 Dolby Laboratories Licensing Corporation Local Definition of Global Image Transformations
CN102226740A (en) 2011-04-18 2011-10-26 中国计量学院 Bearing fault detection method based on manner of controlling stochastic resonance by external periodic signal
US20120294486A1 (en) 2011-05-17 2012-11-22 Snell Limited Detecting stereoscopic images
US20150127155A1 (en) 2011-06-02 2015-05-07 Brain Corporation Apparatus and methods for operating robotic devices using selective state space training
US20150127154A1 (en) 2011-06-02 2015-05-07 Brain Corporation Reduced degree of freedom robotic controller apparatus and methods
US20130022111A1 (en) * 2011-07-22 2013-01-24 Qualcomm Incorporated Coding motion depth maps with depth range variation
US20150304634A1 (en) 2011-08-04 2015-10-22 John George Karvounis Mapping and tracking system
US20130050574A1 (en) 2011-08-29 2013-02-28 Futurewei Technologies Inc. System and Method for Retargeting Video Sequences
US20130051680A1 (en) 2011-08-31 2013-02-28 Olympus Corporation Image processing device, image processing method, and computer readable recording device
US20140052679A1 (en) 2011-09-21 2014-02-20 Oleg Sinyavskiy Apparatus and methods for implementing event-based updates in spiking neuron networks
US20130073500A1 (en) 2011-09-21 2013-03-21 Botond Szatmary High level neuromorphic network description apparatus and methods
US20130073491A1 (en) 2011-09-21 2013-03-21 Eugene M. Izhikevich Apparatus and methods for synaptic update in a pulse-coded network
US20130073495A1 (en) 2011-09-21 2013-03-21 Eugene M. Izhikevich Elementary network description for neuromorphic systems
US8712939B2 (en) 2011-09-21 2014-04-29 Brain Corporation Tag-based apparatus and methods for neural networks
US20130218821A1 (en) 2011-09-21 2013-08-22 Botond Szatmary Round-trip engineering apparatus and methods for neural networks
US20130073496A1 (en) 2011-09-21 2013-03-21 Botond Szatmary Tag-based apparatus and methods for neural networks
US20130073484A1 (en) 2011-09-21 2013-03-21 Eugene M. Izhikevich Elementary network description for efficient memory management in neuromorphic systems
US20130073492A1 (en) 2011-09-21 2013-03-21 Eugene M. Izhikevich Elementary network description for efficient implementation of event-triggered plasticity rules in neuromorphic systems
US20130073499A1 (en) 2011-09-21 2013-03-21 Eugene M. Izhikevich Apparatus and method for partial evaluation of synaptic updates based on system events
US20130073498A1 (en) 2011-09-21 2013-03-21 Eugene M. Izhikevich Elementary network description for efficient link between neuronal models and neuromorphic systems
US20130151450A1 (en) 2011-12-07 2013-06-13 Filip Ponulak Neural network apparatus and methods for signal conversion
US20130148882A1 (en) 2011-12-13 2013-06-13 Morris Lee Detecting objects in images using color histograms
US20130176430A1 (en) 2012-01-06 2013-07-11 Pelco, Inc. Context aware moving object detection
US20130297541A1 (en) 2012-05-07 2013-11-07 Filip Piekniewski Spiking neural network feedback apparatus and methods
US20130297539A1 (en) 2012-05-07 2013-11-07 Filip Piekniewski Spiking neural network object recognition apparatus and methods
US20130297542A1 (en) 2012-05-07 2013-11-07 Filip Piekniewski Sensory input processing apparatus in a spiking neural network
US20130325777A1 (en) 2012-06-04 2013-12-05 Csaba Petre Spiking neuron network apparatus and methods
US20130325768A1 (en) 2012-06-04 2013-12-05 Brain Corporation Stochastic spiking network learning apparatus and methods
US20130325774A1 (en) 2012-06-04 2013-12-05 Brain Corporation Learning stochastic apparatus and methods
US20130325775A1 (en) 2012-06-04 2013-12-05 Brain Corporation Dynamically reconfigurable stochastic learning apparatus and methods
US20130325773A1 (en) 2012-06-04 2013-12-05 Brain Corporation Stochastic apparatus and methods for implementing generalized learning rules
US20130325766A1 (en) 2012-06-04 2013-12-05 Csaba Petre Spiking neuron network apparatus and methods
US20140012788A1 (en) 2012-07-03 2014-01-09 Filip Piekniewski Conditional plasticity spiking neuron network apparatus and methods
US20140016858A1 (en) 2012-07-12 2014-01-16 Micah Richert Spiking neuron network sensory processing apparatus and methods
US20140032459A1 (en) 2012-07-27 2014-01-30 Brain Corporation Apparatus and methods for generalized state-dependent learning in spiking neuron networks
US20140032458A1 (en) 2012-07-27 2014-01-30 Oleg Sinyavskiy Apparatus and methods for efficient updates in spiking neuron network
US20140201126A1 (en) 2012-09-15 2014-07-17 Lotfi A. Zadeh Methods and Systems for Applications for Z-numbers
US20140122398A1 (en) 2012-10-25 2014-05-01 Brain Corporation Modulated plasticity apparatus and methods for spiking neuron network
US20140122399A1 (en) 2012-10-25 2014-05-01 Brain Corporation Apparatus and methods for activity-based plasticity in a spiking neuron network
US20140122397A1 (en) 2012-10-25 2014-05-01 Brain Corporation Adaptive plasticity apparatus and methods for spiking neuron network
US20140119654A1 (en) 2012-10-30 2014-05-01 Canon Kabushiki Kaisha Method, apparatus and system for determining a boundary of an obstacle which occludes an object in an image
US20140125773A1 (en) * 2012-11-05 2014-05-08 Kabushiki Kaisha Toshiba Image processing methods and apparatus
US20140156574A1 (en) 2012-11-30 2014-06-05 Brain Corporation Rate stabilization through plasticity in spiking neuron network
US20150312547A1 (en) * 2012-12-13 2015-10-29 Rai Radiotelevisione Italiana S.P.A. Apparatus and method for generating and rebuilding a video stream
US20140241612A1 (en) * 2013-02-23 2014-08-28 Microsoft Corporation Real time stereo matching
US20140379179A1 (en) 2013-06-24 2014-12-25 Honeywell International Inc. Systems and methods for autonomous landing using a three dimensional evidence grid
US20160009413A1 (en) 2013-08-23 2016-01-14 Korea Aerospace Research Institute Apparatus and method of charging and housing of unmanned vertical take-off and landing (vtol) aircraft
US20150077639A1 (en) 2013-09-16 2015-03-19 Thomson Licensing Color video processing system and method, and corresponding computer program
US9150220B2 (en) 2013-12-04 2015-10-06 Mobileye Vision Technologies Ltd. Systems and methods for mimicking a leading vehicle
US20150181168A1 (en) 2013-12-20 2015-06-25 DDD IP Ventures, Ltd. Interactive quality improvement for video conferencing
US20150217449A1 (en) 2014-02-03 2015-08-06 Brain Corporation Apparatus and methods for control of robot actions based on corrective user inputs
US20150281715A1 (en) 2014-03-25 2015-10-01 Sean J. Lawrence Object tracking in encoded video streams
US20160003946A1 (en) 2014-07-03 2016-01-07 Advanced Scientific Concepts, Inc. Ladar sensor for a dense environment

Non-Patent Citations (99)

* Cited by examiner, † Cited by third party
Title
Berkes and Wiskott. Slow feature analysis yields a rich repertoire of complex cell properties. Journal of Vision (2005) vol. 5 (6).
Bohte, 'Spiking Neural Networks', Doctorate at the University of Leiden, Holland, Mar. 5, 2003, pp. 1-133 [retrieved on Nov. 14, 2012]. Retrieved from the Internet: http://homepages.cwi.nl/~sbohte/publication/phdthesis.pdf.
Brette et al., Brian: a simple and flexible simulator for spiking neural networks, The Neuromorphic Engineer, Jul. 1, 2009, pp. 1-4, doi: 10.2417/1200906.1659.
Cessac et al. 'Overview of facts and issues about neural coding by spikes.' Journal of Physiology, Paris 104.1 (2010): 5.
Cuntz et al., 'One Rule to Grow Them All: A General Theory of Neuronal Branching and Its Practical Application', PLOS Computational Biology, 6 (8), Published Aug. 5, 2010.
Davison et al., PyNN: a common interface for neuronal network simulators, Frontiers in Neuroinformatics, Jan. 2009, pp. 1-10, vol. 2, Article 11.
Djurfeldt, Mikael, The Connection-set Algebra: a formalism for the representation of connectivity structure in neuronal network models, implementations in Python and C++, and their use in simulators, BMC Neuroscience, Jul. 18, 2011, 12 (Suppl 1): P80.
Dorval et al. 'Probability distributions of the logarithm of inter-spike intervals yield accurate entropy estimates from small datasets.' Journal of neuroscience methods 173.1 (2008): 129.
Fidjeland, et al., "Accelerated Simulation of Spiking Neural Networks Using GPUs," WCCI 2010 IEEE World Congress on Computational Intelligence, Jul. 18-23, 2010, CCIB, Barcelona, Spain, pp. 536-543 [retrieved on Nov. 14, 2012]. Retrieved from the Internet: http://www.doc.ic.ac.uk/~mpsha/IJCNN10b.pdf.
Field, G.; Chichilnisky, E., Information Processing in the Primate Retina: Circuitry and Coding. Annual Review of Neuroscience, 2007, 30(1), 1-30.
Fiete, et al, Spike-Time-Dependent Plasticity and Heterosynaptic Competition Organize Networks to Produce Long Scale-Free Sequences of Neural Activity. Neuron 65, Feb. 25, 2010, pp. 563-576.
Floreano et al., 'Neuroevolution: from architectures to learning', Evol. Intel. Jan. 1, 2008:47-62 [retrieved Dec. 30, 2013]. Retrieved online from URL: http://infoscience.epfl.ch/record/112676/files/FloreanoDuerrMattiussi2008.pdf.
Florian, Biologically Inspired Neural Networks for the Control of Embodied Agents, Technical Report Coneural-03-03 Version 1.0 [online], Nov. 30, 2003 [retrieved on Nov. 24, 2014]. Retrieved from the Internet: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.216.4931&rep=rep1&type=pdf.
Foldiak, P. Learning invariance from transformation sequences. Neural Computation, 1991, 3(2), 194-200.
Froemke et al., Temporal modulation of spike-timing-dependent plasticity, Frontiers in Synaptic Neuroscience, vol. 2, Article 19, pp. 1-16 [online] Jun. 2010 [retrieved on Dec. 16, 2013]. Retrieved from the internet: frontiersin.org.
Gerstner et al. (1996) A neuronal learning rule for sub-millisecond temporal coding. Nature vol. 383 (6595) pp. 76-78.
Gewaltig et al., 'NEST (Neural Simulation Tool)', Scholarpedia, 2007, pp. 1-15, 2(4): 1430, doi: 10.4249/scholarpedia.1430.
Gleeson et al., NeuroML: A Language for Describing Data Driven Models of Neurons and Networks with a High Degree of Biological Detail, PLoS Computational Biology, Jun. 2010, pp. 1-19 vol. 6 Issue 6.
Gluck, Stimulus Generalization and Representation in Adaptive Network Models of Category Learning [online], 1991 [retrieved Aug. 24, 2013]. Retrieved from the Internet: http://www.google.com/url?sa=t&rct=j&q=Gluck+%22STIMULUS+GENERALIZATION+AND+REPRESENTATION+IN+ADAPTIVE+NETWORK+MODELS+OF+CATEGORY+LEARNING%22+1991.
Gollisch et al., 'Rapid neural coding in the retina with relative spike latencies.' Science 319.5866 (2008): 1108-1111.
Goodman et al., Brian: a simulator for spiking neural networks in Python, Frontiers in Neuroinformatics, Nov. 2008, pp. 1-10, vol. 2, Article 5.
Gorchetchnikov et al., NineML: declarative, mathematically-explicit descriptions of spiking neuronal networks, Frontiers in Neuroinformatics, Conference Abstract: 4th INCF Congress of Neuroinformatics, doi: 10.3389/conf.fninf.2011.08.00098.
Graham, Lyle J., The Surf-Hippo Reference Manual, http://www.neurophys.biomedicale.univ-paris5.fr/graham/surf-hippo-files/Surf-Hippo%20Reference%20Manual.pdf, Mar. 2002, pp. 1-128.
Hopfield JJ (1995) Pattern recognition computation using action potential timing for stimulus representation. Nature 376: 33-36.
Izhikevich E.M. and Hoppensteadt F.C. (2009) Polychronous Wavefront Computations. International Journal of Bifurcation and Chaos, 19:1733-1739.
Izhikevich E.M. (2004) Which Model to Use for Cortical Spiking Neurons? IEEE Transactions on Neural Networks, 15:1063-1070.
Izhikevich E.M. (2006) Polychronization: Computation With Spikes. Neural Computation, 18:245-282.
Izhikevich et al., 'Relating STDP to BCM', Neural Computation (2003) 15, 1511-1523.
Izhikevich, 'Simple Model of Spiking Neurons', IEEE Transactions on Neural Networks, vol. 14, No. 6, Nov. 2003, pp. 1569-1572.
Izhikevich, E.M. (2007) Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting, The MIT Press, 2007.
Janowitz, M.K.; Van Rossum, M.C.W. Excitability changes that complement Hebbian learning. Network, Computation in Neural Systems, 2006, 17 (1), 31-41.
Karbowski et al., 'Multispikes and Synchronization in a Large Neural Network with Temporal Delays', Neural Computation 12. 1573-1606 (2000).
Khotanzad, 'Classification of invariant image representations using a neural network', IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 38, No. 6, Jun. 1990, pp. 1028-1038 [online], [retrieved on Dec. 10, 2013]. Retrieved from the Internet: http://www-ee.uta.edu/eeweb/IP/Courses/SPR/Reference/Khotanzad.pdf.
Knoblauch et al., Memory Capacities for Synaptic and Structural Plasticity, Neural Computation 2009, pp. 1-45.
Laurent, 'Issue 1 - nnql Refactor Nucleus into its own file - Neural Network Query Language' [retrieved on Nov. 12, 2013]. Retrieved from the Internet: URL: https://code.google.com/p/nnql/issues/detail?id=1.
Laurent, 'The Neural Network Query Language (NNQL) Reference' [retrieved on Nov. 12, 2013]. Retrieved from the Internet: URL: https://code.google.com/p/nnql/issues/detail?id=1.
Lazar et al., 'A video time encoding machine', In Proceedings of the 15th IEEE International Conference on Image Processing (ICIP '08), 2008, pp. 717-720.
Lazar et al., 'Consistent recovery of sensory stimuli encoded with MIMO neural circuits', Computational Intelligence and Neuroscience (2010): 2.
Lazar et al., 'Multichannel time encoding with integrate-and-fire neurons', Neurocomputing 65 (2005): 401-407.
Masquelier and Thorpe, Learning to recognize objects using waves of spikes and Spike Timing-Dependent Plasticity, Neural Networks (IJCNN), The 2010 International Joint Conference on, doi: 10.1109/IJCNN.2010.5596934 (2010), pp. 1-8.
Masquelier, Timothee, 'Relative spike time coding and STDP-based orientation selectivity in the early visual system in natural continuous and saccadic vision: a computational model', Journal of Computational Neuroscience 32.3 (2012): 425-441.
Meister, M., Multineuronal codes in retinal signaling. Proceedings of the National Academy of Sciences, 1996, 93, 609-614.
Meister, M.; Berry, M.J. The neural code of the retina, Neuron, 1999, 22, 435-450.
Nichols, A Reconfigurable Computing Architecture for Implementing Artificial Neural Networks on FPGA, Master's Thesis, The University of Guelph, 2003, pp. 1-235.
Oster M., Lichtsteiner P., Delbruck T., Liu S., A Spike-Based Saccadic Recognition System, ISCAS 2007, IEEE International Symposium on Circuits and Systems, 2007, pp. 3083-3086.
Paugam-Moisy et al., "Computing with spiking neuron networks", G. Rozenberg, T. Back, J. Kok (Eds.), Handbook of Natural Computing, Springer-Verlag (2010) [retrieved Dec. 30, 2013], [retrieved online from link.springer.com].
Pavlidis et al., 'Spiking neural network training using evolutionary algorithms', In: Proceedings 2005 IEEE International Joint Conference on Neural Networks, 2005 (IJCNN'05), vol. 4, pp. 2190-2194, Publication Date Jul. 31, 2005 [online] [retrieved on Dec. 10, 2013]. Retrieved from the Internet: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.5.4346&rep=rep1&type=pdf.
Rekeczky, et al., "Cellular Multiadaptive Analogic Architecture: A Computational Framework for UAV Applications." May 2004.
Revow M., Williams C. and Hinton, G.E., 1996. Using Generative Models for Handwritten Digit Recognition, IEEE Trans. on Pattern Analysis and Machine Intelligence, 18, No. 6, Jun. 1996.
Sanchez, Efficient Simulation Scheme for Spiking Neural Networks, Doctoral Thesis, Universita di Granada, Mar. 28, 2008, pp. 1-104.
Sato et al., 'Pulse interval and width modulation for video transmission.' Cable Television, IEEE Transactions on 4 (1978): 165-173.
Schemmel, J., et al., Implementing synaptic plasticity in a VLSI spiking neural network model. In: Proceedings of the 2006 International Joint Conference on Neural Networks (IJCNN'06), IEEE Press (2006), Jul. 16-21, 2006, pp. 1-6 [online], [retrieved on Aug. 24, 2012]. Retrieved from the Internet: http://www.kip.uni-heidelberg.de/Veroeffentlichungen/download.cgi/4620/ps/1774.pdf, Introduction.
Schnitzer, M.J.; Meister, M.; Multineuronal Firing Patterns in the Signal from Eye to Brain. Neuron, 2003, 37, 499-511.
Serrano-Gotarredona, et al, "On Real-Time: AER 2-D Convolutions Hardware for Neuromorphic Spike-based Cortical Processing", Jul. 2008.
Simulink® model [online], [retrieved on Dec. 10, 2013]. Retrieved from http://www.mathworks.com/products/simulink/index.html.
Sinyavskiy et al., 'Reinforcement learning of a spiking neural network in the task of control of an agent in a virtual discrete environment', Rus. J. Nonlin. Dyn., 2011, vol. 7, No. 4 (Mobile Robots), pp. 859-875, chapters 1-8 (Russian article with English abstract).
Sjostrom et al., 'Spike-Timing Dependent Plasticity' Scholarpedia, 5(2):1362 (2010), pp. 1-18.
Szatmary et al "Spike-timing Theory of Working Memory" PLoS Computational Biology, vol. 6, Issue 8, Aug. 19, 2010 [retrieved on Dec. 30, 2013]. Retrieved from the Internet: URL: http://www.ploscompbioLorg/article/info%3Adoi% 2F10.1371%2Fjournal,pcbi.1000879&It;url:&gt;&It;/url:&gt;.
Szatmary et al "Spike-timing Theory of Working Memory" PLoS Computational Biology, vol. 6, Issue 8, Aug. 19, 2010 [retrieved on Dec. 30, 2013]. Retrieved from the Internet: URL: http://www.ploscompbioLorg/article/info%3Adoi% 2F10.1371%2Fjournal,pcbi.1000879&It;url:>&It;/url:>.
Thomas, S., et al., 2004, Realistic Modeling of Simple and Complex Cell Tuning in the HMAX Model, and Implications for Invariant Object Recognition in Cortex, AI Memo 2004-017 Jul. 2004.
Thorpe, S.; Ultra-Rapid Scene Categorization with a Wave of Spikes. In H.H. Bulthoff et al. (eds.), Biologically Motivated Computer Vision, Lecture Notes in Computer Science, 2002, 2525, pp. 1-15, Springer-Verlag, Berlin.
Thorpe, S.J., Delorme, A. & VanRullen, R. (2001). Spike-based strategies for rapid processing. Neural Networks 14, pp. 715-725.
Thorpe, S.J., Guyonneau, R., Guilbaud, N., Allegraud, J.-M. & VanRullen, R. (2004). SpikeNet: real-time visual processing with one spike per neuron. Neurocomputing, 58-60, pp. 857-864.
Van Rullen, R.; Thorpe, S., Rate Coding versus temporal order coding: What the retinal ganglion cells tell the visual cortex. Neural Computation, 2001, 13, 1255-1283.
Van Rullen, R. & Koch, C. (2003). Is perception discrete or continuous? Trends in Cognitive Sciences 7(5), pp. 207-213.
Van Rullen, R., Guyonneau, R. & Thorpe, S.J. (2005). Spike times make sense. Trends in Neurosciences 28(1).
Wallis, G.; Rolls, E. T. A model of invariant object recognition in the visual system, Progress in Neurobiology. 1997, 51, 167-194.
Wang, 'The time dimension for scene analysis', Neural Networks, IEEE Transactions on 16.6 (2005): 1401-1426.
Wiskott, L.; Sejnowski, T.J. Slow feature analysis: Unsupervised learning of invariances, Neural Computation, 2002, 14, (4), 715-770.
Wysoski et al., "Fast and Adaptive Network of Spiking Neurons for Multi-view Visual Pattern Recognition", May 3, 2008, Elsevier, Neurocomputing vol. 71, pp. 2563-2575.
Zarandy et al. "Bi-i: A Standalone Ultra High Speed Cellular Vision System." In: [online]. Dated Jun. 13, 2005 (Jun. 13, 2005). Retrieved on Aug. 16, 2012 (Aug. 16, 2012). Retrieved from the Internet at URL: http://ieeexplore.ieee.orgixplilogin.jsp?tp=tarnumber=14387388turl=http%3A%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%Farnumber%3D1438738.

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190178631A1 (en) * 2014-05-22 2019-06-13 Brain Corporation Apparatus and methods for distance estimation using multiple image sensors
US10989521B2 (en) * 2014-05-22 2021-04-27 Brain Corporation Apparatus and methods for distance estimation using multiple image sensors
US10986325B2 (en) * 2018-09-12 2021-04-20 Nvidia Corporation Scene flow estimation using shared features
US11514298B2 (en) * 2018-10-31 2022-11-29 International Business Machines Corporation High-frame-rate real-time multiscale spatiotemporal disparity on distributed low-power event-based neuromorphic hardware
CN112203095A (en) * 2020-12-04 2021-01-08 腾讯科技(深圳)有限公司 Video motion estimation method, device, equipment and computer readable storage medium

Also Published As

Publication number Publication date
US10820009B2 (en) 2020-10-27
US20160014426A1 (en) 2016-01-14
US20190007695A1 (en) 2019-01-03

Similar Documents

Publication Publication Date Title
US10820009B2 (en) Apparatus and methods for distance estimation using stereo imagery
US10989521B2 (en) Apparatus and methods for distance estimation using multiple image sensors
US9713982B2 (en) Apparatus and methods for robotic operation using video imagery
US10728570B2 (en) Apparatus and methods for real time estimation of differential motion in live video
US10657409B2 (en) Methods and apparatus for tracking objects using saliency
CN109791697B (en) Predicting depth from image data using statistical models
US11430263B2 (en) Machine vision with dimensional data reduction
CN107492121B (en) Two-dimensional human body bone point positioning method of monocular depth video
US20180278820A1 (en) Optical detection apparatus and methods
CN113286194A (en) Video processing method and device, electronic equipment and readable storage medium
EP2693300A2 (en) Device and method for recognizing gesture based on direction of gesture
CN115427982A (en) Methods, systems, and media for identifying human behavior in digital video using convolutional neural networks
KR20140074201A (en) Tracking device
CN114641800A (en) Method and system for forecasting crowd dynamics
US20220210388A1 (en) Disparity map building using guide node
KR20220147828A (en) Apparatus and Method for Generating Depth Map using Monocular Image
JP2023131014A (en) Action prediction device, method, and program
Olaya et al. A foveated stereo vision system for active depth perception
CN116433726A (en) End-to-end vehicle track prediction method based on machine vision

Legal Events

Date Code Title Description
AS Assignment

Owner name: BRAIN CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:RICHERT, MICAH;REEL/FRAME:037276/0702

Effective date: 20151211

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: HERCULES CAPITAL, INC., CALIFORNIA

Free format text: SECURITY INTEREST;ASSIGNOR:BRAIN CORPORATION;REEL/FRAME:057851/0574

Effective date: 20211004

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

FEPP Fee payment procedure

Free format text: SURCHARGE FOR LATE PAYMENT, SMALL ENTITY (ORIGINAL EVENT CODE: M2554); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

Year of fee payment: 4