WO2018136144A1 - Cognitive signal processor for simultaneous denoising and blind source separation - Google Patents

Cognitive signal processor for simultaneous denoising and blind source separation Download PDF

Info

Publication number
WO2018136144A1
WO2018136144A1 PCT/US2017/062561 US2017062561W WO2018136144A1 WO 2018136144 A1 WO2018136144 A1 WO 2018136144A1 US 2017062561 W US2017062561 W US 2017062561W WO 2018136144 A1 WO2018136144 A1 WO 2018136144A1
Authority
WO
WIPO (PCT)
Prior art keywords
signal
reservoir
filter
denoised
state
Prior art date
Application number
PCT/US2017/062561
Other languages
French (fr)
Inventor
Peter Petre
Bryan H. FONG
Shankar R. RAO
Charles E. Martin
Original Assignee
Hrl Laboratories, Llc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US15/452,412 external-priority patent/US10153806B1/en
Priority claimed from US15/452,155 external-priority patent/US10484043B1/en
Application filed by Hrl Laboratories, Llc filed Critical Hrl Laboratories, Llc
Priority to EP17892664.8A priority Critical patent/EP3571514A4/en
Priority to CN201780078246.2A priority patent/CN110088635B/en
Publication of WO2018136144A1 publication Critical patent/WO2018136144A1/en

Links

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/414Discriminating targets with respect to background clutter
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/417Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section involving the use of neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G06N3/065Analogue means
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03HIMPEDANCE NETWORKS, e.g. RESONANT CIRCUITS; RESONATORS
    • H03H21/00Adaptive networks
    • H03H21/0012Digital adaptive filters
    • H03H21/0025Particular filtering methods
    • H03H2021/0034Blind source separation

Definitions

  • the present invention relates to blind source separators and, more specifically, to a signal processor that denoises a mixed input signal and performs blind source separation on the input signal to extract and separate the signals.
  • Blind signal separation, also known as blind source separation, is the separation of a set of source signals from a set of mixed signals (mixture signals), without the aid of information (or with very little information) about the source signals or the mixing process.
  • In the Blind Source Separation (BSS) problem, a single antenna measures multiple source signals. There may be more than one antenna measuring the signals, but in general each antenna "sees" all of the source signals and creates a different linear mixture of them. The task is then to use the measured mixture signals in order to recover the original source signals.
  • the case of a single antenna operating in isolation is especially challenging because there is no sense of spatial resolution to aid in the extraction process.
  • Filter-based methods use filtering to smooth out noise from a signal, but are too simplistic to simultaneously maintain the low-frequency long-term trends of a signal while adapting to the high-frequency abrupt transitions.
  • Training-based methods rely on a "dictionary" that models the signals of interest. Such a dictionary must be trained in an offline process, and requires training data that may not be available.
  • the dictionary often requires a large amount of memory and computation to be stored and leveraged on the platform, making such approaches infeasible for ultra-low size-, weight-, and power (SWaP) systems.
  • SWaP size-, weight-, and power
  • the cognitive signal processor comprises one or more processors and a memory.
  • the memory is, for example, a non-transitory computer-readable medium having executable instructions encoded thereon, such that upon execution of the instructions, the one or more processors perform several operations. In other aspects, the one or more processors are hardwired or otherwise configured to perform the operations herein.
  • the cognitive signal processor receives a mixture signal that comprises a plurality of source signals.
  • a denoised reservoir state signal is generated by mapping the mixture signal to a dynamic reservoir to perform signal denoising.
  • at least one separated source signal is identified by adaptively filtering the denoised reservoir state signal.
  • filtering the denoised reservoir state signal is performed with a bank of filters.
  • the system performs an operation of controlling the bank of filters to cause each filter within the bank of filters to filter a unique waveform.
  • each filter has an adaptable center frequency.
  • in another aspect, adaptively filtering the denoised reservoir state signal further comprises operations of: detecting that a particular frequency band possesses a pulse; switching a first filter to a tracking state with a center frequency equal to a resonant frequency of a reservoir state corresponding to the particular frequency band; and setting the center frequency of the first filter as a protected region to prevent other filters within a bank of filters from sharing the center frequency.
  • adaptively filtering the denoised reservoir state signal further comprises operations of: switching the first filter to a holding state if the first filter loses the pulse of the particular frequency band; maintaining the first filter in the holding state for a fixed period of time while maintaining the protected region; and if during the fixed period of time the pulse returns, switching the first filter to the tracking state, otherwise switching the first filter to an inactive state and removing the protected region.
  • generating the denoised reservoir state signal further comprises generating a predicted input signal a small time-step ahead of the mixture signal, wherein an error between the predicted input signal and the mixture signal is used to update output weights of the dynamic reservoir.
  • generating the denoised reservoir state signal is performed with a dynamic reservoir implemented in analog hardware by satisfying a set of ordinary differential equations.
  • generating the denoised reservoir state signal is performed with a dynamic reservoir implemented in software or digital hardware by converting a set of ordinary differential equations to delay difference equations.
  • the present invention also includes a computer program product and a computer implemented method.
  • the computer program product includes computer-readable instructions stored on a non-transitory computer-readable medium that are executable by a computer having one or more processors, such that upon execution of the instructions, the one or more processors perform the operations listed herein.
  • the computer implemented method includes an act of causing a computer to execute such instructions and perform the resulting operations.
  • FIG. 1A is a block diagram depicting the components of a system according to various embodiments of the present invention.
  • FIG. 1B is a block diagram depicting a system according to various embodiments of the present invention.
  • FIG. 2 is an illustration of a computer program product embodying an aspect of the present invention;
  • FIG. 3 is an illustration depicting system architecture for a cognitive signal processor according to various embodiments of the present invention.
  • FIG. 4 is an illustration depicting reservoir computer mapping of an input signal vector to a high-dimensional state-space that models underlying time-varying dynamics of the signal generation process;
  • FIG. 7 is an illustration depicting a continuous time architecture for the dynamic reservoir and adaptive signal prediction modules;
  • FIG. 8 is an illustration depicting an architecture of blind source separation (BSS) filters according to various embodiments of the present invention that make use of reservoir states;
  • BSS blind source separation
  • FIG. 9 is a graph depicting an optimal transfer function for an initial state of an adaptive filter;
  • FIG. 10 is a diagram of the filter controller according to various embodiments of the present invention;
  • FIG. 11A is a chart illustrating an approximation of the input signal u(t) using uniform sampling; and
  • FIG. 11B is a chart illustrating an approximation of the input signal u(t) using a linear basis function.
  • the present invention relates to blind source separators and, more specifically, to a signal processor that denoises a mixed input signal and performs blind source separation on the input signal to extract and separate the signals.
  • Various embodiments of the invention include three "principal" aspects.
  • the first is a system for signal processing (i.e., signal processor).
  • the system is typically in the form of a computer system operating software or in the form of a "hard-coded" instruction set. This system may be incorporated into a wide variety of devices that provide different functionalities.
  • the second principal aspect is a method, typically in the form of software, operated using a data processing system (computer).
  • the third principal aspect is a computer program product.
  • the computer program product generally represents computer-readable instructions stored on a non-transitory computer-readable medium such as an optical storage device, e.g., a compact disc (CD) or digital versatile disc (DVD), or a magnetic storage device such as a floppy disk or magnetic tape.
  • FIG. 1A A block diagram depicting an example of a system (i.e., computer system 100) of the present invention is provided in FIG. 1A.
  • FIG. 1A The computer system 100 is configured to perform calculations, processes, operations, and/or functions associated with a program or algorithm.
  • certain processes and steps discussed herein are realized as a series of instructions (e.g., software program) that reside within computer readable memory units and are executed by one or more processors of the computer system 100. When executed, the instructions cause the computer system 100 to perform specific actions and exhibit specific behavior, such as described herein.
  • the system includes one or more processors configured to perform the various operations described herein. In one aspect, the system includes one or more processors and memory, the memory being a non-transitory computer-readable medium having executable instructions encoded thereon, such that upon execution of the instructions, the one or more processors perform the operations.
  • the processors are hardwired to perform the operations.
  • the system 100 may be configured to perform operations by executing instructions from a computer memory or by being hardwired to perform certain tasks.
  • the computer system 100 may include an address/data bus 102 that is configured to communicate information.
  • the processor 104 is configured to process information and instructions.
  • the processor 104 may include one or more of a microprocessor, a parallel processor, an application-specific integrated circuit (ASIC), a digital ASIC, a programmable logic array (PLA), a complex programmable logic device (CPLD), and a field programmable gate array (FPGA).
  • the computer system 100 is configured to utilize one or more data storage units.
  • the computer system 100 may include a volatile memory unit 106 (e.g., random access memory (“RAM”), static RAM, dynamic RAM, etc.) coupled with the address/data bus 102, wherein the volatile memory unit 106 is configured to store information and instructions for the processor 104.
  • the computer system 100 further may include a non-volatile memory unit 108 (e.g., read-only memory (“ROM”), programmable ROM (“PROM”), erasable programmable ROM (“EPROM”), electrically erasable programmable ROM (“EEPROM”), flash memory, etc.) coupled with the address/data bus 102, wherein the non-volatile memory unit 108 is configured to store static information and instructions for the processor 104.
  • ROM read-only memory
  • EPROM erasable programmable ROM
  • flash memory
  • the computer system 100 may execute instructions retrieved from an online data storage unit such as in “Cloud” computing.
  • the computer system 100 also may include one or more interfaces, such as an interface 110, coupled with the address/data bus 102.
  • the one or more interfaces are configured to enable the computer system 100 to interface with other electronic devices and computer systems.
  • the communication interfaces implemented by the one or more interfaces may include wireline (e.g., serial cables, modems, network adaptors, etc.) and/or wireless (e.g., wireless modems, wireless network adaptors, etc.) communication technology.
  • the computer system 100 may include an input device 112 coupled with the address/data bus 102, wherein the input device 112 is configured to communicate information and command selections to the processor 100.
  • the input device 112 is an alphanumeric input device, such as a keyboard, that may include alphanumeric and/or function keys.
  • the input device 112 may be an input device other than an alphanumeric input device.
  • the computer system 100 may include a cursor control device 114 coupled with the address/data bus 102, wherein the cursor control device 114 is configured to communicate user input information and/or command selections to the processor 100.
  • the cursor control device 114 is implemented using a device such as a mouse, a track-ball, a track-pad, an optical tracking device, or a touch screen.
  • a device such as a mouse, a track-ball, a track-pad, an optical tracking device, or a touch screen.
  • the cursor control device 114 is directed and/or activated via input from the input device 112, such as in response to the use of special keys and key sequence commands associated with the input device 112.
  • the cursor control device 114 is configured to be directed or guided by voice commands.
  • the computer system 100 further may include one or more optional computer usable data storage devices, such as a storage device 116, coupled with the address/data bus 102.
  • a storage device 116 coupled with the address/data bus 102.
  • the storage device 116 is configured to store information and/or computer executable instructions.
  • the storage device 116 is a storage device such as a magnetic or optical disk drive (e.g., hard disk drive (“HDD”), floppy diskette, compact disk read only memory (“CD-ROM”), digital versatile disk (“DVD”)).
  • a display device 118 is coupled with the address/data bus 102 or any other suitable location or component of the system 100, wherein the display device 118 is configured to display video and/or graphics.
  • the display device 118 may include a cathode ray tube (“CRT”), liquid crystal display (“LCD”), field emission display (“FED”), plasma display, light emitting diode (“LED”), or any other display device suitable for displaying video and/or graphic images and alphanumeric characters recognizable to a user.
  • CRT cathode ray tube
  • LCD liquid crystal display
  • FED field emission display
  • LED light emitting diode
  • the computer system 100 presented herein is an example computing environment in accordance with an aspect.
  • the non-limiting example of the computer system 100 is not strictly limited to being a computer system.
  • an aspect provides that the computer system 100 represents a type of data processing analysis that may be used in accordance with various aspects described herein.
  • other computing systems may also be implemented.
  • one or more operations of various aspects of the present technology are controlled or implemented using computer-executable instructions, such as program modules, being executed by a computer.
  • program modules include routines, programs, objects, components and/or data structures that are configured to perform particular tasks or implement particular abstract data types.
  • an aspect provides that one or more aspects of the present technology are implemented by utilizing one or more distributed computing environments, such as where tasks are performed by remote processing devices that are linked through a communications network, or such as where various program modules are located in both local and remote computer-storage media including memory-storage devices.
  • FIG. 2 An illustrative diagram of a computer program product (i.e., storage device) embodying the present invention is depicted in FIG. 2.
  • the computer program product is depicted as floppy disk 200 or an optical disk 202 such as a CD or DVD.
  • the computer program product generally represents computer-readable instructions stored on any compatible non-transitory computer-readable medium.
  • the term "instructions” as used with respect to this invention generally indicates a set of operations to be performed on a computer, and may represent pieces of a whole program or individual, separable, software modules.
  • Non-limiting examples of "instructions" include computer program code (source or object code) and "hard-coded" electronics (i.e., computer operations coded into a computer chip).
  • the "instruction" is stored on any non-transitory computer-readable medium, such as in the memory of a computer or on a floppy disk, a CD-ROM, and a flash drive. In either event, the instructions are encoded on a non-transitory computer-readable medium.
  • This disclosure provides a system for signal processing (or otherwise referred to as a cognitive signal processor (CSP)).
  • CSP cognitive signal processor
  • a cognitive signal processor that takes an input signal containing a mixture of pulse waveforms over a very large (e.g., >30 GHz) bandwidth and simultaneously denoises the input signal and performs blind source separation (BSS) on the input signal.
  • BSS blind source separation
  • the system may use a bank of Finite Impulse Response (FIR) filters constructed by applying tunable mixing weights to the state output of a dynamic reservoir.
  • FIR Finite Impulse Response
  • the mixing weights (and consequently frequency response) for each filter may be adapted using a novel gradient descent procedure that is described in further detail below.
  • the filters may be activated and initialized by using the reservoir state output to determine when a source signal is present near a given frequency band.
  • the adaptation scheme employed by the system may incorporate an explicit mechanism that limits how close different filters can approach one another in the frequency domain, which is used to ensure that each filter extracts a unique signal.
  • the system may incorporate a dynamic phenomenon that is analogous to momentum and allows filters to track signals through repeated collisions with other signals in the time-frequency domain.
  • the CSP can detect and track many pulse waveforms over an ultra-wide bandwidth of over 30 GHz employing very narrow bandwidth filters with high frequency resolution, and yet still exhibit very low signal detection latencies on the order of 0.1 nanoseconds.
  • the system is capable of denoising signals in real-time using a constraint that covers a wide range of electromagnetic and acoustic signals of interest.
  • Many other current approaches use powerful, but computationally expensive constraints, such as signal complexity measures, or rely on loose constraints, such as filter banks, which may be less computationally expensive but have limited capacity to capture the structure of real-world source signals. In contrast, the system improves upon the prior art by utilizing the constraint that the waveforms of interest in a source signal can be linearly predicted over a short interval of time, which can be computed quickly with limited computational cost.
  • the reservoir states each correspond to the amount of input signal energy near a particular frequency. This allows the CSP to generate a real-time spectrogram of a complex input signal that can be implemented efficiently in hardware.
  • the CSP can simultaneously extract large numbers of source signals.
  • the CSP does not require a multi-antenna array or a large bank of fixed predefined filters, which is needed by many other methods for BSS.
  • the system is able to track source signals continuously, even if the signal is momentarily lost, as well as reducing the incidence of false alarms. Furthermore, each filter extracts a unique source signal, thus avoiding the extraction of confounding and unnecessary duplicates.
  • the filters are able to track multiple signals through repeated collisions in the time-frequency domain. This is a scenario that very few state-of-the-art blind source separation methods can handle.
  • the system described herein has several applications.
  • the system can be used with Electronic Support Measures (ESM) receivers developed by Argon ST and with other systems on airborne platforms.
  • ESM Electronic Support Measures
  • the system is also applicable to vehicle (e.g., UAV, plane, car, boat, robot) or man-portable applications, such as rapid detection and separation of significant objects (e.g., obstacles, terrain, other vehicles, persons, animals) from clutter from radar antenna signals.
  • vehicle e.g., UAV, plane, car, boat, robot
  • man-portable applications such as rapid detection and separation of significant objects (e.g., obstacles, terrain, other vehicles, persons, animals) from clutter from radar antenna signals.
  • significant objects e.g. , obstacles, terrain, other vehicles, persons, animals
  • cars or other vehicles may use radars to detect and avoid obstacles. Due to clutter, such as trees, other cars, and walls, the radar returns for obstacles may be weak relative to other returns within the spectrum and also obscured by them.
  • the system 100 described herein simultaneously de-noises radio frequency (RF) signals, such as those collected by radar receivers 120 (e.g., antennas, sensors, etc.), and boosts detection of weak returns that could correspond to significant objects, while also separating out all narrowband (tone-like) pulses corresponding to different objects in the scene.
  • RF radio frequency
  • the system 100 can cause a vehicle 122 to act (by being connected to and interfacing with an appropriate vehicle control system).
  • the system 100 may generate commands and control operations of vehicle systems that can be adjusted, such as vehicle suspension or safety systems such as airbags and seatbelts, etc.
  • this disclosure provides a system for signal processing (or “cognitive” signal processor (CSP)) that denoises a mixed input signal and performs blind source separation on the input signal to extract and separate the signals.
  • the architecture for the CSP is shown in FIG. 3.
  • the first component is a dynamic reservoir computer (RC) 300, which is the
  • the reservoir computer 300 accepts the mixture signal 302 as input and maps it to a high-dimensional dynamical system known as the dynamic reservoir.
  • the RC 300 has a predefined number of outputs, which are generated by continually mapping the reservoir states through a set of distinct linear functions with one such function defined per output.
  • the CSP uses a "dynamic" reservoir computer because the reservoir state signals are continuously passed through a delay embedding, which creates a finite temporal record of the values of the reservoir states (i.e., reservoir state history 304).
  • the reservoir state history 304 of the dynamic reservoir in FIG. 3 corresponds to element 602 as depicted in FIG. 6. That is, given a reservoir state x(t), a history length K+1, and a delay width τ, the reservoir state history consists of the delayed copies x(t), x(t − τ), ..., x(t − Kτ).
  • delay embedding, when combined with the optimized reservoir design, enables the CSP to perform signal prediction and denoising on both the original input signal and the individual reservoir states.
  • the second component is an adaptive signal prediction module 306 that uses gradient descent-based short-time prediction to adapt the output (i.e., reservoir state history 304) of the reservoir to produce a prediction of the input signal 302 a small time-step in the future. Since the noise in the input signal 302 is inherently random and unpredictable, the predicted input signal will be free of noise. The error between the predicted input signal and the actual input signal is used by the adaptive signal prediction module to further tune the output weights of the reservoir in an iterative process to generate denoised reservoir states 308.
  • the third component of the CSP is a bank of adaptable Blind Source Separation (BSS) filters 310 that separate out and track pulses from the input signal 302 mixture.
  • This component is a key aspect of the system. Unlike previous BSS systems that use a bank of adaptable filters, the system described herein implements the filters as linear combinations of the reservoir states. This is much more efficient to implement in hardware than implementing standalone FIR or IIR filters.
  • Each of the BSS filters in the bank of filters 310 is activated by a filter controller that measures the reservoir state energy to detect the presence of a signal in a particular band.
  • the BSS filters 310 also include mechanisms for modifying center frequencies in order to track pulses.
  • Each BSS filter 310 includes a filter adapter that updates the center frequencies of the particular filter based on the error function and the filter states.
  • the frequency update is depicted as element 804 in FIG. 8.
  • the fourth component is a control/source selector 312 that ensures that each BSS filter 310 tracks a unique pulse.
  • the control/source selector 312 accepts the filter 310 outputs as input and is responsible for controlling which filters are actively tracking pulse waveforms, resulting in original denoised pulse waveforms (i.e., detected signals 314). A skeleton of this four-component flow is sketched below.
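The four components above form a feed-forward processing chain. The skeleton below is an illustrative sketch only: the class and the method names (run, denoise, filter, select) are hypothetical placeholders standing in for the dynamic reservoir computer 300, adaptive signal prediction module 306, BSS filter bank 310, and control/source selector 312; the patent does not prescribe this interface.

```python
class CognitiveSignalProcessor:
    """Skeleton of the four-component CSP architecture (FIG. 3).
    Each injected component is assumed to expose the single method used
    below; these names are illustrative, not from the patent."""

    def __init__(self, reservoir, predictor, bss_filters, source_selector):
        self.reservoir = reservoir              # dynamic reservoir computer (300)
        self.predictor = predictor              # adaptive signal prediction module (306)
        self.bss_filters = bss_filters          # bank of adaptable BSS filters (310)
        self.source_selector = source_selector  # control/source selector (312)

    def process(self, mixture):
        state_history = self.reservoir.run(mixture)                       # reservoir state history (304)
        denoised_states = self.predictor.denoise(state_history, mixture)  # denoised reservoir states (308)
        filter_outputs = self.bss_filters.filter(denoised_states)         # candidate separated pulses
        return self.source_selector.select(filter_outputs)                # detected signals (314)
```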
  • the CSP is based on a form of neuromorphic (brain-inspired) signal processing.
  • a reservoir computer 300 is a special form of a recurrent neural network (a neural network with feedback connections) that operates by projecting the input signal vector 302 into a high-dimensional reservoir 400 state space which contains an equivalent dynamical model of the signal generation process capturing all of the available and actionable information about the input 302.
  • a reservoir 400 has readout layers that can be trained, either off-line or on-line, to learn desired outputs by utilizing the state functions.
  • an RC 300 has the power of recurrent neural networks to model non-stationary (time-varying) processes and phenomena, but with simple readout layers and training algorithms that are both accurate and efficient.
  • the reservoir states can be mapped to useful outputs 304, including denoised inputs, signal classes, separated signals, and anomalies using trainable linear readout layers 402.
  • y(s) = C^T x(s) + Du(s)
  • x(s), u(s), and y(s) are the state-space representations of the reservoir state, input signal, and output, respectively.
  • a state-space filter implements a time-domain filtering algorithm, and as seen in FIG. 5, the different components of the reservoir 400 state-space representation have a direct correspondence with different parameters in the reservoir computer 300.
  • the reservoir connectivity weight matrix (A) 500 determines the filter pole locations.
  • the reservoir computer 300 can implement an adaptable (nonlinear) state-space filter.
  • connectivity matrix (A) and the input-to-reservoir mapping matrix (B) are typically chosen randomly.
  • the entries of A and B can be independent, identically distributed samples from a zero-mean, unit variance Gaussian distribution.
  • Such random reservoir weight matrices have been successfully used in many previous applications, such as pattern recognition.
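As a concrete illustration of the state-space view above, the sketch below simulates a small reservoir dx/dt = Ax + Bu with randomly drawn A and B (i.i.d. zero-mean, unit-variance Gaussian entries) and a linear readout y = C^T x + Du. The stabilizing rescale of A, the forward-Euler integrator, and all numerical values are editorial assumptions chosen so the sketch runs stably; they are not taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50          # number of reservoir nodes (illustrative)
dt = 1e-3       # integration step (illustrative)

# Random reservoir weights: i.i.d. zero-mean, unit-variance Gaussian entries.
A = rng.standard_normal((N, N))
A = A / np.max(np.abs(np.linalg.eigvals(A))) - np.eye(N)  # rescale/shift for stability (assumption)
B = rng.standard_normal(N)
C = rng.standard_normal(N)   # readout weights; learned in the CSP, fixed here
D = 0.0

t = np.arange(0.0, 1.0, dt)
u = np.sin(2 * np.pi * 5 * t) + 0.1 * rng.standard_normal(t.size)  # noisy test input

x = np.zeros(N)
y = np.empty(t.size)
for k, uk in enumerate(u):
    x = x + dt * (A @ x + B * uk)   # forward-Euler step of dx/dt = Ax + Bu
    y[k] = C @ x + D * uk           # readout y = C^T x + D u
```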
  • the individual values of the reservoir states are meaningless in isolation, and can only be used for an application when combined together via learned mixing weights.
  • the reservoir state update requires computation proportional to the square of the number of nodes, which becomes infeasible as the number of reservoir nodes increases. Described below is a method for optimizing the reservoir weight matrix (A) for the tasks of signal denoising and blind source separation.
  • each reservoir state in our optimized reservoir measures the amount of signal energy near a particular resonant frequency, which can be used as a cue for the blind source separation subsystem.
  • the BSS subsystem can use the designed reservoir states as a basis with which to construct a bank of adaptable FIR filters to track individual narrowband pulses within the input signal mixture.
  • the computation of the designed reservoir states scales linearly with the number of nodes, thus enabling efficient implementation in low-power hardware.
  • the matrix A must be real; additionally, when describing a passive IIR filter, the matrix A has eigenvalues (poles of the filter) that are either purely real and negative, corresponding to purely damped modes, or eigenvalues that come in complex conjugate pairs with negative real parts.
  • This observation allows the matrix A to be put into a purely real block-diagonal form with a real block-diagonalizing similarity transform.
  • the block-diagonalized matrix SAS⁻¹ is block diagonal with one 2×2 block per complex conjugate pole pair. Here n is the number of complex conjugate pole pairs, with N = 2n; purely damped poles can be included as well by introducing purely diagonal (real) eigenvalues into the canonical form (for some applications, system matrices A with only complex conjugate pair poles are used).
  • N = 2n
  • A block diagonal
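The block-diagonal structure can be built directly from a chosen set of resonant frequencies and a damping value. The sketch below uses the standard real canonical 2×2 block [[σ, ω], [−ω, σ]] for a conjugate pole pair σ ± jω; the patent's exact block layout, damping values, and frequency units are not reproduced, so treat this as an assumed construction.

```python
import numpy as np
from scipy.linalg import block_diag

def block_diagonal_A(resonant_freqs, damping=-0.01):
    """Real block-diagonal state matrix with one 2x2 block per complex
    conjugate pole pair sigma +/- j*omega.  The [[sigma, omega],
    [-omega, sigma]] layout is the standard real canonical form; the
    damping value and frequency units here are illustrative."""
    blocks = []
    for omega in resonant_freqs:
        blocks.append(np.array([[damping, omega],
                                [-omega, damping]]))
    return block_diag(*blocks)        # N = 2n states for n pole pairs

# Example: 20 lightly damped channels between 1.05 and 2.0 (cf. FIG. 9).
A = block_diagonal_A(np.linspace(1.05, 2.0, 20))
```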
  • the denoised signal can be reconstructed using the response of the state system to delayed copies of the input signal u(t).
  • all delays on the input signal u(t) can be converted to delays on the state space vector x(t).
  • for N_d delays on the input signal with basic delay τ, the Laplace-domain response is a corresponding sum of delayed state-space responses.
  • the response to a time-harmonic input signal in a 2×2 sub-block can be computed analytically, with the asymptotic response x_+(ω) to an input signal with angular frequency ω available in closed form.
  • the maximum response value can be determined by differentiating this expression with respect to ω and solving for the input signal frequency giving zero derivative. Assuming that the damping is small, i.e., λ_r is small, to lowest order the maximum response is at the resonant frequency ω = ω_i.
  • phase delay embedding is a technique developed in dynamical systems theory to model the dynamics of a chaotic system from its observation u₀(t) using delayed versions of the observation as a new input vector u(t).
  • an unknown (potentially chaotic) dynamical system embedded in an N-dimensional state space has an m-dimensional attractor.
  • a dynamic reservoir 400 is constructed by applying the delay-embedding 600 to each of the reservoir states to provide a time history 602 of reservoir dynamics.
  • delay-embedded states, when combined with the designed reservoir states, enable each state to be predicted and denoised separately, which can be used to generate a denoised spectrogram of the input signal.
  • everything to the left of the time history 602 is a diagrammatic instantiation of the differential equation below it: ẋ(t) = Ax(t) + Bu₀(t). The triangles 604 indicate multiplication by a scalar, vector, or matrix constant.
  • the plus sign 606 indicates summation of two or more signals, and the integration sign 608 indicates a running integral.
  • the input signal u₀(t) is mapped into the reservoir by the vector B, and the change in reservoir state ẋ(t) is determined by combining Bu₀(t) with the current reservoir state x(t) scaled by the state transition matrix A.
  • the integral 608 indicates that the reservoir state x(t) is obtained by the running integral of the change in reservoir state ẋ(t).
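A minimal sketch of the delay embedding applied to a sampled reservoir state is shown below: each row collects the current value together with K delayed copies, which is the "finite temporal record" described above. Using sample-index lags in place of the continuous delay τ, and zero-padding the start of the record, are editorial simplifications.

```python
import numpy as np

def delay_embed(x, K, lag):
    """x: shape-(T,) samples of one reservoir state.
    Returns a (T, K+1) array whose row t is (x[t], x[t-lag], ..., x[t-K*lag]),
    zero-padded where the history does not yet exist."""
    T = x.shape[0]
    history = np.zeros((T, K + 1))
    for k in range(K + 1):
        shift = k * lag
        history[shift:, k] = x[:T - shift]
    return history

# Example: a 10-tap delay embedding of a sampled reservoir state.
x1 = np.sin(np.linspace(0.0, 20.0, 500))
X1 = delay_embed(x1, K=10, lag=3)   # shape (500, 11)
```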
  • This section describes the adaptive signal prediction module that uses the dynamic reservoir states in order to perform signal denoising.
  • 1) delay-embedded observations can effectively model dynamical system behavior, and 2) reservoirs with delay-embedded states can be designed to have the same behavior as reservoirs with delay-embedded inputs.
  • the system described herein leverages the time history of these reservoir state variables to perform short-term predictions of the observations.
  • the system uses a dynamic reservoir computer to learn the prediction function F that maps the current and delayed observations [u₀(t), u₀(t − τ), ..., u₀(t − Kτ)] to the predicted value u₀(t + τ).
  • FIG. 7 depicts a continuous time architecture of the adaptive signal prediction module 306.
  • the model shows the dynamic reservoir 400 with fixed connections and adaptable output layers attached to it.
  • a wideband ADC frontend provides input to the dynamic reservoir 400, whose output layer weights are adapted based on short-time prediction to de-noise the input signal.
  • the weights of the output layers are adapted via the gradient learning algorithm described below.
  • the system uses an online gradient descent algorithm. The idea is to enforce exact, or otherwise better, prediction of the current time point that is used in the delay embedding.
  • the predicted input value at time (t + τ) is calculated from the current values of the output weights (C(t), d(t)) and the current and past values of the states x and the input u.
  • ỹ(t − τ) is the delayed output expressed by the delayed values of x and u and the current values of the output weights C and d, and thus in general ỹ(t − τ) ≠ y(t − τ).
  • this approximation is reasonable and allows the system to not require storage of time histories of output weights, facilitating more efficient hardware implementation.
  • the ODEs for the dynamic reservoir and the weight adaptation system can be implemented directly in analog hardware.
  • digital hardware e.g., field-programmable gate arrays
  • T((t − iτ)/τ) is a shifted version of the triangle function T(t):
  • Algorithm 1: Iterative algorithm for the general discrete-time model
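For software or digital hardware, the reservoir ODE is converted to a delay-difference equation. The sketch below uses a zero-order-hold discretization via the matrix exponential, which is one standard way to do this; the patent's Algorithm 1 (built on the triangle/linear basis functions of FIGs. 11A-11B) is not reproduced here, so treat the scheme and the invertibility assumption on A as editorial choices.

```python
import numpy as np
from scipy.linalg import expm

def discretize_reservoir(A, B, dt):
    """Zero-order-hold discretization of dx/dt = A x + B u(t):
    x[k+1] = Ad @ x[k] + Bd * u[k].  Assumes A is invertible."""
    Ad = expm(A * dt)
    Bd = np.linalg.solve(A, (Ad - np.eye(A.shape[0])) @ B)
    return Ad, Bd

def run_discrete_reservoir(Ad, Bd, u):
    """Iterate the delay-difference update over a sampled input u."""
    x = np.zeros(Ad.shape[0])
    states = np.empty((u.size, x.size))
    for k, uk in enumerate(u):
        x = Ad @ x + Bd * uk
        states[k] = x
    return states
```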
  • each state element is adapted by the same global error function.
  • the first element of the reservoir state vector x₁(t) is sent through a length-K delay embedding to produce the delay-embedded vector (x₁(t), x₁(t − τ₁), ..., x₁(t − Kτ₁)).
  • the delay-embedded vector of x₁(t) is combined using adaptable linear mixing weights C₁₁, ..., C₁,K+1 and delayed to obtain the auxiliary state x̃₁(t).
  • the linear mixing weights C₁₁, ..., C₁,K+1 are adapted via gradient descent using the error signal e₁(t) based on the learning modes described above. This process proceeds analogously for each of the reservoir state elements x₂(t), ..., x_N(t) to produce auxiliary state elements x̃₂(t), ..., x̃_N(t).
  • the set of auxiliary states x̃₁(t), ..., x̃_N(t) are combined using adaptable linear mixing weights C₀₁, ..., C₀N and delayed to obtain the final output signal y(t).
  • the weights C₀₁, ..., C₀N are adapted via gradient descent using the error signal e₀(t) = u₀(t − τ_DIO) − y(t − τ_DOO).
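To make the weight-adaptation loop concrete, the sketch below drives a single set of output mixing weights with an LMS-style gradient step on the delayed prediction error, using the current weights for the delayed output as the approximation in the text allows. The learning rate, the one-sample delay, and the flat (non-hierarchical) weight layout are editorial simplifications of the two-stage C₁./C₀. structure described above.

```python
import numpy as np

def adapt_output_weights(states_hist, u, lr=1e-3, delay=1):
    """states_hist: (T, M) rows of delay-embedded reservoir states.
    u: (T,) noisy input mixture.  Returns final weights and the output y.
    The error e(t) = u(t - delay) - y(t - delay) drives an LMS-style update;
    this mirrors the gradient-descent idea above, not the patent's exact rule."""
    T, M = states_hist.shape
    C = np.zeros(M)
    y = np.zeros(T)
    for t in range(T):
        y[t] = C @ states_hist[t]                    # current (denoised) output
        if t >= delay:
            e = u[t - delay] - y[t - delay]          # delayed prediction error
            # Gradient step on 0.5*e^2 w.r.t. C, evaluated with the current
            # weights (the approximation noted in the text).
            C += lr * e * states_hist[t - delay]
    return C, y
```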
  • the delay parameters (τ₁, ..., τ_N and the various processing delays) can all be adjusted by the user based on the timing requirements of the computing hardware used to instantiate this invention. (4.5) Blind Source Separation using Reservoir States
  • the architecture for the Blind Source Separation (BSS) filters 310 used to separate and track each pulse from the signal mixture is shown in FIG. 8.
  • the input to this module is the set of denoised reservoir state signals 308.
  • These signals are fed into a Reservoir FIR filter block 800, which implements an FIR filter with a fixed bandwidth and adjustable center frequency by applying a particular set of linear mixing weights to the reservoir state signals.
  • the method for constructing linear mixing weights that, when applied to the reservoir states, implement an FIR filter with a given bandwidth and center frequency is detailed below in Section (4.5.1).
  • the BSS filters (in the FIR Filter Block 800) extract unique source signals by maximizing the power of each filter output signal.
  • the state of each BSS filter is controlled by a Filter Controller block 802, which measures the power in each reservoir state to determine which BSS filters are actively tracking signals, and which frequency bands within the input signal contain pulses for BSS filters to track.
  • the Frequency Update block 804 accepts the filter output signals 806 as input and uses this information to update the filter center frequencies.
  • the center frequencies of the filters (in the FIR Filter Block 800) are updated with the new center frequencies, completing the feedback loop.
  • the source signals are extracted and features, such as Pulse Descriptor Words (PDWs), may then be extracted from the separated source signals.
  • PDWs Pulse Descriptor Words
  • the components of the BSS filters 310 are described in further detail below. (4.5.1) Reservoir State-based FIR Filters
  • the first stage of the BSS filter module 310 is an FIR Filter Block 800 which includes a set of adaptable FIR filters. These FIR filters are implemented as linear mixing weights applied to the reservoir state signals (e.g., adaptively filtering) to achieve a desired filter response. Each filter in the bank 800 receives the set of denoised reservoir state signals as input.
  • the frequency of a source signal may be a function of time. This system only requires that a source signal be well characterized by a single frequency over any short interval of time.
  • the power function is a feedback component of this module and is described below.
  • each filter may be viewed as having its own power function, with all individual power functions having the same form.
  • the objective is to maximize the power, which tends to drive the filter's center frequency towards one of the source signal frequencies.
  • the adaptation for this objective occurs on a fast time-scale in order to cover a very wide bandwidth.
  • the output of the power function is the power signal.
  • the power signal is used to adapt the filter center frequencies and determine the filter states.
  • the power signal is defined as the normalized power of the filter output.
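One simple way to realize the normalized output power described above is a sliding (boxcar) average of the squared filter output divided by the input power over the same window. The window length, the boxcar shape, and the small regularizer are editorial assumptions; the patent's exact power function is not reproduced.

```python
import numpy as np

def normalized_power(filter_out, mixture, window=256):
    """Running power of a filter output, normalized by the mixture power
    over the same window.  `window` and the boxcar kernel are illustrative."""
    kernel = np.ones(window) / window
    p_out = np.convolve(filter_out ** 2, kernel, mode="same")
    p_in = np.convolve(mixture ** 2, kernel, mode="same") + 1e-12  # avoid divide-by-zero
    return p_out / p_in
```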
  • when the eigenvalues of A all have small real parts, corresponding to low-loss systems, the imaginary parts of the eigenvalues determine the resonant frequencies of the filter. To achieve a channelized response, one would like to choose the poles of A based on the expected bandwidth of the input signal. However, because the reservoir state transition matrix is fixed, the filter parameters must be adapted using the C mixing weight matrix.
  • the invention uses a numerical minimization procedure to determine the coefficients of C that yield a transfer function that is as close as possible to a desired transfer function with given ripple properties. Because the filter coefficients C undergo further modification in this online learning procedure, one need only determine initial C coefficients enabling rapid convergence in the feedback scheme.
  • An example output of the optimization procedure for a channelized transfer function with periodic ripple is shown in FIG. 9.
  • FIG. 9 is a graph depicting an example optimal transfer function for the initial state of the adaptive filter. Poles are chosen to give 20 channels between frequencies 900 of 1.05 and 2.0, plotted against gain 902. The desired gain 904 varies between 1 and 0.707. The optimal filter is determined by a numerical minimization procedure for the C coefficients 906.
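The patent states only that a numerical minimization is used to pick the initial C coefficients; one simple stand-in is a least-squares fit of the readout transfer function H(ω) = C^T (jωI − A)⁻¹ B to a desired channel response on a frequency grid, as sketched below. Fitting the real part to the desired gain while pushing the imaginary part toward zero is an illustrative choice, not the patent's procedure.

```python
import numpy as np

def fit_initial_weights(A, B, omega_grid, desired_gain):
    """Least-squares fit of real mixing weights C so that
    H(w) = C^T (jw*I - A)^{-1} B approximates `desired_gain` on `omega_grid`."""
    N = A.shape[0]
    rows, targets = [], []
    for w, g in zip(omega_grid, desired_gain):
        h = np.linalg.solve(1j * w * np.eye(N) - A, B)  # state response at frequency w
        rows.append(h.real); targets.append(g)          # match real part to the gain
        rows.append(h.imag); targets.append(0.0)        # drive imaginary part to zero
    C, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(targets), rcond=None)
    return C

# Example (using a block-diagonal A and a random B as in the earlier sketches):
# C0 = fit_initial_weights(A, B, np.linspace(1.0, 2.1, 200), np.full(200, 0.85))
```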
  • the Filter State Controller 802 is responsible for determining the current state of each filter (in the FIR Filter Block 800).
  • Filters exist in one of three possible states: inactive, tracking, and holding.
  • a filter is in the inactive state when it does not currently have a source signal to extract.
  • a filter is in the tracking state while it is actively extracting a source signal.
  • a filter is in the holding state when it was in the process of extracting a source signal, but lost the signal.
  • the following state transitions are permitted: inactive → inactive, inactive → tracking, tracking → tracking, tracking → holding, holding → holding, holding → tracking, and holding → inactive.
  • FIG. 10 is a diagram of the algorithm implemented by the Filter State Controller 802.
  • the filter controller uses the power signals derived from the reservoir state signals to update the filter states. power, state, signalCount, and holdingCount are filter-specific variables.
  • noiseThreshold, minSignalCount, and maxHoldCount are fixed numeric parameters that are the same for every filter and are set by the user.
  • a filter may be in one of three possible states: inactive, tracking, and holding.
  • the variables holdingCount and signalCount are specific to each filter.
  • a signalCount is evaluated 1002
  • the signalCount variable is the number of consecutive time-steps that the reservoir state power signal has been above the threshold. If signalCount is greater than the parameter minSignalCount 1004, an actual signal is being observed, not just noise. Otherwise 1006, if the filter's state is holding and the variable holdingCount is greater than the parameter maxHoldCount, the filter returns to the inactive state. A sketch of this per-filter state machine follows below.
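The sketch below reconstructs one controller step for a single filter from the variables named above (power, state, signalCount, holdingCount, noiseThreshold, minSignalCount, maxHoldCount). The ordering of the checks and any transition details beyond what the text states are editorial assumptions.

```python
def update_filter_state(filt, power, noise_threshold, min_signal_count, max_hold_count):
    """One Filter State Controller step for a single filter.
    `filt` is a dict with keys 'state' ('inactive'|'tracking'|'holding'),
    'signalCount', and 'holdingCount'."""
    if power > noise_threshold:
        filt['signalCount'] += 1
    else:
        filt['signalCount'] = 0

    if filt['state'] == 'inactive':
        # Require several consecutive above-threshold steps so a lone noise
        # spike does not activate the filter.
        if filt['signalCount'] > min_signal_count:
            filt['state'] = 'tracking'
            filt['holdingCount'] = 0
    elif filt['state'] == 'tracking':
        if power <= noise_threshold:
            filt['state'] = 'holding'
            filt['holdingCount'] = 0
    elif filt['state'] == 'holding':
        if filt['signalCount'] > min_signal_count:
            filt['state'] = 'tracking'          # the pulse returned
        else:
            filt['holdingCount'] += 1
            if filt['holdingCount'] > max_hold_count:
                filt['state'] = 'inactive'      # give up; protected region is released
    return filt

# Example:
# f = {'state': 'inactive', 'signalCount': 0, 'holdingCount': 0}
# f = update_filter_state(f, power=0.9, noise_threshold=0.5,
#                         min_signal_count=3, max_hold_count=50)
```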
  • the Filter Center Frequency Adapter is responsible for updating the center frequencies of the filters.
  • each filter exists in one of three states (i.e., inactive, tracking, and holding).
  • In the inactive state, the filter is not tracking any particular signal.
  • Once the filter controller detects that a particular frequency band contains enough power to indicate the presence of a pulse, the filter enters the tracking state, initialized with a center frequency equal to the resonant frequency of the reservoir state that is above the noiseThreshold, and the filter's protected region is set. If during the tracking state a filter loses the signal it was tracking, then it will enter the holding state. In the holding state a filter is held at its current center frequency for a fixed period of time maxHoldCount, and the filter's protected region remains in place.
  • RProp Resilient Propagation
  • A non-limiting example of a gradient descent algorithm is Resilient Propagation (RProp) (see Literature Reference No. 4).
  • RProp uses only the sign information of the gradient rather than its magnitude, making it favorable for limited precision hardware implementation.
  • the RProp update moves the center frequency by d_t·Δf_t, where d_t = sgn(p(x, f_t + Δf_t) − p(x, f_t)) is the sign of the derivative of the filter output power, and Δf_t is the frequency increment.
  • Δf_t is determined by the sequence of sign changes of the output power derivative: the increment grows while the sign stays the same and shrinks when the sign flips.
  • In the tracking state, both Gradient Descent and Momentum are used to determine the next center frequency. If the filter state is holding, only Gradient Descent is used.
  • the variable trackholdCount is the number of consecutive time-steps that the filter has been in either the tracking or the holding state.
  • the condition trackholdCount > fitLimit allows only those filters that have been tracking a source signal sufficiently long to use the Momentum method. If the suggested next center frequency produced by Gradient Descent is f_G and that suggested by Momentum is f_M, then the next center frequency is given by f_next = c₁·f_G + c₂·f_M, where c₁ and c₂ are positive constants such that c₁ + c₂ = 1.
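A compact sketch of the combined update is below: an RProp-style step (sign of the power change with an adaptive increment) blended with a momentum suggestion that repeats the last frequency change. The probe offset, the growth/shrink factors, and the blend weights c1/c2 are illustrative values, not the patent's.

```python
import numpy as np

def center_freq_step(f_prev, f_curr, power_fn, delta_f, last_sign,
                     eta_plus=1.2, eta_minus=0.5, c1=0.7, c2=0.3, use_momentum=True):
    """One center-frequency update.
    power_fn(f): filter output power with the center frequency set to f.
    Returns (f_next, new_delta_f, new_sign)."""
    d = np.sign(power_fn(f_curr + delta_f) - power_fn(f_curr))
    # RProp: grow the increment while the sign is unchanged, shrink it on a flip.
    delta_f *= eta_plus if (d == last_sign and d != 0) else eta_minus
    f_gd = f_curr + d * delta_f                           # Gradient Descent (RProp) suggestion
    f_mom = f_curr + (f_curr - f_prev)                    # Momentum suggestion: keep moving
    f_next = c1 * f_gd + c2 * f_mom if use_momentum else f_gd
    return f_next, delta_f, d

# Example with a synthetic single-peak power function:
# p = lambda f: 1.0 / (1.0 + (f - 1.5) ** 2)
# f_next, df, s = center_freq_step(1.20, 1.22, p, delta_f=0.01, last_sign=1)
```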
  • the fourth module of the Cognitive Signal Processor is the control/source selector 312.
  • the control/source selector 312 prevents more than one filter from extracting any given source signal at the same time. It enforces the protected region of each filter that is in the tracking or holding state.
  • the protected region is an interval in the frequency domain that is centered on a filter's center frequency.
  • the center frequency of a filter is not permitted to exist within another filter's protected region.
  • a general policy governing the resolution of conflicts that arise when a filter attempts to move within another filter's protected region is not prescribed, since such a policy is dependent on the specifics of the center frequency adaptation algorithm.
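The check itself is simple even though the conflict-resolution policy is left open; the sketch below rejects a proposed center frequency that lands inside another active filter's protected interval. The dict layout and the reject-the-move policy are editorial assumptions.

```python
def violates_protected_region(candidate_freq, own_id, filters, half_width):
    """True if `candidate_freq` lies inside the protected region of any other
    filter currently in the tracking or holding state.
    `filters`: list of dicts with 'id', 'state', and 'center_freq';
    `half_width`: half the width of each protected interval."""
    for f in filters:
        if f['id'] == own_id or f['state'] not in ('tracking', 'holding'):
            continue
        if abs(candidate_freq - f['center_freq']) < half_width:
            return True
    return False

# Example: a rejected move simply keeps the previous center frequency.
# if violates_protected_region(f_next, my_id, all_filters, half_width=0.02):
#     f_next = f_curr
```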

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Neurology (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Radar Systems Or Details Thereof (AREA)
  • Complex Calculations (AREA)

Abstract

Described is a cognitive signal processor for signal denoising and blind source separation. During operation, the cognitive signal processor receives a mixture signal that comprises a plurality of source signals. A denoised reservoir state signal is generated by mapping the mixture signal to a dynamic reservoir to perform signal denoising. At least one separated source signal is identified by adaptively filtering the denoised reservoir state signal.

Description

[0001] COGNITIVE SIGNAL PROCESSOR FOR SIMULTANEOUS DENOISING
AND BLIND SOURCE SEPARATION
[0001] GOVERNMENT RIGHTS
[0003] This invention was made with government support under U.S. Government
Contract Number N00014-12-C-0027. The government has certain rights in the invention.
[0004] CROSS-REFERENCE TO RELATED APPLICATIONS [0005] The present application is a Continuation-in-Part Application of U.S. Non-Provisional Application No. 15/452,412, filed on March 07, 2017.
[0006] The present application is ALSO a Continuation-in-Part application of U.S.
Non-Provisional Application No. 15/452,155, filed on March 07, 2017.
[0007] The present application is ALSO a non-provisional application of U.S.
Provisional Application No. 62/447,883, filed on January 18, 2017, the entirety of which is incorporated herein by reference.
[0008] BACKGROUND OF INVENTION
[0009] (1) Field of Invention
[00010] The present invention relates to blind source separators and, more
specifically, to a signal processor that denoises a mixed input signal and performs blind source separation on the input signal to extract and separate the signals.
[00011] (2) Description of Related Art
[00012] Blind signal separation, also known as blind source separation, is the
separation of a set of source signals from a set of mixed signals (mixture signals), without the aid of information (or with very little information) about the source signals or the mixing process. Thus, in the Blind Source Separation (BSS) problem, a single antenna measures multiple source signals. There may be more than one antenna measuring the signals, but in general each antenna "sees" all of the source signals and creates a different linear mixture of them. The task is then to use the measured mixture signals in order to recover the original source signals. The case of a single antenna operating in isolation is especially challenging because there is no sense of spatial resolution to aid in the extraction process.
[00013] State-of-the-art systems for detecting, localizing, and classifying source emitters from passive RF antennas over an ultra-wide bandwidth require high-rate analog-to-digital converters (ADCs). Such high-rate ADCs are expensive and power hungry, and due to fundamental physical limits (such as the Walden curve (see the List of Incorporated Literature References, Literature Reference No. 9)) are not capable of achieving the sampling rate needed to capture the ultra-wide bandwidth. To mitigate this, state-of-the-art electronic support measure (ESM) systems either use spectrum sweeping (which is too slow to handle agile emitters) or a suite of digital channelizers, which have large size, weight, and power requirements. In addition, the detection, localization, and classification algorithms ESM systems use are typically based on the fast Fourier transform, with high computational complexity and memory requirements that make it difficult to operate them in real-time over an ultra-wide bandwidth.
[00014] Further, mixed signals are typically noisy, making them difficult to separate.
Conventional methods for denoising signals fall into two categories: filter-based methods and training-based approaches. Filter-based methods use filtering to smooth out noise from a signal, but are too simplistic to simultaneously maintain the low-frequency long-term trends of a signal while adapting to the high-frequency abrupt transitions. Training-based methods rely on a "dictionary" that models the signals of interest. Such a dictionary must be trained in an offline process, and requires training data that may not be available. In addition, the dictionary often requires a large amount of memory and computation to be stored and leveraged on the platform, making such approaches infeasible for ultra-low size-, weight-, and power (SWaP) systems. [00015] Conventional methods for BSS typically require a greater number of input mixtures (which maps directly to a greater number of antennas) than the number of source signals, limiting their applicability in SWaP-constrained scenarios (see Literature Reference No. 1). Some extensions to conventional BSS have addressed the "underdetermined" scenario (with fewer mixtures than sources) that leverage prior knowledge about the sources, such as having "low complexity" or having a sparse representation with respect to a learned dictionary. Such models of prior knowledge are too broad, enabling the system to overfit an entire mixture as a single source, and require large amounts of memory to store the dictionary and computation to recover the representation of the input mixtures with respect to the dictionary (see Literature Reference Nos. 1 and 3). In Literature Reference No. 3, the authors coupled the BSS algorithm with an IIR bandpass filter with tunable center frequency in order to separate temporally correlated sources. This work is still quite limited, requiring at least as many mixtures as sources, requiring that the mixtures be "prewhitened" to have an identity-valued covariance matrix, and using the second-order statistics of sources as the sole cue for separation.
[00016] Thus, a continuing need exists for a system that reduces the computation and hardware complexity needed to implement filtering, filter adaptation, and/or signal tracking and, in doing so, allows for the system to be developed on low-power hardware, including field programmable gate arrays (FPGAs) and custom digital application-specific integrated circuits (ASICs). [00017] SUMMARY OF INVENTION
[00018] The present disclosure provides a cognitive signal processor for signal denoising and blind source separation. In various embodiments, the cognitive signal processor comprises one or more processors and a memory. The memory is, for example, a non-transitory computer-readable medium having executable instructions encoded thereon, such that upon execution of the instructions, the one or more processors perform several operations. In other aspects, the one or more processors are hardwired or otherwise configured to perform the operations herein. For example, during operation, the cognitive signal processor receives a mixture signal that comprises a plurality of source signals. A denoised reservoir state signal is generated by mapping the mixture signal to a dynamic reservoir to perform signal denoising. Finally, at least one separated source signal is identified by adaptively filtering the denoised reservoir state signal.
[00019] In another aspect, filtering the denoised reservoir state signal is performed with a bank of filters.
[00020] In another aspect, the system performs an operation of controlling the bank of filters to cause each filter within the bank of filters to filter a unique waveform.
[00021] Further, each filter has an adaptable center frequency.
[00022] In another aspect, adaptively filtering the denoised reservoir state signal further comprises operations of: detecting that a particular frequency band possesses a pulse; switching a first filter to a tracking state with a center frequency equal to a resonant frequency of a reservoir state corresponding to the particular frequency band; and setting the center frequency of the first filter as a protected region to prevent other filters within a bank of filters from sharing the center frequency.
[00023] In yet another aspect, adaptively filtering the denoised reservoir state signal further comprises operations of: switching the first filter to a holding state if the first filter loses the pulse of the particular frequency band; maintaining the first filter in the holding state for a fixed period of time while maintaining the protected region; and if during the fixed period of time the pulse returns, switching the first filter to the tracking state, otherwise switching the first filter to an inactive state and removing the protected region.
[00024] In another aspect, generating the denoised reservoir state signal further
comprises delay embedding the reservoir state signal to generate a reservoir state history.
[00025] Additionally, generating the denoised reservoir state signal further comprises generating a predicted input signal a small time-step ahead of the mixture signal, wherein an error between the predicted input signal and the mixture signal is used to update output weights of the dynamic reservoir.
[00026] In another aspect, generating the denoised reservoir state signal is performed with a dynamic reservoir implemented in analog hardware by satisfying a set of ordinary differential equations. [00027] Further, generating the denoised reservoir state signal is performed with a dynamic reservoir implemented in software or digital hardware by converting a set of ordinary differential equations to delay difference equations.
[00028] Finally, the present invention also includes a computer program product and a computer implemented method. The computer program product includes computer-readable instructions stored on a non-transitory computer-readable medium that are executable by a computer having one or more processors, such that upon execution of the instructions, the one or more processors perform the operations listed herein. Alternatively, the computer implemented method includes an act of causing a computer to execute such instructions and perform the resulting operations.
[00029] BRIEF DESCRIPTION OF THE DRAWINGS
[00030] The objects, features and advantages of the present invention will be
apparent from the following detailed descriptions of the various aspects of the invention in conjunction with reference to the following drawings, where:
[00031] FIG. 1A is a block diagram depicting the components of a system according to various embodiments of the present invention; [00032] FIG. 1B is a block diagram depicting a system according to various embodiments of the present invention;
[00033] FIG. 2 is an illustration of a computer program product embodying an aspect of the present invention;
[00034] FIG. 3 is an illustration depicting system architecture for a cognitive signal processor according to various embodiments of the present invention;
[00035] FIG. 4 is an illustration depicting reservoir computer mapping of an input signal vector to a hig -dimensional state-space that models underlying time- varying dynamics of the signal generation process;
[00036] FIG. 5 is an illustration depicting a correspondence between state-space representation components and parameters in. the reservoir computer; [00037] FIG. 6 is an illustration depicting a dynamic reservoir that applies a delay embeddin to the reservoir states to provide a time history of reservoir
dynamics;. [00038] FIG, 7 is an illustration depicting a continuous time architecture for the dynamic reservoir and adaptive signal prediction modules;
[00039] FIG, 8 is an illustration depicting an architecture of blind source separation (BSS) filters according to various embodiments of the present invention that make use of reservoir states;
[00040] FIQ. is a graph depicting an optimal transferfunction for an initial state of an adaptive filter; [00041] FIG. 10 is a diagram of the filter controller according to. various
embodiments of die present invention;
[00042] FIG. 11 A is a chart illustrating an approx imatio of the input si gnal
u(t) using uniform sampling at period I and
[00043] FIG . 11 B is a chart illustrating an approximation of the input signal
u(t) using a linear basis function.
[00044] DETAILED DESCRIPTION
[00045] The present invention relates to a blind source separator and, more specifically, to a signal processor that denoises a mixed input signal and performs blind source separation on the input signal to extract and separate the signals. The following description is presented to enable one of ordinary skill in the art to make and use the invention and to incorporate it in the context of particular applications. Various modifications, as well as a variety of uses in different applications, will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to a wide range of aspects. Thus, the present invention is not intended to be limited to the aspects presented, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
[00046] In the following detailed description, numerous specific details are set forth in order to provide a more thorough understanding of the present invention. However, it will be apparent to one skilled in the art that the present invention may be practiced without necessarily being limited to these specific details. In other instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present invention.
[00047] The reader's attention is directed to all papers and documents which are filed concurrently with this specification and which are open to public inspection with this specification, and the contents of all such papers and documents are incorporated herein by reference. All the features disclosed in this specification (including any accompanying claims, abstract, and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is one example only of a generic series of equivalent or similar features.
[00048] Furthermore, any element in a claim that does not explicitly state "means for" performing a specified function, or "step for" performing a specific function, is not to be interpreted as a "means" or "step" clause as specified in 35 U.S.C. Section 112, Paragraph 6. In particular, the use of "step of" or "act of" in the claims herein is not intended to invoke the provisions of 35 U.S.C. 112, Paragraph 6.
[00049] Before describing the invention in detail, first a list of cited references is provided. Next, a description of the various principal aspects of the present invention is provided. Subsequently, an introduction provides the reader with a general understanding of the present invention. Finally, specific details of various embodiments of the present invention are provided to give an understanding of the specific aspects.
[00050] (1) List of Incorporated Literature References
[00051] The following references are cited throughout this application. For clarity and convenience, the references are listed herein as a central resource for the reader. The following references are hereby incorporated by reference as though fully set forth herein. The references are cited in the application by referring to the corresponding literature reference number, as follows:
1. S. Choi, A. Cichocki, H.-M. Park, and S.-Y. Lee, "Blind Source Separation and Independent Component Analysis: A Review," Neural Information Processing - Letters, Vol. 6, No. 1, January 2005.
2. A. Cichocki and A. Belouchrani, "Sources separation of temporally correlated sources from noisy data using a bank of band-pass filters," in Proc. of Independent Component Analysis and Signal Separation (ICA-2001), pp. 173-178, San Diego, USA, Dec. 9-13, 2001.
3. A. Hyvarinen, "Complexity Pursuit: Separating Interesting Components from Time Series," Neural Computation, Vol. 13, No. 4, pp. 883-898, April 2001.
4. Igel, C. and Hüsken, M., "Improving the Rprop learning algorithm," in Proc. of the 2nd Int. Symposium on Neural Computation (NC'2000), pp. 115-121, ICSC Academic Press, 2000.
5. R. Legenstein, et al., "Edge of Chaos and Prediction of Computational Performance for Neural Microcircuit Models," Neural Networks, 20(3), 2007.
6. W. Maass, "Liquid Computing," Proc. of the Conference CiE'07: Computability in Europe 2007, Siena, Italy.
7. F. Takens, "Detecting Strange Attractors in Turbulence," Dynamical Systems and Turbulence, Lecture Notes in Mathematics, Vol. 898, 1981.
8. D. Verstraeten, et al., "An experimental unification of reservoir computing methods," Neural Networks, vol. 20, no. 3, April 2007.
9. R. H. Walden, "Analog-to-digital converter survey and analysis," IEEE J. Sel. Areas Commun., vol. 51, pp. 539-548, 1999.
10. H. Yap, et al., "A First Analysis of the Stability of Takens' Embedding," in Proc. of the IEEE Global Conference on Signal and Information Processing (GlobalSIP) symposium on Information Processing for Big Data, December 2014.

[00052] (2) Principal Aspects
[00053] Various embodiments of the invention include three "principal" aspects. The first is a system for signal processing (i.e., a signal processor). The system is typically in the form of a computer system operating software or in the form of a "hard-coded" instruction set. This system may be incorporated into a wide variety of devices that provide different functionalities. The second principal aspect is a method, typically in the form of software, operated using a data processing system (computer). The third principal aspect is a computer program product. The computer program product generally represents computer-readable instructions stored on a non-transitory computer-readable medium such as an optical storage device, e.g., a compact disc (CD) or digital versatile disc (DVD), or a magnetic storage device such as a floppy disk or magnetic tape. Other, non-limiting examples of computer-readable media include hard disks, read-only memory (ROM), and flash-type memories. These aspects will be described in more detail below.
[00054] A block diagram depicting an example of a system (i.e., computer system 100) of the present invention is provided in FIG. 1A. The computer system 100 is configured to perform calculations, processes, operations, and/or functions associated with a program or algorithm. In one aspect, certain processes and steps discussed herein are realized as a series of instructions (e.g., software program) that reside within computer readable memory units and are executed by one or more processors of the computer system 100. When executed, the instructions cause the computer system 100 to perform specific actions and exhibit specific behavior, such as described herein. For example, in various embodiments, the system includes one or more processors configured to perform the various operations described herein. In one aspect, the system includes one or more processors and memory, the memory being a non-transitory computer-readable medium having executable instructions encoded thereon, such that upon execution of the instructions, the one or more processors perform the operations. In other aspects, the processors are hardwired to perform the operations. Thus, the system 100 may be configured to perform operations by executing instructions from a computer memory or by being hardwired to perform certain tasks.
[00055] The computer system 100 may include an address/data bus 102 that is configured to communicate information. Additionally, one or more data processing units, such as a processor 104 (or processors), are coupled with the address/data bus 102. The processor 104 is configured to process information and instructions. In various aspects, the processor 104 (or processors) include one or more of a microprocessor, a parallel processor, an application-specific integrated circuit (ASIC), a digital ASIC, a programmable logic array (PLA), complex programmable logic device (CPLD), and field programmable gate array (FPGA).
[00056] The computer system 100 is configured to utilize one or more data storage units. The computer system 100 may include a volatile memory unit 106 (e.g., random access memory ("RAM"), static RAM, dynamic RAM, etc.) coupled with the address/data bus 102, wherein the volatile memory unit 106 is configured to store information and instructions for the processor 104. The computer system 100 further may include a non-volatile memory unit 108 (e.g., read-only memory ("ROM"), programmable ROM ("PROM"), erasable programmable ROM ("EPROM"), electrically erasable programmable ROM ("EEPROM"), flash memory, etc.) coupled with the address/data bus 102, wherein the non-volatile memory unit 108 is configured to store static information and instructions for the processor 104. Alternatively, the computer system 100 may execute instructions retrieved from an online data storage unit such as in "Cloud" computing. In an aspect, the computer system 100 also may include one or more interfaces, such as an interface 110, coupled with the address/data bus 102. The one or more interfaces are configured to enable the computer system 100 to interface with other electronic devices and computer systems. The communication interfaces implemented by the one or more interfaces may include wireline (e.g., serial cables, modems, network adaptors, etc.) and/or wireless (e.g., wireless modems, wireless network adaptors, etc.) communication technology.
[00057] In one aspect, the computer system 100 may include an input device 112 coupled with the address/data bus 102, wherein the input device 112 is configured to communicate information and command selections to the processor 100. In accordance with one aspect, the input device 112 is an alphanumeric input device, such as a keyboard, that may include alphanumeric and/or function keys. Alternatively, the input device 112 may be an input device other than an alphanumeric input device. In an aspect, the computer system 100 may include a cursor control device 114 coupled with the address/data bus 102, wherein the cursor control device 114 is configured to communicate user input information and/or command selections to the processor 100. In an aspect, the cursor control device 114 is implemented using a device such as a mouse, a track-ball, a track-pad, an optical tracking device, or a touch screen. The foregoing notwithstanding, in an aspect, the cursor control device 114 is directed and/or activated via input from the input device 112, such as in response to the use of special keys and key sequence commands associated with the input device 112. In an alternative aspect, the cursor control device 114 is configured to be directed or guided by voice commands.
[00058] In an aspect, the computer system 100 further may include one or more optional computer usable data storage devices, such as a storage device 116, coupled with the address/data bus 102. The storage device 116 is configured to store information and/or computer executable instructions. In one aspect, the storage device 116 is a storage device such as a magnetic or optical disk drive (e.g., hard disk drive ("HDD"), floppy diskette, compact disk read only memory ("CD-ROM"), digital versatile disk ("DVD")). Pursuant to one aspect, a display device 118 is coupled with the address/data bus 102 or any other suitable location or component of the system 100, wherein the display device 118 is configured to display video and/or graphics. In an aspect, the display device 118 may include a cathode ray tube ("CRT"), liquid crystal display ("LCD"), field emission display ("FED"), plasma display, light emitting diode ("LED"), or any other display device suitable for displaying video and/or graphic images and alphanumeric characters recognizable to a user.
[00059] The computer system 100 presented herein is an example computing environment in accordance with an aspect. However, the non-limiting example of the computer system 100 is not strictly limited to being a computer system. For example, an aspect provides that the computer system 100 represents a type of data processing analysis that may be used in accordance with various aspects described herein. Moreover, other computing systems may also be implemented. Indeed, the spirit and scope of the present technology is not limited to any single data processing environment. Thus, in an aspect, one or more operations of various aspects of the present technology are controlled or implemented using computer-executable instructions, such as program modules, being executed by a computer. In one implementation, such program modules include routines, programs, objects, components and/or data structures that are configured to perform particular tasks or implement particular abstract data types. In addition, an aspect provides that one or more aspects of the present technology are implemented by utilizing one or more distributed computing environments, such as where tasks are performed by remote processing devices that are linked through a communications network, or such as where various program modules are located in both local and remote computer-storage media including memory-storage devices.
[00060] An illustrative diagram of a computer program product (i.e., storage device) embodying the present invention is depicted in FIG. 2. The computer program product is depicted as floppy disk 200 or an optical disk 202 such as a CD or DVD. However, as mentioned previously, the computer program product generally represents computer-readable instructions stored on any compatible non-transitory computer-readable medium. The term "instructions" as used with respect to this invention generally indicates a set of operations to be performed on a computer, and may represent pieces of a whole program or individual, separable, software modules. Non-limiting examples of "instructions" include computer program code (source or object code) and "hard-coded" electronics (i.e., computer operations coded into a computer chip). The "instructions" are stored on any non-transitory computer-readable medium, such as in the memory of a computer or on a floppy disk, a CD-ROM, and a flash drive. In either event, the instructions are encoded on a non-transitory computer-readable medium.
[00061] (3) Introduction
[00062] This disclosure provides a system for signal processing (otherwise referred to as a "cognitive" signal processor (CSP)) that takes an input signal containing a mixture of pulse waveforms over a very large (e.g., > 30 GHz) bandwidth and simultaneously denoises the input signal and performs blind source separation (BSS) on the input signal. By denoising and performing BSS on the input signal, the system effectively extracts and separates the individual pulse waveforms from the mixture.
[00063] Some embodiments of the CSP incorporate one or more of the following features. First, the system may use a bank of Finite Impulse Response (FIR) filters constructed by applying tunable mixing weights to the state output of a dynamic reservoir. Second, the mixing weights (and consequently the frequency response) for each filter may be adapted using a novel gradient descent procedure that is described in further detail below. Third, the filters may be activated and initialized by using the reservoir state output to determine when a source signal is present near a given frequency band. The adaptation scheme employed by the system may incorporate an explicit mechanism that limits how close different filters can approach one another in the frequency domain, which is used to ensure that each filter extracts a unique signal. Fourth, the system may incorporate a dynamic phenomenon that is analogous to momentum and allows filters to track signals through repeated collisions with other signals in the time-frequency domain.
[00064] Various embodiments of the system described herein may possess several advantages/improvements over state-of-the-art methods in challenging denoising or BSS scenarios. Such advantages/improvements include one or more of the following:
1. For some embodiments, because the CSP performs adaptive filtering, its hardware-based embodiment may have much less weight and power than current brute-force channelization methods.
2. For some embodiments, by employing the reservoir states as a cueing mechanism, the CSP can detect and track many pulse waveforms over an ultra-wide bandwidth of over 30 GHz employing very narrow bandwidth filters with high frequency resolution, and yet still exhibit very low signal detection latencies on the order of 0.1 nanoseconds.
3. For some embodiments, the system is capable of denoising signals in real-time using a constraint that covers a wide range of electromagnetic and acoustic signals of interest. Many other current approaches use powerful, but computationally expensive constraints, such as signal complexity measures, or rely on loose constraints, such as filter banks, which may be less computationally expensive but have limited capacity to capture the structure of real-world source signals. In contrast, the system improves upon the prior art by utilizing the constraint that the waveforms of interest in a source signal can be linearly predicted over a short interval of time, which can be computed quickly with limited computational cost.
4. For some embodiments, in the deterministically designed reservoir, the reservoir states each correspond to the amount of input signal energy near a particular frequency. This allows the CSP to generate a real-time spectrogram of a complex input signal that can be implemented efficiently in hardware.
5. For some embodiments, the CSP can simultaneously extract large numbers of noisy source signals that have been linearly mixed by a single antenna. That is, in order to be effective the CSP does not require a multi-antenna array or a large bank of fixed predefined filters, which is needed by many other methods for BSS.
6. For some embodiments, due to incorporation of state-dependent filter behavior, the system is able to track source signals continuously, even if the signal is momentarily lost, as well as reducing the incidence of false alarms. Furthermore, each filter extracts a unique source signal, thus avoiding the extraction of confounding and unnecessary duplicates.
7. For some embodiments, due to a unique momentum component, which has low memory and computational requirements, the filters are able to track multiple signals through repeated collisions in the time-frequency domain. This is a scenario that very few state-of-the-art blind source separation methods can handle.
[00065] As can be appreciated by those skilled in the art, the system described herein has several applications. For example, the system can be used with Electronic Support Measures (ESM) receivers developed by Argon ST and with other systems on airborne platforms. The system is also applicable to vehicle (e.g., UAV, plane, car, boat, robot) or man-portable applications, such as rapid detection and separation of significant objects (e.g., obstacles, terrain, other vehicles, persons, animals) from clutter from radar antenna signals. In autonomous vehicle operation, cars or other vehicles may use radars to detect and avoid obstacles. Due to clutter, such as trees, other cars, and walls, the radar returns for obstacles may be weak relative to other returns within the spectrum and also obscured by them. In one aspect and as shown in FIG. 1B, the system 100 described herein simultaneously de-noises radio frequency (RF) signals, such as those collected by radar receivers 120 (e.g., antenna, sensors, etc.), and boosts detection of weak returns that could correspond to significant objects, while also separating out all narrowband (tone-like) pulses corresponding to different objects in the scene. Separation of significant object pulses from clutter pulses reduces the likelihood that the autonomous vehicle will be confused by clutter and can then effectively detect and avoid a significant object. For example, once a significant object is detected, the system 100 can cause a vehicle 122 to act (by being connected to and interfacing with an appropriate vehicle control system) based on the significant object, such as slowing, accelerating, stopping, turning, and/or otherwise maneuvering around the significant object. Other actions based on the obstacle are also possible, such as causing the vehicle 122 to inform or warn a vehicle occupant and/or vehicle operator about the obstacle with an audible warning, a light, text, and/or an image, such as a radar display image. For further examples, the system 100 may generate commands and control operations of vehicle systems that can be adjusted, such as vehicle suspension or safety systems such as airbags and seatbelts, etc.
[00066] (4) Specific Details of Various Embodiments
[00067] As noted above, this disclosure provides a system for signal processing (or "cognitive" signal processor (CSP)) that denoises a mixed input signal and performs blind source separation on the input signal to extract and separate the signals. The architecture for the CSP is shown in FIG. 3. As shown, the first component is a dynamic reservoir computer (RC) 300, which is the "neuromorphic" (brain-inspired) aspect of the system. The reservoir computer 300 accepts the mixture signals 302 as input and maps them to a high-dimensional dynamical system known as the dynamic reservoir. The RC 300 has a predefined number of outputs, which are generated by continually mapping the reservoir states through a set of distinct linear functions, with one such function defined per output. The CSP uses a "dynamic" reservoir computer because the reservoir state signals are continuously passed through a delay embedding, which creates a finite temporal record of the values of the reservoir state (i.e., reservoir state history 304). The reservoir state history 304 of the dynamic reservoir in FIG. 3 corresponds to element 602 as depicted in FIG. 6. That is, given a reservoir state x(t), history length K+1, and delay width τ, the reservoir state history is the collection of delayed states (x(t), x(t - τ), ..., x(t - Kτ)). The delay embedding, when combined with the optimized reservoir design, enables the CSP to perform signal prediction and denoising on both the original input signal and the individual reservoir states.
[00068] The second component is an adaptive signal prediction module 306 that uses gradient descent-based short-time prediction to adapt the output (i.e., reservoir state history 304) of the reservoir to produce a prediction of the input signal 302 a small time-step in the future. Since the noise in the input signal 302 is inherently random and unpredictable, the predicted input signal will be free of noise. The error between the predicted input signal and actual input signal is used by the adaptive signal prediction module to further tune the output weights of the reservoir in an iterative process to generate denoised reservoir states 308.
[00069] The third component of the CSP is a bank of adaptable Blind Source Separation (BSS) filters 310 that separate out and track pulses from the input signal 302 mixture. This component is a key aspect of the system. Unlike previous BSS systems that use a bank of adaptable filters, the system described herein implements the filters as linear combinations of the reservoir states. This is much more efficient to implement in hardware than implementing standalone FIR or IIR filters. Each of the BSS filters in the bank of filters 310 is activated by a filter controller that measures the reservoir state energy to detect the presence of a signal in a particular band. The BSS filters 310 also include mechanisms for modifying center frequencies in order to track pulses. Each BSS filter 310 includes a filter adapter that updates the center frequencies of the particular filter based on the error function and the filter states. The frequency update is depicted as element 804 in FIG. 8.
[00070] The fourth component is a control/source selector 312 that ensures that each BSS filter 310 tracks a unique pulse. The control/source selector 312 accepts the filter 310 outputs as input and is responsible for controlling which filters are actively tracking pulse waveforms, resulting in original denoised pulse waveforms (i.e., detected signals 314) that represent a final output of the system.
[00071] (4.1) Reservoir Computing
[00072] The CSP is based on a form of neuromorphic (brain-inspired) signal processing known as reservoir computing (RC) (see Literature Reference Nos. 5, 6, and 8 for a description of reservoir computing). As seen in FIG. 4, a reservoir computer 300 is a special form of a recurrent neural network (a neural network with feedback connections) that operates by projecting the input signal vector 302 into a high-dimensional reservoir 400 state space which contains an equivalent dynamical model of the signal generation process, capturing all of the available and actionable information about the input 302. A reservoir 400 has readout layers that can be trained, either off-line or on-line, to learn desired outputs by utilizing the state functions. Thus, an RC 300 has the power of recurrent neural networks to model non-stationary (time-varying) processes and phenomena, but with simple readout layers and training algorithms that are both accurate and efficient. The reservoir states can be mapped to useful outputs 304, including denoised inputs, signal classes, separated signals, and anomalies, using trainable linear readout layers 402.
[00073] There is a strong connection between reservoir computing and state-space filtering. Conventional RF/microwave filters typically implement the Laplace domain filtering algorithm:

sx(s) = Ax(s) + Bu(s)
y(s) = C^T x(s) + Du(s),

where x(s), u(s), and y(s) are the state-space representations of the reservoir state, input signal, and output, respectively.
[00074] A state space filter implements a time-domain filtering algorithm, and as seen in FIG. 5, the different components of the reservoir 400 state-space representation have a direct correspondence with different parameters in the reservoir computer 300. In particular, the reservoir connectivity matrix weights (A) 500 determine the filter pole locations. Similarly, the output layer weights (C) 502 determine the filter zero locations. As the output layer weights 502 are adaptable, the reservoir computer 300 can implement an adaptable (nonlinear) state-space filter.
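To make this correspondence concrete, the short Python sketch below (an illustrative example only; the 2x2 oscillator blocks and the two weight vectors are arbitrary choices, not values from this disclosure) reads the filter poles from the eigenvalues of a reservoir connectivity matrix A and shows that changing only the output-layer weights C moves the transfer-function zeros while the poles stay fixed.

```python
import numpy as np
from scipy import signal

def oscillator_block(sigma, omega):
    """Real 2x2 block whose eigenvalues are the complex-conjugate pole pair sigma +/- j*omega."""
    return np.array([[sigma, omega], [-omega, sigma]])

# Reservoir connectivity matrix A built from two lightly damped oscillator blocks.
A = np.block([
    [oscillator_block(-0.05, 1.0), np.zeros((2, 2))],
    [np.zeros((2, 2)), oscillator_block(-0.05, 2.5)],
])
B = np.ones((4, 1))
D = np.zeros((1, 1))

print("poles:", np.round(np.linalg.eigvals(A), 3))   # pole locations come from A

# Two different output-layer weight vectors C give two different zero locations.
for C in (np.array([[1.0, 0.0, 1.0, 0.0]]), np.array([[0.2, 1.0, -0.5, 0.3]])):
    zeros, poles, gain = signal.ss2zpk(A, B, C, D)
    print("C =", C.ravel(), "-> zeros:", np.round(zeros, 3))
```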
[00075] (4.2) Optimized Reservoir Design for Signal Denoising and Blind Source Separation
[00076] In conventional reservoir computers, the weights in both the reservoir connectivity matrix (A) and the input-to-reservoir mapping matrix (B) are typically chosen randomly. As a nonlimiting example, the entries of A and B can be independent, identically distributed samples from a zero-mean, unit variance Gaussian distribution. Such random reservoir weight matrices have been successfully used in many previous applications, such as pattern recognition. However, in these kinds of reservoirs, the individual values of the reservoir states are meaningless in isolation, and can only be used for an application when combined together via learned mixing weights. In addition, to implement such a reservoir in low-power hardware (e.g., an FPGA or digital ASIC), the reservoir state update requires computation proportional to the square of the number of nodes, which becomes infeasible as the number of reservoir nodes increases.
[00077] Described below is a method for optimizing the reservoir weight matrix (A) for the tasks of signal denoising and blind source separation. For signal denoising, the designed reservoir states are better able to predict the input signal, resulting in increased SNR of the denoised signal. For blind source separation, each reservoir state in the optimized reservoir measures the amount of signal energy near a particular resonant frequency, which can be used as a cueing mechanism to make the BSS subsystem start tracking a pulse near the given frequency. In addition, the BSS subsystem can use the designed reservoir states as a basis with which to construct a bank of adaptable FIR filters to track individual narrowband pulses within the input signal mixture. Finally, the computation of the designed reservoir states scales linearly with the number of nodes, thus enabling efficient implementation in low-power hardware.
[00078] To mathematically derive the RC output layer iteration adaptations that are used to implement the optimized denoising algorithm, the linear state space described by the equations in FIG. 4 is used. For A and B independent of the input and state space vector, the formal solution of the state equation is given as follows:

x(t) = e^{At} x(0) + e^{At} ∫₀ᵗ ds e^{-As} B u(s),

which can be verified by time-differentiating both sides of the equation. Here, e^{At} is a matrix exponential and the time integral is over a matrix quantity. An important point to note is that the initial time in the formal solution is arbitrary (up to causality, of course); thus, for any τ > 0:

x(t + τ) = e^{Aτ} x(t) + e^{Aτ} ∫₀^τ ds e^{-As} B u(t + s).

[00079] Given the state space vector at some time t_j along with the system parameters A and B and the input signal u(t_j + Δt) over the interval of interest 0 ≤ Δt ≤ τ, the system can compute all future values of the state space vector at t_j + Δt. This form naturally lends itself to parallel computation in a discretized form, and is the basis for the optimized signal prediction and denoising algorithm.
[00080] Several observations can be made about the linear state space system that enable significant simplifications in the implementation of the algorithm. As stated earlier, the matrix A must be real; additionally, when describing a passive IIR filter, the matrix A has eigenvalues (poles of the filter) that are either purely real and negative, corresponding to purely damped modes, or eigenvalues that come in complex conjugate pairs, with negative real parts to the eigenvalues. This observation allows the matrix A to be put into a purely real block-diagonal form with a real block-diagonalizing similarity transform. The block-diagonalized matrix SAS^{-1} has the form

SAS^{-1} = blockdiag(Λ₁, ..., Λₙ), with Λⱼ = [[σⱼ, ωⱼ], [-ωⱼ, σⱼ]],

where σⱼ < 0 and ωⱼ are the real and imaginary parts of the j-th complex conjugate pole pair.
[00081] Here n is the number of complex conjugate poles, with N = 2n; purely damped poles can be included as well by introducing purely diagonal eigenvalues into the canonical form (for some applications, system matrices A with only complex conjugate pair poles are used). Because any state space system can be converted to a system with block diagonal A by similarity transforms on B and C, it is assumed in the following that A is block diagonal.
[00082] Described below is the state space system in the Laplace domain. The block diagonal form provides the following:

ŷ(s) = Σⱼ [C_{2j-1}  C_{2j}] (sI₂ - Λⱼ)^{-1} [B_{2j-1}; B_{2j}] û(s) + D û(s),

where overhats denote Laplace domain quantities and I₂ is the 2x2 identity. Notice that the combination of B and C entries in the numerator in the sum contributes to only two independent quantities for each j. For each block-diagonal subspace, or equivalently, each oscillator, the contribution to the response has four independent degrees of freedom (two components each of B and C) and two constraints. This allows the system to fix all components of B to be 1, and control the transfer function with only modifications to C.
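As a small illustration of why each oscillator block responds selectively around its own resonance (an illustrative sketch only; the damping value, resonant frequencies, and probe frequency below are arbitrary, not parameters from this disclosure), the following Python snippet computes the steady-state response amplitude of a 2x2 block, with its B entries fixed to 1, to a tone at a given input frequency.

```python
import numpy as np

def block_response(sigma, w_res, w_in):
    """Steady-state amplitude of one 2x2 block [[sigma, w_res], [-w_res, sigma]]
    (B entries fixed to 1) driven by a tone at angular frequency w_in."""
    Lam = np.array([[sigma, w_res], [-w_res, sigma]])
    x = np.linalg.solve(1j * w_in * np.eye(2) - Lam, np.ones(2))
    return np.linalg.norm(x)

probe = 2.0                                  # input tone (rad/s)
for w_res in (1.0, 2.0, 3.0):                # three candidate resonant frequencies
    print(w_res, round(block_response(-0.05, w_res, probe), 2))
# The block whose resonance matches the probe frequency responds most strongly,
# which is what lets a reservoir state act as an energy measure near that frequency.
```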
[00083] For denoising applications, the denoised signal can be reconstructed using the response of the state system to delayed copies of the input signal u(t). Following the analysis of the previous paragraph, all delays on the input signal u(t) can be converted to delays on the state space vector x(t). With N_d delays on the input signal, with basic delay τ, the Laplace domain response is

ŷ(s) = Σ_{m=0}^{N_d} C_m^T (sI - A)^{-1} B_m e^{-smτ} û(s) + D û(s).

[00084] On the other hand, a system with N_d delays on the state space vector has the Laplace domain response

ŷ(s) = Σ_{m=0}^{N_d} e^{-smτ} C̃_m^T (sI - A)^{-1} B û(s) + D û(s).
[00085] The state space delayed response can be made exactly the same as the input signal delayed response by suitable identifications between the B and C entries of each 2x2 block-diagonal subspace, i.e., between the B_{2j-1}, B_{2j} and C_{2j-1}, C_{2j} coefficients associated with each delay.
In the following it is assumed that all delays in the system are applied to the state space vector x(t).
[00086] Implementation of the state space system on low-power hardware such as FPGAs may require not only discretization of the associated system equations, but proper normalization for the state space vector. Consider a single 2x2 sub-block of the block-diagonalized linear state space system. The response to a time-harmonic input signal in a 2x2 sub-block can be computed analytically, giving the asymptotic response amplitude x(ω) as a function of the input angular frequency ω. This form assumes that the B entries for the sub-block have been set to 1, in accordance with the arguments above. The maximum response value can be determined by differentiating this expression with respect to ω, and solving for the input signal frequency giving zero derivative. Assuming that the damping is small, i.e., the real part of the pole pair is small, to lowest order the maximum response is at the resonant frequency of the sub-block. Thus, each state space component can be properly normalized so that its response never exceeds a given value.
[00087] (4.3) Dynamic Reservoir Using Delay Embedding of Reservoir States
[00088] This section describes how to make a "dynamic reservoir" by applying a phase delay embedding. Phase delay embedding is a technique developed in dynamical system theory to model the dynamics of a chaotic system from its observation u₀(t) using delayed versions of the observation as a new input vector u(t). Suppose that an unknown (potentially chaotic) dynamical system embedded in an N-dimensional state space has an m-dimensional attractor. This means that though the state space has N parameters, signals from the dynamical system form trajectories that all lie on an m-dimensional sub-manifold M of the state space, and can theoretically (though not practically) be specified by as few as m parameters. The observations (received signal) u₀(t) = h[x(t)] is a projection of the state space. The phase delay embedding produces a new input vector u(t) from n delayed versions of the observation signal u₀(t)
concatenated together. According to the work of Takens (see Literature Reference No. 7), given fairly broad assumptions on the curvature of the sub-manifold M and the nondegenerate nature of the projection h[·], if the delay coordinate dimensionality n > 2m + 1, then the phase delay embedding u(t) preserves the topological structure (i.e., shape) of the dynamical system, and thus can be used to reconstruct the dynamical system from observations. More recent work (see Literature Reference No. 10) shows that the delay coordinate dimensionality can be increased more (but still not as a function of the ambient dimensionality N) to be able to preserve both the topology and geometry of the dynamical system, without complete knowledge of the dynamical system or the observation function.
[00089] As seen in FIG. 6, for some embodiments, a dynamic reservoir 400 is constructed by applying the delay-embedding 600 to each of the reservoir states to provide a time history 602 of reservoir dynamics. When combined with the designed reservoir states, delay-embedded states enable each state to be predicted and denoised separately, which can be used to generate a denoised spectrogram of the input signal.
[00090] In FIG. 6, everything to the left of the time history 602 is a diagrammatic instantiation of the differential equation below it: ẋ(t) = Ax(t) + Bu₀(t). The triangles 604 indicate multiplication by a scalar, vector, or matrix constant. The plus sign 606 indicates summation of two or more signals, and the integration sign 608 indicates a running integral. The input signal u₀(t) is mapped into the reservoir by the vector B, and the change in reservoir state ẋ(t) is determined by combining Bu₀(t) with the current reservoir state x(t) scaled by the state transition matrix A. The integral 608 indicates that the reservoir state is obtained by the running integral of the change in reservoir state ẋ(t).
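A minimal Python sketch of such a delay-embedding buffer is shown below (illustrative only; the class name, the use of sample-indexed delays, and the zero-padding before the buffer fills are assumptions of this sketch, not details taken from the disclosure). It maintains a running history and returns, for each time step, the vector of delayed copies (x(t), x(t - τ), ..., x(t - Kτ)).

```python
import numpy as np
from collections import deque

class DelayEmbedding:
    """Keeps the last K*tau samples of the reservoir state vector and returns the
    delay-embedded history (x(t), x(t - tau), ..., x(t - K*tau)), with tau in samples."""

    def __init__(self, n_states, K, tau_samples):
        self.n_states, self.K, self.tau = n_states, K, tau_samples
        self.buf = deque(maxlen=K * tau_samples + 1)

    def push(self, x):
        self.buf.appendleft(np.asarray(x, dtype=float))   # newest sample at index 0

    def history(self):
        cols = []
        for k in range(self.K + 1):
            idx = k * self.tau
            cols.append(self.buf[idx] if idx < len(self.buf) else np.zeros(self.n_states))
        return np.stack(cols, axis=1)                      # shape (n_states, K + 1)

emb = DelayEmbedding(n_states=4, K=3, tau_samples=5)
for t in range(40):
    emb.push(np.full(4, np.sin(0.1 * t)))
print(emb.history().shape)                                 # (4, 4)
```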
[00091] (4.4) Adaptive Signal Prediction using Reservoir States
[00092] This section describes the adaptive signal prediction module that uses the dynamic reservoir states in order to perform signal denoising. Given that 1) delay-embedded observations can effectively model dynamical system behavior and 2) reservoirs with delay-embedded states can be designed to have the same behavior as reservoirs with delay-embedded inputs, the system described herein leverages the time history of these reservoir state variables to perform short-term predictions of the observations. The system uses a dynamic reservoir computer to learn the prediction function F: u₀(t + τ) ≈ F[u₀(t)].
[00093] For further understanding, FIG. 7 depicts a continuous time architecture of the adaptive signal prediction module 306. The model shows the dynamic reservoir 400 with fixed connections and adaptable output layers attached to it. A wideband ADC frontend provides input to the dynamic reservoir 400, whose output layer weights are adapted based on short-time prediction to de-noise the input signal. The weights of the output layers are adapted via the gradient learning algorithm described below. The gradient descent learning algorithm is based on short-time prediction of the input signal. Since noise is random and unpredictable, the predicted signal y(t) = ũ₀(t + τ) will be free of noise.
[00094] The dynamic reservoir 400 in FIG. 6 satisfies the following set of coupled ordinary differential equations (ODEs):

ẋ(t) = Ax(t) + Bu₀(t)
y(t) = Σ_{k=1}^{K+1} c_k(t)^T x(t - (k - 1)τ) + d(t)^T u(t),

where u(t) = [u₀(t), u₀(t - τ), ..., u₀(t - Kτ)]^T.
[00095] To perform short-time prediction of the input signal, the system uses an online gradient descent algorithm. The idea is to enforce exact or otherwise better prediction of the current time point that is used in the delay embedding. The predicted input value at time (t + τ) is calculated from the current values of the output weights (c_k(t), d(t)) and the current and past values of the states x and the inputs u. The weights are adapted by minimizing a quadratic objective function of the form

E[c₁, ..., c_{K+1}, d] = |u₀(t) - ỹ(t - τ)|² + λ_c Σ_{k=1}^{K+1} ||c_k(t)||² + λ_d ||d(t)||²,

where λ_c and λ_d are parameters that weight the importance of the output weights c_k and d, and

ỹ(t - τ) = Σ_{k=1}^{K+1} c_k(t)^T x(t - kτ) + d(t)^T u(t - τ).

Note that ỹ(t - τ) is the delayed output expressed by the delayed values of x and u and the current values of the output weights c_k and d, and thus in general ỹ(t - τ) ≠ y(t - τ). However, this approximation is reasonable, and allows the system to not require storage of time histories of output weights, facilitating more efficient hardware implementation.
[00096] The gradients of E[c₁, ..., c_{K+1}, d] are computed with respect to the output weights c_k and d. Based on these gradients, the weight updates to {c_k(t)}_{k=1}^{K+1} and d(t) satisfy the following ordinary differential equations (ODEs):

ċ_k(t) = -g_c c_k(t) + μ_c ε(t) x(t - kτ), k = 1, 2, ..., K + 1
ḋ(t) = -g_d d(t) + μ_d ε(t) u(t - τ),

where g_c = 2λ_c μ_c and g_d = 2λ_d μ_d are the "forgetting" rates with respect to c_k and d, μ_c and μ_d are the learning rates with respect to c_k and d, and ε(t) = u₀(t) - ỹ(t - τ) is the error signal.
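For readers who prefer code, the snippet below is a minimal discrete-time sketch of this weight adaptation (a simple Euler step with placeholder learning rates and random stand-in data; the exact iteration used by the disclosure is the one summarized in Algorithm 1 further below).

```python
import numpy as np

def update_weights(c, d, x_hist, u_hist, u0_now, dt, mu_c, mu_d, g_c, g_d):
    """One Euler step of the weight-update ODEs.
    x_hist[j] holds x(t - j*tau); u_hist[j] holds the delay-embedded input u(t - j*tau)."""
    # Delayed prediction y~(t - tau) from the current weights and delayed states/inputs.
    y_delayed = sum(c[k] @ x_hist[k + 1] for k in range(len(c))) + d @ u_hist[1]
    err = u0_now - y_delayed                                # eps(t) = u0(t) - y~(t - tau)
    for k in range(len(c)):
        c[k] = c[k] + dt * (-g_c * c[k] + mu_c * err * x_hist[k + 1])
    d = d + dt * (-g_d * d + mu_d * err * u_hist[1])
    return c, d, err

rng = np.random.default_rng(0)
N, K = 6, 3                                                 # reservoir size, embedding length
c = [np.zeros(N) for _ in range(K + 1)]                     # c_1 ... c_{K+1}
d = np.zeros(K + 1)
x_hist = [rng.standard_normal(N) for _ in range(K + 2)]     # x(t), x(t - tau), ...
u_hist = [rng.standard_normal(K + 1) for _ in range(2)]     # u(t), u(t - tau)
c, d, err = update_weights(c, d, x_hist, u_hist, u0_now=0.3,
                           dt=1e-3, mu_c=0.1, mu_d=0.1, g_c=1e-3, g_d=1e-3)
print(round(float(err), 4))
```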
[00097] The ODEs for the dynamic reservoir and the weight adaptation system can be implemented directly in analog hardware. To implement the above ODEs in software or efficient digital hardware (e.g., field-programmable gate arrays (FPGAs) or custom digital application-specific integrated circuits (ASICs)), the update equations must be discretized. For implementing the process in software or digital hardware, the ODEs are converted to delay difference equations (DDEs). For a linear dynamical system, the state-space representation is:

ẋ(t) = A(t)x(t) + B(t)u(t)
y(t) = C(t)x(t) + D(t)u(t).
[00098] Given the discrete time-step size τ, the equivalent DDE is obtained that describes the exact same filter dynamics:

x(t) = e^{Aτ} x(t - τ) + ∫_{t-τ}^{t} e^{A(t-s)} B u(s) ds
y(t) = C(t)x(t) + D(t)u(t).
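As a quick illustration of this ODE-to-DDE conversion (a sketch only: it assumes a zero-order hold on u over each step, whereas the disclosure goes on to use linear basis functions, and the oscillator matrix and drive below are arbitrary examples), the following Python code discretizes ẋ = Ax + Bu with the matrix exponential and iterates the resulting difference equation.

```python
import numpy as np
from scipy.linalg import expm

def discretize_zoh(A, B, dt):
    """Exact discretization of x' = A x + B u under a zero-order hold on u:
    x[n] = Ad x[n-1] + Bd u[n-1], with Ad = e^(A dt) and Bd = A^{-1}(Ad - I)B."""
    Ad = expm(A * dt)
    Bd = np.linalg.solve(A, (Ad - np.eye(A.shape[0])) @ B)
    return Ad, Bd

A = np.array([[-0.1, 2.0], [-2.0, -0.1]])     # one lightly damped oscillator block
B = np.array([[1.0], [1.0]])
Ad, Bd = discretize_zoh(A, B, dt=0.01)

x = np.zeros((2, 1))
for n in range(1000):
    u = np.array([[np.sin(2.0 * n * 0.01)]])  # tone near the block's resonance
    x = Ad @ x + Bd @ u
print(np.round(x.ravel(), 3))
```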
[00099] This shows that the current reservoir state x(t) is a function of the reservoir state at the previous time step x(t - τ) and the input signal u(t) over the interval [t - τ, t]. Since the entire continuous interval is not available in software or digital hardware, in the digital CSP, u(t) is approximated over the interval using linear basis functions. Given the sampling period Δt for u(t), a set of samples can be collected as follows: u_i = u(t - (i - 1)Δt), 1 ≤ i ≤ n_e + 1, where n_e = τ/Δt is the number of sampling intervals within the time window defined by τ (see FIG. 11A). As seen in FIG. 11B, the input signal is approximated from the samples as u(t) ≈ Σ_i u_i N_i(t), where N_i(t) = T(t - (i - 1)Δt) is a shifted version of the triangle function T(t):

T(t) = 1 - |t|/Δt for |t| ≤ Δt, and T(t) = 0 otherwise.
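The piecewise-linear reconstruction described here amounts to ordinary linear interpolation between the samples, as the small Python sketch below illustrates (sample values and step size are arbitrary examples).

```python
import numpy as np

def triangle(t, dt):
    """Hat function T(t): 1 at t = 0, falling linearly to 0 at |t| = dt."""
    return np.maximum(0.0, 1.0 - np.abs(t) / dt)

def reconstruct(samples, dt, t):
    """u(t) ~ sum_i u_i * T(t - i*dt), i.e. linear interpolation of the samples."""
    return sum(u_i * triangle(t - i * dt, dt) for i, u_i in enumerate(samples))

dt = 0.25
samples = np.sin(2 * np.pi * np.arange(0.0, 1.01, dt))    # u_i on a coarse grid
t_fine = np.linspace(0.0, 1.0, 9)
print(np.round(reconstruct(samples, dt, t_fine), 3))
```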
[000101] Based on the linear basis approximation, the DDE for the reservoir state x(t) becomes:

x(t) = e^{Aτ} x(t - τ) + Σ_{i=1}^{n_e+1} (∫_{t-τ}^{t} e^{A(t-s)} N_i(s) ds) B u_i.
[000102] Without loss of generality, set t = τ. If the two auxiliary matrices B_e and B̃_e are defined as the integrals of e^{A(τ-s)}B weighted by the two linear basis functions that overlap each sampling interval, then x(τ) can be written as a weighted sum of e^{Aτ}x(0) and the input samples u_i, with the weights given by B_e and B̃_e.
[000103] Based on this, iterative updates can be derived for the state (x), output (y), and weights (c_k and d), which are summarized in Algorithm 1 below.
Algorithm 1: Iterative algorithm for the general discrete time model

Initialization:
x[1] = 0, c_k[1] = 0, d[1] = 0, k = 1, 2, ..., (K + 1)

Iteration (n ≥ 2):
u[n] = [u₀[n], u₀[n - N_τ], ..., u₀[n - K N_τ]]^T
x[n] = A x[n - 1] + B u₀[n]
ε[n] = u₀[n] - (Σ_{k=1}^{K+1} c_k[n - 1]^T x[n - k N_τ] + d[n - 1]^T u[n - N_τ])
c_k[n] = (1 - Δt g_c) c_k[n - 1] + Δt μ_c ε[n] x[n - k N_τ], k = 1, 2, ..., (K + 1)
d[n] = (1 - Δt g_d) d[n - 1] + Δt μ_d ε[n] u[n - N_τ]
y[n] = Σ_{k=1}^{K+1} c_k[n]^T x[n - (k - 1) N_τ] + d[n]^T u[n]

Here N_τ denotes the number of time steps corresponding to the delay τ.

[000104] It is noted that the example architecture in FIG. 7 enables the user to select different learning methods governed by local and/or global learning rules. The local/global learning rules are set by selecting which of the two signals is input into the summing junctions 700 and 702 that calculate the error signals
ε₁(t), ..., ε_N(t). Without loss of generality, examine the summing junction 700 corresponding to the first error signal ε₁(t) = s₁(t) - ŝ₁(t). In learning method #1 (global learning), the switches to the left and right of the summing junction 700 are both flipped upward, so that s₁(t) = u₀(t - τ_DIS) and ŝ₁(t) = y(t - τ_DOS), and thus ε₁(t) = u₀(t - τ_DIS) - y(t - τ_DOS) is the error between the delayed input signal and the global output signal. In the global learning mode, each state element is adapted by the same global error function. In learning method #2 (local learning, and as shown), the switches to the left and right of the summing junction 700 are flipped downward, so that s₁ and ŝ₁ are taken from the state path instead, and thus ε₁(t) is the error between the predicted state element x̃₁ and a delayed version of the actual state element x₁. This same behavior applies for the summing junction 702
corresponding to the error signals ε₂(t), ..., ε_N(t). Since there are only two learning modes, all switches in FIG. 7 are either flipped upward or flipped downward.
[000105] In the case when the state update error signals are generated from the noisy input state variables and their denoised versions, the learning is local and it does not include any information from the output signal. In this case, only the last output layer that combines the de-noised states into the output uses a global error, namely the difference between the input and output. For use in this system, the local learning rules are used to ensure that the individual reservoir states, used by the BSS subsystem to develop adaptive FIR filters, are sufficiently de-noised.
[000106] In FIG. 7, the input signal u₀(t) is fed into the dynamic reservoir 400, producing the N-dimensional reservoir state vector x(t). The elements of the reservoir state vector x₁(t), ..., x_N(t) are each fed individually into their own adaptive signal prediction modules. Without loss of generality, the first element of the reservoir state vector x₁(t) is sent through a length K delay embedding to produce the delay-embedded vector (x₁(t), x₁(t - τ₁), ..., x₁(t - Kτ₁)). The delay-embedded vector of x₁(t) is combined using adaptable linear mixing weights C₁₁, ..., C₁,K+1 and delayed to obtain the auxiliary state x̃₁(t). The linear mixing weights C₁₁, ..., C₁,K+1 are adapted via gradient descent using the error signal ε₁(t) based on the learning modes described above. This process proceeds analogously for each of the reservoir state elements x₂(t), ..., x_N(t) to produce the auxiliary state elements x̃₂(t), ..., x̃_N(t). The set of auxiliary states x̃₁(t), ..., x̃_N(t) are combined using adaptable linear mixing weights C₀₁, ..., C₀N and delayed to obtain the final output signal y(t). The linear mixing weights C₀₁, ..., C₀N are adapted via gradient descent using the error signal ε₀(t) = u₀(t - τ_DIO) - y(t - τ_DOO). In the above, the delay parameters τ₁, ..., τ_N, τ_DIS, τ_DSS, τ_DIO, τ_DOS, and τ_DOO can all be adjusted by the user based on the timing requirements of the computing hardware used to instantiate this invention.
[000107] (4.5) Blind Source Separation using Reservoir States
[000108] The architecture for the Blind Source Separation (BSS) filters 310 used to separate and track each pulse from the signal mixture is shown in FIG. 8. The input to this module is the set of denoised reservoir state signals 308. These signals are fed into a Reservoir FIR filter block 800, which implements an FIR filter with a fixed bandwidth and adjustable center frequency by applying a particular set of linear mixing weights to the reservoir state signals. The method for constructing linear mixing weights that, when applied to the reservoir states, implement an FIR filter with a given bandwidth and center frequency is detailed below in Section (4.5.1).
[000109] The BSS filters (in the FIR Filter Block 800) extract unique source signals by maximizing the power of each filter output signal. The state of each BSS filter is controlled by a Filter Controller block 802, which measures the power in each reservoir state to determine which BSS filters are actively tracking signals, and which frequency bands within the input signal contain pulses for BSS filters to track. The Frequency Update block 804 accepts the filter output signals 806 as input and uses this information to update the filter center frequencies. The center frequencies of the filters (in the FIR Filter Block 800) are updated with the new center frequencies, completing the feedback loop. As the system operates, the source signals are extracted and features, such as Pulse Descriptor Words (PDWs), may then be extracted from the separated source signals. The components of the BSS filters 310 are described in further detail below.
[000110] (4.5.1) Reservoir State-based FIR Filters
[000111] The first stage of the BSS filter module 310 is an FIR Filter Block 800 which includes a set of adaptable FIR filters. These FIR filters are implemented as linear mixing weights applied to the reservoir state signals (e.g., adaptively filtering) to achieve a desired filter response. Each filter in the bank 800 receives the set of denoised reservoir state signals as input.
[000112] The center frequency of each filter is adaptable, while its bandwidth is fixed.
As a mixture signal is run through the filters, each one adapts in such a way that its center frequency converges on the frequency of a unique source signal. The frequency of a source signal may be a function of time. This system only requires that a source signal be well characterized by a single frequency over any short interval of time.
[000113] The power function is a feedback component of this module and is responsible for guiding the adaptation of the filter center frequencies. Conceptually, each filter may be viewed as having its own power function, with all individual power functions having the same form. The power is computed as

p(x, f₁) = Σ_n ((h(f₁) * x)(n))²,

where x is the input signal and h(f₁) is the FIR filter with fixed bandwidth and center frequency f₁. The objective is to maximize the power, which tends to drive the filter's center frequency towards one of the source signal frequencies. The adaptation for this objective occurs on a fast time-scale in order to cover a very wide bandwidth. The output of the power function is the power signal. The power signal is used to adapt the filter center frequencies and determine the filter states. The power signal is defined as the normalized power of the filter output. The normalized power is given by

p_norm = ( Σ_{n=P-M}^{P} ((h(f₁) * x)(n))² / M ) / var(x(t)),

where P denotes the current sample index, M is the number of samples used in the average, x(t) is the input to the filter, and var(x(t)) is the variance of the input computed over the same M samples.
[000114] Described below is the procedure used to determine the pole and zero structure of the filter based on the updated center frequency. A filter is described by the following state space system equations:

ẋ(t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t).
[000115] The poles of the filter are given by the eigenvalues of the reservoir state transition matrix A, while the zeros of the filter can be changed using the other state space system coefficients, B, C, and D. For a passive filter, the matrix A has eigenvalues that are in the left half plane and are either purely real (and negative), or come in complex conjugate pairs. This observation allows the filter structure to be block-diagonalized, so that one can consider the system as N independent filters, each with frequency described by the imaginary part of the complex conjugate pair of A. When the eigenvalues of A all have small real part, corresponding to low-loss systems, the imaginary parts of the eigenvalues determine the resonant frequencies of the filter. To achieve a channelized response, one would like to choose the poles of A based on the expected bandwidth of the input signal. However, because the reservoir state transition matrix is fixed, the filter parameters must be adapted using the C mixing weight matrix.
[000116] With the eigenvalues (poles) of A chosen as described above, the invention then uses a numerical minimization procedure to determine the coefficients of C that yield a transfer function that is as close as possible to a desired transfer function with given ripple properties. Because the filter coefficients C undergo further modification in this online learning procedure, one need only determine initial C coefficients enabling rapid convergence in the feedback scheme. An example output of the optimization procedure for a channelized transfer function with periodic ripple is shown in FIG. 9.
[000117] FIG. 9 is a graph depicting an example optimal transfer function for the initial state of the adaptive filter. Poles are chosen to give 20 channels between frequencies 900 of 1.05 and 2.0, plotted against gain 902. The desired gain 904 varies between 1 and 0.707. The optimal filter is determined by a numerical minimization procedure for the C coefficients 906.
[000118] (4.5.2) Filter Controller
[000119] The Filter State Controller 802 is responsible for determining the current state of each filter (in the FIR Filter Block 800). Filters exist in one of three possible states: inactive, tracking, and holding. A filter is in the inactive state when it does not currently have a source signal to extract. A filter is in the tracking state while it is actively extracting a source signal. A filter is in the holding state when it was in the process of extracting a source signal, but lost the signal. The following state transitions are permitted: inactive → inactive, inactive → tracking, tracking → tracking, tracking → holding, holding → holding, holding → tracking, holding → inactive.
[000120] FIG. 10 is a diagram of the algorithm implemented by the Filter State Controller. The filter controller uses the power signals derived from the reservoir state signals to update the filter states. power, state, signalCount, and holdingCount are filter-specific variables. noiseThreshold, minSignalCount, and maxHoldCount are fixed numeric parameters that are the same for every filter and are set by the user. A filter may be in one of three possible states: inactive, tracking, and holding. The variables holdingCount and signalCount are specific to each filter.
[000121] First, the presence or absence of a signal with respect to a given filter is determined 1000 by thresholding the power, defined as the normalized power of the reservoir state signal. In the presence of a signal, the value of this measure will rise above a pre-defined threshold. The threshold (noiseThreshold) is determined by observing the value under pure noise. If the power signal is above the threshold, it means that the filter is detecting a source signal in a particular frequency band.
[000122] Next, signalCount is evaluated 1002. The signalCount variable is the number of consecutive time-steps that the reservoir state power signal has been above the threshold. If signalCount is greater than the parameter minSignalCount, then the filter's state is set to tracking 1004 because an actual signal is being observed, not just noise. Otherwise 1006, if the filter's state is holding and the variable holdingCount is greater than the parameter maxHoldCount, then the filter's state is set back to inactive 1008. If this is false, then no change 1010 is made to the filter's state. The variable holdingCount is the number of consecutive time-steps that the filter has been in the holding state. It is used to limit the amount of time that a filter can spend in this state.
[000123] If the power signal drops below noiseThreshold and the filter is in the tracking state 1012, then it transitions to the holding state 1014. On the other hand, if it is already in the holding state and holdingCount is greater than maxHoldCount, the filter transitions back to the inactive state 1016. Otherwise, no change 1018 is made to the filter's state.
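A compact Python sketch of this state logic is shown below (a sketch only: threshold values and counts are placeholders, the exact branching order of FIG. 10 is approximated, and the protected-region bookkeeping handled elsewhere in the system is omitted).

```python
# Sketch of the filter-state logic of FIG. 10 with placeholder parameter values.
INACTIVE, TRACKING, HOLDING = "inactive", "tracking", "holding"

class FilterStateController:
    def __init__(self, noise_threshold, min_signal_count, max_hold_count):
        self.noise_threshold = noise_threshold
        self.min_signal_count = min_signal_count
        self.max_hold_count = max_hold_count
        self.state = INACTIVE
        self.signal_count = 0      # consecutive steps with power above threshold
        self.holding_count = 0     # consecutive steps spent in the holding state

    def step(self, power):
        if power > self.noise_threshold:
            self.signal_count += 1
            if self.signal_count > self.min_signal_count:
                self.state = TRACKING          # an actual signal, not just noise
                self.holding_count = 0
            elif self.state == HOLDING:
                self.holding_count += 1
                if self.holding_count > self.max_hold_count:
                    self.state = INACTIVE
        else:
            self.signal_count = 0
            if self.state == TRACKING:
                self.state = HOLDING           # lost the pulse; hold the filter
                self.holding_count = 0
            elif self.state == HOLDING:
                self.holding_count += 1
                if self.holding_count > self.max_hold_count:
                    self.state = INACTIVE
        return self.state

ctrl = FilterStateController(noise_threshold=1.0, min_signal_count=3, max_hold_count=5)
powers = [0.2, 2.0, 2.1, 2.2, 2.3, 0.1, 0.1, 2.5, 2.6, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]
print([ctrl.step(p) for p in powers])
```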
[000124] (4.5.3) Frequency Update Block
[000125] The Filter Center Frequency Adapter is responsible for updating the positions of the filters in the frequency domain by utilizing the filter output power signals. In addition to adapting the filters, it ensures that a filter is able to track a source signal for the duration of at least a single pulse. This way, no pulses that are extracted from the mixture signal are split between different filter outputs.
[000126] As noted above, each filter exists in one of three states (i.e., inactive, tracking, and holding). In the inactive state, the filter is not tracking any particular signal. Once the filter controller detects that a particular frequency band contains enough power to indicate the presence of a pulse, the filter enters the tracking state, initialized with a center frequency equal to the resonant frequency of the reservoir state that is above the noiseThreshold, and the filter's protected region is set. If during the tracking state a filter loses the signal it was tracking, then it will enter the holding state. In the holding state a filter is held at its current center frequency for a fixed period of time maxHoldCount and the filter's protected region remains in place. A non-limiting example of such a fixed period of time is in an embodiment of a related invention as described in U.S. Non-Provisional Application No. 15/452,155, where maxHoldCount was set to 450 filter outputs, which corresponded to 4500 time steps for an input signal with sampling rate 180 GHz. If during this period of time a signal returns, then the filter is switched back to the tracking state. On the other hand, if after time maxHoldCount no signal has returned, then the filter is switched to the inactive state and its protected region is removed.
[000127] The center frequency of the filter can be updated using a combination of a gradient descent algorithm and the Momentum method. A non-limiting example of a gradient descent algorithm is Resilient Propagation (RProp) (see Literature Reference No. 4). RProp uses only the sign information of the gradient rather than its magnitude, making it favorable for limited precision hardware implementation. The RProp update is given by
f_{t+1} = f_t + d_t · Δf_t,

where d_t = sgn(p(x, f_t + ε) − p(x, f_t)) is the sign of the derivative of the filter output power, and Δf_t is the frequency increment. Δf_t is determined by the sequence of sign changes of the output power derivative:

Δf_t = min(μ_+ · Δf_{t−1}, Δf_max)  if d_t · d_{t−1} > 0,
Δf_t = max(μ_− · Δf_{t−1}, Δf_min)  if d_t · d_{t−1} < 0,
Δf_t = Δf_{t−1}  otherwise,

where μ_+, μ_−, Δf_max and Δf_min are user-defined parameters that determine the dynamics of the RProp update. The Momentum method updates the center frequency by fitting a linear function to some number of past center frequencies and then extrapolating this linear model to the next time-step.
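For purposes of illustration only, one possible realization of the RProp-style center-frequency step described above is sketched below in Python. The probe function p, the probe offset eps, and the default parameter values are non-limiting assumptions chosen for illustration.

import numpy as np

def rprop_step(f_center, d_prev, df_prev, p, eps=1e-3,
               mu_plus=1.2, mu_minus=0.5, df_max=1.0, df_min=1e-4):
    # One RProp-style update of a filter center frequency.
    # p: callable returning the filter output power at a trial center frequency
    #    (stands in for p(x, f) in the text; an assumption for illustration).
    # d_prev, df_prev: sign of the power derivative and frequency increment
    #    from the previous step (initialize to 0 and a small positive value).
    d = float(np.sign(p(f_center + eps) - p(f_center)))
    if d * d_prev > 0:
        df = min(mu_plus * df_prev, df_max)    # same sign: grow the step
    elif d * d_prev < 0:
        df = max(mu_minus * df_prev, df_min)   # sign change: shrink the step
    else:
        df = df_prev                           # either sign is zero: keep the step
    return f_center + d * df, d, df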
[000128] If the filter state is tracking and the variable trackholdCount is greater than the user-specified parameter fitLimit, then both Gradient Descent and Momentum are used to determine the next center frequency. If the filter state is tracking but trackholdCount does not exceed fitLimit, only Gradient Descent is used. The variable trackholdCount is the number of consecutive time-steps that the filter has been in either the tracking or the holding state. The condition trackholdCount > fitLimit allows only those filters that have been tracking a source signal sufficiently long to use the Momentum method. If the suggested next center frequency produced by Gradient Descent is f_G and that suggested by Momentum is f_M, then the next center frequency is given by

f_new = c_1 · f_G + c_2 · f_M,

where c_1 and c_2 are positive constants such that c_1 + c_2 = 1. If c_1 > c_2, then there is more emphasis on the portion of the mixture signal that the filter is currently seeing, while if c_2 > c_1 the linear trend of past center frequencies plays a stronger role in determining the new frequency. Typically, c_1 = c_2 = 0.5. If the filter state is holding and trackholdCount > fitLimit, then sufficient signal tracking has been performed to utilize Momentum to update the center frequency, but a signal is not currently being tracked and thus Gradient Descent is not used. Otherwise, there is no change to the filter center frequency.
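For purposes of illustration only, the selection among Gradient Descent, Momentum, and their weighted combination described in paragraph [000128] may be sketched as the following non-limiting Python fragment. The least-squares line fit is an assumed realization of "fitting a linear function to some number of past center frequencies", and the default weights follow the typical values c_1 = c_2 = 0.5 noted above.

import numpy as np

def momentum_frequency(past_centers):
    # Fit a line to the recent center-frequency history and extrapolate
    # one time-step ahead (assumed least-squares realization).
    if len(past_centers) < 2:
        return past_centers[-1]
    t = np.arange(len(past_centers))
    slope, intercept = np.polyfit(t, past_centers, 1)
    return slope * len(past_centers) + intercept

def next_center_frequency(state, trackholdCount, fitLimit,
                          f_gradient, past_centers, c1=0.5, c2=0.5):
    # Selection rule of paragraph [000128]; c1 + c2 = 1 is assumed by the caller.
    if state == "tracking" and trackholdCount > fitLimit:
        return c1 * f_gradient + c2 * momentum_frequency(past_centers)
    if state == "tracking":
        return f_gradient                          # Gradient Descent only
    if state == "holding" and trackholdCount > fitLimit:
        return momentum_frequency(past_centers)    # Momentum only
    return past_centers[-1]                        # no change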
[000129] (4.6) Control / Source Selector Module
[000130] As shown in FIG. 3, the fourth module of the Cognitive Signal Processor is the control/source selector 312. The control/source selector 312 prevents more than one filter from extracting any given source signal at the same time. It enforces the protected region of each filter that is in the tracking or holding state. The protected region is an interval in the frequency domain that is centered on a filter's center frequency. For some embodiments, the center frequency of a filter is not permitted to exist within another filter's protected region. A general policy governing the resolution of conflicts that arise when a filter attempts to move within another filter's protected region is not prescribed, since such a policy is dependent on the specifics of the center frequency adaptation algorithm. For example, as illustrated in an embodiment of a related invention as described in U.S. Non-Provisional Application No. 15/452,155, for an input signal sampling rate of 180 GHz, the closest that a new center frequency suggested by Gradient Descent can be to any old center frequency of other filters is 5 GHz. If there is a conflict, then the filter that had the center frequency at the previous time instant is allowed to maintain that center frequency, and the filter that suggested the new center frequency must remain with its current center frequency.

[000131] Finally, while this invention has been described in terms of several embodiments, one of ordinary skill in the art will readily recognize that the invention may have other applications in other environments. It should be noted that many embodiments and implementations are possible. Further, the following claims are in no way intended to limit the scope of the present invention to the specific embodiments described above. In addition, any recitation of "means for" is intended to evoke a means-plus-function reading of an element and a claim, whereas any elements that do not specifically use the recitation "means for" are not intended to be read as means-plus-function elements, even if the claim otherwise includes the word "means". Further, while particular method steps have been recited in a particular order, the method steps may occur in any desired order and fall within the scope of the present invention.
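For purposes of illustration only, the protected-region conflict check exemplified in paragraph [000130] may be sketched as the following non-limiting Python fragment. The 5 GHz guard value follows the cited example, and the function and argument names are illustrative assumptions.

def resolve_center_frequency(proposed, current, other_centers, guard=5.0e9):
    # Reject a proposed center frequency that falls inside another filter's
    # protected region (an interval of +/- guard around that filter's center).
    # On conflict, the filter keeps its current center frequency and the filter
    # already occupying the region retains its own.
    for other in other_centers:
        if abs(proposed - other) < guard:
            return current
    return proposed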

Claims

CLAIMS
What is claimed is:
1. A cognitive signal processor for signal denoising and blind source separation, the cognitive signal processor comprising:
one or more processors configured to perform operations of:
receiving a mixture signal that comprises a plurality of source signals;
generating a denoised reservoir state signal by mapping the mixture signal to a dynamic reservoir to perform signal denoising; and
identifying at least one separated source signal by adaptively filtering the denoised reservoir state signal.

2. The cognitive signal processor as set forth in Claim 1, wherein filtering the denoised reservoir state signal is performed with a bank of filters.

3. The cognitive signal processor as set forth in Claim 2, further comprising an operation of controlling the bank of filters to cause each filter within the bank of filters to filter a unique waveform.

4. The cognitive signal processor as set forth in Claim 3, wherein each filter has an adaptable center frequency.

5. The cognitive signal processor as set forth in Claim 1, wherein adaptively filtering the denoised reservoir state signal further comprises operations of:
detecting that a particular frequency band possesses a pulse;
switching a first filter to a tracking state with a center frequency equal to a resonant frequency of a reservoir state corresponding to the particular frequency band; and
setting the center frequency of the first filter as a protected region to prevent other filters within a bank of filters from sharing the center frequency.
6. The cognitive signal processor as set forth in Claim 5, wherein adaptively filtering the denoised reservoir state signal further comprises operations of:
switching the first filter to a holding state if the first filter loses the pulse of the particular frequency band;
maintaining the first filter in the holding state for a fixed period of time while maintaining the protected region;
if during the fixed period of time the pulse returns, switching the first filter to the tracking state, otherwise switching the first filter to an inactive state and removing the protected region.

7. The cognitive signal processor as set forth in Claim 1, wherein generating the denoised reservoir state signal further comprises delay embedding the reservoir state signal to generate a reservoir state history.

8. The cognitive signal processor as set forth in Claim 1, wherein generating the denoised reservoir state signal further comprises generating a predicted input signal a small time step ahead of the mixture signal, wherein an error between the predicted input signal and mixture signal is used to update output weights of the dynamic reservoir.

9. The cognitive signal processor as set forth in Claim 1, wherein generating the denoised reservoir state signal is performed with a dynamic reservoir implemented in analog hardware by satisfying a set of ordinary differential equations.

10. The cognitive signal processor as set forth in Claim 1, wherein generating the denoised reservoir state signal is performed with a dynamic reservoir implemented in software or digital hardware by converting a set of ordinary differential equations to delay difference equations.
11. A computer program product for signal denoising and blind source separation, the computer program product comprising:
a non-transitory computer-readable medium having executable instructions encoded thereon, such that upon execution of the instructions by one or more processors, the one or more processors perform operations of:
receiving a mixture signal that comprises a plurality of source signals;
generating a denoised reservoir state signal by mapping the mixture signal to a dynamic reservoir to perform signal denoising; and
identifying at least one separated source signal by adaptively filtering the denoised reservoir state signal.

12. The computer program product as set forth in Claim 11, wherein filtering the denoised reservoir state signal is performed with a bank of filters.

13. The computer program product as set forth in Claim 12, further comprising an operation of controlling the bank of filters to cause each filter within the bank of filters to filter a unique waveform.

14. The computer program product as set forth in Claim 13, wherein each filter has an adaptable center frequency.

15. The computer program product as set forth in Claim 11, wherein adaptively filtering the denoised reservoir state signal further comprises operations of:
detecting that a particular frequency band possesses a pulse;
switching a first filter to a tracking state with a center frequency equal to a resonant frequency of a reservoir state corresponding to the particular frequency band; and
setting the center frequency of the first filter as a protected region to prevent other filters within a bank of filters from sharing the center frequency.
16. The computer program product as set forth in Claim 15, wherein adaptively filtering the denoised reservoir state signal further comprises operations of:
switching the first filter to a holding state if the first filter loses the pulse of the particular frequency band;
maintaining the first filter in the holding state for a fixed period of time while maintaining the protected region;
if during the fixed period of time the pulse returns, switching the first filter to the tracking state, otherwise switching the first filter to an inactive state and removing the protected region.

17. The computer program product as set forth in Claim 11, wherein generating the denoised reservoir state signal further comprises delay embedding the reservoir state signal to generate a reservoir state history.

18. The computer program product as set forth in Claim 11, wherein generating the denoised reservoir state signal further comprises generating a predicted input signal a small time step ahead of the mixture signal, wherein an error between the predicted input signal and mixture signal is used to update output weights of the dynamic reservoir.

19. The computer program product as set forth in Claim 11, wherein generating the denoised reservoir state signal is performed with a dynamic reservoir implemented in analog hardware by satisfying a set of ordinary differential equations.

20. The computer program product as set forth in Claim 11, wherein generating the denoised reservoir state signal is performed with a dynamic reservoir implemented in software or digital hardware by converting a set of ordinary differential equations to delay difference equations.
21. A computer implemented method for signal denoising and blind source separation, the method comprising an act of:
causing one or more processors to execute instructions encoded on a non-transitory computer-readable medium, such that upon execution, the one or more processors perform operations of:
receiving a mixture signal that comprises a plurality of source signals;
generating a denoised reservoir state signal by mapping the mixture signal to a dynamic reservoir to perform signal denoising; and
identifying at least one separated source signal by adaptively filtering the denoised reservoir state signal.
22. The computer implemented method as set forth in Claim 21, wherein filtering the denoised reservoir state signal is performed with a bank of filters.

23. The computer implemented method as set forth in Claim 22, further comprising an operation of controlling the bank of filters to cause each filter within the bank of filters to filter a unique waveform.

24. The computer implemented method as set forth in Claim 23, wherein each filter has an adaptable center frequency.

25. The computer implemented method as set forth in Claim 21, wherein adaptively filtering the denoised reservoir state signal further comprises operations of:
detecting that a particular frequency band possesses a pulse;
switching a first filter to a tracking state with a center frequency equal to a resonant frequency of a reservoir state corresponding to the particular frequency band; and
setting the center frequency of the first filter as a protected region to prevent other filters within a bank of filters from sharing the center frequency.

26. The computer implemented method as set forth in Claim 25, wherein adaptively filtering the denoised reservoir state signal further comprises operations of:
switching the first filter to a holding state if the first filter loses the pulse of the particular frequency band;
maintaining the first filter in the holding state for a fixed period of time while maintaining the protected region;
if during the fixed period of time the pulse returns, switching the first filter to the tracking state, otherwise switching the first filter to an inactive state and removing the protected region.
27. The computer implemented method as set forth in Claim 21, wherein generating the denoised reservoir state signal further comprises delay embedding the reservoir state signal to generate a reservoir state history.
28. The computer implemented method as set forth in Claim 21, wherein generating the denoised reservoir state signal further comprises generating a predicted input signal a small time step ahead of the mixture signal, wherein an error between the predicted input signal and mixture signal is used to update output weights of the dynamic reservoir.

29. The computer implemented method as set forth in Claim 21, wherein generating the denoised reservoir state signal is performed with a dynamic reservoir implemented in analog hardware by satisfying a set of ordinary differential equations.

30. The computer implemented method as set forth in Claim 21, wherein generating the denoised reservoir state signal is performed with a dynamic reservoir implemented in software or digital hardware by converting a set of ordinary differential equations to delay difference equations.
PCT/US2017/062561 2017-01-18 2017-11-20 Cognitive signal processor for simultaneous denoising and blind source separation WO2018136144A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP17892664.8A EP3571514A4 (en) 2017-01-18 2017-11-20 Cognitive signal processor for simultaneous denoising and blind source separation
CN201780078246.2A CN110088635B (en) 2017-01-18 2017-11-20 Cognitive signal processor, method and medium for denoising and blind source separation

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US201762447883P 2017-01-18 2017-01-18
US62/447,883 2017-01-18
US15/452,155 2017-03-07
US15/452,412 US10153806B1 (en) 2015-03-19 2017-03-07 Cognitive architecture for wideband, low-power, real-time signal denoising
US15/452,155 US10484043B1 (en) 2015-03-19 2017-03-07 Adaptive blind source separator for ultra-wide bandwidth signal tracking
US15/452,412 2017-03-07

Publications (1)

Publication Number Publication Date
WO2018136144A1 true WO2018136144A1 (en) 2018-07-26

Family

ID=62909270

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2017/062561 WO2018136144A1 (en) 2017-01-18 2017-11-20 Cognitive signal processor for simultaneous denoising and blind source separation

Country Status (3)

Country Link
EP (1) EP3571514A4 (en)
CN (1) CN110088635B (en)
WO (1) WO2018136144A1 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109061597A (en) * 2018-08-23 2018-12-21 哈尔滨工业大学 Ionospheric clutter suppressing method based on blind source separating and the filtering of time-frequency ridge ripple domain
EP3477331A1 (en) * 2017-10-25 2019-05-01 The Boeing Company Below-noise after transmit (bat) chirp radar
EP3709049A1 (en) 2019-03-13 2020-09-16 Thales Radar processing system and related noise reduction method
US10783430B2 (en) 2016-09-26 2020-09-22 The Boeing Company Signal removal to examine a spectrum of another signal
CN112329855A (en) * 2020-11-05 2021-02-05 华侨大学 Underdetermined working modal parameter identification method and detection method based on adaptive dictionary
CN112435685A (en) * 2020-11-24 2021-03-02 深圳市友杰智新科技有限公司 Blind source separation method and device for strong reverberation environment, voice equipment and storage medium
US11002819B2 (en) 2018-04-24 2021-05-11 The Boeing Company Angular resolution of targets using separate radar receivers
CN113671471A (en) * 2021-08-18 2021-11-19 中国科学院声学研究所北海研究站 Underwater sound target detection blind source separation method
CN113835068A (en) * 2021-09-22 2021-12-24 南京信息工程大学 Blind source separation real-time main lobe interference resisting method based on independent component analysis
EP4243302A1 (en) * 2022-03-07 2023-09-13 The Boeing Company Very low frequency signals for underwater communications
US11863221B1 (en) * 2020-07-14 2024-01-02 Hrl Laboratories, Llc Low size, weight and power (swap) efficient hardware implementation of a wide instantaneous bandwidth neuromorphic adaptive core (NeurACore)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050047611A1 (en) * 2003-08-27 2005-03-03 Xiadong Mao Audio input system
US7474756B2 (en) * 2002-12-18 2009-01-06 Siemens Corporate Research, Inc. System and method for non-square blind source separation under coherent noise by beamforming and time-frequency masking
US20100158271A1 (en) * 2008-12-22 2010-06-24 Electronics And Telecommunications Research Institute Method for separating source signals and apparatus thereof
US8031117B2 (en) * 2004-09-23 2011-10-04 Interdigital Technology Corporation Blind signal separation using polarized antenna elements
US9042496B1 (en) * 2013-02-19 2015-05-26 The United States Of America, As Represented By The Secretary Of The Army Signal modulation scheme determination through an at least fourth-order noise-insensitive cumulant

Family Cites Families (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6167417A (en) * 1998-04-08 2000-12-26 Sarnoff Corporation Convolutive blind source separation using a multiple decorrelation method
US6898612B1 (en) * 1998-11-12 2005-05-24 Sarnoff Corporation Method and system for on-line blind source separation
US6421372B1 (en) * 1999-11-10 2002-07-16 Itt Manufacturing Enterprises, Inc. Sequential-acquisition, multi-band, multi-channel, matched filter
WO2001065637A2 (en) * 2000-02-29 2001-09-07 Hrl Laboratories, Llc Cooperative mobile antenna system
WO2002059772A2 (en) * 2000-11-09 2002-08-01 Hrl Laboratories, Llc Blind decomposition using fourier and wavelet transforms
CN100392723C (en) * 2002-12-11 2008-06-04 索夫塔马克斯公司 System and method for speech processing using independent component analysis under stability restraints
JP2005347946A (en) * 2004-06-01 2005-12-15 Matsushita Electric Ind Co Ltd Signal processor
BRPI0516289A (en) * 2004-09-23 2008-09-02 Interdigital Tech Corp Statistically independent signal separation using a combination of correlated and uncorrelated antenna elements
US8046219B2 (en) * 2007-10-18 2011-10-25 Motorola Mobility, Inc. Robust two microphone noise suppression system
US20090264786A1 (en) * 2008-04-21 2009-10-22 Brainscope Company, Inc. System and Method For Signal Denoising Using Independent Component Analysis and Fractal Dimension Estimation
US8632465B1 (en) * 2009-11-03 2014-01-21 Vivaquant Llc Physiological signal denoising
CN101949977B (en) * 2010-06-02 2012-12-05 华南理工大学 Railway frequency shift signal anti-interference method based on blind source separation
CN102866425A (en) * 2012-09-17 2013-01-09 中国石油大学(华东) Blind source seismic signal stable-superposition model based blind separation method
CN103051401B (en) * 2012-12-28 2015-02-04 公安部第三研究所 Cognitive radio frequency spectrum sensing method based on wavelets
US9460732B2 (en) * 2013-02-13 2016-10-04 Analog Devices, Inc. Signal source separation
CN103368264B (en) * 2013-07-22 2015-10-21 国家电网公司 Inspection platform is transported safely in a kind of substation relay protection room
JP6351538B2 (en) * 2014-05-01 2018-07-04 ジーエヌ ヒアリング エー/エスGN Hearing A/S Multiband signal processor for digital acoustic signals.
CN104473631B (en) * 2014-12-12 2016-07-13 广东工业大学 A kind of based on non-negative blind separation Fetal ECG instantaneous heart rate recognition methods and system
CA2975812A1 (en) * 2015-02-05 2016-08-11 Dh Technologies Development Pte. Ltd. Interference detection and peak of interest deconvolution
CN105628419A (en) * 2015-12-18 2016-06-01 国网安徽省电力公司 System and method of diagnosing GIS (Gas Insulated Switchgear) mechanical defects based on independent component analysis denoising
CN105962914B (en) * 2016-05-24 2019-08-27 北京千安哲信息技术有限公司 The separation method and device of breathing and heartbeat signal based on blind source separating
CN106199342B (en) * 2016-09-20 2017-05-31 西安科技大学 A kind of wire selection method for power distribution network single phase earthing failure

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7474756B2 (en) * 2002-12-18 2009-01-06 Siemens Corporate Research, Inc. System and method for non-square blind source separation under coherent noise by beamforming and time-frequency masking
US20050047611A1 (en) * 2003-08-27 2005-03-03 Xiadong Mao Audio input system
US8031117B2 (en) * 2004-09-23 2011-10-04 Interdigital Technology Corporation Blind signal separation using polarized antenna elements
US20100158271A1 (en) * 2008-12-22 2010-06-24 Electronics And Telecommunications Research Institute Method for separating source signals and apparatus thereof
US9042496B1 (en) * 2013-02-19 2015-05-26 The United States Of America, As Represented By The Secretary Of The Army Signal modulation scheme determination through an at least fourth-order noise-insensitive cumulant

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3571514A4

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10783430B2 (en) 2016-09-26 2020-09-22 The Boeing Company Signal removal to examine a spectrum of another signal
EP3477331A1 (en) * 2017-10-25 2019-05-01 The Boeing Company Below-noise after transmit (bat) chirp radar
US10921422B2 (en) 2017-10-25 2021-02-16 The Boeing Company Below-noise after transmit (BAT) Chirp Radar
US11002819B2 (en) 2018-04-24 2021-05-11 The Boeing Company Angular resolution of targets using separate radar receivers
CN109061597A (en) * 2018-08-23 2018-12-21 哈尔滨工业大学 Ionospheric clutter suppressing method based on blind source separating and the filtering of time-frequency ridge ripple domain
EP3709049A1 (en) 2019-03-13 2020-09-16 Thales Radar processing system and related noise reduction method
FR3093817A1 (en) * 2019-03-13 2020-09-18 Thales RADAR TREATMENT SYSTEM AND ASSOCIATED NOISE REDUCTION PROCESS
US11863221B1 (en) * 2020-07-14 2024-01-02 Hrl Laboratories, Llc Low size, weight and power (swap) efficient hardware implementation of a wide instantaneous bandwidth neuromorphic adaptive core (NeurACore)
CN112329855A (en) * 2020-11-05 2021-02-05 华侨大学 Underdetermined working modal parameter identification method and detection method based on adaptive dictionary
CN112329855B (en) * 2020-11-05 2023-06-02 华侨大学 Underdetermined working mode parameter identification method and detection method based on self-adaptive dictionary
CN112435685A (en) * 2020-11-24 2021-03-02 深圳市友杰智新科技有限公司 Blind source separation method and device for strong reverberation environment, voice equipment and storage medium
CN112435685B (en) * 2020-11-24 2024-04-12 深圳市友杰智新科技有限公司 Blind source separation method and device for strong reverberation environment, voice equipment and storage medium
CN113671471A (en) * 2021-08-18 2021-11-19 中国科学院声学研究所北海研究站 Underwater sound target detection blind source separation method
CN113671471B (en) * 2021-08-18 2024-04-30 中国科学院声学研究所北海研究站 Underwater sound target detection blind source separation method
CN113835068A (en) * 2021-09-22 2021-12-24 南京信息工程大学 Blind source separation real-time main lobe interference resisting method based on independent component analysis
CN113835068B (en) * 2021-09-22 2023-06-20 南京信息工程大学 Blind source separation real-time main lobe interference resistance method based on independent component analysis
EP4243302A1 (en) * 2022-03-07 2023-09-13 The Boeing Company Very low frequency signals for underwater communications

Also Published As

Publication number Publication date
CN110088635B (en) 2022-09-20
EP3571514A4 (en) 2020-11-04
CN110088635A (en) 2019-08-02
EP3571514A1 (en) 2019-11-27

Similar Documents

Publication Publication Date Title
WO2018136144A1 (en) Cognitive signal processor for simultaneous denoising and blind source separation
US10128820B2 (en) Cognitive signal processor for simultaneous denoising and blind source separation
Papi et al. Generalized labeled multi-Bernoulli approximation of multi-object densities
US10153806B1 (en) Cognitive architecture for wideband, low-power, real-time signal denoising
US9749007B1 (en) Cognitive blind source separator
JP7258513B2 (en) After transmission of below noise (BAT) chirp radar
CN111971743A (en) System, method, and computer readable medium for improved real-time audio processing
US10404299B1 (en) System for parallelized cognitive signal denoising
KR102472420B1 (en) A method and system for examining a spectrum of rf signal
US10162378B1 (en) Neuromorphic processor for wideband signal analysis
US11037057B1 (en) Cognitive signal processor
Encinas et al. Singular spectrum analysis for source separation in drone-based audio recording
US10712425B1 (en) Cognitive denoising of nonstationary signals using time varying reservoir computer
US20230109019A1 (en) Pipelined cognitive signal processor
US10484043B1 (en) Adaptive blind source separator for ultra-wide bandwidth signal tracking
Singh et al. Design of low pass digital fir filter using cuckoo search algorithm
Selesnick et al. Doppler-streak attenuation via oscillatory-plus-transient decomposition of IQ data
KR20230102757A (en) Method and system for designing meta-material surface patterns
US10720949B1 (en) Real-time time-difference-of-arrival (TDOA) estimation via multi-input cognitive signal processor
Sun et al. Blind separation with unknown number of sources based on auto-trimmed neural network
Oubouaddi et al. Parameter estimation of linear and nonlinear systems connected in parallel
Doblinger Adaptive Kalman smoothing of AR signals disturbed by impulses and colored noise
Zhou et al. Electromagnetic signal modulation classification based on multimodal features and reinforcement learning
Huang et al. Machine Learning Methods for SAR Interference Mitigation
Lan et al. Evaluation of Audio Denoising Algorithms for Application of Unmanned Aerial Vehicles in Wildlife Monitoring

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17892664

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2017892664

Country of ref document: EP

Effective date: 20190819