WO2020141921A1 - Honest random number generation and associated intelligent millimeter-wave honest random number generator - Google Patents

Info

Publication number
WO2020141921A1
WO2020141921A1 (PCT/KR2020/000089)
Authority
WO
WIPO (PCT)
Prior art keywords
random
honest
random number
signal
imhrng
Prior art date
Application number
PCT/KR2020/000089
Other languages
English (en)
Inventor
Sachin Kumar Agrawal
Kapil Sharma
Original Assignee
Samsung Electronics Co., Ltd.
Delhi Technological University
Priority date
Filing date
Publication date
Application filed by Samsung Electronics Co., Ltd. and Delhi Technological University
Publication of WO2020141921A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 7/00 Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F 7/58 Random or pseudo-random number generators
    • G06F 7/588 Random number generators, i.e. based on natural stochastic processes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/082 Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections

Definitions

  • the present disclosure generally relates to random number generation and, in particular, to honest random number generation using millimeter waves.
  • Randomness and random numbers have been used in many applications such as statistics, gaming, cryptography, etc. With the advent of technology and the emergence of neural networks, random numbers are employed as inputs for regularization of the neural networks and fragmenting of the neural networks. Random numbers can be generated by software or by hardware. Random numbers generated by software are called pseudo-random numbers. Random numbers generated by hardware are called true random numbers or honest random numbers. However, correlation and an unexpected period of the random numbers could make learning (or training) of the neural network inefficient. This can be addressed by performing batch normalization on the random numbers being input to the neural networks. However, batch normalization requires extra computational effort for generating a Gaussian probability distribution of random number signals, and thus consumes significant computational time.
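As background on how random numbers regularize a neural network, the sketch below builds an (inverted) dropout mask from any uniform random source; the helper names are illustrative, and bits from a hardware generator could equally drive `rng`:

```python
import random

def dropout_mask(n, p_keep, rng=random.random):
    # Build a binary mask from a source of uniform random numbers in [0, 1).
    # rng can be any callable, e.g. hardware random bits mapped to floats.
    return [1 if rng() < p_keep else 0 for _ in range(n)]

def apply_dropout(activations, mask, p_keep):
    # Inverted dropout: scale kept units so the expected activation is unchanged.
    return [a * m / p_keep for a, m in zip(activations, mask)]

random.seed(0)
acts = [0.5, 1.0, -0.2, 0.8]
mask = dropout_mask(len(acts), p_keep=0.75)
out = apply_dropout(acts, mask, p_keep=0.75)
```

A low-correlation random source matters here because a biased or periodic mask would systematically drop the same units, which is exactly the training inefficiency described above.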
  • Neural networks and deep learning methods are currently utilized for anomaly detection, prediction, and supporting decision-making in sensitive applications such as personal health care, pervasive body sensing, etc.
  • these applications are downloaded onto smartphones that collect data and share it with external servers hosting the applications, which perform anomaly detection, prediction, and decision support using neural networks.
  • such sharing of data leads to concerns regarding privacy and data theft.
  • a method for generating an honest random number signal using millimeter wave includes receiving a reflected signal corresponding to a millimeter wave emitted in a random direction.
  • the method includes determining an intermediate noise signal from the reflected signal based on a first set of parameters associated with the reflected signal, and characteristics of at least one obstruction derived from the reflected signal.
  • the method includes determining a random noise signal from the intermediate noise signal based on a second set of parameters associated with the emitted millimeter wave and a first random threshold value.
  • the method includes determining the honest random number signal from the random noise signal based on a plurality of second random threshold values.
  • the honest random number signal comprises a plurality of digital bits.
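For illustration only, the four method steps above can be sketched as a toy pipeline in Python. Every function name below is hypothetical, and the noise model (transmitted power minus received reflections, as described later in the disclosure) is deliberately simplified:

```python
import random

def intermediate_noise(reflected, tx_power):
    # Step 2 (sketch): derive a noise signal from the reflected signal as the
    # difference between transmitted power and received reflections.
    return [tx_power - r for r in reflected]

def random_noise(intermediate, first_threshold):
    # Step 3 (sketch): keep only fluctuations exceeding the first random
    # threshold value supplied by the first AI-controlling unit.
    return [x for x in intermediate if abs(x) > first_threshold]

def honest_random_bits(noise, second_thresholds):
    # Step 4 (sketch): compare noise samples against the plurality of
    # second random threshold values to produce digital bits.
    return [1 if n > t else 0 for n, t in zip(noise, second_thresholds)]

random.seed(1)
reflected = [random.uniform(0.0, 1.0) for _ in range(16)]  # step 1 stand-in
noise = random_noise(intermediate_noise(reflected, tx_power=1.0), 0.05)
bits = honest_random_bits(noise, [random.uniform(0.2, 0.8) for _ in range(8)])
```

The real signal chain is analog (amplifiers, comparators, ADCs); this sketch only shows how each stage's output feeds the next.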
  • an intelligent millimeter-wave honest random number generator (ImHRNG) is provided for generating an honest random number signal.
  • the ImHRNG comprises a first artificial intelligence (AI)-controlling unit to determine a first random threshold value.
  • the ImHRNG comprises a second AI-controlling unit to determine a plurality of second random threshold values.
  • the ImHRNG comprises a millimeter-wave transceiver communicatively coupled with the first AI-controlling unit. The millimeter-wave transceiver receives a reflected signal corresponding to a millimeter wave emitted in a random direction.
  • the millimeter-wave transceiver determines an intermediate noise signal from the reflected signal based on a first set of parameters associated with the reflected signal, and characteristics of at least one obstruction derived from the reflected signal.
  • the millimeter-wave transceiver determines a random noise signal from the intermediate noise signal based on a second set of parameters associated with the emitted millimeter wave and the first random threshold value.
  • the ImHRNG comprises a random number generator communicatively coupled with the millimeter-wave transceiver and the second AI-controlling unit.
  • the random number generator determines the honest random number signal from the random noise signal based on the plurality of second random threshold values.
  • the honest random number signal comprises a plurality of digital bits.
  • the advantages of the present disclosure include, but are not limited to, generating an honest random number signal from noise present in a reflected signal with less computational overhead and less complexity. Further, the honest random number signal is applied to fragment neural networks to improve user privacy and data security.
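The fragmentation idea can be pictured with a toy sketch in which honest random bits pick a split point, keeping the early layers on-device and sending the rest to a server; every name here is illustrative, not the disclosure's implementation:

```python
def fragment_layers(layers, random_bits):
    # Interpret the honest random bits as an integer and map it to a valid
    # split index, keeping at least one layer on each side of the cut.
    cut = int("".join(str(b) for b in random_bits), 2) % (len(layers) - 1) + 1
    return layers[:cut], layers[cut:]

layers = ["conv1", "conv2", "fc1", "fc2"]
on_device, on_server = fragment_layers(layers, [1, 0, 1])  # bits "101" = 5
```

Because the cut point is driven by unpredictable random bits, an external server never knows in advance which fragment of the network (and hence which intermediate representation of the user's data) it will receive.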
  • FIG. 1 illustrates a schematic block diagram of an intelligent millimeter-wave honest random number generator (ImHRNG) for generating an honest random number signal, in accordance with the embodiment of the present disclosure.
  • FIGS. 2A and 2B schematically illustrate generation of a first random threshold value for controlling the random probability distribution of a random noise signal to generate the honest random number signal, in accordance with the embodiment of the present disclosure.
  • FIGS. 3A and 3B schematically illustrate generation of a plurality of second random threshold values to generate the honest random number signal, in accordance with the embodiment of the present disclosure.
  • FIGS. 4A and 4B schematically illustrate generation of the honest random number signal by using the random noise signal and the plurality of second random threshold values, in accordance with the embodiment of the present disclosure.
  • Figure 5 schematically illustrates generation of the honest random number signal based on random clock signal, in accordance with the embodiment of the present disclosure.
  • Figure 6 illustrates a first schematic block diagram of a neural network system for generating an honest random number signal and applying the honest random number signal to regularize neural network, in accordance with the embodiment of the present disclosure.
  • Figure 7 schematically illustrates first application of the honest random number signal to regularize a neural network, in accordance with the embodiment of the present disclosure.
  • Figure 8 schematically illustrates second application of the honest random number signal to regularize a neural network, in accordance with the embodiment of the present disclosure.
  • Figure 9 schematically illustrates third application of the honest random number signal to regularize a neural network, in accordance with the embodiment of the present disclosure.
  • Figure 10 schematically illustrates fourth application of the honest random number signal to regularize a neural network, in accordance with the embodiment of the present disclosure.
  • Figure 11 illustrates a second schematic block diagram of a neural network system for generating an honest random number signal and applying the honest random number signal to regularize neural network, in accordance with the embodiment of the present disclosure.
  • FIGS 12A and 12B schematically illustrate fifth application of the honest random number signal to regularize a neural network, in accordance with the embodiment of the present disclosure.
  • Figure 13 illustrates a third schematic block diagram of a neural network system for generating an honest random number signal and applying the honest random number signal to fragment a neural network, in accordance with the embodiment of the present disclosure.
  • Figure 14 schematically illustrates sixth application of the honest random number signal to fragment a neural network, in accordance with the embodiment of the present disclosure.
  • FIGS 15A and 15B schematically illustrate seventh application of the honest random number signal to fragment a neural network, in accordance with the embodiment of the present disclosure.
  • FIGS. 16A, 16B, and 16C schematically illustrate eighth application of the honest random number signal to fragment a neural network, in accordance with the embodiment of the present disclosure.
  • FIGS 17A and 17B schematically illustrate ninth application of the honest random number signal to fragment a neural network, in accordance with the embodiment of the present disclosure.
  • Figure 18 illustrates a fourth schematic block diagram of a neural network system for generating an honest random number signal and applying the honest random number signal to fragment neural network, in accordance with the embodiment of the present disclosure.
  • FIGS 19A and 19B schematically illustrate tenth application of the honest random number signal to fragment a neural network, in accordance with the embodiment of the present disclosure.
  • Figures 20, 21, and 22 illustrate flow diagrams of methods for generating an honest random number signal, in accordance with the embodiment of the present disclosure.
  • FIGS 23, 24, 25, and 26 illustrate flow diagrams of methods for generating an honest random number signal and applying the honest random number signal to regularize a neural network, in accordance with the embodiment of the present disclosure.
  • Figures 27, 28, 29, and 30 illustrate flow diagrams of methods for generating an honest random number signal and applying the honest random number signal to fragment a neural network, in accordance with the embodiment of the present disclosure.
  • FIG. 1 illustrates a schematic block diagram of an intelligent millimeter-wave honest random number generator (ImHRNG) 100 for generating an honest random number signal, in accordance with the embodiment of the present disclosure.
  • the ImHRNG 100 comprises a first artificial intelligence (AI)-controlling unit 102, a second AI-controlling unit 104, a millimeter-wave transceiver 106, and a random number generator 108.
  • the millimeter-wave transceiver 106 is communicatively coupled with the first AI-controlling unit 102.
  • the random number generator 108 is communicatively coupled with second AI-controlling unit 104.
  • the millimeter-wave transceiver 106 is a continuous-wave radar system that transmits or emits a millimeter-wave (mmWv) signal and determines a random noise signal from a reflected signal corresponding to the emitted mmWv signal.
  • the millimeter-waves have high frequencies and short wavelengths, and as such experience high atmospheric attenuation and can be blocked by obstructions or physical objects. Thus, the probability of obtaining a true random noise signal from the millimeter-waves is very high.
  • the millimeter-wave transceiver 106 is a single-input-single-output (SISO) system and includes a transmitting antenna 110 and a receiving antenna 112.
  • the millimeter-wave transceiver 106 can be any of single-input-multiple-output (SIMO) system, multiple-input-single-output (MISO) system, and multiple-input-multiple-output (MIMO) system.
  • the millimeter-wave transceiver 106 includes a beam forming unit (BFU) 114 to form one or more highly directional and efficient beams with minimal losses and overheads.
  • the millimeter-wave transceiver 106 controls and performs a function of transmitting a signal through the one or more beams formed by the BFU 114.
  • the BFU 114 employs various beam forming techniques to form the beams. Examples of such techniques include analogue beam forming technique, digital beam forming technique, hybrid beam forming technique, and physically moving the transmitting antenna. Examples of the digital beam forming technique include fixed beam forming technique, adaptive beam forming technique, azimuth beam forming technique, elevation beam forming technique, 2D beam forming technique, and 3D beam forming technique.
  • the BFU 114 can form random beam patterns using various techniques such as elevated beam rotation in 2D direction, elevated beam rotation in 3D direction, azimuth beam rotation in 2D direction, and azimuth beam rotation in 3D direction.
  • the transmitting antenna 110 then transmits the signal in a random direction.
  • the continuous wave radar system may further comprise DC tuning and noise removal circuits (not shown in the figure) to remove DC offset caused by clutter reflections and to eliminate noise.
  • the millimeter-wave transceiver 106 emits a millimeter wave signal in a random direction through the transmitting antenna 110.
  • the millimeter-wave transceiver 106 receives a reflected signal corresponding to the emitted millimeter wave signal.
  • the millimeter-wave transceiver 106 obtains the reflected signal using various techniques as known in the art such as Frequency-Modulated Continuous millimeter waves (FMCmmWV) radar technique and low-Doppler radar technique.
  • although the explanation is provided with respect to single signals, it will be understood that the present disclosure works in the same manner when multiple mmWv signals are transmitted and corresponding reflected signal(s) are received.
  • the millimeter-wave transceiver 106 determines an intermediate noise signal from the reflected signal based on a first set of parameters associated with the reflected signal, and characteristics of at least one obstruction derived from the reflected signal.
  • the first set of parameters of the reflected signal includes power of the reflected signal, intensity, angle of arrival (AOA), elevation angle, azimuth angle, frequency/Doppler shift, time of arrival (TOA), time difference of arrival (TDOA), signal to noise ratio, signal to interference plus noise ratio, interference, offset, energy, variance, and correlation.
  • the millimeter-wave transceiver 106 determines data corresponding to the first set of parameters using techniques as known in the art such as Frequency-Modulated Continuous millimeter waves (FMCmmWV) radar technique, 3D scanning technique, and 2D scanning technique.
  • the at least one obstruction can be manmade obstruction(s), natural obstruction(s), or obstruction(s) faced while transmitting and receiving signals.
  • examples of the obstruction include buildings/high-rise structures, trees, vehicles, rain, clouds, weather, seasons, atmospheric conditions, channel conditions, and a human body.
  • the characteristics of the at least one obstruction derived from the reflected signal include a depth of the at least one obstruction, a width of the at least one obstruction, a location of the at least one obstruction, a direction of the at least one obstruction, and a property of the at least one obstruction such as moving, stationary, seasonal values, etc.
  • the millimeter-wave transceiver 106 derives data corresponding to the characteristics of the at least one obstruction, if any, present in a region from where the reflected signal is received using techniques as known in the art such as 3D scanning technique and 2D scanning technique.
  • the millimeter-wave transceiver 106 also stores data corresponding to the first set of parameters and data corresponding to the characteristics of the at least one obstruction in a storage unit 116 as current data 118.
  • the storage unit 116 can be internal or external to the ImHRNG 100.
  • the millimeter-wave transceiver 106 amplifies the intermediate noise signal based on the first set of parameters associated with the reflected signal.
  • the millimeter-wave transceiver 106 includes a first amplifier 120.
  • the first amplifier 120 can be implemented as a millimeter-wave power amplifier.
  • the intermediate noise signal and the data corresponding to the first set of parameters from the storage unit 116 are supplied as input to the first amplifier 120 to generate an amplified intermediate noise signal as output for determining a random noise signal.
  • the millimeter-wave transceiver 106 determines the random noise signal from the intermediate noise signal based on a second set of parameters associated with the emitted millimeter wave and a first random threshold value.
  • the random noise signal is in a first analog waveform, for example, a sine waveform.
  • the second set of parameters includes location of the transmitting antenna 110 emitting the millimeter wave, distance between the transmitting antenna 110 and the receiving antenna 112, intensity of the emitted millimeter wave (i.e., the transmitted signal), transmission power of the emitted millimeter wave, and frequency of the emitted millimeter wave.
  • the location of the transmitting antenna 110 and the distance between the transmitting antenna 110 and the receiving antenna 112 are known, as they form part of the millimeter-wave transceiver 106.
  • the data corresponding to intensity, power, and frequency is known while transmitting the millimeter wave signal through the transmitting antenna 110.
  • the first random threshold value is determined by the first AI-controlling unit 102 as learned data at each instance of generating the random number signal.
  • the first AI-controlling unit 102 can be implemented as AI based system employing supervised learning algorithm(s) to obtain the learned data.
  • supervised learning algorithms include, but are not limited to, the Naive Bayes model, decision trees, linear discriminant functions such as support vector machines (SVMs), artificial neural networks (ANNs), deep neural networks (DNNs), convolutional neural networks (CNNs), hidden Markov models (HMMs), etc.
  • the first AI-controlling unit 102 processes training data, current data associated with the first set of parameters, and current data associated with the characteristics of at least one obstruction using the supervised learning algorithm(s).
  • the training data includes predefined threshold values (determined using techniques as known in the art) and is stored in the storage unit 116 as training data 122 during manufacturing of the ImHRNG 100.
  • the first AI-controlling unit 102 obtains the training data and current data associated with the first set of parameters and the characteristics of at least one obstruction from the storage unit 116.
  • the first AI-controlling unit 102 then applies the training data and the current data to the supervised learning algorithms to obtain the learned data and predictive model.
  • the first AI-controlling unit 102 then applies the training data, current data associated with the first set of parameters and the characteristics of at least one obstruction, and the learned data to the predictive model to generate/determine the first random threshold value.
  • the first AI-controlling unit 102 also stores the first random threshold value in the storage unit 116 as learned data 124. Upon determining the first random threshold value, the first AI-controlling unit 102 provides the first random threshold value to the millimeter-wave transceiver 106 as an input to process the intermediate noise signal for determining the random noise signal.
  • the millimeter-wave transceiver 106 determines the random noise signal from the intermediate noise signal using techniques as known in the art.
  • the random noise signal is generated as a function of transmitted power minus received power reflections.
  • the millimeter-wave transceiver 106 controls a random probability distribution of the random noise signal based on the first random threshold value, wherein the first random threshold value can be generated/ determined based on the data corresponding to the first set of parameters and the second set of parameters.
  • the random probability distribution of the random noise signal can be controlled using at least one of a magnitude of transmission power, directions of the reflected signal, a duration of receiving the reflected signal, characteristics of reflections from the at least one obstruction, and channel conditions.
  • the random probability distribution of the random noise signal is controlled by controlling transmission power of the emitted millimeter wave or the transmitted signal.
  • FIGS. 2A and 2B schematically illustrate generation of the first random threshold value for controlling the random probability distribution of the random noise signal, in accordance with the embodiment of the present disclosure.
  • the first random threshold value is generated based on transmitted power and reflected power reflections by the first AI-controlling unit 102.
  • the first AI-controlling unit 102 can be implemented as an artificial intelligence (AI) based system employing supervised learning algorithm(s). For such AI based systems, both input data and desired output data are labelled for classification to provide a learning basis for future data processing.
  • labelled training data 200 comprising predefined threshold values 202 of transmission power, current data 204 corresponding to the transmission power, and current data 206 corresponding to the reflected power reflections are applied to a supervised learning algorithm (SLA) 208 to generate a predictive model 210 and to obtain learned data 212 during the learning phase.
  • the learned data 212 indicates signal fluctuations.
  • the millimeter-wave transceiver 106 also adjusts the transmission power while emitting mmWv.
  • input data 214, comprising the labelled training data 200 (the predefined threshold values 202), the current data 204 corresponding to the transmission power, the current data 206 corresponding to the reflected power reflections, and the learned data 212, is applied to the predictive model 210 to generate output data 216.
  • the output data 216 is the first random threshold value for controlling the random probability distribution of the random noise signal.
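As a purely illustrative stand-in for the predictive model 210, the sketch below fits a least-squares line mapping transmitted-minus-reflected power deltas to threshold values; the disclosure does not specify the model form, and all data here are made up:

```python
def fit_linear(xs, ys):
    # Least-squares fit y = a*x + b: a minimal stand-in for the predictive
    # model the first AI-controlling unit learns from labelled threshold data.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical training pairs: power deltas vs. predefined threshold values.
deltas = [1.0, 2.0, 3.0, 4.0]
thresholds = [0.5, 1.0, 1.5, 2.0]
a, b = fit_linear(deltas, thresholds)
predicted_threshold = a * 2.5 + b  # first random threshold for a new reading
```

In the disclosure the model is retrained/reapplied at each generation instance, so the threshold shifts with current channel conditions rather than staying fixed.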
  • the random noise signal determined by the millimeter-wave transceiver 106 is provided as an input to a second amplifying circuit 126 for amplifying the random noise signal based on the first set of parameters and the second set of parameters.
  • the second amplifying circuit 126 can be a low noise amplifier.
  • the data corresponding to the first set of parameters from the storage unit 116 and the data corresponding to the second set of parameters are supplied as input to the second amplifying circuit 126 to generate an amplified random noise signal as input for determining the honest random number signal.
  • the random noise signal is amplified based on transmission power of the transmitted signal and power of reflected signal.
  • the random noise signal is derived from "noise” present in the reflected signal corresponding to the emitted millimeter wave signal based on the various parameters of the reflected signal, obstructions, and emitted millimeter signal. As such, the random noise signal exhibits resemblance to a "true white noise distribution characteristic".
  • the first random threshold value is used to add further "randomness" to the random noise signal for generating the honest random number signal.
  • the random noise signal is provided as input to the random number generator 108 for determining the honest random number signal based on a plurality of second random threshold values.
  • the plurality of second random threshold values is determined by the second AI-controlling unit 104 as learned data at each instance of generating the random number signal.
  • the second AI-controlling unit 104 can be implemented as an artificial intelligence (AI) based system employing supervised learning algorithm(s) to obtain the learned data.
  • supervised learning algorithms include, but are not limited to, the Naive Bayes model, decision trees, linear discriminant functions such as support vector machines (SVMs), artificial neural networks (ANNs), deep neural networks (DNNs), convolutional neural networks (CNNs), hidden Markov models (HMMs), etc.
  • the second AI-controlling unit 104 processes training data, current data associated with the characteristics of at least one obstruction, and historical data associated with the characteristics of at least one obstruction using the supervised learning algorithm(s).
  • the training data includes predefined threshold values or permissible values for obstructions and is stored in the storage unit 116 as the training data 122 during manufacturing of the ImHRNG 100.
  • Historical data associated with the characteristics of at least one obstruction corresponds to the data obtained by the millimeter-wave transceiver 106 at instances of time prior to current instance of time.
  • the historical data is also stored in the storage unit 116 as historical data 128.
  • the second AI-controlling unit 104 obtains the training data 122 and the historical data 128 associated with the characteristics of at least one obstruction from the storage unit 116.
  • the second AI-controlling unit 104 obtains the current data associated with the characteristics of at least one obstruction either from the millimeter-wave transceiver 106 or from the storage unit 116. The second AI-controlling unit 104 then applies the training data, the current data, and the historical data to the supervised learning algorithms to obtain the learned data and a predictive model. The second AI-controlling unit 104 then applies the training data, the current data, the historical data, and the learned data to the predictive model to generate/determine the plurality of second random threshold values. The plurality of second random threshold values are applied to the random number generator 108 as an input along with the random noise signal to determine the honest random number signal HRN-1 as output.
  • FIGS. 3A and 3B schematically illustrate generation of a plurality of second random threshold values to generate the honest random number signal, in accordance with the embodiment of the present disclosure.
  • the second random threshold values are generated based on characteristics of at least one obstruction derived from the reflected signal by the second AI-controlling unit 104.
  • the second AI-controlling unit 104 can be implemented as an artificial intelligence (AI) based system employing supervised learning algorithm(s). For such AI based systems, both input data and desired output data are labelled for classification to provide a learning basis for future data processing.
  • labelled training data 300 comprising predefined threshold values 302 of obstruction data, current data 304 corresponding to the characteristics of at least one obstruction, and historical values 306 corresponding to the characteristics of at least one obstruction are applied to a supervised learning algorithm (SLA) 308 to generate a predictive model 310 and to obtain learned data 312 during the learning phase.
  • the learned data 312 indicates signal fluctuations.
  • the millimeter-wave transceiver 106 also adjusts the transmission power while emitting mmWv.
  • input data 314, comprising the labelled training data 300 (the predefined threshold values 302 of obstruction data), the current data 304 corresponding to the characteristics of at least one obstruction, the historical data 306 corresponding to the characteristics of at least one obstruction, and the learned data 312, is applied to the predictive model 310 to generate output data 316.
  • the output data 316 is the plurality of second random threshold values for generating the honest random number signal.
  • the honest random number signal HRN-1 comprises a plurality of digital bits, while the random noise signal is in the form of the first analog waveform.
  • the random number generator 108 converts the first analog waveform to a second analog waveform, such as a square waveform, based on the plurality of second random threshold values.
  • the random number generator 108 then converts the second analog waveform to a standardized digital format comprising the plurality of digital bits.
  • the random number generator 108 can include a plurality of regenerator comparator circuits (R-CKTs) 400 and a plurality of analog-to-digital converters (ADCs) 402.
  • the number of R-CKTs 400 and the ADCs 402 corresponds to the number of digital bits forming the random number signal.
  • the honest random number signal comprises eight (8) digital bits.
  • the random number generator 108 can include eight (8) R-CKTs 400, i.e., R-CKT 400-0 to R-CKT 400-7, and eight (8) ADCs 402, i.e., ADC 402-0 to ADC 402-7, for generating one bit each.
  • for each R-CKT 400, the random noise signal and a random threshold value from the plurality of second random threshold values are provided as inputs to obtain a corresponding square waveform.
  • the R-CKTs 400 can be implemented as Schmitt triggers using op-amps.
  • the output from each individual R-CKT 400 is applied as input to the corresponding ADC 402 to convert the square waveform into a digital bit.
  • each of the R-CKTs 400 receives the input (the random noise signal in the form of a sine wave) and a threshold value to generate a bit output from each of the ADCs 402. The bit outputs in combination form the honest random number signal HRN-1.
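The comparator stage can be pictured with a software model of a Schmitt trigger: the output flips high when the input rises above an upper threshold and low when it falls below a lower threshold, yielding a square wave that an ADC stage then reduces to a bit. This is a hedged sketch of the general circuit behavior, not the disclosure's specific design:

```python
import math

def schmitt(samples, t_high, t_low):
    # Comparator with hysteresis: the output state flips to 1 when the input
    # rises above t_high and back to 0 when it falls below t_low.
    out, state = [], 0
    for s in samples:
        if s > t_high:
            state = 1
        elif s < t_low:
            state = 0
        out.append(state)
    return out

# The random noise signal approximated as a sine wave; one threshold pair
# (values illustrative) per R-CKT bit lane.
sine = [math.sin(2 * math.pi * i / 32) for i in range(32)]
square = schmitt(sine, t_high=0.3, t_low=-0.3)
bit = square[8]  # the ADC stage reduced to sampling the square wave
```

Running eight such lanes, each with its own second random threshold value, yields the eight digital bits that together form HRN-1.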
  • the honest random number signal HRN-1 is determined from the random noise signal, which itself is derived from "noise" present in the reflected signal corresponding to the emitted millimeter wave signal based on the various parameters of the reflected signal, obstructions, and emitted millimeter signal.
  • the honest random number signal HRN-1 is a true random number with less correlation or unexpected period and is obtained without having to perform batch normalization.
  • the second random threshold values are used to add further "randomness" to the honest random number signal HRN-1.
  • the "randomness" of the honest random number signal HRN-1 can be controlled by applying each of the plurality of second random threshold values to the random noise signal for a random time period.
  • the random time period is determined dynamically based on a random clock signal.
  • the random clock signal is determined based on a further honest random number signal, which is determined in a manner similar to the honest random number signal HRN-1.
  • the ImHRNG 100 is communicatively coupled with a random clock generator 500.
  • the random clock generator 500 receives a master clock signal (MCLK) as input from a master clock signal generator 502.
  • the random clock generator 500 is also communicatively coupled with a second ImHRNG 504.
  • the second ImHRNG 504 includes components as that of the ImHRNG 100 and functions in a similar manner.
  • the second ImHRNG 504 also includes a storage unit 506 and stores historical data, current data, training data, and learned data, in a manner similar to storage unit 116.
  • the second ImHRNG 504 determines a second honest random number signal HRN-2, in a manner similar to the honest random number signal HRN-1 determined by the ImHRNG 100.
  • the second ImHRNG 504 provides the second honest random number signal HRN-2 as input to the random clock generator 500. Based on the second honest random number signal HRN-2 and the master clock signal (MCLK), the random clock generator 500 generates a random clock signal (RCLK) as output using techniques as known in the art. The random clock generator 500 then provides the random clock signal (RCLK) as input to the second AI-controlling unit 104.
  • the second AI-controlling unit 104 determines a random time period (T-RCLK) based on the random clock signal RCLK, using techniques as known in the art.
  • the second AI-controlling unit 104 applies the plurality of second random values (SRV) to the random number generator 108 for the random time period (T-RCLK).
  • the random number generator 108 determines the honest random number signal HRN-1 from the random noise signal (RNS) received from the second amplifier 126 based on the plurality of second random values (SRV).
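The patent leaves the clock-randomization technique open ("techniques as known in the art"). One simple possibility, sketched here purely as an illustration and not taken from the patent, is to gate the master clock with the honest random bits so that only randomly selected edges pass, the gaps between surviving edges then serving as the random time periods:

```python
def random_clock(mclk_edges, hrn_bits):
    """Pass a master-clock edge only when the corresponding honest random
    bit is 1, producing irregularly spaced RCLK edges."""
    return [t for t, b in zip(mclk_edges, hrn_bits) if b == 1]

def random_periods(rclk_edges):
    """Derive the random time periods (T-RCLK) as the gaps between
    consecutive RCLK edges."""
    return [b - a for a, b in zip(rclk_edges, rclk_edges[1:])]
```

Each resulting period would then bound how long one set of second random threshold values is applied.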
  • random numbers are generally employed as inputs for regularization of the neural networks and fragmenting of the neural networks.
  • the honest random number signal HRN-1 is employed as input for regularization of the neural networks and for fragmenting of the neural networks.
  • Figures 6-19 illustrate applications of the honest random number signal HRN-1 to the neural networks.
  • the generated honest random number signals are employed as inputs for regularization of the neural networks by applying the dropout technique to prevent overfitting.
  • dropout is a technique where randomly selected neurons or nodes (hidden or visible, along with their connections) in the neural networks are ignored during training. This means that their contribution to the activation of downstream nodes is temporarily removed on the forward pass and no weight updates are applied to the nodes on the backward pass.
  • applying dropout to a neural network amounts to sampling a "thinned" neural network from the large neural network.
  • the thinned network consists of all the nodes that survived dropout.
  • the effect of averaging the predictions of all these thinned neural networks is approximated by simply using a single un-thinned neural network (or the original large neural network) that has smaller weights. This significantly reduces overfitting.
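The thinning-and-averaging idea can be made concrete with a short sketch. This uses the common "inverted dropout" variant (a standard technique, not taken from the patent): survivors are rescaled by 1/p during training so the full, un-thinned network can be used unchanged at test time, which is equivalent to the smaller-weights approximation described above. In the ImHRNG scheme the mask bits would come from the honest random number signal rather than a PRNG.

```python
import random

def dropout_mask(n, p_keep, rng):
    """Sample a binary keep mask (1 = node survives, 0 = node dropped)."""
    return [1 if rng.random() < p_keep else 0 for _ in range(n)]

def forward_train(x, mask, p_keep):
    """Inverted dropout: zero the dropped units and rescale the survivors
    by 1/p_keep so no rescaling is needed at test time."""
    return [xi * m / p_keep for xi, m in zip(x, mask)]
```

At test time the activations are passed through unchanged, approximating the average over all thinned sub-networks.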
  • dropout can be used with various large and complex data sets such as Street View House Numbers (SVHN) Dataset, ImageNet Dataset, CIFAR-100 Dataset, and MNIST Dataset resulting in improved performance of standard neural networks.
  • the choice of nodes for ignoring (deactivation) or activating for performing dropout is random.
  • each node is retained with a probability p independent of the other nodes, where p can be chosen using a validation set or can simply be set at 0.5, which is near-optimal for hidden units across a wide range of networks and tasks.
  • for input units, the optimal probability of retention is usually closer to 1 than to 0.5.
  • Such randomness or stochasticity prevents overfitting.
  • the number of nodes to be activated/deactivated in the dropout can be controlled based on random node count.
  • a counter is set for tracking the number of nodes being activated/inactivated for dropout.
  • when the counter reaches the random node count, the application of the honest random number signal for dropout is terminated.
  • the random node count can be determined based on an honest random number signal, which is generated in same manner as the honest random number signal HRN-1.
  • the random node count can be determined manually using techniques as known in the art.
  • the random node count can be determined automatically using techniques as known in the art. In one implementation, the random node count can be determined using semi-automatic techniques as known in the art. For the sake of brevity, the present disclosure is explained with respect to the dropout technique. It would be understood that the application of the honest random number signal would remain the same for other regularization techniques such as L2 regularization.
  • FIG. 6 illustrates a first schematic block diagram of a neural network system 600 for generating an honest random number signal and applying the honest random number signal to regularize neural network, in accordance with the embodiment of the present disclosure.
  • the neural network system 600 comprises the ImHRNG 100 to generate the honest random number signal HRN-1.
  • the neural network system 600 further comprises a neural network circuit (NNC) 602 implementing one or more neural networks comprising of plurality of nodes and plurality of layers.
  • the ImHRNG 100 is communicatively coupled to the NNC 602.
  • the ImHRNG 100 generates the honest random number signal HRN-1 in a manner as described earlier.
  • the millimeter-wave transceiver 106 emits millimeter wave and receives reflected signal 604.
  • the millimeter-wave transceiver 106 processes the reflected signal 604 based on the first set of parameters, the second set of parameters, and first random threshold value 606 to generate a random noise signal 608.
  • the first AI-controlling unit 102 determines the first random threshold value 606 and provides as input to the millimeter-wave transceiver 106 to generate the random noise signal 608.
  • the millimeter-wave transceiver 106 provides the random noise signal 608 as input to the second amplifier 126 to amplify the random noise signal 608.
  • the second amplifier 126 provides the amplified random noise signal 610 to the random number generator 108 to generate the honest random number signal HRN-1.
  • the random number generator 108 processes the amplified random noise signal 610 based on plurality of second random values 612.
  • the second AI-controlling unit 104 determines the plurality of second random values 612 and provides as input to the random number generator 108 to generate honest random number signal HRN-1.
  • the random number generator 108 controls the random probability distribution of the honest random number signal HRN-1.
  • the random number generator 108 provides the honest random number signal HRN-1 as input to the NNC 602 for regularization of neural network.
  • the ImHRNG 100 may include a communication interface unit (not shown in the figure) to communicate data with the NNC 602.
  • the NNC 602 comprises a neural network 614 and a training dataset that includes input data 616 and target output data 618 that should be generated by the neural network 614 when the input data 616 is applied.
  • the neural network 614 can be trained using a first dataset that is general before being trained using the training dataset that includes input data 616 and is specific.
  • the neural network 614 processes the input data 616 and generates prediction data 620 (i.e., output data).
  • a weight computation unit 622 receives the prediction data 620 and the target output data 618 and computes statistics 624 comprising the average change in weights between training iterations, each neuron's average weight size, and the variance of a neuron's output during a training iteration.
  • a switching unit 626 receives the statistics 624 calculated from each of iterations. Based on the statistics 624 and a layer drop probability, the switching unit 626 determines individual drop probability 628 for each node in the layer while keeping the mean of all probabilities assigned to those nodes the same as the drop percentage for each layer. The switching unit 626 also receives the HRN-1 from the ImHRNG 100 to determine probability 628. The HRN-1 enables probabilistically dropping the nodes with smaller values in the statistics 624, resulting in higher drop probability to nodes with a low output variance. Further, the switching unit 626 determines mode 630 of activation of the nodes based on the honest random number signal HRN-1. The modes can be time-out mode, hinge mode, and a combination thereof, as explained in later paragraphs.
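The switching unit's probability assignment can be illustrated with a small sketch. The formula below is an assumption (the patent does not give one): nodes with lower output variance receive a proportionally higher drop probability, normalized so that the mean over the layer stays equal to the layer drop rate.

```python
def per_node_drop_probs(variances, layer_drop_rate):
    """Assign a higher drop probability to nodes with lower output variance
    while keeping the mean probability equal to the layer drop rate."""
    inv = [1.0 / v for v in variances]        # smaller variance -> larger weight
    mean_inv = sum(inv) / len(inv)
    probs = [layer_drop_rate * w / mean_inv for w in inv]
    return [min(p, 1.0) for p in probs]       # clamp to valid probabilities
```

For example, with variances [1.0, 3.0] and a 0.4 layer drop rate, the low-variance node gets probability 0.6 and the high-variance node 0.2, preserving the 0.4 mean.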
  • the probability 628 and/or the mode 630 are provided as input to a node activation unit 632.
  • the node activation unit 632 indicates to the neural network 614 that one or more nodes should be activated/deactivated in the neural network 614.
  • the honest random number signal HRN-1 comprises plurality of bits. As such, if the honest random number signal HRN-1 is at a high level (e.g., a logic 1), the node activation unit 632 indicates the corresponding node is to be dropped out or deactivated. Conversely, if the honest random number signal HRN-1 is at a low level (e.g., a logic 0), the node activation unit 632 indicates the corresponding node is not to be dropped out or is to remain active.
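The bit-to-node mapping just described can be written compactly (a minimal sketch; the function names are illustrative):

```python
def bits_of(hrn, width=8):
    """Unpack an honest random number into its individual bits, MSB first."""
    return [(hrn >> (width - 1 - i)) & 1 for i in range(width)]

def active_nodes(hrn, width=8):
    """A 1 bit marks the corresponding node for dropout; a 0 bit leaves it
    active, so the active mask is the complement of the random bits."""
    return [b == 0 for b in bits_of(hrn, width)]
```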
  • ImHRNG 100 is configured to generate multiple honest random number signals for application to the neural network.
  • Each of the honest random number signals has different random probability distribution with respect to all of the nodes in the neural network.
  • the honest random number signals used for dropping out the nodes are stochastically independent of one another, which eliminates or minimizes correlation between the generated honest random number signals.
  • the random probability distributions of each of the honest random number signals are controlled by the random number generator 108.
  • the ImHRNG 100 is communicatively coupled to the NNC 602 implementing one or more neural networks comprising of plurality of nodes and plurality of layers.
  • the NNC 602 implements a coarse-grained reconfigurable architecture (CGRA)-based neural network.
  • only one neural network 700 is illustrated with four layers, input layer 700-1, hidden layers 700-2 & 700-3, and output layer 700-4.
  • the nodes are represented by circles and connections are represented by straight lines connecting the circles.
  • the ImHRNG 100 applies the honest random number signal HRN-1 to the neural network 700, i.e., to the input layer 700-1 and the hidden layers 700-2 & 700-3, to select a set of nodes for dropping or inactivation, represented by cross and removal of straight lines.
  • the random number signal HRN-1 is an eight-bit waveform. As such, if the random number signal HRN-1 is at a high level (e.g., a logic 1), the corresponding node is dropped out or deactivated. Conversely, if the random number signal HRN-1 is at a low level (e.g., a logic 0), the corresponding node is not dropped out or remains active. As illustrated in the figure, the nodes marked by a cross and removal of straight lines in the input layer 700-1 and the hidden layers 700-2 & 700-3 are dropped/inactivated and the remaining nodes are kept/activated for training.
  • the set of nodes in the neural networks are activated for a random time period.
  • Such activation of nodes can be termed as 'time-out mode' application of the honest random number signal.
  • the random time period is determined based on a further honest random number signal.
  • the further honest random number signal and the random time period are determined in a manner as described earlier.
  • the ImHRNG 100 is communicatively coupled with a random clock generator 800.
  • the random clock generator 800 receives a master clock signal from a clock generator (not shown in the figure).
  • the random clock generator 800 is also communicatively coupled with a third ImHRNG 802.
  • the third ImHRNG 802 includes components as that of the ImHRNG 100 and functions in a similar manner.
  • the third ImHRNG 802 determines a third honest random number signal HRN-3, in a manner similar to the honest random number signal HRN-1 determined by the ImHRNG 100.
  • the third ImHRNG 802 provides the third honest random number signal HRN-3 as input to the random clock generator 800.
  • based on the third honest random number signal HRN-3 and the master clock signal, the random clock generator 800 generates a random clock signal (RCLK) as output using techniques as known in the art. The random clock generator 800 then provides the random clock signal (RCLK) to the ImHRNG 100 as input. The ImHRNG 100 determines a random time period (T-RCLK) based on the random clock signal (RCLK) to apply the honest random number signal HRN-1 for the random time period (T-RCLK) to each of the nodes of the neural network 700. In one implementation, the random time period (T-RCLK) can be the same for each node in different layers. In another implementation, the random time period (T-RCLK) can be different for each node and/or layer.
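In software, time-out mode might be simulated as follows (an illustrative sketch, not the patent's circuit): the dropout mask is held for each random period (T-RCLK) and then inverted, swapping the dropped and active nodes as described below.

```python
def timeout_mode_schedule(initial_mask, periods):
    """Hold the drop mask (1 = dropped, 0 = active) for each random period,
    then invert it, yielding the mask in force during each interval."""
    mask = list(initial_mask)
    schedule = []
    for t in periods:
        schedule.append((t, list(mask)))
        mask = [1 - m for m in mask]   # swap dropped and active nodes
    return schedule
```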
  • the ImHRNG 100 applies the honest random number signal HRN-1 to the input layer 700-1 to select the nodes for activation for a time T1 based on the random time period (T-RCLK).
  • the nodes marked by cross and removal of straight lines in the input layer 700-1 are dropped/inactivated and the remaining nodes are kept/activated for training for time T1.
  • the dropped nodes are activated and activated nodes are dropped for time T1. This process results in effective dropout.
  • the ImHRNG 100 applies the honest random number signal HRN-1 to the hidden layers 700-2 to select the nodes for activation for a time T2 based on the random time period (T-RCLK).
  • the nodes marked by cross and removal of straight lines in the hidden layers 700-2 are dropped/inactivated and the remaining nodes are kept/activated for training for time T2.
  • the dropped nodes are activated and activated nodes are dropped for time T2. This process results in effective dropout.
  • the ImHRNG 100 applies the honest random number signal HRN-1 to the hidden layers 700-3 to select the nodes for deactivation for a time T3 based on the random time period (T-RCLK).
  • the nodes marked by cross and removal of straight lines in the hidden layers 700-3 are dropped/inactivated and the remaining nodes are kept/activated for training for time T3.
  • the dropped nodes are activated and activated nodes are dropped for time T3. This process results in effective dropout.
  • in one example, the times T1, T2, and T3 can be the same. In another example, the times T1, T2, and T3 can be different from each other.
  • the set of nodes are selected in at least one of the plurality of directions associated with the neural network based on the honest random number signal HRN-1.
  • the honest random number signal HRN-1 comprises a plurality of bits and therefore can be applied in a horizontal, vertical, diagonal, or any other direction, representing a rotating hinge in the neural network. Such activation of nodes can be termed 'hinge mode' application of the honest random number signal.
  • the NNC 602 implements one or more neural networks comprising of plurality of nodes and plurality of layers arranged in stacked neural network architecture.
  • only one neural network 900 is illustrated with five stacked layers: input stacked layer 900-1, hidden stacked layers 900-2, 900-3, & 900-4, and output stacked layer 900-5.
  • the nodes are represented by circles and connections are represented by straight lines connecting the circles.
  • the ImHRNG 100 applies the honest random number signal HRN-1 to the hidden stacked layers 900-2 in vertical direction.
  • the random number signal HRN-1 is an eight bit waveform.
  • if the random number signal HRN-1 is at a high level (e.g., a logic 1), the corresponding nodes in all the hidden stacked layers 900-2 are dropped out or deactivated. Conversely, if the random number signal HRN-1 is at a low level (e.g., a logic 0), the corresponding nodes in all the hidden stacked layers 900-2 are not dropped out or remain active.
  • the ImHRNG 100 applies the honest random number signal HRN-1 to the hidden stacked layers 900-2, 900-3, & 900-4 in horizontal direction such that first nodes in the hidden stacked layers 900-2, 900-3, & 900-4, are selected for deactivation or activation.
  • the ImHRNG 100 applies the honest random number signal HRN-1 to the hidden stacked layers 900-2, 900-3, & 900-4 in horizontal direction such that last nodes in the hidden stacked layers 900-2, 900-3, & 900-4, are selected for deactivation or activation.
  • the ImHRNG 100 applies the honest random number signal HRN-1 to the hidden stacked layers 900-2, 900-3, & 900-4 in diagonal direction such that nodes in the hidden stacked layers 900-2, 900-3, & 900-4, falling in diagonal direction are selected for deactivation or activation.
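The three hinge directions can be illustrated on a small layer-by-node grid. This is a hypothetical sketch; the patent does not specify how the random bits select a row or column, so a modulo mapping is assumed here.

```python
def hinge_mask(rows, cols, bits, direction):
    """Build a (stacked-layer x node) drop mask along one hinge direction.
    `bits` are honest random bits; the modulo mapping is an assumption."""
    mask = [[0] * cols for _ in range(rows)]
    if direction == "vertical":
        c = bits[0] % cols            # same node position in every stacked layer
        for r in range(rows):
            mask[r][c] = 1
    elif direction == "horizontal":
        r = bits[0] % rows            # every node position in one stacked layer
        for c in range(cols):
            mask[r][c] = 1
    elif direction == "diagonal":
        for i in range(min(rows, cols)):
            mask[i][i] = 1            # nodes falling on the diagonal
    return mask
```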
  • the set of nodes are selected in at least one of the plurality of directions based on the honest random number signal and for the random time period based on a further honest random number signal.
  • Such activation of nodes can be termed as 'hybrid mode' application of the honest random number signal.
  • the further honest random number signal and the random time period are determined in a manner as described earlier.
  • the ImHRNG 100 is communicatively coupled with a random clock generator 1000.
  • the random clock generator 1000 receives a master clock signal from a clock generator (not shown in the figure).
  • the random clock generator 1000 is also communicatively coupled with a fourth ImHRNG 1002.
  • the fourth ImHRNG 1002 includes components as that of the ImHRNG 100 and functions in a similar manner.
  • the fourth ImHRNG 1002 determines a fourth honest random number signal HRN-4, in a manner similar to the honest random number signal HRN-1 determined by the ImHRNG 100.
  • the fourth ImHRNG 1002 provides the fourth honest random number signal HRN-4 as input to the random clock generator 1000.
  • based on the fourth honest random number signal HRN-4 and the master clock signal, the random clock generator 1000 generates a random clock signal (RCLK) as output using techniques as known in the art. The random clock generator 1000 then provides the random clock signal (RCLK) to the ImHRNG 100 as input. The ImHRNG 100 determines a random time period (T-RCLK) based on the random clock signal (RCLK) to apply the honest random number signal HRN-1 for the random time period (T-RCLK) to each of the layers of the neural network 900. In one implementation, the random time period (T-RCLK) can be the same for each layer. In another implementation, the random time period (T-RCLK) can be different for each layer.
  • the ImHRNG 100 applies the honest random number signal HRN-1 to the hidden stacked layers 900-2 in vertical direction to select the nodes for deactivation or activation for a time T1 based on the random time period (T-RCLK).
  • the ImHRNG 100 applies the honest random number signal HRN-1 to the hidden stacked layers 900-2, 900-3, & 900-4 in horizontal direction such that first nodes in the hidden stacked layers 900-2, 900-3, & 900-4, are selected for deactivation or activation for a time T2 based on the random time period (T-RCLK).
  • the ImHRNG 100 applies the honest random number signal HRN-1 to the hidden stacked layers 900-2, 900-3, & 900-4 in diagonal direction such that nodes in the hidden stacked layers 900-2, 900-3, & 900-4, falling in diagonal direction are selected for deactivation or activation for a time T3 based on the random time period (T-RCLK).
  • in one example, the times T1, T2, and T3 can be the same. In another example, the times T1, T2, and T3 can be different from each other.
  • the number of nodes to be activated/deactivated in the dropout can be controlled based on random node count.
  • a counter is set for tracking the number of nodes being activated/inactivated for dropout.
  • when the counter reaches the random node count, the application of the honest random number signal for dropout is terminated.
  • the random node count can be a number or value of base 10.
  • the random node count can be generated based on an honest random number signal, which is generated in same manner as the honest random number signal HRN-1.
  • the random node count can be determined after the application of the honest random number signal HRN-1 to the neural network. In one implementation, the random node count can be determined prior to the application of the honest random number signal HRN-1 to the neural network. In one implementation, the random node count can be determined simultaneously with the application of the honest random number signal HRN-1 to the neural network.
  • FIG. 11 illustrates a second schematic block diagram of a neural network system 1100 for generating an honest random number signal and applying the honest random number signal to regularize a neural network, in accordance with the embodiment of the present disclosure.
  • the neural network system 1100 comprises the ImHRNG 100 to generate the honest random number signal HRN-1, in a manner as described above and specifically with reference to Figure 6.
  • the block diagram of ImHRNG 100 as described in Figure 6 is not repeated in this figure.
  • details already explained with reference to Figures 1 to 5 are not explained herein.
  • the neural network system 1100 further comprises the neural network circuit (NNC) 602 implementing one or more neural networks comprising of plurality of nodes and plurality of layers.
  • the ImHRNG 100 is communicatively coupled to the NNC 602.
  • the honest random number signal HRN-1 is applied to the neural network 614 in the NNC 602 in a manner as described earlier.
  • details already explained for the block diagram of NNC 602 as described in Figure 6 are not explained herein.
  • the NNC 602 is further communicatively coupled with a fifth ImHRNG 1102.
  • the fifth ImHRNG 1102 generates a fifth honest random number signal HRN-5 in a manner as described earlier.
  • the millimeter-wave transceiver 106 emits millimeter wave and receives reflected signal 1104.
  • the millimeter-wave transceiver 106 processes the reflected signal 1104 based on the first set of parameters, the second set of parameters, and first random threshold value 1106 to generate a random noise signal 1108.
  • the first AI-controlling unit 102 determines the first random threshold value 1106 and provides as input to the millimeter-wave transceiver 106 to generate the random noise signal 1108.
  • the millimeter-wave transceiver 106 provides the random noise signal 1108 as input to the second amplifier 126 to amplify the random noise signal 1108.
  • the second amplifier 126 provides the amplified random noise signal 1110 to the random number generator 108 to generate the fifth honest random number signal HRN-5.
  • the random number generator 108 processes the amplified random noise signal 1110 based on plurality of second random values 1112.
  • the second AI-controlling unit 104 determines the plurality of second random values 1112 and provides as input to the random number generator 108 to generate the fifth honest random number signal HRN-5.
  • the random number generator 108 controls the random probability distribution of the fifth honest random number signal HRN-5.
  • the random number generator 108 provides the fifth honest random number signal HRN-5 as input to the NNC 602 for randomly controlling the number of nodes to be activated during dropout.
  • the ImHRNG 1102 may include a communication interface unit (not shown in the figure) to communicate data with the NNC 602.
  • the NNC 602 comprises a node controlling unit 1114 coupled with the node activation unit 632.
  • the ImHRNG 1102 provides the fifth honest random number signal HRN-5 as input to the node controlling unit 1114.
  • the node controlling unit 1114 generates a random node count 1116 based on the fifth honest random number signal HRN-5 to randomly control the activation of nodes in the neural network 614.
  • the node controlling unit 1114 also initiates a counter 1118 based on the random node count 1116 to track the number of nodes being activated/inactivated for dropout. For each node being activated or deactivated based on the honest random number signal HRN-1 by the node activation unit 632, the node controlling unit 1114 increments the counter 1118 by one.
  • when the number of nodes (that are activated/inactivated) counted by the counter 1118 is not equal to the random node count 1116, the node controlling unit 1114 provides a continue signal as input 1120 to the node activation unit 632. Upon receiving the continue signal, the node activation unit 632 continues application of the honest random number signal HRN-1 for dropout. When the number of nodes counted by the counter 1118 is equal to the random node count 1116, the node controlling unit 1114 provides a stop signal as input 1120 to the node activation unit 632. Upon receiving the stop signal, the node activation unit 632 terminates application of the honest random number signal HRN-1 for dropout.
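The continue/stop handshake between the node controlling unit and the node activation unit reduces to a simple bounded loop (sketch only; the function name is hypothetical):

```python
def apply_with_node_count(drop_bits, random_node_count):
    """Apply the honest random bits node by node, incrementing a counter
    for each node toggled; once the counter equals the random node count
    (the 'stop' signal), no further bits are applied."""
    applied, counter = [], 0
    for bit in drop_bits:
        if counter == random_node_count:
            break                      # stop signal: terminate application
        applied.append(bit)
        counter += 1                   # continue signal: one more node toggled
    return applied, counter
```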
  • the following figures illustrate example applications of the honest random number signal to a neural network.
  • the ImHRNG 100 is communicatively coupled to the NNC 602 implementing one or more neural networks comprising of plurality of nodes and plurality of layers.
  • the NNC 602 implements a coarse-grained reconfigurable architecture (CGRA)-based neural network.
  • the nodes are represented by circles and connections are represented by straight lines connecting the circles.
  • the ImHRNG 100 applies the honest random number signal HRN-1 to the neural network 1200, i.e., to the hidden layers 1200-2 & 1200-3, to select a set of nodes for dropping or inactivation, represented by cross and removal of straight lines.
  • the NNC 602 is further connected with the fifth ImHRNG 1102.
  • the ImHRNG 1102 generates the fifth honest random number signal HRN-5.
  • the NNC 602 determines the random node count 1116 based on the fifth honest random number signal HRN-5 to randomly control the activation or deactivation of the nodes.
  • the nodes marked by cross and removal of straight lines in the hidden layers 1200-2 and the 1200-3 are dropped/inactivated and the remaining nodes are kept/activated for training.
  • when the number of nodes dropped is not equal to the random node count 1116, further nodes are dropped.
  • the further nodes marked by cross and removal of straight lines in the hidden layer 1200-3 are dropped/inactivated and the remaining nodes are kept/activated for training.
  • the further nodes marked by cross and removal of straight lines in the hidden layer 1200-2 are dropped/inactivated and the remaining nodes are kept/activated for training.
  • the generated honest random number signal HRN-1 is applied to a neural network to fragment the neural network into a plurality of sub-networks to enhance user-privacy and data security. Fragmenting of neural networks finds many applications such as targeted advertisement based on user-activity on a mobile device. Generally, the user-activity is collected from the mobile device and shared with an external server. The external server applies the collected user-activity to neural network(s) for selecting or generating targeted advertisement. However, such sharing of data causes concerns related to security and user-privacy.
  • the neural network can be fragmented into random layers between two devices, i.e., the mobile device and the external server. The random layers are migrated to the mobile device.
  • the dropout technique can be applied to the layers migrated to the mobile device & the server in a manner as described earlier. The output from the migrated layers is then shared with the external server. The dropout technique can then be applied to the remaining layers at the external server, making the output non-invertible or randomly dropping some outputs. This results in predicting the output, the targeted advertisement, without the need for sharing user-activity with the external server. As such, user-privacy and data security are enhanced.
  • fragmenting the neural network between two devices can provide data privacy during training. In other words, one device is able to provide a model learning service by using error back-propagation, without accessing original input data from the other device. It would be understood that the application of the honest random number signal would remain the same for other techniques such as pruning of the neural network to obtain smaller and faster neural networks.
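Fragmenting for privacy can be illustrated with a toy split-inference sketch (the layers and the split point below are hypothetical stand-ins): only the intermediate activations, never the raw input, cross the device boundary.

```python
def fragment(layers, split_index):
    """Fragment a sequential network into two sub-networks: the first runs
    on the mobile device, the second on the external server."""
    return layers[:split_index], layers[split_index:]

def run(layers, x):
    """Run a value through a sequence of layers."""
    for layer in layers:
        x = layer(x)
    return x

# Toy layers standing in for real network layers (hypothetical example).
layers = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]
device_part, server_part = fragment(layers, 2)
intermediate = run(device_part, 5)        # computed on the device
result = run(server_part, intermediate)   # only the intermediate is shared
```

The server never sees the raw input `5`, only the intermediate value, while the end-to-end result matches running the whole network in one place.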
  • FIG. 13 illustrates a third schematic block diagram of a neural network system 1300 for generating an honest random number signal and applying the honest random number signal to fragment a neural network, in accordance with the embodiment of the present disclosure.
  • the neural network system 1300 comprises the ImHRNG 100 to generate the honest random number signal HRN-1, in a manner as described above and specifically with reference to Figure 6.
  • the block diagram of ImHRNG 100 as described in Figure 6 is not repeated in this figure.
  • details already explained with reference to Figures 1 to 5 are not explained herein.
  • the neural network system 1300 further comprises a neural network circuit (NNC) 1302 implementing one or more neural networks comprising of plurality of nodes and plurality of layers.
  • the ImHRNG 100 is communicatively coupled to the NNC 1302.
  • the NNC 1302 comprises a neural network 1304 and a training dataset that includes input data 1306 and target output data 1308 that should be generated by the neural network 1304 when the input data 1306 is applied.
  • the neural network 1304 can be trained using a first dataset that is general before being trained using the training dataset that includes input data 1306 and is specific. During training or post training operation, the neural network 1304 processes the input data 1306 and generates prediction data 1310 (i.e., output data).
  • a parameter computation unit 1312 receives the prediction data 1310 and the target output data 1308 and computes statistics 1314 comprising of layer parameters during a training iteration or post-training iteration.
  • the statistics 1314 is computed based on first order gradient of a cost function with respect to layer parameters.
  • a switching unit 1316 receives the statistics 1314 calculated from each of the iterations. Based on the statistics 1314 and a layer drop probability, the switching unit 1316 determines a fragmenting probability 1318 for each layer. The switching unit 1316 also receives the HRN-1 from the ImHRNG 100 to determine the probability 1318. The HRN-1 enables probabilistically fragmenting the layers with smaller values in the statistics 1314, resulting in a higher fragmenting probability for layers with a low output variance.
  • the fragmenting probability 1318 indicates how the neural network is to be fragmented, such as creating equal number of sub-networks; creating unequal number of sub-networks; creating sub-networks with equal number of layers selected from the total number of layers in the neural network 1304; creating sub-networks with unequal number of layers selected from the total number of layers in the neural network 1304; and creating sub-networks with equal/unequal number of layers selected from the total number of layers in the neural network 1304 and with addition of new layers or deletion of original layers.
  • the switching unit 1316 determines mode 1320 of fragmenting the layers based on the honest random number signal HRN-1.
  • the modes can be time-out mode, hinge mode, and a combination thereof, as explained in later paragraphs.
  • the probability 1318 and/or the mode 1320 are provided as input to a fragmenting unit 1322.
  • the fragmenting unit 1322 indicates to the neural network 1304 that one or more layers should be fragmented from the neural network 1304 to create sub-networks.
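The gradient-driven fragmenting probability above can be sketched in Python. This is a minimal illustration and not the patent's implementation: the per-layer statistic (mean absolute first-order gradient of the cost) and the inverse weighting are assumptions consistent with the description that layers with smaller statistics receive a higher fragmenting probability.

```python
import numpy as np

def fragmenting_probabilities(layer_gradients, drop_prob=0.5):
    """Sketch of the switching unit 1316: layers whose first-order gradient
    statistic is small receive a higher fragmenting probability."""
    # Per-layer statistic: mean absolute first-order gradient of the cost.
    stats = np.array([np.mean(np.abs(g)) for g in layer_gradients])
    # Invert and normalize so that smaller statistics -> larger probability;
    # drop_prob plays the role of the overall layer drop probability.
    inv = 1.0 / (stats + 1e-9)
    return drop_prob * inv / inv.sum()

# Hypothetical gradients for a 3-layer network.
grads = [np.array([0.5, 0.4]), np.array([0.01, 0.02]), np.array([0.2, 0.3])]
p = fragmenting_probabilities(grads)
# The middle layer (smallest gradients) receives the highest probability.
```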
  • the ImHRNG 100 is communicatively coupled with a first device 1400 implementing one or more neural networks comprising a plurality of nodes and a plurality of layers.
  • only one neural network 1402 is illustrated with four layers: input layer 1402-1, hidden layers 1402-2 & 1402-3, and output layer 1402-4.
  • the nodes are represented by circles and connections are represented by straight lines connecting the circles.
  • the first device 1400 is further communicatively coupled with a second device 1404.
  • Examples of the first device 1400 and the second device 1404 include, but are not limited to, a server and mobile devices such as a laptop, a smart phone, etc.
  • the first device 1400 and the second device 1404 are communicatively coupled with each other in various architectures such as server-client, device-to-device, remotely connected devices, etc.
  • the ImHRNG 100 applies the honest random number signal HRN-1 to the neural network 1402 to fragment the neural network 1402 into a plurality of sub-networks.
  • Each of the plurality of sub-networks comprises a set of layers selected from the plurality of layers based on the honest random number signal HRN-1.
  • the plurality of sub-networks comprises equal number of layers selected from the plurality of layers.
  • only two sub-networks 1406 and 1408 are illustrated with each sub-network comprising two layers from the neural network 1402.
  • the sub-network 1406 includes the input layer 1402-1 and the hidden layer 1402-2, while the sub-network 1408 includes the hidden layer 1402-3 and the output layer 1402-4.
  • the first sub-neural network 1406 with the input layer 1402-1 and the hidden layer 1402-2 is transmitted to the second device 1404, and the second sub-neural network 1408 with the hidden layer 1402-3 and the output layer 1402-4 is retained at the first device 1400.
  • the second device 1404 includes a neural network circuit (NNC) (not shown in the figure) to process the first sub-network 1406 using techniques as known in the art.
  • the NNC processes the first sub-network 1406 and generates an output 1406-O.
  • the NNC then transmits the output 1406-O to the first device 1400.
  • the first device 1400 includes a neural network circuit (NNC) (not shown in the figure) to process the second sub-network 1408 using techniques as known in the art.
  • the NNC applies the output 1406-O as input 1408-I to the hidden layer 1402-3 and generates the final output at the output layer 1402-4.
  • the data from the second device 1404 is not shared with the first device 1400 and instead output of the fragmented & migrated neural network is shared. This alleviates concerns related to data security.
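The split-and-migrate flow above can be sketched as a plain forward pass divided across two devices. All shapes and weights below are hypothetical; the point is that the first device receives only the intermediate output 1406-O, never the raw data held on the second device.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical weights for a 4-layer network (input, two hidden, output).
rng = np.random.default_rng(0)
W = [rng.standard_normal((4, 8)),   # input layer -> first hidden layer
     rng.standard_normal((8, 8)),   # first hidden -> second hidden layer
     rng.standard_normal((8, 2))]   # second hidden -> output layer

def run_first_subnetwork(x):
    """Executed on the second device: input layer + first hidden layer."""
    return relu(x @ W[0])

def run_second_subnetwork(h):
    """Executed on the first device: second hidden layer + output layer."""
    return relu(h @ W[1]) @ W[2]

x = rng.standard_normal((1, 4))     # raw data stays on the second device
out_1406 = run_first_subnetwork(x)  # only this intermediate is transmitted
y = run_second_subnetwork(out_1406) # the first device never sees x
```

Chaining the two halves reproduces the output of the unfragmented network, which is what lets the migrated sub-network stand in for local processing.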
  • a dropout technique is applied to the sub-neural networks based on a further honest random number signal to reduce over-fitting during generation of output and to further enhance data security.
  • the first device 1400 is communicatively coupled with a sixth ImHRNG 1500 to generate a sixth honest random number signal HRN-6 to perform activation or deactivation of nodes in the second sub-neural network 1408.
  • the HRN-6 is applied to the hidden layer 1402-3 to generate the output at the output layer 1402-4.
  • the second device 1404 is communicatively coupled with a seventh ImHRNG 1502 to generate a seventh honest random number signal HRN-7 to perform activation or deactivation of nodes in the first sub-neural network 1406.
  • the HRN-7 is applied to the hidden layer 1402-2 to generate the output 1406-O.
  • the nodes marked with a cross, and the removed straight lines, in the hidden layers 1402-2 & 1402-3 indicate nodes that are dropped/inactivated; the remaining nodes are kept/activated for processing data in the first device 1400 and the second device 1404.
  • the activation or deactivation of nodes can be performed for a random time period based on a further random number signal, in a manner as described earlier.
  • each of the first device 1400 and the second device 1404 is connected with a separate system comprising a random clock generator and an ImHRNG to generate the random time period, in a manner as described earlier.
  • each of the plurality of sub-networks are selected in at least one of the plurality of directions associated with the neural network based on a further random number signal, in a manner as described earlier.
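The random activation/deactivation of nodes driven by signals such as HRN-6 and HRN-7 amounts to a dropout mask built from random bits. A minimal sketch follows; the inverted-dropout rescaling by `1/keep_prob` is an added assumption, as the description only specifies that the bits select which nodes stay active.

```python
import numpy as np

def apply_random_mask(activations, random_bits, keep_prob=0.5):
    """Deactivate nodes whose bit is 0 and rescale survivors (inverted dropout)."""
    return activations * random_bits.astype(float) / keep_prob

h = np.array([1.0, 2.0, 3.0, 4.0])  # hypothetical hidden-layer activations
bits = np.array([1, 0, 1, 0])       # stand-in for honest random number bits
out = apply_random_mask(h, bits)
# Dropped nodes contribute nothing; kept nodes are scaled by 1/keep_prob.
```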
  • the first device 1400 is implementing a neural network 1600 comprising stacked layers.
  • the neural network 1600 is illustrated with five stacked layers, i.e., input layer 1600-1, hidden layers 1600-2, 1600-3, 1600-4, and output layer 1600-5.
  • the nodes are represented by circles and connections are represented by straight lines connecting the circles.
  • the first device 1400 is connected with eighth ImHRNG 1602 to generate eighth honest random number signal HRN-8.
  • the second device 1404 is connected with ninth ImHRNG 1604 to generate ninth honest random number signal HRN-9.
  • the ImHRNG 100 applies the honest random number signal HRN-1 to the neural network 1600 to fragment the neural network 1600 into a plurality of sub-networks.
  • the plurality of sub-networks comprises unequal number of layers selected from the plurality of layers of the neural network 1600.
  • In FIGs 16B and 16C, for the sake of brevity, only two sub-networks 1606 and 1608 are illustrated as fragmented from the network 1600.
  • the HRN-1 is applied to the neural network 1600 in vertical directions to select the hidden stacked layers 1600-2 and 1600-4 and the output layer 1600-5 as the layers for the first sub-network 1606, while the second sub-network 1608 includes the input layer 1600-1 and the hidden layer 1600-3.
  • the first sub-network 1606 is retained at the first device 1400 and the second sub-network 1608 is transmitted to the second device 1404. Thereafter, the HRN-9 is applied to the hidden layer 1600-3 to perform dropout and generate output 1608-O, as described earlier.
  • the output is transmitted to the first device 1400.
  • the output is applied as input 1606-I, and the honest random number signal HRN-8 is applied to the hidden layers 1600-2 and 1600-4 to perform dropout and generate the output at the output layer 1600-5, as described earlier.
  • each of the plurality of sub-networks are selected in at least one of the plurality of directions associated with the neural network for a random time period based on a further random number signal, in a manner as described earlier.
  • each of the first device 1400 and the second device 1404 is connected with a separate system comprising a random clock generator and an ImHRNG to generate the random time period, in a manner as described earlier.
  • unequal number of sub-networks can be created.
  • different layers can be added or removed from the plurality of sub-networks (either equal or unequal number of sub-networks) based on a further random number signal.
  • different layers can be selected for dropout based on a further random number signal
  • the first device 1400 is implementing a neural network 1700 comprising a plurality of layers.
  • the neural network 1700 is illustrated with five layers, i.e., input layer 1700-1, hidden layers 1700-2, 1700-3, 1700-4, and output layer 1700-5.
  • the nodes are represented by circles and connections are represented by straight lines connecting the circles.
  • the first device 1400 is connected with a tenth ImHRNG 1702 to generate a tenth honest random number signal HRN-10 and with an eleventh ImHRNG 1704 to generate an eleventh honest random number signal HRN-11.
  • the second device 1404 is connected with a twelfth ImHRNG 1706 to generate a twelfth honest random number signal HRN-12 and with a thirteenth ImHRNG 1708 to generate a thirteenth honest random number signal HRN-13.
  • unequal number of sub-networks is created.
  • the ImHRNG 100 applies the honest random number signal HRN-1 to the neural network 1700 to split the neural network 1700 into a plurality of sub-networks.
  • the plurality of sub-networks comprises unequal number of layers selected from the plurality of layers of the neural network 1700 with addition of new layers.
  • only three sub-networks 1710, 1712, and 1714 are illustrated as fragmented from the network 1700.
  • the HRN-1 is applied to the neural network 1700 in vertical directions to select the hidden layer 1700-2 and 1700-4 as the layers for the first sub-network 1710; the second sub-network 1712 includes the hidden layer 1700-3, while the third sub-network layer 1714 includes the input layer 1700-1.
  • the first sub-network 1710 is retained at the first device 1400 and the third sub-network 1714 is transmitted to the second device 1404.
  • the second sub-network 1712 may be retained at the first device 1400 or transmitted to other device (not shown in the figure) for further processing in a manner similar to the second device 1404.
  • the honest random number signal HRN-13 is applied to the third sub-network 1714 to add a new hidden layer 1714-1 to the third sub-network 1714.
  • the nodes of new hidden layer 1714-1 are represented by dotted circles.
  • the honest random number signal HRN-10 is applied to the second sub-network 1712 to add new hidden layer 1712-1 to the second sub-network 1712.
  • the nodes of new hidden layer 1712-1 are represented by dotted circles.
  • the twelfth honest random number signal HRN-12 is applied to the input layer 1700-1 and the hidden layer 1714-1 in the sub-network 1714 to perform dropout and generate output 1714-O, as described earlier.
  • the output is transmitted to the first device 1400.
  • an honest random number signal HRN-X is applied to the input layer 1712-I and the hidden layers 1700-3 and 1712-1 in the sub-network 1712 to perform dropout and generate output 1712-O, as described earlier.
  • the HRN-X may be generated in the same manner as the honest random number signal HRN-1.
  • each of the plurality of sub-networks are selected in at least one of the plurality of directions associated with the neural network for a random time period based on a further random number signal, in a manner as described earlier.
  • each of the first device 1400 and the second device 1404 is connected with a separate system comprising a random clock generator and an ImHRNG to generate the random time period, in a manner as described earlier.
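Adding a fresh hidden layer to a sub-network, as done with layers 1712-1 and 1714-1 above, can be sketched by splitting one connection matrix into two matrices routed through the new layer, so the sub-network's input and output dimensions are preserved. Widths and initialization scale below are illustrative assumptions.

```python
import numpy as np

def insert_hidden_layer(weights, idx, width, rng):
    """Replace the connection matrix weights[idx] (shape in x out) with two
    freshly initialized matrices routed through a new hidden layer of
    `width` nodes, keeping the overall input/output dimensions intact."""
    w = weights[idx]
    w_in = rng.standard_normal((w.shape[0], width)) * 0.1
    w_out = rng.standard_normal((width, w.shape[1])) * 0.1
    return weights[:idx] + [w_in, w_out] + weights[idx + 1:]

rng = np.random.default_rng(0)
# Hypothetical two-matrix sub-network: 4 -> 6 -> 2.
subnet = [rng.standard_normal((4, 6)), rng.standard_normal((6, 2))]
# Insert a 5-node hidden layer between the 6-wide and 2-wide layers.
grown = insert_hidden_layer(subnet, 1, width=5, rng=rng)
```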
  • the number of layers to be selected for the fragmentation can be controlled based on random layer count.
  • a counter is set for tracking the number of layers being selected.
  • the random layer count can be generated based on an honest random number signal, which is generated in same manner as the honest random number signal HRN-1.
  • the random layer count can be a number or value of base 10.
  • the random layer count can be determined manually using techniques as known in the art.
  • the random layer count can be determined automatically using techniques as known in the art.
  • the random layer count can be determined using semi-automatic techniques as known in the art.
  • the random layer count can be determined after the application of the honest random number signal HRN-1 to the neural network. In one implementation, the random layer count can be determined prior to the application of the honest random number signal HRN-1 to the neural network. In one implementation, the random layer count can be determined simultaneously with the application of the honest random number signal HRN-1 to the neural network.
  • FIG 18 illustrates a fourth schematic block diagram of a neural network system 1800 for generating an honest random number signal and applying the honest random number signal to fragment neural network, in accordance with the embodiment of the present disclosure.
  • the neural network system 1800 comprises the ImHRNG 100 to generate the honest random number signal HRN-1, in a manner as described above and specifically with reference to Figure 6.
  • the block diagram of ImHRNG 100 as described in Figure 6 is not repeated in this figure.
  • details already explained with reference to Figures 1 to 5 are not explained herein.
  • the neural network system 1800 further comprises the neural network circuit (NNC) 1302 implementing one or more neural networks comprising a plurality of nodes and a plurality of layers.
  • the ImHRNG 100 is communicatively coupled to the NNC 1302.
  • the honest random number signal HRN-1 is applied to the neural network 1304 in the NNC 1302 in a manner as described earlier.
  • the NNC 1302 is further communicatively coupled with a fourteenth ImHRNG 1802.
  • the fourteenth ImHRNG 1802 generates a fourteenth honest random number signal HRN-14 in a manner as described earlier.
  • the millimeter-wave transceiver 106 emits millimeter wave and receives reflected signal 1804.
  • the millimeter-wave transceiver 106 processes the reflected signal 1804 based on the first set of parameters, the second set of parameters, and first random threshold value 1806 to generate a random noise signal 1808.
  • the first AI-controlling unit 102 determines the first random threshold value 1806 and provides as input to the millimeter-wave transceiver 106 to generate the random noise signal 1808.
  • the millimeter-wave transceiver 106 provides the random noise signal 1808 as input to the second amplifier 126 to amplify the random noise signal 1808.
  • the second amplifier 126 provides the amplified random noise signal 1810 to the random number generator 108 to generate the fourteenth honest random number signal HRN-14.
  • the random number generator 108 processes the amplified random noise signal 1810 based on a plurality of second random values 1812.
  • the second AI-controlling unit 104 determines the plurality of second random values 1812 and provides as input to the random number generator 108 to generate the fourteenth honest random number signal HRN-14.
  • the random number generator 108 controls the random probability distribution of the fourteenth honest random number signal HRN-14.
  • the random number generator 108 provides the fourteenth honest random number signal HRN-14 as input to the NNC 1302 for randomly controlling the number of layers to be selected.
  • the ImHRNG 1802 may include a communication interface unit (not shown in the figure) to communicate data with the NNC 1302.
  • the NNC 1302 comprises a layer controlling unit 1814 coupled with the fragmenting unit 1322.
  • the ImHRNG 1802 provides the fourteenth honest random number signal HRN-14 as input to the layer controlling unit 1814.
  • the layer controlling unit 1814 generates a random layer count 1816 based on the fourteenth honest random number signal HRN-14 to randomly control the activation of layers in the neural network 1304.
  • the layer controlling unit 1814 also initiates a counter 1818 based on the random layer count 1816 to track the number of layers being selected for fragmenting. For each layer being selected based on the honest random number signal HRN-1 by the fragmenting unit 1322, the layer controlling unit 1814 increments the counter 1818 by one.
  • When the number of layers counted by the counter 1818 is not equal to the random layer count 1816, the layer controlling unit 1814 provides a continue signal as input 1820 to the fragmenting unit 1322. Upon receiving the continue signal, the fragmenting unit 1322 continues application of the honest random number signal HRN-1 for fragmenting the neural network 1304. When the number of layers counted by the counter 1818 is equal to the random layer count 1816, the layer controlling unit 1814 provides a stop signal as input 1820 to the fragmenting unit 1322. Upon receiving the stop signal, the fragmenting unit 1322 terminates application of the honest random number signal HRN-1 for fragmenting the neural network 1304.
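The counter logic of the layer controlling unit 1814 can be sketched as a loop that consumes random bits (a stand-in for HRN-1) until the number of selected layers equals the random layer count; the continue/stop signal is implicit in the loop condition. The selection rule (a 1-bit selects the current candidate layer) is an assumption for illustration.

```python
def select_layers(num_layers, bit_stream, random_layer_count):
    """Keep selecting layers driven by random bits until the counter
    reaches the random layer count (or the bit stream is exhausted)."""
    selected, counter, i = set(), 0, 0
    while counter < random_layer_count and i < len(bit_stream):
        layer = i % num_layers            # candidate layer for this bit
        if bit_stream[i] == 1 and layer not in selected:
            selected.add(layer)
            counter += 1                  # counter 1818 incremented per selection
        i += 1
    return sorted(selected)

# 5-layer network, hypothetical bit stream, random layer count of 3.
layers = select_layers(5, [1, 0, 1, 1, 0, 1], random_layer_count=3)
# -> [0, 2, 3]
```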
  • the following figure illustrates example applications of the honest random number signal to a neural network.
  • the ImHRNG 100 is communicatively coupled to the NNC 1302 implementing one or more neural networks comprising of plurality of layers and plurality of nodes.
  • the NNC 1302 implements a coarse-grained reconfigurable architecture (CGRA)-based neural network.
  • the ImHRNG 100 applies the honest random number signal HRN-1 to the neural network 1900 to fragment the neural network 1900.
  • the NNC 1302 is further connected with the fourteenth ImHRNG 1802.
  • the ImHRNG 1802 generates the fourteenth honest random number signal HRN-14.
  • the NNC 1302 determines the random layer count 1816 based on the fourteenth honest random number signal HRN-14 to randomly control the selection of the layers.
  • hidden layers 1900-2 and 1900-4, and the output layer 1900-5 are selected as the layers for the first sub-network 1902.
  • hidden layer 1900-3 is selected for the second sub-network 1904.
  • a new hidden layer 1904-1 is added to the second sub-network 1904.
  • input layer 1900-1 is selected as the layer for the third sub-network 1906.
  • a new hidden layer 1906-1 is added to the third sub-network 1906.
  • no further layers are selected.
  • Figure 20 illustrates a flow diagram of a method 2000 for generating an honest random number signal, in accordance with the embodiment of the present disclosure.
  • the method 2000 may be implemented in the ImHRNG 100 using components thereof, as described above.
  • the method 2000 may be executed by the first AI-controlling unit 102, the second AI-controlling unit 104, the millimeter-wave transceiver 106, and the random number generator 108. Further, for the sake of brevity, details of the present disclosure that are explained in details in the description of Figure 1 to Figure 5 are not explained in detail in the description of Figure 20.
  • the method 2000 includes receiving a reflected signal corresponding to a millimeter wave emitted in a random direction.
  • the millimeter-wave transceiver 106 receives the reflected signal corresponding to the millimeter wave emitted in the random direction.
  • the method 2000 includes determining an intermediate noise signal from the reflected signal based on a first set of parameters associated with the reflected signal, and characteristics of at least one obstruction derived from the reflected signal.
  • the first set of parameters of the reflected signal includes power of the reflected signal, intensity, angle of arrival (AOA), elevation angle, azimuth angle, frequency/Doppler shift, time of arrival (TOA), time difference of arrival (TDOA), signal to noise ratio, signal to interference plus noise ratio, interference, offset, energy, variance, and correlation.
  • the characteristics include depth of the at least one obstruction, a width of the at least one obstruction, a location of the at least one obstruction, a direction of the at least one obstruction, and a property of the at least one obstruction.
  • the millimeter-wave transceiver 106 determines the first set of parameters and the characteristics of the at least one obstruction.
  • the millimeter-wave transceiver 106 determines the intermediate noise signal based on the first set of parameters and the characteristics of the at least one obstruction.
  • the method 2000 includes amplifying the intermediate noise signal based on the first set of parameters associated with the reflected signal.
  • the method 2000 includes determining a random noise signal from the intermediate noise signal based on a second set of parameters associated with the emitted millimeter wave and a first random threshold value.
  • the second set of parameters includes location of a transmitting antenna emitting the millimeter wave, distance between the transmitting antenna and a receiving antenna, intensity, power, and frequency.
  • the first random threshold value is learned data obtained by processing training data, current data associated with the first set of parameters, and current data associated with the characteristics of the at least one obstruction using a neural network.
  • the training data includes predefined threshold values.
  • the millimeter-wave transceiver 106 determines the second set of parameters and the first AI-controlling unit 102 obtains the first random threshold value.
  • the millimeter-wave transceiver 106 determines the random noise signal from the intermediate noise signal based on the second set of parameters and the first random threshold value.
  • the method 2000 includes determining the honest random number signal from the random noise signal based on a plurality of second random threshold values.
  • the honest random number signal comprises a plurality of digital bits.
  • Each of the plurality of second random threshold values is learned data obtained by processing training data, current data associated with the characteristics of the at least one obstruction, and historical data associated with the characteristics of the at least one obstruction using a neural network.
  • the training data includes predefined threshold values.
  • the second AI-controlling unit 104 determines the plurality of second random threshold values.
  • the random number generator 108 determines the honest random number signal from the random noise signal based on the plurality of second random threshold values.
  • the honest random number signal comprises a plurality of digital bits.
  • each of the plurality of second random threshold values is applied to the random noise signal for a random time period.
  • the random time period is determined dynamically based on a random clock signal.
  • the random clock signal is determined based on a further honest random number signal. The further honest random number signal is generated in a manner similar to the generation of the honest random number signal, as described above.
  • the method 2000 includes further steps for determining the random noise signal at block 2004.
  • the method 2000 includes controlling a random probability distribution of the random noise signal based on the second set of parameters associated with the emitted millimeter wave and the first random threshold value.
  • the method 2000 includes amplifying the random noise signal based on the first set of parameters associated with the reflected signal and the second set of parameters associated with the emitted millimeter wave prior to the determination of the honest random number signal.
  • the method 2000 includes further steps for determining the honest random number signal at block 2008.
  • the random noise signal is in a first analog waveform.
  • the method 2000 includes converting the first analog waveform to a second analog waveform based on the plurality of second random threshold values.
  • the method 2000 includes converting the first analog waveform to a standardized digital format comprising the plurality of digital bits.
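Block 2008, turning the random noise signal into digital bits via the plurality of second random threshold values, can be sketched as a per-sample comparison. The noise samples and threshold values below are made-up stand-ins for the amplified random noise signal and the AI-determined thresholds.

```python
def noise_to_bits(noise, thresholds):
    """Compare each random-noise sample with its own second random threshold
    value: samples above the threshold become 1, the rest become 0."""
    return [1 if sample > threshold else 0
            for sample, threshold in zip(noise, thresholds)]

noise = [0.8, -0.2, 0.1, 0.9, -0.5]      # stand-in amplified noise samples
thresholds = [0.3, 0.0, 0.4, 0.5, -0.6]  # stand-in second random threshold values
bits = noise_to_bits(noise, thresholds)
# -> [1, 0, 0, 1, 1]
```

Because the thresholds themselves are random and each may be applied only for a random time period, an attacker observing the noise source alone cannot predict the resulting bit stream.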
  • the method 2300 includes generating the honest random number signal based on the method 2000.
  • the method 2300 includes applying the honest random number signal to a neural network comprising a plurality of nodes to select a set of nodes for regularization of the neural network.
  • the ImHRNG 100 is communicatively coupled with the NNC 602 implementing the neural network for regularization of the neural network.
  • the set of nodes are activated for a random time period.
  • the method 2300 includes applying the honest random number signal in a "time-out mode", i.e., applying the honest random number signal to the neural network to activate nodes for random time period.
  • the random time period is determined based on a further honest random number signal.
  • the further honest random number signal is generated in a manner similar to the generation of the honest random number signal, as described above.
  • the set of nodes are selected in at least one of the plurality of directions associated with the neural network based on the honest random number signal.
  • the method 2300 includes applying the honest random number signal in a "hinge mode", i.e., applying the honest random number signal to the neural network to select nodes in a random direction of the neural network for activation.
  • the random direction is determined based on a further honest random number signal.
  • the further honest random number signal is generated in a manner similar to the generation of the honest random number signal, as described above.
  • the activation of nodes is controlled based on a random node count.
  • the random node count can be determined manually using techniques as known in the art.
  • the random node count can be determined using automatic-techniques as known in the art.
  • the random node count can be generated using semi-automatic techniques as known in the art.
  • the method 2300 includes determining a random node count based on a further honest random number signal. The further honest random number signal is generated in a manner similar to the generation of the honest random number signal, as described above.
  • the method 2300 includes determining if the number of nodes activated is equal to the random node count. If the number of nodes activated is not equal to the random node count, the process flow returns to step 2304. If the number of nodes activated is equal to the random node count, the process flow is terminated.
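The "hinge mode" of method 2300, selecting nodes in a random direction of the network for activation, can be sketched as choosing between a row-wise and a column-wise activation mask over a layer's grid of nodes. Interpreting "direction" as row versus column is an assumption for illustration.

```python
import numpy as np

def hinge_mode_mask(shape, direction_bit, rng):
    """'Hinge mode' sketch: a random direction bit decides whether a row
    (horizontal) or a column (vertical) of nodes is activated."""
    mask = np.zeros(shape, dtype=int)
    if direction_bit == 1:
        mask[rng.integers(shape[0]), :] = 1   # activate one random row
    else:
        mask[:, rng.integers(shape[1])] = 1   # activate one random column
    return mask

rng = np.random.default_rng(1)
# 3x4 grid of nodes; direction bit stands in for a further honest random bit.
m_row = hinge_mode_mask((3, 4), direction_bit=1, rng=rng)
m_col = hinge_mode_mask((3, 4), direction_bit=0, rng=rng)
```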
  • the method 2700 includes generating the honest random number signal based on the method 2000.
  • the method 2700 includes applying the honest random number signal to a neural network comprising a plurality of layers to fragment the neural network into a plurality of sub-networks based on the honest random number signal.
  • the ImHRNG 100 is communicatively coupled with the first device 1400 implementing the neural network for fragmenting the neural network.
  • Each of the plurality of sub-networks comprises a set of layers selected from the plurality of layers based on the honest random number signal.
  • set of nodes in the plurality of sub-networks are activated for regularization of the sub-networks.
  • the method 2700 includes applying a further honest random number signal to the sub-network.
  • the further honest random number signal is generated in a manner similar to the generation of the honest random number signal, as described above.
  • each of the plurality of sub-networks is selected in at least one of the plurality of directions associated with the neural network based on the honest random number signal.
  • the method 2700 includes applying the honest random number signal to the neural network to select layers from the neural network to form the sub-networks.
  • the selection of layers is controlled based on a random layer count.
  • the random layer count can be determined manually using techniques as known in the art.
  • the random layer count can be determined using automatic-techniques as known in the art.
  • the random layer count can be generated using semi-automatic techniques as known in the art.
  • the method 2700 includes determining a random layer count based on a further honest random number signal.
  • the further honest random number signal is generated in a manner similar to the generation of the honest random number signal, as described above. It should be understood that the determination of the random layer count can be made at an earlier stage, i.e., prior to the application of the honest random number signal to the neural network at block 2704.
  • the method 2700 includes determining if the number of layers selected is equal to the random layer count. If the number of layers selected is not equal to the random layer count, the process flow returns to step 2704. If the number of layers selected is equal to the random layer count, the process flow is terminated.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The present disclosure relates to honest random number generation and an associated intelligent millimeter-wave honest random number generator. According to one embodiment, a method for generating an honest random number signal using millimeter waves is provided. The method includes receiving a reflected signal corresponding to a millimeter wave emitted in a random direction. The method includes determining an intermediate noise signal from the reflected signal based on a first set of parameters associated with the reflected signal and characteristics of at least one obstruction derived from the reflected signal. The method includes determining a random noise signal from the intermediate noise signal based on a second set of parameters associated with the emitted millimeter wave and a first random threshold value. The method includes determining the honest random number signal from the random noise signal based on a plurality of second random threshold values. The honest random number signal comprises a plurality of digital bits.
PCT/KR2020/000089 2019-01-03 2020-01-03 Honest random number generation and associated intelligent millimeter-wave honest random number generator WO2020141921A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN201911000258 2019-01-03
IN201911000258 2019-01-03

Publications (1)

Publication Number Publication Date
WO2020141921A1 true WO2020141921A1 (fr) 2020-07-09

Family

ID=71407023

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/000089 WO2020141921A1 (fr) Honest random number generation and associated intelligent millimeter-wave honest random number generator

Country Status (1)

Country Link
WO (1) WO2020141921A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030090306A1 (en) * 1999-11-02 2003-05-15 Takeshi Saito Thermal noise random pulse generator and random number generator
US20050286718A1 (en) * 2002-04-29 2005-12-29 Infineon Technologies Ag Apparatus and method for generating a random number
JP2009070009A (ja) * 2007-09-12 2009-04-02 Sony Corp Random number generation device and random number generation method
US20160259625A1 (en) * 2015-03-04 2016-09-08 Carol Y. Scarlett Generation of Random Numbers Through the Use of Quantum-Optical Effects within a Mirror Cavity System
US20170161022A1 (en) * 2015-12-02 2017-06-08 International Business Machines Corporation Random telegraph noise native device for true random number generator and noise injection

Similar Documents

Publication Publication Date Title
Guo et al. Learning-based robust and secure transmission for reconfigurable intelligent surface aided millimeter wave UAV communications
AU2017360650B2 (en) Method and apparatus for analyzing communication environment based on property information of an object
WO2018093204A1 (fr) Procédé et appareil pour analyser un environnement de communication sur la base d'informations de propriété d'un objet
Alrabeiah et al. Viwi vision-aided mmwave beam tracking: Dataset, task, and baseline solutions
Hu et al. A trajectory prediction based intelligent handover control method in UAV cellular networks
Wu et al. When UAVs meet ISAC: Real-time trajectory design for secure communications
WO2021230586A1 (fr) Procédé et système de formation de faisceau pour au moins une antenne d'émission dans un environnement de réseau de radiocommunication
US11419162B2 (en) Method for extracting environment information leveraging directional communication
Hosseinianfar et al. Performance limits for fingerprinting-based indoor optical communication positioning systems exploiting multipath reflections
WO2020141921A1 (fr) Honest random number generation and relevant intelligent millimeter wave honest random number generator
Lin et al. A bat-inspired algorithm for router node placement with weighted clients in wireless mesh networks
Jiang et al. Jamming resilient tracking using POMDP-based detection of hidden targets
Charan et al. Camera Based mmWave Beam Prediction: Towards Multi-Candidate Real-World Scenarios
Kaur et al. Contextual beamforming: Exploiting location and AI for enhanced wireless telecommunication performance
Li et al. Resource optimization strategy in phased array radar network for multiple target tracking when against active oppressive interference
WO2021230448A1 (fr) Système et procédé pour la formation de faisceau sécurisé dans des réseaux de communication sans fil
Ahn et al. Sensing and Computer Vision-Aided Mobility Management for 6G Millimeter and Terahertz Communication Systems
WO2018147501A1 (fr) Procédé et dispositif de sélection de point de réception et de point de transmission dans un système de communications sans fil
Parija et al. A metaheuristic bat inspired technique for cellular network optimization
Yousefi Rezaii et al. Distributed multi-target tracking using joint probabilistic data association and average consensus filter
Aswoyo et al. Adaptive beamforming based on linear array antenna for 2.3 GHz 5G communication using LMS algorithm
Zhang et al. Multi-Armed Bandit for Link Configuration in Millimeter-Wave Networks: An Approach for Solving Sequential Decision-Making Problems
Pang et al. Dynamic ISAC Beamforming Design for UAV-Enabled Vehicular Networks
Wang et al. Sequential opening multi‐jammers localisation in multi‐hop wireless network
Koh et al. Localizing wireless jamming attacks with minimal network resources

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20736092

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20736092

Country of ref document: EP

Kind code of ref document: A1