US20140075091A1 - Processing Device With Restricted Power Domain Wakeup Restore From Nonvolatile Logic Array
- Publication number: US20140075091A1 (application US13/770,583)
- Authority: United States (US)
- Prior art keywords: volatile, logic element, element arrays, volatile logic, NVL
- Legal status: Abandoned (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G11C7/22—Read-write [R-W] timing or clocking circuits; Read-write [R-W] control signal generators or management
- G06F1/30—Means for acting in the event of power-supply failure or interruption, e.g. power-supply fluctuations
- G06F1/3203—Power management, i.e. event-based initiation of a power-saving mode
- G06F1/3234—Power saving characterised by the action undertaken
- G06F1/3275—Power saving in memory, e.g. RAM, cache
- G06F1/3287—Power saving characterised by the action undertaken by switching off individual functional units in the computer system
- G06F3/0679—Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
- G06F3/0688—Non-volatile semiconductor memory arrays
- G06F9/4401—Bootstrapping
- G06F9/4406—Loading of operating system
- G06F11/1032—Simple parity
- G06F11/1438—Restarting or rejuvenating
- G06F11/1469—Backup restoration techniques
- G06F12/0238—Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G11C14/00—Digital stores characterised by arrangements of cells having volatile and non-volatile storage properties for back-up when the power is down
- H03K3/3562—Bistable circuits of the master-slave type
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
- Y02D30/50—Reducing energy consumption in wire-line communication networks, e.g. low power modes or reduced link rate
Description
- This invention generally relates to nonvolatile memory cells and their use in a system, and in particular, in combination with logic arrays to provide nonvolatile logic modules.
- Many portable electronic devices such as cellular phones, digital cameras/camcorders, personal digital assistants, laptop computers, and video games operate on batteries. During periods of inactivity the device may not perform processing operations and may be placed in a power-down or standby power mode to conserve power. Power provided to a portion of the logic within the electronic device may be turned off in a low power standby power mode. However, the presence of leakage current during the standby power mode represents a challenge for designing portable, battery-operated devices.
- Data retention circuits such as flip-flops and/or latches within the device may be used to store state information for later use prior to the device entering the standby power mode.
- The data retention latch, which may also be referred to as a shadow latch or a balloon latch, is typically powered by a separate "always on" power supply.
- A known technique for reducing leakage current during periods of inactivity utilizes multi-threshold CMOS (MTCMOS) technology to implement the shadow latch.
- The shadow latch utilizes thick gate oxide transistors and/or high threshold voltage (Vt) transistors to reduce the leakage current in standby power mode.
- The shadow latch is typically detached from the rest of the circuit during normal operation (e.g., during an active power mode) to maintain system performance.
- A third latch, e.g., the shadow latch, may be added to the master latch and the slave latch for data retention.
- Alternatively, the slave latch may be configured to operate as the retention latch during low power operation. However, some power is still required to retain the saved state.
- Energy harvesting, also known as power harvesting or energy scavenging, is the process by which energy is derived from external sources, captured, and stored for small, wireless autonomous devices, such as those used in wearable electronics and wireless sensor networks.
- Harvested energy may be derived from various sources, such as solar power, thermal energy, wind energy, salinity gradients, and kinetic energy.
- Typical energy harvesters provide a very small amount of power for low-energy electronics.
- The energy source for energy harvesters is present as ambient background and is available for use. For example, temperature gradients exist from the operation of a combustion engine, and in urban areas there is a large amount of electromagnetic energy in the environment because of radio and television broadcasting, etc.
- FIG. 1 is a functional block diagram of a portion of an example system on chip (SoC) as configured in accordance with various embodiments of the invention;
- FIG. 2 is a more detailed block diagram of one flip-flop cloud used in the SoC of FIG. 1;
- FIG. 3 is a plot illustrating polarization hysteresis exhibited by a ferroelectric capacitor;
- FIGS. 4-7 are schematic and timing diagrams illustrating an example ferroelectric nonvolatile bit cell as configured in accordance with various embodiments of the invention;
- FIGS. 8-9 are schematic and timing diagrams illustrating another example ferroelectric nonvolatile bit cell as configured in accordance with various embodiments of the invention;
- FIG. 10 is a block diagram illustrating an example NVL array used in the SoC of FIG. 1;
- FIGS. 11A and 11B are more detailed schematics of input/output circuits used in the NVL array of FIG. 10;
- FIG. 12A is a timing diagram illustrating an example offset voltage test during a read cycle as configured in accordance with various embodiments of the invention;
- FIG. 12B illustrates a histogram generated during an example sweep of offset voltage as configured in accordance with various embodiments of the invention;
- FIG. 13 is a schematic illustrating parity generation in the NVL array of FIG. 10;
- FIG. 14 is a block diagram illustrating example power domains within an NVL array as configured in accordance with various embodiments of the invention;
- FIG. 15 is a schematic of an example level converter for use in the NVL array as configured in accordance with various embodiments of the invention;
- FIG. 16 is a timing diagram illustrating an example operation of level shifting using a sense amp within a ferroelectric bitcell as configured in accordance with various embodiments of the invention;
- FIG. 17 is a block diagram of an example power detection arrangement as configured in accordance with various embodiments of the invention;
- FIG. 18 is a functional block diagram of a portion of an example system on chip (SoC) and flip flop design with more than one NVL array per flip flop cloud as configured in accordance with various embodiments of the invention;
- FIG. 19 is a flow chart illustrating an example operation of a processing device operating two or more processing threads as configured in accordance with various embodiments of the invention; and
- FIG. 20 is a block diagram of another example SoC that includes NVL arrays as configured in accordance with various embodiments of the invention.
- Non-Volatile Logic (NVL): a micro-control unit (MCU) implemented with NVL within an SoC (system on a chip) may have the ability to stop, power down, and power up with no loss in functionality. A system reset/reboot is not required to resume operation after power has been completely removed.
- This capability is useful in applications such as Near Field Communication (NFC), radio frequency identification (RFID), and embedded control and monitoring systems, for example, where the time and power cost of the reset/reboot process can consume much of the available energy, leaving little or no energy for useful computation, sensing, or control functions.
- NVL can be applied to state machines hard-coded into ordinary logic gates, or to ROM-, PLA-, or PLD-based control systems.
- To this end, an SoC includes one or more blocks of nonvolatile logic.
- For example, a non-volatile logic (NVL) based SoC may back up its working state (all flip-flops) upon receiving a power interrupt, have zero leakage in sleep mode, and need less than 400 ns to restore the system state upon power-up.
- Without NVL, a chip would either have to keep all flip-flops powered in at least a low power retention state, which requires a continual power source even in standby mode, or waste energy and time rebooting after power-up.
- NVL is useful because no constant power source is required to preserve the state of the flip-flops (FFs), and because, even when an intermittent power source is available, boot-up code alone may consume all the harvested energy.
- For such applications, zero-leakage ICs (integrated circuits) with "instant-on" capability are ideal.
- Ferroelectric random access memory (FRAM) is a non-volatile memory technology with behavior similar to DRAM (dynamic random access memory). Each individual bit can be accessed, but unlike EEPROM (electrically erasable programmable read only memory) or Flash, FRAM does not require a special sequence to write data, nor does it require a charge pump to achieve the higher programming voltages those technologies require.
- Each ferroelectric memory cell contains one or more ferroelectric capacitors (FeCap). Individual ferroelectric capacitors may be used as non-volatile elements in the NVL circuits described herein.
- FIG. 1 is a functional block diagram illustrating a portion of a computing device, in this case, an example system on chip (SoC) 100 providing non-volatile logic based computing features.
- The term SoC is used herein to refer to an integrated circuit that contains one or more system elements.
- The teachings of this disclosure can be applied to various types of integrated circuits that contain functional logic modules such as latches, integrated clock gating cells, and flip-flop circuit elements (FFs) that provide non-volatile state retention.
- Embedding non-volatile storage elements outside the controlled environment of a large array presents reliability and fabrication challenges.
- An NVL array based on NVL bitcells is typically designed for maximum read signal margin and in-situ margin testability, as is needed for any NV-memory technology. However, adding testability features to individual NVL FFs may be prohibitive in terms of area overhead.
- A plurality of non-volatile logic element arrays (NVL arrays) 110 are disposed with a plurality of volatile storage elements 220.
- At least one non-volatile logic controller 106 is configured to control the plurality of NVL arrays 110 to store a machine state represented by the plurality of volatile storage elements 220 and to read out a stored machine state from the plurality of NVL arrays 110 to the plurality of volatile storage elements 220.
- More specifically, the at least one non-volatile logic controller 106 is configured to generate a control sequence for saving the machine state to or retrieving the machine state from the plurality of NVL arrays 110.
- A multiplexer 212 is connected to variably connect individual ones of the volatile storage elements 220 to one or more corresponding individual ones of the NVL arrays 110.
- The computing device apparatus is arranged on a single chip, here an SoC 100 implemented using 256b mini-arrays 110, referred to herein as NVL arrays, of FeCap (ferroelectric capacitor) based bitcells dispersed throughout the logic cloud to save the state of the various flip flops 120 when power is removed.
- Each cloud 102-104 of FFs 120 includes an associated NVL array 110.
- Such dispersal results in individual ones of the NVL arrays 110 being arranged physically close to, and connected to receive data from, corresponding individual ones of the volatile storage elements 220.
- A central NVL controller 106 controls all the arrays and their communication with the FFs 120.
- SoC 100 may have additional, or fewer, FF clouds, all controlled by NVL controller 106.
- In certain approaches, the SoC 100 can be partitioned into more than one NVL domain, with a dedicated NVL controller managing the NVL arrays 110 and FFs 120 in each of the separate NVL domains.
- The NVL array embodiment described here uses 256-bit mini-arrays, but the arrays may have a greater or lesser number of bits as needed.
- SoC 100 is implemented using modified retention flip flops 120 including circuitry configured to enable write back of data from individual ones of the plurality of non-volatile logic element arrays to the individual ones of the plurality of flip flop circuits.
- In one example retention flip flop, a data input may be latched by a first latch.
- A second latch coupled to the first latch may receive the data input for retention while the first latch is inoperative in a standby power mode.
- The first latch receives power from a first power line that is switched off during the standby power mode.
- The second latch receives power from a second power line that remains on during the standby mode.
- A controller receives a clock input and a retention signal and provides a clock output to the first latch and the second latch.
- A change in the retention signal is indicative of a transition to the standby power mode.
- The controller continues to hold the clock output at a predefined voltage level, and the second latch continues to receive power from the second power line in the standby power mode, thereby retaining the data input.
- Such a retention latch is described in more detail in U.S. Pat. No. 7,639,056, “Ultra Low Area Overhead Retention Flip-Flop for Power-Down Applications”.
- FIG. 2 illustrates an example retention flop architecture that does not require that the clock be held in a particular state during retention.
- In this design, the clock value is a "don't care" during retention.
- The modified retention FFs 120 include simple input and control modifications that allow the state of each FF to be saved in an associated FeCap bit cell in NVL array 110, for example when the system is being transitioned to a power-off state. When the system is restored, the saved state is transferred from NVL array 110 back to each FF 120. Power savings and data integrity can be improved through implementation of particular power configurations.
- In one such configuration, individual retention flip flop circuits include a primary logic circuit portion (master stage or latch) powered by a first power domain (such as VDDL in the example described below) and a slave stage circuit portion powered by a second power domain (such as VDDR in the example described below).
- The first power domain is configured to be powered down, and the second power domain is active, during write back of data from the plurality of NVL arrays to the plurality of volatile storage elements.
- The plurality of non-volatile logic elements are configured to be powered by a third power domain (such as VDDN in the example described below) that is configured to be powered down during regular operation of the computing device apparatus.
- In other words, the computing apparatus includes a first power domain configured to supply power to switched logic elements of the computing device apparatus and a second power domain configured to supply power to logic elements that control signals for storing data to or reading data from the plurality of non-volatile logic element arrays.
- Where the plurality of volatile storage elements comprise retention flip flops, the second power domain is configured to provide power to a slave stage of individual ones of the retention flip flops.
- A third power domain supplies power for the plurality of non-volatile logic element arrays.
- NVL arrays can be defined as domains relating to particular functions.
- For instance, a first set of at least one of the NVL arrays can be associated with a first function of the computing device apparatus, and a second set of at least one of the NVL arrays can be associated with a second function. Operation of the first set is independent of operation of the second set. So configured, flexibility in the control and handling of the separate NVL array domains or sets allows more granular control of the computing device's overall function.
- To support this independence, the first power domain can be divided into a first portion configured to supply power to switched logic elements associated with the first function and a second portion configured to supply power to switched logic elements associated with the second function.
- The first portion and the second portion of the first power domain are individually configured to be powered up or down independently of other portions of the first power domain.
- Similarly, the third power domain can be divided into a first portion configured to supply power to NVL arrays associated with the first function and a second portion configured to supply power to NVL arrays associated with the second function.
- The first portion and the second portion of the third power domain are individually configured to be powered up or down independently of other portions of the third power domain.
- Accordingly, where a given application leaves certain functions unused, the flip flops and NVL arrays associated with the unused functions can be respectively powered down and operated separately from the other flip flops and NVL arrays.
- Such flexibility in power and operation management allows one to tailor the functionality of a computing device with respect to power usage and function.
- This can be further illustrated in the following example design having a CPU, three SPI interfaces, three UART interfaces, three I2C interfaces, and only one logic power domain (VDDL).
- In this example, the logic power domain is distinguished from the retention and NVL power domains (VDDR and VDDN, respectively), although these teachings can be applied to those power domains as well.
- The VDDL power domain can be partitioned into 10 separate NVL domains (one CPU, three SPI, three UART, and three I2C, totaling 10 NVL domains), each of which can be enabled or disabled independently of the others. So the customer could enable NVL capability for the CPU, one SPI, one UART, and one I2C for their specific application while disabling the others.
- This partitioning also allows flexibility in time as well as energy: the different NVL domains can save and restore state at different points in time.
- NVL domains can overlap with power domains.
- For example, four power domains can be defined, one each for the CPU, SPI, UART, and I2C (each peripheral power domain having three functional units), while defining three NVL domains within each peripheral domain and one for the CPU (again a total of 10 NVL domains).
- Individual power domains can then be turned on or off, in addition to controlling the NVL domains inside each power domain, for added flexibility in power savings and wakeup/sleep timing; a sketch of such per-domain control follows.
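- As a minimal illustration of this kind of per-function NVL domain control (all register names, addresses, and helper functions here are hypothetical; the patent does not define a software interface), a bitmask enable scheme for the 10 NVL domains described above might look like:

```c
#include <stdint.h>

/* One enable bit per NVL domain: 1 CPU + 3 SPI + 3 UART + 3 I2C. */
enum nvl_domain {
    NVL_CPU   = 1u << 0,
    NVL_SPI0  = 1u << 1, NVL_SPI1  = 1u << 2, NVL_SPI2  = 1u << 3,
    NVL_UART0 = 1u << 4, NVL_UART1 = 1u << 5, NVL_UART2 = 1u << 6,
    NVL_I2C0  = 1u << 7, NVL_I2C1  = 1u << 8, NVL_I2C2  = 1u << 9,
};

/* Hypothetical memory-mapped NVL controller enable register. */
#define NVL_DOMAIN_EN (*(volatile uint32_t *)0x40002000u)

/* Enable NVL save/restore for the CPU plus one SPI, one UART, and
 * one I2C, leaving the other domains' NVL arrays disabled. */
static inline void nvl_enable_app_domains(void)
{
    NVL_DOMAIN_EN = NVL_CPU | NVL_SPI0 | NVL_UART0 | NVL_I2C0;
}
```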
- In certain approaches, individual ones of the first power domain, the second power domain, and the third power domain are configured to be powered down or up independently of the others.
- For instance, integral power gates can be configured to be controlled to power down individual ones of the first power domain, the second power domain, and the third power domain.
- In one such approach, the third power domain is configured to be powered down during regular operation of the computing device apparatus, and the first power domain is configured to be powered down during a write back of data from the plurality of non-volatile logic element arrays to the plurality of volatile storage elements.
- A fourth power domain can be configured to supply power to real-time clocks and wake-up interrupt logic.
- The NVL arrays and NVL controller are on an NVL power domain referred to as VDDN.
- All logic, memory blocks 107 such as ROM (read only memory) and SRAM (static random access memory), and the master stages of the FFs are on a logic power domain referred to as VDDL.
- The FRAM arrays 103 are powered by a dedicated global supply rail referred to as VDDZ.
- FRAM arrays such as that shown at 103 typically contain integrated power switches that allow the FRAM arrays to be powered down as needed, though FRAM arrays without internal power switches can also be utilized in conjunction with power switches external to the FRAM array.
- The slave stages of the retention FFs are on a retention power domain referred to as the VDDR domain, to enable regular retention in a stand-by mode of operation.
- Table 1 summarizes power domain operation during normal operation, system backup to NVL arrays, sleep mode, system restoration from NVL arrays, and back to normal operation. Table 1 also specifies domains used during a standby idle mode that may be initiated under control of system software in order to enter a reduced power state using the volatile retention function of the retention flip flops.
- A set of switches indicated at 108 is used to control the various power domains. There may be multiple switches 108 distributed throughout SoC 100 and controlled by software executed by a processor on SoC 100 and/or by a hardware controller (not shown) within SoC 100. There may be additional domains beyond the three illustrated here, as will be described later.
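- The following sketch (the helper functions are assumed stand-ins for the power switches 108 and the NVL controller 106, not an interface defined by the patent) illustrates the sequence of domain switching that Table 1 summarizes, using the VDDL/VDDR/VDDN domains described above:

```c
typedef enum { DOM_OFF, DOM_ON } dom_state_t;

/* Hypothetical helpers standing in for switches 108 and controller 106. */
extern void set_domain(const char *name, dom_state_t s);
extern void nvl_store_all(void);   /* back up FF state to NVL arrays */
extern void nvl_restore_all(void); /* write NVL state back to the FFs */

void enter_zero_power_sleep(void)
{
    /* System clocks are already held inactive at this point (see below). */
    set_domain("VDDN", DOM_ON);    /* power the NVL arrays for backup */
    nvl_store_all();
    set_domain("VDDN", DOM_OFF);
    set_domain("VDDL", DOM_OFF);   /* logic and FF master stages */
    set_domain("VDDR", DOM_OFF);   /* FF slave stages: zero-leakage sleep */
}

void wake_from_sleep(void)
{
    set_domain("VDDR", DOM_ON);    /* slave stages receive restored state */
    set_domain("VDDN", DOM_ON);
    nvl_restore_all();             /* RET active, VDDL still unpowered */
    set_domain("VDDN", DOM_OFF);
    set_domain("VDDL", DOM_ON);    /* resume normal operation */
}
```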
- State information could instead be saved in a large centralized FRAM array, but that would require more time to enter sleep mode, a longer wakeup time, excessive routing, and power costs caused by the lack of parallel access to the system FFs.
- FIG. 2 is a more detailed block diagram of one FF cloud 102 used in SoC 100 .
- In this embodiment, each FF cloud includes up to 248 flip flops, and each NVL array is organized as an 8×32-bit array, with one bit of each row used for parity.
- In other embodiments, the number of flip flops and the organization of the NVL array may have a different configuration, such as 4×m, 16×m, etc., where m is chosen to match the size of the FF cloud.
- In some approaches, all of the NVL arrays in the various clouds may be the same size, while in other approaches there may be different-size NVL arrays in the same SoC.
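- A minimal sketch of the row parity computation implied by the 31-data-bit-plus-parity organization above (even parity is an assumption; the patent states only that one bit of each 32-bit row is used for parity):

```c
#include <stdint.h>

/* Compute the parity bit stored alongside the 31 data bits of a row. */
static uint32_t row_parity(uint32_t data31)
{
    uint32_t p = data31 & 0x7FFFFFFFu; /* keep the 31 data bits */
    p ^= p >> 16;                      /* fold bits together with XOR */
    p ^= p >> 8;
    p ^= p >> 4;
    p ^= p >> 2;
    p ^= p >> 1;
    return p & 1u;                     /* 1 if an odd number of 1s */
}
```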
- Block 220 is a more detailed schematic of each retention FF 120 .
- Several of the signals have an inverted version indicated by suffix “B” (referring to “bar” or /), such as RET and RETB, CLK and CLKB, etc.
- Each retention FF includes a master latch 221 and a slave latch 222 .
- Slave latch 222 is formed by inverter 223 and inverter 224 .
- Inverter 224 includes a set of transistors controlled by the retention signal (RET, RETB) that are used to retain the FF state during low power sleep periods, during which power domain VDDR remains on while power domain VDDL is turned off, as described above and in Table 1.
- NVL array 110 is logically connected with the 248 FFs it serves in cloud 102 .
- Individual FFs include circuitry configured to enable write back of data from individual ones of the plurality of NVL arrays 110.
- To this end, two additional ports are provided on the slave latch 222 of each FF, as shown in block 220.
- A data input port (gate 225) is configured to insert data ND from one of the NVL arrays 110 into an associated volatile storage element 220.
- The data input port is configured to insert the data ND by allowing passage of a stored-data-related signal from the one of the NVL arrays to a slave stage of the associated flip flop circuit in response to receiving an update signal NU, from the at least one non-volatile logic controller 106, on a data input enable port that triggers the data input port.
- Inverter 223 is configured to be disabled in response to receiving the inverted NVL update signal NUZ to avoid an electrical conflict between the tri-state inverter 223 and the NVL data port input tri-state inverter 225 .
- The inverter feedback pair (223 and 224) forms the latch itself.
- These inverters make a very stable configuration for holding the data state and will resist any attempt to change the latch state unless at least one of the inverters is disabled, to prevent electrical conflict when overwriting the current state with the next state via one of the data ports.
- The illustrated NVL FF 220 includes two data ports that access the slave latch 222, as compared to one data port for a regular flop. One port transfers data from the master stage 221 to the slave stage 222 via the CMOS pass gate controlled by the clock.
- When this port is active, the inverter 224 driving onto the output node of the pass gate controlled by CLK is disabled to avoid an electrical conflict, while the inverter 223 is enabled to transfer the next state onto the opposite side of the latch so that both sides of the latch have the next state in preparation for holding the data when the clock goes low (for a posedge FF).
- Conversely, the inverter 223 is disabled when the ND data port is activated by NU transitioning to the active high state, to avoid an electrical conflict on the ND port.
- In that case, the second inverter 224 is enabled to transfer the next state onto the opposite side of the latch so that both sides of the latch have the next state to be latched when NU goes low.
- The NU port does not impact the other data port controlled by the clock.
- On a dual-port FF, having both ports active at the same time is an illegal control condition; the resulting port conflict means the next state will be indeterminate.
- To avoid this, the system holds the clock in the inactive state if the slave state is updated while in functional mode.
- The RET signal, along with supporting circuits inside the FF, is used to prevent electrical conflicts independent of the state of CLK while in retention mode (see the inverter controlled by RETB in the master stage).
- These additional elements are disposed in the slave stage 222 of the associated FF.
- The additional transistors are not on the critical path of the FF and have only a 1.8% and 6.9% impact on normal FF performance and power, respectively (simulation data), in this particular implementation.
- The NU (NVL update) control input is pulsed high for a cycle to write to the FF.
- The thirty-one-bit data output of an NVL array fans out to the ND ports of eight thirty-one-bit FF groups.
- A multiplexer is configured to pass states from a plurality of the volatile storage elements 220 for essentially simultaneous storage in an individual one of the plurality of NVL arrays 110.
- For example, the multiplexer may be configured to connect to N groups of M volatile storage elements each and to an N by M size NVL array of the plurality of NVL arrays.
- The multiplexer connects one of the N groups to the N by M NVL array to store data from the M volatile storage elements into a row of the array at one time.
- In this embodiment, the Q outputs of 248 FFs are connected to the 31b parallel data input of NVL array 110 through a 31b-wide 8-1 mux 212.
- The mux may be broken down into smaller muxes based on the layout of the FF cloud and placed close to the FFs they serve.
- The NVL controller synchronizes writing to the NVL array and the select signals MUX_SEL<2:0> of the 8-1 mux 212, as sketched below.
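- A sketch of this backup sequencing (the helper functions are hypothetical stand-ins for the NVL controller's hardware control of mux 212 and NVL array 110):

```c
#include <stdint.h>

extern uint32_t ff_group_read(unsigned group);          /* Q outputs of one 31-FF group */
extern void nvl_row_write(unsigned row, uint32_t word); /* write one row (31b + parity) */

/* Save all 248 FF states of one cloud: eight rows of 31 bits each. */
void nvl_backup_cloud(void)
{
    for (unsigned row = 0; row < 8u; row++) {
        /* MUX_SEL<2:0> = row selects one of the eight FF groups. */
        uint32_t word = ff_group_read(row);
        nvl_row_write(row, word);
    }
}
```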
- Given this arrangement, the clock CLK of the computing device is a "don't care" for the volatile storage elements with respect to updating the slave stage state whenever the NU signal is active; the non-volatile logic controller is thereby able to control and effect storage of data from individual ones of the volatile storage elements into individual ones of the non-volatile storage elements regardless of the clock state.
- Clock CLK control is not needed during NVL data recovery in retention mode, but the clock CLK should be controlled at the system level once the system state is restored, right before the transition between retention mode and functional mode.
- Alternatively, the NVL state can be recovered to the volatile storage elements when the system is in a functional mode.
- In that case, the clock CLK is held in the inactive state for the volatile storage elements during the data restoration from the NVL array, whereby the non-volatile logic controller is configured to control and effect transfer of data from individual ones of the non-volatile storage elements into individual ones of the volatile storage elements.
- A system clock CLK is typically held low for positive-edge FF based logic and held high for negative-edge FF based logic.
- When backing up the machine state, the first step is to stop the system clock(s) in an inactive state so that the machine state is frozen and does not change while the backup is in progress.
- The clocks are held in the inactive state until backup is complete. After backup is complete, all power domains are powered down, and the state of the clock becomes a don't care in sleep mode by definition.
- When restoring the state from NVL arrays, the FFs are placed in a retention state (see Table 2 below) in which the clock continues to be a don't care as long as the RET signal is active (the clock can be a don't care by virtue of special transistors added to each retention FF and controlled by the RET signal). While restoring NVL state, the flops remain in retention mode, so the clock remains a don't care.
- The state of the machine logic that controls the system clocks will also be restored to the state it was in at the time of the state backup, which means that, for this example, all the controls (including the volatile storage elements, or FFs) that placed the system clock into an inactive state have been restored, such that the system clocks will remain inactive upon completion of NVL data recovery.
- At this point, the RET signal can be deactivated, and the system will sit quiescent with clocks deactivated until the NVL controller signals to the power management controller that the restoration is complete, in response to which the power management controller will enable the clocks again.
- NVL controller 106 To restore flip-flop state during restoration, NVL controller 106 reads an NVL row in NVL array 110 and then pulses the NU signal for the appropriate flip-flop group.
- retention signal RET is held high and the slave latch is written from ND with power domain VDDL unpowered; at this point the state of the system clock CLK is a don't care.
- Suitably modified non-retention flops can be used in NVL based SOC's at the expense of higher power consumption during NVL data recovery operations.
- System clock CLK should start from low once VDDL comes up and thereafter normal synchronous operation continues with updated information in the FFs.
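- The restore direction, sketched under the same assumptions as the backup sketch above (hypothetical helpers, not an interface from the patent): with RET held high and VDDL unpowered, each row read from the NVL array is written into the corresponding FF group's slave latches by pulsing NU:

```c
#include <stdint.h>

extern uint32_t nvl_row_read(unsigned row);                   /* read one 31-bit row */
extern void ff_group_write_nd(unsigned group, uint32_t word); /* drive ND, pulse NU */

/* Restore all 248 FF states of one cloud from its NVL array. */
void nvl_restore_cloud(void)
{
    for (unsigned row = 0; row < 8u; row++) {
        uint32_t word = nvl_row_read(row);
        ff_group_write_nd(row, word);
    }
}
```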
- Data transfer between the NVL arrays and their respective FFs can be done serially, in parallel, or in any combination thereof to trade off peak current against backup/restore time. Because direct access to the FFs is controlled by at least one non-volatile logic controller that is separate from the central processing unit of the computing device apparatus, intervention from a microcontroller processing unit (CPU) is not required for NVL operations; the implementation is therefore SoC/CPU architecture agnostic. Table 2 summarizes operation of the NVL flip flops.
- The at least one non-volatile logic controller is configured to variably control data transfer to or reading from the plurality of non-volatile arrays in parallel, sequentially, or in any combination thereof based on input signals.
- So configured, system designers have additional options for tailoring system operation to particular needs. For instance, because no computation can occur on an MCU SoC while the system is entering a low power state or waking from one, minimizing the wakeup or go-to-sleep time is advantageous.
- On the other hand, non-volatile state retention is power intensive because significant energy is needed to save and restore state to or from non-volatile elements such as ferroelectric capacitors.
- Indeed, the power required to save and restore system state can exceed the capacity of the power delivery system and cause problems such as electromigration-induced power grid degradation, battery life reduction due to excessive peak current draw, or generation of high levels of noise on the power supply system that can degrade signal integrity on die. Allowing a system designer to balance these two concerns is therefore desirable.
- In one approach, the at least one non-volatile logic controller 106 is configured to receive the input signals through a user interface 125, such as those known to those of skill in the art.
- In another approach, the at least one non-volatile logic controller is configured to receive the input signals from a separate computing element 130 that may be executing an application.
- The separate computing element 130 is configured to execute the application to determine a reading sequence for the plurality of non-volatile arrays based at least in part on a determination of power and computing resource requirements for the computing device apparatus. So configured, a system user can tailor the system state store and retrieve procedure to fit a given design.
- FIG. 3 is a plot illustrating polarization hysteresis exhibited by a ferroelectric capacitor.
- The general operation of ferroelectric bit cells is known. When most materials are polarized, the polarization induced, P, is almost exactly proportional to the applied external electric field E, so the polarization is a linear function of the field; this is referred to as dielectric polarization.
- In contrast, ferroelectric materials demonstrate a spontaneous nonzero polarization, as illustrated in FIG. 3, even when the applied field E is zero.
- The distinguishing feature of ferroelectrics is that the spontaneous polarization can be reversed by an applied electric field; the polarization is dependent not only on the current electric field but also on its history, yielding a hysteresis loop.
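- In standard notation (this formulation is supplied here for clarity and is not from the patent text), a linear dielectric polarizes in proportion to the applied field, while a ferroelectric retains a remnant polarization at zero field, corresponding to points 302 and 304 in FIG. 3:

$$P_{\text{linear}} = \varepsilon_0\,\chi_e\,E, \qquad P_{\text{ferroelectric}}(E = 0) = \pm P_r \neq 0$$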
- The term "ferroelectric" is used to indicate the analogy to ferromagnetic materials, which have spontaneous magnetization and also exhibit hysteresis loops.
- The dielectric constant of a ferroelectric capacitor is typically much higher than that of a linear dielectric because of the effects of semi-permanent electric dipoles formed in the crystal structure of the ferroelectric material.
- When an external field is applied, the dipoles tend to align themselves with the field direction, an alignment produced by small shifts in the positions of atoms that result in shifts in the distributions of electronic charge in the crystal structure. After the charge is removed, the dipoles retain their polarization state.
- Binary “0”s and “1”s are stored as one of two possible electric polarizations in each data storage cell. For example, in the figure a “1” may be encoded using the negative remnant polarization 302 , and a “0” may be encoded using the positive remnant polarization 304 , or vice versa.
- Ferroelectric random access memories have been implemented in several configurations.
- A one-transistor, one-capacitor (1T-1C) storage cell design in an FeRAM array is similar in construction to the storage cell in widely used DRAM, in that both cell types include one capacitor and one access transistor.
- In a DRAM cell capacitor a linear dielectric is used, whereas in an FeRAM cell capacitor the dielectric structure includes ferroelectric material, typically lead zirconate titanate (PZT). Due to the overhead of accessing a DRAM-type array, a 1T-1C cell is less desirable for use in small arrays such as NVL array 110.
- A four-capacitor, six-transistor (4C-6T) cell is a common type of cell that is easier to use in small arrays. An improved four-capacitor cell will now be described.
- FIG. 4 is a schematic illustrating one embodiment of a ferroelectric nonvolatile bitcell 400 that includes four capacitors and twelve transistors (4C-12T).
- The four FeCaps are arranged as two pairs in a differential arrangement.
- FeCaps C1 and C2 are connected in series to form node Q 404, and FeCaps C1′ and C2′ are connected in series to form node QB 405. A data bit is written into node Q and stored in FeCaps C1 and C2 via bit line BL, and the inverse of the data bit is written into node QB and stored in FeCaps C1′ and C2′ via inverse bit line BLB.
- Sense amp 410 is coupled to node Q and to node QB and is configured to sense a difference in voltage appearing on nodes Q, QB when the bitcell is read.
- The four transistors in sense amp 410 are configured as two cross-coupled inverters to form a latch.
- Pass gate 402 is configured to couple node Q to bit line BL, and pass gate 403 is configured to couple node QB to bit line BLB.
- Each pass gate 402, 403 is implemented using a PMOS device and an NMOS device connected in parallel. This arrangement reduces the voltage drop across the pass gate during a write operation, so that nodes Q, QB are presented with a higher voltage during writes and thereby a higher polarization is imparted to the FeCaps.
- Plate line 1 (PL1) is coupled to FeCaps C1 and C1′, and plate line 2 (PL2) is coupled to FeCaps C2 and C2′.
- The plate lines are used to provide biasing to the FeCaps during reading and writing operations.
- In another embodiment, the CMOS pass gates can be replaced with NMOS pass gates that use a pass gate enable voltage higher than VDDL.
- The magnitude of the higher voltage must exceed VDDL by at least the usual NMOS Vt in order to pass an undegraded signal between the bitcell Q/QB nodes and the bit lines BL/BLB (i.e., Vpass_gate_control must be > VDDL + Vt).
- Typically, there will be an array of bit cells 400; multiple rows and columns of similar bitcells form an n-row by m-column array.
- In this embodiment, the NVL arrays are 8×32; however, as discussed earlier, different configurations may be implemented.
- FIGS. 5 and 6 are timing diagrams illustrating read and write waveforms for reading a data value of logical 0 and writing a data value of logical 0, respectively.
- Reading and writing to the NVL array is a multi-cycle procedure that may be controlled by the NVL controller and synchronized by the NVL clock.
- Alternatively, the waveforms may be sequenced by fixed or programmable delays starting from a trigger signal, for example.
- Because an inverted version of the data value is also stored in the complementary FeCaps, one side or the other will always be storing a "1". To avoid time dependent dielectric breakdown (TDDB), plate line PL1, plate line PL2, node Q, and node QB are held at a quiescent low value when the cell is not being accessed, as indicated during time periods s0 in FIGS. 5 and 6.
- Power disconnect transistors MP 411 and MN 412 allow sense amp 410 to be disconnected from power during time periods s0 in response to sense amp enable signals SAEN and SAENB.
- Clamp transistor MC 406 is coupled to node Q and clamp transistor MC′ 407 is coupled to node QB.
- Clamp transistors 406, 407 are configured to clamp the Q and QB nodes, in response to clear signal CLR during non-access time periods s0, to a voltage approximately equal to the low logic voltage on the plate lines, which in this embodiment is 0 volts (the ground potential). In this manner, during times when the bit cell is not being accessed for reading or writing, no voltage is applied across the FeCaps, and TDDB is therefore essentially eliminated.
- The clamp transistors also serve to prevent stray charge buildup on nodes Q and QB due to parasitic leakage currents. Buildup of stray charge may cause the voltage on Q or QB to rise above 0 V, leading to a voltage differential across the FeCaps between Q or QB and PL1 and PL2. This can lead to unintended depolarization of the FeCap remnant polarization and could potentially corrupt the logic values stored in the FeCaps.
- In this embodiment, Vdd is 1.5 volts and the ground reference plane has a value of 0 volts.
- A logic high has a value of approximately 1.5 volts, while a logic low has a value of approximately 0 volts.
- Other embodiments that use logic levels different from ground for logic 0 (low) and Vdd for logic 1 (high) would clamp nodes Q, QB to a voltage corresponding to the quiescent plate line voltage, so that there is effectively no voltage across the FeCaps when the bitcell is not being accessed.
- In another embodiment, two clamp transistors per storage node may be used. Each of the two transistors clamps the voltage across one FeCap to no greater than one transistor Vt (threshold voltage) by shorting out that FeCap: for the first transistor, one terminal connects to Q and the other connects to PL1, while for the second transistor, one terminal connects to Q and the other connects to PL2.
- The transistors can be either NMOS or PMOS, but NMOS is more likely to be used.
- A bit cell using the two-transistor solution does not consume significantly more area than the one-transistor solution.
- The single-transistor solution assumes that PL1 and PL2 will remain at the same ground potential as the local VSS connection to the single clamp transistor, which is normally a good assumption.
- However, noise or other problems may occur (especially during power up) that cause PL1 or PL2 to glitch, or that produce a DC offset between the PL1/PL2 driver output and VSS for brief periods; the two-transistor design may therefore provide a more robust solution.
- To read the bit cell, plate line PL1 is switched from low to high while plate line PL2 is kept low, as indicated in time period s2.
- This induces voltages on nodes Q, QB whose values depend on the capacitor ratio between C1-C2 and C1′-C2′ respectively.
- The induced voltage in turn depends on the remnant polarization of each FeCap that was formed during the last data write operation to the FeCaps in the bit cell.
- The remnant polarization in effect "changes" the effective capacitance value of each FeCap, which is how FeCaps provide nonvolatile storage.
- V(Q) = V(PL1) × (C2 / (C1 + C2))    (1)
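- To make the capacitive-divider relationship concrete, the following Python sketch evaluates equation (1) for two hypothetical effective-capacitance states; the capacitance values are illustrative assumptions, not measured device data.

```python
# Minimal sketch of the FeCap capacitive-divider read of equation (1).
# Capacitance values are illustrative assumptions, not measured device data.

def induced_node_voltage(v_pl1, c1_eff, c2_eff):
    """V(Q) = V(PL1) * C2 / (C1 + C2) for the series FeCap divider."""
    return v_pl1 * c2_eff / (c1_eff + c2_eff)

VDD = 1.5  # volts, per the embodiment described above

# Remnant polarization shifts each FeCap's effective capacitance, so the two
# stored states present different C1/C2 ratios (hypothetical values here).
v_q_state_a = induced_node_voltage(VDD, c1_eff=1.0, c2_eff=2.0)
v_q_state_b = induced_node_voltage(VDD, c1_eff=2.0, c2_eff=1.0)

print(f"node Q: {v_q_state_a:.2f} V in one state, {v_q_state_b:.2f} V in the other")
```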
- the local sense amp 410 is then enabled during time period s3. After sensing the differential values 502 , 503 , sense amp 410 produces a full rail signal 504 , 505 .
- The resulting full rail signal is transferred to the bit lines BL, BLB during time period s4 by asserting the transfer gate enable signals PASS, PASSB to enable transfer gates 402, 403, thereby transferring the full rail signals to an output latch (responsive to latch enable signal LAT_EN) located in the periphery of NVL array 110, for example.
- FIG. 6 is a timing diagram illustrating writing a logic 0 to bit cell 400 .
- the write operation begins by raising both plate lines to Vdd during time period s1. This is called the primary storage method.
- the signal transitions on PL1 and PL2 are capacitively coupled onto nodes Q and QB, effectively pulling both storage nodes almost all the way to VDD (1.5 v).
- Data is provided on the bit lines BL, BLB and the transfer gates 402 , 403 are enabled by the pass signal PASS during time periods s2-s4 to transfer the data bit and its inverse value from the bit lines to nodes Q, QB.
- Sense amp 410 is enabled by sense amp enable signals SAEN, SAENB during time period s3, s4 to provide additional drive after the write data drivers have forced adequate differential on Q/QB during time period s2.
- the write data drivers are turned off at the end of time period s2 before the sense amp is turned on during time periods s3, s4.
- Alternatively, write operations may hold PL2 at 0 V (ground) throughout the data write operation. This can save power during data write operations, but reduces the resulting read signal margin by 50%, as C2 and C2′ no longer hold data via remnant polarization and only provide a linear capacitive load to the C1 and C1′ FeCaps.
- Reading data from the FeCaps may partially depolarize the capacitors. For this reason, reading data from FeCaps is considered destructive in nature; i.e., reading the data may destroy the contents of the FeCaps, or at a minimum reduce the integrity of the data. Consequently, if the data contained in the FeCaps is expected to remain valid after a read operation has occurred, the data must be written back into the FeCaps.
- specific NVL arrays may be designated to store specific information that will not change over a period of time.
- certain system states can be saved as a default return state where returning to that state is preferable to full reboot of the device.
- the reboot and configuration process for a state of the art ultra low power SoC can take 1000-10000 clock cycles or more to reach the point where control is handed over to the main application code thread.
- This boot time becomes critical for energy harvesting applications in which power is intermittent, unreliable, and limited in quantity.
- the time and energy cost of rebooting can consume most or all of the energy available for computation, preventing programmable devices such as MCU's from being used in energy harvesting applications.
- An example application would be energy harvesting light switches.
- The energy harvested from the press of the button on the light switch represents the entire energy available to complete the following tasks: 1) determine the desired function (on/off or dimming level), 2) format the request into a command packet, and 3) wake up a radio and transmit the packet over an RF link to the lighting system.
- Known custom ASIC chips with hard coded state machines are often used for this application due to the tight energy constraints, which makes the system inflexible and expensive to change because new ASIC chips have to be designed and fabricated whenever any change is desired.
- A programmable MCU SoC would be a much better fit, except that the power cost of the boot process consumes most of the available energy, leaving no budget for executing the required application code.
- At least one of the plurality of non-volatile logic element arrays is configured to store a boot state representing a state of the computing device apparatus after a given amount of a boot process is completed.
- the at least one non-volatile logic controller in this approach is configured to control restoration of data representing the boot state from the at least one of the plurality of non-volatile logic element arrays to corresponding ones of the plurality of volatile storage elements in response to detecting a previous system reset or power loss event for the computing device apparatus.
- the at least one non-volatile logic controller can be configured to execute a round-trip data restoration operation that automatically writes back data to an individual non-volatile logic element after reading data from the individual non-volatile logic element without completing separate read and write operations.
- An example execution of a round-trip data restoration is illustrated in FIG. 7, which shows a writeback operation on bitcell 400: the bitcell is read, and then written back with the same value.
- initiating reading of data from the individual non-volatile logic element is started at a first time S1 by switching a first plate line PL1 high to induce a voltage on a node of a corresponding ferroelectric capacitor bit cell based on a capacitance ratio for ferroelectric capacitors of the corresponding ferroelectric capacitor bit cell. If clamp switches are used to ground the nodes of the ferroelectric capacitors, a clear signal CLR is switched from high to low at the first time S1 to unclamp those aspects of the individual non-volatile logic element from electrical ground.
- a sense amplifier enable signal SAEN is switched high to enable a sense amplifier to detect the voltage induced on the node and to provide an output signal corresponding to data stored in the individual non-volatile logic element.
- a pass line PASS is switched high to open transfer gates to provide an output signal corresponding to data stored in the individual non-volatile logic element.
- a second plate line PL2 is switched high to induce a polarizing signal across the ferroelectric capacitors to write data back to the corresponding ferroelectric capacitor bit cell corresponding to the data stored in the individual non-volatile logic element.
- The pass line PASS is switched low at the sixth time S6, and the sense amplifier enable signal SAEN is switched low at the seventh time S7.
- a clear signal CLR is switched from low to high to clamp the aspects of the individual non-volatile logic element to the electrical ground to help maintain data integrity as discussed herein. This process includes a lower total number of transitions than what is needed for distinct and separate read and write operations (read, then write). This lowers the overall energy consumption.
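- For illustration, the following Python sketch models the ordering of the round-trip restore control signals and counts the resulting transitions. The assignment of the plate-line return to a single intermediate step is an assumption, and the model captures signal order only, not analog behavior.

```python
# Behavioral model of the round-trip (read-then-writeback) control sequence.
# Step ordering follows the S1..S8 description above; assigning the plate-line
# return to step S5 is an assumption. Signal order only, no analog behavior.

SEQUENCE = [
    ("S1", {"CLR": 0, "PL1": 1}),  # unclamp Q/QB and induce the read voltage
    ("S2", {"SAEN": 1}),           # sense amp resolves the stored value
    ("S3", {"PASS": 1}),           # transfer gates drive the output
    ("S4", {"PL2": 1}),            # second plate line polarizes for writeback
    ("S5", {"PL1": 0, "PL2": 0}),  # plate lines return low (assumed step)
    ("S6", {"PASS": 0}),           # close the transfer gates
    ("S7", {"SAEN": 0}),           # disable the sense amp
    ("S8", {"CLR": 1}),            # re-clamp the nodes to ground
]

state = {"CLR": 1, "PL1": 0, "PL2": 0, "SAEN": 0, "PASS": 0}
transitions = 0
for _step, changes in SEQUENCE:
    for signal, level in changes.items():
        transitions += state[signal] != level
        state[signal] = level

# Fewer transitions than a separate full read followed by a separate full write.
print(f"round-trip restore: {transitions} signal transitions")
```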
- Bitcell 400 is designed to maximize read differential across Q/QB in order to provide a highly reliable first generation of NVL products. Two FeCaps are used on each side rather than using one FeCap and constant BL capacitance as a load because this doubles the differential voltage that is available to the sense amp. A sense amp is placed inside the bitcell to prevent loss of differential due to charge sharing between node Q and the BL capacitance and to avoid voltage drop across the transfer gate. The sensed voltages are around VDD/2, and a HVT transfer gate takes a long time to pass them to the BL. Bitcell 400 helps achieve twice the signal margin of a regular FRAM bitcell known in the art, while not allowing any DC stress across the FeCaps.
- Timing of signals shown in FIGS. 5 and 6 is for illustrative purposes. Various embodiments may use signal sequences that vary depending on the clock rate, process parameters, device sizes, etc.
- the timing of the control signals may operate as follows. During time period S1: PASS goes from 0 to 1 and PL1/PL2 go from 0 to 1. During time period S2: SAEN goes from 0 to 1, during which time the sense amp may perform level shifting as will be described later, or provides additional drive strength for a non-level shifted design. During time period S3: PL1/PL2 go from 1 to 0 and the remainder of the waveforms remain the same, but are moved up one clock cycle. This sequence is one clock cycle shorter than that illustrated in FIG. 6 .
- the timing of the control signals may operate as follows.
- During time period S1: PASS goes from 0 to 1 (BL/BLB and Q/QB are at 0 V and VDDL, respectively).
- During time period S2: SAEN goes from 0 to 1 (BL/BLB and Q/QB are at 0 V and VDDN, respectively).
- During time period S3: PL1/PL2 go from 0 to 1 (BL/Q is coupled above ground by PL1/PL2 and is driven back low by the SA and BL drivers).
- Subsequently, PL1/PL2 go from 1 to 0 and the remainder of the waveforms remain the same.
- FIGS. 8-9 are a schematic and timing diagram illustrating another embodiment of a ferroelectric nonvolatile bit cell 800 , a 2C-3T self-referencing based NVL bitcell.
- the previously described 4-FeCap based bitcell 400 uses two FeCaps on each side of a sense amp to get a differential read with double the margin as compared to a standard 1C-1T FRAM bitcell.
- a 4-FeCap based bitcell has a larger area and may have a higher variation because it uses more FeCaps.
- Bitcell 800 helps achieve a differential 4-FeCap like margin in lower area by using itself as a reference, referred to herein as self-referencing. By using fewer FeCaps, it also has lower variation than a 4 FeCap bitcell.
- a single sided cell needs to use a reference voltage that is in the middle of the operating range of the bitcell. This in turn reduces the read margin by half as compared to a two sided cell.
- the reference value may become skewed, further reducing the read margin.
- a self-reference scheme allows comparison of a single sided cell against itself, thereby providing a higher margin. Tests of the self-referencing cell described herein have provided at least double the margin over a fixed reference cell.
- Bitcell 800 has two FeCaps C1, C2 that are connected in series to form node Q 804 .
- Plate line 1 (PL1) is coupled to FeCap C1 and plate line 2 (PL2) is coupled to FeCap C2.
- The plate lines are used to provide biasing to the FeCaps during reading and writing operations.
- Pass gate 802 is configured to couple node Q to bit line BL.
- Pass gate 802 is implemented using a PMOS device and an NMOS device connected in parallel. This arrangement reduces voltage drop across the pass gate during a write operation so that node Q is presented with a higher voltage during writes, and thereby a higher polarization is imparted to the FeCaps.
- an NMOS pass gate may be used with a boosted word line voltage.
- Clamp transistor MC 806 is coupled to node Q.
- Clamp transistor 806 is configured to clamp the Q node to a voltage approximately equal to the low logic voltage on the plate lines in response to clear signal CLR during non-access time periods s0, which in this embodiment is 0 volts (ground). In this manner, during times when the bit cell is not being accessed for reading or writing, no voltage is applied across the FeCaps, and therefore TDDB and unintended partial depolarization are essentially eliminated.
- the initial state of node Q, plate lines PL1 and PL2 are all 0, as shown in FIG. 9 at time period s0, so there is no DC bias across the FeCaps when the bitcell is not being accessed.
- PL1 is toggled high while PL2 is kept low, as shown during time period s1.
- a signal 902 develops on node Q from a capacitance ratio based on the retained polarization of the FeCaps from a last data value previously written into the cell, as described above with regard to equation 1.
- This voltage is stored on a read capacitor 820 external to the bitcell by passing the voltage through transfer gate 802 onto bit line BL and then through transfer gate 822 in response to a second enable signal EN1.
- BL and the read capacitors are precharged to VDD/2 before the pass gates 802 , 822 , and 823 are enabled in order to minimize signal loss via charge sharing when the recovered signals on Q are transferred via BL to the read storage capacitors 820 and 821 .
- PL1 is toggled back low and node Q is discharged using clamp transistor 806 during time period s2.
- PL2 is toggled high keeping PL1 low during time period s3.
- a new voltage 904 develops on node Q, but this time with the opposite capacitor ratio.
- This voltage is then stored on another external read capacitor 821 via transfer gate 823 .
- Sense amplifier 810 can then determine the state of the bitcell by using the voltages stored on the external read capacitors 820 , 821 .
- In a typical implementation, there will be an array of bit cells 800.
- One column of bit cells 800 - 800 n is illustrated in FIG. 8 coupled via bit line 801 to read transfer gates 822 , 823 .
- the NVL arrays are 8 ⁇ 32; however, as discussed earlier, different configurations may be implemented.
- the read capacitors and sense amps may be located in the periphery of the memory array, for example.
- FIG. 10 is a block diagram illustrating NVL array 110 in more detail. Embedding non-volatile elements outside the controlled environment of a large array presents reliability and fabrication challenges. As discussed earlier with reference to FIG. 1, adding testability features to individual NVL FFs may be prohibitive in terms of area overhead. To amortize the test feature costs and improve manufacturability, SoC 100 is implemented using 256b mini-NVL arrays 110 of FeCap based bitcells dispersed throughout the logic cloud to save the state of the various flip flops 120 when power is removed. Each cloud 102-104 of FFs 120 includes an associated NVL array 110. A central NVL controller 106 controls all the arrays and their communication with FFs 120.
- NVL array 110 is implemented with an array 1040 of eight rows and thirty-two bit columns of bitcells.
- Each individual bit cell such as bitcell 1041 , is coupled to a set of control lines provided by row drivers 1042 .
- The control signals described earlier, including plate lines (PL1, PL2), sense amp enable (SAEN), transfer gate enable (PASS), and clear (CLR), are all driven by the row drivers.
- Each individual bit cell, such as bitcell 1041 is also coupled via the bitlines to a set of input/output (IO) drivers 1044 .
- Each driver set produces an output signal 1046 that provides a data value when a row of bit lines is read.
- Each bitline runs the length of a column of bitcells and couples to an IO driver for that column.
- Each bitcell may be implemented as 2C-3T bitcell 800 , for example. In this case, a single bitline will be used for each column, and the sense amps and read capacitors will be located in IO driver block 1044 .
- each bitcell may be implemented as 4C-12T bit cell 400 .
- the bitlines will be a differential pair with two IO drivers for each column.
- a comparator receives the differential pair of bitlines and produces a final single bit line that is provided to the output latch.
- Other implementations of NVL array 110 may use other known or later developed bitcells in conjunction with the row drivers and IO drivers that will be described in more detail below.
- Timing logic 1046 generates timing signals that are used to control the read drivers to generate the sequence of control signals for each read and write operation. Timing logic 1046 may be implemented using either synchronous or asynchronous state machines, or another known or later developed logic technique.
- One potential alternative embodiment utilizes a delay chain with multiple outputs that “tap” the delay chain at desired intervals to generate control signals. Multiplexors can be used to provide multiple timing options for each control signal.
- Another potential embodiment uses a programmable delay generator that produces edges at the desired intervals using dedicated outputs that are connected to the appropriate control signals.
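- As a rough illustration of the tapped delay chain alternative, the Python sketch below derives control-signal edge times from taps spaced along a delay chain, with a mux-style selection of which tap fires each edge. The tap spacing and edge assignments are hypothetical, not taken from the design.

```python
# Hypothetical tapped-delay-chain timing generator: control edges are taken
# from taps along a delay chain, and a mux selects which tap drives each
# control edge. Tap spacing and edge assignments are illustrative assumptions.

TAP_DELAY_NS = 2.0                       # assumed per-stage delay
taps = [i * TAP_DELAY_NS for i in range(8)]

# Mux selection: which tap index launches each control edge (hypothetical).
edge_select = {"PASS rise": 0, "SAEN rise": 2, "PL1/PL2 fall": 4, "PASS fall": 6}

for edge, tap_index in edge_select.items():
    print(f"{edge:>12} fires at t = {taps[tap_index]:.1f} ns")
```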
- FIG. 11 is a more detailed schematic of a set of input/output circuits 1150 used in the NVL array of FIG. 10 .
- each IO set 1045 of the thirty-two drivers in IO block 1044 is similar to IO circuits 1150 .
- I/O block 1044 provides several features to aid testability of NVL bits.
- a first latch (L1) 1151 serves as an output latch during a read and also combines with a second latch (L2) 1152 to form a scan flip flop.
- the scan output (SO) signal is routed to multiplexor 1153 in the write driver block 1158 to allow writing scanned data into the array during debug.
- Scan output (SO) is also coupled to the scan input (SI) of the next set of IO drivers to form a thirty-two bit scan chain that can be used to read or write a complete row of bits from NVL array 110 .
- the scan latch of each NVL array is connected in a serial manner to form a scan chain to allow all of the NVL arrays to be accessed using the scan chain.
- the scan chain within each NVL array may be operated in a parallel fashion (N arrays will generate N chains) to reduce the number of internal scan flop bits on each chain in order to speed up scan testing.
- the number of chains and the number of NVL arrays per chain may be varied as needed.
- all of the storage latches and flipflops within SoC 100 include scan chains to allow complete testing of SoC 100 . Scan testing is well known and does not need to be described in more detail herein.
- the NVL chains are segregated from the logic chains on a chip so that the chains can be exercised independently and NVL arrays can be tested without any dependencies on logic chain organization, implementation, or control.
- the maximum total length of NVL scan chains will always be less than the total length of logic chains since the NVL chain length is reduced by a divisor equal to the number of rows in the NVL arrays.
- In this embodiment, there are 8 entries per NVL array, so the total length of the NVL scan chains is 1/8th the total length of the logic scan chains. This reduces the time required to access and test NVL arrays and thus reduces test cost. It also eliminates the need to determine the mapping between logic flops, their position on logic scan chains, and their corresponding NVL array bit location (identifying the array, row, and column location), greatly simplifying NVL test, debug, and failure analysis.
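- The chain-length saving is simple arithmetic: each NVL array folds one row's worth of flops onto each scan bit, so an NVL chain is roughly 1/8th the length of the corresponding logic chain for 8-row arrays. A Python sketch, using the flop count of the example SoC described later in this document:

```python
# Scan-chain length comparison for 8-row NVL arrays. The flop count is taken
# from the example SoC described later in this document; ceiling division
# models the per-column scan latch shared by all rows of an array.
flops_total = 2537         # FFs and latches in the example SoC
rows_per_array = 8         # entries (rows) per NVL array

logic_chain_bits = flops_total
nvl_chain_bits = -(-flops_total // rows_per_array)   # ceiling division

print(f"logic scan chain: {logic_chain_bits} bits")
print(f"NVL scan chains:  {nvl_chain_bits} bits total (~1/{rows_per_array})")
```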
- Each NVL bitcell is coupled to an associated flip-flop and is only written to by saving the state of the flip flop. Thus, in order to load a pattern test into an NVL array from the associated flipflops, the corresponding flipflops must be set up using a scan chain.
- Determining which bits on a scan chain have to be set or cleared in order to control the contents of a particular row in an NVL array is a complex task as the connections are made based on the physical location of arbitrary groups of flops on a silicon die and not based on any regular algorithm. As such, the mapping of flops to NVL locations need not be controlled and is typically somewhat random.
- NVL controller 106 has state machine(s) to perform fast pass/fail tests for all NVL arrays on the chip to screen out bad dies.
- at least one non-volatile logic controller is configured to control a built-in-self-test mode where all zeros or all ones are written to at least a portion of an NVL array of the plurality of NVL arrays and then it is determined whether data read from the at least the portion of the NVL array is all ones or all zeros.
- the NVL controller may instruct all of the NVL arrays within SoC 100 to simultaneously perform an all ones write to a selected row, and then instruct all of the NVL arrays to simultaneously read the selected row and provide a pass fail indication using only a few control signals without transferring any explicit test data from the NVL controller to the NVL arrays.
- In typical memory array BIST (built-in self-test) implementations, the BIST controller must have access to all memory output values so that each output bit can be compared with the expected value. Given that there are many thousands of logic flops on typical silicon SoC chips, the total number of NVL array outputs can also measure in the thousands.
- The NVL test method can then be repeated eight times for NVL arrays having eight rows (the number of repetitions will vary according to the array organization; for example, a 10 entry NVL array implementation would repeat the test method 10 times), so that all of the NVL arrays in SoC 100 can be tested for correct all-ones operation in only eight write cycles and eight read cycles. Similarly, all of the NVL arrays in SoC 100 can be tested for correct all-zeros operation in only eight write cycles and eight read cycles.
- The results of all of the NVL arrays may be condensed into a single signal indicating pass or fail by an additional AND gate and OR gate that receive the corr_0 and corr_1 signals from each of the NVL arrays and produce a single corr_0 and corr_1 signal, or the NVL controller may look at each individual corr_0 and corr_1 signal.
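- A behavioral Python sketch of the pass/fail condensation follows: each array ANDs and ORs its thirty-two outputs to form corr_1 and corr_0, and per-array flags are combined into one chip-level signal. The gate-level details here are assumptions consistent with the description.

```python
# Behavioral sketch of the all-0/all-1 BIST condensation. Each array ANDs and
# ORs its thirty-two outputs: corr_1 means an all-ones row read back correctly,
# corr_0 means an all-zeros row did. Gate-level detail is an assumption.

def array_flags(row_bits):
    return {"corr_1": all(row_bits), "corr_0": not any(row_bits)}

# Hypothetical read-back data from three mini-arrays after an all-ones write;
# the third array has a single failing bit.
arrays = [[1] * 32, [1] * 32, [1] * 31 + [0]]

flags = [array_flags(row) for row in arrays]
chip_corr_1 = all(f["corr_1"] for f in flags)  # condensed chip-level signal

print("all-ones test pass:", chip_corr_1)      # False: the bad die is caught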
- All 0/1 write driver 1180 includes PMOS devices M1, M3 and NMOS devices M2, M4. Devices M1 and M2 are connected in series to form a node that is coupled to the bitline BL, while devices M3 and M4 are connected in series to form a node that is coupled to the inverse bitline BLB.
- Control signal "all_1_A" and its inverse "all_1_B" are generated by NVL controller 106. When asserted during a write cycle, they activate devices M1 and M4 to cause the bit lines BL and BLB to be pulled to represent a data value of logic 1. Similarly, control signal "all_0_A" and its inverse "all_0_B" are generated by NVL controller 106.
- When asserted during a write cycle, they activate devices M2 and M3 to cause the bit lines BL and BLB to be pulled to represent a data value of logic 0. In this manner, the thirty-two drivers are operable to write all ones into a row of bit cells in response to one control signal and to write all zeros into a row of bit cells in response to another control signal.
- One skilled in the art can easily design other circuit topologies to accomplish the same task.
- the current embodiment is preferred as it only requires 4 transistors to accomplish the required data writes.
- write driver block 1158 receives a data bit value to be stored on the data_in signal.
- Write drivers 1156, 1157 couple complementary data signals to bitlines BL, BLB and thereby to the selected bit cell.
- Write drivers 1156 , 1157 are enabled by the write enable signal STORE.
- FIG. 12A is a timing diagram illustrating an offset voltage test during a read cycle.
- state s1 is modified during a read.
- This figure illustrates a voltage disturb test for reading a data value of “0” (node Q); a voltage disturb test for a data value of “1” is similar, but injects the disturb voltage onto the opposite side of the sense amp (node QB).
- the disturb voltage in this embodiment is injected onto the low voltage side of the sense amp based on the logic value being read.
- Transfer gates 1154 , 1155 are coupled to the bit line BL, BLB.
- a digital to analog converter is programmed by NVL controller 106 , by an off-chip test controller, or via an external production tester to produce a desired amount of offset voltage V_OFF.
- NVL controller 106 may assert the Vcon control signal for the bitline side storing a "0" during the s1 time period to thereby enable Vcon transfer gate 1154, 1155, discharge the other bit line using M2/M4 during s1, and assert control signal PASS during s1 to turn on transfer gates 402, 403. This initializes the voltage on node Q/QB of the "0" storing side to offset voltage V_OFF, as shown at 1202.
- V_Off may be set to a required margin value, and the pass/fail test using G0-1 may then be used to screen out any failing die.
- FIG. 12B illustrates a histogram generated during a sweep of offset voltage.
- Bit level failure margins can be studied by sweeping V_Off and scanning out the read data bits using a sequence of read cycles, as described above.
- In this example, the worst case read margin is 550 mV, the mean value is 597 mV, and the standard deviation is 22 mV. In this manner, the operating characteristics of all bit cells in each NVL array on an SoC may be easily determined.
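- Statistics like those above can be derived from a V_OFF sweep as sketched below in Python; the per-bit margin values here are synthetic placeholders standing in for scanned-out tester data.

```python
# Sketch of deriving read-margin statistics from a V_OFF sweep. Each bit's
# margin is the largest offset voltage at which it still reads correctly; the
# values below are synthetic placeholders for scanned-out per-bit results.
import random
import statistics

random.seed(0)
bit_margins_mv = [random.gauss(597, 22) for _ in range(256)]  # placeholder data

print(f"worst-case margin: {min(bit_margins_mv):.0f} mV")
print(f"mean margin:       {statistics.mean(bit_margins_mv):.0f} mV")
print(f"std deviation:     {statistics.stdev(bit_margins_mv):.0f} mV")
```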
- An NVL bitcell should be designed for maximum read signal margin and in-situ testability, as is needed for any NV-memory technology.
- An NVL implementation cannot rely on SRAM-like built-in self-test (BIST) because NVL arrays are distributed inside the logic cloud.
- the NVL implementation described above includes NVL arrays controlled by a central NVL controller 106 . While screening a die for satisfactory behavior, NVL controller 106 runs a sequence of steps that are performed on-chip without any external tester interference. The tester only needs to issue a start signal, and apply an analog voltage which corresponds to the desired signal margin.
- the controller first writes all 0s or 1s to all bits in the NVL array. It then starts reading an array one row at a time.
- the NVL array read operations do not necessarily immediately follow NVL array write operations. Often, high temperature bake cycles are inserted between data write operations and data read operations in order to accelerate time and temperature dependent failure mechanisms so that defects that would impact long term data retention can be screened out during manufacturing related testing.
- The array contains logic that ANDs and ORs all outputs of the array. These two signals are sent to the controller. Upon reading each row, the controller looks at the two signals from the array and, based on knowledge of what it previously wrote, decides if the data read was correct or not in the presence of the disturb voltage.
- the controller moves onto the next row in the array. All arrays can be tested in parallel at the normal NVL clock frequency. This enables high speed on-chip testing of the NVL arrays with the tester only issuing a start signal and providing the desired read signal margin voltage while the NVL controller reports pass at the end of the built in testing procedure or generates a fail signal whenever the first failing row is detected. Fails are reported immediately so the tester can abort the test procedure at the point of first failure rather than waste additional test time testing the remaining rows.
- the controller may also have a debug mode.
- the tester can specify an array and row number, and the NVL controller can then read or write to just that row.
- the read contents can be scanned out using the NVL scan chain.
- This method provides read or write access to any NVL bit on the die without CPU intervention and without requiring the use of long, complicated SoC scan chains in which the mapping of NVL array bits to individual flops is random. Further, this can be done in concert with applying an analog voltage for read signal margin determination, so exact margins for individual bits can be measured.
- NVL implementation using mini-arrays distributed in the logic cloud means that a sophisticated error detection method like ECC would require a significant amount of additional memory columns and control logic to be used on a per array basis, which could be prohibitive from an area standpoint.
- the NVL arrays of SoC 100 may include parity protection as a low cost error detection method, as will now be described in more detail.
- FIG. 13 is a schematic illustrating parity generation in NVL array 110, showing an example NVL array having thirty-two columns of bits (0:31). Each column exclusive-ORs the input data value DATA_IN 1151 with the output of a similar XOR gate of the previous column's IO driver.
- Each IO driver section, such as section 1350 , of the NVL array may contain an XOR gate 1160 , referring again to FIG. 11A .
- The output of XOR gate 1160 that is in column 30 is the overall parity value of the row of data that is being written in bit columns 0:30 and is used to write the parity value into the last column by feeding its output to the data input of column 31 of the NVL mini-array, shown as XOR_IN in FIG. 11B.
- XOR gate 1160 exclusive-ors the data value DATA_OUT from read latch 1151 via mux 1161 (see FIG. 11 ) with the output of a similar XOR gate of the previous column's IO driver.
- the output of XOR gate 1160 that is in bit column 30 is the overall parity value for the row of data that was read from bit columns 0:30 and is used to compare to a parity value read from bit column 31 in parity error detector 1370 . If the overall parity value determined from the read data does not match the parity bit read from column 31, then a parity error is declared.
- When a parity error is detected, it indicates that the stored FF state values are not trustworthy. Since the NVL array is typically being read when the SoC is restarting operation after being in a power off state, detection of a parity error indicates that a full boot operation needs to be performed in order to regenerate the correct FF state values.
- In some cases, an indeterminate condition may exist. For example, if the NVL array is empty, then typically all of the bits may have a value of zero, or they may all have a value of one. In the case of all zeros, the parity value generated for all zeros would be zero, which would match the parity bit value of zero. Therefore, the parity test would incorrectly indicate that the FF state was correct and that a boot operation is not required, when in fact it would be required. In order to prevent this occurrence, an inverted version of the parity bit may be written to column 31 by bit line driver 1365, for example.
- Referring again to FIG. 11, bit line driver 1156 for columns 0-30 also inverts the input data bits, and mux 1153 inverts the data_in bits when they are received, so the result is that the data in columns 0-30 is stored un-inverted.
- Alternatively, the data bits may be inverted and the parity bit not inverted, for example.
- NVL array 110 is constrained to have an odd number of data columns. For example, in this embodiment, there are thirty-one data columns and one parity column, for a total of thirty-two bitcell columns.
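- The parity scheme can be summarized behaviorally as follows (a Python sketch, not the hardware): XOR the thirty-one data bits, store the inverted parity in column 31, and on read compare the recomputed parity against the inverted stored bit, so an erased all-zeros row fails the check as intended.

```python
# Behavioral model of the NVL row parity scheme: thirty-one data columns plus
# one parity column storing the INVERTED parity, so an erased all-zeros row is
# flagged as invalid instead of passing the check.

def write_row(data_bits):            # data_bits: 31 values of 0/1
    parity = 0
    for bit in data_bits:            # chained XOR across columns 0:30
        parity ^= bit
    return data_bits + [parity ^ 1]  # column 31 holds the inverted parity

def row_is_valid(row):               # row: 32 stored bits
    recomputed = 0
    for bit in row[:31]:
        recomputed ^= bit
    return recomputed == (row[31] ^ 1)

good_row = write_row([1, 0, 1] + [0] * 28)
erased_row = [0] * 32                # never-written (or power-lost) row

print(row_is_valid(good_row))        # True: restore may proceed
print(row_is_valid(erased_row))      # False: fall back to a full boot
```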
- When an NVL read operation occurs, control logic for the NVL array causes the parity bit to be read, inverted, and written back. This allows the NVL array to detect when prior NVL array writes were incomplete or invalid/damaged. Remnant polarization is not completely wiped out by a single read cycle; typically, it takes 5-15 read cycles to fully depolarize the FeCaps or to corrupt the data enough to reliably trigger an NVL read parity error. For example, if only four out of eight NVL array rows were written during the last NVL store operation due to loss of power, this would most likely result in an incomplete capture of the prior machine state.
- The current embodiment of the array disables the PL1, PL2, and sense amp enable signals for all non-parity bits (i.e., data bits) to minimize the parasitic power consumption of this feature.
- a valid determination can be made that the data being read from the NVL arrays contains valid FF state information. If a parity error is detected, then a boot operation can be performed in place of restoring FF state from the NVL arrays.
- low power SoC 100 has multiple voltage and power domains, such as VDDN_FV, VDDN_CV for the NVL arrays, VDDR for the sleep mode retention latches and well supplies, and VDDL for the bulk of the logic blocks that form the system microcontroller, various peripheral devices, SRAM, ROM, etc., as described earlier with regard to Table 1 and Table 2.
- FRAM has internal power switches and is connected to the always-on supply VDDZ.
- the VDDN_FV domain may be designed to operate at one voltage, such as 1.5 volts needed by the FeCap bit cells, while the VDDL and VDDN_CV domain may be designed to operate at a lower voltage to conserve power, such as 0.9-1.5 volts, for example.
- VDDL/VDDN_CV can be any valid voltage less than or equal to VDDN_FV and the circuit will function correctly.
- FIG. 14 is a block diagram illustrating power domains within NVL array 110 .
- Various blocks of logic and memory may be arranged by power domain as illustrated in Table 3 (voltage domain, voltage level, and contents):
- VDD (0.9-1.5 V): Always ON supply for the VDDL, VDDR, and VDDN_CV power switches, and for always ON logic (if any).
- VDDZ (1.5 V): Always on 1.5 V supply for FRAM and for the VDDN_FV power switches. FRAM has internal power switches.
- VDDL (0.9-1.5 V): All logic, the master stage of all flops, SRAM, ROM, the write multiplexor, buffers on FF outputs, and mux outputs. Variable logic voltage, e.g. 0.9 to 1.5 V; this supply is derived from the output of the VDDL power switches.
- VDDN_CV (0.9-1.5 V): NVL array control and timing logic, IO circuits, and the NVL controller; derived from the VDDN_CV power switches.
- VDDN_FV (1.5 V): NVL array wordline driver circuits 1042 and NVL bitcell array 1040. Same voltage as FRAM; derived from the VDDN_FV power switches.
- VDDR (0.9-1.5 V): The data retention domain, including the slave stage of retention flops, buffers on the NVL clock, flop retention enable signal buffers, and NVL control outputs such as flop update control signal buffers and buffers on NVL data outputs; derived from the VDDR power switches.
- Power domains VDDL, VDDN_CV, VDDN_FV, and VDDR described in Table 3 are controlled using a separate set of power switches, such as switches 108 described earlier. However, isolation may be needed for some conditions. Data output buffers within IO buffer block 1044 are in the NVL logic power domain VDDN_CV and therefore may remain off while domain VDDR (or VDDL depending on the specific implementation) is ON during normal operation of the chip. ISO-Low isolation is implemented to tie all such signals to ground during such a situation.
- While VDDN_CV is off, logic connected to data outputs in the VDDR (or VDDL, depending on the specific implementation) domain in the random logic area may generate short circuit current between power and ground in internal circuits if any signals from the VDDN_CV domain are floating (not driven when the VDDN_CV domain is powered down) and are not isolated. The same is applicable for the corr_0/1 outputs and the scan out output of the NVL arrays. The general idea is that any outputs of the NVL array are isolated whenever the NVL array is unpowered. In case there is always-ON logic present in the chip, all signals going from VDDL or VDDN_CV to VDD must be isolated using input isolation at the VDD domain periphery.
- Input isolation is built into the NVL flops at the ND input.
- the input goes to a transmission gate, whose control signal NU is driven by an always on signal.
- NU is made low, thereby disabling the ND input port.
- Similar built-in isolation exists on data inputs and scan-in of the NVL array. This isolation would be needed during NVL restore when VDDL is OFF.
- signals NU and NVL data input multiplexor enable signals (mux_sel) must be buffered only in the VDDR domain. The same applies for the retention enable signal.
- The VDDL and VDDN* domains are shut off at various times, and isolation makes that possible without burning short circuit current.
- Level conversion from the lower voltage VDDL domain to the higher voltage VDDN domain is needed on control inputs of the NVL arrays that go to the NVL bitcells, such as: row enables, PL1, PL2, restore, recall, and clear, for example.
- This enables a reduction in system power dissipation by allowing blocks of SOC logic and NVL logic gates that can operate at a lower voltage to do so.
- word line drivers 1042 For each row of bitcells in bitcell array 1040 , there is a set of word line drivers 1042 that drive the signals for each row of bitcells, including plate lines PL1, PL2, transfer gate enable PASS, sense amp enable SAEN, clear enable CLR, and voltage margin test enable VCON, for example.
- bitcell array 1040 and the wordline circuit block 1042 are supplied by VDDN.
- Level shifting on input signals to 1042 is handled by dedicated level shifters (see FIG. 15), while level shifting on inputs to the bitcell array 1040 is handled by special sequencing of the circuits within the NVL bitcells, without adding any additional dedicated circuits to the array datapath or bitcells.
- FIG. 15 is a schematic of a level converter 1500 for use in NVL array 110 .
- FIG. 15 illustrates one wordline driver that may be part of the set of wordline drivers 1042.
- Level converter 1500 includes PMOS transistors P1, P2 and NMOS transistor N1, N2 that are formed in region 1502 in the 1.5 volt VDDN domain for wordline drivers 1042 .
- the control logic in timing and control module 1046 is located in region 1503 in the 1.2 v VDDL domain (1.2 v is used to represent the variable VDDL core supply that can range from 0.9 v to 1.5 v).
- 1.2 volt signal 1506 is representative of any of the row control signals that are generated by control module 1046 , for use in accessing NVL bitcell array 1040 .
- Inverter 1510 forms a complementary pair of control signals 1511, 1512 in region 1503 that are then routed to transistors N1 and N2 in level converter 1500.
- When control signal 1511 is asserted, NMOS device N1 pulls the gate of PMOS device P2 low, which causes P2 to pull signal 1504 up to 1.5 volts.
- Conversely, complementary signal 1512 causes NMOS device N2 to pull the gate of PMOS device P1 low, which pulls up the gate of PMOS device P2 and allows signal 1504 to go low, to approximately zero volts.
- the NMOS devices must be stronger than the PMOS so the converter doesn't get stuck.
- In this manner, level shifting may be done across the voltage domains, and power may be saved by placing the control logic, including inverter 1510, in the lower voltage domain 1503.
- The controller is coupled to each level converter 1500 by two complementary control signals 1511, 1512.
- FIG. 16 is a timing diagram illustrating operation of level shifting using a sense amp within a ferroelectric bitcell.
- Input data that is provided to NVL array 110 from multiplexor 212 , referring again to FIG. 2 , also needs to be level shifted from the 1.2 v VDDL domain to 1.5 volts needed for best operation of the FeCaps in the 1.5 volt VDDN domain during write operations. This may be done using the sense amp of bit cell 400 , for example.
- each bit line BL such as BL 1352 , which comes from the 1.2 volt VDDL domain, is coupled to transfer gate 402 or 403 within bitcell 400 .
- Sense amp 410 operates in the 1.5 v VDDN power domain.
- data is provided on the bit lines BL, BLB and the transfer gates 402 , 403 are enabled by the pass signal PASS during time periods s2 to transfer the data bit and its inverse value from the bit lines to differential nodes Q, QB.
- The voltage level transferred is limited to less than the 1.5 volt level because the bit line drivers are located in the 1.2 V VDDL domain.
- Sense amp 410 is enabled by sense amp enable signals SAEN, SAENB during time period s3, s4 to provide additional drive, as illustrated at 1604 , after the write data drivers, such as write driver 1156 , 1157 , have forced adequate differential 1602 on Q/QB during time period s2. Since the sense amp is supplied by a higher voltage (VDDN), the sense amp will respond to the differential established across the sense amp by the write data drivers and will clamp the logic 0 side of the sense amp to VSS (Q or QB) while the other side containing the logic 1 is pulled up to VDDN voltage level. In this manner, the existing NVL array hardware is reused to provide a voltage level shifting function during NVL store operations.
- the write data drivers are isolated from the sense amp at the end of time period s2 before the sense amp is turned on during time periods s3, s4. This may be done by turning off the bit line drivers by de-asserting the STORE signal after time period s2 and/or also by disabling the transfer gates by de-asserting PASS after time period s2.
- a computing device can be configured to operate continuously across a series of power interruptions without loss of data or reboot.
- a processing device 1700 as described above includes a plurality of non-volatile logic element arrays 1710 , a plurality of volatile storage elements 1720 , and at least one non-volatile logic controller 1730 configured to control the plurality of non-volatile logic element arrays 1710 to store a machine state represented by the plurality of volatile storage elements 1720 and to read out a stored machine state from the plurality of non-volatile logic element arrays 1710 to the plurality of volatile storage elements 1720 .
- a voltage or current detector 1740 is configured to sense a power quality from an input power supply 1750 .
- A power management controller 1760 is in communication with the voltage or current detector 1740 to receive information regarding the power quality from the voltage or current detector 1740.
- The power management controller 1760 is also configured to be in communication with the at least one non-volatile logic controller 1730 to provide information effecting storage of the machine state to, and restoration of the machine state from, the plurality of non-volatile logic element arrays 1710.
- a voltage regulator 1770 is connected to receive power from the input power supply 1750 and provide power to an output power supply rail 1755 configured to provide power to the processing device 1700 .
- the voltage regulator 1770 is further configured to be in communication with the power management controller 1760 and to disconnect the output power supply rail 1755 from the input power supply 1750 , such as through control of a switch 1780 , in response to a determination that the power quality is below a threshold.
- the power management controller 1760 and the voltage or current detector 1740 work together with the at least one non-volatile logic controller 1730 and voltage regulator 1770 to manage the data backup and restoration processes independent of the primary computing path.
- the power management controller 1760 is configured to send a signal to effect stoppage of clocks for the processing device 1700 in response to the determination that the power quality is below the threshold.
- the voltage regulator 1770 can then send a disconnect signal to the power management controller 1760 in response to disconnecting the output power supply rail 1755 from the input power supply 1750 .
- In response to receiving the disconnect signal, the power management controller 1760 sends a backup signal to the at least one non-volatile logic controller 1730.
- The voltage regulator 1770 can be configured to detect that the power quality has risen above the threshold and, in response, to send a good power signal to the power management controller 1760.
- the power management controller 1760 is configured to send a signal to provide power to the plurality of non-volatile logic element arrays 1710 and the at least one non-volatile logic controller 1730 to facilitate restoration of the machine state.
- the power management controller 1760 is configured to determine that power up is complete and, in response, send a signal to effect release of clocks for the processing device 1700 wherein the processing device 1700 resumes operation from the machine state prior to the determination that the power quality was below the threshold.
- a charge storage element 1790 is configured to provide temporary power to the processing device 1700 sufficient to power it long enough to store the machine state in the plurality of non-volatile logic element arrays 1710 after the output power supply rail 1755 is disconnected from the input power supply 1750 .
- the charge storage element 1790 may be at least one dedicated on-die (or off-die) capacitor designed to store such emergency power.
- the charge storage element 1790 may be circuitry in which naturally occurring parasitic charge builds up in the die where the dissipation of the charge from the circuitry to ground provides sufficient power to complete a backup operation.
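- The backup/restore handshake described above can be summarized as a simplified behavioral model in Python; the class and method names are illustrative assumptions, while the sequencing follows the description.

```python
# Simplified behavioral model of the power-event handshake described above.
# Class and method names are illustrative assumptions; the sequencing follows
# the description, and the threshold value is arbitrary.

class NVLController:
    def backup(self):  print("machine state -> NVL arrays")
    def restore(self): print("NVL arrays -> volatile flops")

class Regulator:
    def disconnect_rail(self): print("output rail disconnected")
    def connect_rail(self):    print("output rail reconnected")

class PowerManager:
    THRESHOLD_V = 1.2                  # assumed power-quality threshold

    def __init__(self):
        self.nvl, self.reg = NVLController(), Regulator()

    def on_power_sample(self, supply_voltage):
        if supply_voltage < self.THRESHOLD_V:
            print("clocks stopped")    # freeze state before backup
            self.reg.disconnect_rail() # charge storage element takes over
            self.nvl.backup()          # backup runs on the stored charge
        else:
            self.reg.connect_rail()
            self.nvl.restore()
            print("clocks released")   # resume prior machine state

pm = PowerManager()
pm.on_power_sample(0.9)                # brown-out: back up the machine state
pm.on_power_sample(1.5)                # good power: restore and resume
```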
- a version of the processing or computing device described above can be configured to handle two or more operating threads or virtual machines.
- the at least one non-volatile logic controller is configured to store first program data from a first program executed by the computing device apparatus in a first set of non-volatile logic element arrays of the plurality of non-volatile logic element arrays.
- the at least one non-volatile logic controller is further configured to store second program data from a second program executed by the computing device apparatus in a second set of non-volatile logic element arrays of the plurality of non-volatile logic element arrays.
- the first program and the second program can correspond to distinct executing threads or virtual machines for the computing device apparatus, and the storage can be completed in response to receiving stimulus regarding an interrupt for the computing device apparatus or in response to a power supply quality problem for the computing device apparatus.
- the at least one non-volatile logic controller is further configured to restore the first program data or the second program data from the plurality of non-volatile logic element arrays to the plurality of volatile storage elements in response to receiving a stimulus regarding whether the first program or the second program is to be executed by the computing device apparatus.
- The stimulus described above could be an actual instruction that triggers the context switch, an interrupt signal, an event from an internal timer, an event coming from outside of the chip, or the like.
- FIG. 18 An example arrangement used to effect the storage and restoration of the different processing threads or virtual machines is illustrated in FIG. 18 , which represents a modification of the example systems of FIGS. 1 and 2 .
- a given cloud 1805 of volatile storage elements 230 and 237 includes a plurality 1810 of NVL arrays 1812 and 1814 associated with the volatile storage elements 230 and 237 .
- a multiplexer 212 is connected to variably connect individual ones of the volatile storage elements 230 and 237 to one or more corresponding individual ones of the non-volatile logic element arrays 1812 and 1814 .
- the at least one non-volatile logic controller 1806 is further configured to store the first program data or the second program data to the plurality of non-volatile logic element arrays 1812 and 1814 by controlling the multiplexer 212 to connect individual ones of the plurality of volatile storage elements 230 and 237 to either the first set 1812 of non-volatile logic element arrays or the second set 1814 of non-volatile logic element arrays based on whether the first program or the second program is executing in the computing device apparatus.
- a second multiplexer 1822 is connected to variably connect outputs of individual ones of the non-volatile logic element arrays 1812 and 1814 to inputs of one or more corresponding individual ones of the volatile storage elements 230 and 237 .
- the at least one non-volatile logic controller 1806 is further configured to restore the first program data or the second program data to the plurality of volatile storage elements 230 and 237 by controlling the multiplexer 1822 to connect inputs of individual ones of the plurality of volatile storage elements 230 and 237 to outputs of either the first set 1812 of non-volatile logic element arrays or the second set 1814 of non-volatile logic element arrays based on whether the first program or the second program is to be executed in the computing device apparatus.
- the NVL arrays receive signals from the associated NVL controller during both read and write, whereas the first multiplexer 212 receives signals during a write to NVL array process and the second multiplexer 1822 receives signals during a read from NVL arrays process.
- FIG. 19 is a flow chart illustrating operation of a processing device operating two or more processing threads as described above.
- the method includes operating 1902 a processing device having at least a first processing thread and a second processing thread using a plurality of volatile storage elements.
- First program data stored in the plurality of volatile storage elements during execution of the first processing thread is stored 1904 in a first set of non-volatile logic element arrays of a plurality of non-volatile logic element arrays.
- second program data stored in the plurality of volatile storage elements during execution of the second processing thread is stored 1906 in a second set of non-volatile logic element arrays of the plurality of non-volatile logic element arrays.
- the storage in the NVL arrays can be done in response to a program based or power supply quality problem based interrupt, and the choice of which set of data to backup in the NVL arrays can be made based on the type of interrupt received.
- the method can include controlling a multiplexer to connect individual ones of the plurality of volatile storage elements to either the first set of non-volatile logic element arrays or the second set of non-volatile logic element arrays based on whether the first processing thread or the second processing thread is executing in the processing device.
- the method includes restoring 1908 the first program data or the second program data from the plurality of non-volatile logic element arrays to the plurality of volatile storage elements in response to receiving a stimulus regarding whether the first processing thread or the second processing thread is to be executed.
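- A behavioral Python sketch of the per-thread save/restore selection follows; the dictionary stands in for the hardware write and read multiplexers (212 and 1822), and the thread names and flop contents are hypothetical.

```python
# Behavioral sketch of per-thread context save/restore using dedicated sets of
# NVL arrays. The dictionary stands in for the write/read multiplexers (212 and
# 1822); thread names and flop contents are hypothetical.

nvl_sets = {"thread_A": None, "thread_B": None}  # one NVL array set per thread

def save_context(active_thread, volatile_flops):
    # The write mux routes flop outputs to the set owned by the active thread.
    nvl_sets[active_thread] = list(volatile_flops)

def restore_context(next_thread):
    # The read mux routes the selected set's outputs back to the flop inputs.
    return list(nvl_sets[next_thread])

flops = [1, 0, 1, 1]                   # machine state while thread A runs
save_context("thread_A", flops)        # e.g. on an interrupt for thread B
flops = [0, 0, 0, 0]                   # thread B now executes
save_context("thread_B", flops)
flops = restore_context("thread_A")    # later stimulus resumes thread A

print(flops)                           # [1, 0, 1, 1]
```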
- any number of distinct executing threads or virtual machines can be supported (limited only by the die area needed for the required NVL arrays).
- Switching to a different code stream based on the nature of the interrupt that needs to be serviced is simply a matter of saving the current machine context (program counter, registers, stack pointer, and the like) to the NVL mini-arrays dedicated to that operating thread and recovering the desired operating context from another set of NVL-arrays.
- Switching between two operating contexts is controlled in hardware by using muxes on the NVL mini-array read and write data ports and control inputs to select the desired set of mini-arrays for the required operation.
- the multiple machine contexts are saved in NVL mini-arrays and are thus not sensitive to interruptions in the power supply. Machine execution can continue uninterrupted across supply disruptions, independent of the operating context currently being executed when the power is lost.
- time and power savings in switching between operating threads or machine states can be realized.
- In a conventional processor, the existing machine context must be saved before operations are switched to another machine context. This is typically done by moving the machine context in chunks equal in size to the normal machine data path width (8 bit, 16 bit, 32 bit, 64 bit, etc.). Because the entire data bandwidth to memory in a typical machine is limited by the size of the machine's data path, it takes more than one machine clock cycle to store the machine context. For example, if a context must save a 32 bit program counter, a 32 bit stack pointer, a 64 entry × 32 bit register file, and a 32 entry × 32 bit register file, then the total machine context is 98 thirty-two bit machine "words".
- a full context save would take 98 clock cycles assuming the memory can accept one 32 bit word per clock cycle.
- NVL arrays arranged as described herein have parallel access to all FF's. In an example with 8 entries per NVL array and all NVL arrays operating in parallel, it would only take 8 clock cycles to store 500K FF's worth of machine state.
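- The cycle-count comparison in the two preceding paragraphs can be checked directly; the Python sketch below uses the figures quoted above.

```python
# Context-switch cost comparison using the figures quoted above.

# Conventional save through a 32-bit data path, one word per cycle:
context_words = 1 + 1 + 64 + 32   # PC + SP + 64-entry RF + 32-entry RF
print(f"conventional context save: {context_words} cycles")  # 98

# NVL save: all mini-arrays store one row per cycle in parallel, so the cycle
# count equals the rows per array, independent of the total flop count.
rows_per_array = 8
flip_flops = 500_000
print(f"NVL save of {flip_flops} flops: {rows_per_array} cycles")
```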
- FIG. 20 is a block diagram of another SoC 2000 that includes NVL arrays, as described above.
- SoC 2000 features a Cortex-M0 processor core 2002 , universal asynchronous receiver/transmitter (UART) 2004 and SPI (serial peripheral interface) 2006 interfaces, and 10 KB ROM 2010 , 8 KB SRAM 2012 , 64 KB (Ferroelectric RAM) FRAM 2014 memory blocks, characteristic of a commercial ultra low power (ULP) microcontroller.
- the 130 nm FRAM process based SoC uses a single 1.5V supply, an 8 MHz system clock and a 125 MHz clock for NVL operation.
- The SoC consumes 75 uA/MHz and 170 uA/MHz while running code from SRAM and FRAM, respectively.
- SoC 2000 provides test capability for each NVL bit, as described in more detail above, and in-situ read signal margin of 550 mV.
- SoC 2000 has 2537 FFs and latches served by 10 NVL arrays.
- a central NVL controller controls all the arrays and their communication with FFs, as described in more detail above.
- the distributed NVL mini-array system architecture helps amortize test feature costs, achieving a SoC area overhead of only 3.6% with exceptionally low system level sleep/wakeup energy cost of 2.2 pJ/0.66 pJ per bit.
- a SoC may contain one or more modules which each include custom designed functional circuits combined with pre-designed functional circuits provided by a design library.
- For example, a nonvolatile FeCap bitcell from an NVL array may be coupled to a flip-flop or latch that does not include a low power retention latch.
- In such an embodiment, the system would transition between a full power state (or an otherwise reduced power state based on reduced voltage or clock rate) and a totally off power state, for example.
- the state of the flipflops and latches would be saved in distributed NVL arrays.
- the flipflops When power is restored, the flipflops would be initialized via an input provided by the associated NVL array bitcell.
- the techniques described in this disclosure may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the software may be executed in one or more processors, such as a microprocessor, application specific integrated circuit (ASIC), field programmable gate array (FPGA), or digital signal processor (DSP).
- the software that executes the techniques may be initially stored in a computer-readable medium such as compact disc (CD), a diskette, a tape, a file, memory, or any other computer readable storage device and loaded and executed in the processor.
- the software may also be sold in a computer program product, which includes the computer-readable medium and packaging materials for the computer-readable medium.
- the software instructions may be distributed via removable computer readable media (e.g., floppy disk, optical disk, flash memory, USB key), via a transmission path from computer readable media on another digital system, etc.
Description
- This application claims the benefit of U.S. Provisional application No. 61/698,906, filed Sep. 10, 2012, which is incorporated by reference in its entirety herein.
- This invention generally relates to nonvolatile memory cells and their use in a system, and in particular, in combination with logic arrays to provide nonvolatile logic modules.
- Many portable electronic devices such as cellular phones, digital cameras/camcorders, personal digital assistants, laptop computers and video games operate on batteries. During periods of inactivity the device may not perform processing operations and may be placed in a power-down or standby power mode to conserve power. Power provided to a portion of the logic within the electronic device may be turned off in a low power standby power mode. However, presence of leakage current during the standby power mode represents a challenge for designing portable, battery operated devices. Data retention circuits such as flip-flops and/or latches within the device may be used to store state information for later use prior to the device entering the standby power mode. The data retention latch, which may also be referred to as a shadow latch or a balloon latch, is typically powered by a separate ‘always on’ power supply.
- A known technique for reducing leakage current during periods of inactivity utilizes multi-threshold CMOS (MTCMOS) technology to implement the shadow latch. In this approach, the shadow latch utilizes thick gate oxide transistors and/or high threshold voltage (Vt) transistors to reduce the leakage current in standby power mode. The shadow latch is typically detached from the rest of the circuit during normal operation (e.g., during an active power mode) to maintain system performance. To retain data in a ‘master-slave’ flip-flop topology, a third latch, e.g., the shadow latch, may be added to the master latch and the slave latch for the data retention. In other cases, the slave latch may be configured to operate as the retention latch during low power operation. However, some power is still required to retain the saved state. For example, see U.S. Pat. No. 7,639,056, “Ultra Low Area Overhead Retention Flip-Flop for Power-Down Applications”, which is incorporated by reference herein.
- System on Chip (SoC) is a concept that has been around for a long time; the basic approach is to integrate more and more functionality into a given device. This integration can take the form of either hardware or solution software. Performance gains are traditionally achieved by increased clock rates and more advanced process nodes. Many SoC designs pair a microprocessor core, or multiple cores, with various peripheral devices and memory circuits.
- Energy harvesting, also known as power harvesting or energy scavenging, is the process by which energy is derived from external sources, captured, and stored for small, wireless autonomous devices, such as those used in wearable electronics and wireless sensor networks. Harvested energy may be derived from various sources, such as solar power, thermal energy, wind energy, salinity gradients, and kinetic energy. However, typical energy harvesters provide only a very small amount of power for low-energy electronics. The energy source for energy harvesters is present as ambient background and is available for use. For example, temperature gradients exist from the operation of a combustion engine, and in urban areas there is a large amount of electromagnetic energy in the environment because of radio and television broadcasting.
- FIG. 1 is a functional block diagram of a portion of an example system on chip (SoC) as configured in accordance with various embodiments of the invention;
- FIG. 2 is a more detailed block diagram of one flip-flop cloud used in the SoC of FIG. 1;
- FIG. 3 is a plot illustrating polarization hysteresis exhibited by a ferroelectric capacitor;
- FIGS. 4-7 are schematic and timing diagrams illustrating an example ferroelectric nonvolatile bit cell as configured in accordance with various embodiments of the invention;
- FIGS. 8-9 are schematic and timing diagrams illustrating another example ferroelectric nonvolatile bit cell as configured in accordance with various embodiments of the invention;
- FIG. 10 is a block diagram illustrating an example NVL array used in the SoC of FIG. 1;
- FIGS. 11A and 11B are more detailed schematics of input/output circuits used in the NVL array of FIG. 10;
- FIG. 12A is a timing diagram illustrating an example offset voltage test during a read cycle as configured in accordance with various embodiments of the invention;
- FIG. 12B illustrates a histogram generated during an example sweep of offset voltage as configured in accordance with various embodiments of the invention;
- FIG. 13 is a schematic illustrating parity generation in the NVL array of FIG. 10;
- FIG. 14 is a block diagram illustrating example power domains within an NVL array as configured in accordance with various embodiments of the invention;
- FIG. 15 is a schematic of an example level converter for use in the NVL array as configured in accordance with various embodiments of the invention;
- FIG. 16 is a timing diagram illustrating an example operation of level shifting using a sense amp within a ferroelectric bitcell as configured in accordance with various embodiments of the invention;
- FIG. 17 is a block diagram of an example power detection arrangement as configured in accordance with various embodiments of the invention;
- FIG. 18 is a functional block diagram of a portion of an example system on chip (SoC) and flip flop design with more than one NVL array per flip flop cloud as configured in accordance with various embodiments of the invention;
- FIG. 19 is a flow chart illustrating an example operation of a processing device operating two or more processing threads as configured in accordance with various embodiments of the invention; and
- FIG. 20 is a block diagram of another example SoC that includes NVL arrays as configured in accordance with various embodiments of the invention.
- Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions and/or relative positioning of some of the elements in the figures may be exaggerated relative to other elements to help improve understanding of various embodiments of the present invention. Also, common but well-understood elements that are useful or necessary in a commercially feasible embodiment are often not depicted in order to facilitate a less obstructed view of these various embodiments. It will further be appreciated that certain actions and/or steps may be described or depicted in a particular order of occurrence while those skilled in the art will understand that such specificity with respect to sequence is not actually required. It will also be understood that the terms and expressions used herein have the ordinary technical meaning as is accorded to such terms and expressions by persons skilled in the technical field as set forth above except where different specific meanings have otherwise been set forth herein.
- Specific embodiments of the invention will now be described in detail with reference to the accompanying figures. Like elements in the various figures are denoted by like reference numerals for consistency. In the following detailed description, numerous specific details are set forth to provide a more thorough understanding. However, it will be apparent to one of ordinary skill in the art that aspects of the invention may be practiced without these specific details. In other instances, well-known features have not been described in detail to avoid unnecessarily complicating the description.
- While prior art systems made use of retention latches to retain the state of flip-flops in logic modules during low power operation, some power is still required to retain state. In contrast, nonvolatile elements can retain the state of flip flops in logic modules while power is completely removed. Such logic elements will be referred to herein as Non-Volatile Logic (NVL). A micro-control unit (MCU) implemented with NVL within an SoC (system on a chip) may have the ability to stop, power down, and power up with no loss in functionality. A system reset/reboot is not required to resume operation after power has been completely removed. This capability is ideal for emerging energy harvesting applications, such as Near Field Communication (NFC), radio frequency identification (RFID) applications, and embedded control and monitoring systems, for example, where the time and power cost of the reset/reboot process can consume much of the available energy, leaving little or no energy for useful computation, sensing, or control functions. Though this description discusses an SoC containing a programmable MCU for sequencing the SoC state machines, one of ordinary skill in the art can see that NVL can be applied to state machines hard-coded into ordinary logic gates, or to ROM, PLA, or PLD based control systems.
- In one approach, an SoC includes one or more blocks of nonvolatile logic. For example, a non-volatile logic (NVL) based SoC may back up its working state (all flip-flops) upon receiving a power interrupt, have zero leakage in sleep mode, and need less than 400 ns to restore the system state upon power-up.
- Without NVL, a chip would either have to keep all flip-flops powered in at least a low power retention state, which requires a continual power source even in standby mode, or waste energy and time rebooting after power-up. For energy harvesting applications, NVL is useful because no constant power source is required to preserve the state of flip-flops (FFs), and even when the intermittent power source is available, boot-up code alone may consume all the harvested energy. For handheld devices with limited cooling and battery capacity, zero-leakage ICs (integrated circuits) with “instant-on” capability are ideal.
- Ferroelectric random access memory (FRAM) is a non-volatile memory technology with behavior similar to DRAM (dynamic random access memory). Each individual bit can be accessed, but unlike EEPROM (electrically erasable programmable read only memory) or Flash, FRAM does not require a special sequence to write data, nor does it require a charge pump to achieve the higher programming voltages required. Each ferroelectric memory cell contains one or more ferroelectric capacitors (FeCap). Individual ferroelectric capacitors may be used as non-volatile elements in the NVL circuits described herein.
- FIG. 1 is a functional block diagram illustrating a portion of a computing device, in this case an example system on chip (SoC) 100 providing non-volatile logic based computing features. While the term SoC is used herein to refer to an integrated circuit that contains one or more system elements, the teachings of this disclosure can be applied to various types of integrated circuits that contain functional logic modules such as latches, integrated clock gating cells, and flip-flop circuit elements (FF) that provide non-volatile state retention. Embedding non-volatile storage elements outside the controlled environment of a large array presents reliability and fabrication challenges. An NVL array built from NVL bitcells is typically designed for maximum read signal margin and in-situ margin testability, as is needed for any NV-memory technology. However, adding testability features to individual NVL FFs may be prohibitive in terms of area overhead.
- To amortize the test feature costs and improve manufacturability, and with reference to the example of
FIGS. 1 and 2, a plurality of non-volatile logic element arrays or NVL arrays 110 are disposed with a plurality of volatile storage elements 220. At least one non-volatile logic controller 106 is configured to control the plurality of NVL arrays 110 to store a machine state represented by the plurality of volatile storage elements 220 and to read out a stored machine state from the plurality of NVL arrays 110 to the plurality of volatile storage elements 220. For instance, the at least one non-volatile logic controller 106 is configured to generate a control sequence for saving the machine state to or retrieving the machine state from the plurality of NVL arrays 110. A multiplexer 212 is connected to variably connect individual ones of the volatile storage elements 220 to one or more corresponding individual ones of the NVL arrays 110.
- In the illustrated example, the computing device apparatus is arranged on a single chip, here an
SoC 100 implemented using 256b mini-arrays 110, which will be referred to herein as NVL arrays, of FeCap (ferroelectric capacitor) based bitcells dispersed throughout the logic cloud to save the state of the various flip flops 120 when power is removed. Each cloud 102-104 of FFs 120 includes an associated NVL array 110. Such dispersal results in individual ones of the NVL arrays 110 being arranged physically closely to and connected to receive data from corresponding individual ones of the volatile storage elements 220. A central NVL controller 106 controls all the arrays and their communication with FFs 120. While three FF clouds 102-104 are illustrated here, SoC 100 may have additional, or fewer, FF clouds all controlled by NVL controller 106. The SoC 100 can be partitioned into more than one NVL domain in which there is a dedicated NVL controller for managing the NVL arrays 110 and FFs 120 in each of the separate NVL domains. The existing NVL array embodiment uses 256 bit mini-arrays, but the arrays may have a greater or lesser number of bits as needed.
- SoC 100 is implemented using modified retention flip flops 120 including circuitry configured to enable write back of data from individual ones of the plurality of non-volatile logic element arrays to the individual ones of the plurality of flip flop circuits. There are various known ways to implement a retention flip flop. For example, a data input may be latched by a first latch. A second latch coupled to the first latch may receive the data input for retention while the first latch is inoperative in a standby power mode. The first latch receives power from a first power line that is switched off during the standby power mode. The second latch receives power from a second power line that remains on during the standby mode. A controller receives a clock input and a retention signal and provides a clock output to the first latch and the second latch. A change in the retention signal is indicative of a transition to the standby power mode. The controller continues to hold the clock output at a predefined voltage level, and the second latch continues to receive power from the second power line in the standby power mode, thereby retaining the data input. Such a retention latch is described in more detail in U.S. Pat. No. 7,639,056, “Ultra Low Area Overhead Retention Flip-Flop for Power-Down Applications”. -
FIG. 2 illustrates an example retention flop architecture that does not require that the clock be held in a particular state during retention. In such a “clock free” NVL flop design, the clock value is a “don't care” during retention. - In
SoC 100, modifiedretention FFs 120 include simple input and control modifications to allow the state of each FF to be saved in an associated FeCap bit cell inNVL array 110, for example, when the system is being transitioned to a power off state. When the system is restored, then the saved state is transferred fromNVL array 110 back to eachFF 120. Power savings and data integrity can be improved through implementation of particular power configurations. In one such approach, individual retention flip flop circuits include a primary logic circuit portion (master stage or latch) powered by a first power domain (such as VDDL in the below described example) and a slave stage circuit portion powered by a second power domain (such as VDDR in the below described example). In this approach, the first power domain is configured to be powered down and the second power domain is active during write back of data from the plurality of NVL arrays to the plurality of volatile storage elements. The plurality of non-volatile logic elements are configured to be powered by a third power domain (such as VDDN in the below described example) that is configured to be powered down during regular operation of the computing device apparatus. - With this configuration, a plurality of power domains can be implemented that are independently powered up or powered down in a manner that can be specifically designed to fit a given implementation. Thus, in another aspect, the computing apparatus includes a first power domain configured to supply power to switched logic elements of the computing device apparatus and a second power domain configured to supply power to logic elements configured to control signals for storing data to or reading data from the plurality of non-volatile logic element arrays. Where the plurality of volatile storage elements comprise retention flip flops, the second power domain is configured to provide power to a slave stage of individual ones of the retention flip flops. A third power domain supplies power for the plurality of non-volatile logic element arrays. In addition to the power domains, NVL arrays can be defined as domains relating to particular functions. For example, a first set of at least one of the plurality of non-volatile logic element arrays can be associated with a first function of the computing device apparatus and a second set of at least one of the plurality of non-volatile logic element arrays can be associated with a second function of the computing device apparatus. Operation of the first set of at least one of the plurality of non-volatile logic element arrays is independent of operation of the second set of at least one of the plurality of non-volatile logic element arrays. So configured, flexibility in the control and handling of the separate NVL array domains or sets allows more granulated control of the computing device's overall function.
- This more specific control can be applied to the power domains as well. In one example, the first power domain is divided into a first portion configured to supply power to switched logic elements associated with the first function and a second portion configured to supply power to switched logic elements associated with the second function. The first portion and the second portion of the first power domain are individually configured to be powered up or down independently of other portions of the first power domain. Similarly, the third power domain can be divided into a first portion configured to supply power to non-volatile logic element arrays associated with the first function and a second portion configured to supply power to non-volatile logic element arrays associated with the second function. As with the first power domain, the first portion and the second portion of the third power domain are individually configured to be powered up or down independently of other portions of the third power domain.
- So configured, if individual functions are not used for a given device, flip flops and NVL arrays associated with the unused functions can be respectively powered down and operated separately from the other flip flops and NVL arrays. Such flexibility in power and operation management allows one to tailor the functionality of a computing device with respect to power usage and function. This can be further illustrated in the following example design having a CPU, three SPI interfaces, three UART interfaces, three I2C interfaces, and only one logic power domain (VDDL). The logic power domain is distinguished from the retention or NVL power domains (VDDR and VDDN respectively), although these teachings can be applied to those power domains as well. Although this example device has only one logic power domain, a given application for the device might only use one of the three SPI units, one of the three UARTs and one of the three I2C peripherals. To allow applications to optimize the NVL application wake-up and sleep times and energy costs, the VDDL power domain can be partitioned into 10 separate NVL domains (one CPU, three SPI, three UART, three I2C totaling 10 NVL domains), each of which can be enabled/disabled independently of the others. So, the customer could enable NVL capability for the CPU, one SPI, one UART, and one I2C for their specific application while disabling the others. In addition, this partitioning also allows flexibility in time as well as energy and the different NVL domains can save and restore state at different points in time.
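A minimal sketch of how such per-function NVL enables might look in software follows. The register name, address handling, and bit assignments are invented for illustration; only the 10-domain breakdown (one CPU, three SPI, three UART, three I2C) comes from the example above.

```c
/* Sketch of per-function NVL domain enables for the 10-domain example:
 * one CPU domain plus three each of SPI, UART, and I2C. The control
 * register and bit layout are hypothetical, not from the patent. */
#include <stdint.h>

#define NVL_DOM_CPU      (1u << 0)
#define NVL_DOM_SPI(n)   (1u << (1 + (n)))  /* n = 0..2 */
#define NVL_DOM_UART(n)  (1u << (4 + (n)))  /* n = 0..2 */
#define NVL_DOM_I2C(n)   (1u << (7 + (n)))  /* n = 0..2 */

/* Hypothetical memory-mapped register gating backup/restore per domain. */
static volatile uint32_t NVL_DOMAIN_EN;

void nvl_configure_for_application(void)
{
    /* Enable NVL for the CPU and the one SPI, UART, and I2C unit the
     * application uses; the other seven domains are never saved or
     * restored, shortening wake-up/sleep times and saving energy. */
    NVL_DOMAIN_EN = NVL_DOM_CPU | NVL_DOM_SPI(0) | NVL_DOM_UART(0) | NVL_DOM_I2C(0);
}
```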
- To add further flexibility, NVL domains can overlap with power domains. Referring to the above example, four power domains can be defined: one each for CPU, SPI, UART, and I2C (each peripheral power domain has three functional units) while defining three NVL domains within each peripheral domain and one for the CPU (total of 10 NVL domains again). In this case, individual power domains turn on or off in addition to controlling the NVL domains inside each power domain for added flexibility in power savings and wakeup/sleep timing.
- Moreover, individual ones of the first power domain, the second power domain, and the third power domain are configured to be powered down or up independently of other ones of the first power domain, the second power domain, and the third power domain. For instance, integral power gates can be configured to be controlled to power down the individual ones of the first power domain, the second power domain, and the third power domain. As described in Table 1 below, the third power domain is configured to be powered down during regular operation of the computing device apparatus, and the second power domain is configured to be powered down during a write back of data from the plurality of non-volatile logic element arrays to the plurality of volatile storage elements. A fourth power domain can be configured to supply power to real time clocks and wake-up interrupt logic.
- Such approaches can be further understood in reference to the illustrated
example SoC 100 where NVL arrays 110 and controller 106 are operated on an NVL power domain referred to as VDDN and are switched off during regular operation. All logic, memory blocks 107 such as ROM (read only memory) and SRAM (static random access memory), and the master stages of FFs are on a logic power domain referred to as VDDL. FRAM (ferroelectric random access memory) arrays are directly connected to a dedicated global supply rail (VDDZ) maintained at a higher fixed voltage needed for FRAM (i.e., VDDL <= VDDZ, where VDDZ is a fixed supply and VDDL can be varied as long as VDDL remains at a lower potential than VDDZ). Note that FRAM arrays as shown in 103 typically contain integrated power switches that allow the FRAM arrays to be powered down as needed, though it can easily be seen that FRAM arrays without internal power switches can be utilized in conjunction with power switches that are external to the FRAM array. The slave stages of retention FFs are on a retention power domain referred to as the VDDR domain to enable regular retention in a stand-by mode of operation. Table 1 summarizes power domain operation during normal operation, system backup to NVL arrays, sleep mode, system restoration from NVL arrays, and back to normal operation. Table 1 also specifies domains used during a standby idle mode that may be initiated under control of system software in order to enter a reduced power state using the volatile retention function of the retention flip flops. A set of switches indicated at 108 is used to control the various power domains. There may be multiple switches 108 distributed throughout SoC 100 and controlled by software executed by a processor on SoC 100 and/or by a hardware controller (not shown) within SoC 100. There may be additional domains in addition to the three illustrated here, as will be described later.

TABLE 1 - System power modes

  SoC Mode                      Trigger           Trigger source    VDDL   VDDR   VDDN
  Regular operation             na                na                ON     ON     OFF
  System backup to NVL          Power bad         external          ON     ON     ON
  Sleep mode                    Backup done       NVL controller    OFF    OFF    OFF
  System restoration from NVL   Power good        external          OFF    ON     ON
  Regular operation             Restore done      NVL controller    ON     ON     OFF
  Standby retention idle mode   System software   —                 OFF    ON     OFF

- State info could be saved in a large centralized FRAM array, but that would require more time to enter sleep mode, a longer wakeup time, excessive routing, and power costs caused by the lack of parallel access to system FFs.
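For orientation, the Table 1 sequencing can be summarized as a simple mode-to-rails mapping, as in the sketch below. The enum, struct, and function names are illustrative assumptions; only the per-mode rail on/off values and trigger descriptions are transcribed from Table 1.

```c
/* Sketch of the Table 1 power-mode sequencing. The names here are
 * hypothetical; only the rail on/off values per mode come from Table 1. */
#include <stdbool.h>

enum soc_mode {
    REGULAR_OPERATION,     /* entered on "restore done" from the NVL controller */
    SYSTEM_BACKUP_TO_NVL,  /* triggered by an external "power bad" signal */
    SLEEP_MODE,            /* entered on "backup done" from the NVL controller */
    SYSTEM_RESTORATION,    /* triggered by an external "power good" signal */
    STANDBY_RETENTION_IDLE /* entered under control of system software */
};

struct rails { bool vddl; bool vddr; bool vddn; };

/* Returns which rails are powered in each mode, per Table 1. */
static struct rails rails_for_mode(enum soc_mode m)
{
    switch (m) {
    case REGULAR_OPERATION:      return (struct rails){ true,  true,  false };
    case SYSTEM_BACKUP_TO_NVL:   return (struct rails){ true,  true,  true  };
    case SLEEP_MODE:             return (struct rails){ false, false, false };
    case SYSTEM_RESTORATION:     return (struct rails){ false, true,  true  };
    case STANDBY_RETENTION_IDLE: return (struct rails){ false, true,  false };
    }
    return (struct rails){ false, false, false };
}
```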
- FIG. 2 is a more detailed block diagram of one FF cloud 102 used in SoC 100. In this embodiment, each FF cloud includes up to 248 flip flops and each NVL array is organized as an 8×32 bit array, but one bit is used for parity in this embodiment. However, in other embodiments, the number of flip flops and the organization of the NVL array may have a different configuration, such as 4×m, 16×m, etc., where m is chosen to match the size of the FF cloud. In some embodiments, all of the NVL arrays in the various clouds may be the same size, while in other approaches there may be different size NVL arrays in the same SoC.
- Block 220 is a more detailed schematic of each retention FF 120. Several of the signals have an inverted version indicated by the suffix “B” (referring to “bar” or /), such as RET and RETB, CLK and CLKB, etc. Each retention FF includes a master latch 221 and a slave latch 222. Slave latch 222 is formed by inverter 223 and inverter 224. Inverter 224 includes a set of transistors controlled by the retention signal (RET, RETB) that are used to retain the FF state during low power sleep periods, during which power domain VDDR remains on while power domain VDDL is turned off, as described above and in Table 1.
- NVL array 110 is logically connected with the 248 FFs it serves in cloud 102. Generally speaking, to enable data transfer from an NVL array to the FFs, individual FFs include circuitry configured to enable write back of data from individual ones of the plurality of NVL arrays 110. In the illustrated example, two additional ports are provided on the slave latch 222 of each FF as shown in block 220. A data input port (gate 225) is configured to insert data ND from one of the NVL arrays 110 to an associated volatile storage element 220. The data input port is configured to insert the data ND by allowing passage of a stored data related signal from the one of the NVL arrays to a slave stage of the associated flip flop circuit in response to receiving an update signal NU from the at least one non-volatile logic controller 106 on a data input enable port to trigger the data input port. Inverter 223 is configured to be disabled in response to receiving the inverted NVL update signal NUZ to avoid an electrical conflict between the tri-state inverter 223 and the NVL data port input tri-state inverter 225.
- More specifically, in the illustrated example, the inv-inv feedback pair (223 and 224) form the latch itself. These inverters make a very stable configuration for holding the data state and will fight any attempts to change the latch state unless at least one of the inverters is disabled to prevent electrical conflict when trying to overwrite the current state with the next state via one of the data ports. The illustrated
NVL FF 220 includes two data ports that access the slave latch 222, as compared to one data port for a regular flop. One port transfers data from the master stage 221 to the slave stage 222 via the CMOS pass gate controlled by the clock. When using this port to update the slave state, the inverter 224 driving onto the output node of the pass gate controlled by CLK is disabled to avoid an electrical conflict, while the inverter 223 is enabled to transfer the next state onto the opposite side of the latch so that both sides of the latch have the next state in preparation for holding the data when the clock goes low (for a posedge FF).
inverter 223 is disabled when the ND data port is activated by NU transitioning to the active high state to avoid an electrical conflict on the ND port. Thesecond inverter 224 is enabled to transfer the next state onto the opposite side of the latch so that both sides of the latch have the next state to be latched when NU goes low. In this example, the NU port does not in any way impact the other data port controlled by the clock. On a dual port FF, having both ports active at the same time is an illegal control condition, and the resulting port conflict means the resulting next state will be indeterminate. To avoid a port conflict, the system holds the clock in the inactive state if the slave state is updated while in functional mode. In retention mode, the RET signal along with supporting circuits inside the FF are used to prevent electrical conflicts independent of the state of CLK while in retention mode (see the inverter controlled by RETB in the master stage). - As illustrated these additional elements are disposed in the
slave stage 222 of the associated FF. The additional transistors, however, are not on the critical path of the FF and have only 1.8% and 6.9% impact on normal FF performance and power (simulation data) in this particular implementation. When data from the NVL array is valid on the ND (NVL-Data) port, the NU (NVL-Update) control input is pulsed high for a cycle to write to the FF. The thirty-one bit data output of an NVL array fans out to ND ports of eight thirty-one bit FF groups. - To save flip-flop state, a multiplexer is configured to pass states from a plurality of the individual ones of the plurality of
volatile storage elements 220 for essentially simultaneous storage in an individual one of the plurality ofNVL arrays 110. For instance, the multiplexer may be configured to connect to N groups of M volatile storage elements of the plurality of volatile storage elements per group and to an N by M size NVL array of the plurality of NVL arrays. In this configuration, the multiplexer connects one of the N groups to the N by M size NVL array to store data from the M volatile storage elements into a row of the N by M size NVL array at one time. In the illustrated example, Q outputs of 248 FFs are connected to the 31b parallel data input ofNVL array 110 through a 31b wide 8-1mux 212. To minimize FF loading, the mux may be broken down into smaller muxes based on the layout of the FF cloud and placed close to the FFs they serve. Again, the NVL controller synchronizes writing to the NVL array, and the select signals MUX SEL <2:0> of 8-1mux 212. - When the FFs are operating in a retention mode, a clock CLK of the computing device is a “don't care” such that it is irrelevant for the volatile storage elements with respect to updating the slave stage state whenever the NU signal is active, whereby the non-volatile logic controller is configured to control and effect storage of data from individual ones of the volatile storage elements into individual ones of the non-volatile storage elements. In other words, the clock CLK control is not needed during NVL data recovery during retention mode, but the clock CLK should be controlled at the system level once the system state is restored, right before the transition between retention mode and functional mode. In another approach, the NVL state can be recovered to the volatile storage elements when the system is in a functional mode. In this situation where the VDDL power is active, the clock CLK is held in the inactive state for the volatile storage elements during the data restoration from the NVL array, whereby the non-volatile logic controller is configured to control and effect transfer of data from individual ones of the non-volatile storage elements into individual ones of the volatile storage elements. For example, a system clock CLK is typically held low for positive edge FF based logic and held high for negative edge FF based logic.
- Generally speaking, to move from regular operation into system backup mode, the first step is to stop the system clock(s) in an inactive state to freeze the machine state to not change while the backup is in progress. The clocks are held in the inactive state until backup is complete. After backup is complete, all power domains are powered down and the state of the clock becomes a don't care in sleep mode by definition.
- When restoring the state from NVL arrays, the FF are placed in a retention state (see Table 2 below) in which the clock continues to be a don't care as long as the RET signal is active (clock can be a don't care by virtue of special transistors added to each retention FF and is controlled by the RET signal). While restoring NVL state, the flops remain in retention mode so clock remains a don't care. Once the NVL state is recovered, the state of the machine logic that controls the state of the system clocks will also be restored to the state they were in at the time of the state backup, which also means that for this example all the controls (including the volatile storage elements or FF's) that placed the system clock into inactive states have now been restored such that the system clocks will remain in the inactive state upon completion of NVL data recovery. Now the RET signal can be deactivated, and the system will sit quiescent with clocks deactivated until the NVL controller signals to the power management controller that the restoration is complete, in response to which the power management controller will enable the clocks again.
- To restore flip-flop state during restoration,
NVL controller 106 reads an NVL row inNVL array 110 and then pulses the NU signal for the appropriate flip-flop group. During system restore, retention signal RET is held high and the slave latch is written from ND with power domain VDDL unpowered; at this point the state of the system clock CLK is a don't care. FF's are placed in the retention state with VDDL=0V and VDDR=VDD in order to suppress excess power consumption related to spurious data switching that occurs as each group of 31 FF's is updated during NVL array read operations. Suitably modified non-retention flops can be used in NVL based SOC's at the expense of higher power consumption during NVL data recovery operations. - System clock CLK should start from low once VDDL comes up and thereafter normal synchronous operation continues with updated information in the FFs. Data transfer between the NVL arrays and their respective FFs can be done in serial or parallel or any combination thereof to tradeoff peak current and backup/restore time. Because a direct access is provided to FFs controlled by at least one non-volatile logic controller that is separate from a central processing unit for the computing device apparatus, intervention from a microcontroller processing unit (CPU) is not required for NVL operations; therefore the implementation is SoC/CPU architecture agnostic. Table 2 summarizes operation of the NVL flip flops.
-
TABLE 2 NVL Flip Flop truth table Clock Retention NVL update mode (CLK) (RET) (NU) Value saved Regular Pulsed 0 0 From D input operation retention X 1 0 Q value NVL system 0 0 0 From Q output backup NVL system X 1 pulsed NVL cell bit data restore (ND) - Because the at least one non-volatile logic controller is configured to variably control data transfer to or reading from the plurality of non-volatile arrays in parallel, sequentially, or in any combination thereof based on input signals, system designers have additional options with respect to tailoring system operation specifications to particular needs. For instance, because no computation can occur on an MCU SOC during the time the system enters a low power system state or to wakeup from a low power state, minimizing the wakeup or go to sleep time is advantageous. On the other hand, non-volatile state retention is power intensive because significant energy is needed to save and restore state to or from non-volatile elements such as ferro-electric capacitors. The power required to save and restore system state can exceed the capacity of the power delivery system and cause problems such as electromigration induced power grid degradation, battery life reduction due to excessive peak current draw, or generation of high levels of noise on the power supply system that can degrade signal integrity on die. Thus, allowing a system designer to be able to balance between these two concerns is desirable.
- In one such approach, the at least one
non-volatile logic controller 106 is configured to receive the input signals through auser interface 125, such as those known to those of skill in the art. In another approach, the at least one non-volatile logic controller is configured to receive the input signals from aseparate computing element 130 that may be executing an application. In one such approach, the separate computing element is configured to execute the application to determine a reading sequence for the plurality of non-volatile arrays based at least in part on a determination of power and computing resource requirements for thecomputing device apparatus 130. So configured, a system user can manipulate the system state store and retrieve procedure to fit a given design. -
- FIG. 3 is a plot illustrating the polarization hysteresis exhibited by a ferroelectric capacitor. The general operation of ferroelectric bit cells is known. When most materials are polarized, the polarization induced, P, is almost exactly proportional to the applied external electric field E; the polarization is thus a linear function, referred to as dielectric polarization. In addition to being nonlinear, ferroelectric materials demonstrate a spontaneous nonzero polarization, as illustrated in FIG. 3, when the applied field E is zero. The distinguishing feature of ferroelectrics is that the spontaneous polarization can be reversed by an applied electric field; the polarization is dependent not only on the current electric field but also on its history, yielding a hysteresis loop. The term “ferroelectric” is used to indicate the analogy to ferromagnetic materials, which have spontaneous magnetization and also exhibit hysteresis loops.
remnant polarization 302, and a “0” may be encoded using the positiveremnant polarization 304, or vice versa. - Ferroelectric random access memories have been implemented in several configurations. A one transistor, one capacitor (1T-1C) storage cell design in an FeRAM array is similar in construction to the storage cell in widely used DRAM in that both cell types include one capacitor and one access transistor. In a DRAM cell capacitor, a linear dielectric is used, whereas in an FeRAM cell capacitor the dielectric structure includes ferroelectric material, typically lead zirconate titanate (PZT). Due to the overhead of accessing a DRAM type array, a 1T-1C cell is less desirable for use in small arrays such as
NVL array 110. - A four capacitor, six transistor (4C-6T) cell is a common type of cell that is easier to use in small arrays. An improved four capacitor cell will now be described.
-
FIG. 4 is a schematic illustrating one embodiment of a ferroelectricnonvolatile bitcell 400 that includes four capacitors and twelve transistors (4C-12T). The four FeCaps are arranged as two pairs in a differential arrangement. FeCaps C1 and C2 are connected in series to formnode Q 404, while FeCaps C1′ and C2′ are connected in series to formnode QB 405, where a data bit is written into node Q and stored in FeCaps C1 and C2 via bit line BL and an inverse of the data bit is written into node QB and stored in FeCaps C1′ and C2′ via inverse bitline BLB.Sense amp 410 is coupled to node Q and to node QB and is configured to sense a difference in voltage appearing on nodes Q, QB when the bitcell is read. The four transistors insense amp 410 are configured as two cross coupled inverters to form a latch.Pass gate 402 is configured to couple node Q to bitline B and passgate 403 is configured to couple node QB to bit line BLB. Eachpass gate - Typically, there will be an array of
bit cells 400. There may then be multiple columns of similar bitcells to form an n row by m column array. For example, inSoC 100, the NVL arrays are 8×32; however, as discussed earlier, different configurations may be implemented. -
FIGS. 5 and 6 are timing diagram illustrating read and write waveforms for reading a data value of logical 0 and writing a data value of logical 0, respectively. Reading and writing to the NVL array is a multi-cycle procedure that may be controlled by the NVL controller and synchronized by the NVL clock. In another embodiment, the waveforms may be sequenced by fixed or programmable delays starting from a trigger signal, for example. During regular operation, a typical 4C-6T bitcell is susceptible to time dependent dielectric breakdown (TDDB) due to a constant DC bias across FeCaps on the side storing a “1”. In a differential bitcell, since an inverted version of the data value is also stored, one side or the other will always be storing a “1”. - To avoid TDDB, plate line PL1, plate line PL2, node Q and node QB are held at a quiescent low value when the cell is not being accessed, as indicated during time periods s0 in
FIGS. 5 , 6. Powerdisconnect transistors MP 411 andMN 412 allowsense amp 410 to be disconnected from power during time periods s0 in response to sense amp enable signals SAEN and SAENB.Clamp transistor MC 406 is coupled to node Q and clamp transistor MC′ 407 is coupled to node QB.Clamp transistors - In this embodiment, Vdd is 1.5 volts and the ground reference plane has a value of 0 volts. A logic high has a value of approximately 1.5 volts, while a logic low has a value of approximately 0 volts. Other embodiments that use logic levels that are different from ground for logic 0 (low) and Vdd for logic 1 (high) would clamp nodes Q, QB to a voltage corresponding to the quiescent plate line voltage so that there is effectively no voltage across the FeCaps when the bitcell is not being accessed.
- In another embodiment, two clamp transistors may be used. Each of these two transistors is used to clamp the voltage across each FeCap to be no greater than one transistor Vt (threshold voltage). Each transistor is used to short out the FeCaps. In this case, for the first transistor, one terminal connects to Q and the other one connects to PL1, while for transistor two, one terminal connects to Q and the other connects to PL2. The transistor can be either NMOS or PMOS, but NMOS is more likely to be used.
- Typically, a bit cell in which the two transistor solution is used does not consume significantly more area than the one transistor solution. The single transistor solution assumes that PL1 and PL2 will remain at the same ground potential as the local VSS connection to the single clamp transistor, which is normally a good assumption. However, noise or other problems may occur (especially during power up) that might cause PL1 or PL2 to glitch or have a DC offset between the PL1/PL2 driver output and VSS for brief periods; therefore, the two transistor design may provide a more robust solution.
- To read bitcell 400, plate line PL1 is switched from low to high while keeping plate line PL2 low, as indicated in time period s2. This induces voltages on nodes Q, QB whose values depend on the capacitor ratio between C1-C2 and C1′-C2′ respectively. The induced voltage in turn depends on the remnant polarization of each FeCap that was formed during the last data write operation to the FeCap's in the bit cell. The remnant polarization in effect “changes” the effective capacitance value of each FeCap which is how FeCaps provide nonvolatile storage. For example, when a
logic 0 was written to bitcell 400, the remnant polarization of C2 causes it to have a lower effective capacitance value, while the remnant polarization of C1 causes it to have a higher effective capacitance value. Thus, when a voltage is applied across C1-C2 by switching plate line PL1 high while holding plate line PL2 low, the resultant voltage on node Q conforms to equation (1). A similar equation holds for node QB, but the order of the remnant polarization of C1′ and C2′ is reversed, so that the resultant voltages on nodes Q and QB provide a differential representation of the data value stored inbit cell 400, as illustrated at 502, 503 inFIG. 5 . -
- With C1 and C2 denoting the effective, polarization-dependent capacitance values, the series FeCaps form a capacitive divider, so that

V(Q) = V(PL1) × C1 / (C1 + C2)    (1)
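As a quick numerical illustration of equation (1), the following snippet evaluates the divider for made-up effective capacitance values; the 1.5 V rail matches the embodiment described above, but the capacitance numbers are purely illustrative.

```c
/* Worked example of the equation (1) capacitive divider. The effective
 * capacitance values are invented for illustration; only the 1.5 V rail
 * comes from the described embodiment. */
#include <stdio.h>

static double v_q(double v_pl1, double c1, double c2)
{
    return v_pl1 * c1 / (c1 + c2);  /* series FeCap divider at node Q */
}

int main(void)
{
    /* One polarization state: C1 effectively larger than C2, so node Q
     * swings above mid-rail... */
    printf("V(Q)  = %.3f V\n", v_q(1.5, 1.6, 1.0));  /* ~0.923 V */
    /* ...while the complementary ratio on the QB side swings below it,
     * giving the differential read signal. */
    printf("V(QB) = %.3f V\n", v_q(1.5, 1.0, 1.6));  /* ~0.577 V */
    return 0;
}
```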
local sense amp 410 is then enabled during time period s3. After sensing thedifferential values sense amp 410 produces afull rail signal transfer gates NVL array 110, for example -
FIG. 6 is a timing diagram illustrating writing alogic 0 to bitcell 400. The write operation begins by raising both plate lines to Vdd during time period s1. This is called the primary storage method. The signal transitions on PL1 and PL2 are capacitively coupled onto nodes Q and QB, effectively pulling both storage nodes almost all the way to VDD (1.5 v). Data is provided on the bit lines BL, BLB and thetransfer gates Sense amp 410 is enabled by sense amp enable signals SAEN, SAENB during time period s3, s4 to provide additional drive after the write data drivers have forced adequate differential on Q/QB during time period s2. However, to avoid a short from the sense amp to the 1.2 v driver supply, the write data drivers are turned off at the end of time period s2 before the sense amp is turned on during time periods s3, s4. In an alternative embodiment called the secondary store method, write operations hold PL2 at 0 v or ground throughout the data write operation. This can save power during data write operations, but reduces the resulting read signal margin by 50% as C2 and C2′ no longer hold data via remnant polarization and only provide a linear capacitive load to the C1 and C2 FeCaps. - Key states such as PL1 high to SAEN high during s2, SAEN high pulse during s3 during read and FeCap DC bias states s3-4 during write can selectively be made multi-cycle to provide higher robustness without slowing down the NVL clock.
- For FeCap based circuits, reading data from the FeCap's may partially depolarize the capacitors. For this reason, reading data from FeCaps is considered destructive in nature; i.e. reading the data may destroy the contents of the FeCap's or reduce the integrity of the data at a minimum. For this reason, if the data contained in the FeCap's is expected to remain valid after a read operation has occurred, the data must be written back into the FeCaps.
- In certain applications, specific NVL arrays may be designated to store specific information that will not change over a period of time. For example, certain system states can be saved as a default return state where returning to that state is preferable to full reboot of the device. The reboot and configuration process for a state of the art ultra low power SoC can take 1000-10000 clock cycles or more to reach the point where control is handed over to the main application code thread. This boot time becomes critical for energy harvesting applications in which power is intermittent, unreliable, and limited in quantity. The time and energy cost of rebooting can consume most or all of the energy available for computation, preventing programmable devices such as MCU's from being used in energy harvesting applications. An example application would be energy harvesting light switches. The energy harvested from the press of the button on the light switch represents the entire energy available to complete the following tasks: 1) determine the desired function (on/off or dimming level), 2) format the request into a command packet, 3) wake up a radio and squirt the packet over an RF link to the lighting system. Known custom ASIC chips with hard coded state machines are often used for this application due to the tight energy constraints, which makes the system inflexible and expensive to change because new ASIC chips have to be designed and fabricated whenever any change is desired. A programmable MCU SOC would be a much better fit, except for the power cost of the boot process consumes most of the available energy, leaving no budget for executing the required application code.
- To address this concern, in one approach, at least one of the plurality of non-volatile logic element arrays is configured to store a boot state representing a state of the computing device apparatus after a given amount of a boot process is completed. The at least one non-volatile logic controller in this approach is configured to control restoration of data representing the boot state from the at least one of the plurality of non-volatile logic element arrays to corresponding ones of the plurality of volatile storage elements in response to detecting a previous system reset or power loss event for the computing device apparatus. To conserve power over a typical read/write operation for the NVL arrays, the at least one non-volatile logic controller can be configured to execute a round-trip data restoration operation that automatically writes back data to an individual non-volatile logic element after reading data from the individual non-volatile logic element without completing separate read and write operations.
- An example execution of a round-trip data restoration is illustrated in
FIG. 7 , which illustrates a writeback operation onbitcell 400, where the bitcell is read, and then written to the same value. As illustrated, initiating reading of data from the individual non-volatile logic element is started at a first time S1 by switching a first plate line PL1 high to induce a voltage on a node of a corresponding ferroelectric capacitor bit cell based on a capacitance ratio for ferroelectric capacitors of the corresponding ferroelectric capacitor bit cell. If clamp switches are used to ground the nodes of the ferroelectric capacitors, a clear signal CLR is switched from high to low at the first time S1 to unclamp those aspects of the individual non-volatile logic element from electrical ground. At a second time S2, a sense amplifier enable signal SAEN is switched high to enable a sense amplifier to detect the voltage induced on the node and to provide an output signal corresponding to data stored in the individual non-volatile logic element. At a third time S3, a pass line PASS is switched high to open transfer gates to provide an output signal corresponding to data stored in the individual non-volatile logic element. At a fourth time S4, a second plate line PL2 is switched high to induce a polarizing signal across the ferroelectric capacitors to write data back to the corresponding ferroelectric capacitor bit cell corresponding to the data stored in the individual non-volatile logic element. To the individual non-volatile logic element to a non-volatile storage state having the same data stored therein, at a fifth time S5 the first plate line PL1 and the second plate line PL2 are switched low, the pass line PASS is switched low at the sixth time S6, and the sense amplifier enable signal SAEN is switched law at the seventh time S7. If clamp switches are used to ground the nodes of the ferroelectric capacitors, at the seventh time a clear signal CLR is switched from low to high to clamp the aspects of the individual non-volatile logic element to the electrical ground to help maintain data integrity as discussed herein. This process includes a lower total number of transitions than what is needed for distinct and separate read and write operations (read, then write). This lowers the overall energy consumption. -
- Bitcell 400 is designed to maximize the read differential across Q/QB in order to provide a highly reliable first generation of NVL products. Two FeCaps are used on each side, rather than one FeCap with a constant BL capacitance as a load, because this doubles the differential voltage available to the sense amp. A sense amp is placed inside the bitcell to prevent loss of differential due to charge sharing between node Q and the BL capacitance and to avoid a voltage drop across the transfer gate; the sensed voltages are around VDD/2, and an HVT transfer gate takes a long time to pass them to the BL. Bitcell 400 thus helps achieve twice the signal margin of a regular FRAM bitcell known in the art, while not allowing any DC stress across the FeCaps.
FIGS. 5 and 6 are for illustrative purposes. Various embodiments may signal sequences that vary depending on the clock rate, process parameters, device sizes, etc. For example, in another embodiment, the timing of the control signals may operate as follows. During time period S1: PASS goes from 0 to 1 and PL1/PL2 go from 0 to 1. During time period S2: SAEN goes from 0 to 1, during which time the sense amp may perform level shifting as will be described later, or provides additional drive strength for a non-level shifted design. During time period S3: PL1/PL2 go from 1 to 0 and the remainder of the waveforms remain the same, but are moved up one clock cycle. This sequence is one clock cycle shorter than that illustrated inFIG. 6 . - In another alternative, the timing of the control signals may operate as follows. During time period S1: PASS goes from 0 to 1 (BL/BLB, Q/QB are 0 v and VDDL respectively). During time period S2: SAEN goes from 0 to 1 (BL/BLB, Q/QB are 0 v and VDDN respectively). During time period S3: PL1/PL2 go from 0 to 1 (BL/Q is coupled above ground by PL1/PL2 and is driven back low by the SA and BL drivers). During time period S4: PL1/PL2 go from 1 to 0 and the remainder of the waveforms remain the same.
-
FIGS. 8-9 are a schematic and timing diagram illustrating another embodiment of a ferroelectric nonvolatile bit cell 800, a 2C-3T self-referencing based NVL bitcell. The previously described 4-FeCap based bitcell 400 uses two FeCaps on each side of a sense amp to get a differential read with double the margin as compared to a standard 1C-1T FRAM bitcell. However, a 4-FeCap based bitcell has a larger area and may have a higher variation because it uses more FeCaps.
- Bitcell 800 helps achieve a differential 4-FeCap like margin in a lower area by using itself as a reference, referred to herein as self-referencing. By using fewer FeCaps, it also has lower variation than a 4-FeCap bitcell. Typically, a single sided cell needs to use a reference voltage that is in the middle of the operating range of the bitcell, which in turn reduces the read margin by half as compared to a two sided cell. Moreover, as the circuit fabrication process shifts, the reference value may become skewed, further reducing the read margin. A self-reference scheme allows comparison of a single sided cell against itself, thereby providing a higher margin. Tests of the self-referencing cell described herein have provided at least double the margin over a fixed reference cell.
- Bitcell 800 has two FeCaps C1, C2 that are connected in series to form node Q 804. Plate line 1 (PL1) is coupled to FeCap C1 and plate line 2 (PL2) is coupled to FeCap C2. The plate lines are used to provide biasing to the FeCaps during reading and writing operations. Pass gate 802 is configured to couple node Q to bit line BL. Pass gate 802 is implemented using a PMOS device and an NMOS device connected in parallel. This arrangement reduces the voltage drop across the pass gate during a write operation so that node Q is presented with a higher voltage during writes and thereby a higher polarization is imparted to the FeCaps. Alternatively, an NMOS pass gate may be used with a boosted word line voltage; in this case, the PASS signal would be boosted by one NFET Vt (threshold voltage). However, this may lead to reliability problems and excess power consumption. Using a CMOS pass gate adds additional area to the bit cell but improves speed and power consumption. Clamp transistor MC 806 is coupled to node Q. Clamp transistor 806 is configured to clamp the Q node to a voltage that is approximately equal to the low logic voltage on the plate lines, in response to clear signal CLR during non-access time periods s0, which in this embodiment is 0 volts (ground). In this manner, during times when the bit cell is not being accessed for reading or writing, no voltage is applied across the FeCaps, and therefore TDDB and unintended partial depolarization are essentially eliminated.
FIG. 9 at time period s0, so there is no DC bias across the FeCaps when the bitcell is not being accessed. To begin a read operation, PL1 is toggled high while PL2 is kept low, as shown during time period s1. Asignal 902 develops on node Q from a capacitance ratio based on the retained polarization of the FeCaps from a last data value previously written into the cell, as described above with regard toequation 1. This voltage is stored on aread capacitor 820 external to the bitcell by passing the voltage thoughtransfer gate 802 onto bit line BL and then throughtransfer gate 822 in response to a second enable signal EN1. Note: BL and the read capacitors are precharged to VDD/2 before thepass gates read storage capacitors clamp transistor 806 during time period s2. Next, PL2 is toggled high keeping PL1 low during time period s3. Anew voltage 904 develops on node Q, but this time with the opposite capacitor ratio. This voltage is then stored on anotherexternal read capacitor 821 viatransfer gate 823. Thus, the same two FeCaps are used to read a high as well as low signal.Sense amplifier 810 can then determine the state of the bitcell by using the voltages stored on theexternal read capacitors - Typically, there will be an array of
bit cells 800. One column of bit cells 800-800 n is illustrated inFIG. 8 coupled viabit line 801 to readtransfer gates SoC 100, the NVL arrays are 8×32; however, as discussed earlier, different configurations may be implemented. The read capacitors and sense amps may be located in the periphery of the memory array, for example. -
FIG. 10 is a block diagram illustratingNVL array 110 in more detail. Embedding non-volatile elements outside the controlled environment of a large array presents reliability and fabrication challenges. As discussed earlier with reference toFIG. 1 , adding testability features to individual NVL FFs may be prohibitive in terms of area overhead. To amortize the test feature costs and improve manufacturability,SoC 100 is implemented using 256bmini-NVL arrays 110, of FeCap based bitcells dispersed throughout the logic cloud to save state of thevarious flip flops 120 when power is removed. Each cloud 102-104 ofFFs 120 includes an associatedNVL array 110. Acentral NVL controller 106 controls all the arrays and their communication withFFs 120. - While an NVL array may be implemented in any number of n rows of m column configurations, in this example,
NVL array 110 is implemented with an array 1040 of eight rows and thirty-two bit columns of bitcells. Each individual bit cell, such as bitcell 1041, is coupled to a set of control lines provided by row drivers 1042. The control signals described earlier, including plate lines (PL1, PL2), sense amp enable (SAEN), transfer gate enable (PASS), and clear (CLR), are all driven by the row drivers. There is a set of row drivers for each row of bitcells. - Each individual bit cell, such as
bitcell 1041, is also coupled via the bitlines to a set of input/output (IO) drivers 1044. In this implementation, there are thirty-two sets of IO drivers, such as IO driver set 1045. Each driver set produces an output signal 1046 that provides a data value when a row of bit lines is read. Each bitline runs the length of a column of bitcells and couples to an IO driver for that column. Each bitcell may be implemented as 2C-3T bitcell 800, for example. In this case, a single bitline will be used for each column, and the sense amps and read capacitors will be located in IO driver block 1044. In another implementation of NVL array 110, each bitcell may be implemented as 4C-12T bit cell 400. In this case, the bitlines will be a differential pair with two IO drivers for each column. A comparator receives the differential pair of bitlines and produces a final single bit value that is provided to the output latch. Other implementations of NVL array 110 may use other known or later developed bitcells in conjunction with the row drivers and IO drivers that will be described in more detail below. -
Timing logic 1046 generates timing signals that are used to control the read drivers to generate the sequence of control signals for each read and write operation. Timing logic 1046 may be implemented using synchronous or asynchronous state machines, or other known or later developed logic techniques. One potential alternative embodiment utilizes a delay chain with multiple outputs that "tap" the delay chain at desired intervals to generate the control signals. Multiplexors can be used to provide multiple timing options for each control signal. Another potential embodiment uses a programmable delay generator that produces edges at the desired intervals using dedicated outputs that are connected to the appropriate control signals. -
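As an illustration of the delay-chain alternative, a small behavioral sketch follows; the tap names and nanosecond intervals are hypothetical, chosen only to show how tapping a delay chain at fixed offsets yields an edge schedule for the row control signals.

```python
# Sketch (assumed taps) of a tapped delay chain generating control edges.
TAPS_NS = {          # delay-chain tap -> elapsed time in nanoseconds
    "PASS_rise": 2,
    "PL1_rise": 4,
    "SAEN_rise": 10,
    "SAEN_fall": 14,
    "PL1_fall": 16,
    "PASS_fall": 18,
}

def edge_schedule(start_ns: int = 0) -> list[tuple[int, str]]:
    """Return (time, edge) pairs sorted by when each tap fires."""
    return sorted((start_ns + t, name) for name, t in TAPS_NS.items())

for when, edge in edge_schedule():
    print(f"t={when:3d} ns  {edge}")
```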
FIG. 11 is a more detailed schematic of a set of input/output circuits 1150 used in the NVL array of FIG. 10. Referring back to FIG. 10, each IO set 1045 of the thirty-two drivers in IO block 1044 is similar to IO circuits 1150. I/O block 1044 provides several features to aid testability of NVL bits. - Referring now to
FIG. 11, a first latch (L1) 1151 serves as an output latch during a read and also combines with a second latch (L2) 1152 to form a scan flip flop. The scan output (SO) signal is routed to multiplexor 1153 in the write driver block 1158 to allow writing scanned data into the array during debug. Scan output (SO) is also coupled to the scan input (SI) of the next set of IO drivers to form a thirty-two bit scan chain that can be used to read or write a complete row of bits from NVL array 110. Within SoC 100, the scan latch of each NVL array is connected in a serial manner to form a scan chain that allows all of the NVL arrays to be accessed. Alternatively, the scan chains within the NVL arrays may be operated in a parallel fashion (N arrays will generate N chains) to reduce the number of internal scan flop bits on each chain in order to speed up scan testing. The number of chains and the number of NVL arrays per chain may be varied as needed. Typically, all of the storage latches and flipflops within SoC 100 include scan chains to allow complete testing of SoC 100. Scan testing is well known and does not need to be described in more detail herein. In this embodiment, the NVL chains are segregated from the logic chains on a chip so that the chains can be exercised independently and the NVL arrays can be tested without any dependencies on logic chain organization, implementation, or control. The maximum total length of the NVL scan chains will always be less than the total length of the logic chains, since the NVL chain length is reduced by a divisor equal to the number of rows in the NVL arrays. In the current embodiment, there are 8 entries per NVL array, so the total length of the NVL scan chains is ⅛th the total length of the logic scan chains. This reduces the time required to access and test NVL arrays and thus reduces test cost. It also eliminates the need to determine the mapping between logic flops, their positions on logic scan chains, and their corresponding NVL array bit locations (identifying the array, row, and column location), greatly simplifying NVL test, debug, and failure analysis. - While scan testing is useful, it does not provide a good mechanism for production testing of
SoC 100, since it may take a significant amount of time to scan in hundreds or thousands of bits for testing the various NVL arrays within SoC 100. This is because there is no direct access to bits within the NVL array. Each NVL bitcell is coupled to an associated flip-flop and is only written to by saving the state of the flip flop. Thus, in order to load a test pattern into an NVL array from the associated flipflops, the corresponding flipflops must be set up using a scan chain. Determining which bits on a scan chain have to be set or cleared in order to control the contents of a particular row in an NVL array is a complex task, as the connections are made based on the physical location of arbitrary groups of flops on a silicon die and not based on any regular algorithm. As such, the mapping of flops to NVL locations is not tightly controlled and is typically somewhat random. - An improved testing technique is provided within
IO drivers 1150. NVL controller 106, referring back to FIG. 1, has state machine(s) to perform fast pass/fail tests for all NVL arrays on the chip to screen out bad dies. In one such approach, at least one non-volatile logic controller is configured to control a built-in self test mode in which all zeros or all ones are written to at least a portion of an NVL array of the plurality of NVL arrays, and it is then determined whether the data read from that portion of the NVL array is all ones or all zeros. This is done by first writing all 0's or 1's to a row using all-0/1 write driver 1180, applying an offset disturb voltage (V_Off), and then reading the same row using parallel read test logic 1170. Signal corr_1 from AND gate G1 goes high if the data output signal (OUT) from data latch 1151 is high and the corr_1 signal from the adjacent column's IO driver's parallel read test logic AND gate G1 is high. In this manner, the G1 AND gates of the thirty-two sets of I/O blocks 1150 in NVL array 110 implement a large 32-input AND gate that tells the NVL controller whether all outputs are high for the selected row of NVL array 110. OR gate G0 does the same for reading 0's. In this manner, the NVL controller may instruct all of the NVL arrays within SoC 100 to simultaneously perform an all-ones write to a selected row, and then instruct all of the NVL arrays to simultaneously read the selected row and provide a pass/fail indication, using only a few control signals and without transferring any explicit test data from the NVL controller to the NVL arrays. In typical memory array BIST (Built In Self Test) implementations, the BIST controller must have access to all memory output values so that each output bit can be compared with the expected value. Given that there are many thousands of logic flops on typical silicon SOC chips, the total number of NVL array outputs can also measure in the thousands, and it would be impractical to test these arrays using normal BIST logic circuits due to the large number of data connections and data comparators required. The NVL test method can then be repeated eight times for NVL arrays having eight rows (the number of repetitions will vary according to the array organization; in one example, a 10-entry NVL array implementation would repeat the test method 10 times), so that all of the NVL arrays in SoC 100 can be tested for correct all-ones operation in only eight write cycles and eight read cycles. Similarly, all of the NVL arrays in SoC 100 can be tested for correct all-zeros operation in only eight write cycles and eight read cycles. The results of all of the NVL arrays may be condensed into a single signal indicating pass or fail by an additional AND gate and OR gate that receive the corr_0 and corr_1 signals from each of the NVL arrays and produce a single corr_0 and corr_1 signal, or the NVL controller may look at each individual corr_0 and corr_1 signal. - All 0/1
write driver 1180 includes PMOS devices M1, M3 and NMOS devices M2, M4. Devices M1 and M2 are connected in series to form a node that is coupled to the bitline BL, while devices M3 and M4 are connected in series to form a node that is coupled to the inverse bitline BLB. Control signal "all_1_A" and its inverse "all_1_B" are generated by NVL controller 106. When asserted during a write cycle, they activate devices M1 and M4 to cause the bit lines BL and BLB to be pulled to represent a data value of logic 1. Similarly, control signal "all_0_A" and its inverse "all_0_B" are generated by NVL controller 106. When asserted during a write cycle, they activate devices M2 and M3 to cause the bit lines BL and BLB to be pulled to represent a data value of logic 0. In this manner, the thirty-two drivers are operable to write all ones into a row of bit cells in response to one control signal and to write all zeros into a row of bit cells in response to another control signal. One skilled in the art can easily design other circuit topologies to accomplish the same task; the current embodiment is preferred as it requires only four transistors to accomplish the required data writes. - During a normal write operation, write
driver block 1158 receives a data bit value to be stored on the data_in signal. Write drivers within block 1158 then drive the complementary bit lines BL and BLB with the received data value during the write cycle. -
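A behavioral sketch of the all-0/1 write and the chained pass/fail reduction described above follows. The Python loops and polarities are assumptions; in hardware the thirty-two column outputs are reduced by the daisy-chained G1 AND gates and G0 OR gates rather than by software.

```python
# Sketch (assumed polarities) of the all-0/1 write plus corr_0/corr_1 reduction.
def write_all(row: list[int], value: int) -> None:
    """All-0/1 write driver 1180: force every bitline to the same value."""
    for col in range(len(row)):
        row[col] = value

def corr_signals(outputs: list[int]) -> tuple[bool, bool]:
    """corr_1 models the chained G1 AND gates (true if every column read 1);
    corr_0 models the G0 chain for the all-zeros case."""
    corr_1 = all(bit == 1 for bit in outputs)
    corr_0 = all(bit == 0 for bit in outputs)
    return corr_0, corr_1

row = [0] * 32
write_all(row, 1)
print(corr_signals(row))   # (False, True): the row passes the all-ones check
```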
FIG. 12A is a timing diagram illustrating an offset voltage test during a read cycle. To apply a disturb voltage to a bitcell, state s1 is modified during a read. This figure illustrates a voltage disturb test for reading a data value of "0" (node Q); a voltage disturb test for a data value of "1" is similar, but injects the disturb voltage onto the opposite side of the sense amp (node QB). Thus, the disturb voltage in this embodiment is injected onto the low voltage side of the sense amp based on the logic value being read. The Vcon transfer gates may be controlled by NVL controller 106, by an off-chip test controller, or via an external production tester to produce a desired amount of offset voltage V_OFF. NVL controller 106 may assert the Vcon control signal for the bitline side storing a "0" during the s1 time period to thereby enable the corresponding Vcon transfer gate, while the remaining transfer gates stay disabled. -
FIG. 12B illustrates a histogram generated during a sweep of offset voltage. Bit-level failure margins can be studied by sweeping V_Off and scanning out the read data bits using a sequence of read cycles, as described above. In this example, the worst case read margin is 550 mV, the mean value is 597 mV, and the standard deviation is 22 mV. In this manner, the operating characteristics of all bit cells in each NVL array on an SoC may be easily determined. - As discussed above, embedding non-volatile elements outside the controlled environment of a large array presents reliability and fabrication challenges. The NVL bitcell should be designed for maximum read signal margin and in-situ testability, as is needed for any NV-memory technology. However, the NVL implementation cannot rely on SRAM-like built-in self test (BIST) because the NVL arrays are distributed inside the logic cloud. The NVL implementation described above includes NVL arrays controlled by a
central NVL controller 106. While screening a die for satisfactory behavior, NVL controller 106 runs a sequence of steps that are performed on-chip without any external tester interference. The tester only needs to issue a start signal and apply an analog voltage that corresponds to the desired signal margin. The controller first writes all 0s or 1s to all bits in the NVL array. It then starts reading an array one row at a time. The NVL array read operations do not necessarily immediately follow the NVL array write operations. Often, high temperature bake cycles are inserted between data write operations and data read operations in order to accelerate time and temperature dependent failure mechanisms, so that defects that would impact long term data retention can be screened out during manufacturing related testing. As described above in more detail, the array contains logic that ANDs and ORs all outputs of the array. These two signals are sent to the controller. Upon reading each row, the controller looks at the two signals from the array and, based on knowledge of what it previously wrote, decides if the data read was correct or not in the presence of the disturb voltage. If the data is incorrect, it issues a fail signal to the tester, at which point the tester can eliminate the die. If the row passes, the controller moves on to the next row in the array. All arrays can be tested in parallel at the normal NVL clock frequency. This enables high speed on-chip testing of the NVL arrays with the tester only issuing a start signal and providing the desired read signal margin voltage, while the NVL controller reports pass at the end of the built-in testing procedure or generates a fail signal whenever the first failing row is detected. Fails are reported immediately so the tester can abort the test procedure at the point of first failure rather than waste additional test time testing the remaining rows. This is important because test time, and thus test cost, for non-volatile memories (NVM) often dominates the overall test cost for an SOC with embedded NVM. If the NVL controller activates the "done" signal and the fail signal has not been activated at any time during the test procedure, the die undergoing testing has passed the required tests. - For further failure analysis, the controller may also have a debug mode. In this mode, the tester can specify an array and row number, and the NVL controller can then read or write just that row. The read contents can be scanned out using the NVL scan chain. This method provides read or write access to any NVL bit on the die without CPU intervention and without requiring the use of long, complicated SOC scan chains in which the mapping of NVL array bits to individual flops is random. Further, this can be done in concert with applying an analog voltage for read signal margin determination, so exact margins for individual bits can be measured.
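The screening flow just described can be summarized in sketch form. The model below is an assumption: FakeNVLArray is a toy stand-in, and a real read would degrade as the applied disturb voltage V_Off grows, but the fail-fast loop structure follows the sequence above.

```python
# Sketch (assumed interface) of the on-chip pass/fail screening loop.
class FakeNVLArray:
    """Toy stand-in for an 8x32 mini-array; illustration only."""
    def __init__(self, rows: int = 8, cols: int = 32):
        self.num_rows = rows
        self.mem = [[0] * cols for _ in range(rows)]

    def write_row(self, row: int, pattern: int) -> None:
        self.mem[row] = [pattern] * len(self.mem[row])

    def read_row_corr(self, row: int, v_off: float) -> tuple[bool, bool]:
        bits = self.mem[row]   # a real read would degrade as v_off grows
        return all(b == 0 for b in bits), all(b == 1 for b in bits)

def screen_die(arrays: list[FakeNVLArray], pattern: int, v_off: float) -> bool:
    for array in arrays:                    # writes proceed in parallel on-chip
        for row in range(array.num_rows):
            array.write_row(row, pattern)
    # (production flow may insert a high-temperature bake between write and read)
    for array in arrays:
        for row in range(array.num_rows):
            corr_0, corr_1 = array.read_row_corr(row, v_off)
            if not (corr_1 if pattern == 1 else corr_0):
                return False                # fail reported immediately; tester aborts
    return True                             # "done" with no fail: die passes

assert screen_die([FakeNVLArray() for _ in range(10)], pattern=1, v_off=0.55)
```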
- These capabilities help make NVL practical because without testability features it would be risky to use non-volatile logic elements in a product. Further, pass/fail testing on-die with minimal tester interaction reduces test time and thereby cost.
- NVL implementation using mini-arrays distributed in the logic cloud means that a sophisticated error detection method like ECC would require a significant amount of additional memory columns and control logic to be used on a per array basis, which could be prohibitive from an area standpoint. However, in order to provide an enhanced level of reliability, the NVL arrays of
SoC 100 may include parity protection as a low cost error detection method, as will now be described in more detail. -
FIG. 13 is a schematic illustrating parity generation in NVL array 110. It illustrates an example NVL array having thirty-two columns of bits (0:31), in which each column exclusive-ORs the input data value DATA_IN 1151 with the output of a similar XOR gate of the previous column's IO driver. Each IO driver section, such as section 1350, of the NVL array may contain an XOR gate 1160, referring again to FIG. 11A. During a row write, the output of the XOR gate 1160 that is in column 30 is the overall parity value of the row of data being written in bit columns 0:30 and is used to write the parity value into the last column by feeding its output to the data input of column 31 of the NVL mini-array, shown as XOR_IN in FIG. 11B. - In a similar manner, during a read,
XOR gate 1160 exclusive-ORs the data value DATA_OUT from read latch 1151, via mux 1161 (see FIG. 11), with the output of a similar XOR gate of the previous column's IO driver. The output of the XOR gate 1160 that is in bit column 30 is the overall parity value for the row of data that was read from bit columns 0:30 and is compared to the parity value read from bit column 31 in parity error detector 1370. If the overall parity value determined from the read data does not match the parity bit read from column 31, then a parity error is declared. - When a parity error is detected, it indicates that the stored FF state values are not trustworthy. Since the NVL array is typically read when the SoC is restarting operation after being in a power-off state, detection of a parity error indicates that a full boot operation needs to be performed in order to regenerate the correct FF state values.
- However, if the FF state was not properly stored prior to turning off the power or this is a brand new device, for example, then an indeterminate condition may exist. For example, if the NVL array is empty, then typically all of the bits may have a value of zero, or they may all have a value of one. In the case of all zeros, the parity value generated for all zeros would be zero, which would match the parity bit value of zero. Therefore, the parity test would incorrectly indicate that the FF state was correct and that a boot operation is not required, when in fact it would be required. In order to prevent this occurrence, an inverted version of the parity bit may be written to
column 31 by bit line driver 1365, for example. Referring again to FIG. 11A, note that while bit line driver 1156 for columns 0-30 also inverts the input data bits, mux 1153 inverts the data_in bits when they are received, so the result is that the data in columns 0-30 is stored un-inverted. In another embodiment, the data bits may be inverted and the parity bit not inverted, for example. - In the case of all ones, if there is an even number of columns, then the calculated parity would equal zero, and an inverted value of one would be stored in the parity column. Therefore, an NVL array with an even number of data columns, all storing ones, would not detect a parity error. In order to prevent this occurrence,
NVL array 110 is constrained to have an odd number of data columns. For example, in this embodiment, there are thirty-one data columns and one parity column, for a total of thirty-two bitcell columns. - In some embodiments, when an NVL read operation occurs, control logic for the NVL array causes the parity bit to be read, inverted, and written back. This allows the NVL array to detect when prior NVL array writes were incomplete or invalid/damaged. Remnant polarization is not completely wiped out by a single read cycle; typically, it takes 5-15 read cycles to fully depolarize the FeCaps or to corrupt the data enough to reliably trigger an NVL read parity error. For example, if only four out of eight NVL array rows were written during the last NVL store operation due to loss of power, this would most likely result in an incomplete capture of the prior machine state. However, because of remnant polarization, the four rows that were not written in the most recent state storage sequence will likely still contain stale data from further back in time, such as from two NVL store events ago, rather than data from the most recent NVL data store event. The parity and stale data from the four rows will likely be read as valid data rather than invalid data. This is highly likely to cause the machine to lock up or crash when the machine state is restored from the NVL arrays during the next wakeup/power-up event. Therefore, by writing back the parity bit inverted after every entry is read, each row of stale data is essentially forcibly invalidated.
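The following worked sketch ties the parity pieces together: thirty-one data columns (odd), an inverted parity bit in column 31, and the post-read invalidation write-back. The function names are hypothetical and the exact comparator wiring is an assumption (the check here re-inverts the stored bit), but the all-zeros and all-ones cases behave exactly as the text describes.

```python
# Worked sketch (assumed helper names) of the inverted-parity scheme.
DATA_COLS = 31                       # odd count of data columns, plus column 31

def write_row(data: list[int]) -> list[int]:
    assert len(data) == DATA_COLS
    parity = 0
    for bit in data:                 # XOR chain through columns 0:30
        parity ^= bit
    return data + [parity ^ 1]      # column 31 stores the INVERTED parity

def row_is_valid(row: list[int]) -> bool:
    parity = 0
    for bit in row[:DATA_COLS]:
        parity ^= bit
    return parity == (row[DATA_COLS] ^ 1)

def invalidate_after_read(row: list[int]) -> None:
    """Post-read write-back of the inverted parity bit: the next check fails."""
    row[DATA_COLS] ^= 1

good = write_row([1, 0, 1] + [0] * 28)
assert row_is_valid(good)
assert not row_is_valid([0] * 32)    # erased/never-written row is caught
assert not row_is_valid([1] * 32)    # all-ones row is caught (odd data columns)
invalidate_after_read(good)
assert not row_is_valid(good)        # stale row is rejected on the next restore
```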
- Writing data back to NVL entries is power intensive, so it is preferable not to write data back to all bits, just the parity bit. The current embodiment of the array disables the PL1, PL2, and sense amp enable signals for all non-parity bits (i.e., data bits) to minimize the parasitic power consumption of this feature.
- In this manner, each time the SoC transitions from a no-power state to a power-on state, a valid determination can be made that the data being read from the NVL arrays contains valid FF state information. If a parity error is detected, then a boot operation can be performed in place of restoring FF state from the NVL arrays.
- Referring back to
FIG. 1, low power SoC 100 has multiple voltage and power domains, such as VDDN_FV and VDDN_CV for the NVL arrays, VDDR for the sleep mode retention latches and well supplies, and VDDL for the bulk of the logic blocks that form the system microcontroller, various peripheral devices, SRAM, ROM, etc., as described earlier with regard to Table 1 and Table 2. FRAM has internal power switches and is connected to the always-on supply VDDZ. In addition, the VDDN_FV domain may be designed to operate at one voltage, such as the 1.5 volts needed by the FeCap bit cells, while the VDDL and VDDN_CV domains may be designed to operate at a lower voltage to conserve power, such as 0.9-1.5 volts, for example. Such an implementation requires using power switches 108, level conversion, and isolation in appropriate areas. Aspects of the isolation and level conversion needed with respect to NVL blocks 110 will now be described in more detail. The circuits are designed such that VDDL/VDDN_CV can be any valid voltage less than or equal to VDDN_FV and the circuit will function correctly. -
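As a small illustration, the supply rule just stated (VDDL and VDDN_CV at or below VDDN_FV) can be captured as a configuration check; the dictionary of voltages below is only an example, not a specification from the patent.

```python
# Sketch (illustrative values) encoding the stated supply-domain constraints.
domains = {"VDDN_FV": 1.5, "VDDN_CV": 1.2, "VDDL": 0.9, "VDDR": 0.9}

def check_domains(d: dict[str, float]) -> None:
    for name in ("VDDL", "VDDN_CV"):
        # variable supplies must stay within 0.9 V and at or below VDDN_FV
        assert 0.9 <= d[name] <= d["VDDN_FV"], f"{name} out of range"
    assert d["VDDN_FV"] == 1.5, "FeCap bitcells expect the 1.5 V FRAM-level supply"

check_domains(domains)
```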
FIG. 14 is a block diagram illustrating power domains within NVL array 110. The various blocks of logic and memory may be arranged as illustrated in Table 3. -
TABLE 3: example full chip power domains

Domain | Voltage level | Description
---|---|---
VDD | 0.9-1.5 | Always-ON supply for the VDDL, VDDR, and VDDN_CV power switches, and for any always-ON logic
VDDZ | 1.5 | Always-on 1.5 V supply for FRAM and for the VDDN_FV power switches; FRAM has internal power switches
VDDL | 0.9-1.5 | All logic, the master stage of all flops, SRAM, ROM, the write multiplexor, buffers on FF outputs, and mux outputs; variable logic voltage, e.g. 0.9 to 1.5 V, derived from the output of the VDDL power switches
VDDN_CV | 0.9-1.5 | NVL array control and timing logic, IO circuits, and the NVL controller; derived from the VDDN_CV power switches
VDDN_FV | 1.5 | NVL array wordline driver circuits 1042 and NVL bitcell array 1040; same voltage as FRAM; derived from the VDDN_FV power switches
VDDR | 0.9-1.5 | Data retention domain: the slave stage of retention flops, buffers on the NVL clock, flop retention enable signal buffers, NVL control outputs such as flop update control signal buffers, and buffers on NVL data outputs; derived from the VDDR power switches

- Power domains VDDL, VDDN_CV, VDDN_FV, and VDDR described in Table 3 are controlled using a separate set of power switches, such as
switches 108 described earlier. However, isolation may be needed for some conditions. Data output buffers within IO buffer block 1044 are in the NVL logic power domain VDDN_CV and therefore may remain off while domain VDDR (or VDDL, depending on the specific implementation) is ON during normal operation of the chip. ISO-Low isolation is implemented to tie all such signals to ground during such a situation. While VDDN_CV is off, logic connected to data outputs in the VDDR (or VDDL) domain in the random logic area may generate short circuit current between power and ground in internal circuits if any signals from the VDDN_CV domain are floating (not driven when the VDDN_CV domain is powered down) and are not isolated. The same is applicable to the corr_0/corr_1 outputs and the scan output of the NVL arrays. The general idea here is that any outputs of the NVL array are isolated whenever the NVL array has no power given to it. In case there is always-ON logic present in the chip, all signals going from VDDL or VDDN_CV to VDD must be isolated using input isolation at the VDD domain periphery. Additional built-in isolation exists in NVL flops at the ND input. Here, the input goes to a transmission gate whose control signal NU is driven by an always-on signal. When the input is expected to be indeterminate, NU is made low, thereby disabling the ND input port. Similar built-in isolation exists on the data inputs and scan-in of the NVL array. This isolation would be needed during NVL restore when VDDL is OFF. Additionally, signal NU and the NVL data input multiplexor enable signals (mux_sel) must be buffered only in the VDDR domain. The same applies to the retention enable signal.
- Level conversion from the lower voltage VDDL domain to the higher voltage VDDN domain is needed on control inputs of the NVL arrays that go to the NVL bitcells, such as: row enables, PL1, PL2, restore, recall, and clear, for example. This enables a reduction in system power dissipation by allowing blocks of SOC logic and NVL logic gates that can operate at a lower voltage to do so. For each row of bitcells in
bitcell array 1040, there is a set of word line drivers 1042 that drive the signals for each row of bitcells, including plate lines PL1, PL2, transfer gate enable PASS, sense amp enable SAEN, clear enable CLR, and voltage margin test enable VCON, for example. The bitcell array 1040 and the wordline circuit block 1042 are supplied by VDDN. Level shifting on input signals to 1042 is handled by dedicated level shifters (see FIG. 15), while level shifting on inputs to the bitcell array 1040 is handled by special sequencing of the circuits within the NVL bitcells, without adding any additional dedicated circuits to the array datapath or bitcells. -
FIG. 15 is a schematic of a level converter 1500 for use in NVL array 110. FIG. 15 illustrates one wordline driver that may be part of the set of wordline drivers 1042. Level converter 1500 includes PMOS transistors P1, P2 and NMOS transistors N1, N2 that are formed in region 1502 in the 1.5 volt VDDN domain for wordline drivers 1042. However, the control logic in timing and control module 1046 is located in region 1503 in the 1.2 v VDDL domain (1.2 v is used to represent the variable VDDL core supply, which can range from 0.9 v to 1.5 v). The 1.2 volt signal 1506 is representative of any of the row control signals generated by control module 1046 for use in accessing NVL bitcell array 1040. Inverter 1510 forms a complementary pair of control signals 1506, 1512 in region 1503 that are then routed to transistors N1 and N2 in level converter 1500. In operation, when 1.2 volt signal 1506 goes high, NMOS device N1 pulls the gate of PMOS device P2 low, which causes P2 to pull signal 1504 up to 1.5 volts. Similarly, when 1.2 volt signal 1506 goes low, complementary signal 1512 causes NMOS device N2 to pull the gate of PMOS device P1 low, which pulls up the gate of PMOS device P2 and allows signal 1504 to go low, to approximately zero volts. The NMOS devices must be stronger than the PMOS devices so that the converter does not get stuck. In this manner, level shifting may be done across the voltage domains, and power may be saved by placing the control logic, including inverter 1510, in the lower voltage domain 1503. For each signal, the controller is coupled to its level converter 1500 by the two complementary control signals 1506, 1512. -
FIG. 16 is a timing diagram illustrating operation of level shifting using a sense amp within a ferroelectric bitcell. Input data that is provided to NVL array 110 from multiplexor 212, referring again to FIG. 2, also needs to be level shifted from the 1.2 v VDDL domain to the 1.5 volts needed for best operation of the FeCaps in the 1.5 volt VDDN domain during write operations. This may be done using the sense amp of bit cell 400, for example. Referring again to FIG. 4 and to FIG. 13, note that each bit line BL, such as BL 1352, which comes from the 1.2 volt VDDL domain, is coupled through a transfer gate to bitcell 400. Sense amp 410 operates in the 1.5 v VDDN power domain. Referring now to FIG. 16, note that during time period s2, data is provided on the bit lines BL, BLB and passed through the transfer gates. -
Sense amp 410 is enabled by sense amp enable signals SAEN, SAENB during time periods s3, s4 to provide additional drive, as illustrated at 1604, after the write data drivers have placed the data on the bit lines. The sense amp pulls the logic 0 side of the sense amp (Q or QB) to VSS while the other side, containing the logic 1, is pulled up to the VDDN voltage level. In this manner, the existing NVL array hardware is reused to provide a voltage level shifting function during NVL store operations. - However, to avoid a short from the sense amp to the 1.2 v driver supply, the write data drivers are isolated from the sense amp at the end of time period s2, before the sense amp is turned on during time periods s3, s4. This may be done by turning off the bit line drivers by de-asserting the STORE signal after time period s2 and/or by disabling the transfer gates by de-asserting PASS after time period s2.
- Using the above described arrangements, various configurations are possible to maximize power savings or usability at various points in a processing or computing device's operation cycle. In one such approach, a computing device can be configured to operate continuously across a series of power interruptions without loss of data or reboot. With reference to the example illustrated in
FIG. 17, a processing device 1700 as described above includes a plurality of non-volatile logic element arrays 1710, a plurality of volatile storage elements 1720, and at least one non-volatile logic controller 1730 configured to control the plurality of non-volatile logic element arrays 1710 to store a machine state represented by the plurality of volatile storage elements 1720 and to read out a stored machine state from the plurality of non-volatile logic element arrays 1710 to the plurality of volatile storage elements 1720. A voltage or current detector 1740 is configured to sense power quality from an input power supply 1750. - A
power management controller 1760 is in communication with the voltage or current detector 1740 to receive information regarding the power quality from the voltage or current detector 1740. The power management controller 1760 is also configured to be in communication with the at least one non-volatile logic controller 1730 to provide information effecting storage of the machine state to, and restoration of the machine state from, the plurality of non-volatile logic element arrays 1710. - A
voltage regulator 1770 is connected to receive power from the input power supply 1750 and provide power to an output power supply rail 1755 configured to provide power to the processing device 1700. The voltage regulator 1770 is further configured to be in communication with the power management controller 1760 and to disconnect the output power supply rail 1755 from the input power supply 1750, such as through control of a switch 1780, in response to a determination that the power quality is below a threshold. - The
power management controller 1760 and the voltage or current detector 1740 work together with the at least one non-volatile logic controller 1730 and the voltage regulator 1770 to manage the data backup and restoration processes independently of the primary computing path. In one such example, the power management controller 1760 is configured to send a signal to effect stoppage of clocks for the processing device 1700 in response to the determination that the power quality is below the threshold. The voltage regulator 1770 can then send a disconnect signal to the power management controller 1760 in response to disconnecting the output power supply rail 1755 from the input power supply 1750. The power management controller 1760 sends a backup signal to the at least one non-volatile logic controller 1730 in response to receiving the disconnect signal. Upon completion of the backup of system state into the NVL arrays, power can be removed from the SOC, or can continue to degrade, without further concern for loss of machine state. - The individual elements that make the determination of power quality can vary in different approaches. For instance, the
voltage regulator 1770 can be configured to detect the power quality rising above the threshold and, in response, to send a good-power signal to the power management controller 1760. In response, the power management controller 1760 is configured to send a signal to provide power to the plurality of non-volatile logic element arrays 1710 and the at least one non-volatile logic controller 1730 to facilitate restoration of the machine state. The power management controller 1760 is configured to determine that power-up is complete and, in response, send a signal to effect release of clocks for the processing device 1700, whereupon the processing device 1700 resumes operation from the machine state in place prior to the determination that the power quality was below the threshold. - To assure that the
processing device 1700 has enough power to complete a backup process, a charge storage element 1790 is configured to provide temporary power to the processing device 1700 sufficient to power it long enough to store the machine state in the plurality of non-volatile logic element arrays 1710 after the output power supply rail 1755 is disconnected from the input power supply 1750. The charge storage element 1790 may be at least one dedicated on-die (or off-die) capacitor designed to store such emergency power. In another approach, the charge storage element 1790 may be circuitry in which naturally occurring parasitic charge builds up in the die, where the dissipation of that charge to ground provides sufficient power to complete a backup operation. - The architecture described above can facilitate a number of operating configurations that improve overall processing device function over previous designs. In one such example, known ULP applications sometimes require a different task to be performed for each interrupt that is triggered, so that different operations and code execution are desired depending on the specific interrupt. Also, Real Time Operating Systems (RTOS) often switch back and forth between multiple operating threads. Today, these applications copy the current machine context (program counter, stack pointer, register file contents, and the like) into temporary storage before switching to a different thread or interrupt, and restore it later upon returning to the current thread of code execution. This storage and restoration requires significant time and power. Time is the enemy of an RTOS, since the goal of an RTOS is to service operating requests "in real time" (almost instantly).
- To address this concern, a version of the processing or computing device described above can be configured to handle two or more operating threads or virtual machines. In one approach, the at least one non-volatile logic controller is configured to store first program data from a first program executed by the computing device apparatus in a first set of non-volatile logic element arrays of the plurality of non-volatile logic element arrays. Similarly, the at least one non-volatile logic controller is further configured to store second program data from a second program executed by the computing device apparatus in a second set of non-volatile logic element arrays of the plurality of non-volatile logic element arrays. The first program and the second program can correspond to distinct executing threads or virtual machines for the computing device apparatus, and the storage can be completed in response to receiving a stimulus regarding an interrupt for the computing device apparatus or in response to a power supply quality problem for the computing device apparatus. When the device needs to switch between processing threads or virtual machines, the at least one non-volatile logic controller is further configured to restore the first program data or the second program data from the plurality of non-volatile logic element arrays to the plurality of volatile storage elements in response to receiving a stimulus regarding whether the first program or the second program is to be executed by the computing device apparatus. The stimuli described above could be an actual instruction that triggers the context switch, an interrupt signal, an event from an internal timer, an event coming from outside the chip, or the like.
- An example arrangement used to effect the storage and restoration of the different processing threads or virtual machines is illustrated in
FIG. 18, which represents a modification of the example systems of FIGS. 1 and 2. In FIG. 18, a given cloud 1805 of volatile storage elements is served by a plurality 1810 of NVL arrays. To store the states of the volatile storage elements, a multiplexer 212 is connected to variably connect individual ones of the volatile storage elements to individual ones of the non-volatile logic element arrays. The at least one non-volatile logic controller 1806 is further configured to store the first program data or the second program data to the plurality of non-volatile logic element arrays by controlling the multiplexer 212 to connect individual ones of the plurality of volatile storage elements to either the first set 1812 of non-volatile logic element arrays or the second set 1814 of non-volatile logic element arrays based on whether the first program or the second program is executing in the computing device apparatus. A second multiplexer 1822 is connected to variably connect outputs of individual ones of the non-volatile logic element arrays to individual ones of the volatile storage elements. The at least one non-volatile logic controller 1806 is further configured to restore the first program data or the second program data to the plurality of volatile storage elements by controlling the second multiplexer 1822 to connect inputs of individual ones of the plurality of volatile storage elements to either the first set 1812 of non-volatile logic element arrays or the second set 1814 of non-volatile logic element arrays based on whether the first program or the second program is to be executed in the computing device apparatus. Generally speaking, in this example, the NVL arrays receive signals from the associated NVL controller during both read and write, whereas the first multiplexer 212 receives signals during a write-to-NVL-array process and the second multiplexer 1822 receives signals during a read-from-NVL-arrays process. -
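A compact sketch of this dual-context arrangement follows; the class and method names are hypothetical, standing in for the mux-steered save and restore paths (multiplexer 212 for writes, 1822 for reads) rather than modeling the hardware signals themselves.

```python
# Sketch (assumed names) of per-thread NVL array sets with mux selection.
class NVLContextStore:
    def __init__(self, num_threads: int = 2, state_bits: int = 8):
        # e.g. first set 1812 holds thread 0's context, second set 1814 thread 1's
        self.sets = [[0] * state_bits for _ in range(num_threads)]

    def backup(self, thread: int, flops: list[int]) -> None:
        """Write path (mux 212): steer FF outputs into the selected array set."""
        self.sets[thread] = flops.copy()

    def restore(self, thread: int) -> list[int]:
        """Read path (mux 1822): steer the selected array set back to FF inputs."""
        return self.sets[thread].copy()

store = NVLContextStore(num_threads=2, state_bits=8)
store.backup(thread=0, flops=[1, 0, 1, 1, 0, 0, 1, 0])  # interrupt: save thread 0
flops = store.restore(thread=1)                          # resume thread 1's context
```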
FIG. 19 is a flow chart illustrating operation of a processing device operating two or more processing threads as described above. The method includes operating 1902 a processing device having at least a first processing thread and a second processing thread using a plurality of volatile storage elements. First program data stored in the plurality of volatile storage elements during execution of the first processing thread is stored 1904 in a first set of non-volatile logic element arrays of a plurality of non-volatile logic element arrays. Similarly, second program data stored in the plurality of volatile storage elements during execution of the second processing thread is stored 1906 in a second set of non-volatile logic element arrays of the plurality of non-volatile logic element arrays. The storage in the NVL arrays can be done in response to a program-based or power-supply-quality-based interrupt, and the choice of which set of data to back up in the NVL arrays can be made based on the type of interrupt received. By one approach, the method can include controlling a multiplexer to connect individual ones of the plurality of volatile storage elements to either the first set of non-volatile logic element arrays or the second set of non-volatile logic element arrays based on whether the first processing thread or the second processing thread is executing in the processing device. To allow further processing of the respective threads, the method includes restoring 1908 the first program data or the second program data from the plurality of non-volatile logic element arrays to the plurality of volatile storage elements in response to receiving a stimulus regarding whether the first processing thread or the second processing thread is to be executed. - So configured, with reference to the example discussed above, by using NVL mini-arrays to save the key machine context, any number of distinct executing threads or virtual machines can be supported (limited only by the die area needed for the required NVL arrays). Switching to a different code stream based on the nature of the interrupt that needs to be serviced is simply a matter of saving the current machine context (program counter, registers, stack pointer, and the like) to the NVL mini-arrays dedicated to that operating thread and recovering the desired operating context from another set of NVL arrays. Switching between two operating contexts is controlled in hardware by using muxes on the NVL mini-array read and write data ports and control inputs to select the desired set of mini-arrays for the required operation. The multiple machine contexts are saved in NVL mini-arrays and are thus not sensitive to interruptions in the power supply. Machine execution can continue uninterrupted across supply disruptions, independent of the operating context being executed when the power is lost.
- Moreover, time and power savings in switching between operating threads or machine states can be realized. For example, in known systems, the existing machine context must be saved before operations are switched to another machine context. This is typically done by moving the machine context in chunks equal in size to the normal machine data path width (8 bit, 16 bit, 32 bit, 64 bit, etc.). Because the entire data bandwidth to memory in a typical machine is limited by the size of the machine's data path, it takes more than one machine clock cycle to store the machine context. For example, if a context must save a 32 bit program counter, a 32 bit stack pointer, a 64 entry × 32 bit register file, and a 32 entry × 32 bit register file, then the total machine context is 98 thirty-two bit machine "words". A full context save would take 98 clock cycles, assuming the memory can accept one 32 bit word per clock cycle. A full machine context can contain 1K-500K FFs depending on system complexity. For a system with a 32 bit data word and 500K FFs, it could take 500,000/32 = 15,625 clock cycles to save the entire virtual machine state. By contrast, NVL arrays arranged as described herein have parallel access to all FFs. In an example with 8 entries per NVL array and all NVL arrays operating in parallel, it would take only 8 clock cycles to store 500K FFs' worth of machine state.
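The arithmetic in the preceding paragraph can be checked directly; all figures below are the document's own examples.

```python
# Worked check of the context-size arithmetic quoted above.
context_words = 1 + 1 + 64 + 32     # PC + SP + 64x32b + 32x32b register files
assert context_words == 98          # 98 cycles through a 32-bit data path

big_state_ffs = 500_000
bus_width = 32
print(big_state_ffs // bus_width)   # 15625 cycles via the conventional data path
print(8)                            # vs. 8 cycles with all NVL arrays in parallel
```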
-
FIG. 20 is a block diagram of another SoC 2000 that includes NVL arrays, as described above. SoC 2000 features a Cortex-M0 processor core 2002, universal asynchronous receiver/transmitter (UART) 2004 and SPI (serial peripheral interface) 2006 interfaces, and 10 KB ROM, SRAM 2012, and 64 KB Ferroelectric RAM (FRAM) 2014 memory blocks, characteristic of a commercial ultra low power (ULP) microcontroller. The 130 nm FRAM-process-based SoC uses a single 1.5 V supply, an 8 MHz system clock, and a 125 MHz clock for NVL operation. The SoC consumes 75 uA/MHz and 170 uA/MHz while running code from SRAM and FRAM, respectively. The energy and time cost of backing up and restoring the entire system state of 2537 FFs is only 4.72 nJ and 320 ns, and 1.34 nJ and 384 ns, respectively, which sets the industry benchmark for this class of device. SoC 2000 provides test capability for each NVL bit, as described in more detail above, and an in-situ read signal margin of 550 mV. -
SoC 2000 has 2537 FFs and latches served by 10 NVL arrays. A central NVL controller controls all the arrays and their communication with FFs, as described in more detail above. The distributed NVL mini-array system architecture helps amortize test feature costs, achieving a SoC area overhead of only 3.6% with exceptionally low system level sleep/wakeup energy cost of 2.2 pJ/0.66 pJ per bit. - Although the invention finds particular application to microcontrollers (MCU) implemented, for example, in a System on a Chip (SoC), it also finds application to other forms of processors. A SoC may contain one or more modules which each include custom designed functional circuits combined with pre-designed functional circuits provided by a design library.
- While the invention has been described with reference to illustrative embodiments, this description is not intended to be construed in a limiting sense. Various other embodiments of the invention will be apparent to persons skilled in the art upon reference to this description. For example, other portable or mobile systems such as remote controls, access badges and fobs, smart credit/debit cards and emulators, smart phones, digital assistants, and any other now known or later developed portable or embedded system may embody NVL arrays as described herein to allow nearly immediate recovery to a full operating state from a completely powered down state.
- While embodiments of retention latches coupled to a nonvolatile FeCap bitcell are described herein, in another embodiment a nonvolatile FeCap bitcell from an NVL array may be coupled to a flip-flop or latch that does not include a low power retention latch. In this case, the system would transition between a full power state (or an otherwise reduced power state based on reduced voltage or clock rate) and a totally off power state, for example. As described above, before turning off the power, the state of the flipflops and latches would be saved in distributed NVL arrays. When power is restored, the flipflops would be initialized via an input provided by the associated NVL array bitcell.
- The techniques described in this disclosure may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the software may be executed in one or more processors, such as a microprocessor, application specific integrated circuit (ASIC), field programmable gate array (FPGA), or digital signal processor (DSP). The software that executes the techniques may be initially stored in a computer-readable medium such as a compact disc (CD), a diskette, a tape, a file, memory, or any other computer readable storage device and then loaded and executed in the processor. In some cases, the software may also be sold in a computer program product, which includes the computer-readable medium and packaging materials for the computer-readable medium. In some cases, the software instructions may be distributed via removable computer readable media (e.g., floppy disk, optical disk, flash memory, USB key), via a transmission path from computer readable media on another digital system, etc.
- Certain terms are used throughout the description and the claims to refer to particular system components. As one skilled in the art will appreciate, components in digital systems may be referred to by different names and/or may be combined in ways not shown herein without departing from the described functionality. This document does not intend to distinguish between components that differ in name but not function. In the following discussion and in the claims, the terms “including” and “comprising” are used in an open-ended fashion, and thus should be interpreted to mean “including, but not limited to . . . .” Also, the term “couple” and derivatives thereof are intended to mean an indirect, direct, optical, and/or wireless electrical connection. Thus, if a first device couples to a second device, that connection may be through a direct electrical connection, through an indirect electrical connection via other devices and connections, through an optical electrical connection, and/or through a wireless electrical connection.
- Although method steps may be presented and described herein in a sequential fashion, one or more of the steps shown and described may be omitted, repeated, performed concurrently, and/or performed in a different order than the order shown in the figures and/or described herein. Accordingly, embodiments of the invention should not be considered limited to the specific ordering of steps shown in the figures and/or described herein.
- It is therefore contemplated that the appended claims will cover any such modifications of the embodiments as fall within the true scope of the invention.
Claims (15)
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/770,583 US20140075091A1 (en) | 2012-09-10 | 2013-02-19 | Processing Device With Restricted Power Domain Wakeup Restore From Nonvolatile Logic Array |
CN201380046962.4A CN104620217B (en) | 2012-09-10 | 2013-09-10 | The equipment with limited power domain restored is waken up from non-volatile logic array |
PCT/US2013/059006 WO2014040051A1 (en) | 2012-09-10 | 2013-09-10 | Processing device with restricted power domain wakeup restore from nonvolatile logic array |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201261698906P | 2012-09-10 | 2012-09-10 | |
US13/770,583 US20140075091A1 (en) | 2012-09-10 | 2013-02-19 | Processing Device With Restricted Power Domain Wakeup Restore From Nonvolatile Logic Array |
Publications (1)
Publication Number | Publication Date |
---|---|
US20140075091A1 true US20140075091A1 (en) | 2014-03-13 |
Family
ID=50234569
Family Applications (18)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/770,004 Active 2034-04-24 US9899066B2 (en) | 2012-09-10 | 2013-02-19 | Priority based backup in nonvolatile logic arrays |
US13/769,963 Active 2034-08-25 US9715911B2 (en) | 2012-09-10 | 2013-02-19 | Nonvolatile backup of a machine state when a power supply drops below a threshhold |
US13/770,280 Active 2034-10-24 US9830964B2 (en) | 2012-09-10 | 2013-02-19 | Non-volatile array wakeup and backup sequencing control |
US13/770,583 Abandoned US20140075091A1 (en) | 2012-09-10 | 2013-02-19 | Processing Device With Restricted Power Domain Wakeup Restore From Nonvolatile Logic Array |
US13/770,304 Active 2035-12-07 US10102889B2 (en) | 2012-09-10 | 2013-02-19 | Processing device with nonvolatile logic array backup |
US13/770,399 Active 2035-09-30 US9711196B2 (en) | 2012-09-10 | 2013-02-19 | Configuration bit sequencing control of nonvolatile domain and array wakeup and backup |
US13/770,041 Abandoned US20140075174A1 (en) | 2012-09-10 | 2013-02-19 | Boot State Restore from Nonvolatile Bitcell Array |
US13/770,516 Abandoned US20140075175A1 (en) | 2012-09-10 | 2013-02-19 | Control of Dedicated Non-Volatile Arrays for Specific Function Availability |
US13/770,368 Active 2033-11-22 US9058126B2 (en) | 2012-09-10 | 2013-02-19 | Nonvolatile logic array with retention flip flops to reduce switching power during wakeup |
US13/770,498 Active 2033-12-16 US9342259B2 (en) | 2012-09-10 | 2013-02-19 | Nonvolatile logic array and power domain segmentation in processing device |
US13/770,448 Active 2034-01-05 US9335954B2 (en) | 2012-09-10 | 2013-02-19 | Customizable backup and restore from nonvolatile logic array |
US15/089,607 Active US11244710B2 (en) | 2012-09-10 | 2016-04-04 | Customizable backup and restore from nonvolatile logic array |
US15/623,441 Active US10902895B2 (en) | 2012-09-10 | 2017-06-15 | Configuration bit sequencing control of nonvolatile domain and array wakeup and backup |
US15/659,111 Active US10541012B2 (en) | 2012-09-10 | 2017-07-25 | Nonvolatile logic array based computing over inconsistent power supply |
US15/899,302 Active US10796738B2 (en) | 2012-09-10 | 2018-02-19 | Priority based backup in nonvolatile logic arrays |
US16/159,433 Active US10468079B2 (en) | 2012-09-10 | 2018-10-12 | Processing device with nonvolatile logic array backup |
US16/674,525 Active US10930328B2 (en) | 2012-09-10 | 2019-11-05 | Processing device with nonvolatile logic array backup |
US17/558,847 Active 2033-11-03 US12087395B2 (en) | 2012-09-10 | 2021-12-22 | Customizable backup and restore from nonvolatile logic array |
Family Applications Before (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/770,004 Active 2034-04-24 US9899066B2 (en) | 2012-09-10 | 2013-02-19 | Priority based backup in nonvolatile logic arrays |
US13/769,963 Active 2034-08-25 US9715911B2 (en) | 2012-09-10 | 2013-02-19 | Nonvolatile backup of a machine state when a power supply drops below a threshhold |
US13/770,280 Active 2034-10-24 US9830964B2 (en) | 2012-09-10 | 2013-02-19 | Non-volatile array wakeup and backup sequencing control |
Family Applications After (14)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/770,304 Active 2035-12-07 US10102889B2 (en) | 2012-09-10 | 2013-02-19 | Processing device with nonvolatile logic array backup |
US13/770,399 Active 2035-09-30 US9711196B2 (en) | 2012-09-10 | 2013-02-19 | Configuration bit sequencing control of nonvolatile domain and array wakeup and backup |
US13/770,041 Abandoned US20140075174A1 (en) | 2012-09-10 | 2013-02-19 | Boot State Restore from Nonvolatile Bitcell Array |
US13/770,516 Abandoned US20140075175A1 (en) | 2012-09-10 | 2013-02-19 | Control of Dedicated Non-Volatile Arrays for Specific Function Availability |
US13/770,368 Active 2033-11-22 US9058126B2 (en) | 2012-09-10 | 2013-02-19 | Nonvolatile logic array with retention flip flops to reduce switching power during wakeup |
US13/770,498 Active 2033-12-16 US9342259B2 (en) | 2012-09-10 | 2013-02-19 | Nonvolatile logic array and power domain segmentation in processing device |
US13/770,448 Active 2034-01-05 US9335954B2 (en) | 2012-09-10 | 2013-02-19 | Customizable backup and restore from nonvolatile logic array |
US15/089,607 Active US11244710B2 (en) | 2012-09-10 | 2016-04-04 | Customizable backup and restore from nonvolatile logic array |
US15/623,441 Active US10902895B2 (en) | 2012-09-10 | 2017-06-15 | Configuration bit sequencing control of nonvolatile domain and array wakeup and backup |
US15/659,111 Active US10541012B2 (en) | 2012-09-10 | 2017-07-25 | Nonvolatile logic array based computing over inconsistent power supply |
US15/899,302 Active US10796738B2 (en) | 2012-09-10 | 2018-02-19 | Priority based backup in nonvolatile logic arrays |
US16/159,433 Active US10468079B2 (en) | 2012-09-10 | 2018-10-12 | Processing device with nonvolatile logic array backup |
US16/674,525 Active US10930328B2 (en) | 2012-09-10 | 2019-11-05 | Processing device with nonvolatile logic array backup |
US17/558,847 Active 2033-11-03 US12087395B2 (en) | 2012-09-10 | 2021-12-22 | Customizable backup and restore from nonvolatile logic array |
Country Status (4)
Country | Link |
---|---|
US (18) | US9899066B2 (en) |
JP (4) | JP2015534675A (en) |
CN (12) | CN104603715B (en) |
WO (9) | WO2014040051A1 (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140025978A1 (en) * | 2012-07-20 | 2014-01-23 | Semiconductor Energy Laboratory Co., Ltd. | Power supply control circuit and signal processing circuit |
US9454437B2 (en) | 2013-09-24 | 2016-09-27 | Texas Instruments Incorporated | Non-volatile logic based processing device |
WO2016140770A3 (en) * | 2015-03-04 | 2016-11-03 | Qualcomm Incorporated | Systems and methods for implementing power collapse in a memory |
CN106230839A (en) * | 2016-08-03 | 2016-12-14 | 青岛海信宽带多媒体技术有限公司 | The acceptance control method of Real Time Streaming and device |
US20170300101A1 (en) * | 2016-04-14 | 2017-10-19 | Advanced Micro Devices, Inc. | Redirecting messages from idle compute units of a processor |
US10545728B2 (en) | 2017-07-27 | 2020-01-28 | Texas Instruments Incorporated | Non-volatile counter system, counter circuit and power management circuit with isolated dynamic boosted supply |
CN112650384A (en) * | 2021-01-05 | 2021-04-13 | 大唐微电子技术有限公司 | Low-power-consumption dormancy awakening control circuit and control circuit of multiple power domains |
CN113760071A (en) * | 2020-06-02 | 2021-12-07 | 晶豪科技股份有限公司 | Method, controller and system for running memory system in advance during power-on period |
US20230195321A1 (en) * | 2021-12-17 | 2023-06-22 | Samsung Electronics Co., Ltd. | Storage device and operating method thereof |
Families Citing this family (127)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9424127B2 (en) * | 2013-02-01 | 2016-08-23 | Broadcom Corporation | Charger detection and optimization prior to host control |
KR20140102070A (en) * | 2013-02-13 | 2014-08-21 | Samsung Electronics Co., Ltd. | Method and apparatus for fast booting of user device |
US8953365B2 (en) * | 2013-06-07 | 2015-02-10 | International Business Machines Corporation | Capacitor backup for SRAM |
US8908463B1 (en) * | 2013-07-29 | 2014-12-09 | Kabushiki Kaisha Toshiba | Nonvolatile semiconductor memory device and control method thereof |
US9100002B2 (en) * | 2013-09-12 | 2015-08-04 | Micron Technology, Inc. | Apparatus and methods for leakage current reduction in integrated circuits |
CN106258006A (en) * | 2014-04-29 | 2016-12-28 | Hewlett-Packard Development Company, L.P. | System recovery using status information |
US9286056B2 (en) | 2014-05-19 | 2016-03-15 | International Business Machines Corporation | Reducing storage facility code load suspend rate by redundancy check |
US9395797B2 (en) * | 2014-07-02 | 2016-07-19 | Freescale Semiconductor, Inc. | Microcontroller with multiple power modes |
US10847242B2 (en) * | 2014-07-23 | 2020-11-24 | Texas Instruments Incorporated | Computing register with non-volatile-logic data storage |
US9753086B2 (en) * | 2014-10-02 | 2017-09-05 | Samsung Electronics Co., Ltd. | Scan flip-flop and scan test circuit including the same |
US10275003B2 (en) * | 2014-10-27 | 2019-04-30 | Hewlett Packard Enterprise Development Lp | Backup power communication |
WO2016069003A1 (en) * | 2014-10-31 | 2016-05-06 | Hewlett Packard Enterprise Development Lp | Backup power supply cell in memory device |
TWI533319B (en) * | 2014-11-20 | 2016-05-11 | 財團法人工業技術研究院 | Non-volatile memory device and control method thereof |
JP6582435B2 (en) * | 2015-02-24 | 2019-10-02 | Seiko Epson Corporation | Integrated circuit device and electronic apparatus |
US10037071B2 (en) | 2015-02-25 | 2018-07-31 | Texas Instruments Incorporated | Compute through power loss approach for processing device having nonvolatile logic memory |
US9986569B2 (en) | 2015-03-18 | 2018-05-29 | Microsoft Technology Licensing, Llc | Battery-backed RAM for wearable devices |
US9830093B2 (en) * | 2015-03-27 | 2017-11-28 | Intel Corporation | Method and apparatus for improving immunity to defects in a non-volatile memory |
US10048893B2 (en) * | 2015-05-07 | 2018-08-14 | Apple Inc. | Clock/power-domain crossing circuit with asynchronous FIFO and independent transmitter and receiver sides |
US9859358B2 (en) * | 2015-05-26 | 2018-01-02 | Altera Corporation | On-die capacitor (ODC) structure |
TWI522794B (en) * | 2015-06-10 | 2016-02-21 | 國立成功大學 | Energy-efficient nonvolatile microprocessor |
US10120815B2 (en) * | 2015-06-18 | 2018-11-06 | Microchip Technology Incorporated | Configurable mailbox data buffer apparatus |
US9785362B2 (en) * | 2015-07-16 | 2017-10-10 | Qualcomm Incorporated | Method and apparatus for managing corruption of flash memory contents |
WO2017012072A1 (en) * | 2015-07-21 | 2017-01-26 | Capital Microelectronics (Beijing) Technology Co., Ltd. | Circuit and method for power-on initialization of FPGA configuration memory |
US9449655B1 (en) * | 2015-08-31 | 2016-09-20 | Cypress Semiconductor Corporation | Low standby power with fast turn on for non-volatile memory devices |
US10581410B2 (en) | 2015-09-10 | 2020-03-03 | Samsung Electronics Co., Ltd | High speed domino-based flip flop |
WO2017048294A1 (en) * | 2015-09-18 | 2017-03-23 | Hewlett Packard Enterprise Development Lp | Memory persistence from a volatile memory to a non-volatile memory |
US11016770B2 (en) | 2015-09-19 | 2021-05-25 | Microsoft Technology Licensing, Llc | Distinct system registers for logical processors |
US11126433B2 (en) * | 2015-09-19 | 2021-09-21 | Microsoft Technology Licensing, Llc | Block-based processor core composition register |
US9673787B2 (en) * | 2015-09-22 | 2017-06-06 | Qualcomm Incorporated | Power multiplexing with flip-flops |
US9564897B1 (en) | 2015-10-06 | 2017-02-07 | Samsung Electronics Co., Ltd | Apparatus for low power high speed integrated clock gating cell |
US9933954B2 (en) * | 2015-10-19 | 2018-04-03 | Nxp Usa, Inc. | Partitioned memory having pipeline writes |
US10452594B2 (en) | 2015-10-20 | 2019-10-22 | Texas Instruments Incorporated | Nonvolatile logic memory for computing module reconfiguration |
US10515016B2 (en) * | 2015-12-03 | 2019-12-24 | Hitachi, Ltd. | Method and apparatus for caching in software-defined storage systems |
US10007519B2 (en) * | 2015-12-22 | 2018-06-26 | Intel IP Corporation | Instructions and logic for vector bit field compression and expansion |
US10331203B2 (en) | 2015-12-29 | 2019-06-25 | Texas Instruments Incorporated | Compute through power loss hardware approach for processing device having nonvolatile logic memory |
US9964986B2 (en) * | 2015-12-29 | 2018-05-08 | Silicon Laboratories Inc. | Apparatus for power regulator with multiple inputs and associated methods |
US9836071B2 (en) * | 2015-12-29 | 2017-12-05 | Silicon Laboratories Inc. | Apparatus for multiple-input power architecture for electronic circuitry and associated methods |
CN106936422B (en) * | 2015-12-30 | 2022-12-30 | GalaxyCore (Shanghai) Co., Ltd. | Level conversion circuit |
US10591902B2 (en) | 2016-01-03 | 2020-03-17 | Purdue Research Foundation | Microcontroller energy management system |
US10254967B2 (en) | 2016-01-13 | 2019-04-09 | Sandisk Technologies Llc | Data path control for non-volatile memory |
US10608615B2 (en) * | 2016-01-28 | 2020-03-31 | Samsung Electronics Co., Ltd. | Semiconductor device including retention reset flip-flop |
KR102378150B1 (en) * | 2016-01-28 | 2022-03-24 | 삼성전자주식회사 | Semiconductor device comprising low power retention flip-flop |
US10404240B2 (en) | 2016-01-28 | 2019-09-03 | Samsung Electronics Co., Ltd. | Semiconductor device comprising low power retention flip-flop |
US9824729B2 (en) | 2016-03-25 | 2017-11-21 | Taiwan Semiconductor Manufacturing Company, Ltd. | Memory macro and method of operating the same |
US9766827B1 (en) * | 2016-05-10 | 2017-09-19 | Intel Corporation | Apparatus for data retention and supply noise mitigation using clamps |
CN106407048B (en) * | 2016-05-25 | 2019-04-05 | 清华大学 | Input/output communication interface, the data backup and resume method based on the interface |
US10539617B2 (en) * | 2016-06-02 | 2020-01-21 | Taiwan Semiconductor Manufacturing Co., Ltd. | Scan architecture for interconnect testing in 3D integrated circuits |
CN106406767A (en) * | 2016-09-26 | 2017-02-15 | Shanghai Xinchu Integrated Circuit Co., Ltd. | Nonvolatile dual in-line memory and storage method |
KR102506838B1 (en) * | 2016-09-30 | 2023-03-08 | SK Hynix Inc. | Semiconductor device and operating method thereof |
US10528255B2 (en) | 2016-11-11 | 2020-01-07 | Sandisk Technologies Llc | Interface for non-volatile memory |
US10528267B2 (en) | 2016-11-11 | 2020-01-07 | Sandisk Technologies Llc | Command queue for storage operations |
US10528286B2 (en) | 2016-11-11 | 2020-01-07 | Sandisk Technologies Llc | Interface for non-volatile memory |
US10114589B2 (en) * | 2016-11-16 | 2018-10-30 | Sandisk Technologies Llc | Command control for multi-core non-volatile memory |
KR20180092430A (en) * | 2017-02-09 | 2018-08-20 | SK Hynix Inc. | Data storage device and operating method thereof |
CN106991022B (en) * | 2017-03-07 | 2020-12-18 | Ramaxel Technology (Shenzhen) Co., Ltd. | Chip analysis method based on scan chain |
US9947419B1 (en) | 2017-03-28 | 2018-04-17 | Qualcomm Incorporated | Apparatus and method for implementing design for testability (DFT) for bitline drivers of memory circuits |
US10298235B2 (en) * | 2017-04-02 | 2019-05-21 | Samsung Electronics Co., Ltd. | Low power integrated clock gating cell using controlled inverted clock |
US10430302B2 (en) * | 2017-04-12 | 2019-10-01 | Qualcomm Incorporated | Data retention with data migration |
US10419004B2 (en) * | 2017-04-21 | 2019-09-17 | Winbond Electronics Corporation | NVFF monotonic counter and method of implementing same |
US10224072B2 (en) * | 2017-05-26 | 2019-03-05 | Micron Technology, Inc. | Error detection code hold pattern synchronization |
US10153020B1 (en) * | 2017-06-09 | 2018-12-11 | Micron Technology, Inc. | Dual mode ferroelectric memory cell operation |
US10845866B2 (en) * | 2017-06-22 | 2020-11-24 | Micron Technology, Inc. | Non-volatile memory system or sub-system |
US10083973B1 (en) * | 2017-08-09 | 2018-09-25 | Micron Technology, Inc. | Apparatuses and methods for reading memory cells |
US10388335B2 (en) | 2017-08-14 | 2019-08-20 | Micron Technology, Inc. | Sense amplifier schemes for accessing memory cells |
CN107608824B (en) * | 2017-09-01 | 2020-07-31 | Institute of Computing Technology, Chinese Academy of Sciences | Nonvolatile computing device and working method thereof |
KR102244921B1 (en) | 2017-09-07 | 2021-04-27 | Samsung Electronics Co., Ltd. | Storage device and method of refreshing the same |
WO2019129389A1 (en) * | 2017-12-26 | 2019-07-04 | Silicon Mobility Sas | Flexible logic unit adapted for real-time task switching |
US10981576B2 (en) | 2017-12-27 | 2021-04-20 | Micron Technology, Inc. | Determination of reliability of vehicle control commands via memory test |
KR102427638B1 (en) * | 2018-01-10 | 2022-08-01 | Samsung Electronics Co., Ltd. | Non-volatile memory device and read method thereof |
KR102518370B1 (en) * | 2018-01-19 | 2023-04-05 | Samsung Electronics Co., Ltd. | Storage device and debugging system thereof |
US10217496B1 (en) * | 2018-02-28 | 2019-02-26 | Arm Limited | Bitline write assist circuitry |
KR102469098B1 (en) * | 2018-03-21 | 2022-11-23 | SK Hynix Inc. | Nonvolatile memory device, operating method thereof and data storage apparatus including the same |
US10290340B1 (en) | 2018-03-29 | 2019-05-14 | Qualcomm Technologies, Incorporated | Offset-canceling (OC) write operation sensing circuits for sensing switching in a magneto-resistive random access memory (MRAM) bit cell in an MRAM for a write operation |
JP7282749B2 (en) * | 2018-04-19 | 2023-05-29 | Sony Semiconductor Solutions Corporation | Non-volatile memory circuit |
US10638584B2 (en) * | 2018-04-24 | 2020-04-28 | Current Lighting Solutions, Llc | System and method for communicating between non-networked monitoring device and networked lighting controllers |
US10621387B2 (en) | 2018-05-30 | 2020-04-14 | Seagate Technology Llc | On-die decoupling capacitor area optimization |
JP2019215941A (en) * | 2018-06-11 | 2019-12-19 | Foundation for the Promotion of Industrial Science | Non-volatile SRAM with ferroelectric capacitor |
US10979034B1 (en) * | 2018-06-19 | 2021-04-13 | Xilinx, Inc. | Method and apparatus for multi-voltage domain sequential elements |
CN108962311B (en) * | 2018-07-06 | 2020-12-11 | Gushan Electronic Technology (Shanghai) Co., Ltd. | SRAM control circuit and method for sequentially entering and exiting low-power state |
US11314596B2 (en) | 2018-07-20 | 2022-04-26 | Winbond Electronics Corp. | Electronic apparatus and operative method |
CN109144232B (en) * | 2018-08-01 | 2020-12-01 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Process handling method and device, electronic device, and computer-readable storage medium |
TWI703433B (en) * | 2018-08-27 | 2020-09-01 | Winbond Electronics Corp. | Electronic apparatus and operative method thereof |
CN109188246B (en) * | 2018-09-06 | 2020-09-08 | Changsha University of Science and Technology | Design-for-testability structure for a secure encryption chip |
KR102546652B1 (en) | 2018-09-07 | 2023-06-22 | Samsung Electronics Co., Ltd. | Semiconductor memory device, and memory system having the same |
CN111061358B (en) * | 2018-10-15 | 2021-05-25 | Gree Electric Appliances, Inc. of Zhuhai | Clock-free chip wake-up circuit, wake-up method and chip |
US11106539B2 (en) * | 2018-10-25 | 2021-08-31 | EMC IP Holding Company LLC | Rule book based retention management engine |
US11507175B2 (en) * | 2018-11-02 | 2022-11-22 | Micron Technology, Inc. | Data link between volatile memory and non-volatile memory |
CN109245756B (en) * | 2018-11-07 | 2023-10-03 | Shenzhen Xunda Microelectronics Technology Co., Ltd. | Method for reducing power domain switching noise and chip output interface circuit |
CN111381654B (en) * | 2018-12-29 | 2022-01-11 | Chengdu Haiguang Integrated Circuit Design Co., Ltd. | Load detection circuit, SOC system, and method for configuring load detection circuit |
US10925154B2 (en) * | 2019-01-31 | 2021-02-16 | Texas Instruments Incorporated | Tamper detection |
CA3126754C (en) | 2019-02-06 | 2023-09-05 | Hewlett-Packard Development Company, L.P. | Integrated circuits including customization bits |
US20200285780A1 (en) * | 2019-03-06 | 2020-09-10 | Nvidia Corp. | Cross domain voltage glitch detection circuit for enhancing chip security |
CN110018929B (en) * | 2019-04-11 | 2020-11-10 | Suzhou Inspur Intelligent Technology Co., Ltd. | Data backup method, device, equipment and storage medium |
US11671111B2 (en) * | 2019-04-17 | 2023-06-06 | Samsung Electronics Co., Ltd. | Hardware channel-parallel data compression/decompression |
US10637462B1 (en) | 2019-05-30 | 2020-04-28 | Xilinx, Inc. | System and method for SoC power-up sequencing |
US10992292B2 (en) | 2019-06-13 | 2021-04-27 | Arris Enterprises Llc | Electronic persistent switch |
CN110189704B (en) * | 2019-06-28 | 2021-10-15 | Shanghai Tianma AM-OLED Co., Ltd. | Electroluminescent display panel, driving method thereof and display device |
US10964356B2 (en) * | 2019-07-03 | 2021-03-30 | Qualcomm Incorporated | Compute-in-memory bit cell |
JP7214602B2 (en) | 2019-09-24 | 2023-01-30 | Toshiba Corporation | Semiconductor device and control method of semiconductor device |
US11461020B2 (en) | 2019-10-09 | 2022-10-04 | Micron Technology, Inc. | Memory device equipped with data protection scheme |
US11520658B2 (en) * | 2019-10-31 | 2022-12-06 | Arm Limited | Non-volatile memory on chip |
CN111049513B (en) * | 2019-11-29 | 2023-08-08 | Beijing Times Minxin Technology Co., Ltd. | Rail-to-rail bus holding circuit with cold backup function |
CN112947738A (en) * | 2019-12-10 | 2021-06-11 | Zhuhai Allwinner Technology Co., Ltd. | Intelligent terminal power supply system and intelligent terminal standby and wake-up method |
US11726543B2 (en) * | 2019-12-13 | 2023-08-15 | Stmicroelectronics S.R.L. | Computing system power management device, system and method |
US11488879B2 (en) * | 2019-12-30 | 2022-11-01 | Micron Technology, Inc. | Methods and apparatuses to wafer-level test adjacent semiconductor die |
US11088678B1 (en) | 2020-02-11 | 2021-08-10 | Xilinx, Inc. | Pulsed flip-flop capable of being implemented across multiple voltage domains |
TWI767212B (en) * | 2020-04-16 | 2022-06-11 | Elite Semiconductor Memory Technology Inc. | Method for facilitating a memory system operable in advance during power-up, memory controller therefor, and memory system capable of being operable in advance during power-up |
US11366162B2 (en) | 2020-04-16 | 2022-06-21 | Mediatek Inc. | Scan output flip-flop with power saving feature |
CN111580475A (en) * | 2020-04-29 | 2020-08-25 | Suzhou Oulitong Automation Technology Co., Ltd. | Multifunctional industrial control method based on OLT-MFIC01 controller |
US11018687B1 (en) * | 2020-05-13 | 2021-05-25 | Qualcomm Incorporated | Power-efficient compute-in-memory analog-to-digital converters |
US11803226B2 (en) * | 2020-05-14 | 2023-10-31 | Stmicroelectronics S.R.L. | Methods and devices to conserve microcontroller power |
CN111431536B (en) * | 2020-05-18 | 2023-05-02 | Shenzhen Jiutian Ruixin Technology Co., Ltd. | Subunit, MAC array, and bit-width-reconfigurable analog-digital hybrid compute-in-memory module |
US11416057B2 (en) * | 2020-07-27 | 2022-08-16 | EMC IP Holding Company LLC | Power disruption data protection |
CN112162898A (en) * | 2020-09-07 | 2021-01-01 | Shenzhen MicroBT Electronics Technology Co., Ltd. | System and method for acquiring state information from a hash-rate chip array, and virtual currency mining machine |
US11626156B2 (en) * | 2020-12-02 | 2023-04-11 | Qualcomm Incorporated | Compute-in-memory (CIM) bit cell circuits each disposed in an orientation of a cim bit cell circuit layout including a read word line (RWL) circuit in a cim bit cell array circuit |
US11442106B2 (en) | 2020-12-14 | 2022-09-13 | Western Digital Technologies, Inc. | Method and apparatus for debugging integrated circuit systems using scan chain |
US20220198022A1 (en) * | 2020-12-23 | 2022-06-23 | Intel Corporation | Secure device power-up apparatus and method |
US11631455B2 (en) | 2021-01-19 | 2023-04-18 | Qualcomm Incorporated | Compute-in-memory bitcell with capacitively-coupled write operation |
CN112965010B (en) * | 2021-02-07 | 2023-04-07 | 潍柴动力股份有限公司 | Fault detection method and device of electronic actuator, electronic control equipment and medium |
CN113359935B (en) * | 2021-06-10 | 2022-09-09 | Hygon Information Technology Co., Ltd. | Voltage regulation method and device of SOC power domain and storage medium |
CN113254289B (en) * | 2021-06-11 | 2021-10-15 | Wuhan Zhuomu Technology Co., Ltd. | Single machine testing method, device and system based on NVMe disk array |
US11996144B2 (en) * | 2021-06-15 | 2024-05-28 | Seagate Technology Llc | Non-volatile memory cell with multiple ferroelectric memory elements (FMEs) |
US20220413590A1 (en) * | 2021-06-23 | 2022-12-29 | Maxim Integrated Products, Inc. | Systems and methods for reducing power consumption in compute circuits |
CN113409165B (en) * | 2021-08-19 | 2021-12-07 | Sichuan Energy Internet Research Institute, Tsinghua University | Power data integration method and device, electronic equipment and readable storage medium |
CN113704025A (en) * | 2021-09-02 | 2021-11-26 | Xi'an UniIC Semiconductors Co., Ltd. | Nonvolatile programmable chip and memory device |
US11854587B2 (en) | 2021-12-03 | 2023-12-26 | Taiwan Semiconductor Manufacturing Company, Ltd. | Low power wake up for memory |
TWI803119B (en) * | 2021-12-29 | 2023-05-21 | Nuvoton Technology Corp. | Data retention circuit and method |
Family Cites Families (135)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS607854B2 (en) * | 1977-10-28 | 1985-02-27 | Toshiba Corporation | Monostable multivibrator circuit |
US5317752A (en) * | 1989-12-22 | 1994-05-31 | Tandem Computers Incorporated | Fault-tolerant computer system with auto-restart after power-fall |
JP3430231B2 (en) * | 1994-09-21 | 2003-07-28 | Fujitsu Limited | Logic cell and semiconductor integrated circuit using the same |
JPH0897685A (en) * | 1994-09-22 | 1996-04-12 | Fujitsu Limited | Flip-flop circuit |
US5847577A (en) * | 1995-02-24 | 1998-12-08 | Xilinx, Inc. | DRAM memory cell for programmable logic devices |
US5627784A (en) | 1995-07-28 | 1997-05-06 | Micron Quantum Devices, Inc. | Memory system having non-volatile data storage structure for memory control parameters and method |
US6336161B1 (en) * | 1995-12-15 | 2002-01-01 | Texas Instruments Incorporated | Computer configuration system and method with state and restoration from non-volatile semiconductor memory |
US5773993A (en) * | 1996-09-26 | 1998-06-30 | Xilinx, Inc. | Configurable electronic device which is compatible with a configuration bitstream of a prior generation configurable electronic device |
EP0935789A1 (en) * | 1996-11-04 | 1999-08-18 | 3-Dimensional Pharmaceuticals, Inc. | System, method, and computer program product for the visualization and interactive processing and analysis of chemical data |
US6418506B1 (en) | 1996-12-31 | 2002-07-09 | Intel Corporation | Integrated circuit memory and method for transferring data using a volatile memory to buffer data for a nonvolatile memory array |
KR100281535B1 (en) * | 1997-02-12 | 2001-02-15 | Yun Jong-yong | Computer system and its control method |
US6127843A (en) * | 1997-12-22 | 2000-10-03 | Vantis Corporation | Dual port SRAM memory for run time use in FPGA integrated circuits |
US6226556B1 (en) * | 1998-07-09 | 2001-05-01 | Motorola Inc. | Apparatus with failure recovery and method therefore |
US6137711A (en) * | 1999-06-17 | 2000-10-24 | Agilent Technologies Inc. | Ferroelectric random access memory device including shared bit lines and fragmented plate lines |
US6542000B1 (en) * | 1999-07-30 | 2003-04-01 | Iowa State University Research Foundation, Inc. | Nonvolatile programmable logic devices |
JP2001188689A (en) * | 2000-01-04 | 2001-07-10 | Mitsubishi Electric Corp | Data processor |
EP1115204B1 (en) * | 2000-01-07 | 2009-04-22 | Nippon Telegraph and Telephone Corporation | Function reconfigurable semiconductor device and integrated circuit configuring the semiconductor device |
US6922846B2 (en) * | 2001-04-09 | 2005-07-26 | Sony Corporation | Memory utilization for set top box |
US6851065B2 (en) * | 2001-09-10 | 2005-02-01 | Dell Products L.P. | System and method for executing resume tasks during a suspend routine |
US7046687B1 (en) * | 2002-01-16 | 2006-05-16 | Tau Networks | Configurable virtual output queues in a scalable switching system |
EP1331736A1 (en) * | 2002-01-29 | 2003-07-30 | Texas Instruments France | Flip-flop with reduced leakage current |
EP1351146A1 (en) * | 2002-04-04 | 2003-10-08 | Hewlett-Packard Company | Power management system and method with recovery after power failure |
DE10219652B4 (en) * | 2002-05-02 | 2007-01-11 | Infineon Technologies Ag | Memory circuit and method for operating a memory circuit |
EP1363132B1 (en) * | 2002-05-13 | 2007-09-05 | STMicroelectronics Pvt. Ltd | A method and device for testing of configuration memory cells in programmable logic devices (PLDS) |
JP3986393B2 (en) * | 2002-08-27 | 2007-10-03 | 富士通株式会社 | Integrated circuit device having nonvolatile data storage circuit |
US6901298B1 (en) * | 2002-09-30 | 2005-05-31 | Rockwell Automation Technologies, Inc. | Saving and restoring controller state and context in an open operating system |
JP3910902B2 (en) * | 2002-10-02 | 2007-04-25 | Matsushita Electric Industrial Co., Ltd. | Integrated circuit device |
JP2004133969A (en) | 2002-10-08 | 2004-04-30 | Renesas Technology Corp | Semiconductor device |
US7031192B1 (en) * | 2002-11-08 | 2006-04-18 | Halo Lsi, Inc. | Non-volatile semiconductor memory and driving method |
JP3756882B2 (en) * | 2003-02-20 | 2006-03-15 | Toshiba Corporation | Information processing apparatus and information processing method |
JP4250143B2 (en) * | 2003-02-27 | 2009-04-08 | Fujitsu Microelectronics Limited | Semiconductor memory device |
US7069522B1 (en) | 2003-06-02 | 2006-06-27 | Virage Logic Corporation | Various methods and apparatuses to preserve a logic state for a volatile latch circuit |
JP2006526831A (en) * | 2003-06-03 | 2006-11-24 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | Boot from non-volatile memory |
US7079148B2 (en) * | 2003-07-23 | 2006-07-18 | Hewlett-Packard Development Company, L.P. | Non-volatile memory parallel processor |
US7170315B2 (en) * | 2003-07-31 | 2007-01-30 | Actel Corporation | Programmable system on a chip |
US20050093572A1 (en) | 2003-11-03 | 2005-05-05 | Macronix International Co., Ltd. | In-circuit configuration architecture with configuration on initialization function for embedded configurable logic array |
US20050097499A1 (en) * | 2003-11-03 | 2005-05-05 | Macronix International Co., Ltd. | In-circuit configuration architecture with non-volatile configuration store for embedded configurable logic array |
US7227383B2 (en) * | 2004-02-19 | 2007-06-05 | Mosaid Delaware, Inc. | Low leakage and data retention circuitry |
US7183825B2 (en) | 2004-04-06 | 2007-02-27 | Freescale Semiconductor, Inc. | State retention within a data processing system |
US7536506B2 (en) * | 2004-06-21 | 2009-05-19 | Dot Hill Systems Corporation | RAID controller using capacitor energy source to flush volatile cache data to non-volatile memory during main power outage |
US7135886B2 (en) | 2004-09-20 | 2006-11-14 | Klp International, Ltd. | Field programmable gate arrays using both volatile and nonvolatile memory cell properties and their control |
JP2006100991A (en) * | 2004-09-28 | 2006-04-13 | Matsushita Electric Ind Co Ltd | Non-volatile logic circuit and system LSI having the same |
DE602004003583T2 (en) | 2004-10-04 | 2007-11-22 | Research In Motion Ltd., Waterloo | System and method for data backup in case of power failure |
US20060080515A1 (en) * | 2004-10-12 | 2006-04-13 | Lefthand Networks, Inc. | Non-Volatile Memory Backup for Network Storage System |
US7173859B2 (en) | 2004-11-16 | 2007-02-06 | Sandisk Corporation | Faster programming of higher level states in multi-level cell flash memory |
US7242218B2 (en) * | 2004-12-02 | 2007-07-10 | Altera Corporation | Techniques for combining volatile and non-volatile programmable logic on an integrated circuit |
JP4713143B2 (en) * | 2004-12-15 | 2011-06-29 | Fujitsu Semiconductor Limited | Semiconductor memory device |
WO2006074176A2 (en) | 2005-01-05 | 2006-07-13 | The Regents Of The University Of California | Memory architectures including non-volatile memory devices |
US7248090B2 (en) | 2005-01-10 | 2007-07-24 | Qualcomm, Incorporated | Multi-threshold MOS circuits |
US7778675B1 (en) * | 2005-01-14 | 2010-08-17 | American Megatrends, Inc. | Remotely accessing a computing device in a low-power state |
US7251168B1 (en) * | 2005-02-01 | 2007-07-31 | Xilinx, Inc. | Interface for access to non-volatile memory on an integrated circuit |
US7180348B2 (en) * | 2005-03-24 | 2007-02-20 | Arm Limited | Circuit and method for storing data in operational and sleep modes |
US7620773B2 (en) * | 2005-04-15 | 2009-11-17 | Microsoft Corporation | In-line non volatile memory disk read cache and write buffer |
TWI324773B (en) * | 2005-05-09 | 2010-05-11 | Nantero Inc | Non-volatile shadow latch using a nanotube switch |
US7394687B2 (en) | 2005-05-09 | 2008-07-01 | Nantero, Inc. | Non-volatile-shadow latch using a nanotube switch |
US7639056B2 (en) | 2005-05-26 | 2009-12-29 | Texas Instruments Incorporated | Ultra low area overhead retention flip-flop for power-down applications |
JP2006344289A (en) * | 2005-06-08 | 2006-12-21 | Toshiba Corp | Ferroelectric memory device |
US7650549B2 (en) * | 2005-07-01 | 2010-01-19 | Texas Instruments Incorporated | Digital design component with scan clock generation |
US7451348B2 (en) * | 2005-08-04 | 2008-11-11 | Dot Hill Systems Corporation | Dynamic write cache size adjustment in raid controller with capacitor backup energy source |
US7480791B2 (en) | 2005-09-15 | 2009-01-20 | Intel Corporation | Method and apparatus for quick resumption where the system may forego initialization of at least one memory range identified in the resume descriptor |
WO2007034265A1 (en) * | 2005-09-21 | 2007-03-29 | Freescale Semiconductor, Inc. | System and method for storing state information |
US7409537B2 (en) * | 2005-10-06 | 2008-08-05 | Microsoft Corporation | Fast booting an operating system from an off state |
US20070101158A1 (en) * | 2005-10-28 | 2007-05-03 | Elliott Robert C | Security region in a non-volatile memory |
US20070136523A1 (en) * | 2005-12-08 | 2007-06-14 | Bonella Randy M | Advanced dynamic disk memory module special operations |
US8056088B1 (en) * | 2005-12-13 | 2011-11-08 | Nvidia Corporation | Using scan chains for context switching |
JP4915551B2 (en) * | 2006-03-16 | 2012-04-11 | Panasonic Corporation | Time switch |
US20070255889A1 (en) * | 2006-03-22 | 2007-11-01 | Yoav Yogev | Non-volatile memory device and method of operating the device |
US20080028246A1 (en) * | 2006-07-31 | 2008-01-31 | Witham Timothy D | Self-monitoring and self-adjusting power consumption computer control system |
US8019929B2 (en) | 2006-09-13 | 2011-09-13 | Rohm Co., Ltd. | Data processing apparatus and data control circuit for use therein |
US7765394B2 (en) * | 2006-10-31 | 2010-07-27 | Dell Products, Lp | System and method for restoring a master boot record in association with accessing a hidden partition |
KR100843208B1 (en) * | 2006-11-02 | 2008-07-02 | Samsung Electronics Co., Ltd. | Semiconductor chip package and method of testing the same |
US7817470B2 (en) * | 2006-11-27 | 2010-10-19 | Mosaid Technologies Incorporated | Non-volatile memory serial core architecture |
US8146714B2 (en) * | 2006-12-14 | 2012-04-03 | Otis Elevator Company | Elevator system including regenerative drive and rescue operation circuit for normal and power failure conditions |
US7908504B2 (en) * | 2007-03-23 | 2011-03-15 | Michael Feldman | Smart batteryless backup device and method therefor |
DE102007016170A1 (en) | 2007-04-02 | 2008-10-09 | Francotyp-Postalia Gmbh | Security module for a franking machine |
US7560965B2 (en) * | 2007-04-30 | 2009-07-14 | Freescale Semiconductor, Inc. | Scannable flip-flop with non-volatile storage element and method |
US20080307240A1 (en) | 2007-06-08 | 2008-12-11 | Texas Instruments Incorporated | Power management electronic circuits, systems, and methods and processes of manufacture |
WO2008157556A2 (en) * | 2007-06-21 | 2008-12-24 | Board Of Regents, The University Of Texas System | Method for providing fault tolerance to multiple servers |
US7583121B2 (en) * | 2007-08-30 | 2009-09-01 | Freescale Semiconductor, Inc. | Flip-flop having logic state retention during a power down mode and method therefor |
US7853912B2 (en) * | 2007-11-05 | 2010-12-14 | International Business Machines Corporation | Arrangements for developing integrated circuit designs |
US8024588B2 (en) | 2007-11-28 | 2011-09-20 | Mediatek Inc. | Electronic apparatus having signal processing circuit selectively entering power saving mode according to operation status of receiver logic and related method thereof |
US7827445B2 (en) * | 2007-12-19 | 2010-11-02 | International Business Machines Corporation | Fault injection in dynamic random access memory modules for performing built-in self-tests |
US7743191B1 (en) * | 2007-12-20 | 2010-06-22 | Pmc-Sierra, Inc. | On-chip shared memory based device architecture |
JP5224800B2 (en) * | 2007-12-21 | 2013-07-03 | Toshiba Corporation | Information processing apparatus and data recovery method |
US20090172251A1 (en) * | 2007-12-26 | 2009-07-02 | Unity Semiconductor Corporation | Memory Sanitization |
US7834660B2 (en) | 2007-12-30 | 2010-11-16 | Unity Semiconductor Corporation | State machines using resistivity-sensitive memories |
JP5140459B2 (en) * | 2008-02-28 | 2013-02-06 | ROHM Co., Ltd. | Nonvolatile storage gate and operation method thereof, and logic circuit equipped with nonvolatile storage gate and operation method thereof |
US8082384B2 (en) * | 2008-03-26 | 2011-12-20 | Microsoft Corporation | Booting an electronic device using flash memory and a limited function memory controller |
US8325554B2 (en) | 2008-07-10 | 2012-12-04 | Sanmina-Sci Corporation | Battery-less cache memory module with integrated backup |
US7719876B2 (en) | 2008-07-31 | 2010-05-18 | Unity Semiconductor Corporation | Preservation circuit and methods to maintain values representing data in one or more layers of memory |
US8069300B2 (en) | 2008-09-30 | 2011-11-29 | Micron Technology, Inc. | Solid state storage device controller with expansion mode |
US20110197018A1 (en) | 2008-10-06 | 2011-08-11 | Sam Hyuk Noh | Method and system for perpetual computing using non-volatile random access memory |
US8825912B2 (en) | 2008-11-12 | 2014-09-02 | Microchip Technology Incorporated | Dynamic state configuration restore |
US8266365B2 (en) * | 2008-12-17 | 2012-09-11 | Sandisk Il Ltd. | Ruggedized memory device |
JP2012515376A (en) * | 2009-01-12 | 2012-07-05 | Rambus Inc. | Clock transfer low power signaling system |
US7888965B2 (en) * | 2009-01-29 | 2011-02-15 | Texas Instruments Incorporated | Defining a default configuration for configurable circuitry in an integrated circuit |
US7983107B2 (en) * | 2009-02-11 | 2011-07-19 | Stec, Inc. | Flash backed DRAM module with a selectable number of flash chips |
US7990797B2 (en) * | 2009-02-11 | 2011-08-02 | Stec, Inc. | State of health monitored flash backed dram module |
US20100205349A1 (en) * | 2009-02-11 | 2010-08-12 | Stec, Inc. | Segmented-memory flash backed dram module |
WO2010093356A1 (en) | 2009-02-11 | 2010-08-19 | Stec, Inc. | A flash backed dram module |
EP2224344A1 (en) * | 2009-02-27 | 2010-09-01 | Panasonic Corporation | A combined processing and non-volatile memory unit array |
US8489801B2 (en) * | 2009-03-04 | 2013-07-16 | Henry F. Huang | Non-volatile memory with hybrid index tag array |
KR101504632B1 (en) * | 2009-03-25 | 2015-03-20 | 삼성전자주식회사 | Apparatuses and methods for using redundant array of independent disks |
JP5289153B2 (en) | 2009-04-14 | 2013-09-11 | Canon Inc. | Information processing apparatus, control method therefor, and computer program |
US8452734B2 (en) * | 2009-04-30 | 2013-05-28 | Texas Instruments Incorporated | FAT file in reserved cluster with ready entry state |
KR101562973B1 (en) * | 2009-05-22 | 2015-10-26 | Samsung Electronics Co., Ltd. | Memory apparatus and method of operating the same |
GB2472050B (en) * | 2009-07-22 | 2013-06-19 | Wolfson Microelectronics Plc | Power management apparatus and methods |
US8542522B2 (en) * | 2009-07-23 | 2013-09-24 | Hewlett-Packard Development Company, L.P. | Non-volatile data-storage latch |
US8429436B2 (en) | 2009-09-09 | 2013-04-23 | Fusion-Io, Inc. | Apparatus, system, and method for power reduction in a storage device |
US20120271988A1 (en) | 2009-09-23 | 2012-10-25 | Infinite Memory Ltd. | Methods circuits data-structures devices and system for operating a non-volatile memory device |
ATE525688T1 (en) | 2009-09-23 | 2011-10-15 | St Ericsson Sa | POWER SUPPLY POWER UP MECHANISM, APPARATUS AND METHOD FOR CONTROLLING THE ACTIVATION OF POWER SUPPLY CIRCUITS |
JPWO2011043012A1 (en) | 2009-10-05 | 2013-02-28 | Panasonic Corporation | Nonvolatile semiconductor memory device, signal processing system, signal processing system control method, and nonvolatile semiconductor memory device rewrite method |
CN102074998B (en) * | 2009-11-19 | 2013-03-20 | 国基电子(上海)有限公司 | Protection circuit and Ethernet electrical equipment |
KR101729933B1 (en) | 2009-12-18 | 2017-04-25 | Semiconductor Energy Laboratory Co., Ltd. | Non-volatile latch circuit and logic circuit, and semiconductor device using the same |
KR20110094468A (en) * | 2010-02-16 | 2011-08-24 | Samsung Electronics Co., Ltd. | Method for restoring the master boot record of storage medium, storage medium driving device, and storage medium thereof |
US8566561B2 (en) * | 2010-05-14 | 2013-10-22 | Rockwell Automation Technologies, Inc. | Method to separate and persist static and dynamic portions of a control application |
US8578144B2 (en) * | 2010-08-04 | 2013-11-05 | International Business Machines Corporation | Partial hibernation restore for boot time reduction |
WO2012027202A1 (en) * | 2010-08-27 | 2012-03-01 | Raytheon Company | Controller and a method for controlling a boot process |
US8904161B2 (en) * | 2010-10-20 | 2014-12-02 | Samsung Electronics Co., Ltd. | Memory system and reset method thereof to prevent nonvolatile memory corruption due to premature power loss |
JP5549535B2 (en) * | 2010-10-22 | 2014-07-16 | Fujitsu Limited | Information processing apparatus, control method, and control apparatus |
US8381163B2 (en) * | 2010-11-22 | 2013-02-19 | Advanced Micro Devices, Inc. | Power-gated retention flops |
US8527693B2 (en) * | 2010-12-13 | 2013-09-03 | Fusion IO, Inc. | Apparatus, system, and method for auto-commit memory |
US9251005B2 (en) | 2010-12-20 | 2016-02-02 | Avago Technologies General Ip (Singapore) Pte. Ltd. | Power isolation for memory backup |
US8738843B2 (en) * | 2010-12-20 | 2014-05-27 | Lsi Corporation | Data manipulation during memory backup |
KR20120085968A (en) * | 2011-01-25 | 2012-08-02 | Samsung Electronics Co., Ltd. | Method of booting a computing system and computing system performing the same |
US10079068B2 (en) * | 2011-02-23 | 2018-09-18 | Avago Technologies General Ip (Singapore) Pte. Ltd. | Devices and method for wear estimation based memory management |
US8819471B2 (en) * | 2011-06-03 | 2014-08-26 | Apple Inc. | Methods and apparatus for power state based backup |
JP5833347B2 (en) * | 2011-06-08 | 2015-12-16 | ROHM Co., Ltd. | Data processing device |
US8792273B2 (en) * | 2011-06-13 | 2014-07-29 | SMART Storage Systems, Inc. | Data storage system with power cycle management and method of operation thereof |
JP5476363B2 (en) * | 2011-12-19 | 2014-04-23 | Lenovo (Singapore) Pte. Ltd. | Computer startup method using biometric authentication device and computer |
US9251052B2 (en) * | 2012-01-12 | 2016-02-02 | Intelligent Intellectual Property Holdings 2 Llc | Systems and methods for profiling a non-volatile cache having a logical-to-physical translation layer |
WO2013186807A1 (en) * | 2012-06-11 | 2013-12-19 | Hitachi, Ltd. | Disk subsystem and corresponding data restoration method |
WO2014008234A1 (en) * | 2012-07-02 | 2014-01-09 | Microsemi Soc Corp. | On-chip probe circuit for detecting faults in an fpga |
US11205469B2 (en) * | 2019-07-12 | 2021-12-21 | Micron Technology, Inc. | Power domain switches for switching power reduction |
US20230131586A1 (en) * | 2021-10-26 | 2023-04-27 | Dialog Semiconductor US Inc. | Low power standby mode for memory devices |
2013
- 2013-02-19 US US13/770,004 patent/US9899066B2/en active Active
- 2013-02-19 US US13/769,963 patent/US9715911B2/en active Active
- 2013-02-19 US US13/770,280 patent/US9830964B2/en active Active
- 2013-02-19 US US13/770,583 patent/US20140075091A1/en not_active Abandoned
- 2013-02-19 US US13/770,304 patent/US10102889B2/en active Active
- 2013-02-19 US US13/770,399 patent/US9711196B2/en active Active
- 2013-02-19 US US13/770,041 patent/US20140075174A1/en not_active Abandoned
- 2013-02-19 US US13/770,516 patent/US20140075175A1/en not_active Abandoned
- 2013-02-19 US US13/770,368 patent/US9058126B2/en active Active
- 2013-02-19 US US13/770,498 patent/US9342259B2/en active Active
- 2013-02-19 US US13/770,448 patent/US9335954B2/en active Active
- 2013-09-10 WO PCT/US2013/059006 patent/WO2014040051A1/en active Application Filing
- 2013-09-10 WO PCT/US2013/058871 patent/WO2014040011A1/en active Application Filing
- 2013-09-10 JP JP2015531313A patent/JP2015534675A/en active Pending
- 2013-09-10 CN CN201380046963.9A patent/CN104603715B/en active Active
- 2013-09-10 CN CN201380046965.8A patent/CN104620216A/en active Pending
- 2013-09-10 CN CN201380046962.4A patent/CN104620217B/en active Active
- 2013-09-10 WO PCT/US2013/058867 patent/WO2014040009A1/en active Application Filing
- 2013-09-10 CN CN201380046964.3A patent/CN104620232A/en active Pending
- 2013-09-10 CN CN201380046969.6A patent/CN104603759B/en active Active
- 2013-09-10 WO PCT/US2013/058875 patent/WO2014040012A1/en active Application Filing
- 2013-09-10 WO PCT/US2013/059036 patent/WO2014040065A1/en active Application Filing
- 2013-09-10 CN CN201310537573.3A patent/CN103678034B/en active Active
- 2013-09-10 WO PCT/US2013/059030 patent/WO2014040062A1/en active Application Filing
- 2013-09-10 CN CN201380046974.7A patent/CN104620243B/en active Active
- 2013-09-10 CN CN201310532311.8A patent/CN103956185B/en active Active
- 2013-09-10 WO PCT/US2013/058998 patent/WO2014040047A1/en active Application Filing
- 2013-09-10 CN CN201811580481.2A patent/CN109637573B/en active Active
- 2013-09-10 JP JP2015531319A patent/JP6296513B2/en active Active
- 2013-09-10 CN CN201380046961.XA patent/CN104620192B/en active Active
- 2013-09-10 JP JP2015531303A patent/JP6322632B2/en active Active
- 2013-09-10 WO PCT/US2013/058990 patent/WO2014040043A1/en active Application Filing
- 2013-09-10 CN CN201380046972.8A patent/CN104620194B/en active Active
- 2013-09-10 CN CN201380046971.3A patent/CN104620193A/en active Pending
- 2013-09-10 JP JP2015531301A patent/JP6336985B2/en active Active
- 2013-09-10 WO PCT/US2013/059020 patent/WO2014040058A1/en active Application Filing
2016
- 2016-04-04 US US15/089,607 patent/US11244710B2/en active Active
2017
- 2017-06-15 US US15/623,441 patent/US10902895B2/en active Active
- 2017-07-25 US US15/659,111 patent/US10541012B2/en active Active
2018
- 2018-02-19 US US15/899,302 patent/US10796738B2/en active Active
- 2018-10-12 US US16/159,433 patent/US10468079B2/en active Active
2019
- 2019-11-05 US US16/674,525 patent/US10930328B2/en active Active
2021
- 2021-12-22 US US17/558,847 patent/US12087395B2/en active Active
Patent Citations (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6185660B1 (en) * | 1997-09-23 | 2001-02-06 | Hewlett-Packard Company | Pending access queue for providing data to a target register during an intermediate pipeline phase after a computer cache miss |
US7398286B1 (en) * | 1998-03-31 | 2008-07-08 | Emc Corporation | Method and system for assisting in backups and restore operation over different channels |
US6513097B1 (en) * | 1999-03-03 | 2003-01-28 | International Business Machines Corporation | Method and system for maintaining information about modified data in cache in a storage system for use during a system failure |
US20040039880A1 (en) * | 2002-08-23 | 2004-02-26 | Vladimir Pentkovski | Method and apparatus for shared cache coherency for a chip multiprocessor or multiprocessor system |
US7017038B1 (en) * | 2002-08-26 | 2006-03-21 | Network Equipment Technologies, Inc. | Method and system to provide first boot to a CPU system |
US20040117580A1 (en) * | 2002-12-13 | 2004-06-17 | Wu Chia Y. | System and method for efficiently and reliably performing write cache mirroring |
US20040117563A1 (en) * | 2002-12-13 | 2004-06-17 | Wu Chia Y. | System and method for synchronizing access to shared resources |
US20040117579A1 (en) * | 2002-12-13 | 2004-06-17 | Wu Chia Y. | System and method for implementing shared memory regions in distributed shared memory systems |
US20040193955A1 (en) * | 2003-03-31 | 2004-09-30 | Leete Brian A. | Computer memory power backup |
US20050027945A1 (en) * | 2003-07-30 | 2005-02-03 | Desai Kiran R. | Methods and apparatus for maintaining cache coherency |
US20050027946A1 (en) * | 2003-07-30 | 2005-02-03 | Desai Kiran R. | Methods and apparatus for filtering a cache snoop |
US20060136656A1 (en) * | 2004-12-21 | 2006-06-22 | Conley Kevin M | System and method for use of on-chip non-volatile memory write cache |
US20070094446A1 (en) * | 2005-10-20 | 2007-04-26 | Hitachi, Ltd. | Storage system |
US20070260922A1 (en) * | 2006-04-20 | 2007-11-08 | Inventec Corporation | Method of protecting data in cache memory of storage system |
US20090132760A1 (en) * | 2006-12-06 | 2009-05-21 | David Flynn | Apparatus, system, and method for solid-state storage as cache for high-capacity, non-volatile storage |
US20090098901A1 (en) * | 2007-10-10 | 2009-04-16 | Unity Semiconductor Corporation | Memory emulation in a cellular telephone |
US20090157946A1 (en) * | 2007-12-12 | 2009-06-18 | Siamak Arya | Memory having improved read capability |
US20090161435A1 (en) * | 2007-12-24 | 2009-06-25 | Hynix Semiconductor Inc. | Non-volatile memory device and method of programming the same |
US20090172324A1 (en) * | 2007-12-26 | 2009-07-02 | Chunqi Han | Storage system and method for opportunistic write-verify |
US20090303630A1 (en) * | 2008-06-10 | 2009-12-10 | H3C Technologies Co., Ltd. | Method and apparatus for hard disk power failure protection |
US7954006B1 (en) * | 2008-12-02 | 2011-05-31 | Pmc-Sierra, Inc. | Method and apparatus for archiving data during unexpected power loss |
US20100153668A1 (en) * | 2008-12-15 | 2010-06-17 | Fujitsu Limited | Storage system, storage managing device and storage managing method |
US20100180065A1 (en) * | 2009-01-09 | 2010-07-15 | Dell Products L.P. | Systems And Methods For Non-Volatile Cache Control |
US7800856B1 (en) * | 2009-03-24 | 2010-09-21 | Western Digital Technologies, Inc. | Disk drive flushing write cache to a nearest set of reserved tracks during a power failure |
US20100325352A1 (en) * | 2009-06-19 | 2010-12-23 | Ocz Technology Group, Inc. | Hierarchically structured mass storage device and method |
US20110258355A1 (en) * | 2009-10-13 | 2011-10-20 | Ocz Technology Group, Inc. | Modular mass storage devices and methods of using |
US8677054B1 (en) * | 2009-12-16 | 2014-03-18 | Apple Inc. | Memory management schemes for non-volatile memory devices |
US20130290597A1 (en) * | 2011-09-30 | 2013-10-31 | Intel Corporation | Generation of far memory access signals based on usage statistic tracking |
Non-Patent Citations (6)
Title |
---|
Computer Hope, "Cache", June 20, 2001, Page 1, https://web.archive.org/web/20010620222018/http://www.computerhope.com/jargon/c/cache.htm *
David White, "What Is A System On A Chip (SOC)?", February 22, 2007, Pages 1-2, https://web.archive.org/web/20070222070108/http://www.wisegeek.com/what-is-a-system-on-a-chip-soc.htm *
Sebastian Anthony, "SOC vs. CPU - The Battle For The Future Of Computing", April 19, 2012, Pages 1-16, http://www.extremetech.com/computing/126235-soc-vs-cpu-the-battle-for-the-future-of-computing *
Webopedia, "Cache", April 11, 2001, Pages 1-3, https://web.archive.org/web/20010411033304/http://www.webopedia.com/TERM/c/cache.html *
Webopedia, "Interrupt", August 6, 2002, Pages 1-2, https://web.archive.org/web/20020806203437/http://www.webopedia.com/TERM/I/interrupt.html *
Webopedia, "SRAM", April 6, 2001, Pages 1-2, https://web.archive.org/web/20010406020345/http://www.webopedia.com/TERM/S/SRAM.html *
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140025978A1 (en) * | 2012-07-20 | 2014-01-23 | Semiconductor Energy Laboratory Co., Ltd. | Power supply control circuit and signal processing circuit |
US9857860B2 (en) * | 2012-07-20 | 2018-01-02 | Semiconductor Energy Laboratory Co., Ltd. | Power supply control circuit and signal processing circuit |
US9454437B2 (en) | 2013-09-24 | 2016-09-27 | Texas Instruments Incorporated | Non-volatile logic based processing device |
US10303235B2 (en) | 2015-03-04 | 2019-05-28 | Qualcomm Incorporated | Systems and methods for implementing power collapse in a memory |
WO2016140770A3 (en) * | 2015-03-04 | 2016-11-03 | Qualcomm Incorporated | Systems and methods for implementing power collapse in a memory |
US20170300101A1 (en) * | 2016-04-14 | 2017-10-19 | Advanced Micro Devices, Inc. | Redirecting messages from idle compute units of a processor |
CN106230839A (en) * | 2016-08-03 | 2016-12-14 | Qingdao Hisense Broadband Multimedia Technology Co., Ltd. | Receiving control method and device for real-time streaming media |
US10545728B2 (en) | 2017-07-27 | 2020-01-28 | Texas Instruments Incorporated | Non-volatile counter system, counter circuit and power management circuit with isolated dynamic boosted supply |
US11200030B2 (en) | 2017-07-27 | 2021-12-14 | Texas Instruments Incorporated | Non-volatile counter system, counter circuit and power management circuit with isolated dynamic boosted supply |
US11847430B2 (en) | 2017-07-27 | 2023-12-19 | Texas Instruments Incorporated | Non-volatile counter system, counter circuit and power management circuit with isolated dynamic boosted supply |
CN113760071A (en) * | 2020-06-02 | 2021-12-07 | Elite Semiconductor Memory Technology Inc. | Method, controller and system for operating a memory system in advance during power-up |
CN112650384A (en) * | 2021-01-05 | 2021-04-13 | Datang Microelectronics Technology Co., Ltd. | Low-power sleep/wake-up control circuit and control circuit for multiple power domains |
US20230195321A1 (en) * | 2021-12-17 | 2023-06-22 | Samsung Electronics Co., Ltd. | Storage device and operating method thereof |
US12032832B2 (en) * | 2021-12-17 | 2024-07-09 | Samsung Electronics Co., Ltd. | Storage device and operating method thereof |
Similar Documents
Publication | Title |
---|---|
US10930328B2 (en) | Processing device with nonvolatile logic array backup |
US8792288B1 (en) | Nonvolatile logic array with built-in test drivers |
US20140210535A1 (en) | Signal Level Conversion in Nonvolatile Bitcell Array |
US8817520B2 (en) | Two capacitor self-referencing nonvolatile bitcell |
US8897088B2 (en) | Nonvolatile logic array with built-in test result signal |
US8797783B1 (en) | Four capacitor nonvolatile bit cell |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: TEXAS INSTRUMENTS INCORPORATED, TEXAS. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST. Assignors: BARTLING, STEVEN CRAIG; KHANNA, SUDHANSHU. Reel/Frame: 030219/0924. Effective date: 2013-02-18 |
|
AS | Assignment |
Owner name: TEXAS INSTRUMENTS INCORPORATED, TEXAS. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST. Assignors: BARTLING, STEVEN CRAIG; KHANNA, SUDHANSHU. Reel/Frame: 031354/0111. Effective date: 2013-09-30 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |