US20080250056A1 - Method and apparatus for writing binary data with low power consumption - Google Patents
- Publication number: US20080250056A1 (application Ser. No. US 12/062,138)
- Authority: United States (US)
- Prior art keywords: nodes, binary, parity, information, toggling
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- H04N1/387—Composing, repositioning or otherwise geometrically modifying originals
- H04N1/32203—Spatial or amplitude domain methods for embedding additional information, e.g. a watermark, in the image data
- G06T1/00—General purpose image data processing
- G06T1/0028—Adaptive watermarking, e.g. Human Visual System [HVS]-based watermarking
- H04N1/32229—Spatial or amplitude domain methods with selective or adaptive application of the additional information, e.g. in selected regions of the image
- H04N1/32256—Spatial or amplitude domain methods in halftone data
- H04N5/913—Television signal processing for scrambling; for copy protection
- G06T2201/0051—Embedding of the watermark in the spatial domain
- G06T2201/0061—Embedding of the watermark in each block of the image, e.g. segmented watermarking
- H04N2201/3233—Display, printing, storage or transmission of additional information of authentication information, e.g. digital signature, watermark
- H04N2201/327—Display, printing, storage or transmission of machine readable codes or marks which are undetectable to the naked eye, e.g. embedded codes
Definitions
- the present disclosure relates generally to data processing, and more particularly to techniques for low power consumption memory design for data processing systems.
- Power consumption is a major concern in the modern development of computers and other computing devices. This is most apparent in battery-powered portable devices. People often carry extra batteries, AC adapters, and battery rechargers to guard against a loss of functionality. Having to carry these accessories and supplies decreases the convenience of portable devices. The need to carry extra batteries and power accessories can be obviated in part by using larger (or more) batteries, but this increases device bulk and thus decreases portability.
- Reducing power requirements allows the use of smaller batteries and/or decreases the frequency with which batteries must be replaced or recharged. Using smaller batteries decreases device bulk. Reducing frequency of replacement reduces the financial and environmental cost of device ownership. Reducing the frequency of recharging extends battery life and makes it more practical to leave power accessories behind. In some cases, lower power requirements increase the viability of solar power to replace or supplement battery power, further enhancing the portability. Reducing power consumption also reduces heat dissipation, so that less bulk needs to be dedicated to removing heat from a device.
- a hierarchical data structure such as a binary pyramid structure and/or an N-ary tree structure, is used to record information.
- Information can be represented as the parity values of a set of nodes in such a data structure, rather than individual nodes themselves.
- a Champagne Pyramid Parity Check (CPPC) algorithm and/or a Tree-Based Parity Check (TBPC) algorithm can be utilized to reduce the number of toggling operations required to write data to a memory component (e.g., to reduce the number of changes necessary in a memory for converting an original, previously stored set of information to a to-be-written set of information in the memory).
- the CPPC algorithm and/or the TBPC algorithm can reduce the total number of binary nodes changes required during the data writing process, thereby reducing memory power consumption.
- FIG. 1 is a high-level block diagram of a system for recording binary information with low power consumption in accordance with various aspects described herein.
- FIG. 2 is a block diagram of a system that implements an example technique for writing binary information in accordance with various aspects described herein.
- FIGS. 3-5 illustrate example data structures that can be utilized in connection with one or more data processing techniques described herein.
- FIGS. 6-7 illustrate example update and toggle operations that can be performed in connection with one or more data processing techniques described herein.
- FIG. 8 is a block diagram of a system that implements another example technique for writing binary information in accordance with various aspects described herein.
- FIG. 9 illustrates an example data structure that can be utilized for one or more data processing techniques described herein.
- FIGS. 10-11 illustrate respective operations that can be performed in connection with one or more data processing techniques described herein.
- FIGS. 12-14 are flowcharts of respective methods for low-power recording of binary information in accordance with various aspects described herein.
- FIG. 15 is a block diagram of an example operating environment in which various aspects described herein can function.
- FIG. 16 is a block diagram of an example networked computing environment in which various aspects described herein can function.
- a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer.
- an application running on a server and the server can be a component.
- One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers.
- the methods and apparatus of the claimed subject matter may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium, wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the claimed subject matter.
- the components may communicate via local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the Internet with other systems via the signal).
- system 100 can include a memory component 110 for storing binary data.
- the memory component 110 initially stores a set of original data.
- the memory component 110 can comprise a parity check component 130 and/or a modification component 140 .
- the parity check component 130 can divide storage locations in the memory component 110 that store respective bits of original data into groups. These groups can then be respectively characterized by the parity of the binary data values or bits stored at the locations that constitute the respective groups.
- binary data can be represented as a binary digit “0” or “1,” and the parity of a group of such data can be even-odd parity such that, for example, the parity of a group is “1” if the number of bits in the group having a value of “1” is odd and “0” if the number is even. It should be appreciated, however, that this constitutes merely one way in which parity values can be obtained and that any suitable technique could be used.
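A minimal Python sketch of this even-odd parity rule (the function name is illustrative, not from the disclosure):

```python
def group_parity(bits):
    """Even-odd parity of a group: 1 if the count of 1-bits is odd, else 0."""
    return sum(bits) % 2

# Three 1-bits (odd) -> parity 1; two 1-bits (even) -> parity 0.
print(group_parity([1, 0, 1, 1]))  # -> 1
print(group_parity([1, 0, 0, 1]))  # -> 0
```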
- the parity of respective bits of original data stored at the memory component 110, as formulated by the parity check component 130, can be utilized as a basis for representing to-be-written data. Further, the parity check component 130 can formulate groups of data bits such that they overlap, thereby allowing the parity of multiple groups to be altered by toggling the value of a single shared bit. By doing so, the amount of toggling necessary to modify the original data stored by the memory component 110 into the to-be-written data can be reduced, thereby increasing embedding efficiency and reducing the amount of power required for writing data to the memory component 110 . In accordance with various aspects described herein, the parity check component 130 can utilize a variety of algorithms for dividing a set of data bits into groups to facilitate efficient writing of data. Examples of algorithms that can be utilized are described in detail infra.
- groups of data bits formed by the parity check component 130 and their respective parity values can be provided to a modification component 140 to facilitate recording of to-be-written data to the memory component 110 .
- the modification component 140 can compare respective information bits of the to-be-written data to the parity of corresponding groups of bits of the original data. If it is determined that an information bit in the to-be-written data and the parity of its corresponding group of original data bits does not match, a data bit in the group can be toggled to alter the parity of the group such that a match is created.
- respective groups of data bits can share bits among each other and/or otherwise overlap.
- toggling can be based on one or more techniques as generally known in the art, such as simple toggling of a single bit or pair toggling of a master/slave data bit pair.
- While FIG. 1 and the above description relate to techniques for writing data to a memory component 110 , various techniques described herein can be applied to any computing and/or other application where data is desirably modified from a first state to a second state.
- various aspects described herein can additionally be applied to applications such as disk storage management or communication management.
- the techniques described herein can be applied to data communication by, for example, buffering and/or maintaining a previously transmitted set of information. For a subsequent communication(s), instead of requiring communication of an entire set of information, one or more toggling positions can instead be transmitted to enable a device receiving the communication to obtain a set of information by toggling at indicated positions in a previously received communication.
- By transmitting toggling locations rather than the corresponding data itself, communicated information can effectively be compressed prior to transmission, thereby allowing a communication system to make more effective use of network bandwidth and/or other system resources.
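The toggle-position scheme described above can be sketched as follows; the helper names and list encoding are illustrative assumptions, not from the disclosure:

```python
def toggle_positions(previous, current):
    """Positions at which the new bit string differs from the buffered one."""
    return [i for i, (a, b) in enumerate(zip(previous, current)) if a != b]

def apply_toggles(previous, positions):
    """Recover the new bit string by toggling the indicated positions."""
    bits = list(previous)
    for i in positions:
        bits[i] = 1 - bits[i]
    return bits

prev = [1, 0, 1, 1, 0, 0, 1, 0]   # previously transmitted information
new  = [1, 0, 0, 1, 0, 1, 1, 0]   # information to communicate next
pos = toggle_positions(prev, new)  # only the two differing positions are sent
assert apply_toggles(prev, pos) == new
print(pos)  # -> [2, 5]
```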
- binary information 210 can be written to the memory component 240 such that it replaces original information 220 pre-recorded in the memory component 240 .
- the memory component 240 can implement a Champagne Pyramid Parity Check (CPPC) algorithm for writing binary information 210 into the memory component 240 by utilizing subcomponents 244 - 252 .
- the CPPC algorithm can be utilized by the memory component 240 to reduce power consumption associated with writing binary data by reducing the total number of bits of original information 220 that are required to be changed (e.g., toggled) during the data writing process.
- the CPPC algorithm can be used in combination with one or more conventional data processing algorithms to write binary information 210 into the memory component 240 . By doing so, the CPPC algorithm can be utilized to improve upon such conventional techniques by, for example, reducing power consumption required by such conventional techniques.
- the memory component 240 can extend the functionality of traditional data processing and/or storage techniques by borrowing the functionality of various elements of such techniques and applying additional elements to improve their performance.
- the memory component 240 can classify binary values stored at various locations therein into groups. For example, locations having a binary value of “0” can be classified into a first group, while locations having a binary value of “1” can be classified into a second group.
- a comparison component 248 and/or a toggling component 252 can be utilized to force original information 220 stored by the memory component 240 to match corresponding information bits of the binary information 210 to be written to the memory component 240 .
- the memory component 240 can utilize a CPPC algorithm for improved memory writing functionality as follows.
- a champagne pyramid component 244 can be utilized to organize storage locations at the memory component 240 into one or more champagne pyramid structures as described in more detail infra.
- a parity calculation component 246 can be used to identify an array of information bits corresponding to the original information 220 with the same size as that of the binary information 210 to be written.
- a flavor adding optimization component 250 can then be utilized to find a minimum number of locations that must be toggled to cause the identified array of information bits to match the binary information 210 .
- Data writing can then be processed via the toggling component 252 on the locations marked by the flavor adding optimization component 250 .
- the operation of the champagne pyramid component 244 , the parity calculation component 246 , and/or the flavor adding optimization component 250 can proceed as set forth in the following description.
- the Champagne Pyramid Parity Check algorithm usable by the memory component 240 is so named because it leverages various properties that can be observed in a champagne pyramid.
- a two-dimensional 5-level champagne pyramid 300 built by 15 wine glasses numbered 1 through 15 is illustrated by FIG. 3 .
- flavorless champagne can be poured into glass 1 at the top of the pyramid 300 while apple-flavored champagne can be poured into glass 4 .
- all of the glasses in the pyramid 300 will come to be filled with champagne.
- glasses 11 through 13 would contain apple-flavored champagne at such a time while glasses 14 and 15 would not.
- the rows of the pyramid 300 can be numbered from the bottom. As illustrated in FIG. 3 , glasses 11 through 15 are on the first row and glass 1 is on the fifth row.
- N successive glasses on the bottom row of the pyramid 300 will contain the same flavor of champagne as a glass on the N-th row of the pyramid 300 if champagne is poured into said glass until it has run off into the bottom row of the pyramid 300 .
- the desired flavor can be poured into a single glass on the N-th row of the pyramid 300 instead of adding the flavor on the bottom row for all N glasses.
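The equivalence between one pour on the N-th row and N separate pours on the bottom row can be expressed directly; the 0-based column indexing here is an assumption:

```python
def bottom_reach(row_from_bottom, column):
    """Bottom-row columns (0-based) flavored by pouring into the glass at
    `column` on the given row, counted from the bottom (1 = bottom row)."""
    # A glass N rows above the bottom overflows into N successive
    # bottom-row glasses starting at its own column.
    return list(range(column, column + row_from_bottom))

# One pour on the 3rd row at column 1 flavors three successive bottom
# glasses, matching three separate pours on the bottom row.
print(bottom_reach(3, 1))  # -> [1, 2, 3]
```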
- the champagne pyramid component 244 can arrange information of a binary format into a structure that exhibits the above properties.
- the champagne pyramid component 244 can represent a set of original information 220 and its corresponding binary values as a binary pyramid structure, such as the example structure illustrated by diagram 400 in FIG. 4 . Each of these binary values can then be treated as a wine glass in a pyramid structure having a predefined scanning order.
- the number of information bits that can be held by a champagne pyramid structure is equal to the number of elements on the bottom row of the pyramid.
- N can be constrained by the following equation:
- writing binary information 210 such that it replaces original information 220 using CPPC can be dependent on the bottom row of the pyramid structure(s) constructed by the champagne pyramid component 244 .
- data flow through the structure can be visualized as liquid poured from the top of a champagne pyramid such that eventually all glasses on the bottom row of the pyramid are filled up.
- the parity calculation component 246 can begin processing of a pyramid structure(s) by defining the Region of Interest of a wine glass n, ROI(n), as the set of glasses that belong to the possible paths from the top glass of the pyramid to glass n.
- each wine glass in pyramid 300 contains a value of either “0” or “1.”
- the parity calculation component 246 can count the number of glasses containing “1” in the region of interest of each glass on the bottom row of the pyramid. In one example, if the number of such glasses is an even number for a glass, the parity of that glass is set to “0.” Similarly, if the number of such glasses is an odd number for a given glass, the parity of that glass can instead be set to “1.”
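A brute-force sketch of this counting step for a small pyramid (rows stored top-down; the function name and layout are illustrative assumptions):

```python
def roi_parity(pyramid, c):
    """Parity of the 1-values in ROI of bottom-row glass c.

    `pyramid` is a list of rows, top row first; row r has r+1 entries.
    ROI(c) holds every glass lying on some path from the top glass down
    to bottom glass c (each step descends to a lower-left or lower-right
    neighbour)."""
    R = len(pyramid) - 1                      # index of the bottom row
    ones = 0
    for r, row in enumerate(pyramid):
        for j, v in enumerate(row):
            if max(0, c - (R - r)) <= j <= min(r, c):
                ones += v
    return ones % 2

# 3-level pyramid:      1
#                      0 1
#                     1 1 0
pyr = [[1], [0, 1], [1, 1, 0]]
print([roi_parity(pyr, c) for c in range(3)])  # -> [0, 1, 0]
```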
- an information array (IA) with the same size as the binary information 210 to be written can be formed.
- Diagram 400 in FIG. 4 illustrates example parity calculations.
- the parity calculation component 246 can define the ROI and then count the number of nodes containing a value of “1” in the defined region.
- the computational complexity of such operations can be reduced by making use of various properties of the ROI.
- the parity calculation component can define the “left source” of x, LS(x), the “right source” of x, RS(x), and the “upper source” of x, US(x), as illustrated in diagram 500 in FIG. 5 . It should be appreciated that for some cases, LS(x), RS(x), and/or US(x) may not exist.
- If a node x does not exist, its ROI can be represented as an empty set. Otherwise, for a node x, ROI can be calculated as follows:
- ROI(x) = ROI(LS(x)) ∪ ROI(RS(x)) ∪ {x}, where ROI(LS(x)) ∩ ROI(RS(x)) = ROI(US(x)). (2)
- the parity of a node x can be calculated as follows:
- Parity(x) = Parity(LS(x)) ⊕ Parity(RS(x)) ⊕ Parity(US(x)) ⊕ x. (4)
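Equation (4) can be checked numerically on a small pyramid: since ROI(LS(x)) and ROI(RS(x)) overlap exactly in ROI(US(x)), the doubly counted ones cancel under XOR. A brute-force sketch (names and the top-down row layout are assumptions):

```python
def parity(pyr, r, c):
    """Brute-force parity of the 1-values in ROI of glass (r, c): every
    glass on some path from the top glass down to (r, c)."""
    ones = 0
    for s in range(r + 1):
        for j in range(s + 1):
            if max(0, c - (r - s)) <= j <= min(s, c):
                ones += pyr[s][j]
    return ones % 2

pyr = [[1], [0, 1], [1, 1, 0]]
# x = (2, 1) is the one glass here with all three sources:
# LS = (1, 0), RS = (1, 1), US = (0, 0).
lhs = parity(pyr, 2, 1)
rhs = parity(pyr, 1, 0) ^ parity(pyr, 1, 1) ^ parity(pyr, 0, 0) ^ pyr[2][1]
assert lhs == rhs        # Equation (4) holds for this node
print(lhs)  # -> 1
```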
- the parity calculation component 246 can utilize a smart parity calculation method as follows.
- the parity calculation component 246 can start from the top of the pyramid and process each node in increasing order of glass number, as shown in diagram 300 . If US(x) is “0” for a given node, no further operation is needed. Otherwise, the values contained by LS(x), RS(x), and x can be toggled as illustrated by diagrams 610 and 620 in FIG. 6 .
- the IA can be processed with an array corresponding to the binary information 210 being written using an exclusive-OR (XOR) operator and/or another appropriate operator at the comparison component 248 .
- the corresponding location on the bottom row of the pyramid structure can be marked as “To Be Flavored” (TBF), as illustrated in diagram 710 in FIG. 7 .
- Toggling optimization can then be performed by the flavor adding optimization component 250 by employing the observation that adding flavor into N successive wine glasses on the bottom row of a champagne pyramid is equivalent to adding flavor into one wine glass on the N-th row.
- if N successive nodes in a pyramid structure are marked as TBF, the flavor adding optimization can designate only one location in the original information 220 for toggling instead of requiring toggling of all N locations.
- FIG. 7 An example of the flavor adding optimization used in connection with data processing is illustrated by diagrams 710 and 720 in FIG. 7 .
- As FIG. 7 illustrates, given the pyramid structure 400 illustrated in FIG. 4 and a 5-bit set of binary information {11000} to be written, toggling of the IA can be performed to obtain a resulting array of {01110}.
- the second, third, and fourth nodes on the bottom row can then be marked as TBF.
- the memory component 240 can fully record the binary information 210 by toggling only one glass on the third row, as indicated by FIG. 7 .
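The run-collapsing step can be sketched as follows: each maximal run of N successive TBF positions on the bottom row is replaced by a single toggle on the N-th row, so the {01110} marks above yield one toggle on the 3rd row. The function name and the (row, column) encoding are illustrative assumptions:

```python
def optimize_toggles(tbf):
    """Collapse each maximal run of N successive To-Be-Flavored bottom-row
    positions into a single (row N, starting column) toggle."""
    toggles = []
    i = 0
    while i < len(tbf):
        if tbf[i]:
            j = i
            while j < len(tbf) and tbf[j]:
                j += 1
            n = j - i                 # run length N
            toggles.append((n, i))    # one toggle on row N at column i
            i = j
        else:
            i += 1
    return toggles

# The TBF marks {01110}: a run of three collapses to one toggle on row 3.
print(optimize_toggles([0, 1, 1, 1, 0]))  # -> [(3, 1)]
```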
- binary information 810 can be written into a memory component 840 that stores previously-recorded original information 820 .
- the memory component 840 can implement a TBPC (Tree Based Parity Check) algorithm for recording binary information 810 by utilizing subcomponents 844 - 854 .
- a TBPC algorithm can be utilized by the memory component 840 in conjunction with one or more existing data processing techniques.
- the memory component 840 can utilize a comparison component 848 for comparing binary values of original information 820 with corresponding to-be-written binary information 810 , and/or a modification location determination component 852 for deciding which bit(s) need modifications to hold to-be-written binary data 810 . Modification can then be performed based on the comparisons via a modification component 854 . For every single to-be-written bit, it can be observed that the probability that the original information 820 will require modification is 0.5. Accordingly, the memory component 840 can utilize the TBPC algorithm to attempt to reduce the probability of modifying the original information 820 .
- the memory component 840 can utilize the TBPC algorithm to achieve an improvement in power consumption by reducing the number of to-be-written bits (e.g., nodes) that require modification operations.
- the memory component 840 can employ a tree component 844 , a parity calculation component 846 , and/or a fountain investigation component 850 to extend the functionality of a traditional data processing scheme by way of the TBPC algorithm.
- the TBPC algorithm leverages relationships among ancestors and descendants in an N-ary tree to improve data writing functionality.
- the operation of the tree component 844 , parity calculation component 846 , and/or fountain investigation component 850 is set forth in further detail in the following description.
- the size of binary information 810 to be written is represented as L.
- In conventional data processing algorithms, the values of bit locations (nodes) can be classified as either “0” or “1.” Thus, 16 bits of information traditionally require 16 nodes to be represented. Immediately after classification, these values are then compared with corresponding bits of the binary information 810 . If a value is the same as its to-be-written bit, no further operations are performed. Otherwise, one or more processes are carried out to toggle the value.
- the tree component 844 can operate as follows. First, the tree component 844 can populate an N-ary complete tree, herein referred to as a “master tree.” It can be appreciated that because the master tree is an N-ary complete tree, every node of the master tree, except for leaf nodes, can be configured to have N child nodes. In one example, one leaf node can be utilized to hold one information bit, which can correspond to the parity value of the node. Accordingly, it can be appreciated that to write binary information 810 of L bits, a master tree can be required to have L leaves.
- a parity calculation component 846 can be utilized to determine the information bits represented by each leaf node in the master tree. In one example, this can be accomplished for a leaf node by traveling from the leaf node to the root node of the master tree and counting the occurrences of “1” values along the path. If the number of occurrences of “1” values is an odd number, the information bit of the leaf node can be regarded as “1.” Otherwise, the information bit can be regarded as “0.” These calculations are further illustrated in FIG. 9 for master tree 900 . As shown, the information array (IA) listed under the tree represents the information written in the master tree.
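The leaf-to-root parity walk can be sketched on an array-backed complete n-ary tree (root at index 0, children of node i at n·i+1 … n·i+n); this layout and the function name are assumptions, not from the disclosure:

```python
def leaf_parities(tree, n):
    """Information bits of a complete n-ary tree stored in an array:
    for each leaf, the parity of 1-values on its path up to the root."""
    size = len(tree)
    bits = []
    for i in range(size):
        if n * i + 1 >= size:          # no children: a leaf node
            p, j = 0, i
            while True:
                p ^= tree[j]           # accumulate path parity
                if j == 0:
                    break
                j = (j - 1) // n       # move to the parent node
            bits.append(p)
    return bits

# A 2-ary complete tree with 4 leaves (7 nodes):
#        1
#      0   1
#     1 0 1 1
tree = [1, 0, 1, 1, 0, 1, 1]
print(leaf_parities(tree, 2))  # -> [0, 1, 1, 1]
```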
- the comparison component 848 can obtain a toggle array.
- the obtained information array is {1110110101111000}.
- a resultant toggle array obtained by the comparison component 848 becomes {1100111100100110}. This comparison is illustrated by diagram 1000 in FIG. 10 .
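The XOR comparison reproduces the toggle array from the two strings quoted above. The to-be-written string used here is not stated in the text; it is the one implied by those two arrays, since toggle = IA XOR information:

```python
ia     = "1110110101111000"   # information array read from the master tree
info   = "0010001001011110"   # implied to-be-written binary information
toggle = "".join(str(int(a) ^ int(b)) for a, b in zip(ia, info))
print(toggle)  # -> 1100111100100110
```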
- respective values of “1” in the toggle array can represent the fact that the corresponding nodes of the master tree representing the original information 820 require toggling.
- power consumption is introduced by any single modification to the original information 820 .
- TBPC can be utilized to minimize the number of “1” values in the toggle array.
- a fountain investigation component 850 can build a toggle tree with the same size as the master tree.
- the leaf nodes of the toggle tree can be populated by the elements of the toggle array in the order that the elements appear in the toggle array.
- the remaining nodes in the tree can be initially populated using any suitable value.
- the fountain investigation component can then begin analysis at the root of the toggle tree as follows. For a given node, if all N of the child nodes of the given node contain the value “1,” the child nodes are updated with the value “0” and the node being examined can be set to “1.” Otherwise, the node being processed can be reset to “0.”
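A recursive sketch of the fountain investigation on an array-backed toggle tree. "Begin analysis at the root" is read here as a depth-first pass that collapses on the way back up, so children are finalized before their parent is examined; the layout and names are assumptions:

```python
def fountain(tree, n, i=0):
    """Fountain investigation on a toggle tree stored as an array (root
    at 0, children of node i at n*i+1 .. n*i+n). Whenever all n children
    of a node hold 1, they are reset to 0 and the node is set to 1;
    otherwise the node is reset to 0. Leaves keep their toggle bits."""
    children = [n * i + c for c in range(1, n + 1) if n * i + c < len(tree)]
    if not children:                   # leaf: keep its toggle bit
        return
    for c in children:                 # finalize the subtrees first
        fountain(tree, n, c)
    if all(tree[c] == 1 for c in children):
        for c in children:
            tree[c] = 0
        tree[i] = 1
    else:
        tree[i] = 0

# 2-ary toggle tree with leaf toggle bits {1, 1, 1, 0}: the first two
# leaf toggles collapse into one internal toggle, 3 toggles become 2.
t = [0, 0, 0, 1, 1, 1, 0]
fountain(t, 2)
print(t)  # -> [0, 1, 0, 0, 0, 1, 0]
```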
- An example toggle tree generation and fountain investigation process for the example toggle array given above is illustrated in diagram 1000 in FIG. 10 .
- to-be-toggled sites in the master tree corresponding to locations in the toggle tree that have respective values of “1” can then be toggled.
- the obtained information array should match the binary information array corresponding to the binary information 810 .
- An example data processing process is illustrated by diagram 1100 in FIG. 11 .
- the maximum achievable payload of the memory component 840 can be found as follows. Initially, it can be appreciated that because the number of nodes in original information 820 is limited, the size of a master tree that can be formed in the tree formation process is also limited. Further, as noted above, the master tree is required to have L leaf nodes to represent L binary information bits. Thus, to form an N-ary complete tree with L leaf nodes, the total number of nodes required, nNodes, can be found by the following equation: nNodes = (N·L − 1)/(N − 1). (6)
- From Equation (6), it can be observed that as N increases, the number of nodes required in the master tree decreases. Thus, because the number of nodes is limited, the payload that can be written by the memory component 840 increases as N increases.
- the minimum N that can be used can be found as follows:
- parity calculation component 846 can operate using a low-complexity parity calculation technique as follows.
- parity(x) can be defined as the parity of the number of occurrences of “1” in the path from a node x to the root node of a master tree. It can be observed that parity(x) is equal to the value of parity(parent(x)) plus the value of node x using bitwise addition.
- the parity calculation component 846 can traverse the master tree starting from the root of the tree and, for each node x being processed, update the parity of its child node y by adding the parity of node x and the value of node y.
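This single top-down pass can be sketched as follows for a complete N-ary tree stored level-by-level in a flat list (children of node i at N·i+1 through N·i+N); the function name and layout are assumptions for illustration. Each node's parity is obtained in O(1) from its parent's, so the whole tree is covered in one traversal:

```python
def leaf_parities(values, N):
    """parity(x) = XOR of the node values on the path from x up to the root.
    Returns the parities of the leaf nodes of a complete N-ary tree."""
    n = len(values)
    parity = [0] * n
    parity[0] = values[0]                       # parity of the root is its own value
    for i in range(n):
        for c in range(N * i + 1, min(N * i + N + 1, n)):
            parity[c] = parity[i] ^ values[c]   # parent parity plus own value, bitwise
    n_leaves = (n * (N - 1) + 1) // N           # leaf count of a complete N-ary tree
    return parity[-n_leaves:]
```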
- FIGS. 12-14 methodologies that can be implemented in accordance with various aspects described herein are illustrated. While, for purposes of simplicity of explanation, the methodologies are shown and described as a series of blocks, it is to be understood and appreciated that the claimed subject matter is not limited by the order of the blocks, as some blocks may, in accordance with the claimed subject matter, occur in different orders and/or concurrently with other blocks from that shown and described herein. Moreover, not all illustrated blocks may be required to implement the methodologies in accordance with the claimed subject matter.
- program modules include routines, programs, objects, data structures, etc., that perform particular tasks or implement particular abstract data types.
- functionality of the program modules may be combined or distributed as desired in various embodiments.
- various portions of the disclosed systems above and methods below may include or consist of artificial intelligence or knowledge or rule based components, sub-components, processes, means, methodologies, or mechanisms (e.g., support vector machines, neural networks, expert systems, Bayesian belief networks, fuzzy logic, data fusion engines, classifiers . . . ).
- Such components can automate certain mechanisms or processes performed thereby to make portions of the systems and methods more adaptive as well as efficient and intelligent.
- FIG. 12 illustrates a method of low-power recording of binary information in accordance with various aspects described herein.
- At 1202 , binary information to be stored by a memory (e.g., a memory component 110 ) is identified.
- At 1204 , a hierarchical data structure is generated (e.g., by a parity check component 130 ) that comprises a plurality of nodes corresponding to respective information bits currently stored by the memory.
- At 1206 , respective parity values of nodes in a bottom row of the hierarchical data structure generated at 1204 are determined.
- At 1208 , information bits are toggled (e.g., by a modification component 140 ) corresponding to one or more nodes in the hierarchical data structure generated at 1204 such that the respective parity values of the nodes in the bottom row of the hierarchical data structure as determined at 1206 match respective corresponding bits in the binary information identified at 1202 to be stored by the memory.
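The effect this method relies on, namely that one toggle can correct the parity of several groups at once when the groups overlap, can be sketched with hypothetical groups and bit values (none of which are from the disclosure):

```python
# Illustrative only: a 5-bit memory whose bits fall into two overlapping
# parity groups that share bit position 2.
def parity(bits, group):
    p = 0
    for i in group:
        p ^= bits[i]        # even-odd parity of the group, as a 0/1 value
    return p

groups = [(0, 1, 2), (2, 3, 4)]   # bit positions; position 2 is shared
bits = [1, 0, 1, 1, 0]            # original data in the memory

before = [parity(bits, g) for g in groups]   # parities of both groups: [0, 0]
bits[2] ^= 1                                 # toggle the single shared bit
after = [parity(bits, g) for g in groups]    # both parities flipped: [1, 1]
```

A single toggle operation changed the parity of both groups, which is why overlapping the groups reduces the number of writes needed to make every group's parity match the to-be-written bits.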
- FIG. 13 illustrates a method 1300 of writing binary information (e.g., binary information 210 ) to a memory (e.g., a memory component 240 ) to replace original information (e.g., original information 220 ) previously stored in the memory.
- method 1300 can be utilized to implement a Champagne Pyramid Parity Check (CPPC) algorithm.
- At 1304 , at least one binary pyramid structure having nodes representing original information is provided (e.g., by a champagne pyramid component 244 ).
- the number of nodes at the bottom row of the at least one pyramid structure provided at 1304 can equal the number of bits in the binary information provided at 1302 .
- for each node in the bottom row of the at least one pyramid structure, parity is calculated (e.g., by a parity calculation component 246 ) for the combined set of the respective node and all other nodes in the at least one pyramid having the respective node as a direct or indirect successor.
- groups of successive nodes whose respective parity values, as indicated by the comparison at 1308 , differ from corresponding bits of the binary information provided at 1302 are identified (e.g., by a flavor adding optimization component 250 ).
- the binary information provided at 1302 is written in the memory at least in part by toggling nodes (e.g., using a toggling component 252 ) corresponding to the respective lowest common predecessor nodes for the groups identified at 1312 .
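A sketch of the pyramid parity computation underlying this method, under the assumption that glass (r, c) pours into glasses (r+1, c) and (r+1, c+1) of the row below; the representation and function name are illustrative, not from the disclosure. The parity of each bottom-row node combines that node with every node that can reach it:

```python
def bottom_parities(pyramid):
    """pyramid[r] is row r of 0/1 node values (row r holds r + 1 nodes).
    Node (r, c) reaches bottom node (R, t) exactly when c <= t <= c + (R - r)."""
    R = len(pyramid) - 1                         # index of the bottom row
    parities = []
    for t in range(R + 1):                       # each bottom-row column
        p = 0
        for r, row in enumerate(pyramid):
            for c, v in enumerate(row):
                if c <= t <= c + (R - r):        # (r, c) pours into (R, t)
                    p ^= v
        parities.append(p)
    return parities
```

Because an upper node belongs to the predecessor set of every bottom node in its reach, toggling one lowest common predecessor flips the parity of a whole group of successive bottom nodes at once; toggling the apex, for instance, flips every bottom-row parity.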
- A method 1400 of writing binary information (e.g., binary information 810 ) into a memory (e.g., a memory component 840 ) is illustrated.
- method 1400 can be utilized to implement a Tree-Based Parity Check (TBPC) algorithm.
- At 1402 , binary information to be written in memory is provided.
- At 1404 , one or more N-ary master trees having nodes representing original information are provided (e.g., by a tree component 844 ).
- the number of leaf nodes at the bottom row in the one or more trees can be made equal to the number of bits in the binary information provided at 1402 .
- At 1406 , for each leaf node in the one or more master trees, parity is calculated (e.g., by a parity calculation component 846 ) of the nodes in the path from the leaf node to the root node of the master tree to which the leaf node belongs.
- At 1408 , a toggle array is obtained (e.g., by a comparison component 848 ) by performing exclusive-OR operations between respective parity values calculated at 1406 and corresponding bits of the binary information identified at 1402 .
- At 1410 , one or more toggle trees are constructed that correspond to the one or more master trees constructed at 1404 , using the toggle array obtained at 1408 as leaf nodes.
- the binary information provided at 1402 is written into the memory at least in part by toggling nodes in the one or more trees created at 1404 (e.g., via a modification component 854 ) corresponding to the respective highest nodes in the toggle trees constructed at 1410 for which all leaf nodes provide a toggling indication (e.g., as determined by a fountain investigation component 850 and/or a modification location determination component 852 ).
- FIG. 15 and the following discussion are intended to provide a brief, general description of a suitable computing environment 1500 in which various aspects of the claimed subject matter can be implemented. Additionally, while the above features have been described in the general context of computer-executable instructions that may run on one or more computers, those skilled in the art will recognize that said features can also be implemented in combination with other program modules and/or as a combination of hardware and software.
- program modules include routines, programs, components, data structures, etc., that perform particular tasks or implement particular abstract data types.
- the illustrated aspects may also be practiced in distributed computing environments where certain tasks are performed by remote processing devices that are linked through a communications network.
- program modules can be located in both local and remote memory storage devices.
- Computer-readable media can be any available media that can be accessed by the computer and includes both volatile and nonvolatile media, removable and non-removable media.
- Computer-readable media can comprise computer storage media and communication media.
- Computer storage media can include both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data.
- Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computer.
- Communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and includes any information delivery media.
- modulated data signal means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
- communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media.
- an exemplary environment 1500 for implementing various aspects described herein includes a computer 1502 , the computer 1502 including a processing unit 1504 , a system memory 1506 and a system bus 1508 .
- the system bus 1508 couples system components including, but not limited to, the system memory 1506 to the processing unit 1504 .
- the processing unit 1504 can be any of various commercially available processors. Dual microprocessors and other multi-processor architectures may also be employed as the processing unit 1504 .
- the system bus 1508 can be any of several types of bus structure that may further interconnect to a memory bus (with or without a memory controller), a peripheral bus, and a local bus using any of a variety of commercially available bus architectures.
- the system memory 1506 includes read-only memory (ROM) 1510 and random access memory (RAM) 1512 .
- a basic input/output system (BIOS) is stored in a non-volatile memory 1510 such as ROM, EPROM, or EEPROM, which BIOS contains the basic routines that help to transfer information between elements within the computer 1502 , such as during start-up.
- the RAM 1512 can also include a high-speed RAM such as static RAM for caching data.
- the computer 1502 further includes an internal hard disk drive (HDD) 1514 (e.g., EIDE, SATA), which internal hard disk drive 1514 may also be configured for external use in a suitable chassis (not shown), a magnetic floppy disk drive (FDD) 1516 (e.g., to read from or write to a removable diskette 1518 ) and an optical disk drive 1520 (e.g., to read a CD-ROM disk 1522 or to read from or write to other high-capacity optical media such as a DVD).
- the hard disk drive 1514 , magnetic disk drive 1516 and optical disk drive 1520 can be connected to the system bus 1508 by a hard disk drive interface 1524 , a magnetic disk drive interface 1526 and an optical drive interface 1528 , respectively.
- the interface 1524 for external drive implementations includes at least one or both of Universal Serial Bus (USB) and IEEE-1394 interface technologies. Other external drive connection technologies are within contemplation of the subject disclosure.
- the drives and their associated computer-readable media provide nonvolatile storage of data, data structures, computer-executable instructions, and so forth.
- the drives and media accommodate the storage of any data in a suitable digital format.
- while the description of computer-readable media above refers to a HDD, a removable magnetic diskette, and removable optical media such as a CD or DVD, it should be appreciated by those skilled in the art that other types of media which are readable by a computer, such as zip drives, magnetic cassettes, flash memory cards, cartridges, and the like, may also be used in the exemplary operating environment, and further, that any such media may contain computer-executable instructions for performing the methods described herein.
- a number of program modules can be stored in the drives and RAM 1512 , including an operating system 1530 , one or more application programs 1532 , other program modules 1534 and program data 1536 . All or portions of the operating system, applications, modules, and/or data can also be cached in the RAM 1512 . It is appreciated that the claimed subject matter can be implemented with various commercially available operating systems or combinations of operating systems.
- a user can enter commands and information into the computer 1502 through one or more wired/wireless input devices, e.g., a keyboard 1538 and a pointing device, such as a mouse 1540 .
- Other input devices may include a microphone, an IR remote control, a joystick, a game pad, a stylus pen, touch screen, or the like.
- These and other input devices are often connected to the processing unit 1504 through an input device interface 1542 that is coupled to the system bus 1508 , but can be connected by other interfaces, such as a parallel port, a serial port, an IEEE-1394 port, a game port, a USB port, an IR interface, etc.
- a monitor 1544 or other type of display device is also connected to the system bus 1508 via an interface, such as a video adapter 1546 .
- a computer typically includes other peripheral output devices (not shown), such as speakers, printers, etc.
- the computer 1502 may operate in a networked environment using logical connections via wired and/or wireless communications to one or more remote computers, such as a remote computer(s) 1548 .
- the remote computer(s) 1548 can be a workstation, a server computer, a router, a personal computer, portable computer, microprocessor-based entertainment appliance, a peer device or other common network node, and typically includes many or all of the elements described relative to the computer 1502 , although, for purposes of brevity, only a memory/storage device 1550 is illustrated.
- the logical connections depicted include wired/wireless connectivity to a local area network (LAN) 1552 and/or larger networks, e.g., a wide area network (WAN) 1554 .
- LAN and WAN networking environments are commonplace in offices and companies, and facilitate enterprise-wide computer networks, such as intranets, all of which may connect to a global communications network, e.g., the Internet.
- the computer 1502 When used in a LAN networking environment, the computer 1502 is connected to the local network 1552 through a wired and/or wireless communication network interface or adapter 1556 .
- the adapter 1556 may facilitate wired or wireless communication to the LAN 1552 , which may also include a wireless access point disposed thereon for communicating with the wireless adapter 1556 .
- the computer 1502 can include a modem 1558 , or is connected to a communications server on the WAN 1554 , or has other means for establishing communications over the WAN 1554 , such as by way of the Internet.
- the modem 1558 which can be internal or external and a wired or wireless device, is connected to the system bus 1508 via the serial port interface 1542 .
- program modules depicted relative to the computer 1502 can be stored in the remote memory/storage device 1550 . It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers can be used.
- the computer 1502 is operable to communicate with any wireless devices or entities operatively disposed in wireless communication, e.g., a printer, scanner, desktop and/or portable computer, portable data assistant, communications satellite, any piece of equipment or location associated with a wirelessly detectable tag (e.g., a kiosk, news stand, restroom), and telephone.
- the communication can be a predefined structure as with a conventional network or simply an ad hoc communication between at least two devices.
- Wi-Fi (Wireless Fidelity) networks use IEEE-802.11 (a, b, g, etc.) radio technologies to provide secure, reliable, and fast wireless connectivity.
- a Wi-Fi network can be used to connect computers to each other, to the Internet, and to wired networks (which use IEEE-802.3 or Ethernet).
- Wi-Fi networks operate in the unlicensed 2.4 and 5 GHz radio bands, at an 11 Mbps (802.11b) or 54 Mbps (802.11a) data rate, for example, or with products that contain both bands (dual band).
- networks using Wi-Fi wireless technology can provide real-world performance similar to a 10BaseT wired Ethernet network.
- the system 1600 includes one or more client(s) 1602 .
- the client(s) 1602 can be hardware and/or software (e.g., threads, processes, computing devices).
- the client(s) 1602 can house cookie(s) and/or associated contextual information by employing one or more features described herein.
- the system 1600 also includes one or more server(s) 1604 .
- the server(s) 1604 can also be hardware and/or software (e.g., threads, processes, computing devices). In one example, the servers 1604 can house threads to perform transformations by employing one or more features described herein.
- One possible communication between a client 1602 and a server 1604 can be in the form of a data packet adapted to be transmitted between two or more computer processes.
- the data packet may include a cookie and/or associated contextual information, for example.
- the system 1600 includes a communication framework 1606 (e.g., a global communication network such as the Internet) that can be employed to facilitate communications between the client(s) 1602 and the server(s) 1604 .
- Communications can be facilitated via a wired (including optical fiber) and/or wireless technology.
- the client(s) 1602 are operatively connected to one or more client data store(s) 1608 that can be employed to store information local to the client(s) 1602 (e.g., cookie(s) and/or associated contextual information).
- the server(s) 1604 are operatively connected to one or more server data store(s) 1610 that can be employed to store information local to the servers 1604 .
- the disclosed subject matter can be implemented as a system, method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer or processor based device to implement aspects detailed herein.
- the terms “article of manufacture,” “computer program product,” or similar terms, where used herein, are intended to encompass a computer program accessible from any computer-readable device, carrier, or media.
- computer readable media can include but are not limited to magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips . . . ), optical disks (e.g., compact disk (CD), digital versatile disk (DVD) . . . ), smart cards, and flash memory devices (e.g., card, stick).
- a carrier wave can be employed to carry computer-readable electronic data such as those used in transmitting and receiving electronic mail or in accessing a network such as the Internet or a local area network (LAN).
Abstract
Description
- This application claims the benefit of U.S. Provisional Patent Application Ser. No. 60/907,513, filed on Apr. 4, 2007, entitled “MULTIMEDIA WATERMARKING TECHNIQUES WITH LOW DISTORTION.”
- The present disclosure relates generally to data processing, and more particularly to techniques for low power consumption memory design for data processing systems.
- Power consumption is a major concern in the modern development of computers and other computing devices. This is most apparent in battery-powered portable devices. People often carry extra batteries, AC adapters, and battery rechargers to ensure against a loss of functionality. Having to carry these accessories and supplies decreases the convenience of the portable devices. The need to carry extra batteries and power accessories can be obviated in part by using larger (or more) batteries, but this increases device bulk and thus decreases portability.
- Reducing power requirements allows the use of smaller batteries and/or decreases the frequency with which batteries must be replaced or recharged. Using smaller batteries decreases device bulk. Reducing frequency of replacement reduces the financial and environmental cost of device ownership. Reducing the frequency of recharging extends battery life and makes it more practical to leave power accessories behind. In some cases, lower power requirements increase the viability of solar power to replace or supplement battery power, further enhancing the portability. Reducing power consumption also reduces heat dissipation, so that less bulk needs to be dedicated to removing heat from a device.
- There are many approaches to reducing power requirements. Advances in semiconductor manufacturing have permitted smaller and more power efficient circuits. Advances in circuit design and processor architecture have also reduced power requirements. Such advances have reduced power requirements across all types of devices including processors, memories, and interface devices.
- In addition to these hardware-oriented approaches, there are software-oriented approaches to reducing power requirements. Considerable effort has been invested in designing instruction sets and data formats for efficient use of available capacities for computation, storage and communication. As these capacities are used more efficiently, power requirements are reduced. However, as dramatic as power reductions have been to date, further reductions are desired to increase portability and convenience, reduce environmental and financial costs, and achieve other objectives.
- The following presents a simplified summary of the claimed subject matter in order to provide a basic understanding of some aspects of the claimed subject matter. This summary is not an extensive overview of the claimed subject matter. It is intended to neither identify key or critical elements of the claimed subject matter nor delineate the scope of the claimed subject matter. Its sole purpose is to present some concepts of the claimed subject matter in a simplified form as a prelude to the more detailed description that is presented later.
- The present disclosure provides systems, components, and methodologies for writing and/or recording information in memory components. In accordance with various aspects described herein, a hierarchical data structure, such as a binary pyramid structure and/or an N-ary tree structure, is used to record information. Information can be represented as the parity values of a set of nodes in such a data structure, rather than individual nodes themselves.
- In one example, a Champagne Pyramid Parity Check (CPPC) algorithm and/or a Tree-Based Parity Check (TBPC) algorithm can be utilized to reduce the number of toggling operations required to write data to a memory component (e.g., to reduce the number of changes necessary in a memory for converting an original, previously stored set of information to a to-be-written set of information in the memory). The CPPC algorithm and/or the TBPC algorithm can reduce the total number of binary node changes required during the data writing process, thereby reducing memory power consumption.
- To the accomplishment of the foregoing and related ends, certain illustrative aspects of the claimed subject matter are described herein in connection with the following description and the annexed drawings. These aspects are indicative, however, of but a few of the various ways in which the principles of the claimed subject matter can be employed. The claimed subject matter is intended to include all such aspects and their equivalents. Other advantages and novel features of the claimed subject matter can become apparent from the following detailed description when considered in conjunction with the drawings.
- FIG. 1 is a high-level block diagram of a system for recording binary information with low power consumption in accordance with various aspects described herein.
- FIG. 2 is a block diagram of a system that implements an example technique for writing binary information in accordance with various aspects described herein.
- FIGS. 3-5 illustrate example data structures that can be utilized in connection with one or more data processing techniques described herein.
- FIGS. 6-7 illustrate example update and toggle operations that can be performed in connection with one or more data processing techniques described herein.
- FIG. 8 is a block diagram of a system that implements another example technique for writing binary information in accordance with various aspects described herein.
- FIG. 9 illustrates an example data structure that can be utilized for one or more data processing techniques described herein.
- FIGS. 10-11 illustrate respective operations that can be performed in connection with one or more data processing techniques described herein.
- FIGS. 12-14 are flowcharts of respective methods for low-power recording of binary information in accordance with various aspects described herein.
- FIG. 15 is a block diagram of an example operating environment in which various aspects described herein can function.
- FIG. 16 is a block diagram of an example networked computing environment in which various aspects described herein can function.
- The claimed subject matter is now described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the claimed subject matter. It may be evident, however, that the claimed subject matter may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate describing the claimed subject matter.
- As used in this application, the terms “component,” “system,” and the like are intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a server and the server can be a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers. Also, the methods and apparatus of the claimed subject matter, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium, wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the claimed subject matter. The components may communicate via local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the Internet with other systems via the signal).
- Turning now to
FIG. 1 , a block diagram of a system 100 for recording binary information with low power consumption in accordance with various aspects described herein is illustrated. In one example, system 100 can include a memory component 110 for storing binary data. In one example, the memory component 110 initially stores a set of original data. To facilitate efficient recording of to-be-written data into the memory component 110 , the memory component 110 can comprise a parity check component 130 and/or a modification component 140 . - In accordance with one aspect, the
parity check component 130 can divide storage locations in the memory component 110 that store respective bits of original data into groups. These groups can then be respectively characterized by the parity of the binary data values or bits stored at the locations that constitute the respective groups. By way of specific, non-limiting example, binary data can be represented as a binary digit “0” or “1,” and the parity of a group of such data can be even-odd parity such that, for example, the parity of a group is “1” if the number of bits in the group having a value of “1” is odd and “0” if the number is even. It should be appreciated, however, that this constitutes merely one way in which parity values can be obtained and that any suitable technique could be used. - In accordance with another aspect, the parity of respective bits of original data stored at the
memory component 110 as formulated by the parity check component 130 can be utilized as a basis for representing to-be-written data. Further, the parity check component 130 can formulate groups of data bits such that they overlap, thereby allowing the parity of multiple groups to be altered by toggling the value of a single bit in the group. By doing so, the amount of toggling necessary to modify the original data stored by the memory component 110 into the to-be-written data can be reduced, thereby increasing embedding efficiency and reducing the amount of power required for writing data to the memory component 110 . In accordance with various aspects described herein, the parity check component 130 can utilize a variety of algorithms for dividing a set of data bits into groups to facilitate efficient writing of data to the memory component 110 . Examples of algorithms that can be utilized are described in detail infra. - After processing by the
parity check component 130, groups of data bits formed by theparity check component 130 and their respective parity values can be provided to amodification component 140 to facilitate recording of to-be-written data to thememory component 110. In one example, themodification component 140 can compare respective information bits of the to-be-written data to the parity of corresponding groups of bits of the original data. If it is determined that an information bit in the to-be-written data and the parity of its corresponding group of original data bits does not match, a data bit in the group can be toggled to alter the parity of the group such that a match is created. In accordance with one aspect, respective groups of data bits can share bits among each other and/or otherwise overlap. As a result, the parity of multiple groups can be changed with a single toggling operation, thereby reducing the amount of toggling operations required for writing data to thememory component 110. In one example, toggling can be based on one or more techniques as generally known in the art, such as simple toggling of a single bit or pair toggling of a master/slave data bit pair. - It should be appreciated that while
FIG. 1 and the above description relate to techniques for writing data to a memory component 110, various techniques described herein can be applied to any computing and/or other application where data is desirably modified from a first state to a second state. Thus, for example, various aspects described herein can additionally be applied to applications such as disk storage management or communication management. The techniques described herein can be applied to data communication by, for example, buffering and/or maintaining a previously transmitted set of information. For a subsequent communication(s), instead of requiring communication of an entire set of information, one or more toggling positions can instead be transmitted to enable a device receiving the communication to obtain a set of information by toggling at indicated positions in a previously received communication. By transmitting toggling locations rather than the corresponding data itself, communicated information can effectively be compressed prior to transmission, thereby allowing a communication system to make more effective use of network bandwidth and/or other system resources. - Referring now to
FIG. 2, a block diagram of a system 200 that implements an example technique for writing binary information 210 in a memory component 240 is provided. In one example, binary information 210 can be written to the memory component 240 such that it replaces original information 220 pre-recorded in the memory component 240. Further, in accordance with one aspect, the memory component 240 can implement a Champagne Pyramid Parity Check (CPPC) algorithm for writing binary information 210 into the memory component 240 by utilizing subcomponents 244-252. - In accordance with one aspect, the CPPC algorithm can be utilized by the
memory component 240 to reduce power consumption associated with writing binary data by reducing the total number of bits of original information 220 that are required to be changed (e.g., toggled) during the data writing process. In one example, the CPPC algorithm can be used in combination with one or more conventional data processing algorithms to write binary information 210 into the memory component 240. By doing so, the CPPC algorithm can be utilized to improve upon such conventional techniques by, for example, reducing power consumption required by such conventional techniques. - In one example, the
memory component 240 can extend the functionality of traditional data processing and/or storage techniques by borrowing the functionality of various elements of such techniques and applying additional elements to improve their performance. By way of specific example, the memory component 240 can classify binary values stored at various locations therein into groups. For example, locations having a binary value of “0” can be classified into a first group, while locations having a binary value of “1” can be classified into a second group. A comparison component 248 and/or a toggling component 252 can be utilized to force original information 220 stored by the memory component 240 to match corresponding information bits of the binary information 210 to be written to the memory component 240. - In accordance with one aspect, the
memory component 240 can utilize a CPPC algorithm for improved memory writing functionality as follows. First, a champagne pyramid component 244 can be utilized to organize storage locations at the memory component 240 into one or more champagne pyramid structures as described in more detail infra. Following processing by the champagne pyramid component 244, a parity calculation component 246 can be used to identify an array of information bits corresponding to the original information 220 with the same size as that of the binary information 210 to be written. After the parity calculation component 246 identifies such an array, a flavor adding optimization component 250 can then be utilized to find a minimum number of locations that must be toggled to cause the identified array of information bits to match the binary information 210. Data writing can then be processed via the toggling component 252 on the locations marked by the flavor adding optimization component 250. By way of example, the operation of the champagne pyramid component 244, the parity calculation component 246, and/or the flavor adding optimization component 250 can proceed as set forth in the following description. - In accordance with another aspect, the Champagne Pyramid Parity Check algorithm usable by the
memory component 240 is so named because it leverages various properties that can be observed in a champagne pyramid. For example, a two-dimensional 5-level champagne pyramid 300 built from 15 wine glasses numbered 1 through 15 is illustrated by FIG. 3. It can be appreciated that, if champagne is poured into the highest glass in the pyramid 300, the champagne will fill all glasses below the highest glass as it overflows down the pyramid 300. Similarly, in a more complicated scenario, flavorless champagne can be poured into glass 1 at the top of the pyramid 300 while apple-flavored champagne can be poured into glass 4. It can be appreciated that, as time elapses while pouring continues, all of the glasses in the pyramid 300 will come to be filled with champagne. However, considering the glasses at the bottom row of the pyramid 300, it can be appreciated that glasses 11 through 13 would contain apple-flavored champagne at such a time while glasses 14 and 15 would contain only flavorless champagne. - Based on these observations, the rows of the
pyramid 300 can be numbered from the bottom. As illustrated in FIG. 3, glasses 11 through 15 are on the first row and glass 1 is on the fifth row. Thus, for a pyramid 300 with L levels, N successive glasses on the bottom row of the pyramid 300 will contain the same flavor of champagne as a glass on the N-th row of the pyramid 300 if champagne is poured into said glass until it has run off into the bottom row of the pyramid 300. As a result, it can be appreciated that if it is desired to add a common flavor into N successive glasses on the bottom row, the desired flavor can be poured into a single glass on the N-th row of the pyramid 300 instead of adding the flavor on the bottom row for all N glasses. - Similarly, the
champagne pyramid component 244 can arrange information of a binary format into a structure that exhibits the above properties. In one example, the champagne pyramid component 244 can represent a set of original information 220 and its corresponding binary values as a binary pyramid structure, such as the example structure illustrated by diagram 400 in FIG. 4. Each of these binary values can then be treated as a wine glass in a pyramid structure having a predefined scanning order. As diagram 400 further illustrates, the number of information bits that can be held by a champagne pyramid structure is equal to the number of elements on the bottom row of the pyramid. Thus, as the number of memory bits (e.g., M) in the original information 220 is limited and the size of the binary information 210 to be written (e.g., L) is fixed, the number of memory bits of the original information 220 may not be sufficient to build a single L-level champagne pyramid. In such cases, multiple N-level pyramids can be built instead. In one example, N can be constrained by the following equation:
- In accordance with one aspect, writing
binary information 210 such that it replaces original information 220 using CPPC can be dependent on the bottom row of the pyramid structure(s) constructed by the champagne pyramid component 244. As described above with respect to diagram 300, data flow through the structure can be visualized as liquid poured from the top of a champagne pyramid such that eventually all glasses on the bottom row of the pyramid are filled up. Thus, the parity calculation component 246 can begin processing of a pyramid structure(s) by defining the Region of Interest of a wine glass n, ROI(n), as the set of glasses that belong to the possible paths from the top glass of the pyramid to glass n. Thus, for pyramid 300, ROI(11)={1, 2, 4, 7, 11} and ROI(13)={1, 2, 3, 4, 5, 6, 8, 9, 13}. - As noted previously, each wine glass in
pyramid 300 contains a value of either “0” or “1.” Based on this, the parity calculation component 246 can count the number of glasses containing “1” in the region of interest of each glass on the bottom row of the pyramid. In one example, if the number of such glasses is an even number for a glass, the parity of that glass is set to “0.” Similarly, if the number of such glasses is an odd number for a given glass, the parity of that glass can instead be set to “1.” After these calculations, an information array (IA) with the same size as the binary information 210 to be written can be formed. Diagram 400 in FIG. 4 illustrates example parity calculations. - In one example, for each node on the bottom row of a pyramid structure, the
parity calculation component 246 can define the ROI and then count the number of nodes containing a value of “1” in the defined region. By way of specific, non-limiting example, the computational complexity of such operations can be reduced by making use of various properties of the ROI. First, for each node x on the bottom row of a pyramid structure, the parity calculation component can define the “left source” of x, LS(x), the “right source” of x, RS(x), and the “upper source” of x, US(x), as illustrated in diagram 500 in FIG. 5. It should be appreciated that for some cases, LS(x), RS(x), and/or US(x) may not exist. - The ROI of a non-existing node can be represented as an empty set. Otherwise, for a node x, ROI can be calculated as follows:
ROI(x)=ROI(LS(x))∪ROI(RS(x))∪{x}, (2)
ROI(LS(x))∩ROI(RS(x))=ROI(US(x)). (3)
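The ROI values quoted above for pyramid 300, the left-source/right-source decomposition of FIG. 5, and the bottom-row parity counting can be reproduced with a short sketch. Python is used here purely for illustration; the row-by-row glass numbering and the sample glass values are assumptions, not part of the disclosure:

```python
def roi(n: int) -> set[int]:
    """Glasses on any pouring path from the top glass down to glass n,
    with glasses numbered 1..15 row by row from the top as in FIG. 3."""
    level = 1
    while level * (level + 1) // 2 < n:       # find the row containing glass n
        level += 1
    pos = n - level * (level - 1) // 2        # 1-based position within that row
    found: set[int] = set()

    def up(l: int, p: int) -> None:
        found.add(l * (l - 1) // 2 + p)
        if p >= 2:
            up(l - 1, p - 1)                  # left source exists
        if p <= l - 1:
            up(l - 1, p)                      # right source exists

    up(level, pos)
    return found

assert roi(11) == {1, 2, 4, 7, 11}
assert roi(13) == {1, 2, 3, 4, 5, 6, 8, 9, 13}
# The two sources of glass 13 (glasses 8 and 9) overlap exactly in the ROI of
# the glass two rows above it (glass 5), enabling Equation (4)-style sharing:
assert roi(8) & roi(9) == roi(5)
assert roi(8) | roi(9) | {13} == roi(13)

# Bottom-row parities for a hypothetical assignment: "1" at glasses 1 and 4.
values = {g: 1 if g in (1, 4) else 0 for g in range(1, 16)}
ia = [sum(values[g] for g in roi(n)) % 2 for n in range(11, 16)]
assert ia == [0, 0, 0, 1, 1]
```

Consistent with the flavoring analogy, glasses 11 through 13 see both poured “1”s and cancel to parity “0,” while glasses 14 and 15 see only the value at the top.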
- Based on the ROI of a node x as calculated in Equations (2)-(3), the parity of a node x can be calculated as follows:
-
Parity(x)=Parity(LS(x))⊕Parity(RS(x))⊕Parity(US(x))⊕x. (4) - Thus, from Equation (4), the
parity calculation component 246 can utilize a smart parity calculation method as follows. The parity calculation component 246 can start from the top of the pyramid and process each node in increasing order of glass number, as shown in diagram 300. If US(x) is “0” for a given node, no further operation is needed. Otherwise, the values contained by LS(x), RS(x), and x can be toggled as illustrated by diagrams 610 and 620 in FIG. 6. - Upon generation of an IA at the
parity calculation component 246, the IA can be processed with an array corresponding to the binary information 210 being written using an exclusive-OR (XOR) operator and/or another appropriate operator at the comparison component 248. For each value of “1” that appears in the resulting array, the corresponding location on the bottom row of the pyramid structure can be marked as “To Be Flavored” (TBF), as illustrated in diagram 710 in FIG. 7. Theoretically, it can be appreciated that a node on the bottom row of the pyramid will be marked as TBF with a probability of 0.5. Toggling optimization can then be performed by the flavor adding optimization component 250 by employing the observation that adding flavor into N successive wine glasses on the bottom row of a champagne pyramid is equivalent to adding flavor into one wine glass on the N-th row. Thus, if N successive nodes in a pyramid structure are marked as TBF, the flavor adding optimization can designate only one location in the original information 220 for toggling instead of requiring toggling of all N locations. - An example of the flavor adding optimization used in connection with data processing is illustrated by diagrams 710 and 720 in
FIG. 7. As FIG. 7 illustrates, given the pyramid structure 400 illustrated in FIG. 4 and a 5-bit set of binary information {11000} to be written, the IA can be compared with the binary information (e.g., via an XOR operation) to obtain a resulting array of {01110}. As illustrated by diagram 710, the second, third, and fourth nodes on the bottom row can then be marked as TBF. Instead of toggling all three glasses, however, the memory component 240 can fully record the binary information 210 by toggling only one glass on the third row, as indicated by FIG. 7. - Referring next to
FIG. 8, an additional system 800 that can be utilized for low-power recording of binary information 810 in accordance with various aspects described herein is illustrated. As illustrated by FIG. 8, binary information 810 can be written into a memory component 840 that stores previously-recorded original information 820. In accordance with one aspect, the memory component 840 can implement a Tree-Based Parity Check (TBPC) algorithm for recording binary information 810 by utilizing subcomponents 844-854. - In accordance with one aspect, a TBPC algorithm can be utilized by the
memory component 840 in conjunction with one or more existing data processing techniques. In one example, the memory component 840 can utilize a comparison component 848 for comparing binary values of original information 820 with corresponding to-be-written binary information 810, and/or a modification location determination component 852 for deciding which bit(s) need modification to hold to-be-written binary data 810. Modification can then be performed based on the comparisons via a modification component 854. For every single to-be-written bit, it can be observed that the probability that the original information 820 will require modification is 0.5. Accordingly, the memory component 840 can utilize the TBPC algorithm to attempt to reduce the probability of modifying the original information 820. - In accordance with one aspect, the
memory component 840 can utilize the TBPC algorithm to achieve an improvement in power consumption by reducing the number of to-be-written bits (e.g., nodes) that require modification operations. In one example, the memory component 840 can employ a tree component 844, a parity calculation component 846, and/or a fountain investigation component 850 to extend the functionality of a traditional data processing scheme by way of the TBPC algorithm. In general, the TBPC algorithm leverages relationships among ancestors and descendants in an N-ary tree to improve data writing functionality. By way of example, the operation of the tree component 844, parity calculation component 846, and/or fountain investigation component 850 is set forth in further detail in the following description. As used herein, the size of binary information 810 to be written is represented as L. - In conventional data processing algorithms, the values of bit locations (nodes) can be classified as either “0” or “1.” Thus, 16 bits of information traditionally require 16 nodes to be represented. Immediately after classification, these values are then compared with corresponding bits of the
binary information 810. If respective values are the same as a to-be-written bit, no further operations are performed. Otherwise, one or more processes are carried out to toggle the value. - In contrast, by way of example, the
tree component 844 can operate as follows. First, the tree component 844 can populate an N-ary complete tree, herein referred to as a “master tree.” It can be appreciated that because the master tree is an N-ary complete tree, every node of the master tree, except for leaf nodes, can be configured to have N child nodes. In one example, one leaf node can be utilized to hold one information bit, which can correspond to the parity value of the node. Accordingly, it can be appreciated that to write binary information 810 of L bits, a master tree can be required to have L leaves. FIG. 9 illustrates an example master tree 900 that can be created by the tree component 844 for N=2 and L=16. As illustrated by FIG. 9, 31 nodes are utilized to represent 16 bits of information. - Based on a master tree constructed by the
tree component 844, a parity calculation component 846 can be utilized to determine the information bits represented by each leaf node in the master tree. In one example, this can be accomplished for a leaf node by traveling from the leaf node to the root node of the master tree. If the number of occurrences of “1” values is an odd number, the information bit of the leaf node can be regarded as “1.” Otherwise, the information bit can be regarded as “0.” These calculations are further illustrated in FIG. 9 for master tree 900. As shown, the information array (IA) listed under the tree represents the information written in the master tree. - Next, by performing bitwise logical exclusive-OR (XOR) operations between respective bits of the
binary information 810 and the information carried by the master tree (the original information), the comparison component 848 can obtain a toggle array. As an example of this comparison, it can be observed that for the example master tree 900, the obtained information array is {1110110101111000}. Assuming a binary information array of {0010001001011110}, a resultant toggle array obtained by the comparison component 848 becomes {1100111100100110}. This comparison is illustrated by diagram 1000 in FIG. 10. - In accordance with one aspect, respective values of “1” in the toggle array can represent the fact that the corresponding nodes of the master tree representing the
original information 820 require toggling. However, it can be appreciated that power consumption is introduced by any single modification to the original information 820. Accordingly, to reduce power consumption, TBPC can be utilized to minimize the number of “1” values in the toggle array. Referring back to FIG. 9, when the master tree 900 is carefully examined, it can be observed that a single change in any node, either from “1” to “0” or from “0” to “1,” can result in a change in the parity of all of the descendants of the changed node. Thus, instead of changing the values of N sibling nodes in the master tree 900, a single change in their common parent node can give the same effect. - To leverage the above observation, a
fountain investigation component 850 can build a toggle tree with the same size as the master tree. In one example, the leaf nodes of the toggle tree can be populated by the elements of the toggle array in the order that the elements appear in the toggle array. The remaining nodes in the tree can be initially populated using any suitable value. In one example, the fountain investigation component can then begin analysis at the root of the toggle tree as follows. For a given node, if all N of the child nodes of the given node contain the value “1,” the child nodes are updated with the value “0” and the node being examined can be set to “1.” Otherwise, the node being processed can be reset to “0.” An example toggle tree generation and fountain investigation process for the example toggle array given above is illustrated in diagram 1000 in FIG. 10. - In one example, to write
binary information 810 into the memory component 840, to-be-toggled sites in the master tree corresponding to locations in the toggle tree that have respective values of “1” can then be toggled. After parity calculation for the master tree, the obtained information array should match the binary information array corresponding to the binary information 810. An example data processing process is illustrated by diagram 1100 in FIG. 11. - In accordance with one aspect, the maximum achievable payload of the
memory component 840 can be found as follows. Initially, it can be appreciated that because the number of nodes in original information 820 is limited, the size of a master tree that can be formed in the tree formation process is also limited. Further, as noted above, the master tree is required to have L leaf nodes to represent L binary information bits. Thus, to form an N-ary complete tree with L leaf nodes, the total number of nodes required, nNodes, can be found by the following equation:
nNodes=(N·L−1)/(N−1). (6)
- From Equation (6), it can be observed that as N increases, the number of nodes required in the master tree decreases. Thus, because the number of nodes is limited, the payload that can be written by the
memory component 840 increases as N increases. For a fixed binary information size L and available number of nodes M, the minimum N that can be used can be found as follows: -
N=⌈(M−1)/(M−L)⌉. (7)
- In accordance with another aspect, the
parity calculation component 846 can operate using a low-complexity parity calculation technique as follows. First, parity(x) can be defined as the parity of the number of occurrences of “1” in the path from a node x to the root node of a master tree. It can be observed that parity(x) is equal to the value of parity(parent(x)) plus the value of node x under modulo-2 (bitwise exclusive-OR) addition. Thus, the parity calculation component 846 can traverse the master tree starting from the root of the tree and, for each node x being processed, update the parity of its child node y by adding the parity of node x and the value of node y. - Referring now to
FIGS. 12-14, methodologies that can be implemented in accordance with various aspects described herein are illustrated. While, for purposes of simplicity of explanation, the methodologies are shown and described as a series of blocks, it is to be understood and appreciated that the claimed subject matter is not limited by the order of the blocks, as some blocks may, in accordance with the claimed subject matter, occur in different orders and/or concurrently with other blocks than shown and described herein. Moreover, not all illustrated blocks may be required to implement the methodologies in accordance with the claimed subject matter. - Furthermore, the claimed subject matter may be described in the general context of computer-executable instructions, such as program modules, executed by one or more components. Generally, program modules include routines, programs, objects, data structures, etc., that perform particular tasks or implement particular abstract data types. Typically, the functionality of the program modules may be combined or distributed as desired in various embodiments. Furthermore, as will be appreciated, various portions of the disclosed systems above and methods below may include or consist of artificial intelligence or knowledge- or rule-based components, sub-components, processes, means, methodologies, or mechanisms (e.g., support vector machines, neural networks, expert systems, Bayesian belief networks, fuzzy logic, data fusion engines, classifiers . . . ). Such components, inter alia, can automate certain mechanisms or processes performed thereby to make portions of the systems and methods more adaptive as well as efficient and intelligent.
-
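Before turning to the flowcharts, the TBPC write path described above (master tree, toggle array, and fountain investigation) can be sketched end to end. The array-based tree layout, the random sample data, and the helper names below are illustrative assumptions rather than a prescribed implementation:

```python
import random

N_LEAVES = 16                  # L = 16 information bits, N = 2 (binary master tree)
SIZE = 2 * N_LEAVES            # 1-indexed array tree: nodes 1..31, leaves 16..31

random.seed(1)
master = [0] + [random.randint(0, 1) for _ in range(SIZE - 1)]
target = [random.randint(0, 1) for _ in range(N_LEAVES)]

def leaf_parities(tree):
    """Information array: parity of '1's on each leaf-to-root path."""
    out = []
    for leaf in range(N_LEAVES, SIZE):
        node, p = leaf, 0
        while node >= 1:
            p ^= tree[node]
            node //= 2
        out.append(p)
    return out

# Comparison step: toggle-tree leaves mark positions whose parity must change.
toggle = [0] * SIZE
for i, (a, b) in enumerate(zip(leaf_parities(master), target)):
    toggle[N_LEAVES + i] = a ^ b

def fountain(node):
    """Fountain investigation: two sibling '1's collapse (post-order, so
    children are finalized first) into a single '1' in their common parent."""
    if node >= N_LEAVES:
        return
    fountain(2 * node)
    fountain(2 * node + 1)
    if toggle[2 * node] == 1 and toggle[2 * node + 1] == 1:
        toggle[2 * node] = toggle[2 * node + 1] = 0
        toggle[node] = 1

fountain(1)

# Writing step: toggling an internal master node flips the parity of every
# leaf below it, so the collapsed toggle tree still reaches the target.
for i in range(1, SIZE):
    if toggle[i]:
        master[i] ^= 1
assert leaf_parities(master) == target
```

The final assertion holds for any sample data, since each collapse preserves, for every leaf, the parity of toggling indications on its root path.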
FIG. 12 illustrates a method of low-power recording of binary information in accordance with various aspects described herein. At 1202, binary information to be stored by a memory (e.g., a memory component 110) is identified. At 1204, a hierarchical data structure is generated (e.g., by a parity check component 130) that comprises a plurality of nodes corresponding to respective information bits currently stored by the memory. At 1206, respective parity values of nodes in a bottom row of the hierarchical data structure generated at 1204 are determined. At 1208, information bits are toggled (e.g., by a modification component 140) corresponding to one or more nodes in the hierarchical data structure generated at 1204 such that the respective parity values of the nodes in the bottom row of the hierarchical data structure as determined at 1206 match respective corresponding bits in the binary information identified at 1202 to be stored by the memory. -
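The comparison-and-toggle principle shared by the methods of FIGS. 12-14 can be sketched minimally: only positions where the stored bits and the to-be-written bits disagree are toggled. Python is used for illustration, the helper names are hypothetical, and the sample arrays are borrowed from the FIG. 10 example:

```python
def toggle_positions(original: str, target: str) -> list[int]:
    """Indices at which the stored bits differ from the bits to be written."""
    return [i for i, (a, b) in enumerate(zip(original, target)) if a != b]

def write_by_toggling(original: str, positions: list[int]) -> str:
    """Apply single-bit toggles at the indicated positions."""
    bits = [int(c) for c in original]
    for i in positions:
        bits[i] ^= 1
    return "".join(map(str, bits))

# With no parity grouping, the FIG. 10 arrays require 9 of 16 toggles;
# the parity-based methods below aim to reduce that count.
ia, target = "1110110101111000", "0010001001011110"
pos = toggle_positions(ia, target)
assert len(pos) == 9
assert write_by_toggling(ia, pos) == target
```

The same sketch applies to the communication use case noted earlier: transmitting `pos` instead of the full array lets a receiver reconstruct the new data from the previously received one.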
FIG. 13 illustrates a method 1300 of writing binary information (e.g., binary information 210) to a memory (e.g., a memory component 240) to replace original information (e.g., original information 220) previously stored in the memory. In accordance with one aspect, method 1300 can be utilized to implement a Champagne Pyramid Parity Check (CPPC) algorithm. At 1302, binary information to be written in the memory is provided. - At 1304, at least one binary pyramid structure having nodes representing original information is provided (e.g., by a champagne pyramid component 244). In one example, the number of nodes at the bottom row of the at least one pyramid structure provided at 1304 can equal the number of bits in the binary information provided at 1302. At 1306, for each node at the bottom row of the at least one pyramid provided at 1304, parity is calculated (e.g., by a parity calculation component 246) for the combined set of the respective node and all other nodes in the at least one pyramid having the respective node as a direct or indirect successor.
- At 1308, the parity of respective nodes at the bottom row of the at least one pyramid provided at 1304, as calculated at 1306, is compared to corresponding bits of the binary information provided at 1302 (e.g., by a comparison component 248). At 1310, groups of successive nodes for which the comparison at 1308 indicates respective parity values that differ from corresponding bits of the binary information provided at 1302 are identified (e.g., by a flavor adding optimization component 250). At 1312, the binary information provided at 1302 is written in the memory at least in part by toggling nodes (e.g., using a toggling component 252) corresponding to the respective lowest common predecessor nodes for the groups identified at 1310.
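The grouping and lowest-common-predecessor toggling of the preceding steps can be sketched as follows. The function name is hypothetical, and the mapping assumes the FIG. 3 numbering with a pyramid having as many levels as bottom-row nodes, so that a run of n marked nodes starting at position p maps to the single glass n rows above:

```python
def tbf_toggle_sites(tbf: list[int]) -> list[int]:
    """Map each maximal run of successive TBF marks on the bottom row of an
    L-level pyramid to one glass on the row matching the run's length."""
    L = len(tbf)
    sites, i = [], 0
    while i < L:
        if tbf[i] == 0:
            i += 1
            continue
        j = i
        while j < L and tbf[j] == 1:          # extend to the maximal run
            j += 1
        n, p = j - i, i + 1                   # run length, 1-based start
        level = L - n + 1                     # row counted from the top
        sites.append(level * (level - 1) // 2 + p)
        i = j
    return sites

# The FIG. 7 example: TBF marks at bottom-row positions 2-4 of pyramid 300
# collapse to a single toggle of glass 5 on the third row from the bottom.
assert tbf_toggle_sites([0, 1, 1, 1, 0]) == [5]
# A lone mark maps to the bottom-row glass itself (glass 11 is position 1).
assert tbf_toggle_sites([1, 0, 0, 0, 0]) == [11]
```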
- Turning to
FIG. 14, another method 1400 of writing binary information (e.g., binary information 810) into a memory (e.g., a memory component 840) is illustrated. In accordance with one aspect, method 1400 can be utilized to implement a Tree-Based Parity Check (TBPC) algorithm. At 1402, binary information to be written in memory is provided. - At 1404, one or more N-ary master trees having nodes representing original information are provided (e.g., by a tree component 844). In one example, the number of leaf nodes at the bottom row in the one or more trees can be made equal to the number of bits in the binary information provided at 1402. At 1406, for each leaf node in the master trees provided at 1404, parity is calculated (e.g., by a parity calculation component 846) of nodes in a path from the leaf node to the root node of the master tree to which the leaf node belongs. At 1408, a toggle array is obtained (e.g., by a comparison component 848) by performing exclusive-OR operations between respective parity values calculated at 1406 and corresponding bits of the binary information identified at 1402. At 1410, one or more toggle trees are constructed that correspond to the one or more master trees constructed at 1404, using the toggle array obtained at 1408 as leaf nodes. Finally, at 1412, the binary information provided at 1402 is written into the memory at least in part by toggling nodes in the one or more trees created at 1404 (e.g., via a modification component 854) corresponding to the respective highest nodes in the toggle trees constructed at 1410 for which all leaf nodes provide a toggling indication (e.g., as determined by a
fountain investigation component 850 and/or a modification location determination component 852). - In order to provide additional context for various aspects described herein,
FIG. 15 and the following discussion are intended to provide a brief, general description of a suitable computing environment 1500 in which various aspects of the claimed subject matter can be implemented. Additionally, while the above features have been described in the general context of computer-executable instructions that may run on one or more computers, those skilled in the art will recognize that said features can also be implemented in combination with other program modules and/or as a combination of hardware and software. - Generally, program modules include routines, programs, components, data structures, etc., that perform particular tasks or implement particular abstract data types. Moreover, those skilled in the art will appreciate that the claimed subject matter can be practiced with other computer system configurations, including single-processor or multiprocessor computer systems, minicomputers, mainframe computers, as well as personal computers, hand-held computing devices, microprocessor-based or programmable consumer electronics, and the like, each of which can be operatively coupled to one or more associated devices.
- The illustrated aspects may also be practiced in distributed computing environments where certain tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules can be located in both local and remote memory storage devices.
- A computer typically includes a variety of computer-readable media. Computer-readable media can be any available media that can be accessed by the computer and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer-readable media can comprise computer storage media and communication media. Computer storage media can include both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computer.
- Communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media.
- With reference again to
FIG. 15, an exemplary environment 1500 for implementing various aspects described herein includes a computer 1502, the computer 1502 including a processing unit 1504, a system memory 1506 and a system bus 1508. The system bus 1508 couples system components including, but not limited to, the system memory 1506 to the processing unit 1504. The processing unit 1504 can be any of various commercially available processors. Dual microprocessors and other multi-processor architectures may also be employed as the processing unit 1504. - The
system bus 1508 can be any of several types of bus structure that may further interconnect to a memory bus (with or without a memory controller), a peripheral bus, and a local bus using any of a variety of commercially available bus architectures. The system memory 1506 includes read-only memory (ROM) 1510 and random access memory (RAM) 1512. A basic input/output system (BIOS) is stored in a non-volatile memory 1510 such as ROM, EPROM, EEPROM, which BIOS contains the basic routines that help to transfer information between elements within the computer 1502, such as during start-up. The RAM 1512 can also include a high-speed RAM such as static RAM for caching data. - The
computer 1502 further includes an internal hard disk drive (HDD) 1514 (e.g., EIDE, SATA), which internal hard disk drive 1514 may also be configured for external use in a suitable chassis (not shown), a magnetic floppy disk drive (FDD) 1516 (e.g., to read from or write to a removable diskette 1518) and an optical disk drive 1520 (e.g., reading a CD-ROM disk 1522 or, to read from or write to other high capacity optical media such as the DVD). The hard disk drive 1514, magnetic disk drive 1516 and optical disk drive 1520 can be connected to the system bus 1508 by a hard disk drive interface 1524, a magnetic disk drive interface 1526 and an optical drive interface 1528, respectively. The interface 1524 for external drive implementations includes at least one or both of Universal Serial Bus (USB) and IEEE-1394 interface technologies. Other external drive connection technologies are within contemplation of the subject disclosure. - The drives and their associated computer-readable media provide nonvolatile storage of data, data structures, computer-executable instructions, and so forth. For the
computer 1502, the drives and media accommodate the storage of any data in a suitable digital format. Although the description of computer-readable media above refers to an HDD, a removable magnetic diskette, and removable optical media such as a CD or DVD, it should be appreciated by those skilled in the art that other types of media which are readable by a computer, such as zip drives, magnetic cassettes, flash memory cards, cartridges, and the like, may also be used in the exemplary operating environment, and further, that any such media may contain computer-executable instructions for performing the methods described herein. - A number of program modules can be stored in the drives and
RAM 1512, including an operating system 1530, one or more application programs 1532, other program modules 1534 and program data 1536. All or portions of the operating system, applications, modules, and/or data can also be cached in the RAM 1512. It is appreciated that the claimed subject matter can be implemented with various commercially available operating systems or combinations of operating systems. - A user can enter commands and information into the
computer 1502 through one or more wired/wireless input devices, e.g., a keyboard 1538 and a pointing device, such as a mouse 1540. Other input devices (not shown) may include a microphone, an IR remote control, a joystick, a game pad, a stylus pen, a touch screen, or the like. These and other input devices are often connected to the processing unit 1504 through an input device interface 1542 that is coupled to the system bus 1508, but can be connected by other interfaces, such as a parallel port, a serial port, an IEEE-1394 port, a game port, a USB port, an IR interface, etc. - A
monitor 1544 or other type of display device is also connected to the system bus 1508 via an interface, such as a video adapter 1546. In addition to the monitor 1544, a computer typically includes other peripheral output devices (not shown), such as speakers, printers, etc. - The
computer 1502 may operate in a networked environment using logical connections via wired and/or wireless communications to one or more remote computers, such as a remote computer(s) 1548. The remote computer(s) 1548 can be a workstation, a server computer, a router, a personal computer, a portable computer, a microprocessor-based entertainment appliance, a peer device or other common network node, and typically includes many or all of the elements described relative to the computer 1502, although, for purposes of brevity, only a memory/storage device 1550 is illustrated. The logical connections depicted include wired/wireless connectivity to a local area network (LAN) 1552 and/or larger networks, e.g., a wide area network (WAN) 1554. Such LAN and WAN networking environments are commonplace in offices and companies, and facilitate enterprise-wide computer networks, such as intranets, all of which may connect to a global communications network, e.g., the Internet. - When used in a LAN networking environment, the
computer 1502 is connected to the local network 1552 through a wired and/or wireless communication network interface or adapter 1556. The adapter 1556 may facilitate wired or wireless communication to the LAN 1552, which may also include a wireless access point disposed thereon for communicating with the wireless adapter 1556. - When used in a WAN networking environment, the
computer 1502 can include a modem 1558, or is connected to a communications server on the WAN 1554, or has other means for establishing communications over the WAN 1554, such as by way of the Internet. The modem 1558, which can be internal or external and a wired or wireless device, is connected to the system bus 1508 via the input device interface 1542. In a networked environment, program modules depicted relative to the computer 1502, or portions thereof, can be stored in the remote memory/storage device 1550. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers can be used. - The
computer 1502 is operable to communicate with any wireless devices or entities operatively disposed in wireless communication, e.g., a printer, scanner, desktop and/or portable computer, portable data assistant, communications satellite, any piece of equipment or location associated with a wirelessly detectable tag (e.g., a kiosk, news stand, restroom), and telephone. This includes at least Wi-Fi and Bluetooth™ wireless technologies. Thus, the communication can be a predefined structure as with a conventional network or simply an ad hoc communication between at least two devices. - Wi-Fi, or Wireless Fidelity, is a wireless technology similar to that used in a cell phone that enables a device to send and receive data anywhere within the range of a base station. Wi-Fi networks use IEEE-802.11 (a, b, g, etc.) radio technologies to provide secure, reliable, and fast wireless connectivity. A Wi-Fi network can be used to connect computers to each other, to the Internet, and to wired networks (which use IEEE-802.3 or Ethernet). Wi-Fi networks operate in the unlicensed 2.4 and 5 GHz radio bands, at an 11 Mbps (802.11b) or 54 Mbps (802.11a) data rate, for example, or with products that contain both bands (dual band). Thus, networks using Wi-Fi wireless technology can provide real-world performance similar to a 10BaseT wired Ethernet network.
- Referring now to
FIG. 16, there is illustrated a schematic block diagram of an exemplary computer compilation system operable to execute the disclosed architecture. The system 1600 includes one or more client(s) 1602. The client(s) 1602 can be hardware and/or software (e.g., threads, processes, computing devices). In one example, the client(s) 1602 can house cookie(s) and/or associated contextual information by employing one or more features described herein. - The
system 1600 also includes one or more server(s) 1604. The server(s) 1604 can also be hardware and/or software (e.g., threads, processes, computing devices). In one example, the servers 1604 can house threads to perform transformations by employing one or more features described herein. One possible communication between a client 1602 and a server 1604 can be in the form of a data packet adapted to be transmitted between two or more computer processes. The data packet may include a cookie and/or associated contextual information, for example. The system 1600 includes a communication framework 1606 (e.g., a global communication network such as the Internet) that can be employed to facilitate communications between the client(s) 1602 and the server(s) 1604. - Communications can be facilitated via a wired (including optical fiber) and/or wireless technology. The client(s) 1602 are operatively connected to one or more client data store(s) 1608 that can be employed to store information local to the client(s) 1602 (e.g., cookie(s) and/or associated contextual information). Similarly, the server(s) 1604 are operatively connected to one or more server data store(s) 1610 that can be employed to store information local to the
servers 1604. - The claimed subject matter has been described herein by way of examples. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, for the avoidance of doubt, such terms are intended to be inclusive in a manner similar to the term “comprising” as an open transition word without precluding any additional or other elements.
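- The client/server data-packet exchange described above (a client transmitting a packet, such as a cookie with associated contextual information, to a server over a communication framework) can be sketched with standard sockets. This is an illustrative sketch only, not part of the patent disclosure; the payload and helper names are invented for the example:

```python
import socket
import threading

def serve_once(srv: socket.socket, received: list) -> None:
    """Accept one connection, read the client's 'data packet',
    and reply with an acknowledgement."""
    conn, _ = srv.accept()
    with conn:
        packet = conn.recv(1024)          # e.g., cookie + contextual info
        received.append(packet)
        conn.sendall(b"ack:" + packet)

# Server side: listen on an ephemeral localhost port.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
received = []
t = threading.Thread(target=serve_once, args=(srv, received))
t.start()

# Client side: transmit a data packet containing a cookie.
cli = socket.socket()
cli.connect(srv.getsockname())
cli.sendall(b"cookie=abc123")
reply = cli.recv(1024)
cli.close()
t.join()
srv.close()
```

In a real deployment the "communication framework" would be the Internet or a LAN rather than a loopback socket, but the packet-in, acknowledgement-out shape is the same.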
- Additionally, the disclosed subject matter can be implemented as a system, method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer or processor based device to implement aspects detailed herein. The terms “article of manufacture,” “computer program product” or similar terms, where used herein, are intended to encompass a computer program accessible from any computer-readable device, carrier, or media. For example, computer readable media can include but are not limited to magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips . . . ), optical disks (e.g., compact disk (CD), digital versatile disk (DVD) . . . ), smart cards, and flash memory devices (e.g., card, stick). Additionally, it is known that a carrier wave can be employed to carry computer-readable electronic data such as those used in transmitting and receiving electronic mail or in accessing a network such as the Internet or a local area network (LAN).
- The aforementioned systems have been described with respect to interaction between several components. It can be appreciated that such systems and components can include those components or specified sub-components, some of the specified components or sub-components, and/or additional components, according to various permutations and combinations of the foregoing. Sub-components can also be implemented as components communicatively coupled to other components rather than included within parent components, e.g., according to a hierarchical arrangement. Additionally, it should be noted that one or more components can be combined into a single component providing aggregate functionality or divided into several separate sub-components, and any one or more middle layers, such as a management layer, can be provided to communicatively couple to such sub-components in order to provide integrated functionality. Any components described herein can also interact with one or more other components not specifically described herein but generally known by those of skill in the art.
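- The parity/toggling idea named in the title and index terms (writing binary information so that the parity of a group of nodes encodes the data, toggling as few nodes as possible) can be illustrated with a minimal sketch. This is a hypothetical reconstruction for illustration, not the claimed method; the function names are invented:

```python
def embed_bit(nodes, bit):
    """Return a copy of `nodes` whose parity (sum mod 2) equals `bit`,
    toggling at most one node.

    If the parity already matches, nothing is written -- zero toggles,
    which is the low-power case."""
    out = list(nodes)
    if sum(out) % 2 != bit:
        out[0] ^= 1  # a single toggle flips the group parity
    return out

def extract_bit(nodes):
    """Recover the embedded bit as the group parity."""
    return sum(nodes) % 2

# Half of all groups already carry the desired parity, so on average
# embedding costs only half a toggle per bit.
group = [1, 0, 1, 1]           # parity 1
same = embed_bit(group, 1)     # parity already matches: no toggle
flipped = embed_bit(group, 0)  # exactly one node toggled
```

The same parity trick underlies the related halftone-watermarking application listed above, where the "nodes" are candidate pixel sites and minimizing toggles minimizes distortion rather than power.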
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/062,138 US20080250056A1 (en) | 2007-04-04 | 2008-04-03 | Method and apparatus for writing binary data with low power consumption |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US90751307P | 2007-04-04 | 2007-04-04 | |
US12/062,138 US20080250056A1 (en) | 2007-04-04 | 2008-04-03 | Method and apparatus for writing binary data with low power consumption |
Publications (1)
Publication Number | Publication Date |
---|---|
US20080250056A1 true US20080250056A1 (en) | 2008-10-09 |
Family
ID=39826633
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/060,949 Active 2030-05-09 US7986441B2 (en) | 2007-04-04 | 2008-04-02 | Embedding watermark into halftone image with low distortion using parity values for overlapping groups of candidate sites |
US12/062,138 Abandoned US20080250056A1 (en) | 2007-04-04 | 2008-04-03 | Method and apparatus for writing binary data with low power consumption |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/060,949 Active 2030-05-09 US7986441B2 (en) | 2007-04-04 | 2008-04-02 | Embedding watermark into halftone image with low distortion using parity values for overlapping groups of candidate sites |
Country Status (6)
Country | Link |
---|---|
US (2) | US7986441B2 (en) |
EP (1) | EP2137675A4 (en) |
JP (1) | JP4920783B2 (en) |
KR (2) | KR101305752B1 (en) |
CN (1) | CN101765847B (en) |
WO (1) | WO2008124528A2 (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9288191B1 (en) | 2011-12-13 | 2016-03-15 | Ciphercloud, Inc. | System and method to anonymize data transmitted to a destination computing device |
US9292696B1 (en) | 2011-03-08 | 2016-03-22 | Ciphercloud, Inc. | System and method to anonymize data transmitted to a destination computing device |
US9300637B1 (en) * | 2011-03-08 | 2016-03-29 | Ciphercloud, Inc. | System and method to anonymize data transmitted to a destination computing device |
US9338220B1 (en) | 2011-03-08 | 2016-05-10 | Ciphercloud, Inc. | System and method to anonymize data transmitted to a destination computing device |
US9356993B1 (en) | 2011-03-08 | 2016-05-31 | Ciphercloud, Inc. | System and method to anonymize data transmitted to a destination computing device |
US9413526B1 (en) | 2011-03-08 | 2016-08-09 | Ciphercloud, Inc. | System and method to anonymize data transmitted to a destination computing device |
US9432342B1 (en) | 2011-03-08 | 2016-08-30 | Ciphercloud, Inc. | System and method to anonymize data transmitted to a destination computing device |
US9667741B1 (en) | 2011-03-08 | 2017-05-30 | Ciphercloud, Inc. | System and method to anonymize data transmitted to a destination computing device |
US9852311B1 (en) | 2011-03-08 | 2017-12-26 | Ciphercloud, Inc. | System and method to anonymize data transmitted to a destination computing device |
US11228566B1 (en) | 2011-03-08 | 2022-01-18 | Ciphercloud, Inc. | System and method to anonymize data transmitted to a destination computing device |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7986441B2 (en) * | 2007-04-04 | 2011-07-26 | Wong Technologies L.L.C. | Embedding watermark into halftone image with low distortion using parity values for overlapping groups of candidate sites |
US8049930B2 (en) * | 2008-09-15 | 2011-11-01 | National Taiwan University Of Science And Technology | Method of halftone watermarking for hiding multi-tone watermark or two-tone watermark |
US9384410B2 (en) | 2012-05-21 | 2016-07-05 | Nvidia Corporation | Method and system for image compression while encoding at least one extra bit |
KR101428028B1 (en) * | 2013-01-11 | 2014-08-11 | 동국대학교 산학협력단 | Apparatus and method for steganography, apparatus and method for data restoration |
DE102013103613B3 (en) * | 2013-04-10 | 2014-09-18 | Cüneyt Göktekin | Generation and recognition of forgery-proof printable image information data |
KR102137686B1 (en) | 2013-08-16 | 2020-07-24 | 삼성전자주식회사 | Method for controlling an content integrity and an electronic device |
US9819969B2 (en) * | 2013-11-26 | 2017-11-14 | Nvidia Corporation | Generalization of methods and systems for image compression while encoding at least one extra bit |
CN104917989A (en) * | 2014-03-11 | 2015-09-16 | 移康智能科技(上海)有限公司 | Hierarchical watermark adding method and system |
KR101706122B1 (en) * | 2015-11-10 | 2017-02-13 | 주식회사 더코더 | Method of manufacturing hologram metallic sticket inserting data |
US10037587B2 (en) * | 2016-11-23 | 2018-07-31 | Macau University Of Science And Technology | Color image watermarking |
CN109685708B (en) * | 2018-12-26 | 2024-02-06 | 珠海奔图电子有限公司 | Image processing method and device, electronic equipment and computer readable storage medium |
KR102395647B1 (en) * | 2020-04-13 | 2022-05-09 | 주식회사 한글과컴퓨터 | Electronic device that enables the insertion of data into image contained in electronic document and operating method thereof |
US11348594B2 (en) * | 2020-06-11 | 2022-05-31 | Qualcomm Incorporated | Stream conformant bit error resilience |
KR102337677B1 (en) * | 2020-07-16 | 2021-12-09 | (주)휴먼스케이프 | System for embedding digital verification fingerprint and Method thereof |
KR102642455B1 (en) | 2023-07-18 | 2024-02-29 | (주)티엠에스 아이앤티엘 | Apparel subsidiary material applied with hologram and manufacturing method |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5448592A (en) * | 1990-07-26 | 1995-09-05 | British Telecommunications Public Limited Company | Coded QAM system |
US5675590A (en) * | 1994-11-23 | 1997-10-07 | At&T Wireless Services, Inc. | Cyclic trellis coded modulation |
US5960041A (en) * | 1995-09-21 | 1999-09-28 | Lucent Technologies Inc. | Method and apparatus for generating high rate codes for recording information on a magnetic medium |
US6665832B1 (en) * | 2000-03-31 | 2003-12-16 | Qualcomm, Incorporated | Slotted mode decoder state metric initialization |
US7197691B2 (en) * | 2000-05-03 | 2007-03-27 | University Of Southern California | Reduced-latency soft-in/soft-out module |
US7318185B2 (en) * | 2001-08-23 | 2008-01-08 | Nortel Networks Limited | Method and apparatus for scrambling based peak-to-average power ratio reduction without side information |
US20080247002A1 (en) * | 2007-04-04 | 2008-10-09 | The Hong Kong University Of Science And Technology | Multimedia watermarking techniques with low distortion |
US7707479B2 (en) * | 2005-12-13 | 2010-04-27 | Samsung Electronics Co., Ltd. | Method of generating structured irregular low density parity check codes for wireless systems |
US7765456B1 (en) * | 2005-03-31 | 2010-07-27 | Xilinx, Inc. | Optimal multi-user orthogonal variable spreading factor (OVSF) code generator |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020009208A1 (en) * | 1995-08-09 | 2002-01-24 | Adnan Alattar | Authentication of physical and electronic media objects using digital watermarks |
US7058199B1 (en) * | 2000-08-14 | 2006-06-06 | The Hong Kong University Of Science And Technology | Methods and apparatus for hiding data in halftone images |
US6993150B2 (en) * | 2001-01-24 | 2006-01-31 | Digimarc Corporation | Halftone primitive watermarking and related applications |
US6775394B2 (en) * | 2002-03-12 | 2004-08-10 | Matsushita Electric Industrial Co., Ltd. | Digital watermarking of binary document using halftoning |
GB2400526B (en) * | 2003-04-08 | 2005-12-21 | Hewlett Packard Development Co | Cryptographic key update management |
US7324662B2 (en) * | 2004-05-21 | 2008-01-29 | Nanyang Technological University | Method, software, and device for hiding data in binary image, while preserving image quality |
SG120173A1 (en) * | 2004-08-17 | 2006-03-28 | Sony Corp | Methods and apparatus for watermarking digital data |
US7436977B2 (en) * | 2005-01-26 | 2008-10-14 | Xerox Corporation | Embedding variable watermark information in halftone screens |
US20060195774A1 (en) | 2005-02-17 | 2006-08-31 | Stephen Bowyer | Error correction circuit and method |
US8050446B2 (en) * | 2005-07-12 | 2011-11-01 | The Board Of Trustees Of The University Of Arkansas | Method and system for digital watermarking of multimedia signals |
US7688993B2 (en) * | 2005-10-21 | 2010-03-30 | Nanyang Technological University | Software and method for embedding data in two color images |
KR100967136B1 (en) * | 2006-02-01 | 2010-07-05 | 후지쯔 가부시끼가이샤 | Parity generating circuit, arrangement circuit for parity generating circuit, information processing apparatus, and encoder |
US8650402B2 (en) * | 2007-08-17 | 2014-02-11 | Wong Technologies L.L.C. | General data hiding framework using parity for minimal switching |
-
2008
- 2008-04-02 US US12/060,949 patent/US7986441B2/en active Active
- 2008-04-03 US US12/062,138 patent/US20080250056A1/en not_active Abandoned
- 2008-04-03 EP EP08745039A patent/EP2137675A4/en not_active Withdrawn
- 2008-04-03 WO PCT/US2008/059300 patent/WO2008124528A2/en active Application Filing
- 2008-04-03 CN CN2008800188189A patent/CN101765847B/en not_active Expired - Fee Related
- 2008-04-03 KR KR1020137000108A patent/KR101305752B1/en active IP Right Grant
- 2008-04-03 KR KR1020097020691A patent/KR20090127907A/en not_active Application Discontinuation
- 2008-04-03 JP JP2010502296A patent/JP4920783B2/en not_active Expired - Fee Related
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5448592A (en) * | 1990-07-26 | 1995-09-05 | British Telecommunications Public Limited Company | Coded QAM system |
US5675590A (en) * | 1994-11-23 | 1997-10-07 | At&T Wireless Services, Inc. | Cyclic trellis coded modulation |
US5960041A (en) * | 1995-09-21 | 1999-09-28 | Lucent Technologies Inc. | Method and apparatus for generating high rate codes for recording information on a magnetic medium |
US6665832B1 (en) * | 2000-03-31 | 2003-12-16 | Qualcomm, Incorporated | Slotted mode decoder state metric initialization |
US7197691B2 (en) * | 2000-05-03 | 2007-03-27 | University Of Southern California | Reduced-latency soft-in/soft-out module |
US7318185B2 (en) * | 2001-08-23 | 2008-01-08 | Nortel Networks Limited | Method and apparatus for scrambling based peak-to-average power ratio reduction without side information |
US7765456B1 (en) * | 2005-03-31 | 2010-07-27 | Xilinx, Inc. | Optimal multi-user orthogonal variable spreading factor (OVSF) code generator |
US7707479B2 (en) * | 2005-12-13 | 2010-04-27 | Samsung Electronics Co., Ltd. | Method of generating structured irregular low density parity check codes for wireless systems |
US20080247002A1 (en) * | 2007-04-04 | 2008-10-09 | The Hong Kong University Of Science And Technology | Multimedia watermarking techniques with low distortion |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9292696B1 (en) | 2011-03-08 | 2016-03-22 | Ciphercloud, Inc. | System and method to anonymize data transmitted to a destination computing device |
US9300637B1 (en) * | 2011-03-08 | 2016-03-29 | Ciphercloud, Inc. | System and method to anonymize data transmitted to a destination computing device |
US9338220B1 (en) | 2011-03-08 | 2016-05-10 | Ciphercloud, Inc. | System and method to anonymize data transmitted to a destination computing device |
US9356993B1 (en) | 2011-03-08 | 2016-05-31 | Ciphercloud, Inc. | System and method to anonymize data transmitted to a destination computing device |
US9413526B1 (en) | 2011-03-08 | 2016-08-09 | Ciphercloud, Inc. | System and method to anonymize data transmitted to a destination computing device |
US9432342B1 (en) | 2011-03-08 | 2016-08-30 | Ciphercloud, Inc. | System and method to anonymize data transmitted to a destination computing device |
US9667741B1 (en) | 2011-03-08 | 2017-05-30 | Ciphercloud, Inc. | System and method to anonymize data transmitted to a destination computing device |
US9852311B1 (en) | 2011-03-08 | 2017-12-26 | Ciphercloud, Inc. | System and method to anonymize data transmitted to a destination computing device |
US11228566B1 (en) | 2011-03-08 | 2022-01-18 | Ciphercloud, Inc. | System and method to anonymize data transmitted to a destination computing device |
US9288191B1 (en) | 2011-12-13 | 2016-03-15 | Ciphercloud, Inc. | System and method to anonymize data transmitted to a destination computing device |
Also Published As
Publication number | Publication date |
---|---|
CN101765847B (en) | 2012-11-21 |
EP2137675A2 (en) | 2009-12-30 |
US7986441B2 (en) | 2011-07-26 |
JP2010532595A (en) | 2010-10-07 |
WO2008124528A2 (en) | 2008-10-16 |
CN101765847A (en) | 2010-06-30 |
US20080247002A1 (en) | 2008-10-09 |
KR101305752B1 (en) | 2013-09-06 |
WO2008124528A3 (en) | 2009-08-13 |
JP4920783B2 (en) | 2012-04-18 |
KR20090127907A (en) | 2009-12-14 |
EP2137675A4 (en) | 2010-04-14 |
WO2008124528A4 (en) | 2009-10-15 |
KR20130018368A (en) | 2013-02-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20080250056A1 (en) | Method and apparatus for writing binary data with low power consumption | |
US20240152754A1 (en) | Aggregated embeddings for a corpus graph | |
Acun et al. | Understanding training efficiency of deep learning recommendation models at scale | |
US11593458B2 (en) | System for time-efficient assignment of data to ontological classes | |
US8375227B2 (en) | Abstracting programmatic representation of data storage systems | |
US8706782B2 (en) | Self-contained placement of data objects in a data storage system | |
CN102591947A (en) | Fast and low-RAM-footprint indexing for data deduplication | |
CN110018997B (en) | Mass small file storage optimization method based on HDFS | |
KR102647511B1 (en) | Method for reinforce learning on large language model | |
Wu et al. | NFL: robust learned index via distribution transformation | |
Akrasi-Mensah et al. | An overview of technologies for improving storage efficiency in blockchain-based IIoT applications | |
CN103064991A (en) | Mass data clustering method | |
Peng et al. | MaxK-GNN: Extremely Fast GPU Kernel Design for Accelerating Graph Neural Networks Training | |
US11714688B1 (en) | Sustainability-based computing resource allocation | |
CN105808451A (en) | Data caching method and related apparatus | |
CN106776600A (en) | The method and device of text cluster | |
Peng et al. | A general framework for multi-label learning towards class correlations and class imbalance | |
CN109413487A (en) | A method of spelling is dodged after storing fragment transcoding/synthetic video file based on object | |
JP2005063662A (en) | Method for combining multilevel memory cells and providing error correction mechanism for them | |
Li et al. | MSz: An Efficient Parallel Algorithm for Correcting Morse-Smale Segmentations in Error-Bounded Lossy Compressors | |
CN111556998A (en) | Transfer learning and domain adaptation using distributable data models | |
KR102602593B1 (en) | Method for providing development environment based on remote execution | |
CN114372574B (en) | Quantum dot computer system based on graphene and control method thereof | |
CN113268376B (en) | Data center object storage method and system based on genetic algorithm | |
WO2023051577A1 (en) | Quantum program and quantum chip mapping method, and quantum operating system and computer |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: THE HONG KONG UNIVERSITY OF SCIENCE AND TECHNOLOGY Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:AU, OSCAR CHI LIM;LI, RICHARD YUK MING;REEL/FRAME:020892/0248;SIGNING DATES FROM 20080403 TO 20080408 |
|
AS | Assignment |
Owner name: HONG KONG TECHNOLOGIES GROUP LIMITED Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:THE HONG KONG UNIVERSITY OF SCIENCE AND TECHNOLOGY;REEL/FRAME:024067/0623 Effective date: 20100305 Owner name: HONG KONG TECHNOLOGIES GROUP LIMITED, SAMOA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:THE HONG KONG UNIVERSITY OF SCIENCE AND TECHNOLOGY;REEL/FRAME:024067/0623 Effective date: 20100305 |
|
AS | Assignment |
Owner name: WONG TECHNOLOGIES L.L.C., DELAWARE Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HONG KONG TECHNOLOGIES GROUP LIMITED;REEL/FRAME:024921/0068 Effective date: 20100728 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |