WO2005013135A1 - System and method for transferring blanks - Google Patents
- Publication number
- WO2005013135A1 (PCT/US2004/023238; US2004023238W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- line
- cache
- data
- pinned
- lines
- Prior art date
Links
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B65—CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
- B65G—TRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
- B65G47/00—Article or material-handling devices associated with conveyors; Methods employing such devices
- B65G47/22—Devices influencing the relative position or the attitude of articles during transit by conveyors
- B65G47/26—Devices influencing the relative position or the attitude of articles during transit by conveyors arranging the articles, e.g. varying spacing between individual articles
- B65G47/30—Devices influencing the relative position or the attitude of articles during transit by conveyors arranging the articles, e.g. varying spacing between individual articles during transit by a series of conveyors
- B65G47/31—Devices influencing the relative position or the attitude of articles during transit by conveyors arranging the articles, e.g. varying spacing between individual articles during transit by a series of conveyors by varying the relative speeds of the conveyors forming the series
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B65—CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
- B65H—HANDLING THIN OR FILAMENTARY MATERIAL, e.g. SHEETS, WEBS, CABLES
- B65H29/00—Delivering or advancing articles from machines; Advancing articles to or into piles
- B65H29/12—Delivering or advancing articles from machines; Advancing articles to or into piles by means of the nip between two, or between two sets of, moving tapes or bands or rollers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0806—Multiuser, multiprocessor or multiprocessing cache systems
- G06F12/0815—Cache consistency protocols
- G06F12/0817—Cache consistency protocols using directory methods
- G06F12/082—Associative directories
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/12—Replacement control
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/12—Replacement control
- G06F12/121—Replacement control using replacement algorithms
- G06F12/126—Replacement control using replacement algorithms with special data handling, e.g. priority of data or instructions, handling errors or pinning
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B65—CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
- B65H—HANDLING THIN OR FILAMENTARY MATERIAL, e.g. SHEETS, WEBS, CABLES
- B65H2301/00—Handling processes for sheets or webs
- B65H2301/30—Orientation, displacement, position of the handled material
- B65H2301/35—Spacing
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B65—CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
- B65H—HANDLING THIN OR FILAMENTARY MATERIAL, e.g. SHEETS, WEBS, CABLES
- B65H2301/00—Handling processes for sheets or webs
- B65H2301/40—Type of handling process
- B65H2301/44—Moving, forwarding, guiding material
- B65H2301/445—Moving, forwarding, guiding material stream of articles separated from each other
- B65H2301/4452—Regulating space between separated articles
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B65—CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
- B65H—HANDLING THIN OR FILAMENTARY MATERIAL, e.g. SHEETS, WEBS, CABLES
- B65H2301/00—Handling processes for sheets or webs
- B65H2301/40—Type of handling process
- B65H2301/44—Moving, forwarding, guiding material
- B65H2301/447—Moving, forwarding, guiding material transferring material between transport devices
- B65H2301/4474—Pair of cooperating moving elements as rollers, belts forming nip into which material is transported
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B65—CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
- B65H—HANDLING THIN OR FILAMENTARY MATERIAL, e.g. SHEETS, WEBS, CABLES
- B65H2511/00—Dimensions; Position; Numbers; Identification; Occurrences
- B65H2511/50—Occurence
- B65H2511/51—Presence
- B65H2511/514—Particular portion of element
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B65—CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
- B65H—HANDLING THIN OR FILAMENTARY MATERIAL, e.g. SHEETS, WEBS, CABLES
- B65H2513/00—Dynamic entities; Timing aspects
- B65H2513/20—Acceleration or deceleration
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B65—CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
- B65H—HANDLING THIN OR FILAMENTARY MATERIAL, e.g. SHEETS, WEBS, CABLES
- B65H2557/00—Means for control not provided for in groups B65H2551/00 - B65H2555/00
- B65H2557/20—Calculating means; Controlling methods
- B65H2557/24—Calculating methods; Mathematic models
- B65H2557/242—Calculating methods; Mathematic models involving a particular data profile or curve
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B65—CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
- B65H—HANDLING THIN OR FILAMENTARY MATERIAL, e.g. SHEETS, WEBS, CABLES
- B65H2701/00—Handled material; Storage means
- B65H2701/10—Handled articles or webs
- B65H2701/17—Nature of material
- B65H2701/176—Cardboard
- B65H2701/1764—Cut-out, single-layer, e.g. flat blanks for boxes
Definitions
- Caching is a well-known technique that uses a smaller, faster storage device to speed up access to data stored in a larger, slower storage device.
- A typical application of caching is found in disk access technology.
- A processor based system accessing data on a hard disk drive may achieve improved performance if a cache, implemented in solid state memory with a lower access time than the drive, is interposed between the drive and the processor.
- Such a cache is populated with data from the disk as the system accesses it; subsequent accesses to the same data can then be served from the cache instead of the disk, thereby speeding up performance.
- Caching imposes certain constraints on the design of a system, such as a requirement of cache consistency with the main storage device, e.g. when data is written to the cache, as well as performance based constraints which dictate, e.g., which parts of the cache are to be replaced when a data access is made to an element that is not in the cache while the cache is full (the cache replacement policy).
- A well known design for caches, specifically for disk caches, is an N-way set associative cache, where N is some non-zero whole number.
- The cache may be implemented as a collection of N arrays of cache lines, each array representing a set, each set in turn having as members only those data elements (or, simply, elements) from the disk whose addresses map to that set based on an easily computed mapping function.
- Any element on a disk can be quickly mapped to a set in the cache by, for example, taking the integer value that results from computing the address of the element on disk (its tag) modulo the number of sets N in the cache (tag MOD N), the result being a number that uniquely maps the element to a set.
- Many other methods may be employed to map a line to a set in a cache, including bit shifting the tag, or any other unique set of bits associated with the line, to obtain a set index; performing a logical AND between the tag or other unique identifier and a mask; or XOR-ing the tag or other unique identifier with a mask to derive a set number, among others well known to those skilled in the art, and the claimed subject matter is not limited to any one or more of these methods.
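As an illustrative sketch only (not part of the patent text), the modulus, mask, and XOR mappings described above might look like the following; the set count, shift amount, and function names are assumptions chosen for the example:

```python
# Three illustrative ways to map a cache line's tag to one of N sets.
# N is an assumption; it is a power of two so that masking acts as a modulus.
N = 8

def set_by_modulus(tag: int) -> int:
    """Map a tag to a set via tag MOD N."""
    return tag % N

def set_by_mask(tag: int) -> int:
    """Map a tag to a set via a logical AND with a mask (requires N a power of two)."""
    return tag & (N - 1)

def set_by_xor(tag: int) -> int:
    """Fold high and low tag bits together with XOR, then mask down to a set index."""
    return ((tag >> 3) ^ tag) & (N - 1)
```

For a power-of-two set count, the modulus and mask variants always agree; the XOR variant mixes more tag bits into the index, which can spread clustered addresses across sets.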
- A similar implementation of a cache may use a hash table instead of associative sets to organize the cache.
- Elements are organized into fixed size arrays, usually of equal sizes.
- A hashing function is used to compute the array within which an element is located. The input to the hashing function may be based on the element's tag, and the function then maps the element to a particular hash bucket.
- Caches of this kind are termed Constant Access Time Bounded (CATB) caches.
- The access time to locate an element in a CATB cache is bounded by a constant, or at least is independent of the total cache size, because the time to identify an array is constant and each array is of a fixed size, so searching within the array is bounded by a constant.
- The term search group is used to refer to the array (i.e. the set in a set associative cache or the hash bucket in a hash table based cache) that is identified by mapping an element.
- Each element in a CATB cache, or cache line 120, contains both the actual data from the slower storage device that is being accessed by the system and some additional data, termed metadata, that is used by the cache management system for administrative purposes.
- The metadata may include a tag, i.e. the unique identifier or address for the data in the line, and other data relating to the state of the line, including a bit or flag to indicate whether the line is in use (allocated) or not in use (unallocated), as well as bits reserved for other purposes.
- A line in such an implementation may have a flag in its metadata that indicates whether the line is pinned.
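A minimal sketch of a cache line carrying both payload and metadata as described above; the field names and defaults are assumptions, not the patent's layout:

```python
from dataclasses import dataclass

@dataclass
class CacheLine:
    """One line in a CATB cache: cached payload plus management metadata."""
    tag: int = 0              # unique identifier/address of the data on the slow device
    allocated: bool = False   # True when the line holds valid cached data
    pinned: bool = False      # True when the line must not be evicted
    data: bytes = b""         # the cached payload itself
```

A cache management system would consult `allocated` during replacement and `pinned` to skip lines that must remain resident.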
- CATB caches that have sets of approximately equal sizes may perform better than those with non-uniform set sizes. If one or more lines in a search group of a CATB cache, such as a set in a set-associative cache, become occupied by pinned data, the effective size of that search group for caching operations with non-pinned data is reduced by the number of pinned lines. If the system attempts to access data elements that map to that search group, its performance may be reduced relative to its performance when accessing elements in search groups that have no pinned elements. This phenomenon is termed hot spot creation and presents an issue for designers of caches with pinned lines.
- Figure 1 depicts a dynamic data structure that may be used to implement an N-way set associative cache.
- Figure 2 depicts the state of a data structure implementing an N-way set associative cache with a portion of the cache reserved for pinned data, when no pinned data has yet been added to the cache, in accordance with an embodiment of the claimed subject matter.
- Figure 3 depicts the state of the data structure from Fig. 2 after some pinned cache lines have been inserted into the cache, in an embodiment of the claimed subject matter.
- Figure 4 depicts a flowchart of actions taken to insert pinned data into the cache in one embodiment of the claimed subject matter.
- Figure 5 depicts a flowchart of actions taken to reconstruct a cache following a power-down event in a non-volatile implementation in one embodiment of the claimed subject matter.
- Figure 6 depicts a processor based system in accordance with one embodiment of the claimed subject matter.
- A dynamic data structure is used to implement a set associative cache, a type of CATB cache.
- Each set in the cache is implemented as a linked list 100.
- This list may be a singly or doubly linked list, in two exemplary embodiments.
- Each set contains cache lines 120, each cache line in turn holding both data and metadata as shown at 140. Inserting, accessing and removing elements in this implementation of a cache may be accomplished by computing the identifier for a set from the tag of a cache line and then traversing the linked list corresponding to that set. If a line with the same tag is found, the element is in the cache; if not, the element is not in the cache.
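A minimal sketch of the lookup path just described, with a Python list standing in for each set's linked list; the class name, set count, and tag-MOD-N mapping are assumptions for illustration:

```python
class SetAssocCache:
    """Dynamic set-associative cache: each set is a traversable sequence of lines."""

    def __init__(self, num_sets: int = 4):
        # One dynamic per-set container, playing the role of the linked list.
        self.sets = [[] for _ in range(num_sets)]

    def _set_index(self, tag: int) -> int:
        """Compute the set identifier from a tag (tag MOD N)."""
        return tag % len(self.sets)

    def lookup(self, tag: int):
        """Traverse the set the tag maps to; return the line on a hit, None on a miss."""
        for line in self.sets[self._set_index(tag)]:
            if line["tag"] == tag:
                return line
        return None

    def insert(self, tag: int, data):
        """Append a new line to the set the tag maps to."""
        self.sets[self._set_index(tag)].append({"tag": tag, "data": data})
```

Because only one set is ever traversed and each set's length can be kept bounded, lookup time stays independent of the total number of lines in the cache, matching the CATB property.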
- A processor based system such as the one depicted in Fig. 6 implements one exemplary embodiment of the claimed subject matter. The figure shows a processor 620 connected via a bus system 640 to a memory 660 and a disk and cache system including a disk 680 and a disk cache 600.
- The disk cache 600 may be implemented in volatile or in non-volatile memory.
- The processor may execute programs and access data, causing data to be read from and written to disk 680 and consequently cached in disk cache 600.
- The system of Fig. 6 is of course merely representative. Many other variations on a processor based system are possible, including variations in processor number, bus organization, memory organization, and the number and types of disks.
- The claimed subject matter is not restricted to processor based systems in particular, but may be extended to caches in general as described in the claims.
- A non-volatile memory unit may be used to implement a disk cache such as that depicted in Fig. 6, using a data structure like that discussed with reference to Fig. 1.
- Alternatively, a cache may be implemented in a volatile store, unlike the embodiment discussed above.
- The cache may serve purposes other than disk caching, e.g. as a networked data or database cache.
- The actual data structure used to organize the sets of the cache may also differ in some embodiments of the claimed subject matter.
- The sets in the cache may not be of exactly equal sizes as is depicted in the figure.
- The embodiment described above is limited to N-way set associative caches for ease of exposition and generally describes a dynamic implementation of such a cache.
- A list or other dynamic data structure may be used to make any type of CATB cache dynamic in an analogous manner.
- A hash table based CATB cache may similarly be implemented using a dynamic structure, such as a linked list of some type, instead of an array for each hash bucket.
- A different basic search method may be used, as long as search times do not depend on the total number of elements in the cache and the individual search groups are dynamically variable in size.
- FIG. 3 depicts a snapshot of a set-associative cache implemented in an embodiment in accordance with the claimed subject matter as described above, during its operation.
- A number of pinned lines 380 have been added to the cache.
- When a pinned line of data is added, a free pinned line is removed from the free pinned list 300 and used to store it.
- Its tag is used to select the set 320 into which it is to be inserted.
- The number of non-pinned lines 340 in the set into which a pinned line has been inserted remains the same as before the insertion, and the number of non-pinned lines across the sets remains balanced. As the operation proceeds, the number of free pinned lines 360 may be reduced.
- The operation of adding pinned data to the cache is further illustrated in the flowchart in Fig. 4. As new pinned data is added to the cache, the cache management system removes a line from the free pinned list 400, stores the pinned data in the line 420, computes the set into which the line should be inserted 440, and adds the line to the selected set 460.
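The four flowchart steps (400, 420, 440, 460) can be sketched as follows; the function signature, the dict-based line representation, and the raise-on-exhaustion policy are assumptions for illustration:

```python
def add_pinned(cache_sets, free_pinned, tag, data, num_sets):
    """Insert pinned data per the flowchart: take a line off the free pinned
    list, fill it, compute its set from the tag, and add it to that set.
    Raising when no free pinned lines remain is an assumed policy."""
    if not free_pinned:
        raise RuntimeError("no free pinned lines available")
    line = free_pinned.pop()                 # step 400: remove a line from the free pinned list
    line.update(tag=tag, data=data,
                pinned=True, allocated=True)  # step 420: store the pinned data in the line
    idx = tag % num_sets                      # step 440: compute the target set
    cache_sets[idx].append(line)              # step 460: add the line to the selected set
    return idx
```

Note that because the inserted line comes from the reserved pool rather than from the target set, the set's population of non-pinned lines is unchanged, which is how hot spot creation is avoided.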
- A set associative cache with a reserved list of pinned lines may be implemented in non-volatile memory, i.e. in a device that retains its data integrity after external power to the device is shut off, as may happen if a system is shut down or in a power failure, thus causing a loss of power to the cache.
- This may include, in one exemplary embodiment, a disk cache implemented in non-volatile memory. In such an implementation, it may be possible to recover the state of the cache after power is restored following a power-down event. The addition of a reserved group of cache lines for pinned data does not impact such a recovery.
- A recovery process inspects each line in the non-volatile cache. As long as there are more lines to inspect, 500, the process inspects the next line 510. If the line's metadata indicates that the line is allocated, i.e. contains valid cached data, it is inserted into the set identified by computing the set's identifier from the line's tag, 540. If the line is unallocated, it may be added to a pool of unallocated lines in some manner, 530. When all lines are processed, the recovery then inspects each set formed in this first phase.
- The recovery procedure adds lines from the pool of unallocated lines to each set so as to maintain a balanced number of unallocated lines across all sets, 570, 580. Any remaining lines are returned to the pool, 590.
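The two recovery phases might be sketched as below. This is an approximation under stated assumptions: lines are dicts with `tag` and `allocated` fields, the set identifier is tag MOD N, and "balanced" is taken to mean an equal integer share of free lines per set with the remainder left in the pool:

```python
def recover(lines, num_sets):
    """Recovery sketch: phase 1 rebuilds the sets from allocated lines and
    pools unallocated ones; phase 2 gives every set an equal share of the
    free lines, returning any remainder to the pool."""
    sets = [[] for _ in range(num_sets)]
    pool = []
    for line in lines:                              # steps 500/510: inspect each line
        if line["allocated"]:
            sets[line["tag"] % num_sets].append(line)   # step 540: re-insert by tag
        else:
            pool.append(line)                       # step 530: pool unallocated lines
    per_set = len(pool) // num_sets                 # steps 570/580: balance free lines
    free = [[pool.pop() for _ in range(per_set)] for _ in range(num_sets)]
    return sets, free, pool                         # step 590: leftovers stay pooled
```

Because all state needed for reconstruction (tag and allocated flag) lives in each line's own metadata, the scan is a single pass and the presence of a reserved pinned pool adds no extra work.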
- Embodiments in accordance with the claimed subject matter may be provided as a computer program product that may include a machine-readable medium having stored thereon data which, when accessed by a machine, may cause the machine to perform a process according to the claimed subject matter.
- The machine-readable medium may include, but is not limited to, floppy diskettes, optical disks, DVD-ROM disks, DVD-RAM disks, DVD-RW disks, DVD+RW disks, CD-R disks, CD-RW disks, CD-ROM disks, magneto-optical disks, ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, flash memory, or any other type of media / machine-readable medium suitable for storing electronic instructions.
- Embodiments of the claimed subject matter may also be downloaded as a computer program product, wherein the program may be transferred from a remote computer to a requesting computer by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., a modem or network connection).
Abstract
Description
Claims
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE112004001394T DE112004001394T5 (en) | 2003-07-29 | 2004-07-16 | System and method for transferring blanks |
JP2006521892A JP2007500398A (en) | 2003-07-29 | 2004-07-16 | System and method for transporting blanks |
GB0604023A GB2421331B (en) | 2003-07-29 | 2004-07-16 | System and method for transferring blanks |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/629,094 | 2003-07-29 | ||
US10/629,094 US7832545B2 (en) | 2003-06-05 | 2003-07-29 | System and method for transferring blanks in a production line |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2005013135A1 true WO2005013135A1 (en) | 2005-02-10 |
Family
ID=34115747
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2004/023238 WO2005013135A1 (en) | 2003-07-29 | 2004-07-16 | System and method for transferring blanks |
Country Status (5)
Country | Link |
---|---|
JP (1) | JP2007500398A (en) |
CN (1) | CN100465921C (en) |
DE (1) | DE112004001394T5 (en) |
GB (1) | GB2421331B (en) |
WO (1) | WO2005013135A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102156677A (en) * | 2011-04-19 | 2011-08-17 | 威盛电子股份有限公司 | Access method and system for quick access memory |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8108880B2 (en) | 2007-03-07 | 2012-01-31 | International Business Machines Corporation | Method and system for enabling state save and debug operations for co-routines in an event-driven environment |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5960454A (en) * | 1996-12-19 | 1999-09-28 | International Business Machines Corporation | Avoiding cache collisions between frequently accessed, pinned routines or data structures |
US6223256B1 (en) * | 1997-07-22 | 2001-04-24 | Hewlett-Packard Company | Computer cache memory with classes and dynamic selection of replacement algorithms |
US20020062424A1 (en) * | 2000-04-07 | 2002-05-23 | Nintendo Co., Ltd. | Method and apparatus for software management of on-chip cache |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6032207A (en) * | 1996-12-23 | 2000-02-29 | Bull Hn Information Systems Inc. | Search mechanism for a queue system |
CN1165000C (en) * | 2001-12-20 | 2004-09-01 | 中国科学院计算技术研究所 | Microprocessor high speed buffer storage method of dynamic index |
-
2004
- 2004-07-16 GB GB0604023A patent/GB2421331B/en not_active Expired - Fee Related
- 2004-07-16 JP JP2006521892A patent/JP2007500398A/en active Pending
- 2004-07-16 WO PCT/US2004/023238 patent/WO2005013135A1/en active Application Filing
- 2004-07-16 CN CNB2004800222232A patent/CN100465921C/en not_active Expired - Fee Related
- 2004-07-16 DE DE112004001394T patent/DE112004001394T5/en not_active Ceased
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5960454A (en) * | 1996-12-19 | 1999-09-28 | International Business Machines Corporation | Avoiding cache collisions between frequently accessed, pinned routines or data structures |
US6223256B1 (en) * | 1997-07-22 | 2001-04-24 | Hewlett-Packard Company | Computer cache memory with classes and dynamic selection of replacement algorithms |
US20020062424A1 (en) * | 2000-04-07 | 2002-05-23 | Nintendo Co., Ltd. | Method and apparatus for software management of on-chip cache |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102156677A (en) * | 2011-04-19 | 2011-08-17 | 威盛电子股份有限公司 | Access method and system for quick access memory |
CN102156677B (en) * | 2011-04-19 | 2014-04-02 | 威盛电子股份有限公司 | Access method and system for quick access memory |
Also Published As
Publication number | Publication date |
---|---|
GB0604023D0 (en) | 2006-04-12 |
CN1833231A (en) | 2006-09-13 |
JP2007500398A (en) | 2007-01-11 |
GB2421331A (en) | 2006-06-21 |
CN100465921C (en) | 2009-03-04 |
GB2421331B (en) | 2007-09-12 |
DE112004001394T5 (en) | 2006-06-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5996088B2 (en) | Cryptographic hash database | |
US7380065B2 (en) | Performance of a cache by detecting cache lines that have been reused | |
EP2281233B1 (en) | Efficiently marking objects with large reference sets | |
KR100978156B1 (en) | Method, apparatus, system and computer readable recording medium for line swapping scheme to reduce back invalidations in a snoop filter | |
CN107066393A (en) | The method for improving map information density in address mapping table | |
US20100146213A1 (en) | Data Cache Processing Method, System And Data Cache Apparatus | |
US20040083341A1 (en) | Weighted cache line replacement | |
US11226904B2 (en) | Cache data location system | |
CN101645043B (en) | Methods for reading and writing data and memory device | |
JP2012531674A (en) | Scalable indexing in non-uniform access memory | |
US8041918B2 (en) | Method and apparatus for improving parallel marking garbage collectors that use external bitmaps | |
CN107992430A (en) | Management method, device and the computer-readable recording medium of flash chip | |
WO2009156558A1 (en) | Copying entire subgraphs of objects without traversing individual objects | |
CN107818052A (en) | Memory pool access method and device | |
CN109407985B (en) | Data management method and related device | |
US7177983B2 (en) | Managing dirty evicts from a cache | |
US20050102465A1 (en) | Managing a cache with pinned data | |
US20020194431A1 (en) | Multi-level cache system | |
US9852074B2 (en) | Cache-optimized hash table data structure | |
Xu et al. | Building a fast and efficient LSM-tree store by integrating local storage with cloud storage | |
CN106164874B (en) | Method and device for accessing data visitor directory in multi-core system | |
CN115129618A (en) | Method and apparatus for optimizing data caching | |
US6915373B2 (en) | Cache with multiway steering and modified cyclic reuse | |
WO2005013135A1 (en) | System and method for transferring blanks | |
US20200272424A1 (en) | Methods and apparatuses for cacheline conscious extendible hashing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 200480022223.2 Country of ref document: CN |
|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2006521892 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 0604023.2 Country of ref document: GB Ref document number: 0604023 Country of ref document: GB |
|
RET | De translation (de og part 6b) |
Ref document number: 112004001394 Country of ref document: DE Date of ref document: 20060622 Kind code of ref document: P |
|
WWE | Wipo information: entry into national phase |
Ref document number: 112004001394 Country of ref document: DE |
|
122 | Ep: pct application non-entry in european phase | ||
REG | Reference to national code |
Ref country code: DE Ref legal event code: 8607 |