JP5009700B2 - Data storage device, program, and data storage method - Google Patents

Info

Publication number
JP5009700B2
Authority
JP
Japan
Prior art keywords
data
storage
storage means
means
writing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
JP2007167643A
Other languages
Japanese (ja)
Other versions
JP2009009213A (en)
Inventor
Yasuo Ohashi (大橋 康雄)
Original Assignee
Ricoh Company, Ltd. (株式会社リコー)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ricoh Company, Ltd. (株式会社リコー)
Priority to JP2007167643A
Publication of JP2009009213A
Application granted
Publication of JP5009700B2
Application status: Active
Anticipated expiration

Description

  The present invention relates to a data storage device, a program, and a data storage method.

  In data storage devices such as personal computers, DVD recorders, and image-related devices that must record large volumes of data and image data, a non-volatile storage means capable of retaining its contents even when the power is turned off, such as an HDD (Hard Disk Drive) or a silicon disk using semiconductor memory, is provided for data backup. In particular, in recent years, with the miniaturization of semiconductors, the capacity of non-volatile semiconductor memory elements has increased while their cost has fallen, and silicon disks using semiconductor memory have come into widespread use. Examples of such large-capacity non-volatile storage means (silicon disks and the like) include NAND flash devices (SD, CF, SSD (Solid State Drive), and the like).

  Data storage devices having a plurality of such non-volatile storage means have also been developed. The purposes of providing a plurality of non-volatile storage means include improving performance through parallel connection, expanding storage capacity, improving the quality of stored data (for example, by mirroring), and reducing downtime for replacement in the event of a system failure.

  Systems using large-capacity non-volatile storage means as described above are widely used. For example, Patent Document 1 discloses a technique in which a volatile storage means and a non-volatile storage means are provided, the volatile storage means is accessed during normal operation, and, when the data on the volatile storage means is destroyed (for example, when the power is turned off or an error occurs), the latest data is restored to the volatile storage means using the data on the non-volatile storage means together with an update history.

  However, for a system using large-capacity non-volatile storage means to be both inexpensive and highly reliable, a technical problem must be solved: a large-capacity non-volatile storage means such as a NAND flash device permits only a limited number of data rewrites.

  Therefore, in order to extend the service life of a large-capacity non-volatile storage means with a limited number of data rewrites, such as a NAND flash device, a technique called wear leveling is generally used: by directing accesses to unused areas, accesses are smoothed over the storage means as a whole, reducing the number of accesses concentrated on any one area.
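  As a rough illustration of the general wear-leveling idea described above (this sketch is not taken from the patent; the class and method names are invented for illustration), a rewrite of a logical block can be redirected to the least-worn free physical block:

```python
# Minimal wear-leveling sketch (hypothetical, for illustration only):
# a rewrite of a logical block is redirected to the free physical block
# with the lowest erase count, so wear spreads across the whole device
# instead of concentrating on one physical area.

class WearLeveledFlash:
    def __init__(self, num_physical_blocks):
        self.erase_count = [0] * num_physical_blocks   # per-block wear
        self.mapping = {}                              # logical -> physical
        self.free = set(range(num_physical_blocks))    # unused physical blocks
        self.data = {}                                 # physical -> payload

    def write(self, logical_block, payload):
        # Pick the least-worn free physical block for the new data.
        target = min(self.free, key=lambda b: self.erase_count[b])
        self.free.remove(target)
        # The previously mapped block is erased and returned to the free pool.
        old = self.mapping.get(logical_block)
        if old is not None:
            self.erase_count[old] += 1
            self.data.pop(old, None)
            self.free.add(old)
        self.mapping[logical_block] = target
        self.data[target] = payload

    def read(self, logical_block):
        return self.data[self.mapping[logical_block]]
```

  With this scheme, repeated rewrites of a single logical block rotate over the physical blocks, which is exactly why the technique needs spare (unused) capacity to be effective.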

Japanese Patent Laid-Open No. 2002-041369

  However, with the wear leveling technique described above for reducing the number of accesses to the same area, even when the rewrite data capacity is relatively small, the effect cannot be obtained unless a large-capacity non-volatile storage means using semiconductor memory elements is used. For this reason, in a system that constantly stores (rewrites) device counters and data input values, such as a printing device or a measuring device, it is undesirable to use a large-capacity non-volatile storage means with a limited number of rewrites, such as a NAND flash device; a non-volatile storage means using semiconductor memory elements with a large permissible number of rewrites is preferable. Such a storage means, however, is more expensive than a large-capacity non-volatile storage means such as a NAND flash device with a limited number of rewrites.

  In addition, with a non-volatile storage means using semiconductor memory elements that permits a large number of rewrites, if the system goes down or operates abnormally during a rewrite operation and is then restarted while the data rewrite is incomplete, the stored data differs from its normal value, and the resulting inconsistency leads to abnormal operation. To avoid this problem, data restoration is performed by multiplexing the data or by adding a parity bit or checksum value. However, effective restoration requires additional data capacity amounting to about half the data volume or more, so solving the above problem in turn calls for a large-capacity non-volatile storage means, which brings back the limitation on the number of rewrites of semiconductor memory elements.

  Because of these problems, it is common to use an HDD or the like, which is less reliable but offers greater capacity than non-volatile storage means using semiconductor storage elements such as silicon disks.

  The present invention has been made in view of the above, and an object thereof is to provide a data storage device, a program, and a data storage method capable of handling a relatively large amount of data while avoiding data write restrictions with an inexpensive configuration.

  In order to solve the above-described problems and achieve the object, a data storage device according to claim 1 of the present invention comprises: a volatile first storage means; a non-volatile second storage means with a limited number of data rewrites; a non-volatile third storage means having a smaller capacity and a larger permissible number of data rewrites than the second storage means; a transfer means for transferring the data stored in the second storage means to the first storage means at system startup; a data reception means for receiving input of new data; an update means for updating the data transferred to the first storage means based on the data newly received by the data reception means; an update history storage means for storing, in the third storage means, update history information of the updates made by the update means; an aggregation means for, when a predetermined condition is satisfied during storage of the update history information in the third storage means, aggregating the address values of the update history information accumulated in the third storage means to generate aggregated data in which, for duplicate accesses, only the final access is valid; and a writing means for writing the aggregated data generated by the aggregation means to the second storage means.

  According to the invention of claim 2, in the data storage device according to claim 1, the update history information stored in the third storage means by the update history storage means is a difference value of the data rewritten to the same address.

  The invention according to claim 3 is the data storage device according to claim 1 or 2, wherein the data stored in the second storage means is duplicated, with at least one copy held in the second storage means, and which further comprises an identification means for making it possible to identify, for the duplicated data, whether data writing is in progress or has been completed, and wherein, at system startup, the transfer means selectively transfers to the first storage means the copy of the duplicated data for which data writing has been completed.

  The invention according to claim 4 is the data storage device according to any one of claims 1 to 3, further comprising a data compression means for compressing the data to be stored in the third storage means, and a data decompression means for decompressing, at the time of reading, the compressed data stored in the third storage means.

  The invention according to claim 5 is the data storage device according to any one of claims 1 to 4, further comprising an initialization means for initializing the third storage means after the writing means has finished writing the aggregated data to the second storage means.

  The invention according to claim 6 is the data storage device according to any one of claims 1 to 5, further comprising a management means for managing the storage positions of the rewrite target data in the second storage means by aggregating them into a file, and a wear leveling means for smoothing access by accessing unused areas, wherein, when data is rewritten to the second storage means, the wear leveling means accesses an unused area of the second storage means in units of memory areas of the file size into which the rewrite target data managed by the management means are aggregated.

  A program according to claim 7 of the present invention causes a computer to execute: a transfer function for transferring, at system startup, the data stored in a non-volatile second storage means with a limited number of data rewrites to a volatile first storage means; a data reception function for receiving input of new data; an update function for updating the data transferred to the first storage means based on the data newly received by the data reception function; an update history storage function for storing update history information of the updates made by the update function in a non-volatile third storage means having a smaller capacity and a larger permissible number of data rewrites than the second storage means; an aggregation function for, when a predetermined condition is satisfied during storage of the update history information in the third storage means, aggregating the address values of the update history information accumulated in the third storage means to generate aggregated data in which, for duplicate accesses, only the final access is valid; and a writing function for writing the aggregated data generated by the aggregation function to the second storage means.

  The invention according to claim 8 is the program according to claim 7, wherein the update history information stored in the third storage means by the update history storage function is a difference value of the data rewritten to the same address.

  The invention according to claim 9 is the program according to claim 7 or 8, wherein the data stored in the second storage means is duplicated, with at least one copy held in the second storage means, and which causes the computer to further execute an identification function for making it possible to identify, for the duplicated data, whether data writing is in progress or has been completed, and wherein, at system startup, the transfer function selectively transfers to the first storage means the copy of the duplicated data for which data writing has been completed.

  The invention according to claim 10 is the program according to any one of claims 7 to 9, which causes the computer to further execute a data compression function for compressing the data to be stored in the third storage means, and a data decompression function for decompressing, at the time of reading, the compressed data stored in the third storage means.

  The invention according to claim 11 is the program according to any one of claims 7 to 10, which causes the computer to further execute an initialization function for initializing the third storage means after the writing function has finished writing the aggregated data to the second storage means.

  The invention according to claim 12 is the program according to any one of claims 7 to 11, which causes the computer to further execute a management function for managing the storage positions of the rewrite target data in the second storage means by aggregating them into a file, and a wear leveling function for smoothing access by accessing unused areas, wherein, when data is rewritten to the second storage means, the wear leveling function accesses an unused area of the second storage means in units of memory areas of the file size into which the rewrite target data managed by the management function are aggregated.

  A data storage method according to claim 13 of the present invention comprises: a transfer step of transferring, at system startup, the data stored in a non-volatile second storage means with a limited number of data rewrites to a volatile first storage means; a data reception step of receiving input of new data; an update step of updating the data transferred to the first storage means based on the data newly received in the data reception step; an update history storage step of storing update history information of the updates made in the update step in a non-volatile third storage means having a smaller capacity and a larger permissible number of data rewrites than the second storage means; an aggregation step of, when a predetermined condition is satisfied during storage of the update history information in the third storage means, aggregating the address values of the update history information accumulated in the third storage means to generate aggregated data in which, for duplicate accesses, only the final access is valid; and a writing step of writing the aggregated data generated in the aggregation step to the second storage means.

  The invention according to claim 14 is the data storage method according to claim 13, wherein the update history information accumulated in the third storage means in the update history storage step is a difference value of the data rewritten to the same address.

  The invention according to claim 15 is the data storage method according to claim 13 or 14, wherein the data stored in the second storage means is duplicated, with at least one copy held in the second storage means, and which further comprises an identification step of making it possible to identify, for the duplicated data, whether data writing is in progress or has been completed, and wherein, at system startup, the transfer step selectively transfers to the first storage means the copy of the duplicated data for which data writing has been completed.

  The invention according to claim 16 is the data storage method according to any one of claims 13 to 15, further comprising a data compression step of compressing the data to be stored in the third storage means, and a data decompression step of decompressing, at the time of reading, the compressed data stored in the third storage means.

  The invention according to claim 17 is the data storage method according to any one of claims 13 to 16, further comprising an initialization step of initializing the third storage means after the writing step has finished writing the aggregated data to the second storage means.

  The invention according to claim 18 is the data storage method according to any one of claims 13 to 17, further comprising a management step of managing the storage positions of the rewrite target data in the second storage means by aggregating them into a file, and a wear leveling step of smoothing access by accessing unused areas, wherein, when data is rewritten to the second storage means, the wear leveling step accesses an unused area of the second storage means in units of memory areas of the file size into which the rewrite target data managed in the management step are aggregated.

  According to the inventions of claims 1, 7, and 13, there are provided a volatile first storage means, a non-volatile second storage means with a limited number of data rewrites, and a non-volatile third storage means having a smaller capacity and a larger permissible number of data rewrites than the second storage means. Rewriting during system operation is performed on the volatile first storage means, and update history information such as addresses and data values is stored in the third storage means, which tolerates more rewrites than the second storage means. When a predetermined condition is satisfied while the update history information is being stored in the third storage means (for example, the memory of the third storage means becomes full, or writes to the same address exceed a specified number of times), the address values of the update history information accumulated in the third storage means are aggregated, aggregated data in which only the final access is valid for duplicate accesses is generated, and the aggregated data is written to the second storage means. This reduces the access frequency to the second storage means, improving access speed and reducing the number of rewrites to the second storage means. Since the data can be held securely by the second and third storage means even when the capacity of the third storage means is kept small, a relatively large amount of data can be handled while data write restrictions are avoided with an inexpensive configuration.
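  As a non-authoritative sketch of the three-tier scheme these claims describe (all names and the threshold value are invented for illustration, not taken from the patent), the working copy, the update log, and the deferred write-back might fit together as follows:

```python
# Hypothetical sketch of the claimed three-tier scheme: the DRAM-like
# first storage means holds the working copy, the NVRAM-like third
# storage means logs each update, and the NAND-flash-like second
# storage means only receives the aggregated log at a threshold.

NVRAM_CAPACITY = 8  # illustrative "memory full" condition

class ThreeTierStore:
    def __init__(self, flash_image):
        self.flash = dict(flash_image)   # second storage means (NAND)
        self.dram = dict(flash_image)    # first storage means, loaded at startup
        self.nvram_log = []              # third storage means: (address, value)
        self.flash_writes = 0            # count of write-backs to flash

    def update(self, address, value):
        self.dram[address] = value               # update working copy
        self.nvram_log.append((address, value))  # record update history
        if len(self.nvram_log) >= NVRAM_CAPACITY:
            self.flush()

    def flush(self):
        # Aggregate: for duplicate addresses only the final access is valid
        # (later pairs overwrite earlier ones in dict construction).
        aggregated = dict(self.nvram_log)
        for address, value in aggregated.items():
            self.flash[address] = value
            self.flash_writes += 1
        self.nvram_log.clear()           # re-initialize third storage means
```

  The point of the structure is visible in the counters: many logical updates produce far fewer physical flash writes.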

  According to the inventions of claims 2, 8, and 14, the update history information stored in the third storage means by the update history storage means is a difference value of the data rewritten to the same address. In the case of a counter value or the like, the update amount is determined in advance, so it suffices to store in the third storage means only the address value, or the address value and the difference value (+1); the information at the time of data rewriting is still held securely, and the capacity of the expensive third storage means can be reduced.
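  The saving can be illustrated with a small sketch (hypothetical, not from the patent): when every update is a known +1 increment, the log need only record addresses, and the counter values are reconstructed by replaying the log over the base data.

```python
# Hypothetical sketch: when every update is a known increment (+1),
# the update-history log only needs the address value; the counter
# itself is reconstructed by replaying the log over the base data.

from collections import Counter

def replay_increment_log(base, address_log):
    """Apply one +1 per logged address to the base counter values."""
    counters = dict(base)
    for address, hits in Counter(address_log).items():
        counters[address] = counters.get(address, 0) + hits
    return counters
```

  Storing an address alone is much smaller than storing an address plus a full data value, which is the capacity reduction the claim describes.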

  According to the inventions of claims 3, 9, and 15, at system startup, the copy of the duplicated data for which writing has been completed is selectively transferred to the first storage means. As a result, data inconsistency caused by a system crash during a rewrite operation can be recovered when the system is restarted, improving data reliability and therefore system reliability.
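  A minimal sketch of this duplication-with-completion-flag idea (the slot layout, sequence numbers, and names are assumptions for illustration, not the patent's own format) alternates writes between two slots and only trusts a slot whose write finished:

```python
# Hypothetical sketch of duplicated storage with a completion marker:
# data is written to two slots in turn, each stamped with a sequence
# number and a "write complete" flag, so at startup the newest slot
# whose write actually finished can be selected.

class DuplexedRecord:
    def __init__(self):
        # Each slot: {"seq": int, "complete": bool, "data": ...}
        self.slots = [None, None]
        self.seq = 0
        self.next_slot = 0

    def write(self, data, fail_midway=False):
        self.seq += 1
        slot = {"seq": self.seq, "complete": False, "data": data}
        self.slots[self.next_slot] = slot   # write begins: flag not yet set
        if fail_midway:
            return                          # simulated crash during rewrite
        slot["complete"] = True             # write finished: mark complete
        self.next_slot ^= 1                 # alternate between the two slots

    def load_at_startup(self):
        # Transfer only data whose writing has been completed.
        valid = [s for s in self.slots if s and s["complete"]]
        return max(valid, key=lambda s: s["seq"])["data"] if valid else None
```

  A crash mid-write leaves the flag unset, so startup falls back to the previous complete copy instead of loading an inconsistent value.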

  Further, according to the inventions of claims 4, 10, and 16, by compressing the write data before holding it in the third storage means, the amount of data the third storage means can effectively hold is increased and the frequency of writing back to the second storage means is reduced, which extends the service life of the second storage means and allows the storage capacity of the third storage means to be reduced.
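  A sketch of the compression idea (hypothetical; zlib and the JSON encoding stand in for whatever compressor and record format a real device would use):

```python
# Hypothetical sketch: compressing the update-history log before
# holding it in the small third storage means lets more history fit,
# deferring the write-back to the flash device. zlib is used here
# purely as a stand-in compressor for illustration.

import json
import zlib

def pack_update_log(entries):
    """Compress a list of (address, value) update records."""
    raw = json.dumps(entries).encode("utf-8")
    return zlib.compress(raw)

def unpack_update_log(blob):
    """Decompress the stored log back into (address, value) records."""
    return [tuple(e) for e in json.loads(zlib.decompress(blob))]
```

  Update logs are highly repetitive (the same few addresses recur), so they compress well, which is why the apparent capacity gain is significant.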

  According to the inventions of claims 5, 11, and 17, the third storage means is initialized after the writing means has finished writing the aggregated data to the second storage means. Because accumulation of update history information always starts from a freshly initialized state, any data whose transfer is incomplete is located from the top address of the third storage means, which makes managing incompletely transferred data easy.

  According to the inventions of claims 6, 12, and 18, when data is rewritten to the second storage means, the wear leveling means accesses an unused area of the second storage means in units of memory areas of the file size into which the rewrite target data managed by the management means are aggregated. Because the unused area required is kept small, wear leveling can be realized even on a second storage means without a large amount of free space, and the rewrite lifetime can be extended, making it easy to reduce the cost of the system.

  Exemplary embodiments of a data storage device, a program, and a data storage method according to the present invention will be explained below in detail with reference to the accompanying drawings.

[First Embodiment]
A first embodiment of the present invention will be described with reference to FIGS. This embodiment is an example in which a multifunction peripheral (MFP: Multi Function Peripheral) is used as the data storage device.

  FIG. 1 is a block diagram showing a configuration of a multifunction machine 1 according to the first embodiment of the present invention. As shown in FIG. 1, the multifunction machine 1 includes a scanner unit 2 that is an apparatus that captures an image from a paper document, and an engine unit 3 that forms an image captured by the scanner unit 2.

  The scanner unit 2 converts the information of the document 100 into an image signal by scanning and exposing the document 100 placed on the document table 21. Inside the scanner unit 2, scanning exposure is performed by an exposure lamp 22 movable along the document table 21. The light reflected from the document 100 passes through the carriage mirror 23, the first half-scan mirror 24, the second half-scan mirror 25, the imaging mirror 26, and the optical lens 27, and is photoelectrically converted by the CCD color image sensor 28 into an electrical signal corresponding to the intensity of the reflected light. The image signal generated by this photoelectric conversion is subjected to image processing by the image processing circuit 52 (see FIG. 2) and then transmitted to the engine unit 3 (optical writing unit 32).

  In the engine unit 3, the constantly rotating photosensitive drum 31, uniformly charged by a charging charger 30 serving as a charging device, is exposed to laser light from a laser diode (LD) 33 of an optical writing unit 32 serving as a semiconductor laser exposure device, producing an electrostatic latent image. The electrostatic latent image generated on the photosensitive drum 31 is developed with toner by the developing device 34 to become a visible toner image.

  Meanwhile, the transfer paper 200, fed in advance from the paper feed tray 36 by the paper feed roller 35 and held waiting at the registration roller 37, is transported in synchronism with the drive of the photosensitive drum 31. The toner image on the photosensitive drum 31 is electrostatically transferred to the transfer paper 200 by the transfer charger 38 serving as a transfer device, and the transfer paper 200 is then separated from the photosensitive drum 31 by the paper separation charger 39. After separation, the toner image on the transfer paper 200 is heat-fixed by the fixing device 40, and the paper is discharged to the discharge tray 42 by the discharge roller 41. The toner remaining on the photosensitive drum 31 after electrostatic transfer is removed by the cleaning device 43 in pressure contact with the photosensitive drum 31, and the photosensitive drum 31 is neutralized by the light of the neutralizing lamp 44. The image forming apparatus forms images by repeating this series of processes.

  FIG. 2 is a block diagram showing the write control system of the multifunction machine 1. As shown in FIG. 2, the write control system of the multifunction machine 1 is built around a main unit (main control board) 50. The main unit 50 includes a video processing circuit 51, an image processing circuit 52, a write processing circuit 53, an LD (laser diode) control unit 54, an ASIC 55, a CPU 56, a ROM 57 storing program data, a RAM 58, a DRAM (Dynamic Random Access Memory) 59 serving as main memory and as the volatile first storage means, an NVRAM (Non-Volatile RAM) 60, an ATA controller 61, and an SSD (silicon disk) 62 serving as a semiconductor storage device (NAND flash device). In the present embodiment, the SSD 62 is adopted as the semiconductor storage device (NAND flash device), but an SD card, a CF card, or the like may be used instead.

  The CPU 56 controls the entire main unit 50. The ASIC 55, the ROM 57, the RAM 58, the DRAM 59, the NVRAM 60, the ATA controller 61, and the like are connected to it by a bus. The CPU 56 reads program data from the ROM 57 as needed, and decodes and executes it, using the RAM 58 as primary storage for its operation: data necessary for the operation is temporarily stored in the RAM 58 and accessed by the CPU 56.

  The ATA controller 61 is an IC that performs interface conversion between the SSD 62 and the internal bus. The SSD 62 connected to the ATA controller 61 is a large-capacity non-volatile memory (NAND flash device) that has a low writing speed and a limited number of rewrites, and stores data that must be retained even when the power is turned off. That is, the SSD 62 is the non-volatile second storage means with a limited number of data rewrites.

  The NVRAM 60 is a non-volatile memory that can be written at high speed and has a smaller capacity and a larger permissible number of data rewrites than the SSD 62; it serves as the non-volatile third storage means and stores data that must be retained even when the power is turned off, such as setting values and counter values that need to be backed up.

  The ASIC 55 incorporates a control circuit for connecting each signal from the operation panel 70 and the sensor 80 to the internal bus.

  The video processing circuit 51 takes in RGB image data from the CCD color image sensor 28 and performs processing such as color conversion and smoothing. The image processing circuit 52 performs image processing such as scaling and aggregation on the image data processed by the video processing circuit 51. The write processing circuit 53 outputs a signal corresponding to the write data to the LD control unit 54, which controls the light emission of the laser diode 33; the laser light from the laser diode 33 exposes the photosensitive drum 31 and generates an electrostatic latent image.

  Next, among the functions that the program stored in the ROM 57 causes the CPU 56 to execute, the functions characteristic of the present embodiment will be described. Software processing by the CPU 56 is described here, but the processing may instead be performed in hardware (DMA transfer) by the ASIC 55.

  FIG. 3 is a flowchart schematically showing the flow of the printing process, and FIG. 4 is an explanatory diagram showing the data backup procedure during the printing process. As shown in FIG. 3, when the power is turned on (step S1) and system startup and initial setting are executed (step S2), the CPU 56 transfers the system setting data from the SSD 62 to the DRAM 59 as shown in (1) of FIG. 4 (step S3: transfer means), and waits for a copy operation instruction from the operation panel 70 (step S4). In normal printing operation after system startup, as shown in FIG. 4, the CPU 56 does not refer to the data in the SSD 62 but reads the current setting data from the DRAM 59 and operates using those set values.

  When a copy operation start instruction is issued from the operation panel 70 (Yes in step S4), an image signal is generated by photoelectric conversion in the CCD color image sensor 28 (step S5), the video processing circuit 51 applies processing such as color conversion and smoothing to the image signal from the CCD color image sensor 28 (step S6), the image processing circuit 52 performs scaling and aggregation processing (step S7), and the image data is temporarily stored in the DRAM 59, which functions as an image memory (step S8).

  Next, the CPU 56 performs paper feeding and conveyance based on the motor control of the engine unit 3 and input information from the various sensors 80 (step S9), and, using the DMA transfer function of the ASIC 55, transfers the data from the DRAM 59 to the write processing circuit 53 so that the timing matches at the position of the transfer charger 38 (steps S10 and S11).

  Thereafter, the write processing circuit 53 outputs a signal corresponding to the write data to the LD control unit 54, and the LD control unit 54 causes the laser diode 33 to blink, forming an electrostatic latent image on the photosensitive drum 31. The electrostatic latent image generated on the photosensitive drum 31 is developed with toner by the developing device 34, and the toner image on the photosensitive drum 31 is electrostatically transferred onto the transfer paper 200 by the transfer charger 38 (step S12). After the transfer paper 200 is separated from the photosensitive drum 31 by the paper separation charger 39, the toner image on the transfer paper 200 is heat-fixed by the fixing device 40 (step S13).

  Thereafter, when the CPU 56 detects that the paper has been discharged onto the paper discharge tray 42 by the discharge roller 41 (Yes in step S14), it updates the counter register of the ASIC 55 (step S15). More specifically, a paper discharge sensor, one of the various sensors 80, is mounted in the vicinity of the paper discharge tray 42, and a pulse signal is input to the ASIC 55 each time a paper discharge is completed by a copying operation (step S15: data reception means).

  Next, the CPU 56 recognizes the signal input to the ASIC 55 and, as shown in (2) of FIG. 4, reads the current counter value from the address corresponding to the counter information within the system setting data (system setting values, user setting values, counter values, etc.) held on the DRAM 59 (step S16), adds 1 to the counter value, and writes the result back to the same address in the DRAM 59 to update it (step S17: update means). Further, as shown in (3) of FIG. 4, the CPU 56 stores the address value and the incremented counter value as a pair in the NVRAM 60 and increments the write address (pointer) (step S18: update history storage means). As a result, the NVRAM 60 accumulates, as update history information, the address values of the updated data together with the updated counter values.

  The processes in steps S4 to S18 described above are repeated until it is determined that the address value and counter value data accumulated in the NVRAM 60 are full (Yes in step S19).

  If it is determined that the address value and counter value data accumulated in the NVRAM 60 have filled it (Yes in step S19), the CPU 56 aggregates the address values in the NVRAM 60 as shown in (4) of FIG. 4, performing a process in which, for duplicate accesses, only the final access is validated (step S20: aggregation means). Note that the condition in step S19 is not limited to the NVRAM 60 being full; the CPU 56 may also perform the aggregation process of step S20 when it determines that writes to the same address have exceeded a specified number of times.

  Then, only for the aggregated address values and their paired counter value data, the CPU 56 performs a writing process on the SSD 62 as shown in (5) of FIG. 4 (step S21: writing means). In the example shown in FIG. 4, the three addresses 0x0003A, 0x00142, and 0x0198A are the update target addresses, accessed five times, once, and twice, respectively. Accordingly, since, for example, the five write processes to address 0x0003A are consolidated into a single write on the SSD 62, the number of accesses to the SSD 62 is reduced to 1/5.
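The aggregation and write-back of steps S19 to S21 can be modeled as follows, using the access counts of the FIG. 4 example (five, one, and two accesses); the counter values themselves are invented for illustration.

```python
def aggregate(history):
    """S20: for duplicate accesses to the same address, keep only the
    final one (later entries simply overwrite earlier ones)."""
    latest = {}
    for addr, value in history:
        latest[addr] = value
    return latest

# Update history as in FIG. 4: 0x0003A written five times, 0x00142 once,
# 0x0198A twice. The counter values here are illustrative.
history = [(0x0003A, 1), (0x0003A, 2), (0x00142, 7), (0x0003A, 3),
           (0x0198A, 4), (0x0003A, 4), (0x0198A, 5), (0x0003A, 5)]

aggregated = aggregate(history)             # S20: aggregation means
print(len(history), "->", len(aggregated))  # 8 -> 3 writes reach the SSD
print(aggregated[0x0003A])                  # 5: only the final value survives
```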

  Then, after the writing process is completed (Yes in step S22), the CPU 56 initializes the data in the NVRAM 60 to 0 (step S23), returns to step S4, and continues operation.

  As described above, according to the present embodiment, there are provided the volatile DRAM 59, the non-volatile SSD 62 having a limited number of data rewrites, and the non-volatile NVRAM 60 having a small capacity but a larger permissible number of data rewrites than the SSD 62. Rewriting during system operation is performed in the volatile DRAM 59, while update history information such as addresses and data values is stored in the NVRAM 60. When a predetermined condition is satisfied during the storage of update history information in the NVRAM 60 (for example, the NVRAM 60 becomes full, or writing to the same address exceeds a specified number of times), the address values of the update history information stored in the NVRAM 60 are aggregated so that only the final access is valid for duplicate accesses, and the aggregated data is written to the SSD 62. This reduces the access frequency to the SSD 62, improving access speed and reducing the number of rewrites to the SSD 62, while the NVRAM 60 can reliably hold the data even if its capacity is reduced. A relatively large amount of data can therefore be handled with an inexpensive configuration while avoiding the data write restrictions.

  As shown in FIG. 5, the storage locations of data that is frequently rewritten on the SSD 62 may be collected and managed together in a small-capacity file (management means), and when rewriting to the SSD 62, an unused area of the SSD 62 may be accessed using a wear leveling function (wear leveling means) that reduces the number of accesses to the same area.

  The reason is as follows. When a frequently rewritten file is large, wear leveling must be performed in units of a memory area of that size, which either requires a large free space or raises the likelihood of writing to the same location, shortening the lifetime. The above arrangement avoids both problems.

  As described above, when data is rewritten to the SSD 62, wear leveling accesses unused areas of the SSD 62 in units of the memory area of the file size into which the managed rewrite target data has been aggregated. As a result, wear leveling can be realized even on an SSD 62 that does not have a large free space, the rewrite lifetime is extended, and the cost of the system can easily be reduced.
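One plausible way to realize wear leveling in units of the aggregated file size is a simple rotation over equally sized spare blocks. The following is an illustrative sketch, not the patent's implementation; the class and parameter names are assumptions.

```python
class SmallFileWearLeveler:
    """Rotate writes of the small aggregated file across a pool of
    spare blocks, each one file-size large, so that no single block
    absorbs every rewrite."""
    def __init__(self, num_blocks):
        self.num_blocks = num_blocks
        self.writes = [0] * num_blocks   # per-block write counters
        self.next_block = 0

    def write_file(self, data):
        block = self.next_block                         # pick the next unused block
        self.writes[block] += 1                         # (real hardware would program it)
        self.next_block = (block + 1) % self.num_blocks
        return block

wl = SmallFileWearLeveler(num_blocks=4)
for _ in range(8):
    wl.write_file(b"aggregated settings file")
print(wl.writes)  # [2, 2, 2, 2]: each block absorbs 1/4 of the rewrites
```

Because the file is small, only a modest pool of spare blocks is needed, which is exactly the point of aggregating the frequently rewritten data first.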

  In the present embodiment, the CPU 56 or the ASIC 55 may be provided with a data compression/expansion function (data compression means, data expansion means). When the CPU 56 or the ASIC 55 has such a function, the data stored in the NVRAM 60 is compressed on writing and expanded on reading. Holding the write data in compressed form apparently increases the amount of data that can be stored in the NVRAM 60 and reduces the frequency of writing back to the SSD 62, so the service life of the SSD 62 can be extended and the storage capacity of the NVRAM 60 can be reduced.
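The effect of the optional compression/expansion function can be illustrated with Python's standard `zlib`, standing in for whatever compressor the CPU 56 or ASIC 55 would actually implement; the function names are assumptions.

```python
import zlib

def store_compressed(nvram, key, data):
    nvram[key] = zlib.compress(data)     # compress on write (data compression means)

def load_decompressed(nvram, key):
    return zlib.decompress(nvram[key])   # expand on read (data expansion means)

nvram = {}
history = b"addr=0x0003A val=+1;" * 50   # update history is highly repetitive
store_compressed(nvram, "log", history)
assert load_decompressed(nvram, "log") == history    # lossless round trip
print(len(history), ">", len(nvram["log"]))  # compressed form is much smaller
```

Update history of this kind compresses well precisely because the same addresses and deltas recur, which is what makes the apparent NVRAM capacity gain worthwhile.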

  In the present embodiment, the NVRAM 60 may be initialized with specific data after the data backup to the SSD 62 is completed (initialization means). If old information remained in the NVRAM 60, it would be mixed in when history information is accumulated after the next system startup, requiring additional data management by the CPU 56. By initializing the data in the NVRAM 60 with the specific data once the system has started and before accumulation of history information begins, any backup data whose transfer is incomplete is arranged in the NVRAM 60 from the top address, so the untransferred data can be easily identified and the extra data management by the CPU 56 becomes unnecessary.
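The role of the specific initialization data can be sketched as follows: because history entries accumulate from the top address, wiping the log with a known filler after each backup makes any not-yet-transferred entries trivially identifiable. The filler value and the function names are assumptions.

```python
FILLER = None   # stands in for the "specific data" used for initialization

def initialize_nvram(log):
    for i in range(len(log)):
        log[i] = FILLER              # S23: wipe the history after backup completes

def pending_entries(log):
    """Entries accumulate from the top address, so everything before
    the first filler entry is exactly the untransferred history."""
    out = []
    for entry in log:
        if entry is FILLER:
            break
        out.append(entry)
    return out

log = [(0x0003A, 5), (0x00142, 7), None, None]
print(pending_entries(log))   # [(58, 5), (322, 7)]: two entries still to transfer
initialize_nvram(log)
print(pending_entries(log))   # []: nothing left to transfer
```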

[Second Embodiment]
Next, a second embodiment of the present invention will be described with reference to FIGS. 6 and 7. The same parts as those in the first embodiment described above are denoted by the same reference numerals, and their description is omitted.

  In the present embodiment, the data in the SSD 62 is mirrored so that it can be repaired and restored even if data file destruction or data inconsistency occurs when the system goes down while writing data to the SSD 62; in this respect the configuration differs from that of the first embodiment described above.

  FIG. 6 is an explanatory diagram showing the processing procedure during print processing and at system startup according to the second embodiment of the present invention. In the present embodiment, as shown in FIG. 6, the backup data in the SSD 62 is mirrored as File1 and File2 (duplexing means). File1 and File2 each have a flag added, or managed separately, that identifies whether a rewrite is in progress or has completed (identification means). In this embodiment, flag = 01 indicates that a rewrite is in progress, and flag = 02 indicates that the rewrite has completed.

  Here, FIG. 7 is a flowchart showing the flow of processing at system startup. As shown in FIG. 7, at system startup the CPU 56 reads the flag information of File1 on the SSD 62 as shown in (3) of FIG. 6 (step S31). If flag = 02 (Yes in step S32), all the information in File1 is treated as reliable, and the data of File1 is transferred from the SSD 62 to the DRAM 59 as shown in (4) of FIG. 6 (step S33). On the other hand, if flag = 01 (No in step S32), there is a high possibility that the system went down while the aggregated address value and counter value pairs in the NVRAM 60, shown in (1) and (2) of FIG. 6, were being rewritten to File1 on the SSD 62; the CPU 56 therefore treats the data as unreliable and proceeds to the File2 flag check in step S34. As shown in (3) of FIG. 6, the CPU 56 reads the flag information of File2 on the SSD 62 (step S34). If File2 has flag = 02 (Yes in step S35), the CPU 56 trusts all the data of File2 and transfers it from the SSD 62 to the DRAM 59 as shown in (4) of FIG. 6 (step S36). If File2 also has flag = 01 (No in step S35), the SSD 62 is considered to have failed, so the CPU 56 initializes the SSD 62 with the system initial setting values (step S37).
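The decision sequence of FIG. 7 can be sketched as follows; the flag values follow the description (01 = rewriting, 02 = rewrite complete), while the data structures and the function name are illustrative assumptions.

```python
REWRITING, COMPLETE = 0x01, 0x02

def select_boot_data(file1, file2):
    """Return the data of the first trusted mirror, or None when both
    flags indicate an interrupted rewrite (S37: fall back to the
    system initial setting values)."""
    if file1["flag"] == COMPLETE:   # S31/S32: File1 flag check
        return file1["data"]        # S33: transfer File1 to the DRAM
    if file2["flag"] == COMPLETE:   # S34/S35: File2 flag check
        return file2["data"]        # S36: transfer File2 to the DRAM
    return None                     # S37: SSD considered failed

file1 = {"flag": REWRITING, "data": b"half-written"}   # system went down mid-rewrite
file2 = {"flag": COMPLETE,  "data": b"last good copy"}
print(select_boot_data(file1, file2))   # b'last good copy'
```

Because the two mirrors are never rewritten simultaneously, at most one of them can carry the flag = 01 state after a crash, which is what makes this simple check sufficient.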

  As described above, according to the present embodiment, at system startup the transfer means selectively transfers to the DRAM 59 the copy of the duplicated data whose writing has been completed. Data inconsistency caused by a system down during a rewrite can thus be restored at the next startup, improving the reliability of the data and of the system as a whole.

[Third Embodiment]
Next, a third embodiment of the present invention will be described with reference to FIG. 8. The same parts as those in the first embodiment described above are denoted by the same reference numerals, and their description is omitted.

  In the first embodiment, the NVRAM 60 stores the update target address value and the updated counter value data as the update history information. The present embodiment differs in that the NVRAM 60 instead stores, as the update history information, difference data relative to the value at the previous access.

  FIG. 8 is an explanatory diagram showing the data backup procedure during print processing according to the third embodiment of the present invention. As shown in FIG. 8, in the present embodiment, when the CPU 56 detects that the paper has been discharged onto the paper discharge tray 42 by the paper discharge roller 41 (Yes in step S14 shown in FIG. 3) and updates the counter register of the ASIC 55 (step S15 shown in FIG. 3), it calculates the difference value of the update data. For example, in the case of a counter driven by the sensor 80, the difference value of the update data is "+1".

  The CPU 56 recognizes the signal input to the ASIC 55 and, as shown in (2) of FIG. 8, reads the current counter value from the address corresponding to the counter information in the system setting data on the DRAM 59 (step S16 shown in FIG. 3), then writes the value obtained by adding 1 to the counter value back to the same address in the DRAM 59 to update it (step S17 shown in FIG. 3). Further, as shown in (3) of FIG. 8, the CPU 56 stores the address value and the difference data as a pair in the NVRAM 60 and increments the address (pointer) (step S18 shown in FIG. 3). As a result, only the update target address and the "+1" data (difference data) are stored in the NVRAM 60 as update history information, so the amount of data held in the NVRAM 60 can be reduced.

  When it is determined that the NVRAM 60 is full of accumulated address value and difference data (Yes in step S19 shown in FIG. 3), the CPU 56 aggregates the address values in the NVRAM 60 as shown in (4) of FIG. 8, validating only the final access for duplicate accesses (step S20 shown in FIG. 3), and performs a writing process on the SSD 62 only for the aggregated address values and their paired difference data, as shown in (5) of FIG. 8 (step S21 shown in FIG. 3). In the example shown in FIG. 8, the two addresses 0x0003A and 0x00142 are the update target addresses, accessed four times and ten times, respectively. Therefore, since, for example, the four write processes to address 0x0003A are consolidated on the SSD 62, the number of accesses to the SSD 62 is reduced to 1/4.
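A sketch of the difference-data variant: each history entry carries only the address and the predetermined delta (+1 for the paper-discharge counter), using the FIG. 8 example of four and ten updates. Summing the deltas per address is shown here as one plausible way to consolidate the log before write-back; the names are assumptions.

```python
def log_difference(nvram_log, addr, delta=+1):
    nvram_log.append((addr, delta))   # S18: store only address and difference

nvram_log = []
for _ in range(4):                    # FIG. 8: 0x0003A updated four times
    log_difference(nvram_log, 0x0003A)
for _ in range(10):                   # 0x00142 updated ten times
    log_difference(nvram_log, 0x00142)

# Consolidate per address before writing back to the SSD (one plausible
# reading of the aggregation step when the log holds difference data).
final = {}
for addr, delta in nvram_log:
    final[addr] = final.get(addr, 0) + delta
print(len(nvram_log), "->", len(final))   # 14 -> 2 writes reach the SSD
print(final[0x0003A], final[0x00142])     # 4 10
```

Since the delta is always the predetermined "+1", even the delta field could in principle be omitted and only the address stored, shrinking each NVRAM entry further.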

  As described above, according to the present embodiment, the update history information stored in the NVRAM 60 by the update history storage means is a difference value relative to the data rewritten at the same address. Since the difference value (for example, "+1") is determined in advance, storing only the address value, or the address value together with the difference value, in the NVRAM 60 is sufficient to reliably hold the information at the time of data rewriting, so the capacity of the expensive NVRAM 60 can be reduced.

  In each of the embodiments, an example in which a multifunction device called an MFP is applied as the data storage device has been described. However, the present invention is not limited to this, and can also be applied to personal computers, PC servers, recording/playback devices for image data such as DVDs, music data input/playback devices, measuring devices, in-vehicle car navigation systems, camera systems (cell phones, digital cameras, remote cameras, aeronautical/marine/in-vehicle cameras, etc.), authentication systems, and medical equipment.

FIG. 1 is a block diagram showing the configuration of a multifunction machine according to the first embodiment of the present invention.
FIG. 2 is a block diagram illustrating the write control system of the multifunction machine.
FIG. 3 is a flowchart schematically showing the flow of print processing.
FIG. 4 is an explanatory diagram showing the procedure of data backup during print processing.
FIG. 5 is an explanatory diagram showing the procedure of data backup during print processing.
FIG. 6 is an explanatory diagram showing the processing procedure during print processing and at system startup according to the second embodiment of the present invention.
FIG. 7 is a flowchart showing the flow of processing at system startup.
FIG. 8 is an explanatory diagram showing the procedure of data backup during print processing according to the third embodiment of the present invention.

Explanation of symbols

1 Data storage device
59 First storage means
60 Third storage means
62 Second storage means

Claims (18)

  1. Volatile first storage means;
    Non-volatile second storage means with a limited number of data rewrites;
    Non-volatile third storage means having a smaller capacity and a larger permissible number of data rewrites than the second storage means,
    Transfer means for transferring data stored in the second storage means to the first storage means at the time of system startup;
    Data receiving means for receiving new data input;
    Updating means for updating the data transferred to the first storage means based on the data newly received by the data receiving means;
    Update history storage means for storing update history information updated by the update means in the third storage means;
    Aggregation means for, when a predetermined condition is satisfied during storage of the update history information in the third storage means, aggregating the address values of the update history information stored in the third storage means to generate aggregated data in which only the last access is valid for duplicate accesses;
    Writing means for writing the aggregated data generated by the aggregation means to the second storage means;
    A data storage device comprising:
  2. The update history information stored in the third storage means by the update history storage means is a difference value relative to the data previously rewritten at the same address,
    The data storage device according to claim 1.
  3. Duplexing means for duplexing the data to be stored in the second storage means and storing at least one copy in the second storage means;
    Identification means for making it possible to identify, for the duplicated data, whether data writing is in progress or has been completed;
    With
    The transfer means selectively transfers the data for which data writing has been completed among the duplicated data to the first storage means at the time of system startup.
    3. The data storage device according to claim 1, wherein the data storage device is a data storage device.
  4. Data compression means for compressing the data stored in the third storage means;
    Data decompression means for decompressing the data stored in the third storage means upon reading;
    The data storage device according to claim 1, further comprising:
  5. An initialization means for initializing the third storage means after the writing of the aggregated data to the second storage means by the writing means is completed;
    5. The data storage device according to claim 1, wherein the data storage device is a data storage device.
  6. Management means for collecting storage locations of rewrite target data where rewriting occurs in the second storage means and managing them in a file;
    Wear leveling means for smoothing access by accessing unused areas;
    With
    When rewriting data to the second storage means, the wear leveling means accesses an unused area of the second storage means in units of a memory area of the file size into which the rewrite target data managed by the management means has been aggregated,
    The data storage device according to claim 1, wherein the data storage device is a data storage device.
  7. A transfer function for transferring data stored in the nonvolatile second storage means having a limited number of data rewrites to the volatile first storage means at the time of system startup;
    A data reception function that accepts new data input,
    An update function for updating the data transferred to the first storage means based on data newly received by the data reception function;
    An update history storage function for storing update history information updated by the update function in a nonvolatile third storage unit having a small capacity and a large number of data rewrites compared to the second storage unit;
    An aggregation function of, when a predetermined condition is satisfied during storage of the update history information in the third storage means, aggregating the address values of the update history information stored in the third storage means to generate aggregated data in which only the last access is valid for duplicate accesses;
    A writing function for writing the aggregated data generated by the aggregation function to the second storage unit;
    A program that causes a computer to execute.
  8. The update history information stored in the third storage means by the update history storage function is a difference value relative to the data previously rewritten at the same address,
    The program according to claim 7, wherein:
  9. A duplex function for duplicating the data stored in the second storage means, and storing at least one in the second storage means;
    An identification function that makes it possible to identify whether data is being written or data writing is complete for the data stored in a duplex manner;
    To the computer,
    The transfer function selectively transfers the data, in which data writing has been completed, among the duplicated data to the first storage means at the time of system startup.
    The program according to claim 7 or 8, characterized in that
  10. A data compression function for compressing the data stored in the third storage means;
    A data decompression function for decompressing the data stored in the third storage means upon reading;
    10. The program according to claim 7, wherein the program is executed by the computer.
  11. Causing the computer to execute an initialization function for initializing the third storage means after completing the writing of the aggregated data to the second storage means by the writing function;
    The program according to any one of claims 7 to 10, wherein:
  12. A management function for collecting storage locations of rewrite target data where rewriting occurs in the second storage means and managing them in a file;
    Wear leveling function that smooths access by accessing unused areas;
    To the computer,
    When rewriting data to the second storage means, the wear leveling function accesses an unused area of the second storage means in units of a memory area of the file size into which the rewrite target data managed by the management function has been aggregated,
    The program according to any one of claims 7 to 11, characterized in that:
  13. A transfer step of transferring data stored in the nonvolatile second storage means having a limited number of data rewrites to the volatile first storage means at the time of system startup;
    A data acceptance process for accepting new data input;
    An update step of updating the data transferred to the first storage means based on the data newly received in the data reception step;
    An update history storage step for storing update history information updated in the update step in a non-volatile third storage unit having a small capacity and a large number of data rewrites compared to the second storage unit;
    An aggregation step of, when a predetermined condition is satisfied during storage of the update history information in the third storage means, aggregating the address values of the update history information stored in the third storage means to generate aggregated data in which only the last access is valid for duplicate accesses;
    A writing step of writing the aggregated data generated by the aggregation step into the second storage unit;
    A data storage method comprising:
  14. The update history information stored in the third storage means in the update history storage step is a difference value relative to the data previously rewritten at the same address,
    The data storage method according to claim 13.
  15. A duplexing step of duplexing the data to be stored in the second storage means and storing at least one copy in the second storage means;
    An identification step for identifying whether data is being written or data writing is completed for the data stored in a duplex manner;
    Including
    The transfer step selectively transfers the data for which data writing has been completed among the duplicated data to the first storage means at the time of system startup.
    15. The data storage method according to claim 13, wherein the data storage method is a data storage method.
  16. A data compression step of compressing the data stored in the third storage means;
    A data decompression step for decompressing the data stored in the third storage means upon reading;
    The data storage method according to claim 13, further comprising:
  17. An initialization step of initializing the third storage means after the completion of writing the aggregated data to the second storage means by the writing step;
    The data storage method according to claim 13, wherein the data storage method is a data storage method.
  18. A management step of collecting storage locations of rewrite target data where rewriting occurs in the second storage means and managing them in a file;
    Wear leveling process that smoothes access by accessing unused areas;
    Including
    When rewriting data to the second storage means, the wear leveling step accesses an unused area of the second storage means in units of a memory area of the file size into which the rewrite target data managed by the management step has been aggregated,
    The data storage method according to claim 13, wherein the data storage method is a data storage method.
JP2007167643A 2007-06-26 2007-06-26 Data storage device, program, and data storage method Active JP5009700B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2007167643A JP5009700B2 (en) 2007-06-26 2007-06-26 Data storage device, program, and data storage method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2007167643A JP5009700B2 (en) 2007-06-26 2007-06-26 Data storage device, program, and data storage method

Publications (2)

Publication Number Publication Date
JP2009009213A JP2009009213A (en) 2009-01-15
JP5009700B2 true JP5009700B2 (en) 2012-08-22

Family

ID=40324256

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2007167643A Active JP5009700B2 (en) 2007-06-26 2007-06-26 Data storage device, program, and data storage method

Country Status (1)

Country Link
JP (1) JP5009700B2 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5377175B2 (en) * 2009-09-08 2013-12-25 株式会社東芝 Controller and data storage device
JP5374313B2 (en) * 2009-10-16 2013-12-25 ファナック株式会社 Information processing apparatus having nonvolatile memory protection function
JP5553309B2 (en) * 2010-08-11 2014-07-16 国立大学法人 東京大学 Data processing device
JP2012108627A (en) * 2010-11-15 2012-06-07 Toshiba Corp Memory system
JP6166796B2 (en) * 2013-12-02 2017-07-19 華為技術有限公司Huawei Technologies Co.,Ltd. Data processing device and data processing method
JP6168660B2 (en) * 2014-04-15 2017-07-26 京セラドキュメントソリューションズ株式会社 Electronics
JP6168661B2 (en) * 2014-04-17 2017-07-26 京セラドキュメントソリューションズ株式会社 Electronics

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06215589A (en) * 1993-01-18 1994-08-05 Hitachi Ltd Semiconductor memory
JPH0922458A (en) * 1995-07-07 1997-01-21 Ricoh Co Ltd Facsimile equipment
KR100484485B1 (en) * 2002-10-01 2005-04-20 한국전자통신연구원 Method for storing data in non-volatile memory and apparatus therefor
JP2004287636A (en) * 2003-03-20 2004-10-14 Hitachi Printing Solutions Ltd Data backup method of nonvolatile memory
JP2004341989A (en) * 2003-05-19 2004-12-02 Matsushita Electric Ind Co Ltd Memory card pack and memory card

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20100205

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20120509

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20120529

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20120531

R150 Certificate of patent or registration of utility model

Free format text: JAPANESE INTERMEDIATE CODE: R150

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20150608

Year of fee payment: 3