WO2013100302A1 - Patch method using memory and temporary memory, and patch server and client using same - Google Patents


Info

Publication number
WO2013100302A1
WO2013100302A1 (PCT/KR2012/006613)
Authority
WO
WIPO (PCT)
Prior art keywords
patch
memory
data
file
size
Prior art date
Application number
PCT/KR2012/006613
Other languages
English (en)
Inventor
Sung Gook Jang
Kwang Hee Yoo
Joo Hyun Sung
Hye Jin Jin
Yoon Hyung Lee
Original Assignee
Neowiz Games Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Neowiz Games Co., Ltd. filed Critical Neowiz Games Co., Ltd.
Publication of WO2013100302A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00: Arrangements for software engineering
    • G06F 8/60: Software deployment
    • G06F 8/65: Updates
    • G06F 8/658: Incremental updates; Differential updates
    • G06F 15/00: Digital computers in general; Data processing equipment in general
    • G06F 15/16: Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • G06F 12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs

Definitions

  • the present invention relates generally to a patch technology and, more particularly, to a patch method using memory and temporary memory which is capable of patching a large amount of data more rapidly and reliably, and a patch server and client using the patch method.
  • a conventional patch technique includes a patch method using information about the version of a patch. For example, there is a method of a patch client accessing a patch server, comparing a current patch version with the patch version of the patch server, and, if patching is necessary, downloading and storing corresponding content.
  • the conventional patch method is problematic in that the resources of the patch server and the patch client are used inefficiently and a bottleneck occurs on the server, because redundant patching may occur if the information about the patch version is erroneous or patching is only partially performed.
  • an object of the present invention is to apply a patch more rapidly and efficiently by maximizing the utilization of the resources of a patch client using an improved patch algorithm.
  • Another object of the present invention is to patch a patch file having a high capacity rapidly and reliably.
  • Still another object of the present invention is to apply a patch more rapidly using an optimized patch algorithm depending on the size of the data to be patched.
  • Yet another object of the present invention is to apply a patch in a resource-efficient and error-tolerant manner by modifying only the erroneous part, rather than the entire file to be patched, if an error occurs during the patch process.
  • the present invention provides a patch method, the patch method being performed in a patch client, the patch client being connectable to a patch server and including a storage device and memory, the patch method including the steps of (a) accessing the patch server and receiving patch data from the patch server; (b) calculating an available space of the memory; (c) if a size of the patch data is smaller than or equal to the available space of the memory, performing patching using the available space of the memory; and (d) if the size of the patch data is greater than the available space of the memory, allocating temporary memory of a capacity, corresponding to the size of the patch data, to the storage device, and performing patching using the allocated temporary memory.
  • the present invention provides a patch method, the patch method being performed in a patch client, the patch client being connectable to a patch server and including a storage device and memory, the patch method including the steps of (a) accessing the patch server and receiving patch data, including a plurality of files to be patched, from the patch server; (b) calculating an available space of the memory; (c) if at least one of the plurality of files to be patched is smaller than the available space of the memory, patching the at least one file using the available space of the memory; and (d) if at least one of the plurality of files to be patched is greater than the available space of the memory, patching the at least one file using temporary memory allocated to the storage device.
  • the present invention provides a patch server, the patch server being connected to a patch client and providing patch data, the patch server including memory; a hash generation unit configured to generate at least one hash value for received data; and a control unit configured to load an original file and a patch file into the memory, to control the hash generation unit so that the hash generation unit compares the loaded original file with the loaded patch file and generates at least one hash value for a difference, to generate a patch table including the generated hash value, and to generate the patch data including the generated patch table.
  • the present invention provides a patch client, the patch client being able to use memory and a storage device, and accessing a patch server, receiving patch data and performing patching, the patch client including a control unit for comparing a size of the received patch data with an available space of the memory, performing the patch using the memory if the available space of the memory is equal to or greater than the size of the received patch data, and allocating temporary memory of a capacity, corresponding to the size of the patch data, to the storage device and then performing patching using the allocated temporary memory if the available space of the memory is smaller than the size of the received patch data.
  • FIG. 1 is a diagram showing the configuration of an embodiment of a patch system to which a disclosed technology may be applied;
  • FIG. 2 is a diagram showing the construction of an embodiment of the patch server of FIG. 1;
  • FIG. 3 is a reference diagram illustrating a process of generating patch data that is performed in the control unit of FIG. 2;
  • FIG. 4 is a diagram showing the construction of an embodiment of a patch client according to the disclosed technology;
  • FIG. 5 is a reference diagram illustrating an embodiment of patching that is performed using memory;
  • FIG. 6 is a reference diagram illustrating an embodiment of patching that is performed using temporary memory;
  • FIG. 7 is a flowchart illustrating an embodiment of a patch method that is performed by the patch client of FIG. 4;
  • FIG. 8 is a flowchart illustrating another embodiment of a patch method that is performed by the patch client of FIG. 4;
  • FIG. 9 is a flowchart illustrating still another embodiment of a patch method that is performed by the patch client of FIG. 4.
  • FIG. 10 is a flowchart illustrating yet another embodiment of a patch method that is performed by the patch client of FIG. 4.
  • first, second, etc. are each used to distinguish an element from other elements, and the elements should not be limited by these terms.
  • the first element may be referred to as the second element or, similarly, the second element may be referred to as the first element without departing from the scope and technical spirit of the present invention.
  • singular expressions may include plural expressions. It should be understood that in this application, the term include(s), comprise(s) or have(has) implies the inclusion of features, numbers, steps, operations, components, parts, or combinations thereof mentioned in the specification, but does not imply the exclusion of one or more of any other features, numbers, steps, operations, components, parts, or combinations thereof.
  • step symbols (e.g., (a), (b), and (c)) are used for convenience of description, and do not describe the sequence of the steps.
  • the steps may be performed in a sequence different from a described sequence unless a specific sequence is clearly described in the context. That is, the steps may be performed in the described sequence, may be performed substantially at the same time, and may be performed in the reverse sequence.
  • an original file refers to a file before a patch is applied;
  • a patch file is a file to which the patch has been applied.
  • Patch data is data necessary to perform patching, and may include a patch file or information about the differences between a patch file and an original file depending on the embodiment.
  • FIG. 1 is a diagram showing the configuration of an embodiment of a patch system to which a disclosed technology may be applied.
  • the patch system may include a patch server 100 and a patch client 200.
  • the patch server 100 may generate patch data, and provide the patch data to the patch client 200 when the patch client 200 accesses the patch server 100.
  • the patch server 100 refers to a server function that performs patching while operating in conjunction with the patch client 200, and is not limited to a specific implementation.
  • the patch server 100 may be implemented as a single server or a server farm, or may be implemented as one function of a general purpose server. The detailed construction or function of the patch server 100 will be described later with reference to FIG. 2.
  • the patch client 200 may access the patch server 100 over a network, and perform patching on a terminal on which the patch client 200 is being run by performing a specific patch process, which will be described later.
  • the patch client 200 refers to a terminal function that performs patching while operating in conjunction with the patch server 100, and is not limited to a specific implementation.
  • the patch client 200 may be implemented as software that is executed in a terminal, or as logic-designed hardware. The detailed construction or function of the patch client will be described later with reference to FIGS. 4 to 6.
  • the patch server 100 and the patch client 200 may be connected over a network.
  • the network is not limited to a specific standard-based network.
  • the patch client 200 may be connected to the patch server 100 over a wired or wireless network or a combination of wired and wireless networks.
  • a wired or Wi-Fi communication network may be used as the network.
  • if the patch client 200 is a smartphone or a tablet PC, the network may be configured via a 3G or 4G mobile communication network.
  • FIG. 2 is a diagram showing the construction of an embodiment of the patch server 100 of FIG. 1.
  • the patch server 100 may include a communication unit 110, memory 120, a hash generation unit 130, a patch data storage unit 140, and a control unit 150.
  • the patch server 100 may further include a client management unit 160.
  • the communication unit 110 may establish or maintain a communication line with the patch client 200 in response to a request from the control unit 150.
  • the memory 120 may include a storage space necessary to generate patch data.
  • the memory 120 may be partitioned into specific spaces for storing different data under the control of the control unit 150.
  • the hash generation unit 130 may generate a hash value for received data.
  • the hash generation unit 130 may generate a hash value for original data or patch data under the control of the control unit 150, and may provide the generated hash value to the control unit 150.
  • the patch data storage unit 140 may store patch data.
  • the patch data is data necessary to perform patching, and may include a patch file or information about the differences between a patch file and an original file depending on the embodiment.
  • the patch data storage unit 140 may store patch data provided by the control unit 150 or information about patch data (e.g., information about the version of a patch, the total size of patch data, and the size of each file in the case of patch data including a plurality of files).
  • the control unit 150 may generally control the elements of the patch server 100, thereby causing the patch server 100 to perform patching.
  • the control unit 150 will be described in more detail below with respect to each function thereof.
  • the control unit 150 may compare an original file with a patch file, and generate patch data.
  • the control unit 150 may load a patch file and an original file into the memory 120, and request the hash generation unit 130 to compare the original file with the patch file and to generate a hash value for the differences therebetween.
  • the control unit 150 may generate a patch table, including the generated hash value and an index associated with the generated hash value, generate patch data including the patch table and the patch file, and store the generated patch data in the patch data storage unit 140.
  • the control unit 150 may generate patch data using a hash value for the patch file. That is, the control unit 150 may generate a patch table, including a hash value for the entire patch file and an index for the hash value, and generate patch data including the patch table and the patch file.
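The patch-table generation described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the block layout, the dictionary-based table entries, and the use of SHA-256 are all assumptions, since the specification does not fix a block size or a hash function.

```python
import hashlib

def build_patch_table(original_blocks, patch_blocks):
    # Compare the files block by block; record an index and a hash only
    # for blocks whose content differs, as described for the control unit 150.
    table = []
    for index, (old, new) in enumerate(zip(original_blocks, patch_blocks)):
        if old != new:
            table.append({
                "index": index,
                "hash": hashlib.sha256(new).hexdigest(),
                "data": new,  # stands in for the stored patch-file address
            })
    return table

# FIG. 3 style example: the blocks at indices 1 (binary 01) and 3 (binary 11)
# differ, so only those two entries appear in the patch table.
original = [b"AA", b"BB", b"CC", b"DD"]
patched  = [b"AA", b"XX", b"CC", b"YY"]
table = build_patch_table(original, patched)
```

Only the differing blocks enter the table, which is what keeps the patch data small relative to the full patch file.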
  • the control unit 150 may provide the patch data to the patch client 200.
  • the control unit 150 may control the communication unit 110 so that a communication line with the patch client 200 is maintained, determine whether patching is necessary based on information about the patch data stored in the patch data storage unit 140, and then provide the patch data to the patch client 200.
  • the control unit 150 may request information about the connected patch client 200 from the client management unit 160, determine patch data based on the information received in response to the request, and then provide the patch data to the patch client 200.
  • the client management unit 160 may manage the patch client 200. In an embodiment, the client management unit 160 may perform authentication on the patch client 200. In an embodiment, the client management unit 160 may manage information about patching performed on the patch client 200 (e.g., patch time and information about the version of a patch).
  • FIG. 3 is a reference diagram schematically illustrating an example of a process of generating a patch table that is performed by the control unit 150 of FIG. 2.
  • an original file and a patch file are separately loaded into the memory 120.
  • the control unit 150 may perform the comparison more rapidly because the original and patch files have been loaded into the memory 120.
  • the 4-digit binary numbers of the original and patch files refer to addresses in the memory 120.
  • the 2-digit binary numbers of the content of the original and patch files refer to the indices of corresponding data.
  • each of the original and patch files includes four pieces of data. The four pieces of data may be four different files, or part of at least one file.
  • the control unit 150 may compare the original file with the patch file. That is, the control unit 150 may compare the content of each of the pieces of data of the original file loaded into the memory 120 with the content of each of the pieces of data of the patch file loaded into the memory 120. As shown in the drawing, the control unit 150 may determine that the content of index 01 and the content of index 11 are different from each other, and configure a patch table 300.
  • the patch table 300 may include indices 310, hash values 320, and patch file addresses 330.
  • the hash values 320 may refer to the hash values of the patch file corresponding to the indices 01 and 11, and the patch file addresses 330 may refer to addresses where the content of the patch file is actually stored.
  • the patch file addresses 330 may refer to the addresses or locations of the patch data storage unit 140 where the patch file is stored.
  • FIG. 3 illustrates an example in which an original file is compared with a patch file and patch data is generated based on the content of the patch file corresponding to differences and hash values for the content.
  • the patch client 200 may overwrite the patch file on the original file, calculate hash values, compare the calculated hash values with the hash values of patch data, and then perform patching.
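The overwrite-then-verify behavior attributed to the patch client 200 can be sketched as below. The table layout (index/hash/data entries) carries over from the illustration above and remains an assumption, as does SHA-256 as the hash function.

```python
import hashlib

def apply_and_verify(original_blocks, patch_table):
    # Overwrite the blocks named in the patch table onto a copy of the
    # original file, then recompute each patched block's hash and compare
    # it with the hash stored in the patch data.
    result = list(original_blocks)
    for entry in patch_table:
        result[entry["index"]] = entry["data"]
    for entry in patch_table:
        actual = hashlib.sha256(result[entry["index"]]).hexdigest()
        if actual != entry["hash"]:
            raise ValueError("patch error at index %d" % entry["index"])
    return result

original = [b"AA", b"BB", b"CC", b"DD"]
table = [{"index": 1, "hash": hashlib.sha256(b"XX").hexdigest(), "data": b"XX"}]
patched = apply_and_verify(original, table)
```

A hash mismatch identifies exactly which block failed, which is what allows the error handling described later to re-patch only the erroneous part.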
  • FIG. 4 is a diagram showing the construction of an embodiment of the patch client 200 according to the disclosed technology
  • FIGS. 5 and 6 are reference diagrams illustrating embodiments of patch schemes that are performed by the patch client 200.
  • the patch client 200 may include a communication unit 210, memory 220, temporary memory 230, and a control unit 240.
  • the patch client 200 may further include at least one of a patch data storage unit 250 and a hash generation unit 260.
  • the communication unit 210 may establish or maintain a communication line with the patch server 100 at the request of the control unit 240.
  • the memory 220 is a memory device for providing a storage space.
  • the temporary memory 230 is allocated using a storage device (e.g., a HDD, a RAID, or an SSD).
  • the temporary memory 230 performs the same function as the memory 220 in that it provides a storage space, but its performance is lower than that of the memory 220 because I/O processing occurs while data is loaded.
  • the control unit 240 controls the elements of the patch client 200 so that patching is performed on a terminal.
  • the control unit 240 may perform patching using the memory 220.
  • the control unit 240 may perform patching by partitioning the available space of the memory 220 into at least two regions and loading an original file and patch data into the respective regions. For example, the control unit 240 may perform patching by allocating three regions in the memory 220 and loading (or generating) an original file, a patch file, and patch data in the respective regions.
  • FIG. 5 is a reference diagram illustrating this embodiment. Referring to FIG. 5, the control unit 240 may allocate three regions to the memory 220, and use the three regions as a region 510 for an original file, a region 520 for a patch file, and a region 530 for patch data.
  • the control unit 240 may load an original file to be patched onto the original file region 510, load patch data received from the patch server 100 onto the patch data region 530, and then generate a patch file in the patch file region 520.
  • the control unit 240 may analyze the patch table of the patch data region 530, determine that content corresponding to indices 01 and 11 is targeted for patching, determine that the corresponding patch content is stored at addresses 11001 and 11010, and generate a patch file in the patch file region 520 based on the data loaded onto the original file region 510 and the data loaded onto the patch data region 530. That is, the control unit 240 may read data, corresponding to indices not included in the patch table, from the original file, and generate the patch file based on the read data.
  • the control unit 240 may search patch data for data corresponding to indices included in the patch table, and generate the patch file by writing the found patch data into the patch file.
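The three-region assembly of FIG. 5 can be sketched as follows, under stated assumptions: the regions are modeled as Python lists rather than memory addresses, and the patch table is reduced to an index-to-content mapping.

```python
def generate_patch_file(original_region, patch_table):
    # Assemble the patch file in its own region: indices present in the
    # patch table take their content from the patch data region, while all
    # other indices are copied from the original file unchanged.
    patch_file_region = []
    for index, block in enumerate(original_region):
        patch_file_region.append(patch_table.get(index, block))
    return patch_file_region

original_region = [b"AA", b"BB", b"CC", b"DD"]   # loaded original file
patch_table = {1: b"XX", 3: b"YY"}               # indices 01 and 11 to patch
patch_file = generate_patch_file(original_region, patch_table)
```

Because the patch file is built in a separate region, the original file stays intact until the result is verified, which simplifies recovery if an error is found.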
  • the control unit 240 may generate the original file, the patch file, and the patch data, and load them into the memory 220 using different methods.
  • the original file may be loaded into the memory 220 using a memory pool method,
  • the patch file may be loaded into the memory 220 using a buffer-overlapped pool method, and
  • the patch data may be loaded into the memory 220 using a memory file-mapped method.
  • the reason for this is that if data is loaded into the memory 220 using the same method, the other loading processes may be affected by an error that occurs in a specific loading process.
  • the control unit 240 may vary an offset in different ways when performing a process of reading the original file, the patch file, and the patch data from the memory 220, or a process of writing them to the memory 220, according to the embodiment disclosed in FIG. 5.
  • the control unit 240 may use a random offset shift method.
  • the control unit 240 may use a sequential offset increase method.
  • if the offset varies in different ways as described above, an error that occurs in one process does not affect the other processes, and the operating efficiency of the processor may be increased.
  • the control unit 240 may perform patching using the temporary memory 230.
  • FIG. 6 is a reference diagram illustrating an embodiment of the patch using the temporary memory 230. The embodiment shown in FIG. 6 corresponds to an example of patching that varies the order of the original file data.
  • the control unit 240 may measure in advance the size of the part of an original file to be changed by comparing the original file with patch data, and allocate space in the temporary memory 230 corresponding to the measured size. Referring to the example shown in FIG. 6, the control unit 240 determines that three blocks of the original file need to be changed as a result of analyzing the patch table of the patch data, requests a space of a size corresponding to the three blocks from the temporary memory 230, and allocates the space of the requested size in the temporary memory 230.
  • the control unit 240 may change the data of the original file using the allocated temporary memory 230, and generate a patch file based on the changed data.
  • the control unit 240 may load the changed data onto the temporary memory 230 using the output buffer memory pool method.
  • the output buffer memory pool method makes recovery from a patch error easy and fast, because information about the failed patching is automatically recorded in a file when the patching fails.
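One way to read the recovery property described above is a journal written to temporary storage before the original file is touched: a failed patch then leaves a file recording what was in flight. The sketch below is an interpretation under assumed names and formats; the patent does not specify how the output buffer memory pool method records failures.

```python
import json
import os
import tempfile

def patch_via_temporary_storage(original_path, changes, temp_dir):
    # Record the pending change offsets in a journal file first, so that a
    # failure mid-patch leaves a record for recovery. Journal layout and
    # byte-level "changes" mapping are illustrative assumptions.
    journal_path = os.path.join(temp_dir, "patch_journal.json")
    with open(journal_path, "w") as journal:
        json.dump({"pending_offsets": sorted(changes)}, journal)
    with open(original_path, "r+b") as f:
        data = bytearray(f.read())
        for offset, new_byte in changes.items():
            data[offset] = new_byte
        f.seek(0)
        f.write(data)
    os.remove(journal_path)  # success: no failure record remains

# Usage: patch byte 1 of a small file held in a temporary directory.
with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "original.bin")
    with open(target, "wb") as f:
        f.write(b"ABCD")
    patch_via_temporary_storage(target, {1: ord("X")}, d)
    with open(target, "rb") as f:
        result = f.read()
```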
  • the control unit 240 may determine the size of the data to be patched, and perform patching using at least one of the memory 220 and the temporary memory 230. That is, the control unit 240 may perform patching using a combination of the embodiments shown in FIGS. 5 and 6. More particularly, when patch data is received, the control unit 240 may check the size of the data to be patched and check the currently available size of the memory 220. If the size of the data is equal to or smaller than the currently available size of the memory 220, the control unit 240 may perform patching using the memory 220. In contrast, if the size of the data is larger than the currently available size of the memory 220, the control unit 240 may perform patching using the temporary memory 230. This embodiment will be described in more detail below with reference to FIG. 9.
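The selection rule of this combined embodiment reduces to a single comparison; the function and return labels below are illustrative names, not terms from the specification.

```python
def choose_patch_space(patch_size, memory_available):
    # Patch in memory when the data fits in the currently available space;
    # otherwise fall back to temporary memory on the storage device.
    if patch_size <= memory_available:
        return "memory"
    return "temporary_memory"

small = choose_patch_space(patch_size=100, memory_available=256)
large = choose_patch_space(patch_size=512, memory_available=256)
```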
  • the patch data storage unit 250 may temporarily store patch data received from the patch server 100. For example, if patch data is downloaded from the patch server 100, the control unit 240 may store the patch data in the patch data storage unit 250, and perform patching using the memory 220 and the temporary memory 230 based on the stored patch data.
  • the patch data storage unit 250 may store information about patching that was performed. For example, after performing patching, the control unit 240 may generate information about the patching (e.g., information about the version of a patch, a patch date, and a patch capacity) and provide the generated information to the patch data storage unit 250. The patch data storage unit 250 may store the generated information.
  • the hash generation unit 260 may generate hash values for the received data.
  • the control unit 240 may control the hash generation unit 260 so that the hash generation unit 260 generates hash values for patched data in order to check whether patching has been correctly performed after the patching has been completed.
  • the control unit 240 may compare the hash values, generated by the hash generation unit 260, with hash values included in a patch table, and determine whether the patching has been correctly performed.
  • FIG. 7 is a flowchart illustrating an embodiment of a patch method that is performed by the patch client 200 of FIG. 4.
  • the embodiment of the patch method disclosed in FIG. 7 relates to an embodiment of patching that is performed using the memory 220.
  • the control unit 240 may receive data from the patch server 100 at step S710, and allocate the memory 220 for patching at step S720.
  • the control unit 240 may partition the memory 220 into three regions for an original file, patch data and a patch file, and then allocate the memory 220.
  • the control unit 240 may load the original file and the patch data onto the regions of the memory 220 at step S730.
  • the control unit 240 may load the original file and the patch data into the memory 220 using different methods. For example, if the original file has been loaded into the memory 220 using the memory pool method, the patch data may be loaded into the memory 220 using the file-mapped memory method. If data is loaded using different methods as described above, the entire loading process does not need to be performed and the propagation of error can be prevented even when an error occurs in a specific loading method.
  • the control unit 240 checks whether the loading has been successfully performed at step S740. If an error has occurred in specific data (No at step S740), the control unit 240 may perform a process of loading only corresponding data. Although in the flowchart of FIG. 7, the memory 220 is illustrated as being reallocated at step S720 and erroneous data is illustrated as being reloaded at step S730, it may be possible only to reload the data without allocating the memory 220, in an embodiment.
  • the control unit 240 may generate a patch file based on the original file and the patch data loaded onto the memory 220 at step S750. For example, the control unit 240 may analyze the patch table of the patch data, divide the original file into one or more parts whose content will be moved without change and one or more parts on which patching will be performed based on the patch data, and generate the patch file in the region allocated to the patch file based on the division. Here, the control unit 240 may generate the patch file in pieces in succession or all at once.
  • the control unit 240 may sequentially read corresponding content from the original file or the patch data, over a range from the memory unit allocated to the start point of the patch file to the memory unit allocated to the end point of the patch file, and write that content, thereby generating the patch file.
  • the control unit 240 may check the original file for one or more parts whose content will be moved without change, read those parts from the original file, write them into the corresponding parts of the patch file at once, and then write the content to be patched into the remaining parts of the patch file at once.
  • the control unit 240 may check the resulting patch file for an error at step S760.
  • the control unit 240 may check the generated patch file for errors using the patch table included in the patch data. For example, assuming that hash values for the patch data are included in the patch table, the control unit 240 may run an error check by calculating hash values for the patched part of the patch file and comparing the hash values of the patched part with the hash values of the patch table.
  • if there is no error (No at step S770), the control unit 240 terminates the patch process. If there is an error (Yes at step S770), the control unit 240 may perform an error handling process at step S780. For example, if the data allocated to the memory 220 has been lost, the control unit 240 may repeat the series of steps S720 to S760, allocating the memory 220 and loading the data again, in order to re-patch the parts having different hash values. As another example, if the original file and the patch data are still in the memory 220, the control unit 240 may repeat only steps S750 and S760, rewriting the part associated with the error based on the original file and the patch data and running the error check again.
  • the control unit 240 may partition the memory 220 into a region for an original file and a region for patch data, read content, corresponding to one or more parts that need to be changed in the original file, from the patch data, and change the original file by overwriting the read content, thereby generating a patch file.
  • FIG. 8 is a flowchart illustrating another embodiment of a patch method that is performed by the patch client of FIG. 4.
  • the embodiment of the patch method disclosed in FIG. 8 relates to an embodiment of the patch that is performed using the temporary memory 230.
  • the control unit 240 may read data from the patch server 100 at step S810, and measure the size of data to be patched (i.e., data that needs to be changed in an original file) at step S820. In an embodiment, the control unit 240 may measure the size of the data based on the patch table of patch data.
  • the control unit 240 may allocate the temporary memory 230 of a capacity corresponding to the size of the data to be changed at step S830, and load the data to be changed into the allocated temporary memory at step S840.
  • the control unit 240 may allocate the temporary memory 230, and load the data to be changed into the allocated temporary memory using the output buffer memory pool method.
  • with the output buffer memory pool method, when patching fails, information about the failed patch is automatically recorded in a file. Accordingly, performing the series of error processing steps S870 to S890 may be simple.
  • the control unit 240 checks whether the loading was successful at step S850. If an error has occurred (No at step S850), the control unit 240 may perform the step of loading the data again. Although, in the flowchart of FIG. 8, the temporary memory 230 is illustrated as being reallocated at step S830 and the data to be changed is illustrated as being reloaded at step S840, only the data to be changed may be reloaded without allocating the temporary memory 230 in an embodiment.
  • the control unit 240 may generate a patch file based on the data loaded into the temporary memory 230 at step S860, and run an error check on the generated patch file at step S870. If an error has occurred (Yes at step S880), the control unit 240 may perform an error handling process at step S890.
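Steps S820 and S830 above, measuring the data to be changed and allocating matching temporary capacity, can be sketched as below. A fixed block size and a list-of-entries patch table are assumptions for illustration.

```python
def measure_change_size(patch_table, block_size):
    # Step S820 sketch: the data to be changed is the set of blocks the
    # patch table marks as differing, so its size is the entry count
    # times an assumed fixed block size.
    return len(patch_table) * block_size

def allocate_temporary_memory(patch_table, block_size):
    # Step S830 sketch: reserve exactly the measured capacity.
    return bytearray(measure_change_size(patch_table, block_size))

table = [{"index": 1}, {"index": 3}, {"index": 7}]  # three changed blocks
buf = allocate_temporary_memory(table, block_size=4096)
```

Allocating only the measured size, rather than space for the whole file, is what keeps the temporary-memory path viable for large files.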
  • FIG. 9 is a flowchart illustrating still another embodiment of a patch method that is performed by the patch client 200 of FIG. 4.
  • the embodiment shown in FIG. 9 is an embodiment of a patch method that is performed based on a combination of the embodiments of FIGS. 7 and 8.
  • control unit 240 may receive patch data from the patch server 100 and check the patch data at step S910. For example, the control unit 240 may receive patch data and then check the overall size of the patch data.
  • control unit 240 may calculate the currently available space of the memory 220 at step S920. Thereafter, the control unit 240 compares the available space of the memory 220 with the size of the patch data at step S930. If the available space of the memory 220 is enough, patching is performed using the memory 220. In contrast, if the available space of the memory 220 is not enough, patching is performed using the temporary memory 230.
  • control unit 240 may take into consideration the memory space to be returned when calculating the available space of the memory 220. This may be represented by the following Equation 1:
  • Memory_total = Memory_enable + (Memory_return * t) (1)
  • an available memory space Memory_total that may be used when patching may be the sum of the currently available memory space Memory_enable and a memory space to be returned within a specific time Memory_return * t.
  • the memory space to be returned within the specific time may be represented by multiplying a memory space expected to be returned within the specific time Memory_return by a specific probability value t.
  • the control unit 240 may set Memory_return to the memory space expected to be returned within a set period, for example, within 1 minute.
  • the probability value t may be proportional to the expected return time. For example, the probability value t of a memory space Memory_return A that is expected to be returned within 10 seconds may be higher than that of a memory space Memory_return B that is expected to be returned within 40 seconds.
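Equation 1 might be computed as in the sketch below. The function name, the pair representation, and the example weights are illustrative assumptions; the patent only states that t is a probability value that is higher for memory expected back sooner.

```python
def available_memory(memory_enable: float, pending_returns) -> float:
    """Equation 1: Memory_total = Memory_enable + sum(Memory_return * t).

    pending_returns is a list of (size, t) pairs, where t is the
    probability weight for a memory space expected to be returned.
    """
    return memory_enable + sum(size * t for size, t in pending_returns)
```

For instance, with 100 units free, 50 units expected back within 10 seconds (t = 0.9), and 30 units expected back within 40 seconds (t = 0.5), Memory_total would be 100 + 45 + 15 = 160.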
  • the control unit 240 may perform patching using the memory 220 at steps S940 to S970.
  • since the steps of performing patching using the memory 220 have been described above with reference to FIG. 7, a detailed description thereof is omitted.
  • the control unit 240 may perform patching using the temporary memory 230 at steps S941 to S971.
  • since the steps of performing patching using the temporary memory 230 were described above with reference to FIG. 8, a detailed description thereof is omitted.
  • the embodiment shown in FIG. 9 is performed based on a combination of the patch method of FIG. 7 using the memory 220 and the patch method of FIG. 8 using the temporary memory 230.
  • the embodiment shown in FIG. 9 indicates that the patch method using the memory 220 is first performed. That is, if all patch data may be allocated to the memory 220, patching is rapidly performed using the memory 220. If the capacity of the memory 220 is smaller than the total size of patch data, patching is performed using the temporary memory 230. Accordingly, a large amount of patch data may be patched more rapidly.
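The decision at step S930 amounts to a single comparison, sketched here with hypothetical names:

```python
def choose_patch_path(patch_size: int, memory_total: int) -> str:
    """FIG. 9 decision (S930): patch in memory when the whole patch
    data fits in the available memory space, otherwise fall back to
    temporary memory allocated on the storage device."""
    return "memory" if patch_size <= memory_total else "temporary_memory"
```

`memory_total` here would be the Equation 1 value, so memory expected to be returned shortly still counts toward the fast path.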
  • FIG. 10 is a flowchart illustrating yet another embodiment of a patch method that is performed in the patch client 200 of FIG. 4.
  • the embodiment of FIG. 10 corresponds to another embodiment of a patch method that is performed based on a combination of the embodiments of FIGS. 7 and 8, and relates to an embodiment in which patching is performed on each of a plurality of files to be patched according to an optimized method when patch data includes the plurality of files. That is, in the embodiment of FIG. 9, whether to use the memory 220 or the temporary memory 230 is determined based on the entirety of the patch data.
  • in the embodiment of FIG. 10, by contrast, patching is performed on patch data including a plurality of files using either the memory 220 or the temporary memory 230 for each file.
  • the control unit 240 may receive patch data from the patch server 100, and check the patch data at step S1010. In an embodiment, after receiving patch data including a plurality of files to be patched, the control unit 240 may check the size of each of the plurality of files. In another embodiment, the patch data may include information about the size of the plurality of files, and the control unit 240 may check each of the plurality of files for the size thereof based on the information.
  • the control unit 240 may calculate the currently available space of the memory 220 at step S1020. In this case, the control unit 240 may compare the available space of the memory 220 with the size of each of the files included in the patch data. Based on the comparison, the control unit 240 may patch a file having a size equal to or smaller than the available capacity of the memory 220 using the memory 220, and may patch a file having a size greater than the available capacity of the memory 220 using the temporary memory 230.
  • since the calculation of the available space of the memory 220 may be performed as in FIG. 9, a detailed description thereof is omitted.
  • the control unit 240 checks whether there are files each having a size equal to or smaller than the available space of the memory 220 at step S1030. If there are such files (Yes at step S1030), the control unit 240 may patch those files of the patch data using the memory 220 at steps S1040 to S1070. Here, the steps of patching some files of the patch data using the memory 220 may be performed for each file. If the available space of the memory 220 is greater than the combined size of two or more of the files, the control unit 240 may patch those files at the same time. Since a detailed method of performing patching using the memory 220 has been described above with reference to FIG. 7, a detailed description thereof will be omitted.
  • the control unit 240 may patch the remaining files of the patch data, each having a size greater than the available space of the memory 220, using the temporary memory 230 at steps S1041 to S1071. For example, the control unit 240 may sum up the sizes of the remaining files of the patch data and allocate the temporary memory 230 corresponding to the sum. Since a detailed method of performing patching using the allocated temporary memory 230 has been described above with reference to FIG. 8, a detailed description thereof is omitted.
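The per-file dispatch of FIG. 10 can be sketched as follows. Representing the patch data as a name-to-size map is an assumption for illustration.

```python
def partition_files(files: dict, memory_available: int):
    """FIG. 10 analogue: split the files in the patch data into those
    patched via the memory and those patched via temporary memory,
    and sum the sizes of the latter for the temporary allocation."""
    in_memory, via_temp = [], []
    for name, size in files.items():
        (in_memory if size <= memory_available else via_temp).append(name)
    temp_needed = sum(files[n] for n in via_temp)  # temporary memory to allocate
    return in_memory, via_temp, temp_needed
```

The returned `temp_needed` corresponds to summing the sizes of the remaining files and allocating temporary memory of that capacity.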
  • patch data including a plurality of files is patched using at least one of the memory 220 and the temporary memory 230. Accordingly, patching may be performed more rapidly.
  • the control unit 240 may perform patching using the memory 220 and the temporary memory 230 in parallel depending on its computation performance. That is, the control unit 240 may patch some files, each having a size smaller than the available space of the memory 220, using the memory 220 and, at the same time, patch the remaining files using the temporary memory 230. If patching is performed in parallel, the time it takes to perform patching can be considerably reduced.
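The parallel variant might look like the sketch below, with Python threads as a stand-in for whatever concurrency mechanism the client actually uses; the callables for the two paths are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

def patch_in_parallel(small_files, large_files, patch_via_memory, patch_via_temp):
    """Patch the small files via memory and the large files via
    temporary memory at the same time (illustrative sketch only)."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        mem_job = pool.submit(lambda: [patch_via_memory(f) for f in small_files])
        tmp_job = pool.submit(lambda: [patch_via_temp(f) for f in large_files])
        # both paths run concurrently; collect results when both finish
        return mem_job.result() + tmp_job.result()
```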
  • patching can be performed more rapidly and efficiently by maximizing the utilization of the resources of the patch client.
  • a patch file having a high capacity can be patched rapidly and reliably.
  • patching can be performed more rapidly using an optimized patch algorithm depending on the size of data to be patched.
  • patching can be performed in a resource-efficient and error-tolerant manner by modifying only the erroneous part, rather than the entire file to be patched, when an error occurs in the patch process.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Stored Programmes (AREA)

Abstract

A patch method is disclosed. The patch method is performed in a patch client. The patch client can be connected to a patch server and includes a storage device and a memory. The patch method includes the steps of: (a) accessing the patch server and receiving patch data from the patch server; (b) calculating the available space of the memory; (c) if the size of the patch data is equal to or smaller than the available space of the memory, performing program patching using the available space of the memory; and (d) if the size of the patch data is greater than the available space of the memory, allocating temporary memory, whose capacity corresponds to the size of the patch data, in the storage device, and performing program patching using the allocated temporary memory.
PCT/KR2012/006613 2011-12-30 2012-08-21 Patch method using a memory and a temporary memory, and patch server and patch client using the same WO2013100302A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020110147740A KR101246360B1 (ko) 2011-12-30 2011-12-30 Patch method using memory and temporary memory, and patch server and patch client using the same
KR10-2011-0147740 2011-12-30

Publications (1)

Publication Number Publication Date
WO2013100302A1 true WO2013100302A1 (fr) 2013-07-04

Family

ID=47728119

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2012/006613 WO2013100302A1 (fr) 2011-12-30 2012-08-21 Patch method using a memory and a temporary memory, and patch server and patch client using the same

Country Status (4)

Country Link
KR (1) KR101246360B1 (fr)
CN (1) CN102945170A (fr)
TW (1) TW201327168A (fr)
WO (1) WO2013100302A1 (fr)


Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002166304A (ja) * 2000-11-30 2002-06-11 Ngk Spark Plug Co Ltd スローアウェイバイト
CN107113307B (zh) 2015-11-18 2019-10-01 深圳市大疆创新科技有限公司 外接设备的管理方法、装置、系统以及存储器、无人机
CN107944021B (zh) * 2017-12-11 2021-06-18 北京奇虎科技有限公司 文件替换方法、装置及终端设备
CN111179913B (zh) * 2019-12-31 2022-10-21 深圳市瑞讯云技术有限公司 一种语音处理方法及装置
CN112788384A (zh) * 2021-02-07 2021-05-11 深圳市大鑫浪电子科技有限公司 无线数字电视投屏方法、装置、计算机设备及存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003150396A (ja) * 2001-11-12 2003-05-23 Casio Comput Co Ltd 情報処理装置及びパッチ処理方法
US20060130046A1 (en) * 2000-11-17 2006-06-15 O'neill Patrick J System and method for updating and distributing information
US20060136898A1 (en) * 2004-09-06 2006-06-22 Bosscha Albert J Method of providing patches for software
KR100670797B1 (ko) * 2004-12-17 2007-01-17 한국전자통신연구원 보조 저장장치가 없는 환경에서 실시간 패치 장치 및 그방법

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008198060A 2007-02-15 2008-08-28 Seiko Epson Corp Information processing apparatus, patch code implementation system, electronic device, and patch code implementation method
US20100131698A1 (en) * 2008-11-24 2010-05-27 Tsai Chien-Liang Memory sharing method for flash driver
CN102215479B (zh) * 2011-06-22 2018-03-13 中兴通讯股份有限公司 升级包下载及安装的方法、服务器及系统


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160380739A1 (en) * 2015-06-25 2016-12-29 Intel IP Corporation Patch download with improved acknowledge mechanism
US9780938B2 (en) * 2015-06-25 2017-10-03 Intel IP Corporation Patch download with improved acknowledge mechanism
US10153887B2 (en) 2015-06-25 2018-12-11 Intel IP Corporation Patch download with improved acknowledge mechanism
US11797288B2 (en) 2019-04-17 2023-10-24 Huawei Technologies Co., Ltd. Patching method, related apparatus, and system

Also Published As

Publication number Publication date
KR101246360B1 (ko) 2013-03-22
TW201327168A (zh) 2013-07-01
CN102945170A (zh) 2013-02-27

Similar Documents

Publication Publication Date Title
WO2013100302A1 (fr) Procédé de correction utilisant une mémoire et une mémoire temporaire et serveur et client de correctif l'utilisant
CN110297689B (zh) 智能合约执行方法、装置、设备及介质
US9639501B1 (en) Apparatus and methods to compress data in a network device and perform ternary content addressable memory (TCAM) processing
US20120296960A1 (en) Method and system for providing access to mainframe data objects in a heterogeneous computing environment
CN110399227B (zh) 一种数据访问方法、装置和存储介质
CN109379432A (zh) 数据处理方法、装置、服务器及计算机可读存储介质
CN104220987A (zh) 应用安装
CN111274288B (zh) 分布式检索方法、装置、系统、计算机设备及存储介质
CN112632069B (zh) 哈希表数据存储管理方法、装置、介质和电子设备
CN112636908B (zh) 密钥查询方法及装置、加密设备及存储介质
CN111708738A (zh) 实现hadoop文件系统hdfs与对象存储s3数据互访方法及系统
US10732904B2 (en) Method, system and computer program product for managing storage system
CN111984729A (zh) 异构数据库数据同步方法、装置、介质和电子设备
CN111176896A (zh) 文件备份方法、装置及终端设备
CN113806301A (zh) 数据同步方法、装置、服务器及存储介质
WO2014181946A1 (fr) Système et procédé d'extraction de données volumineuses
CN114422537B (zh) 多云存储系统、多云数据读写方法及电子设备
US9588884B2 (en) Systems and methods for in-place reorganization of device storage
CN110162395B (zh) 一种内存分配的方法及装置
CN106790521B (zh) 采用基于ftp的节点设备进行分布式组网的系统及方法
CN115203210A (zh) 哈希表处理方法、装置、设备及计算机可读存储介质
US9471409B2 (en) Processing of PDSE extended sharing violations among sysplexes with a shared DASD
CN110874344B (zh) 数据迁移方法、装置及电子设备
US11977636B2 (en) Storage transaction log
CN111722783A (zh) 数据存储方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12861578

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12861578

Country of ref document: EP

Kind code of ref document: A1