CN111580849A - File difference updating optimization method and related device - Google Patents
- Publication number
- CN111580849A (application CN202010386611.XA)
- Authority
- CN
- China
- Prior art keywords
- bytes
- memory
- control block
- difference
- cache memory
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/60—Software deployment
- G06F8/65—Updates
- G06F8/658—Incremental updates; Differential updates
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Security & Cryptography (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Memory System Of A Hierarchy Structure (AREA)
Abstract
The application discloses a method and a related device for optimizing file delta updating. The method comprises: traversing each control block in the differential packet, and determining a target read memory based on the number x_k of difference bytes in the k-th control block, wherein the total number of control blocks is q + 1, k is an integer, q is a positive integer, and 0 ≤ k ≤ q; reading x_k old bytes from the read position in the old file using the target read memory, and adding them to the x_k difference bytes in the k-th difference block to obtain x_k sum bytes; determining a target cache memory based on the number x_k of difference bytes and the number y_k of new bytes in the k-th control block; and caching the x_k sum bytes and the y_k new bytes of the k-th newly added block using the target cache memory. It can be seen that the maximum target read memory size is the maximum of the difference byte counts x_k, and the maximum target cache memory size is the maximum of the sums x_k + y_k, which solves the problem of memory overflow.
Description
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a method and a related apparatus for optimizing file delta update.
Background
With the development of intelligent connected vehicles, the amount of software code carried at the vehicle end has grown geometrically, and software-induced faults have risen with it, so software updating and upgrading have drawn wide attention. Over-the-Air (OTA) technology has been introduced into the intelligent connected vehicle field, and the process of making software upgrade packages has become a key node of OTA. As the data files of all kinds of software keep growing, reducing the time and traffic the vehicle end needs to download a software upgrade package, and reducing the bandwidth load and traffic consumption at the server end, have likewise become necessary considerations in this key node of the upgrade-package making process.
The bsdiff algorithm, an excellent binary difference algorithm open-sourced by Colin Percival, is widely applied in differential upgrade systems by virtue of its high compression rate and lossless restoration, so as to implement delta updates of files. Accordingly, to update and upgrade the various kinds of software at the vehicle end that carry the bspatch algorithm, the server runs the bsdiff algorithm on the old file and the new file to generate a differential packet, the vehicle end downloads the differential packet from the server, and the vehicle end runs the bspatch algorithm to combine the old file and the differential packet into the new file.
At present, when the vehicle end runs the bspatch algorithm to combine the old file and the differential packet into the new file, it must apply for a read memory large enough to read the whole old file at once and a cache memory large enough to cache the whole new file at once. However, the inventor found through research that, because of the memory allocation limits of the vehicle-end intelligent system (for example, software at the vehicle end carrying the bspatch algorithm is subject to an upper limit on memory usage), when the sum of the read memory for reading the whole old file at once and the cache memory for caching the whole new file at once exceeds that upper limit, the software carrying the bspatch algorithm at the vehicle end is killed due to memory overflow.
Disclosure of Invention
In view of this, the embodiments of the present application provide a method and a related apparatus for optimizing file difference updates, which greatly reduce memory usage, avoid exceeding the upper limit on memory usage, and fundamentally solve the problem that software carrying the bspatch algorithm at the vehicle end is killed due to memory overflow.
In a first aspect, an embodiment of the present application provides a method for optimizing file delta update, where the method includes:
traversing each control block in the differential packet, and determining a target read memory based on the number x_k of difference bytes in the k-th control block, wherein the total number of control blocks is q + 1, k is an integer, q is a positive integer, and 0 ≤ k ≤ q;

reading x_k old bytes from the read position in the old file using the target read memory, and adding them to the x_k difference bytes in the k-th difference block to obtain x_k sum bytes;

determining a target cache memory based on the number x_k of difference bytes and the number y_k of new bytes in the k-th control block;

caching the x_k sum bytes and the y_k new bytes of the k-th newly added block using the target cache memory.
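The four steps above can be sketched end to end in Python. All identifiers, the in-memory representation of the packet, and the modulo-256 addition are illustrative assumptions rather than the patent's actual implementation; for simplicity this sketch also slices the old data directly instead of modeling the incremental read memory:

```python
# Hypothetical sketch of the per-block merge described in the first aspect.
# control_blocks[k] carries (x_k, y_k, z_k); diff_blocks[k] holds x_k
# difference bytes and extra_blocks[k] holds y_k newly added bytes.
def apply_patch(old_data, control_blocks, diff_blocks, extra_blocks):
    new_parts = []
    read_pos = 0
    for k, (x_k, y_k, z_k) in enumerate(control_blocks):
        # "Target read memory": only x_k old bytes at a time, never the whole file.
        old_bytes = old_data[read_pos:read_pos + x_k]
        # Add the old bytes to the x_k difference bytes (wrap-around at 256 is
        # assumed here, as in classic bspatch).
        summed = bytes((o + d) & 0xFF for o, d in zip(old_bytes, diff_blocks[k]))
        # "Target cache memory": cache x_k sum bytes plus y_k new bytes.
        new_parts.append(summed + extra_blocks[k])
        # Advance past the bytes just read, then shift by z_k for the next block.
        read_pos += x_k + z_k
    return b"".join(new_parts)
```

A usage example: with an old file of three bytes, two control blocks, and one newly added byte, the loop reconstructs the new file block by block without ever holding more than one block's worth of working data.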
Optionally, when k = 0, determining a target read memory based on the number x_k of difference bytes in the k-th control block is specifically:

based on the number x_k of difference bytes in the k-th control block, applying for a read memory of size x_k as the target read memory;

correspondingly, determining a target cache memory based on the number x_k of difference bytes and the number y_k of new bytes in the k-th control block is specifically:

based on the number x_k of difference bytes and the number y_k of new bytes in the k-th control block, applying for a cache memory of size x_k + y_k as the target cache memory.
Optionally, when k > 0, determining a target read memory based on the number x_k of difference bytes in the k-th control block comprises:

if the number x_k of difference bytes in the k-th control block is less than or equal to the size a of the applied read memory, determining the applied read memory as the target read memory;

if the number x_k of difference bytes in the k-th control block is greater than the size a of the applied read memory, determining the target read memory based on the number x_k of difference bytes in the k-th control block and the size a of the applied read memory;

correspondingly, determining a target cache memory based on the number x_k of difference bytes and the number y_k of new bytes in the k-th control block comprises:

if the sum x_k + y_k of the number x_k of difference bytes and the number y_k of new bytes in the k-th control block is less than or equal to the size b of the applied cache memory, determining the applied cache memory as the target cache memory;

if the sum x_k + y_k of the number x_k of difference bytes and the number y_k of new bytes in the k-th control block is greater than the size b of the applied cache memory, determining the target cache memory based on the sum x_k + y_k and the size b of the applied cache memory.
Optionally, determining the target read memory based on the number x_k of difference bytes in the k-th control block and the size a of the applied read memory comprises:

based on the number x_k of difference bytes in the k-th control block and the size a of the applied read memory, applying for a read memory of size x_k - a;

merging the applied read memory and the read memory of size x_k - a, and determining the merged memory as the target read memory.
Optionally, determining the target cache memory based on the sum x_k + y_k of the number x_k of difference bytes and the number y_k of new bytes in the k-th control block and the size b of the applied cache memory comprises:

based on the sum x_k + y_k and the size b of the applied cache memory, applying for a cache memory of size (x_k + y_k) - b;

merging the applied cache memory and the cache memory of size (x_k + y_k) - b, and determining the merged memory as the target cache memory.
Optionally, the method further includes:
if said xkA sum byte and y in the k-th newly added blockkCompleting the caching of the newly added bytes in the target cache memory, and storing the xkA sum byte and y in the k-th newly added blockkThe new byte is written into the new file.
Optionally, the method further includes:
if said xkA sum byte and y in the k-th newly added blockkThe newly added bytes are cached in the target cache memory, and the reading position is shifted forwards by the shift byte number z in the k control blockk。
In a second aspect, an embodiment of the present application provides an apparatus for optimizing file delta update, where the apparatus includes:
a first determining unit, configured to traverse each control block in the differential packet and determine a target read memory based on the number x_k of difference bytes in the k-th control block, wherein the total number of control blocks is q + 1, k is an integer, q is a positive integer, and 0 ≤ k ≤ q;

a reading and obtaining unit, configured to read x_k old bytes from the read position in the old file using the target read memory and add them to the x_k difference bytes in the k-th difference block to obtain x_k sum bytes;

a second determining unit, configured to determine a target cache memory based on the number x_k of difference bytes and the number y_k of new bytes in the k-th control block;

a caching unit, configured to cache the x_k sum bytes and the y_k new bytes of the k-th newly added block using the target cache memory.
Optionally, when k = 0, the first determining unit is specifically configured to:

based on the number x_k of difference bytes in the k-th control block, apply for a read memory of size x_k as the target read memory;

correspondingly, the second determining unit is specifically configured to:

based on the number x_k of difference bytes and the number y_k of new bytes in the k-th control block, apply for a cache memory of size x_k + y_k as the target cache memory.
Optionally, when k > 0, the first determining unit comprises:

a first determining subunit, configured to, if the number x_k of difference bytes in the k-th control block is less than or equal to the size a of the applied read memory, determine the applied read memory as the target read memory;

a second determining subunit, configured to, if the number x_k of difference bytes in the k-th control block is greater than the size a of the applied read memory, determine the target read memory based on the number x_k of difference bytes in the k-th control block and the size a of the applied read memory;

correspondingly, the second determining unit comprises:

a third determining subunit, configured to, if the sum x_k + y_k of the number x_k of difference bytes and the number y_k of new bytes in the k-th control block is less than or equal to the size b of the applied cache memory, determine the applied cache memory as the target cache memory;

a fourth determining subunit, configured to, if the sum x_k + y_k of the number x_k of difference bytes and the number y_k of new bytes in the k-th control block is greater than the size b of the applied cache memory, determine the target cache memory based on the sum x_k + y_k and the size b of the applied cache memory.
Optionally, the second determining subunit includes:
a first application module, configured to apply for a read memory of size x_k - a based on the number x_k of difference bytes in the k-th control block and the size a of the applied read memory;

a first determining module, configured to merge the applied read memory and the read memory of size x_k - a and determine the merged memory as the target read memory.
Optionally, the fourth determining subunit includes:
a second application module, configured to apply for a cache memory of size (x_k + y_k) - b based on the sum x_k + y_k of the number x_k of difference bytes and the number y_k of new bytes in the k-th control block and the size b of the applied cache memory;

a second determining module, configured to merge the applied cache memory and the cache memory of size (x_k + y_k) - b and determine the merged memory as the target cache memory.
Optionally, the apparatus further comprises:
an offset unit, configured to, if the x_k sum bytes and the y_k new bytes in the k-th newly added block have been cached in the target cache memory, shift the read position forward by the number z_k of offset bytes in the k-th control block.
Optionally, the apparatus further comprises:
a writing unit, configured to, if the x_k sum bytes and the y_k new bytes in the k-th newly added block have been cached in the target cache memory, write the x_k sum bytes and the y_k new bytes in the k-th newly added block into the new file.
In a third aspect, an embodiment of the present application provides a terminal device, where the terminal device includes a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to execute the method for optimizing file delta update according to any one of the first aspect above according to instructions in the program code.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium for storing a program code for executing the method for optimizing the delta update of a file according to any one of the above first aspects.
Compared with the prior art, the present application has the following advantages:

With the technical solution of the embodiments of the present application, each control block in the differential packet is traversed, and a target read memory is determined based on the number x_k of difference bytes in the k-th control block, wherein the total number of control blocks is q + 1, k is an integer, q is a positive integer, and 0 ≤ k ≤ q; x_k old bytes are read from the read position in the old file using the target read memory and added to the x_k difference bytes in the k-th difference block to obtain x_k sum bytes; a target cache memory is determined based on the number x_k of difference bytes and the number y_k of new bytes in the k-th control block; and the x_k sum bytes and the y_k new bytes of the k-th newly added block are cached using the target cache memory. It can be seen that the maximum target read memory size is the maximum of the difference byte counts x_k, far smaller than a read memory for reading the whole old file at once, and the maximum target cache memory size is the maximum of the sums x_k + y_k, far smaller than a cache memory for caching the whole new file at once. This greatly reduces memory usage, avoids exceeding the upper limit on memory usage, and fundamentally solves the problem that software carrying the bspatch algorithm at the vehicle end is killed due to memory overflow.
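To make the saving concrete, here is a small worked comparison in Python. All sizes are invented for illustration only: the naive approach holds the whole old and new files in memory at once, while the per-block approach never needs more than the largest x_k for reading plus the largest x_k + y_k for caching:

```python
# Illustrative peak-memory comparison (all byte counts are made-up examples).
old_size, new_size = 50_000_000, 52_000_000
# Per-block (x_k, y_k) pairs taken from a hypothetical differential packet.
blocks = [(1_000_000, 200_000), (800_000, 50_000), (1_200_000, 0)]

# Prior art: whole old file in read memory plus whole new file in cache memory.
naive_peak = old_size + new_size
# This application: read memory grows only to max(x_k), cache memory only to
# max(x_k + y_k), since both buffers are reused across control blocks.
optimized_peak = max(x for x, _ in blocks) + max(x + y for x, y in blocks)

assert optimized_peak < naive_peak
```

With these example numbers the optimized peak is 2.4 MB versus 102 MB for the naive approach, which is the effect the abstract describes.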
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments described in the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a system framework related to an application scenario in an embodiment of the present application;
FIG. 2 is a flowchart illustrating a method for optimizing file delta update according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of an apparatus for optimizing file delta update according to an embodiment of the present disclosure.
Detailed Description
To make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings in the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by a person of ordinary skill in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
At the current stage, differential upgrading of software carrying the bspatch algorithm at the vehicle end means that the server runs the bsdiff algorithm on the old file and the new file to generate a differential packet, and the vehicle end downloads the differential packet from the server and runs the bspatch algorithm to combine the old file and the differential packet into the new file. In the process of running the bspatch algorithm to combine the old file and the differential packet into the new file, a read memory for reading the whole old file at once and a cache memory for caching the whole new file at once must be applied for. However, because of the memory allocation limits of the vehicle-end intelligent system (for example, software at the vehicle end carrying the bspatch algorithm is subject to an upper limit on memory usage), when the sum of the read memory for reading the whole old file at once and the cache memory for caching the whole new file at once exceeds that upper limit, the software at the vehicle end is killed due to memory overflow.
To solve this problem, in the embodiments of the present application, each control block in the differential packet is traversed, and a target read memory is determined based on the number x_k of difference bytes in the k-th control block, wherein the total number of control blocks is q + 1, k is an integer, q is a positive integer, and 0 ≤ k ≤ q; x_k old bytes are read from the read position in the old file using the target read memory and added to the x_k difference bytes in the k-th difference block to obtain x_k sum bytes; a target cache memory is determined based on the number x_k of difference bytes and the number y_k of new bytes in the k-th control block; and the x_k sum bytes and the y_k new bytes of the k-th newly added block are cached using the target cache memory. It can be seen that the maximum target read memory size is the maximum of the difference byte counts x_k, far smaller than a read memory for reading the whole old file at once; the maximum target cache memory size is the maximum of the sums x_k + y_k, far smaller than a cache memory for caching the whole new file at once. This greatly reduces memory usage, avoids exceeding the upper limit on memory usage, and fundamentally solves the problem that software carrying the bspatch algorithm at the vehicle end is killed due to memory overflow.
For example, one scenario of the embodiments of the present application may be the scenario shown in Fig. 1, which includes a server 101 and a vehicle end 102. The server 101 runs the bsdiff algorithm on an old file and a new file to generate a differential packet; the vehicle end 102 downloads the differential packet from the server 101 and runs the bspatch algorithm, using the implementation described in the embodiments of the present application, to combine the old file and the differential packet into the new file.
It is to be understood that, in the above application scenario, although the actions of the embodiment of the present application are described as being performed by the vehicle end 102, the present application is not limited in terms of the execution subject as long as the actions disclosed in the embodiment of the present application are performed.
It is to be understood that the above scenario is only one example of a scenario provided in the embodiment of the present application, and the embodiment of the present application is not limited to this scenario.
The following describes in detail a specific implementation manner of the method for optimizing file delta update and a related apparatus in the embodiments of the present application by using embodiments with reference to the accompanying drawings.
Exemplary method
Referring to fig. 2, a flowchart of a method for optimizing file delta update in an embodiment of the present application is shown. In this embodiment, the method may include, for example, the steps of:
step 201: traversing each control block in the differential packet based on the number x of differential bytes in the kth control blockkDetermining target readsThe total number of each control block is q +1, k is an integer, q is a positive integer, and k is greater than or equal to 0 and is less than or equal to q.
The differential packet downloaded by the vehicle end from the server comprises q + 1 control blocks, q + 1 difference blocks, and q + 1 newly added blocks. Each control block consists of x, y, and z, where x represents the number of old bytes that differ from the new file and need to be read from the old file, that is, the number of difference bytes included in the difference string of the difference block corresponding to the control block; y represents the number of new bytes included in the newly added string of the newly added block corresponding to the control block; and z represents the number of bytes by which the read position needs to be shifted forward before the next read of old bytes that differ from the new file. For the k-th control block, it includes the difference byte count x_k, the new byte count y_k, and the offset byte count z_k; the difference string in the corresponding k-th difference block includes x_k difference bytes; and the newly added string in the corresponding k-th newly added block includes y_k new bytes.
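To make the layout just described concrete, here is a minimal Python sketch of a differential packet. The patent does not specify an encoding, so the field names, the dictionary layout, and the example byte values are all assumptions for illustration:

```python
from dataclasses import dataclass

# Illustrative only: field names and layout are assumed, not the patent's.
@dataclass
class ControlBlock:
    x: int  # difference bytes to read from the old file for this block
    y: int  # new bytes contributed by the corresponding newly added block
    z: int  # offset applied to the old-file read position afterwards

# A hypothetical differential packet with q + 1 = 2 control blocks and their
# matching difference blocks and newly added blocks.
packet = {
    "control": [ControlBlock(x=4, y=1, z=0), ControlBlock(x=2, y=0, z=3)],
    "diff":    [b"\x00\x01\x00\x01", b"\x02\x02"],  # x_k bytes each
    "extra":   [b"\xAA", b""],                      # y_k bytes each
}
```

Note the invariants this structure implies: the k-th difference block holds exactly x_k bytes, and the k-th newly added block holds exactly y_k bytes.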
In the prior art, combining the old file and the differential packet into the new file requires reading the whole old file at once into a read memory, so a read memory the size of the whole old file must be applied for before traversing all control blocks to read, in turn, the old bytes that differ from the new file; this consumes a large amount of memory. Therefore, in the embodiments of the present application, following the principle of reading only as much as is processed for each control block, there is no need to apply for a read memory the size of the whole old file after opening it. For the k-th control block in the traversal, the target read memory is first determined based on the number x_k of difference bytes in the k-th control block; that is, the size of the target read memory is the difference byte count x_k.
In practical application, for the first control block in the traversal, that is, the 0-th control block when k = 0, no read memory has yet been applied for, and a read memory of size x_0 can be applied for directly, based on the difference byte count x_0 in the 0-th control block, as the target read memory. For a non-first control block in the traversal, that is, the k-th control block when k > 0, an applied read memory already exists. If the difference byte count x_k in the k-th control block is less than or equal to the size a of the applied read memory, the applied read memory has enough space and can be determined as the target read memory; if x_k is greater than the size a of the applied read memory, the applied read memory has insufficient space, and the target read memory must be determined based on x_k and the size a of the applied read memory.
That is, in an optional implementation of the embodiments of the present application, when k = 0, step 201 may specifically be: based on the difference byte count x_k in the k-th control block, applying for a read memory of size x_k as the target read memory. When k > 0, step 201 may, for example, comprise: if the difference byte count x_k in the k-th control block is less than or equal to the size a of the applied read memory, determining the applied read memory as the target read memory; if x_k is greater than the size a of the applied read memory, determining the target read memory based on x_k and the size a of the applied read memory.
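A rough Python sketch of the grow-only read-memory policy just described. The function name, the `bytearray` representation, and merging by concatenation are my assumptions, not the patent's implementation:

```python
def ensure_read_buffer(buf, x_k):
    """Return a read memory of at least x_k bytes, reusing buf if possible.
    Illustrative sketch; buffer-management details are assumed."""
    if buf is None:
        # k == 0 case: apply for an x_k-size read memory directly.
        return bytearray(x_k)
    a = len(buf)
    if x_k <= a:
        # Applied read memory has enough space; reuse it as the target.
        return buf
    # Apply for an (x_k - a)-size read memory and merge with the existing one.
    return buf + bytearray(x_k - a)
```

Used across a traversal, this buffer only ever grows to the largest x_k seen, which is the source of the memory saving the summary claims.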
It should be noted that when the difference byte count x_k in the k-th control block is greater than the size a of the applied read memory, the insufficiency means that, on top of the applied read memory, a further x_k - a bytes of read memory are lacking; a read memory of size x_k - a therefore needs to be applied for, and the applied read memory merged with the read memory of size x_k - a can serve as the target read memory. Therefore, in an optional implementation of the embodiments of the present application, the step of determining the target read memory based on the difference byte count x_k in the k-th control block and the size a of the applied read memory may, for example, comprise the following steps:

Step A: based on the difference byte count x_k in the k-th control block and the size a of the applied read memory, applying for a read memory of size x_k - a;

Step B: merging the applied read memory and the read memory of size x_k - a, and determining the merged memory as the target read memory.
Step 202: reading x_k old bytes from the read position in the old file using the target read memory, and adding them to the x_k difference bytes in the k-th difference block to obtain x_k sum bytes.

It can be understood that after the target read memory is determined in step 201, to satisfy the above principle of reading only as much as is processed for each control block, x_k old bytes must be read from the read position in the old file using the target read memory; the x_k old bytes read are then added to the x_k difference bytes in the k-th difference block to obtain x_k sum bytes.
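The per-byte addition in step 202 can be sketched as follows. Classic bspatch adds old and difference bytes modulo 256; the patent text does not spell out the overflow behavior, so that detail, along with the function name, is an assumption here:

```python
def add_bytes(old_bytes, diff_bytes):
    """Sum each old byte with its difference byte to produce the sum bytes.
    Wrap-around at 256 is assumed, matching classic bspatch behavior."""
    assert len(old_bytes) == len(diff_bytes)
    return bytes((o + d) & 0xFF for o, d in zip(old_bytes, diff_bytes))
```

For example, `add_bytes(b"\x10\xff", b"\x01\x02")` yields `b"\x11\x01"`: the second byte overflows past 255 and wraps around.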
Step 203: determining a target cache memory based on the difference byte count x_k and the new byte count y_k in the k-th control block.

In the prior art, combining the old file and the differential packet into the new file requires caching the whole new file at once in a cache memory, so a cache memory the size of the whole new file must be applied for; all control blocks are then traversed to read, in turn, the old bytes that differ from the new file, the difference bytes included in every difference string, and the new bytes included in every newly added string, and only after the merge operation is complete can the whole new file be cached to the cache memory at once, which consumes a large amount of memory. Therefore, in the embodiments of the present application, following the principle of caching only as much as is obtained for each control block, the cache operation can be performed as soon as the merge for one control block is complete, that is, as soon as the old bytes that differ from the new file, the difference bytes in the corresponding difference block, and the new bytes in the corresponding newly added block have been processed for that control block. In other words, for the k-th control block in the traversal, the size of the corresponding target cache memory is determined by the difference byte count x_k and the new byte count y_k in the k-th control block.
In practical application, for the first control block in the traversal, that is, the 0-th control block when k = 0, no cache memory has yet been applied for, and a cache memory of size x_0 + y_0 can be applied for directly, based on the difference byte count x_0 and the new byte count y_0 in the 0-th control block, as the target cache memory. For a non-first control block in the traversal, that is, the k-th control block when k > 0, an applied cache memory already exists. If the sum x_k + y_k of the difference byte count x_k and the new byte count y_k in the k-th control block is less than or equal to the size b of the applied cache memory, the applied cache memory has enough space and can be determined as the target cache memory; if x_k + y_k is greater than the size b of the applied cache memory, the applied cache memory has insufficient space, and the target cache memory must be determined based on x_k + y_k and the size b of the applied cache memory.
That is, in an optional implementation manner of the embodiment of the present application, when k = 0, step 203 may specifically be: based on the number of difference bytes x_k and the number of new bytes y_k in the k-th control block, applying for a cache memory of size x_k + y_k as the target cache memory. When k > 0, step 203 may, for example, include: if the sum x_k + y_k of the number of difference bytes x_k and the number of new bytes y_k in the k-th control block is smaller than or equal to the size b of the applied cache memory, determining the applied cache memory as the target cache memory; if x_k + y_k is larger than the size b of the applied cache memory, determining the target cache memory based on x_k + y_k and the size b of the applied cache memory.
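The sizing rule above can be sketched as follows. This is a minimal illustration; the function name and signature are assumptions for exposition, not part of the patent:

```python
def target_cache_size(k: int, xk: int, yk: int, applied_size: int) -> int:
    """Return the target cache memory size for the k-th control block.

    Illustrative sketch of the rule described above: for the first
    block (k == 0) a cache of x_0 + y_0 is applied for; afterwards the
    already-applied cache is reused if it is large enough, and grown
    to x_k + y_k otherwise.
    """
    needed = xk + yk
    if k == 0:
        return needed            # apply for x_0 + y_0 bytes
    if needed <= applied_size:
        return applied_size      # reuse the already-applied cache
    return needed                # grow by (x_k + y_k) - b extra bytes
```

Because the cache only ever grows to the largest x_k + y_k seen so far, its peak size is bounded by max(x_k + y_k) rather than by the size of the whole new file.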
Wherein, if the sum x_k + y_k of the number of difference bytes x_k and the number of new bytes y_k in the k-th control block is larger than the size b of the applied cache memory, the insufficient space means that, relative to the applied cache memory, a cache memory of size (x_k + y_k) − b is missing; a cache memory of size (x_k + y_k) − b therefore needs to be applied for and merged with the applied cache memory to serve as the target cache memory. Accordingly, in an optional implementation manner of the embodiment of the present application, the step of determining the target cache memory based on the sum x_k + y_k and the size b of the applied cache memory may include the following steps:
Step C: based on the sum x_k + y_k of the number of difference bytes x_k and the number of new bytes y_k in the k-th control block and the size b of the applied cache memory, applying for a cache memory of size (x_k + y_k) − b;
Step D: merging the applied cache memory with the cache memory of size (x_k + y_k) − b and determining the result as the target cache memory.
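Steps C and D amount to a realloc-style extension of the existing buffer. A sketch, with illustrative names (the patent does not prescribe this API):

```python
def grow_cache(cache: bytearray, xk: int, yk: int) -> bytearray:
    """Grow an applied cache in place when x_k + y_k exceeds its size b.

    Mirrors steps C and D above: the missing (x_k + y_k) - b bytes are
    "applied for" and merged with the existing cache; if the cache is
    already large enough it is left untouched.
    """
    b = len(cache)
    needed = xk + yk
    if needed > b:
        cache.extend(bytes(needed - b))  # apply for (x_k + y_k) - b more bytes
    return cache
```

In a C implementation the same effect would typically be achieved with a single `realloc` call rather than allocating and copying a separate buffer.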
Step 204: caching the x_k sum bytes and the y_k new bytes in the k-th newly added block by using the target cache memory.
It can be understood that, after the target cache memory is determined in step 203, in order to satisfy the per-control-block "cache only as much as is needed" principle, the x_k sum bytes obtained in step 202 and the y_k new bytes in the k-th newly added block need to be cached in the target cache memory.
Of course, it should be noted that completing, in the target cache memory, the caching of the x_k sum bytes and the y_k new bytes in the k-th newly added block means that the packet combining operation for the k-th control block is finished: after the target read memory is determined based on the number of difference bytes x_k in the k-th control block, x_k old bytes are read from the read position in the old file by using the target read memory and summed with the x_k difference bytes in the k-th difference block to obtain the x_k sum bytes, which, together with the y_k new bytes in the k-th newly added block, complete the packet combining operation. At this point, the x_k sum bytes and the y_k new bytes in the k-th newly added block can be written into the new file immediately, without waiting for the other packet combining operations to finish. Therefore, in an optional implementation manner of the embodiment of the present application, the method may further include step F: if the caching of the x_k sum bytes and the y_k new bytes in the k-th newly added block in the target cache memory is completed, writing the x_k sum bytes and the y_k new bytes in the k-th newly added block into the new file.
It should be noted that, since the offset byte number z_k in the k-th control block indicates the number of bytes by which the read position needs to be shifted forward before the old bytes that differ from the new file are read from the old file for the (k + 1)-th control block in the traversal, the read position in the old file needs to be shifted forward by the offset byte number z_k in the k-th control block after the caching of the x_k sum bytes and the y_k new bytes in the k-th newly added block is completed. Therefore, in an optional implementation manner of the embodiment of the present application, the method may further include step E: if the caching of the x_k sum bytes and the y_k new bytes in the k-th newly added block in the target cache memory is completed, shifting the read position forward by the offset byte number z_k in the k-th control block.
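The per-block loop described by steps 201-204, E, and F can be sketched end to end as follows. The representation of a control block as an (x_k, y_k, z_k) tuple, the modulo-256 byte addition, and the convention that the read position first advances past the x_k consumed bytes and is then shifted by z_k all follow the bspatch algorithm the patent builds on; the function and variable names are assumptions, not the patent's API:

```python
def apply_patch(old: bytes, control: list, diff: bytes, extra: bytes) -> bytes:
    """Sketch of the per-control-block merge loop described above.

    `control` is assumed to be a list of (x_k, y_k, z_k) tuples; `diff`
    and `extra` hold the concatenated difference strings and newly added
    strings of the differential packet.
    """
    new = bytearray()
    read_pos = dp = ep = 0
    for xk, yk, zk in control:                   # traverse each control block
        old_bytes = old[read_pos:read_pos + xk]  # read x_k old bytes
        diff_bytes = diff[dp:dp + xk]
        # sum each old byte with its difference byte (modulo 256, as in bspatch)
        summed = bytes((o + d) % 256 for o, d in zip(old_bytes, diff_bytes))
        new += summed                            # write the x_k sum bytes (step F)
        new += extra[ep:ep + yk]                 # write the y_k new bytes (step F)
        read_pos += xk + zk                      # advance past x_k, then shift by z_k (step E)
        dp += xk
        ep += yk
    return bytes(new)
```

Only x_k bytes of the old file and x_k + y_k bytes of output are held per iteration, which is the memory bound the embodiment claims.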
Through the various implementation manners provided by this embodiment, each control block in the differential packet is traversed; the target read memory is determined based on the number of difference bytes x_k in the k-th control block, wherein the total number of control blocks is q + 1, k is an integer, q is a positive integer, and 0 ≤ k ≤ q; x_k old bytes are read from the read position in the old file by using the target read memory and summed with the x_k difference bytes in the k-th difference block to obtain x_k sum bytes; the target cache memory is determined based on the number of difference bytes x_k and the number of new bytes y_k in the k-th control block; and the x_k sum bytes and the y_k new bytes in the k-th newly added block are cached by using the target cache memory. It can be seen that the maximum size of the target read memory is the maximum value of the number of difference bytes x_k, which is far smaller than a read memory that reads the whole old file at one time, and the maximum size of the target cache memory is the maximum value of the sum x_k + y_k of the number of difference bytes x_k and the number of new bytes y_k, which is far smaller than a cache memory that caches the whole new file at one time. The method greatly reduces memory usage, avoids exceeding the upper limit of memory usage, and fundamentally solves the problem that software in the vehicle end carrying the bspatch algorithm is killed due to memory overflow.
Exemplary devices
Referring to fig. 3, a schematic structural diagram of an apparatus for optimizing file delta update in an embodiment of the present application is shown. In this embodiment, the apparatus may specifically include:
a first determining unit 301, configured to traverse each control block in the differential packet and determine a target read memory based on the number of difference bytes x_k in the k-th control block, wherein the total number of control blocks is q + 1, k is an integer, q is a positive integer, and 0 ≤ k ≤ q;
a read obtaining unit 302, configured to read x_k old bytes starting from the read position in the old file by using the target read memory and sum them with the x_k difference bytes in the k-th difference block to obtain x_k sum bytes;
a second determining unit 303, configured to determine a target cache memory based on the number of difference bytes x_k and the number of new bytes y_k in the k-th control block;
a cache unit 304, configured to cache the x_k sum bytes and the y_k new bytes in the k-th newly added block by using the target cache memory.
In an optional implementation manner of the embodiment of the present application, when k = 0, the first determining unit 301 is specifically configured to:
apply, based on the number of difference bytes x_k in the k-th control block, for a read memory of size x_k as the target read memory;
correspondingly, the second determining unit 303 is specifically configured to:
apply, based on the number of difference bytes x_k and the number of new bytes y_k in the k-th control block, for a cache memory of size x_k + y_k as the target cache memory.
In an optional implementation manner of the embodiment of the present application, when k > 0, the first determining unit 301 includes:
a first determining subunit, configured to determine the applied read memory as the target read memory if the number of difference bytes x_k in the k-th control block is smaller than or equal to the size a of the applied read memory;
a second determining subunit, configured to determine the target read memory based on the number of difference bytes x_k in the k-th control block and the size a of the applied read memory if x_k is larger than the size a of the applied read memory;
correspondingly, the second determining unit 303 includes:
a third determining subunit, configured to determine the applied cache memory as the target cache memory if the sum x_k + y_k of the number of difference bytes x_k and the number of new bytes y_k in the k-th control block is smaller than or equal to the size b of the applied cache memory;
a fourth determining subunit, configured to determine the target cache memory based on the sum x_k + y_k and the size b of the applied cache memory if x_k + y_k is larger than the size b of the applied cache memory.
In an optional implementation manner of the embodiment of the present application, the second determining subunit includes:
a first application module, configured to apply, based on the number of difference bytes x_k in the k-th control block and the size a of the applied read memory, for a read memory of size x_k − a;
a first determining module, configured to merge the applied read memory with the read memory of size x_k − a and determine the result as the target read memory.
In an optional implementation manner of the embodiment of the present application, the fourth determining subunit includes:
a second application module, configured to apply, based on the sum x_k + y_k of the number of difference bytes x_k and the number of new bytes y_k in the k-th control block and the size b of the applied cache memory, for a cache memory of size (x_k + y_k) − b;
a second determining module, configured to merge the applied cache memory with the cache memory of size (x_k + y_k) − b and determine the result as the target cache memory.
In an optional implementation manner of the embodiment of the present application, the apparatus further includes:
an offset unit, configured to shift the read position forward by the offset byte number z_k in the k-th control block if the caching of the x_k sum bytes and the y_k new bytes in the k-th newly added block in the target cache memory is completed.
In an optional implementation manner of the embodiment of the present application, the apparatus further includes:
a write unit, configured to write the x_k sum bytes and the y_k new bytes in the k-th newly added block into the new file if the caching of the x_k sum bytes and the y_k new bytes in the k-th newly added block in the target cache memory is completed.
Through the various implementation manners provided by this embodiment, each control block in the differential packet is traversed; the target read memory is determined based on the number of difference bytes x_k in the k-th control block, wherein the total number of control blocks is q + 1, k is an integer, q is a positive integer, and 0 ≤ k ≤ q; x_k old bytes are read from the read position in the old file by using the target read memory and summed with the x_k difference bytes in the k-th difference block to obtain x_k sum bytes; the target cache memory is determined based on the number of difference bytes x_k and the number of new bytes y_k in the k-th control block; and the x_k sum bytes and the y_k new bytes in the k-th newly added block are cached by using the target cache memory. It can be seen that the maximum size of the target read memory is the maximum value of the number of difference bytes x_k, which is far smaller than a read memory that reads the whole old file at one time, and the maximum size of the target cache memory is the maximum value of the sum x_k + y_k of the number of difference bytes x_k and the number of new bytes y_k, which is far smaller than a cache memory that caches the whole new file at one time. The apparatus greatly reduces memory usage, avoids exceeding the upper limit of memory usage, and fundamentally solves the problem that software in the vehicle end carrying the bspatch algorithm is killed due to memory overflow.
In addition, an embodiment of the present application further provides a terminal device, where the terminal device includes a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to execute the method for optimizing file delta update according to the method embodiments according to the instructions in the program code.
The embodiment of the present application further provides a computer-readable storage medium, where the computer-readable storage medium is configured to store program code for executing the method for optimizing file delta updates according to the above method embodiments.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing is merely a preferred embodiment of the present application and is not intended to limit the present application in any way. Although the present application has been disclosed above with reference to preferred embodiments, those skilled in the art can make numerous possible variations and modifications to the disclosed embodiments, or modify them into equivalent embodiments, using the methods and technical content disclosed above, without departing from the scope of the claimed technical solution. Therefore, any simple modification, equivalent change, or refinement made to the above embodiments according to the technical essence of the present application still falls within the protection scope of the technical solution of the present application.
Claims (10)
1. A method for optimizing file delta updates, comprising:
traversing each control block in a differential packet and determining a target read memory based on the number of difference bytes x_k in the k-th control block, wherein the total number of control blocks is q + 1, k is an integer, q is a positive integer, and 0 ≤ k ≤ q;
reading x_k old bytes starting from a read position in an old file by using the target read memory, and summing them with the x_k difference bytes in the k-th difference block to obtain x_k sum bytes;
determining a target cache memory based on the number of difference bytes x_k and the number of new bytes y_k in the k-th control block;
caching the x_k sum bytes and the y_k new bytes in the k-th newly added block by using the target cache memory.
2. The method of claim 1, wherein when k = 0, the determining a target read memory based on the number of difference bytes x_k in the k-th control block is specifically:
applying, based on the number of difference bytes x_k in the k-th control block, for a read memory of size x_k as the target read memory;
correspondingly, the determining a target cache memory based on the number of difference bytes x_k and the number of new bytes y_k in the k-th control block is specifically:
applying, based on the number of difference bytes x_k and the number of new bytes y_k in the k-th control block, for a cache memory of size x_k + y_k as the target cache memory.
3. The method of claim 2, wherein when k > 0, the determining a target read memory based on the number of difference bytes x_k in the k-th control block comprises:
if the number of difference bytes x_k in the k-th control block is smaller than or equal to the size a of the applied read memory, determining the applied read memory as the target read memory;
if the number of difference bytes x_k in the k-th control block is larger than the size a of the applied read memory, determining the target read memory based on x_k and the size a of the applied read memory;
correspondingly, the determining a target cache memory based on the number of difference bytes x_k and the number of new bytes y_k in the k-th control block comprises:
if the sum x_k + y_k of the number of difference bytes x_k and the number of new bytes y_k in the k-th control block is smaller than or equal to the size b of the applied cache memory, determining the applied cache memory as the target cache memory;
if the sum x_k + y_k is larger than the size b of the applied cache memory, determining the target cache memory based on x_k + y_k and the size b of the applied cache memory.
4. The method of claim 3, wherein the determining the target read memory based on the number of difference bytes x_k in the k-th control block and the size a of the applied read memory comprises:
applying, based on the number of difference bytes x_k in the k-th control block and the size a of the applied read memory, for a read memory of size x_k − a;
merging the applied read memory with the read memory of size x_k − a and determining the result as the target read memory.
5. The method of claim 3, wherein the determining the target cache memory based on the sum x_k + y_k of the number of difference bytes x_k and the number of new bytes y_k in the k-th control block and the size b of the applied cache memory comprises:
applying, based on the sum x_k + y_k and the size b of the applied cache memory, for a cache memory of size (x_k + y_k) − b;
merging the applied cache memory with the cache memory of size (x_k + y_k) − b and determining the result as the target cache memory.
6. The method of claim 1, further comprising:
if the caching of the x_k sum bytes and the y_k new bytes in the k-th newly added block in the target cache memory is completed, writing the x_k sum bytes and the y_k new bytes in the k-th newly added block into the new file.
7. The method of claim 1, further comprising:
if the caching of the x_k sum bytes and the y_k new bytes in the k-th newly added block in the target cache memory is completed, shifting the read position forward by the offset byte number z_k in the k-th control block.
8. An apparatus for file delta update optimization, comprising:
a first determining unit, configured to traverse each control block in a differential packet and determine a target read memory based on the number of difference bytes x_k in the k-th control block, wherein the total number of control blocks is q + 1, k is an integer, q is a positive integer, and 0 ≤ k ≤ q;
a read obtaining unit, configured to read x_k old bytes starting from the read position in the old file by using the target read memory and sum them with the x_k difference bytes in the k-th difference block to obtain x_k sum bytes;
a second determining unit, configured to determine a target cache memory based on the number of difference bytes x_k and the number of new bytes y_k in the k-th control block;
a cache unit, configured to cache the x_k sum bytes and the y_k new bytes in the k-th newly added block by using the target cache memory.
9. A terminal device, comprising a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to perform the method for file delta update optimization of any of claims 1-7 according to instructions in the program code.
10. A computer-readable storage medium for storing program code for performing the method for file delta update optimization of any of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010386611.XA CN111580849B (en) | 2020-05-09 | 2020-05-09 | File difference updating and optimizing method and related device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111580849A true CN111580849A (en) | 2020-08-25 |
CN111580849B CN111580849B (en) | 2023-07-04 |
Family
ID=72115338
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010386611.XA Active CN111580849B (en) | 2020-05-09 | 2020-05-09 | File difference updating and optimizing method and related device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111580849B (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0472812A1 (en) * | 1990-08-28 | 1992-03-04 | Landis & Gyr Technology Innovation AG | Method to charge an object code version to a first program stored in the computer of an appliance into an object code version of a second program which was derived by at least one change to the first program |
US20040062130A1 (en) * | 2002-09-30 | 2004-04-01 | Chiang Ying-Hsin Robert | Updating electronic files using byte-level file differencing and updating algorithms |
JP2006079492A (en) * | 2004-09-13 | 2006-03-23 | Mitsubishi Electric Corp | Difference data generation device, difference data generation method and difference data generation program |
CN104123149A (en) * | 2013-04-28 | 2014-10-29 | 腾讯科技(深圳)有限公司 | Software upgrading method, device, client and system |
CN106250195A (en) * | 2016-08-10 | 2016-12-21 | 青岛海信电器股份有限公司 | Update the method for system file, equipment and system |
CN106528125A (en) * | 2016-10-26 | 2017-03-22 | 腾讯科技(深圳)有限公司 | Data file incremental updating method, server, client and system |
CN107273159A (en) * | 2017-06-08 | 2017-10-20 | 深圳市华信天线技术有限公司 | Difference patch upgrading method and device suitable for embedded system |
US20180173723A1 (en) * | 2015-06-04 | 2018-06-21 | Here Global B.V. | Incremental update of compressed navigational databases |
CN109697071A (en) * | 2017-10-24 | 2019-04-30 | 腾讯科技(深圳)有限公司 | Installation kit synthetic method, device, terminal and storage medium |
Non-Patent Citations (4)
Title |
---|
LEON.LIAO: "Online upgrade - differential upgrade", pages 1 - 6, Retrieved from the Internet <URL:https://blog.csdn.net/qazw9600/article/details/105440994> * |
YUICHI KOMANO et al.: "Efficient and Secure Firmware Update/Rollback Method for Vehicular Devices", INFORMATION SECURITY PRACTICE AND EXPERIENCE, pages 455 * |
BAO Xiao'an et al.: "Design and analysis of remote update for embedded platforms based on compression and differential algorithms", Journal of Zhejiang Sci-Tech University (Natural Sciences Edition), vol. 43, no. 4, pages 535 - 541 * |
XU Mengru: "Design of a remote upgrade system for vehicle-mounted embedded devices", China Master's Theses Full-text Database, Engineering Science and Technology II, no. 3, pages 035 - 72 * |
Also Published As
Publication number | Publication date |
---|---|
CN111580849B (en) | 2023-07-04 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||