EP3698253A1 - System and method for managing program memory on a storage device - Google Patents

System and method for managing program memory on a storage device

Info

Publication number
EP3698253A1
Authority
EP
European Patent Office
Prior art keywords
object code
storage
code segments
block
program memory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
EP18867427.9A
Other languages
German (de)
English (en)
Other versions
EP3698253A4 (fr)
Inventor
Lior HAMMER
Gilad Barzilay
Yaron Galula
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Argus Cyber Security Ltd
Original Assignee
Argus Cyber Security Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Argus Cyber Security Ltd filed Critical Argus Cyber Security Ltd
Publication of EP3698253A1
Publication of EP3698253A4


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00: Arrangements for software engineering
    • G06F8/60: Software deployment
    • G06F8/65: Updates
    • G06F8/654: Updates using techniques specially adapted for alterable solid state memories, e.g. for EEPROM or flash memories
    • G06F8/658: Incremental updates; Differential updates

Definitions

  • the present invention relates to programmable computing devices. More particularly, the present invention relates to systems and methods for management of program memory storage.
  • IoT: Internet of Things
  • OS: operating system
  • IoT devices may be elements of automotive, or inter-vehicle, networks that allow internal communication between various components of a vehicle (e.g., air conditioning system, diagnostics, engine, etc.) via Electronic Control Units (ECUs).
  • ECUs normally get input from sensors (e.g., speed, temperature, pressure, etc.) to be used in their analysis, and exchange data among themselves during the normal operation of the vehicle.
  • an engine may need to inform a transmission box what the engine speed is, and the transmission may need to inform other modules when a gear shift occurs.
  • the inter-vehicle network allows exchanging data quickly and reliably, with internal communication between the ECUs.
  • OTA: Over-The-Air
  • new software, configuration settings, and updating encryption keys may be distributed to various computerized devices.
  • a central location such as a dedicated remote server may send an update to a subset of users or embedded end units.
  • Delta updates are a common method to carry out software updates over the air, occupying minimal memory for each update, in order to reduce bandwidth costs and minimize update time so as to reduce the overall system down-time.
  • Delta updates include sending the difference between an old version of the software (or software image such as an object file, as commonly referred to in the art) and a new (or revised) version of the software image, instead of sending the new software image in its entirety.
  • When an end unit (e.g., an IoT device) receives a delta update, a dedicated algorithm may analyze the received partial image and the existing software image and decide what needs to be updated.
  • Delta update algorithms may require a substantial amount of memory that may exceed the available memory on the embedded device.
  • a prevalent delta update algorithm called "bsdiff" normally requires n+m+O(1) bytes of memory, where 'n' is the size of the old software component in bytes, 'm' is the size of the new software component in bytes and O(1) is big O notation for a constant that may depend upon a specific implementation of the algorithm, as known in the art.
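The n+m+O(1) memory model above can be sketched numerically. This is an illustrative sketch only; the function names and the 4 kB constant overhead are assumptions, not part of bsdiff itself:

```python
# Illustrative sketch: estimating whether a constrained device can
# apply a bsdiff-style delta patch in RAM. The overhead constant is
# an assumed stand-in for the O(1) term.

def delta_ram_needed(old_size: int, new_size: int, overhead: int = 4096) -> int:
    """bsdiff-style patching keeps both images in memory: n + m + O(1)."""
    return old_size + new_size + overhead

def fits_on_device(old_size: int, new_size: int, device_ram: int) -> bool:
    return delta_ram_needed(old_size, new_size) <= device_ram

# A 100 kB image updated to a 100 kB image needs roughly 200 kB of RAM,
# which already exceeds a device with only 150 kB of free RAM:
print(fits_on_device(100_000, 100_000, 150_000))  # False
```

This arithmetic is what motivates the sparse allocation scheme discussed later, where only a partially utilized block has to be patched at a time.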
  • Embodiments of the present invention may include a system and a method of managing program memory on a storage device. According to some embodiments, the method may include:
  • receiving storage block information including at least one of: a storage block size of a storage device and a block utilization limit of the storage device;
  • the plurality of object code segments may be associated with respective one or more functions of software modules, and sparse stacking of object code segments may include selection of object code segments according to the association of the object code segments with the respective functions of software modules.
  • Embodiments of the method may further include:
  • a function call graph comprising a plurality of nodes, each representing a specific function associated with an object code segment, and a plurality of edges, each representing a call of one function to another;
  • a size indicator representing a storage size of the respective object code segment.
  • selection of object code segments for stacking may include:
  • Embodiments of the method may further include maintaining an address table associating each object code segment with a respective storage address on a block of the program memory storage and storing of the patch object code segment may include replacing the storage address of at least one object code segment on the address table with that of the patch object code segment.
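The address-table mechanism described above can be sketched as follows. This is a hypothetical sketch; the class, method names and addresses are illustrative assumptions, not the patent's implementation:

```python
# Hypothetical sketch of the address table described above: each object
# code segment maps to a storage address on program memory, and applying
# a patch segment replaces the address of the segment it supersedes.

class AddressTable:
    def __init__(self):
        self._addr = {}  # segment id -> storage address on program memory

    def set(self, segment_id: str, address: int) -> None:
        self._addr[segment_id] = address

    def get(self, segment_id: str) -> int:
        return self._addr[segment_id]

    def apply_patch(self, segment_id: str, patch_address: int) -> None:
        # The patch segment is stored elsewhere (e.g., in a vacant page),
        # and the table simply redirects the segment to the new address.
        self._addr[segment_id] = patch_address

table = AddressTable()
table.set("func_a", 0x0000)          # original location in block 0
table.apply_patch("func_a", 0x1A00)  # patch stored in a vacant page
print(hex(table.get("func_a")))      # 0x1a00
```

The design choice worth noting is that the original segment is never rewritten in place; only the table entry changes, which avoids erasing the block that holds the old segment.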
  • the software modules are associated with one or more abstraction layers, selected from a list comprising a kernel layer, a driver layer and an application layer.
  • Embodiments of the present invention may include a system for managing program memory on a storage device.
  • the system may include:
  • storage block information of the first storage device including at least one of: storage block size of a storage device and block utilization limit of the storage device; receive at least one first object file including a plurality of object code segments and a respective plurality of linker placeholders;
  • Embodiments of the present invention may include a method of managing program memory on a storage device.
  • the method may include:
  • Fig. 1 is a block diagram, depicting a computing device that may be included within a system for management of program memory storage, according to some embodiments;
  • Fig. 2 is a block diagram, depicting a system for management of program memory storage, according to some embodiments;
  • Fig. 3 is a schematic block diagram, depicting an example of object code data that may be used by a system for management of program memory storage, according to some embodiments;
  • Fig. 4 is a block diagram, depicting an example of an implementation of a system for management of program memory storage as part of an inter-vehicle network constellation, according to some embodiments;
  • Figs. 5A and 5B are block diagrams, depicting two examples of utilization of a program memory storage device as part of a system for management of program memory storage, according to some embodiments;
  • Fig. 6 is a block diagram, depicting an example of a function call graph, which may be included within a system for management of program memory storage, according to some embodiments;
  • Fig. 7 is a flow diagram, depicting a method of management of program memory storage, according to some embodiments; and
  • Fig. 8 is a flow diagram, depicting a method of management of program memory storage, according to some embodiments.
  • Fig. 1 is a block diagram depicting a computing device 10, which may be included within an embodiment of a system for management of program memory storage, according to some embodiments.
  • Computing device 10 may include a controller 2 that may be, for example, a central processing unit (CPU) processor, a chip or any suitable computing or computational device, an operating system 3, a memory 4, executable code 5, a storage system 6, input devices 7 and output devices 8. Controller 2 (or one or more controllers or processors, possibly across multiple units or devices) may be configured to carry out methods described herein, and/or to execute or act as the various modules, units, etc. More than one computing device 10 may be included in, and one or more computing devices 10 may act as the components of, a system according to embodiments of the invention.
  • Operating system 3 may be or may include any code segment (e.g., one similar to executable code 5 described herein) designed and/or configured to perform tasks involving coordination, scheduling, arbitration, supervising, controlling or otherwise managing operation of computing device 10, for example, scheduling execution of software programs or tasks or enabling software programs or other modules or units to communicate.
  • Operating system 3 may be a commercial operating system. It will be noted that an operating system 3 may be an optional component, e.g., in some embodiments, a system may include a computing device 10 that does not require or include an operating system 3.
  • Memory 4 may be or may include, for example, a Random Access Memory (RAM), a read only memory (ROM), a Dynamic RAM (DRAM), a Synchronous DRAM (SD-RAM), a double data rate (DDR) memory chip, a Flash memory, a volatile memory, a non-volatile memory, a cache memory, a buffer, a short term memory unit, a long term memory unit, or other suitable memory units or storage units.
  • Memory 4 may be or may include a plurality of, possibly different memory units.
  • Memory 4 may be a computer or processor non-transitory readable medium, or a computer non-transitory storage medium, e.g., a RAM.
  • Executable code 5 may be any executable code, e.g., an application, a program, a process, task or script. Executable code 5 may be executed by controller 2, possibly under control of operating system 3. For example, executable code 5 may be an application that may perform management of program memory storage, as further described herein. Although, for the sake of clarity, a single item of executable code 5 is shown in Fig. 1, a system according to some embodiments of the invention may include a plurality of executable code segments similar to executable code 5 that may be loaded into memory 4 and cause controller 2 to carry out methods described herein.
  • Storage system 6 may be or may include, for example, a flash memory as known in the art, a memory that is internal to, or embedded in, a micro controller or chip as known in the art, a hard disk drive, a CD-Recordable (CD-R) drive, a Blu-ray disk (BD), a universal serial bus (USB) device or other suitable removable and/or fixed storage unit.
  • Content may be stored in storage system 6 and may be loaded from storage system 6 into memory 4, where it may be processed by controller 2.
  • some of the components shown in Fig. 1 may be omitted.
  • memory 4 may be a non-volatile memory having the storage capacity of storage system 6. Accordingly, although shown as a separate component, storage system 6 may be embedded or included in memory 4.
  • Input devices 7 may be or may include any suitable input devices, components or systems, e.g., a detachable keyboard or keypad, a mouse and the like.
  • Output devices 8 may include one or more (possibly detachable) displays or monitors, speakers and/or any other suitable output devices.
  • Any applicable input/output (I/O) devices may be connected to computing device 10 as shown by blocks 7 and 8.
  • NIC: network interface card
  • USB: universal serial bus
  • any suitable number of input devices 7 and output device 8 may be operatively connected to computing device 10 as shown by blocks 7 and 8.
  • a system may include components such as, but not limited to, a plurality of central processing units (CPU) or any other suitable multi- purpose or specific processors or controllers (e.g., controllers similar to controller 2), a plurality of input units, a plurality of output units, a plurality of memory units, and a plurality of storage units.
  • System 1 may include at least one computing device 10 (e.g., element 10 of Fig. 1), configured to produce an executable instruction code, and at least one target device 20, configured to receive the produced code, and execute it by a controller or processor therein, as known in the art.
  • computing device 10 may be a desktop computer, a server computer, a smartphone, a laptop and the like, and target device 20 may be an IoT device, a vehicle Electronic Control Unit (ECU) device, and the like.
  • computing device 10 may also be implemented as an ECU device, on condition that it has sufficient computational resources to implement embodiments of a method of management of program memory storage, as described herein.
  • Computing device 10 and target device 20 may be communicatively connected through any type of wired or wireless communication protocol, including for example: TCP/IP, Bluetooth, WiFi, Cellular communication protocols (e.g., WCDMA, LTE, etc.), inter-vehicle communication protocols, and the like.
  • target device 20 may typically include a processor or controller 210, and limited memory resources.
  • Embodiments of system 1 may implement at least one method for:
  • a program memory storage device 220 such as a Flash memory device, a solid-state device (SSD), a Non-Volatile Random Access Memory (NVRAM) device and the like.
  • target device 20 may include a random-access memory (RAM) device 230, that may be included in or associated with program memory storage 220 (e.g., a Flash memory device, an SSD device and the like), and may be used, for example, to sort or organize segments of program memory stored on program memory storage 220, as explained herein.
  • target device 20 may be configured to transfer the executable code to a second program memory device 210-A (often referred to as an 'internal' memory device) associated with controller 210 during boot time, using a boot loader 240. Controller 210 may then execute the executable code from the internal program memory 210-A at run-time.
  • at least one computing device 10 (e.g., a personal computer, a server, a laptop computer and the like) may build the software instruction code, to produce object code 40.
  • Object code 40 may include or may be formatted as one or more object files 41 (e.g., 41A, 41B and 41C).
  • the one or more object files 41 may include a plurality of object code segments 410 (e.g., 410A, 410B and 410C) and a respective plurality of storage address fields 412 (e.g., 412A, 412B and 412C).
  • the plurality of object code segments may be attributed (e.g., as part of the storage address fields 412) a respective plurality of linker placeholders.
  • the linker placeholders may be or may include for example an initial or arbitrary value.
  • the value stored within the storage address fields 412 may be modified, during a linking stage of the build process, where the linker placeholders may be changed to a value of a storage address pointer or reference to a location where the respective object code segment is to be stored on program memory storage 220.
  • Embodiments of the present invention may implement a method for optimally selecting at least one storage address pointer, so as to require minimal storage space on program memory storage 220 and require minimal network traffic for transferring object code 40 between computing device 10 and target device 20, as explained herein.
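The placeholder-replacement step described above can be sketched as follows. The field names and the placeholder constant are illustrative assumptions; a real linker operates on object-file relocation records rather than dictionaries:

```python
# Sketch of the linking stage described above: each storage address
# field that still holds the initial placeholder value is rewritten
# with the storage address chosen for its object code segment.

PLACEHOLDER = 0xFFFFFFFF  # assumed initial/arbitrary value

def link(segments, chosen_addresses):
    """segments: list of dicts with 'id' and 'addr_field';
    chosen_addresses: segment id -> storage address on the device."""
    for seg in segments:
        if seg["addr_field"] == PLACEHOLDER:
            seg["addr_field"] = chosen_addresses[seg["id"]]
    return segments

objs = [{"id": "410A", "addr_field": PLACEHOLDER},
        {"id": "410B", "addr_field": PLACEHOLDER}]
linked = link(objs, {"410A": 0x0000, "410B": 0x9C40})
print([hex(s["addr_field"]) for s in linked])  # ['0x0', '0x9c40']
```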
  • the object code segments may be associated with respective one or more segments of the software instruction code.
  • specific segments of the at least one object file may be associated (e.g., by a label, a name or an identifier) with functions of software modules (e.g., software applications, drivers, kernel objects, etc.) within the software instruction code.
  • each object code segment may be associated with an abstraction layer, including for example: a kernel layer, a driver layer and an application layer.
  • object file 41 may include a function identifier field 411 (e.g., 411A, 411B and 411C), associating one or more object code segments 410 with respective functions identifiers of functions within software modules of the software instruction code.
  • computing device 10 may receive as input 30 instruction code in a high-level computing language (e.g., C, C++, and the like), and may execute (e.g., on element 2 of Fig. 1) one or more software modules to process (e.g., build) instruction code 30 and produce object code 40.
  • computing device 10 may employ one or more of: (a) a preprocessing module 100, (b) a compiler module 110, (c) an assembler module 120, and (d) a linker module 130, as known in the art.
  • embodiments may include any combination or subset of modules 100, 110, 120, 130.
  • computing device 10 may receive as input 30 one or more object files that may include a plurality of object code segments and a respective plurality of linker placeholders in storage address fields 412.
  • Computing device 10 may consequently only employ linker module 130 to produce object code 40, with program memory address pointers in storage address fields 412, as elaborated herein.
  • program memory storage 220 may include a plurality of storage blocks.
  • program memory storage 220 may be a Flash device, including a plurality of blocks that are the minimal erasable entities within the flash device, where each block includes a plurality of programmable pages.
  • computing device 10 may receive storage block information 31 relating to program memory storage 220.
  • Storage block information 31 may include for example: the number of storage blocks and the size of storage blocks within program memory storage 220.
  • Computing device 10 may receive (e.g. from a user, via element 7 of Fig. 1) additional storage block information 31, including a block utilization limit parameter. For example, a user may dictate that one or more blocks of program memory storage 220 may be utilized (e.g., store program data therein) up to a predefined limit (e.g., up to 60% of the block size).
  • Computing device 10 may be configured to sparsely stack or accumulate object code segments to produce two or more libraries 415 (e.g., elements 415A and 415B of Fig. 3) according to the storage block information.
  • object code segments may be allocated non- sequential storage locations on program memory storage 220 (e.g., in order to reserve space for future additions or modifications of the object code).
  • each block in program memory storage 220 is 100 kB (kilobytes);
  • the utilization limit parameter is set (e.g., by a user configuration) to 60%; and the storage sizes of object code segments 410A - 410D are 39, 20, 9 and 49 kB respectively.
  • the utilization limit of storage blocks is 60% of 100 kB, i.e., 60 kB;
  • the cumulative size of object code segments 410A and 410B is 59 kB;
  • the cumulative size of object code segments 410C and 410D is 58 kB.
  • computing device 10 may:
  • stack object code segments 410A and 410B to produce a first library 415A, the size of which (59 kB) is beneath the utilization limit of storage blocks;
  • stack object code segments 410C and 410D to produce a second library 415B, the size of which (58 kB) is also beneath the utilization limit of storage blocks.
  • embodiments of the invention may stack the object code segments in a sparse manner so that library 415A may be allocated storage space on a first block of storage 220, and may occupy only 60% of the first storage block, and library 415B may be allocated storage space on a second, possibly consecutive block of storage 220, and may occupy only 60% of that block.
  • Linker 130 may replace the plurality of linker placeholders with actual addresses of sections of program memory according to the stacking of object code segments.
  • Pertaining to the same example, linker 130 may replace the initial content of storage address fields 412A - 412D with address pointers, where:
  • 412A and 412B would point to addresses of pages within the first storage block of storage 220;
  • 412C and 412D would point to addresses of pages within the second storage block of storage 220.
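The stacking step in the example above can be sketched as a simple size-capped packing pass. This is a minimal sketch assuming a greedy, order-preserving policy; the patent's actual selection may also honor function relationships and per-module limits:

```python
# Sketch of sparse stacking: pack object code segments into libraries,
# each capped at the block utilization limit (here, 60% of a 100 kB
# block). Only the size constraint from the example is modeled.

def stack_segments(sizes, block_size=100_000, limit=0.6):
    cap = int(block_size * limit)          # 60 kB per library
    libraries, current, used = [], [], 0
    for seg_id, size in sizes:
        if used + size > cap and current:
            libraries.append(current)      # close the current library
            current, used = [], 0
        current.append(seg_id)
        used += size
    if current:
        libraries.append(current)
    return libraries

# Segments 410A-410D of 39, 20, 9 and 49 kB, as in the example:
segs = [("410A", 39_000), ("410B", 20_000), ("410C", 9_000), ("410D", 49_000)]
print(stack_segments(segs))  # [['410A', '410B'], ['410C', '410D']]
```

With these inputs the sketch reproduces the example's result: a 59 kB library (410A+410B) and a 58 kB library (410C+410D), each under the 60 kB utilization limit.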
  • computing device 10 may receive (e.g., from a user via element 7 of Fig. 1) and attribute specific block utilization limit parameters per each software module. For example, a first application that may be prone to future fixes may be attributed a first block utilization limit value, and a second application that may be less prone to future fixes may be attributed a second block utilization limit value that is higher than the first block utilization limit value. Computing device 10 may consequently stack the object code segments according to the attributed block utilization limit. Pertaining to the same example, libraries pertaining to the first application may be stacked, and later stored on storage 220 more sparsely (e.g., with greater gaps between libraries) than libraries pertaining to the second application.
  • Computing device 10 may transmit (e.g., via a wired or wireless communication network) object code 40, that may include a plurality of object code segments, sparsely stacked into libraries (e.g., 415A, 415B) as explained above, to target device 20.
  • Target device 20 may sparsely store the plurality of object code segments on the storage device according to the actual addresses allocated thereto.
  • controller 210 may configure program memory storage 220 to store the content of library 415A according to storage address pointers 412A and 412B (e.g., within the first storage block) and store the content of library 415B according to storage address pointers 412C and 412D (e.g., within the second storage block).
  • libraries 415A and 415B may be stored sparsely, e.g., in non-contiguous addresses of program memory storage 220, according to the storage address pointers.
  • library 415A may occupy the first 60kB of the first storage block, and library 415B may occupy the first 60kB of the second storage block, thus forming a gap between the two stored instances.
  • the plurality of object code segments 410 may be associated with respective one or more functions of software modules via function identifiers 411.
  • the sparse stacking of object code segments, and consequent storage thereof on program memory storage 220 may include selection of object code segments according to the association of the object code segments with the respective functions of software modules.
  • computing device 10 may identify or determine that two or more functions are related (e.g., when a first function includes or calls a second function), as explained herein.
  • Linker 130 may consequently select the two or more object code segments associated with the related functions to aggregate them into a library.
  • Target device 20 may subsequently localize the storage of the object code segments associated with the related functions (e.g., store the related object code segments in a single storage block).
  • This compartmentation of executable code may facilitate an update of software in a manner that is optimal in terms of: (a) the number of changes that may be required on the data stored on program memory storage 220, and (b) the amount of data that may need to be transferred from computing device 10 to target device 20 in case of such a software update.
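The call-graph-based selection described above (functions as nodes, calls as edges) can be sketched by grouping connected functions. Connected components are one simple grouping criterion used here for illustration; the function names are hypothetical:

```python
# Hypothetical sketch: group related functions via a call graph so that
# their object code segments can be localized in the same storage block.

from collections import defaultdict

def related_groups(edges, nodes):
    adj = defaultdict(set)
    for caller, callee in edges:
        adj[caller].add(callee)
        adj[callee].add(caller)
    seen, groups = set(), []
    for node in nodes:
        if node in seen:
            continue
        stack, comp = [node], set()   # depth-first traversal
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        groups.append(sorted(comp))
    return groups

calls = [("main", "read_sensor"), ("read_sensor", "filter")]
funcs = ["main", "read_sensor", "filter", "log_stats"]
print(related_groups(calls, funcs))
# [['filter', 'main', 'read_sensor'], ['log_stats']]
```

Each resulting group would be a candidate library, so that a fix touching one function tends to stay within a single storage block.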
  • Fig. 4 is a block diagram, depicting an example of an implementation of a system for management of program memory storage as part of an inter-vehicle network constellation, according to some embodiments.
  • system 1 in Fig. 2 may be embedded into or implemented as an inter-vehicle network 200 or bus.
  • system 1 may include one or more ECUs as target devices 20 (e.g., 20A and 20B), and may optimize communication on the inter-vehicle network 200.
  • inter-vehicle network 200 may include a master ECU 211 including a processor (e.g., such as element 2 of Fig. 1) in communication with other ECU components of inter-vehicle network 200 (where communication is indicated with arrows in Fig. 4).
  • master ECU 211 may communicate with one or more slave ECU modules (e.g., 20A, 20B) as known in the art, and with a communication ECU 212.
  • system 1 may allow optimization of various attributes of data transfer, such as optimization of memory allocation as well as optimization of operating time (e.g., reduction of downtime) for the data to be transferred and/or uploaded data, for instance data for software/firmware updates.
  • processor 211 may be coupled to at least one ECU 20 (e.g., 20A and 20B) and may analyze operations of ECUs coupled thereto. It should be noted that each of processor 211 and ECUs 20 coupled thereto may be considered as a node of the inter-vehicle network 200. In some embodiments, communication between nodes of inter-vehicle network 200 may be carried out at least partially with wireless communication (e.g., via Bluetooth).
  • inter-vehicle network 200 may include a communication ECU 212 configured to allow wired or wireless communication within inter-vehicle network 200 and/or communication with external devices.
  • communication ECU 212 may enable communication to computing device 10, as elaborated in Fig. 2.
  • communication ECU 212 may enable a navigation system ECU to communicate with satellites and/or to receive messages (e.g., a time stamp) from external sources.
  • communication ECU 212 may be implemented on the same entity as master ECU 211.
  • master ECU 211 may be configured to perform as the computing device 10 of Fig. 2, and produce object code 40, as elaborated above in relation to Fig. 2, and at least one ECU device (e.g., 20A, 20B) may perform as the target device 20 of Fig. 2, and may store the executable code on a program memory storage device (e.g., element 220 of Fig. 2), to execute the code on a respective processor (e.g., element 210 of Fig. 2) therein.
  • master ECU 211 may be configured to transfer (e.g., via communication ECU 212) object code 40 from an external computing device 10 to at least one ECU device (e.g., 20A, 20B), which may in turn store the executable code on a program memory storage device (e.g., element 220 of Fig. 2), to execute the code on a respective processor (e.g., element 210 of Fig. 2) therein.
  • communication between nodes of inter-vehicle network 200 may be continuous or periodic (e.g., sending single files and/or images).
  • all communication within inter-vehicle network 200 may be stored (e.g., on a memory unit) and processor 211 may analyze the communication history and determine that communication previously received by at least one node of inter-vehicle network 200 may be compromised.
  • at least one node of inter-vehicle network 200 may analyze and/or process data within inter-vehicle network 200.
  • at least one computing device such as device 10, as shown in Fig. 1, may be embedded into inter-vehicle network 200 and may process data from the network to analyze data within the inter-vehicle network 200.
  • at least one computing device such as device 10, as shown in Fig. 1, may be embedded into at least one node of inter-vehicle network 200 and process data from that node and/or from the network to analyze data within the inter-vehicle network 200.
  • a node of the inter-vehicle network 200 may include a low-end processing chip such as controller 210 shown in Fig. 2.
  • Figs. 5A and 5B are block diagrams depicting examples of utilizing a program memory device (e.g., element 220 of Fig. 2) as part of a system for management of program memory storage, according to some embodiments.
  • program memory storage 220 includes 10 storage blocks, each having 100kB of memory space, and the total storage space required for the object code (e.g., element 40 of Fig. 2) is 600kB.
  • Fig. 5A depicts a 'naive', consecutive allocation scheme for the object code 40 on storage element 220, where the first six storage blocks are sequentially allocated to store object code 40.
  • This allocation is naive, in the sense that a minor change in object code 40 may require extensive transfer of data from computing device 10 to target device 20, as well as extensive data reallocation (e.g., a plurality of program/erase cycles for storage devices 220 implemented as Flash memory devices, as known in the art).
  • For example, a minor fix may be introduced to input 30 (e.g., an instruction code in a high-level programming language such as C).
  • Object code 40 may consequently increase in size (e.g., by 1kB, to 601kB).
  • if program memory storage 220 is a Flash device, and the additional code segment should reside, according to the contiguous allocation scheme, in block number 2, then block 2 will need to be re-flashed in its entirety, as no partial erasure of blocks is permitted on Flash devices.
  • blocks '0' and '1' may also need to be reprogrammed, as they may include a relative reference to subsequent blocks (e.g., blocks of higher indices) that may no longer be valid.
  • Target device 20 (e.g., an IoT device) may be limited in storage resources.
  • target device 20 may not have enough RAM space to implement a delta algorithm as part of a software update procedure. Therefore, the content of each block of the updated object code may need to be transferred in its entirety from computing device 10 to target device 20.
  • execution of commercially available delta algorithms (e.g., "bspatch") on a single storage block of 100kB may require as much as 100kB + 100kB + O(1) RAM space. This space may be greater than the space available on RAM 230 (e.g., 150kB).
  • every update of content of a storage block of program memory storage 220 may require a complete transfer of the content of the updated block from computing device 10 to target device 20.
  • Figure 5B depicts an improved, sparse program memory allocation scheme, in which the data is allocated sparsely, e.g., in a non-contiguous manner.
  • object code 40 may be partitioned in advance according to a block utilization limit parameter.
  • the limit parameter may be 60%, and object code 40 may consequently be partitioned into ten parts and stored sparsely on each of blocks 0 through 9.
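The partition arithmetic above can be sketched as follows (an illustrative sketch only, not code from the patent; the function name and units are assumptions):

```python
def partition_sizes(total_kb, block_kb, limit_pct):
    """Split an object code of total_kb into parts, each no larger than the
    per-block budget implied by the block utilization limit (in percent)."""
    budget = block_kb * limit_pct // 100  # e.g., 100kB * 60% -> 60kB per block
    parts = []
    remaining = total_kb
    while remaining > 0:
        part = min(budget, remaining)
        parts.append(part)
        remaining -= part
    return parts

# 600kB of object code, 100kB blocks, 60% limit -> ten 60kB parts,
# one per storage block 0 through 9
print(partition_sizes(600, 100, 60))
```

With the 60% limit, each 100kB block keeps 40kB vacant, which is what later allows a small update to be absorbed within a single block.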
  • object code 40 is updated to include a small change that may increase its size (e.g., by 1kB, to 601kB)
  • the additional data may be written into a single block of program memory storage 220, without affecting or requiring reallocation of adjacent blocks.
  • program memory storage 220 is implemented as a Flash device and if an additional object code segment needs to reside, according to the sparse allocation scheme, within storage block number 2, the additional object code segment may be written to vacant pages within block 2 as depicted in Fig. 5B, without affecting adjacent blocks as in the example depicted in Fig. 5A.
  • target device 20 may only require 60kB + 60kB + O(1) of RAM 230 space to facilitate a delta algorithm such as "bspatch" as part of a program update process (in contrast with 200kB + O(1), as in the example depicted in Fig. 5A).
  • system 1 may only need to transfer 1kB of new code from computing device 10 to target device 20.
  • the scheme depicted in Fig. 5B provides a number of benefits over the scheme depicted in Fig. 5A during an update of data stored on target device 20. These benefits may include, for example, the following:
  • computing device 10 may receive a fix or update to previously received input code 30.
  • this fix may be received as a second instruction code in a high-level software language (e.g., C, C++), and computing device 10 may process or build the new input code as known in the art, to produce a respective, second object code including at least one object file.
  • the fix may already be received at computing device 10 (e.g., from an external source, not shown) as a second object code, including at least one object file that includes at least one object code segment.
  • Computing device 10 may apply a delta encoding algorithm on the at least one first and second object files, as known in the art, to produce a patch file.
  • the patch file may include at least one patch object code segment.
  • Computing device 10 may transfer the at least one patch object code segment to target device 20, and processor 210 of target device 20 may store the at least one patch object code segment on a block of the program memory storage 220.
  • processor 210 of target device 20 may be configured to execute a delta algorithm (e.g., "bspatch") as known in the art to replace the program data stored on storage 220 with the updated software, and reboot to load the updated software to program memory device 210-A and execute the updated software.
  • At least one storage block (e.g., one block) of storage 220 may be dedicated to store one or more patch object code segments, and at least one storage block of storage 220 may hold an address table associating each object code segment with a respective storage address on a block of the program memory storage.
  • Processor 210 of target device 20 may be configured, upon receiving a patch object code segment, to replace the storage address of at least one object code segment on the address table with that of the patch object code segment.
  • processor 210 may then get the address of the fixed or updated function within the patch-dedicated storage block from the address table and execute the updated software.
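A minimal sketch of this address-table redirection (the block base, offsets and function names are hypothetical; they are not taken from the patent): applying a patch only rewrites the table entry, so the original segment need not be erased.

```python
# Hypothetical layout: one block of storage 220 holds the address table,
# another block is dedicated to patch object code segments.
PATCH_BLOCK_BASE = 0x9000  # assumed base address of the patch-dedicated block

address_table = {"func1": 0x0100, "func2": 0x0200}  # segment -> storage address

def apply_patch(table, func_name, offset_in_patch_block):
    """Redirect func_name to its patched copy inside the patch block."""
    table[func_name] = PATCH_BLOCK_BASE + offset_in_patch_block

apply_patch(address_table, "func2", 0x40)
print(hex(address_table["func2"]))  # calls to func2 now resolve into the patch block
```

Only the table block and the patch block are rewritten; all other storage blocks, including the one holding the original func2, stay untouched.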
  • a dedicated algorithm may be implemented on computing device 10.
  • Such an algorithm may include executable code for a linker (e.g., element 130 of Fig. 2) to form partitions in the compiled code (e.g., object code 40) prior to replacing the content of storage address fields 412 (e.g., elements 412A, 412B, 412C and 412D of Fig. 3) from linker placeholders to pointers to actual storage addresses.
  • the linker 130 module may typically receive the output of a preprocessing module 100, a compiling module 110 and an assembly module 120.
  • This output may be formatted as an assembly code version of the source code (e.g., as one or more object files), and may include placeholder addresses (e.g., addresses that are initialized to an arbitrary value, such as 0xFFFF) instead of real storage addresses.
  • Linker 130 may create a single executable code 40, by finalizing the location of object code segments and replacing all the placeholders with real storage addresses.
  • linker 130 may sparsely stack object code segments into libraries based on flash block information (e.g., as depicted in Fig. 5B). This stands in contrast to serial library stacking (e.g., as depicted in Fig. 5A) that may be common practice in commercially available linkers.
  • a first change or update in an instruction code may impact a plurality of code segments and may induce a plurality of alterations in object code 40.
  • a change in a single function of a software module may require a change to the function's prototype or address that may, in turn, demand a change in all the instances of the function's calls that may be manifested on a plurality of storage blocks.
  • Embodiments may include a method of avoiding such proliferation of changes, through novel compartmentation of object code segments according to a hierarchical function call structure, as explained herein.
  • IoT devices in general and automotive devices in particular typically use bare-board implementations that may create a monolithic code image, where one is unable to discern between different software components. Partitioning or compartmenting the code according to different software modules (e.g., applications, drivers, kernel objects and the like) may not be possible under such conditions.
  • IoT software may typically be characterized by the following features:
  • the code image is normally static (or deterministic), meaning that the flow of the program may be completely determined at compilation time
  • the code is non-recursive, as required by the ISO-26262 standard and/or MISRA-C best coding practices, meaning that a function may either call a child function or end and return to its caller, but will never call one of the callers up the call graph.
  • FIG. 6 is a block diagram, depicting an example of a function call graph, which may be included within a system for management of program memory storage, according to some embodiments.
  • linker 130 may be configured to produce a function call graph including a plurality of nodes (e.g., main, func1, func2, func3, etc.). Each node may represent a specific function (e.g., main(), func1(), func2(), func3(), etc., respectively) associated with a respective object code segment, as explained above in relation to Fig. 2.
  • the nodes may be interconnected by a plurality of edges, each representing a call of one function (e.g., main()) to another function (e.g., funcl()).
  • the call graph may represent a static, non-recursive code image. The non-recursion assumption is manifested by the fact that the software flows strictly from the left-hand side to the right-hand side; no arrows point from right to left in Fig. 6.
  • linker 130 may be further configured to attribute each node of the function call graph with a size indicator, representing a storage size of the respective object code segment.
  • func11 may be associated with a 50kB-large object code segment and func111 may be associated with an object code segment that may consume 0.5kB of storage.
  • Function call graph 140 may be implemented as any appropriate data structure known in the art, including for example a linked list, a relational database and the like. Function call graph 140 may be stored in a storage device (e.g., element 4 or 6 of Fig. 1) associated with or included within computing device 10.
  • object code segments may be sparsely stacked into libraries according to the call graph.
  • Selection of object code segments for stacking may include the following stages:
  • linker 130 may select a group (e.g., Group1) of nodes including, or representing, one or more object code segments (e.g., object code segments respective to functions func11, func111, func112, func113 and func1121).
  • the nodes may be related along branches of the function call graph (e.g., derive from a common calling function, such as func1 in the example of Group1).
  • the cumulative value of the size indicators of the one or more object code segments of the group may be limited so as not to surpass the block utilization limit. Pertaining to the aforementioned example, where the block utilization limit was 60% and the block size is 100kB, this limit is set to 60kB. Therefore, the cumulative sum of object code segment sizes in each group may be limited to 60kB. As shown in Fig. 6, each of groups Group1, Group2 and Group3 complies with this limitation.
  • Linker 130 may sparsely stack the object code segments of the selected group (e.g., Group1) to produce a library, as elaborated above in relation to Fig. 2.
  • Linker 130 may repeat the above steps of selecting groups of object code segments and sparsely stacking them to produce libraries, until all object code segments of the at least one first object file are stacked in libraries.
  • Computing device 10 may consequently produce object code 40 according to the sparse stacking of libraries and target device 20 may store the produced object code 40 on program memory storage 220 as explained above.
  • compiled object code normally includes an indication of the storage size of each object code segment.
  • object files or assembly files normally include an indication of the storage size (e.g., size, start and/or end location, and the like) required for each function of the source code.
  • linker 130 may calculate or extract from the object code the storage size required for each object code segment. Linker 130 may attribute to each node of the function call graph a size indicator, representing the storage size of the respective object code segment.
  • linker 130 may create call graph 140 with the code size of each called function, as shown in Fig. 6. Once the code size is calculated for each function, the functions may be grouped into groups whose cumulative size approaches the target size for the block, while keeping the groups as distant from each other in the call graph as possible. In the abovementioned example, each group should total around 60kB (as 60% fullness is targeted), and a total of ~53kB is grouped in Fig. 3B (shown at the uppermost group).
  • such grouping may be achieved by scanning the call tree bottom-up, starting from functions that do not call any other function (e.g., func3121) up the call stack (e.g., up to func3), and constructing a cluster that may fit into the predetermined (or designated) size (e.g., 60kB, as in the aforementioned example). It should be noted that such a method may maximize the chance that, upon updating code with a fix, only a specific block may need to be replaced or updated, thus resulting in memory allocation at the processing chip with an increased likelihood of transferring less data (e.g., reducing redundancy).
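The bottom-up grouping described above can be sketched as follows. This is a simplified greedy illustration under assumed data structures (a dictionary-based call graph and per-function sizes); the patent does not specify this exact algorithm, and in particular the "as distant as possible" criterion is not modeled here:

```python
def cluster(call_graph, sizes, root, budget):
    """Scan a non-recursive call graph bottom-up (post-order) and pack
    functions into groups whose cumulative code size stays <= budget.
    call_graph: {func: [callees]}; sizes: {func: size in kB}."""
    groups, current, current_size = [], [], 0

    def visit(node):
        nonlocal current, current_size
        for child in call_graph.get(node, []):   # leaves are visited first
            visit(child)
        if current_size + sizes[node] > budget:  # close the group when full
            groups.append(current)
            current, current_size = [], 0
        current.append(node)
        current_size += sizes[node]

    visit(root)
    if current:
        groups.append(current)
    return groups

graph = {"func1": ["func11", "func12"], "func11": ["func111"]}
sizes = {"func1": 10, "func11": 50, "func12": 20, "func111": 0.5}
print(cluster(graph, sizes, "func1", 60))
```

Each resulting group can then be sparsely stacked into its own library, so a fix inside one branch of the call graph tends to touch only one storage block.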
  • Fig. 7 is a flow diagram, depicting a method of management of program memory storage, according to some embodiments.
  • the method of management of program memory storage may be performed by computing device 10 (e.g., element 10 of Fig. 1) or by any other computation device that may be associated with a target device (e.g., element 20 of Fig. 2) and/or embedded or included within a network of IoT devices (e.g., an inter-vehicle network 200, as depicted in Fig. 4).
  • computing device 10 may receive (e.g., by a user, via input device 7 of Fig. 1) storage block information of a program memory storage device (e.g., element 220 of Fig. 2).
  • the storage block information may include at least one of: storage block size of storage 220 and block utilization limit of program memory storage 220.
  • computing device 10 may receive (e.g., via input device 7) at least one object file including a plurality of object code segments and a respective plurality of linker placeholders. Additionally, or alternatively, computing device 10 may receive at least one file of instruction code in a high-level computing language (e.g., C, C++ and the like), and process or build the instruction code to obtain the at least one object file.
  • computing device 10 may sparsely stack the object code segments to produce two or more libraries according to the storage block information.
  • computing device 10 may accumulate object code segments to produce two or more libraries 415 (e.g., elements 415A and 415B of Fig. 3) according to the storage block information, and allocate sparse (e.g., non-contiguous) memory space for the stacked libraries according to the predefined block utilization limit, as depicted in Fig. 5B.
  • computing device 10 may replace the plurality of linker placeholders with actual addresses of sections of program memory according to the stacking of object code segments.
  • computing device 10 may replace at least one linker placeholder in a storage address field (e.g., elements 412A, 412B, 412C and 412D of Fig. 3) of an object file 41 (e.g., 41A, 41B, 41C), with a pointer or reference to a memory address of storage 220.
  • computing device 10 may store the plurality of object code segments on the storage device according to the actual addresses. For example, computing device 10 may transmit (e.g., over a wired or wireless network) object code (e.g., element 40 of Fig. 2 and Fig. 3) to at least one target device 20.
  • a processor or controller (e.g., element 210) of the at least one target device 20 may receive the transmitted object code 40 and may store the content of object code 40 on storage 220 according to the address pointers in the storage address fields 412 therein.
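The placeholder-replacement step of the flow above can be illustrated as follows (a sketch with assumed field names taken from Fig. 3; real linkers operate on relocation records rather than a flat dictionary):

```python
PLACEHOLDER = 0xFFFF  # arbitrary initialization value used before linking

def resolve(address_fields, final_addresses):
    """Replace every placeholder in the storage address fields with the
    actual storage address fixed by the sparse stacking; fields that
    already hold a real address are left untouched."""
    return {field: (final_addresses[field] if value == PLACEHOLDER else value)
            for field, value in address_fields.items()}

fields = {"412A": 0xFFFF, "412B": 0x0123}  # 412B already holds a real address
print(resolve(fields, {"412A": 0x2000, "412B": 0x0}))
```

After resolution, the target device can store each segment at the address its field points to, without any further relocation on the device side.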
  • Fig. 8 is a flow diagram, depicting a method of management of program memory storage on a storage device, according to some embodiments.
  • computing device 10 may receive (e.g., by a user, via input device 7 of Fig. 1) storage block information of a program memory storage device (e.g., element 220 of Fig. 2).
  • the storage block information may include at least one of: storage block size of storage 220 and block utilization limit of program memory storage 220.
  • computing device 10 may receive at least one instruction code file that may be formatted in a high-level programming language such as C or C++.
  • Computing device 10 may analyze at least one instruction code file, to produce a function call graph, as depicted in Fig. 6.
  • computing device 10 may compile the at least one instruction code file, as known in the art, to produce at least one object file comprising a plurality of object code segments and a respective plurality of linker placeholders, as depicted in Fig. 3.
  • computing device 10 may sparsely stack the object code segments to produce two or more libraries according to the storage block information and the function call graph, as explained above in relation to Fig. 6.
  • computing device 10 may replace the plurality of linker placeholders with actual addresses of sections of program memory storage 220 or pointers thereto (e.g., in storage address fields 412 of object code 40, as explained above in relation to Fig. 3) according to the stacking of object code segments.
  • Computing device 10 may transmit the updated object code 40 to at least one target device 20.
  • target device 20 may sparsely store the plurality of object code segments on program memory storage 220 according to the actual addresses in the address fields 412 of object code 40.
  • At least one storage block of program memory storage 220 may be predefined for patching future fixes.
  • embodiments may include aggregating all of the fixes (which are assumed to be an order of magnitude smaller than the relevant code) together in a dedicated storage block, and using an addressing table to access these fixes.
  • the actual address may be received from a different hard-coded location in the patch block, and the addressing table may be at the dedicated block to point at the right function.
  • the memory map may be as shown in table 1 below:
  • block '9' may include 1% + 1% memory allocation, where the address of block '9' may correspond to the patched function.
  • the patched function may be executed so there is no need to delete the original function.
  • all the functions may be collected (e.g., in the preprocessing stage) and a new source file with a table of the functions may be created, so as to replace the source code function calls with references to the new table.
  • the preprocessor may create and initialize a function table.
  • the new file with the function table may then be located in block '9' using available linker programs (or the linker as described above).
  • a dedicated algorithm may indicate differences in the code, where the detected changed function may be copied to the function table file, such that the pointer in the table may be updated.
  • the new code and function table may be burned to block '9'.
  • some chipsets may include hardware that may be used with similar methods, but without any real-time implication (e.g., the hardware breakpoint mechanism).
  • This is a mechanism in which hardware constantly compares the program counter of a chip to a configurable constant address; once that address is reached, instead of fetching the next opcode from that address, execution may jump to the breakpoint or patch address.
  • all the fixes in a patch block may be maintained, and code may be added to the boot sequence that checks whether a patch is present.
  • the algorithm may configure the patch address in this hardware module and then the chip may execute the patched code instead of the original code in run time. Since the code flow of the program is known in advance (e.g., static code), then the next patch that needs to be run is known and the algorithm may configure the hardware accordingly.
  • such an algorithm may achieve zero real time impact while only needing to transfer and burn the patch block.
  • In an application such as an "AutoSAR"-based application, the application layer may include software components, which are the most likely to be updated in general. Thus, in order to maximize the chance of updating only a small portion of code, code compartmentalization may be applied (to allow easier updates of the software components).
  • the software components may be separated from the lower layer as depicted in table 2 herein:
  • an updated ECU may maintain two copies of its software: one that it is running from, and another to update (e.g., for updating while executing), such that normal ECU functionality may be kept while it overwrites the second copy of its software.
  • When applying a delta algorithm (e.g., "bspatch"), instead of keeping the old software and the changes in RAM and then building the new software in RAM (requiring n + m + O(1) memory), the old software that is already available in storage may be utilized. All the changes may still be kept in RAM, but instead of keeping both the old and new software, only the next block to burn may be kept in RAM.
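A sketch of this reduced-memory idea follows. The delta-operation format here ("copy" from the old image, "insert" new bytes) is an assumption for illustration; bspatch's actual patch format differs, but the principle is the same: the old image is read from flash on demand, and only one output block is buffered in RAM at a time.

```python
def apply_delta_blockwise(read_old, delta_ops, block_size):
    """Yield successive new-image blocks of at most block_size bytes.
    read_old(offset, length) reads old data directly from flash;
    delta_ops: list of ('copy', old_offset, length) or ('insert', bytes)."""
    buf = bytearray()
    for op in delta_ops:
        if op[0] == "copy":
            _, off, length = op
            buf += read_old(off, length)      # old data stays in flash
        else:
            buf += op[1]                      # new data from the patch
        while len(buf) >= block_size:         # emit one block at a time
            yield bytes(buf[:block_size])
            del buf[:block_size]
    if buf:
        yield bytes(buf)

old = b"ABCDEFGH" * 4
blocks = list(apply_delta_blockwise(lambda o, n: old[o:o + n],
                                    [("copy", 0, 8), ("insert", b"XY")], 4))
print(blocks)  # [b'ABCD', b'EFGH', b'XY']
```

Each yielded block can be burned to flash before the next one is built, so peak RAM use is on the order of one block plus the patch, rather than the full old and new images.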
  • Some embodiments may include linker-level interference in the binary creation process, where the linker decides where to place the compiled code and its different sections and replaces the addressing placeholders accordingly.
  • this might be realized as a plugin to an existing linker and/or a new linker and/or as a separate linker pass either before or after the normal linking.
  • such an implementation method may include at least one of the following advantages: it is predictable, since for a specific update length (based on the number of blocks planned to be updated) there is no need to "guess" compression performance; it works in tandem with other ("classic") delta technologies for per-block compression; and it integrates easily with existing processes, with no need for a dedicated "back-end" for on-the-fly generation of delta updates.
  • the method embodiments described herein are not constrained to a particular order in time or chronological sequence. Additionally, some of the described method elements may be skipped, or they may be repeated, during a sequence of operations of a method.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Stored Programmes (AREA)

Abstract

The invention relates to a system and method for management of program memory on a storage device. The method may include: receiving storage block information, including at least one of: a storage block size of a storage device and a block utilization limit of the storage device; receiving at least one first object file comprising a plurality of object code segments and a respective plurality of linker placeholders; sparsely stacking the object code segments to produce two or more libraries according to the storage block information; replacing the plurality of linker placeholders with actual addresses of program memory sections according to the stacking of the object code segments; and storing the plurality of object code segments on the storage device according to the actual addresses.
EP18867427.9A 2017-10-17 2018-10-17 Système et procédé de gestion de mémoire de programme sur un dispositif de stockage Ceased EP3698253A4 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201762573178P 2017-10-17 2017-10-17
PCT/IL2018/051113 WO2019077607A1 (fr) 2017-10-17 2018-10-17 Système et procédé de gestion de mémoire de programme sur un dispositif de stockage

Publications (2)

Publication Number Publication Date
EP3698253A1 true EP3698253A1 (fr) 2020-08-26
EP3698253A4 EP3698253A4 (fr) 2021-08-04

Family

ID=66174433

Family Applications (1)

Application Number Title Priority Date Filing Date
EP18867427.9A Ceased EP3698253A4 (fr) 2017-10-17 2018-10-17 Système et procédé de gestion de mémoire de programme sur un dispositif de stockage

Country Status (2)

Country Link
EP (1) EP3698253A4 (fr)
WO (1) WO2019077607A1 (fr)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11256785B2 (en) * 2019-07-09 2022-02-22 Microsoft Technology Licensing, LLC Using secure memory enclaves from the context of process containers
CN112748925A (zh) * 2019-10-30 2021-05-04 北京国双科技有限公司 利用标签解析前端代码的方法、装置和设备
CN112817617A (zh) 2019-11-18 2021-05-18 华为技术有限公司 软件升级方法、装置和系统
CN112925552B (zh) * 2021-02-26 2023-07-28 北京百度网讯科技有限公司 代码处理方法、装置、设备及存储介质

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5727215A (en) * 1995-11-30 1998-03-10 Otis Elevator Company Method for replacing software modules utilizing a replacement address table
US8069192B2 (en) * 2004-03-22 2011-11-29 Microsoft Corporation Computing device with relatively limited storage space and operating / file system thereof
US8195912B2 (en) * 2007-12-06 2012-06-05 Fusion-io, Inc Apparatus, system, and method for efficient mapping of virtual and physical addresses
GB2527060B (en) * 2014-06-10 2021-09-01 Arm Ip Ltd Method and device for updating software executed from non-volatile memory
KR101906074B1 (ko) * 2017-11-15 2018-10-08 재단법인 경북아이티융합 산업기술원 IoT 디바이스 운용 플랫폼 시스템

Also Published As

Publication number Publication date
EP3698253A4 (fr) 2021-08-04
WO2019077607A1 (fr) 2019-04-25

Similar Documents

Publication Publication Date Title
WO2019077607A1 (fr) Système et procédé de gestion de mémoire de programme sur un dispositif de stockage
US8438558B1 (en) System and method of updating programs and data
CN107832062B (zh) 一种程序更新方法及终端设备
CN107506219A (zh) 一种基于Android系统的通用版本升级方法
CN105389191A (zh) 一种基于局域网的软件升级方法、装置和系统
CN110032339B (zh) 数据迁移方法、装置、系统、设备和存储介质
CN111158715B (zh) 灰度发布控制方法及系统
CN104267978A (zh) 一种生成差分包的方法及装置
US11036480B2 (en) General machine learning model, and model file generation and parsing method
CN113504918A (zh) 设备树配置优化方法、装置、计算机设备和存储介质
WO2023221735A1 (fr) Procédé de mise à jour de micrologiciel de dispositif intégré, dispositif intégré et dispositif terminal de développement
CN104866293A (zh) 一种对Android应用程序扩展功能的方法及装置
CN114595058A (zh) 基于gpu资源的模型训练方法和装置、电子设备和存储介质
CN113051250A (zh) 数据库集群的扩容方法和装置、电子设备和存储介质
CN109446754A (zh) 智能合约中算法的保护方法、装置、设备及存储介质
CN108694049B (zh) 一种更新软件的方法和设备
KR102392880B1 (ko) 계층화 문서를 관리하는 방법 및 이를 이용한 장치
CN107621946B (zh) 一种软件开发方法、装置及系统
CN112559020A (zh) 文件升级方法、装置、设备和介质
CN116991758A (zh) 一种空间布局的更新方法、装置、设备及介质
CN115934354A (zh) 在线存储方法和装置
CN112269665B (zh) 内存的处理方法和装置、电子设备和存储介质
CN115004667B (zh) 信息推送方法、装置、电子设备及计算机可读介质
CN106326310B (zh) 一种手机客户端软件的资源加密更新方法
CN110333870B (zh) Simulink模型变量分配的处理方法、装置及设备

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20200518

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: ARGUS CYBER SECURITY LTD

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
A4 Supplementary search report drawn up and despatched

Effective date: 20210706

RIC1 Information provided on ipc code assigned before grant

Ipc: G06F 12/02 20060101AFI20210630BHEP

Ipc: G06F 9/445 20180101ALI20210630BHEP

Ipc: G11C 7/00 20060101ALI20210630BHEP

Ipc: G06F 8/654 20180101ALI20210630BHEP

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20230103

REG Reference to a national code

Ref country code: DE

Ref legal event code: R003

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED

18R Application refused

Effective date: 20230729