CN115993994A - System acceleration method, device, electronic equipment and storage medium - Google Patents

System acceleration method, device, electronic equipment and storage medium

Info

Publication number
CN115993994A
Authority
CN
China
Prior art keywords
data
processing
processor
processed
virtual
Prior art date
Legal status
Pending
Application number
CN202111215490.3A
Other languages
Chinese (zh)
Inventor
丁磊
黄骏
Current Assignee
Human Horizons Shanghai Internet Technology Co Ltd
Original Assignee
Human Horizons Shanghai Internet Technology Co Ltd
Priority date: 2021-10-19
Filing date: 2021-10-19
Publication date: 2023-04-21
Application filed by Human Horizons Shanghai Internet Technology Co Ltd

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application provides a system acceleration method, a device, an electronic device and a storage medium. The method is applied to an electronic device comprising a main processor, a slave processor and a storage unit. The main processor receives a system shutdown request, saves the first system operation data it is running to the storage unit, determines the first memory address of that data in the storage unit, and then shuts down. The slave processor obtains the first memory address and boots according to the first system operation data corresponding to that address. The switch between the high-power-consumption and low-power-consumption processors is therefore completed quickly, improving the switching efficiency of the system.

Description

System acceleration method, device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of electronic devices, and in particular, to a system acceleration method, a system acceleration device, an electronic device, and a storage medium.
Background
Memory is internal storage that exchanges data directly with the processor. It temporarily holds the data and programs the processor runs, and the space is released once that processing finishes. A system is composed of programs and data; during normal operation the memory holds these programs and data, and the processor reads them from memory to keep the system running.
When the system is shut down, the processor is powered off and the data in memory is cleared. To restart, the data and programs must be reloaded from the hard disk into memory, after which the restarted processor reads them from memory again, relaunching the programs and reloading the data before the system is up. With this start-up approach, programs and data have to be reloaded every time, so start-up is slow.
Disclosure of Invention
The embodiments of the present application provide a system acceleration method and device, an electronic device, and a storage medium to solve the problems in the related art. The technical solution is as follows:
In a first aspect, an embodiment of the present application provides a system acceleration method applied to an electronic device including a main processor, a slave processor, and a storage unit. The method includes: the main processor receives a system shutdown request, saves the first system operation data it is running to the storage unit, determines a first memory address of the first system operation data in the storage unit, and shuts down; and the slave processor obtains the first memory address and boots according to the first system operation data corresponding to the first memory address.
In one embodiment, the method further includes: the slave processor saves the second system operation data it is running to the storage unit, determines a second memory address of the second system operation data in the storage unit, and shuts down; and the main processor receives a system start request, obtains the second memory address, and restarts according to the second system operation data corresponding to the second memory address.
In one embodiment, the electronic device further includes at least two hard disks. The main processor maps the physical addresses of the at least two hard disks to corresponding virtual addresses alternately in sequence; the main processor receives a data processing request for data to be processed; and the main processor performs target processing on the data to be processed across the at least two hard disks alternately, according to the data processing request and the mapping relationship between the physical and virtual addresses of the at least two hard disks.
In one embodiment, the main processor performing target processing on the data to be processed across the at least two hard disks alternately, according to the data processing request and the mapping relationship between the physical and virtual addresses of the at least two hard disks, includes: the main processor determines the target processing according to the data processing request; and the main processor performs the target processing on the data to be processed across the at least two hard disks alternately in sequence, according to the mapping relationship between the physical and virtual addresses of the at least two hard disks and the virtual addresses corresponding to the data to be processed.
In one embodiment, the main processor determining the target processing according to the data processing request includes: if the data processing request is a data write request, the main processor determines that the target processing is data write processing; and if the data processing request is a data read request, the main processor determines that the target processing is data read processing.
In one embodiment, the virtual space corresponding to each hard disk includes a virtual cache space and a virtual data space, and the virtual data space includes a system data space and a user data space. The main processor performing the target processing on the data to be processed across the at least two hard disks alternately in sequence, according to the mapping relationship between the physical and virtual addresses of the at least two hard disks and the virtual addresses corresponding to the data to be processed, includes: caching the data to be processed into the virtual cache space; obtaining, according to the data type, first target data of the system data type from the data to be processed stored in the virtual cache space, and performing the target processing on the first target data in the system data space; and obtaining, according to the data type, second target data of the user data type from the data to be processed stored in the virtual cache space, and performing the target processing on the second target data in the user data space.
In a second aspect, an embodiment of the present application provides a system acceleration device applied to an electronic device including a main processor, a slave processor, and a storage unit. The device includes: a shutdown module configured to, when the main processor receives a system switching request, save the first system operation data run by the main processor to the storage unit, determine a first memory address of the first system operation data in the storage unit, and shut down the main processor; and a start module configured to obtain the first memory address and boot the slave processor according to the first system operation data corresponding to the first memory address.
In one embodiment, the shutdown module is further configured to save the second system operation data of the slave processor to the storage unit, determine a second memory address of the second system operation data in the storage unit, and shut down the slave processor; the start module is further configured to, when the main processor receives a system start request, obtain the second memory address and restart the main processor according to the second system operation data corresponding to the second memory address.
In one embodiment, the electronic device further includes at least two hard disks, and the device further includes a mapping module, a receiving module, and a processing module. The mapping module is configured to map the physical addresses of the at least two hard disks to corresponding virtual addresses alternately in sequence; the receiving module is configured to receive a data processing request for data to be processed; and the processing module is configured to perform target processing on the data to be processed across the at least two hard disks alternately in sequence, according to the data processing request and the mapping relationship between the physical and virtual addresses of the at least two hard disks.
In one embodiment, the processing module is specifically configured to: determine the target processing according to the data processing request; and perform the target processing on the data to be processed across the at least two hard disks alternately in sequence, according to the mapping relationship between the physical and virtual addresses of the at least two hard disks and the virtual addresses corresponding to the data to be processed.
In one embodiment, the processing module is specifically configured to: determine that the target processing is data write processing if the data processing request is a data write request; and determine that the target processing is data read processing if the data processing request is a data read request.
In one embodiment, the processing module is specifically configured to, according to the mapping relationship between the physical and virtual addresses of the at least two hard disks and the virtual addresses corresponding to the data to be processed, process the data to be processed across the at least two hard disks alternately in sequence by: caching the data to be processed into the virtual cache space; obtaining, according to the data type, first target data of the system data type from the data to be processed stored in the virtual cache space, and performing the target processing on the first target data in the system data space; and obtaining, according to the data type, second target data of the user data type from the data to be processed stored in the virtual cache space, and performing the target processing on the second target data in the user data space.
In a third aspect, an embodiment of the present application provides a system-accelerated electronic device including a memory and a processor that communicate with each other via an internal connection. The memory is configured to store instructions and the processor is configured to execute the instructions stored in the memory; when the processor executes those instructions, it performs the method of any one of the embodiments of the above aspects.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program that, when run on a computer, performs the method of any one of the above embodiments.
The advantages or beneficial effects of this technical solution include at least the following: when the operating system switches from the high-power-consumption main processor to the low-power-consumption slave processor, the main processor saves the first memory address of the first system operation data in the memory, so the data in memory is not discarded. When a system shutdown request is received and the operating system switches to the slave processor, the first system operation data is loaded directly from memory, so the switch between the high- and low-power-consumption processors is completed quickly and the switching efficiency of the system is improved.
The foregoing summary is provided for the purposes of this specification only and is not intended to be limiting in any way. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features of the present application will become apparent from the drawings and the following detailed description.
Drawings
In the drawings, the same reference numerals refer to the same or similar parts or elements throughout the several views unless otherwise specified. The figures are not necessarily drawn to scale. It is appreciated that these drawings depict only some embodiments according to the disclosure and are not therefore to be considered limiting of its scope.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 2 is a flowchart of a system acceleration method according to an embodiment of the present application;
Fig. 3 is a schematic diagram of the space division of the virtual space of a hard disk according to an embodiment of the present application;
Fig. 4 is a block diagram of a system acceleration device according to an embodiment of the present application;
Fig. 5 is a block diagram of an electronic device for system acceleration according to an embodiment of the present application.
Detailed Description
Hereinafter, only certain exemplary embodiments are briefly described. As will be recognized by those of skill in the pertinent art, the described embodiments may be modified in various different ways without departing from the spirit or scope of the present application. Accordingly, the drawings and description are to be regarded as illustrative in nature and not as restrictive.
Fig. 1 shows a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in Fig. 1, the electronic device includes a main processor, a slave processor, and a storage unit shared between them. The storage unit stores the system operation data of the operating system while the system switches between the main processor and the slave processor; optionally, it is built from a dual-channel 24G memory chip. Optionally, the main processor provides high-speed data services using two hard disks, for example the UFS 3.1 flash device and the nonvolatile memory express (NVMe) device shown in Fig. 1.
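As a minimal, non-authoritative sketch of this architecture, the following C snippet models the platform described above as a plain descriptor; the struct fields, the reading of "24G" as 24 GB, and all names are illustrative assumptions rather than details taken from the patent.

```c
/* Hypothetical descriptor for the architecture of Fig. 1: a high-power
 * main processor and a low-power slave processor sharing one storage
 * unit, with two hard disks attached for data services. All field
 * names and the 24 GB figure are illustrative assumptions. */
#include <stdio.h>

typedef struct {
    const char *name;
    int high_power;              /* 1 = main (high power), 0 = slave */
} processor_t;

typedef struct {
    processor_t main_cpu;
    processor_t slave_cpu;
    unsigned shared_mem_gb;      /* shared storage unit, e.g. dual-channel "24G" */
    const char *disks[2];        /* e.g. UFS 3.1 and NVMe */
} platform_t;

int main(void) {
    platform_t p = {
        .main_cpu = { "main", 1 },
        .slave_cpu = { "slave", 0 },
        .shared_mem_gb = 24,
        .disks = { "UFS 3.1", "NVMe" },
    };
    printf("%s + %s share %u GB; disks: %s, %s\n",
           p.main_cpu.name, p.slave_cpu.name, p.shared_mem_gb,
           p.disks[0], p.disks[1]);
    return 0;
}
```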
Fig. 2 shows a flowchart of a system acceleration method according to an embodiment of the present application, which may be applied to the electronic device shown in Fig. 1. As shown in Fig. 2, the system acceleration method may include:
Step S201: the main processor receives a system shutdown request, saves the first system operation data it is running to the storage unit, determines the first memory address of the first system operation data in the storage unit, and shuts down.
Step S202: the slave processor obtains the first memory address and boots according to the first system operation data corresponding to the first memory address.
In this embodiment, the operating system kernel can run on either processor, and each processor schedules its own processes and threads; the kernel itself is multi-process or multi-threaded, so all of its parts can execute in parallel. Both the main processor and the slave processor can run the system data; the difference is that the main processor's performance, including its memory performance, is superior to the slave processor's, so its operating power consumption is higher.
Therefore, when the operating system switches from the high-power-consumption main processor to the low-power-consumption slave processor, the main processor saves the first memory address of the first system operation data in the memory, so the data in memory is not discarded. When a system shutdown request is received and the operating system switches to the slave processor, the first system operation data is loaded directly from memory, so the switch between the high- and low-power-consumption processors is completed quickly and the switching efficiency of the system is improved.
In a possible implementation, the method further includes step S203 and step S204.
Step S203: the slave processor saves the second system operation data it is running to the storage unit, determines the second memory address of the second system operation data in the storage unit, and shuts down.
Step S204: the main processor receives a system start request, obtains the second memory address, and restarts according to the second system operation data corresponding to the second memory address.
In this way, the slave processor saves the second memory address of the second system operation data in the memory, that is, the storage unit, so the data in memory is not discarded. When the main processor receives a system start request, it obtains the second memory address and starts the system according to the second system operation data corresponding to that address. On restart the system comes up quickly, because the system operation data is loaded directly instead of being reloaded from the hard disk into memory, which improves the start-up efficiency of the system.
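The following is a minimal sketch of the handoff described in steps S201 to S204, under the assumption that both processors can see one shared storage unit that stays powered across the switch. The struct layout, function names, and the fixed offset are hypothetical illustrations, not the patent's implementation.

```c
/* Illustrative sketch of the master/slave handoff described above.
 * The shared "storage unit" is modeled as a plain struct; in a real
 * system it would be a physically shared RAM region kept powered
 * across the switch. All names here are hypothetical. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define RUN_DATA_WORDS 4

/* Shared storage unit: holds the saved system operation data and the
 * memory address (modeled as an offset) at which it was saved. */
typedef struct {
    uint32_t run_data[RUN_DATA_WORDS]; /* saved system operation data   */
    size_t   saved_offset;             /* memory address handed over    */
    int      valid;                    /* set once a save has completed */
} shared_storage_t;

static shared_storage_t storage;       /* survives the processor switch */

/* Main-processor side: on a shutdown request, save the run data,
 * record its address, then power down (power-down not modeled). */
static void master_save_and_shutdown(const uint32_t *run_data) {
    memcpy(storage.run_data, run_data, sizeof storage.run_data);
    storage.saved_offset = 0;          /* data sits at offset 0 in this sketch */
    storage.valid = 1;
    printf("main: run data saved at offset %zu, shutting down\n",
           storage.saved_offset);
}

/* Slave-processor side: read the saved address and boot directly from
 * the data already resident in the shared unit - no reload from disk. */
static void slave_boot_from_saved(uint32_t *restored) {
    if (!storage.valid) { puts("slave: nothing saved, cold boot"); return; }
    memcpy(restored, storage.run_data, sizeof storage.run_data);
    printf("slave: resumed from offset %zu\n", storage.saved_offset);
}

int main(void) {
    uint32_t master_state[RUN_DATA_WORDS] = {1, 2, 3, 4};
    uint32_t slave_state[RUN_DATA_WORDS]  = {0};

    master_save_and_shutdown(master_state); /* analogue of S201 / S203 */
    slave_boot_from_saved(slave_state);     /* analogue of S202 / S204 */
    return 0;
}
```

In a real system the shared region would be a reserved physical RAM range and the save/boot routines would sit in firmware or the kernel's suspend/resume path; the sketch only shows the bookkeeping of the saved address that lets the incoming processor skip reloading from disk.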
In one possible implementation, the main processor receives a data processing request and provides the data processing service using at least two hard disks.
In this implementation, the main processor can use array acceleration to speed up hard disk read/write I/O during system start-up or during normal operation of the system.
Specifically, providing the data processing service with at least two hard disks can be realized as follows: the main processor maps the physical addresses of the at least two hard disks to corresponding virtual addresses alternately in sequence; the main processor receives a data processing request for data to be processed; and the main processor performs target processing on the data to be processed across the at least two hard disks alternately in sequence, according to the data processing request and the mapping relationship between the physical and virtual addresses of the at least two hard disks.
Specifically, the main processor performing target processing on the data to be processed across the at least two hard disks alternately in sequence, according to the data processing request and the mapping relationship between the physical and virtual addresses of the at least two hard disks, includes: the main processor determines the target processing according to the data processing request; and the main processor performs the target processing on the data to be processed across the at least two hard disks alternately in sequence, according to the mapping relationship between the physical and virtual addresses of the at least two hard disks and the virtual addresses corresponding to the data to be processed.
Specifically, the data processing request in this embodiment includes a data write request and a data read request. When the data processing request is a data write request, the target processing is data write processing; when the data processing request is a data read request, the target processing is data read processing.
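As a small illustration of this write/read determination, the sketch below maps a request code to a target operation; the enum names and the request encoding are assumptions made for the example, not values from the patent.

```c
/* Hypothetical mapping from a data processing request to the target
 * processing (data write vs. data read), as described above. */
#include <stdio.h>

typedef enum { REQ_DATA_WRITE, REQ_DATA_READ } request_t;
typedef enum { OP_DATA_WRITE, OP_DATA_READ } target_op_t;

/* Data write request -> data write processing;
 * data read request  -> data read processing. */
static target_op_t determine_target(request_t req) {
    return (req == REQ_DATA_WRITE) ? OP_DATA_WRITE : OP_DATA_READ;
}

int main(void) {
    printf("write request -> %s\n",
           determine_target(REQ_DATA_WRITE) == OP_DATA_WRITE
               ? "data write processing" : "data read processing");
    printf("read request  -> %s\n",
           determine_target(REQ_DATA_READ) == OP_DATA_READ
               ? "data read processing" : "data write processing");
    return 0;
}
```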
It should be noted that in this embodiment the main processor assigns a virtual address to each physical address on the hard disks, establishes the mapping between the actual physical addresses and the virtual addresses, and exposes only the virtual addresses externally.
For example, the UFS 3.1 flash device and the NVMe device that provide data services for the main processor are referred to as hard disk A and hard disk B, respectively.
Optionally, the actual physical addresses of hard disk A are mapped to odd virtual addresses, and the actual physical addresses of hard disk B are mapped to even virtual addresses. When the read/write service is actually provided externally, only the virtual addresses are exposed.
Assume the data a, b, and c need to be accessed. Memory is allocated to the data according to the virtual address values, with odd and even virtual addresses assigned in turn: the virtual address assigned to data a is 1, the virtual address assigned to data b is 2, and the virtual address assigned to data c is 3.
Thus, when actually performing a data write or data read operation, the main processor maps virtual address 1 to hard disk A, virtual address 2 to hard disk B, and virtual address 3 to hard disk A. Data can therefore be written to hard disks A and B at the same time, so the two disks write or read in parallel and data access efficiency is improved.
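The odd/even assignment above amounts to striping consecutive virtual addresses across the two disks. Below is a minimal sketch under that assumption, with the two disks simulated as in-memory arrays; the function names and the 1-based addressing are illustrative, not taken from the patent.

```c
/* Sketch of the interleaved virtual-to-physical mapping described
 * above: odd virtual addresses resolve to hard disk A, even ones to
 * hard disk B, so consecutive accesses hit the two disks alternately.
 * Disks are simulated as in-memory arrays; all names are hypothetical. */
#include <stdio.h>

#define BLOCKS_PER_DISK 8

static char disk_a[BLOCKS_PER_DISK];   /* e.g. the UFS 3.1 device */
static char disk_b[BLOCKS_PER_DISK];   /* e.g. the NVMe device    */

/* Resolve a virtual address (1-based, as in the a/b/c example) to a
 * physical (disk, block) pair. */
static void resolve(unsigned vaddr, char **disk, unsigned *block) {
    *disk  = (vaddr % 2 == 1) ? disk_a : disk_b;  /* odd -> A, even -> B   */
    *block = (vaddr - 1) / 2;                     /* position on that disk */
}

static void write_byte(unsigned vaddr, char value) {
    char *disk; unsigned block;
    resolve(vaddr, &disk, &block);
    disk[block] = value;
}

static char read_byte(unsigned vaddr) {
    char *disk; unsigned block;
    resolve(vaddr, &disk, &block);
    return disk[block];
}

int main(void) {
    /* The "abc" example: a -> vaddr 1 (disk A), b -> 2 (disk B), c -> 3 (disk A). */
    write_byte(1, 'a');
    write_byte(2, 'b');
    write_byte(3, 'c');
    printf("A[0]=%c B[0]=%c A[1]=%c\n", read_byte(1), read_byte(2), read_byte(3));
    return 0;
}
```

Because consecutive addresses resolve to different disks, a run of sequential writes or reads keeps both devices busy at once, which is the source of the access-efficiency gain described above.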
In a possible implementation of this embodiment, the virtual space of each hard disk is divided: the virtual space corresponding to each hard disk includes a virtual cache space and a virtual data space, and the virtual data space includes a system data space and a user data space.
Specifically, performing the target processing on the data to be processed according to the divided virtual space includes: the main processor caches the data to be processed into the virtual cache space; obtains, according to the data type, first target data of the system data type from the data to be processed stored in the virtual cache space, and performs the target processing on the first target data in the system data space; and obtains, according to the data type, second target data of the user data type from the data to be processed stored in the virtual cache space, and performs the target processing on the second target data in the user data space.
Optionally, referring to Fig. 3, in this embodiment the virtual space of the two hard disks is divided, according to the storage stage of the data, into a virtual cache space and a virtual data space, which may also be called the cache area and the data area. The cache area is used to buffer the data to be processed.
For example, if the electronic device loses power unexpectedly and the data in memory has to be cleared, the data in the cache area can still be retained for a period of time; this interval can be set flexibly, and made longer, according to actual needs, and during it the data in the cache area can be written into the data area.
Further, in this embodiment the data area is itself divided, for example into a system data area (disk1) and a user data area (disk2), according to data function or data type.
Partitioning the data according to these divided data areas also protects it: for example, if disk1 develops a fault, only the data stored on disk1 needs to be restored, while the data stored on disk2 is undamaged and can continue to be used.
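A minimal sketch of the staged layout in Fig. 3 follows, assuming each incoming item is tagged as system or user data: writes land in the cache area first and are later flushed by type into the system data area (disk1) or the user data area (disk2). The structure names, sizes, and the integer payload are assumptions made for illustration only.

```c
/* Sketch of the cache/data partitioning described above: incoming
 * items are buffered in a virtual cache space and later flushed by
 * type into the system data space ("disk1") or the user data space
 * ("disk2"). All sizes and names are illustrative. */
#include <stdio.h>

typedef enum { SYSTEM_DATA, USER_DATA } data_type_t;

typedef struct { data_type_t type; int value; } item_t;

#define CACHE_SLOTS 8

static item_t cache_space[CACHE_SLOTS];   /* virtual cache space       */
static int    cache_count;
static int    system_space[CACHE_SLOTS];  /* system data space (disk1) */
static int    system_count;
static int    user_space[CACHE_SLOTS];    /* user data space (disk2)   */
static int    user_count;

/* Step 1: cache the data to be processed. */
static void cache_write(data_type_t type, int value) {
    if (cache_count < CACHE_SLOTS)
        cache_space[cache_count++] = (item_t){ type, value };
}

/* Steps 2-3: pull items of each type out of the cache and commit them
 * to the matching data space. On an unexpected power loss this flush
 * could run during the hold-up window mentioned above. */
static void flush_cache(void) {
    for (int i = 0; i < cache_count; ++i) {
        if (cache_space[i].type == SYSTEM_DATA)
            system_space[system_count++] = cache_space[i].value;
        else
            user_space[user_count++] = cache_space[i].value;
    }
    cache_count = 0;
}

int main(void) {
    cache_write(SYSTEM_DATA, 10);
    cache_write(USER_DATA,   20);
    flush_cache();
    printf("disk1 items: %d, disk2 items: %d\n", system_count, user_count);
    return 0;
}
```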
Fig. 4 shows a block diagram of a system acceleration device according to an embodiment of the present application. As shown in Fig. 4, the device, which is applied to an electronic device including a main processor, a slave processor, and a storage unit, may include a shutdown module 401 and a start module 402, wherein:
the shutdown module 401 is configured to, when the main processor receives a system switching request, save the first system operation data run by the main processor to the storage unit, determine the first memory address of the first system operation data in the storage unit, and shut down the main processor; and the start module 402 is configured to obtain the first memory address and boot the slave processor according to the first system operation data corresponding to the first memory address.
In one embodiment, the shutdown module 401 is further configured to save the second system operation data run by the slave processor to the storage unit, determine the second memory address of the second system operation data in the storage unit, and shut down the slave processor; the start module 402 is further configured to, when the main processor receives a system start request, obtain the second memory address and restart the main processor according to the second system operation data corresponding to the second memory address.
In one embodiment, the electronic device further includes at least two hard disks, and the device further includes a mapping module (not shown), a receiving module (not shown), and a processing module (not shown). The mapping module is configured to map the physical addresses of the at least two hard disks to corresponding virtual addresses alternately in sequence; the receiving module is configured to receive a data processing request for data to be processed; and the processing module is configured to perform target processing on the data to be processed across the at least two hard disks alternately in sequence, according to the data processing request and the mapping relationship between the physical and virtual addresses of the at least two hard disks.
In one embodiment, the processing module is specifically configured to: determine the target processing according to the data processing request; and perform the target processing on the data to be processed across the at least two hard disks alternately in sequence, according to the mapping relationship between the physical and virtual addresses of the at least two hard disks and the virtual addresses corresponding to the data to be processed.
In one embodiment, the processing module is specifically configured to: determine that the target processing is data write processing if the data processing request is a data write request; and determine that the target processing is data read processing if the data processing request is a data read request.
In one embodiment, the processing module is specifically configured to, according to the mapping relationship between the physical and virtual addresses of the at least two hard disks and the virtual addresses corresponding to the data to be processed, process the data to be processed across the at least two hard disks alternately in sequence by: caching the data to be processed into the virtual cache space; obtaining, according to the data type, first target data of the system data type from the data to be processed stored in the virtual cache space, and performing the target processing on the first target data in the system data space; and obtaining, according to the data type, second target data of the user data type from the data to be processed stored in the virtual cache space, and performing the target processing on the second target data in the user data space.
For the functions of the modules in the devices of the embodiments of the present application, refer to the corresponding descriptions of the methods above; they are not repeated here.
Fig. 5 shows a block diagram of an electronic device for system acceleration according to an embodiment of the present application. As shown in Fig. 5, the electronic device includes a memory 510 and a processor 520; the memory 510 stores a computer program executable on the processor 520, and the processor 520 implements the system acceleration method of the above embodiments when it executes the program. There may be one or more memories 510 and one or more processors 520.
The electronic device further includes a communication interface 530 for communicating with external devices and exchanging data.
If the memory 510, the processor 520, and the communication interface 530 are implemented independently, they may be connected to one another and communicate through a bus. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like, and may be classified as an address bus, a data bus, a control bus, and so on. For ease of illustration only one thick line is shown in Fig. 5, but this does not mean there is only one bus or one type of bus.
Alternatively, in a specific implementation, if the memory 510, the processor 520, and the communication interface 530 are integrated on a chip, the memory 510, the processor 520, and the communication interface 530 may communicate with each other through internal interfaces.
An embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method provided in the embodiments of the present application.
An embodiment of the present application further provides a chip including a processor, configured to call from a memory, and run, the instructions stored in that memory, so that a communication device equipped with the chip performs the method provided in the embodiments of the present application.
An embodiment of the present application further provides a chip including an input interface, an output interface, a processor, and a memory connected through an internal path; the processor is configured to execute code in the memory, and when the code is executed the processor performs the method provided in the embodiments of the present application.
It should be appreciated that the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor or any conventional processor. The processor may also be a processor supporting the Advanced RISC Machines (ARM) architecture.
Further, optionally, the memory may include read-only memory and random access memory, and may also include nonvolatile random access memory. The memory may be volatile or nonvolatile, or may include both volatile and nonvolatile memory. Nonvolatile memory may include read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM), which acts as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DR RAM).
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions; when these computer program instructions are loaded and executed on a computer, the processes or functions according to the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. Computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another.
In the description of this specification, reference to the terms "one embodiment," "some embodiments," "an example," "a specific example," "some examples," or the like means that a particular feature, structure, material, or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present application. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine the different embodiments or examples described in this specification, and the features of those different embodiments or examples, provided they do not contradict one another.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present application also includes implementations in which functions are performed out of the order shown or discussed, including substantially concurrently or in the reverse order, depending on the functions involved.
The logic and/or steps represented in the flowcharts or otherwise described herein, for example an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch instructions from the medium and execute them.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. All or part of the steps of the methods in the embodiments above may be performed by a program instructing the relevant hardware; when executed, the program carries out one of, or a combination of, the steps of the method embodiments.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules described above, if implemented in the form of software functional modules and sold or used as a stand-alone product, may also be stored in a computer-readable storage medium. The storage medium may be a read-only memory, a magnetic or optical disk, or the like.
The foregoing describes only specific embodiments of the present application, but the scope of the present application is not limited thereto; any changes or substitutions that a person skilled in the art could readily conceive of within the technical scope of the present application shall fall within that scope. The protection scope of the present application shall therefore be subject to the protection scope of the claims.

Claims (14)

1. A system acceleration method applied to an electronic device comprising a main processor, a slave processor and a storage unit, the method comprising:
the main processor receives a system shutdown request, saves the first system operation data it is running to the storage unit, determines a first memory address of the first system operation data in the storage unit, and shuts down; and
the slave processor obtains the first memory address and boots according to the first system operation data corresponding to the first memory address.
2. The method according to claim 1, further comprising:
the slave processor saves the second system operation data it is running to the storage unit, determines a second memory address of the second system operation data in the storage unit, and shuts down; and
the main processor receives a system start request, obtains the second memory address, and restarts according to the second system operation data corresponding to the second memory address.
3. The method according to claim 1, wherein the electronic device further comprises at least two hard disks, and wherein:
the main processor maps the physical addresses of the at least two hard disks to corresponding virtual addresses alternately in sequence;
the main processor receives a data processing request for data to be processed; and
the main processor performs target processing on the data to be processed across the at least two hard disks alternately, according to the data processing request and the mapping relationship between the physical and virtual addresses of the at least two hard disks.
4. The method according to claim 3, wherein the main processor performing target processing on the data to be processed across the at least two hard disks alternately, according to the data processing request and the mapping relationship between the physical and virtual addresses of the at least two hard disks, comprises:
the main processor determines the target processing according to the data processing request; and
the main processor performs the target processing on the data to be processed across the at least two hard disks alternately in sequence, according to the mapping relationship between the physical and virtual addresses of the at least two hard disks and the virtual addresses corresponding to the data to be processed.
5. The method according to claim 4, wherein the main processor determining the target processing according to the data processing request comprises:
if the data processing request is a data write request, the main processor determines that the target processing is data write processing; and
if the data processing request is a data read request, the main processor determines that the target processing is data read processing.
6. The method according to any one of claims 3 to 5, wherein the virtual space corresponding to each hard disk comprises a virtual cache space and a virtual data space, the virtual data space comprises a system data space and a user data space, and the main processor performing the target processing on the data to be processed across the at least two hard disks alternately in sequence, according to the mapping relationship between the physical and virtual addresses of the at least two hard disks and the virtual addresses corresponding to the data to be processed, comprises:
the main processor, according to the mapping relationship between the physical and virtual addresses of the at least two hard disks and the virtual addresses corresponding to the data to be processed, processes the data to be processed across the at least two hard disks alternately in sequence by:
caching the data to be processed into the virtual cache space;
obtaining, according to the data type, first target data of the system data type from the data to be processed stored in the virtual cache space, and performing the target processing on the first target data in the system data space; and
obtaining, according to the data type, second target data of the user data type from the data to be processed stored in the virtual cache space, and performing the target processing on the second target data in the user data space.
7. A system acceleration device applied to an electronic device comprising a main processor, a slave processor, and a storage unit, comprising:
a shutdown module configured to, when the main processor receives a system switching request, save the first system operation data run by the main processor to the storage unit, determine a first memory address of the first system operation data in the storage unit, and shut down the main processor; and
a start module configured to obtain the first memory address and boot the slave processor according to the first system operation data corresponding to the first memory address.
8. The device according to claim 7, wherein the shutdown module is further configured to save the second system operation data of the slave processor to the storage unit, determine a second memory address of the second system operation data in the storage unit, and shut down the slave processor; and
the start module is further configured to, when the main processor receives a system start request, obtain the second memory address and restart the main processor according to the second system operation data corresponding to the second memory address.
9. The device according to claim 8, wherein the electronic device further comprises at least two hard disks, and the device further comprises a mapping module, a receiving module, and a processing module, wherein:
the mapping module is configured to map the physical addresses of the at least two hard disks to corresponding virtual addresses alternately in sequence;
the receiving module is configured to receive a data processing request for data to be processed; and
the processing module is configured to perform target processing on the data to be processed across the at least two hard disks alternately in sequence, according to the data processing request and the mapping relationship between the physical and virtual addresses of the at least two hard disks.
10. The device according to claim 9, wherein the processing module is specifically configured to:
determine the target processing according to the data processing request; and
perform the target processing on the data to be processed across the at least two hard disks alternately in sequence, according to the mapping relationship between the physical and virtual addresses of the at least two hard disks and the virtual addresses corresponding to the data to be processed.
11. The device according to claim 10, wherein the processing module is specifically configured to:
determine that the target processing is data write processing if the data processing request is a data write request; and
determine that the target processing is data read processing if the data processing request is a data read request.
12. The device according to any one of claims 9 to 11, wherein the processing module is specifically configured to, according to the mapping relationship between the physical and virtual addresses of the at least two hard disks and the virtual addresses corresponding to the data to be processed, process the data to be processed across the at least two hard disks alternately in sequence by:
caching the data to be processed into the virtual cache space;
obtaining, according to the data type, first target data of the system data type from the data to be processed stored in the virtual cache space, and performing the target processing on the first target data in the system data space; and
obtaining, according to the data type, second target data of the user data type from the data to be processed stored in the virtual cache space, and performing the target processing on the second target data in the user data space.
13. A system-accelerated electronic device, comprising: a memory, a main processor, and a slave processor, wherein the memory is configured to store one or more computer instructions that are called by the main processor and the slave processor to perform the method of any one of claims 1 to 6.
14. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1 to 6.

