CN116382657A - Electronic device, application program running method, device and storage medium - Google Patents

Electronic device, application program running method, device and storage medium

Info

Publication number
CN116382657A
Authority
CN
China
Prior art keywords
application
running
application program
data
processor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310315524.9A
Other languages
Chinese (zh)
Inventor
王子杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Eswin Computing Technology Co Ltd
Original Assignee
Beijing Eswin Computing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Eswin Computing Technology Co Ltd filed Critical Beijing Eswin Computing Technology Co Ltd
Priority to CN202310315524.9A
Publication of CN116382657A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 Arrangements for software engineering
    • G06F 8/30 Creation or generation of source code
    • G06F 8/31 Programming languages or programming paradigms
    • G06F 8/315 Object-oriented languages
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/5038 Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resource being a machine, considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The disclosure provides an electronic device, an application program running method, an application program running device and a storage medium, which can be applied to the technical field of computers. The electronic device includes: a memory configured to store data resources for running the application program; and a processor configured to: creating an object pool and a process pool, wherein the object pool comprises a plurality of application instances, and the process pool comprises a plurality of processes; in response to receiving a request to run an application program, acquiring a target application instance related to the application program from a plurality of application instances, acquiring a target process from a plurality of processes, and acquiring data resources related to the running application program from a memory; and running the application program by compiling the target application instance and the data resource with the target process.

Description

Electronic device, application program running method, device and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an electronic device, an application program running method, an apparatus, and a storage medium.
Background
In the field of lightweight portable devices and wearable devices, development is gradually shifting from compiled C-family languages to script-based programming in JavaScript, Python, and the like, which improves both the programming workflow and development efficiency.
However, the computing power of such lightweight devices is limited. For example, a device driven by JavaScript scripts may start and run JavaScript applications slowly, which degrades the user experience of lightweight portable devices and wearable devices.
Disclosure of Invention
In view of the foregoing, the present disclosure provides an electronic device, an application running method, an apparatus, a medium, and a program product.
According to an aspect of the present disclosure, there is provided an electronic apparatus including: a memory configured to store data resources for running the application program; and a processor configured to: creating an object pool and a process pool, wherein the object pool comprises a plurality of application instances, and the process pool comprises a plurality of processes; in response to receiving a request to run an application program, acquiring a target application instance related to the application program from a plurality of application instances, acquiring a target process from a plurality of processes, and acquiring data resources related to the running application program from a memory; and running the application program by compiling the target application instance and the data resource with the target process.
According to an embodiment of the present disclosure, a processor running an application includes: carrying out hash operation on the target application instance and the data resource by utilizing the target process to obtain a hash value; under the condition that the byte code corresponding to the hash value is hit in the cache is determined, determining an application version number corresponding to the byte code; and executing the byte code under the condition that the application version number corresponding to the byte code is determined to be the same as the application version number of the application program.
According to an embodiment of the present disclosure, the processor running the application further comprises: under the condition that the byte codes are not hit in the cache, compiling the target application instance and the data resource to obtain the byte codes; executing byte codes; and writing the bytecode into the cache.
According to an embodiment of the present disclosure, the processor writing the bytecode into the cache includes: writing the byte code into the cache under the condition that the residual storage capacity of the cache is determined to meet the storage condition; deleting discarded data in the cache under the condition that the residual storage capacity of the cache does not meet the storage condition, wherein the discarded data is data with access probability smaller than preset probability in the cache; and writing the byte code into the cache after the discarded data is deleted.
According to an embodiment of the present disclosure, the processor is further configured to: in response to receiving a request to stop running the application program, writing running data associated with the application program into a delay queue; and setting a delay deletion time for the running data, so as to delete the running data in the delay queue once the delay deletion time has elapsed.
According to an embodiment of the present disclosure, the processor is further configured to: in response to receiving a request to run an application, querying a delay queue if it is determined that the application is not running for the first time; acquiring operation data from the delay queue under the condition that the delay queue is determined to comprise the operation data of the application program; and executing the operational data.
According to an embodiment of the present disclosure, the processor is further configured to: releasing an application instance and a process occupied by an application program corresponding to the operation data under the condition that the operation data in the delay queue is deleted; and sending the application instance to the object pool and sending the process to the process pool.
In another aspect of the disclosure, the disclosure provides an application running method, including: creating an object pool and a process pool, wherein the object pool comprises a plurality of application instances, and the process pool comprises a plurality of processes; in response to receiving a request to run an application program, acquiring a target application instance related to the application program from a plurality of application instances, acquiring a target process from a plurality of processes, and acquiring data resources related to the running application program from a memory; and compiling the target application instance and the data resource by utilizing the target process, and running the application program.
In another aspect of the present disclosure, the present disclosure provides an application running apparatus, including a creation module, an acquisition module, and a running module. The creation module is used for creating an object pool and a process pool, wherein the object pool comprises a plurality of application instances and the process pool comprises a plurality of processes; the acquisition module is used for, in response to receiving a request to run an application program, acquiring a target application instance related to the application program from the plurality of application instances, acquiring a target process from the plurality of processes, and acquiring data resources related to running the application program from a memory; and the running module is used for compiling the target application instance and the data resource by using the target process and running the application program.
In another aspect of the present disclosure, the present disclosure provides an electronic device, including: one or more processors; and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method described above.
In another aspect of the disclosure, the disclosure provides a computer-readable storage medium having stored thereon executable instructions that, when executed by a processor, cause the processor to perform the above-described method.
In another aspect of the disclosure, the disclosure provides a computer program product comprising a computer program which, when executed by a processor, implements the above method.
According to the embodiments of the disclosure, the electronic device creates the object pool and the process pool for running application programs, thereby providing general-purpose resources for the running process. This avoids slow application execution caused by resource allocation at launch time and improves the running speed of the application program.
Drawings
The foregoing and other objects, features and advantages of the disclosure will be more apparent from the following description of embodiments of the disclosure with reference to the accompanying drawings, in which:
fig. 1 schematically illustrates a structural schematic diagram of an electronic device according to an embodiment of the present disclosure;
FIG. 2 schematically illustrates a schematic diagram of application running according to an embodiment of the present disclosure;
FIG. 3 schematically illustrates a schematic diagram of application running according to another embodiment of the present disclosure;
FIG. 4 schematically illustrates a schematic diagram of a processor running bytecode according to an embodiment of the present disclosure;
FIG. 5 schematically illustrates a schematic diagram of a processor processing operational data according to an embodiment of the present disclosure;
FIG. 6 schematically illustrates a flow chart of an application running method according to an embodiment of the disclosure;
FIG. 7 schematically illustrates a block diagram of an application running device according to an embodiment of the present disclosure; and
fig. 8 schematically illustrates a block diagram of an electronic device adapted to implement an application running method according to an embodiment of the disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is only exemplary and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the present disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and/or the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It should be noted that the terms used herein should be construed to have meanings consistent with the context of the present specification and should not be construed in an idealized or overly formal manner.
Fig. 1 schematically illustrates a structural schematic diagram of an electronic device according to an embodiment of the present disclosure.
As shown in fig. 1, the electronic device 100 includes a memory 110 and a processor 120.
In the disclosed embodiment, the memory 110 stores data resources for running applications. For example, the application is a JavaScript application, and the data resources include JavaScript scripts, pictures, audio, and the like.
The processor 120 creates an object pool comprising a plurality of application instances (app instances) and a process pool comprising a plurality of processes. In response to receiving a request to run an application, the processor 120 obtains a target application instance associated with the application from the plurality of application instances, obtains a target process from the plurality of processes, obtains data resources associated with running the application from the memory 110, and compiles the target application instance and the data resources using the target process to run the application.
For example, the object pool may be a JavaScript application object pool, and the application instance may be an application object. The multiple application instances in the object pool may be the same or different. For example, the same application instance belongs to a common resource required to run multiple JavaScript applications, and different application instances belong to a specific resource required to run corresponding JavaScript applications. For example, the object pool includes an application instance a, an application instance B, and a plurality of application instances C. The application instance A belongs to a specific resource required by the operation of the JavaScript application program A, the application instance B belongs to a specific resource required by the operation of the JavaScript application program B, and the application instance C belongs to a general resource required by the operation of both the JavaScript application program A and the JavaScript application program B. The same application instance is used for realizing the same running attribute and the same initialization logic of a plurality of JavaScript applications, and different application instances are used for realizing the specific attribute of the corresponding JavaScript applications.
For example, the JavaScript application needs to run in a process environment, and the processor 120 distributes data and resources required for running the JavaScript application into the process, compiles the data and resources, and executes the compiling result. After completing the compilation and execution of the tasks, the processor 120 may recycle the processes that complete the tasks back into the process pool. Inter-process communication (Inter Process Communication, IPC) between multiple processes in a process pool may be implemented by means of signals, semaphores, message queues, pipes, etc.
In the disclosed embodiments, the object pools and process pools may be created prior to application launch and execution. In the event that it is determined that the electronic device 100 is in an operational state, such as after the electronic device 100 is powered on, the processor 120 creates an object pool and a process pool for use in subsequently running the application. And a plurality of application instances are pre-created in the object pool, so that the cost of creating the application instances when the application program is started and operated can be reduced, and the operation efficiency of the program can be improved. In addition, creating multiple identical application instances in the object pool can also avoid resource allocation overhead generated between different JavaScript applications.
For example, a process in a process pool is in an idle (waiting) state when it is not executing a task. The processes in the process pool are in a running (running) state when executing tasks. In response to receiving a request to run an application, the processor 120 determines a process in an idle state from among a plurality of processes in a process pool as a target process, and wakes up the target process to place the target process in a running state. Processor 120 obtains target application instances from the object pool, which may include generic application instances for implementing the same operational attributes and the same initialization logic, as well as specific application instances for implementing specific operational attributes. The processor 120 retrieves data resources associated with running the application, such as JavaScript scripts, pictures, audio, etc. associated with running the JavaScript application from the memory 110.
In the embodiment of the present disclosure, when it is determined that the JavaScript application has finished executing, the processor 120 reclaims the process back into the process pool, so that JavaScript applications can reuse the multiple processes in the pool. This reduces the overhead of repeatedly creating processes and further improves program running efficiency.
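To make the pooling scheme above concrete, the following Python sketch shows pools that are filled once at start-up and then reused across application launches. It is a minimal sketch under assumptions: the names ObjectPool, ProcessPool, acquire, release, and submit, and the queue-based implementation, are illustrative and are not the disclosure's actual design.

```python
# Minimal sketch (assumed names) of pre-created object and process pools.
# Instances and worker processes are created once at device start-up and
# reused across application launches, avoiding launch-time allocation cost.
import multiprocessing as mp
import queue


class ObjectPool:
    """Holds pre-created application instances (generic or app-specific)."""

    def __init__(self, instances):
        self._idle = queue.Queue()
        for inst in instances:
            self._idle.put(inst)

    def acquire(self):
        # Returns None when no idle application instance is available.
        try:
            return self._idle.get_nowait()
        except queue.Empty:
            return None

    def release(self, inst):
        # The instance becomes an idle resource again.
        self._idle.put(inst)


def _worker_loop(tasks):
    while True:
        fn, args = tasks.get()  # idle -> running
        try:
            fn(*args)           # e.g. compile and execute the app's resources
        finally:
            tasks.task_done()   # running -> idle


class ProcessPool:
    """Holds worker processes that sit idle until given a compile-and-run task."""

    def __init__(self, size):
        self._tasks = mp.JoinableQueue()
        self._workers = [
            mp.Process(target=_worker_loop, args=(self._tasks,), daemon=True)
            for _ in range(size)
        ]
        for w in self._workers:
            w.start()  # workers block (idle state) until a task arrives

    def submit(self, fn, *args):
        # Waking an idle worker: it moves from the waiting to the running state.
        self._tasks.put((fn, args))
```

For instance, at power-on the device could create ObjectPool([...]) and ProcessPool(3); each launch request then acquires an instance, submits a compile-and-run task to an idle process, and releases the instance back when the application stops.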
In the disclosed embodiments, for example, the data resources may include JavaScript scripts and JavaScript functions, which the processor 120 compiles into bytecode using the target process. The processor 120 then executes the bytecode, thereby running the application.
Fig. 2 schematically illustrates a schematic diagram of application running according to an embodiment of the present disclosure. An application scenario in which the electronic device runs a JavaScript application is exemplarily described with reference to fig. 1 and 2.
As shown in fig. 2, the application scenario 200 includes a user interface 210 and a processor 220. Processor 220 performs operations similar to those of processor 120 described in the previous embodiments, which, for brevity, are not repeated here.
In the event that a determination is made that the user clicks on the icon of application 1 in user interface 210, processor 220 may receive an instruction to launch application 1. In the case where it is determined that the application 1 is first run, the processor 220 acquires a target application instance from the pre-created object pool 230 (application instance 1, application instance 2, and application instance 3), acquires a target process from the pre-created process pool 240 (process 1, process 2, and process 3), and acquires data resources required to run the application 1 from the database 250 (data resource 1, data resource 2, and data resource 3). For example, database 250 may be located in memory 110 as previously described.
In the embodiment of the present disclosure, since a plurality of processes in the process pool 240 created in advance by the processor 220 are in an idle state before executing a task, the processor 220 wakes up a target process, brings the target process into an operating state, and compiles and executes a target instance and data resources by using the target process.
Fig. 3 schematically illustrates a schematic diagram of application running according to another embodiment of the present disclosure. An application scenario in which the electronic device runs a JavaScript application is exemplarily described with reference to fig. 1, 2, and 3.
As shown in fig. 3, the application scenario 300 includes a processor 320. Processor 320 performs operations similar to those of processor 220 described in the previous embodiments; processor 320 creates an object pool 330 and a process pool 340 similar to object pool 230 and process pool 240, respectively, and database 350 is similar to database 250 described in the previous embodiments. For brevity, these are not repeated here.
In the disclosed embodiment, processor 320 obtains the target application instance (application instance 1) associated with application 1 from object pool 330, obtains the target process (process 1) associated with application 1 from process pool 340, and obtains the data resources (data resource 1) required to run application 1 from database 350. Processor 320 compiles and executes application instance 1 using process 1 to build frames such as icons, pictures, and text boxes in the display page of application 1. The processor 320 compiles and executes the data resource 1 using the process 1, thereby exposing the corresponding data resource in a corresponding data presentation form in the constructed framework.
According to the embodiment of the disclosure, the object pool and the process pool are created in advance, so that universal resources are provided for the running process of the application program, the phenomenon that the running of the application program is slow due to the resource allocation problem when the application program runs can be avoided, and the running rate of the application program is improved.
FIG. 4 schematically illustrates a schematic diagram of a processor running bytecode according to an embodiment of the present disclosure.
As shown in fig. 4, the processor 420 is configured to compile the target application instance and the data resource to obtain bytecodes. The cache 410 is configured to store bytecodes (bytecode 1, bytecode 2, bytecode 3, and bytecode 4), and the cache 410 may be a bytecode cache.
In an embodiment of the present disclosure, the processor 420 running the application may include: carrying out hash operation on the target application instance and the data resource by utilizing the target process to obtain a hash value; in the case where it is determined that the bytecode corresponding to the hash value is hit in the cache 410, an application version number corresponding to the bytecode is determined, and in the case where it is determined that the application version number corresponding to the bytecode is the same as the application version number of the application program, the bytecode is executed.
For example, the data resources may include JavaScript scripts and JavaScript functions. The processor 420 may hash the full path of the JavaScript script to obtain a hash value, e.g., hash(script full path). Alternatively, the processor 420 may hash the full path of the script together with the name of the JavaScript function, e.g., hash(script full path + function name).
The hash value and the bytecode may be stored in the cache 410 in the form of key-value pairs, the hash value being the key and the bytecode being the value. The processor 420 therefore looks up whether bytecode corresponding to the hash value is stored in the cache 410 according to this key-value relationship. When it is determined that bytecode corresponding to the hash value is hit in the cache 410, the bytecode corresponding to the hash value is considered to be stored in the cache 410. Since the version of the JavaScript application is continuously updated while it is in use, the version corresponding to the bytecode stored in the cache 410 may not be consistent with the current version of the JavaScript application. If the cached bytecode belongs to an older version, executing it may cause the JavaScript application to run abnormally, so the processor 420 also needs to determine whether the version corresponding to the bytecode stored in the cache 410 is consistent with the current version of the JavaScript application. When it is determined that the application version number corresponding to the bytecode stored in the cache 410 is the same as the application version number of the JavaScript application, the processor 420 executes the bytecode.
For example, the process of executing the bytecode by the processor 420 includes the processor 420 optimizing the bytecode, converting the optimized bytecode into a machine code (machine code), and executing the machine code by the processor 420, wherein the machine code is a code recognizable by the processor 420. For example, the optimization process may include merging code portions of the bytecode that may be multiplexed, and deleting code portions of the bytecode that are not useful. The optimization process may reduce the amount of data in the bytecode and increase the rate of subsequent execution by the processor 420. As shown in fig. 4, in the case where it is determined that the bytecode corresponding to the hash value is not hit in the cache 410, the cache 410 is considered to not store the bytecode corresponding to the hash value, or in the case where it is determined that the application version number corresponding to the bytecode stored in the cache 410 is inconsistent with the application version number of the JavaScript application, the bytecode stored in the cache 410 is considered to have been unusable due to expiration. At this time, the processor 420 needs to compile the target application instance and the data resource, obtain the bytecode 5, and execute the bytecode 5, thereby implementing the running of the application program. In addition, the processor 420 writes the compiled bytecode 5 into the cache 410, so that when the application program corresponding to the bytecode 5 is subsequently executed again, the processor 420 can directly obtain the bytecode 5 from the cache 410 and execute the bytecode 5, thereby implementing the running again of the application program.
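A minimal sketch of this lookup flow follows, assuming an in-memory dict stands in for the bytecode cache 410; the names make_key, lookup_or_compile, and compile_fn are hypothetical, and SHA-256 is used only for illustration because the disclosure does not specify a hash function.

```python
# Sketch (assumed helper names) of the hash-keyed bytecode cache lookup.
import hashlib

bytecode_cache = {}  # hash key -> (app_version, bytecode); stands in for cache 410


def make_key(script_path, function_name=None):
    # hash(script full path) or hash(script full path + function name)
    text = script_path if function_name is None else script_path + function_name
    return hashlib.sha256(text.encode("utf-8")).hexdigest()


def lookup_or_compile(script_path, app_version, compile_fn, function_name=None):
    key = make_key(script_path, function_name)
    entry = bytecode_cache.get(key)
    if entry is not None:
        cached_version, bytecode = entry
        if cached_version == app_version:      # version check: hit is usable
            return bytecode
    # Miss, or the cached bytecode belongs to an older app version: recompile.
    bytecode = compile_fn(script_path)
    bytecode_cache[key] = (app_version, bytecode)  # write back for later launches
    return bytecode
```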
For example, the processor 420 writing the bytecode into the cache includes: in the case where it is determined that the remaining storage capacity of the cache 410 satisfies the storage condition, the processor 420 writes the bytecode into the cache 410; in the case where it is determined that the remaining storage capacity of the cache 410 does not satisfy the storage condition, the processor 420 deletes the discard data in the cache, which may be data in which the access probability is less than the preset probability. The processor 420 then writes the bytecode to the cache 410 from which discarded data was deleted.
In embodiments of the present disclosure, the cache 410 may include a plurality of cache pages, and the processor 420 may write the compiled bytecode to the free cache pages in the event that it is determined that the cache 410 has free cache pages. The free cache pages are cache pages in the cache 410 that have not yet stored data. In the event that it is determined that there are no free cache pages in the cache 410, it is indicated that the processor 420 needs to clean up the stored data in the cache 410 to obtain free cache pages to store the newly compiled bytecode.
For example, the stored data in cache 410 may be evicted using a least recently used (LRU) algorithm. A register is provided for each cache page of the cache 410 to record how the data in that page is used. Whenever a cache page hits the bytecode corresponding to a hash value, the value recorded in the register corresponding to that page is incremented by 1. For example, when it is determined that the cache 410 has no free cache page, the data in the cache page whose register has the smallest recorded value is determined to be discarded data, and the processor 420 clears that page to obtain a free cache page for the newly compiled bytecode. Alternatively, when the cache 410 has no free cache page, the data in any cache page whose register value over a certain period is smaller than a preset value may be determined to be discarded data, and the processor 420 clears those pages. Thereafter, the bytecode is written into the cache 410 from which the discarded data has been deleted.
For example, if the value recorded in a register increases by fewer than 5 within 100 ms, the data in the corresponding cache page can be considered to have been accessed fewer than 5 times within 100 ms, i.e., its access probability is less than the preset probability. For example, if the average access time of the cache is about 2 ms, the data in the cache can be accessed up to 50 times within 100 ms; if the data in the cache page corresponding to a register is accessed fewer than 5 times within 100 ms, the probability of that data being accessed is considered lower than 10%, the preset probability being 10%. The present disclosure does not limit the preset probability, the preset number of times, or the duration of the time period.
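The per-page register bookkeeping and eviction of low-access-count entries might be approximated as below; CountingCache, its capacity parameter, and the counter-based victim selection are illustrative assumptions rather than the disclosure's exact LRU implementation.

```python
# Sketch (assumed names) of eviction driven by per-page access counters:
# the entry whose counter is lowest is treated as discard data and cleared
# when no free page remains.

class CountingCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}   # key -> bytecode
        self.counters = {}  # key -> hit count (the "register" for the page)

    def get(self, key):
        if key in self.entries:
            self.counters[key] += 1  # register incremented on every hit
            return self.entries[key]
        return None

    def put(self, key, bytecode):
        if key not in self.entries and len(self.entries) >= self.capacity:
            # No free page: discard the entry with the smallest counter value.
            victim = min(self.counters, key=self.counters.get)
            del self.entries[victim]
            del self.counters[victim]
        self.entries[key] = bytecode
        self.counters[key] = self.counters.get(key, 0)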
In the embodiment of the disclosure, a bytecode cache switch may also be provided. Before compiling the target application instance and the data resource, the processor 420 determines whether the bytecode cache switch is on. When it is determined that the switch is on, the processor 420 hashes the target application instance and the data resource and looks up the bytecode corresponding to the hash value in the cache 410. When it is determined that the switch is off, the processor 420 performs the operation of compiling the target application instance and the data resource.
In the embodiment of the disclosure, a bytecode caching technique is adopted: the bytecode corresponding to frequently compiled JavaScript scripts and JavaScript functions is written into the cache, so that frequently accessed bytecode can be obtained directly from the cache. This avoids repeatedly compiling the same JavaScript script into bytecode and saves compilation time. With the compilation time reduced, the running efficiency of the JavaScript application can be improved.
Fig. 5 schematically illustrates a schematic diagram of a processor processing operational data according to an embodiment of the present disclosure.
As shown in fig. 5, the processor 520 is configured to send the running data of an application program that is to be stopped to the delay queue 510, and the delay queue 510 is configured to store the running data to be deleted (running data 1, running data 2, running data 3, and running data 4). The processor 520 also sets a delay deletion time for each piece of running data in the delay queue 510 so as to defer its deletion, and after the delay deletion time the processor 520 deletes, in order, the running data in the delay queue 510 that has reached its deletion time point.
In the disclosed embodiment, in response to receiving a request to stop running an application, the processor 520 writes the running data associated with the application into the delay queue 510 and sets a delay deletion time for the running data. Once the delay deletion time has elapsed, the processor 520 deletes the running data from the delay queue.
For example, the user closes the application in the user interface, and the processor 520 receives an instruction to stop running the application. Processor 520 moves the application's running data into the delay queue 510 and sets a delay deletion time for it, e.g., deleting the running data after a 30-minute delay. At this point, the processor 520 may also hide the graphical user interface (GUI) of the application.
In the embodiment of the present disclosure, before reaching the deletion time point, when the processor 520 receives a request for starting the application program, the delay queue 510 may be queried to obtain the running data of the application program in the delay queue 510, and execute the running data, so as to implement rerun of the application program, and improve the starting rate of the JavaScript application.
For example, in response to receiving a request to run an application, processor 520 queries delay queue 510 in the event that it is determined that the application is not running for the first time. In the event that it is determined that the delay queue 510 includes the running data of the application, the processor 520 obtains the running data from the delay queue 510 and executes the running data.
In the disclosed embodiment, in the event that it is determined that the application is not running for the first time, it may be considered that the running data of the application may be stored in the delay queue 510. In the event that it is determined that the delay queue 510 includes operational data for the application, the processor 520 retrieves the operational data from the delay queue 510 and also resumes the GUI displaying the application. By setting the delay deletion time, for an application program which is frequently used by a user, relevant operation data can be quickly acquired in a delay queue, and the application program can be quickly restarted.
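A sketch of this delay-queue behaviour follows, keeping the 30-minute delay from the example above; the names on_stop, on_relaunch, and purge_expired are hypothetical, and release_resources is a placeholder callback for returning the instance and process to their pools.

```python
# Sketch (assumed names) of the delay queue for running data.
import time

DELAY_SECONDS = 30 * 60  # e.g. delete 30 minutes after the app is stopped

delay_queue = {}  # app_id -> (deletion_deadline, running_data)


def on_stop(app_id, running_data):
    # Park the running data instead of discarding it immediately.
    delay_queue[app_id] = (time.monotonic() + DELAY_SECONDS, running_data)


def on_relaunch(app_id):
    # Quick restart: reuse the parked running data if it is still present.
    entry = delay_queue.pop(app_id, None)
    return entry[1] if entry is not None else None


def purge_expired(release_resources):
    # Called periodically; deletes entries whose delay has elapsed and
    # hands their application instance / process back to the pools.
    now = time.monotonic()
    for app_id, (deadline, running_data) in list(delay_queue.items()):
        if now >= deadline:
            del delay_queue[app_id]
            release_resources(running_data)
```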
In the embodiment of the present disclosure, when the deletion time point is reached, the processor 520 deletes the corresponding running data from the delay queue 510. When it is determined that the running data in the delay queue has been deleted, the processor 520 releases the application instance and the process occupied by the corresponding application program, returns the released application instance to the object pool, and returns the released process to the process pool. The application instance sent back to the object pool becomes a free resource, and the process sent back to the process pool enters an idle state, waiting for subsequent use by other applications.
If the running data in the delay queue has already been deleted when the application is run again, the processor 520 needs to reacquire a target application instance and a target process from the object pool and the process pool, respectively. When it is determined that the object pool includes an idle application instance and the process pool includes an idle process, the processor 520 may obtain the target application instance and the target process and compile the target application instance and the data resources with the target process to run the application.
When it is determined that the object pool does not include an idle application instance or the process pool does not include an idle process, the object pool and the process pool may be considered to temporarily have no available resources, and the processor 520 may use the LRU algorithm to clean up running applications. For example, the least recently used application program is selected from among the running application programs, that application program is stopped, and its running data is cleared, thereby releasing an application instance and a process for use by the new application program.
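Assuming pool objects that expose a release operation (the ObjectPool sketch above does; a process pool would need an analogous operation), this LRU cleanup of running applications might look like the following; the data layout of running_apps is an assumption made purely for illustration.

```python
# Sketch (assumed structure): when neither pool has an idle resource, stop the
# least recently used running application and reclaim its instance and process.

def reclaim_least_recently_used(running_apps, object_pool, process_pool):
    # running_apps: app_id -> {"last_used": float, "instance": ..., "process": ...}
    victim_id = min(running_apps, key=lambda a: running_apps[a]["last_used"])
    victim = running_apps.pop(victim_id)       # stop the LRU application
    object_pool.release(victim["instance"])    # its instance becomes idle again
    process_pool.release(victim["process"])    # its process returns to the pool
    return victim_id
```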
In the embodiment of the present disclosure, through this use-and-recycle scheme for application instances and processes, the processor 520 can reuse application instances and processes, reducing the overhead of repeatedly creating them and further improving the running efficiency of the application.
Fig. 6 schematically illustrates a flowchart of an application running method according to an embodiment of the present disclosure.
As shown in fig. 6, the application running method of this embodiment includes operations S610 to S630.
In operation S610, an object pool including a plurality of application instances and a process pool including a plurality of processes are created.
In response to receiving the request to run the application, a target application instance associated with the application is obtained from the plurality of application instances, the target process is obtained from the plurality of processes, and the data resource associated with the running application is obtained from the memory in operation S620.
In operation S630, the application program is run by compiling the target application instance and the data resource with the target process.
In the embodiment of the present disclosure, operations S610 to S630 are performed by the processor 120 in the previous embodiment, and correspond to the operations performed by the processor 120, which are not described herein for brevity.
Based on the application program running method, the disclosure also provides an application program running device. The device will be described in detail below in connection with fig. 7. Fig. 7 schematically shows a block diagram of an application running apparatus according to an embodiment of the present disclosure.
As shown in fig. 7, the application running apparatus 700 of this embodiment includes a creation module 710, an acquisition module 720, and a running module 730.
The creation module 710 is configured to create an object pool and a process pool, the object pool including a plurality of application instances, the process pool including a plurality of processes. In an embodiment, the creation module 710 may be configured to perform the operation S610 described above, which is not described herein.
The obtaining module 720 is configured to obtain, in response to receiving a request for running an application program, a target application instance related to the application program from a plurality of application instances, obtain a target process from a plurality of processes, and obtain a data resource related to the running application program from a memory. In an embodiment, the obtaining module 720 may be configured to perform the operation S620 described above, which is not described herein.
The running module 730 is configured to compile the target application instance and the data resource by using the target process, and run the application program. In an embodiment, the operation module 730 may be configured to perform the operation S630 described above, which is not described herein.
In accordance with an embodiment of the present disclosure, the creation module 710, the acquisition module 720, and the running module 730 may be located in the processor 120, such as in the embodiments described above. Any of the creation module 710, the acquisition module 720, and the running module 730 may be combined into one module for implementation, or any of the modules may be split into a plurality of modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of other modules and implemented in one module. According to embodiments of the present disclosure, at least one of the creation module 710, the acquisition module 720, and the running module 730 may be implemented at least in part as hardware circuitry, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system-on-chip, a system-on-substrate, a system-in-package, or an Application Specific Integrated Circuit (ASIC), or in hardware or firmware by any other reasonable way of integrating or packaging circuitry, or in any one of, or a suitable combination of, software, hardware, and firmware. Alternatively, at least one of the creation module 710, the acquisition module 720, and the running module 730 may be implemented at least in part as a computer program module that, when executed, performs the corresponding functions.
Fig. 8 schematically illustrates a block diagram of an electronic device adapted to implement an application running method according to an embodiment of the disclosure.
As shown in fig. 8, an electronic device 800 according to an embodiment of the present disclosure includes a processor 801 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 802 or a program loaded from a storage section 808 into a Random Access Memory (RAM) 803. The processor 801 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or an associated chipset and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), or the like. The processor 801 may also include on-board memory for caching purposes. The processor 801 may include a single processing unit or multiple processing units for performing the different actions of the method flows according to embodiments of the disclosure.
In the RAM 803, various programs and data required for the operation of the electronic device 800 are stored. The processor 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. The processor 801 performs various operations of the method flow according to the embodiments of the present disclosure by executing programs in the ROM 802 and/or the RAM 803. Note that the program may be stored in one or more memories other than the ROM 802 and the RAM 803. The processor 801 may also perform various operations of the method flows according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the present disclosure, the electronic device 800 may also include an input/output (I/O) interface 805, the input/output (I/O) interface 805 also being connected to the bus 804. The electronic device 800 may also include one or more of the following components connected to the I/O interface 805: an input portion 806 including a keyboard, mouse, etc.; an output portion 807 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and a speaker; a storage section 808 including a hard disk or the like; and a communication section 809 including a network interface card such as a LAN card, a modem, or the like. The communication section 809 performs communication processing via a network such as the internet. The drive 810 is also connected to the I/O interface 805 as needed. A removable medium 811 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 810 as needed so that a computer program read out therefrom is mounted into the storage section 808 as needed.
The present disclosure also provides a computer-readable storage medium that may be embodied in the apparatus/device/system described in the above embodiments; or may exist alone without being assembled into the apparatus/device/system. The computer-readable storage medium carries one or more programs which, when executed, implement methods in accordance with embodiments of the present disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example, but is not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. For example, according to embodiments of the present disclosure, the computer-readable storage medium may include ROM 802 and/or RAM 803 and/or one or more memories other than ROM 802 and RAM 803 described above.
Embodiments of the present disclosure also include a computer program product comprising a computer program containing program code for performing the methods shown in the flowcharts. When the computer program product runs in a computer system, the program code is used for enabling the computer system to realize the application running method provided by the embodiment of the disclosure.
The above-described functions defined in the system/apparatus of the embodiments of the present disclosure are performed when the computer program is executed by the processor 801. The systems, apparatus, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the disclosure.
In one embodiment, the computer program may be carried on a tangible storage medium such as an optical storage device, a magnetic storage device, or the like. In another embodiment, the computer program may also be transmitted and distributed in the form of a signal over a network medium, downloaded and installed via the communication section 809, and/or installed from the removable medium 811. The computer program may include program code that may be transmitted using any appropriate network medium, including but not limited to wireless, wired, etc., or any suitable combination of the foregoing.
According to embodiments of the present disclosure, program code for carrying out the computer programs provided by embodiments of the present disclosure may be written in any combination of one or more programming languages; in particular, such computer programs may be implemented in high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. Programming languages include, but are not limited to, Java, C++, Python, "C", or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device and partly on a remote computing device, or entirely on the remote computing device or server. In the latter case, the remote computing device may be connected to the user's computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., via the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that the features recited in the various embodiments of the disclosure and/or in the claims may be provided in a variety of combinations and/or combinations, even if such combinations or combinations are not explicitly recited in the disclosure. In particular, the features recited in the various embodiments of the present disclosure and/or the claims may be variously combined and/or combined without departing from the spirit and teachings of the present disclosure. All such combinations and/or combinations fall within the scope of the present disclosure.
The embodiments of the present disclosure are described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described above separately, this does not mean that the measures in the embodiments cannot be used advantageously in combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be made by those skilled in the art without departing from the scope of the disclosure, and such alternatives and modifications are intended to fall within the scope of the disclosure.

Claims (10)

1. An electronic device, comprising:
a memory configured to store data resources for running an application program; and
a processor configured to:
creating an object pool and a process pool, wherein the object pool comprises a plurality of application instances, and the process pool comprises a plurality of processes;
in response to receiving a request to run the application program, acquiring a target application instance related to the application program from the plurality of application instances, acquiring a target process from the plurality of processes, and acquiring data resources related to running the application program from the memory; and
compiling the target application instance and the data resource by utilizing the target process, and running the application program.
2. The electronic device of claim 1, wherein the processor running the application comprises:
carrying out hash operation on the target application instance and the data resource by utilizing the target process to obtain a hash value;
determining an application version number corresponding to the byte code under the condition that the byte code corresponding to the hash value is hit in a cache; and
executing the byte code under the condition that the application version number corresponding to the byte code is identical to the application version number of the application program.
3. The electronic device of claim 2, wherein the processor running the application further comprises:
under the condition that the byte code is not hit in the cache, compiling the target application instance and the data resource to obtain the byte code;
executing the byte code; and
writing the byte code into the cache.
4. The electronic device of claim 3, wherein the processor writing the bytecode to the cache comprises:
writing the byte code into the cache under the condition that the residual storage capacity of the cache is determined to meet the storage condition;
deleting the discarded data in the cache under the condition that the residual storage capacity of the cache does not meet the storage condition, wherein the discarded data is the data with the access probability smaller than the preset probability in the cache; and
writing the byte code into the cache after the discarded data is deleted.
5. The electronic device of claim 1, wherein the processor is further configured to:
responsive to receiving a request to stop running the application, writing running data associated with the application into a delay queue; and
setting a delay deletion time for the running data, so as to delete the running data in the delay queue once the delay deletion time has elapsed.
6. The electronic device of claim 5, wherein the processor is further configured to:
in response to receiving a request to run the application program, querying the delay queue if it is determined that the application program is not being run for the first time;
acquiring the running data from the delay queue under the condition that the delay queue comprises the running data of the application program; and
executing the running data.
7. The electronic device of claim 5, wherein the processor is further configured to:
releasing an application instance and a process occupied by the application program corresponding to the running data under the condition that the running data in the delay queue is deleted; and
sending the application instance to the object pool and sending the process to the process pool.
8. An application running method, comprising:
creating an object pool and a process pool, wherein the object pool comprises a plurality of application instances, and the process pool comprises a plurality of processes;
in response to receiving a request for running an application program, acquiring a target application instance related to the application program from the plurality of application instances, acquiring a target process from the plurality of processes, and acquiring data resources related to running the application program from a memory; and
compiling the target application instance and the data resource by utilizing the target process, and running the application program.
9. An application running apparatus comprising:
a creation module, used for creating an object pool and a process pool, wherein the object pool comprises a plurality of application instances and the process pool comprises a plurality of processes;
an acquisition module, used for, in response to receiving a request to run an application program, acquiring a target application instance related to the application program from the plurality of application instances, acquiring a target process from the plurality of processes, and acquiring data resources related to running the application program from a memory; and
a running module, used for compiling the target application instance and the data resource by utilizing the target process and running the application program.
10. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to perform the method of claim 8.
CN202310315524.9A 2023-03-28 2023-03-28 Electronic device, application program running method, device and storage medium Pending CN116382657A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310315524.9A CN116382657A (en) 2023-03-28 2023-03-28 Electronic device, application program running method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310315524.9A CN116382657A (en) 2023-03-28 2023-03-28 Electronic device, application program running method, device and storage medium

Publications (1)

Publication Number Publication Date
CN116382657A true CN116382657A (en) 2023-07-04

Family

ID=86968721

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310315524.9A Pending CN116382657A (en) 2023-03-28 2023-03-28 Electronic device, application program running method, device and storage medium

Country Status (1)

Country Link
CN (1) CN116382657A (en)

Similar Documents

Publication Publication Date Title
US11556348B2 (en) Bootstrapping profile-guided compilation and verification
US10073770B2 (en) Scheme for determining data object usage in a memory region
US10838857B2 (en) Multi-section garbage collection
US8806469B2 (en) Runtime code replacement
US9355002B2 (en) Capturing trace information using annotated trace output
KR20110052470A (en) Symmetric multi-processor lock tracing
TWI659305B (en) Facility for extending exclusive hold of a cache line in private cache
US11188364B1 (en) Compilation strategy for a sharable application snapshot
US20220038355A1 (en) Intelligent serverless function scaling
US8938608B2 (en) Enabling portions of programs to be executed on system z integrated information processor (zIIP) without requiring programs to be entirely restructured
US11580228B2 (en) Coverage of web application analysis
US11775527B2 (en) Storing derived summaries on persistent memory of a storage device
US9229757B2 (en) Optimizing a file system interface in a virtualized computing environment
CN116382657A (en) Electronic device, application program running method, device and storage medium
CN112379945B (en) Method, apparatus, device and storage medium for running application
US11194724B2 (en) Process data caching through iterative feedback
US20200175163A1 (en) Feedback-directed static analysis
US20230101885A1 (en) Reliable device assignment for virtual machine based containers
Zhou et al. Gru: Exploring computation and data redundancy via partial gpu computing result reuse
CN115904477A (en) Dynamic configuration processing method and device, storage medium and electronic equipment
CN116150127A (en) Data migration method, device, electronic equipment and storage medium
CN117193990A (en) Scheduling management method, device, equipment and storage medium of http interface

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination