CN111679813A - Method for information processing, electronic device, and storage medium

Method for information processing, electronic device, and storage medium

Info

Publication number
CN111679813A
Authority
CN
China
Prior art keywords
determined
code
storage mode
storage
cache server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010798545.7A
Other languages
Chinese (zh)
Other versions
CN111679813B (en)
Inventor
孙欣然
倪述荣
王佳斐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Juyin Information Technology Co ltd
Nanjing Yunlian Digital Technology Co ltd
Original Assignee
Shanghai Juyin Information Technology Co ltd
Nanjing Yunlian Digital Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Juyin Information Technology Co ltd, Nanjing Yunlian Digital Technology Co ltd filed Critical Shanghai Juyin Information Technology Co ltd
Priority to CN202010798545.7A priority Critical patent/CN111679813B/en
Publication of CN111679813A publication Critical patent/CN111679813A/en
Application granted granted Critical
Publication of CN111679813B publication Critical patent/CN111679813B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 Arrangements for software engineering
    • G06F 8/20 Software design
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/52 Program synchronisation; Mutual exclusion, e.g. by means of semaphores

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

Embodiments of the present disclosure relate to methods, electronic devices, and computer storage media for information processing, and relate to the field of information processing. According to the method, if it is determined that a service is started, a source code file associated with the service is read from a disk, the source code file being associated with a first programming language; the source code file is parsed to obtain intermediate code; the intermediate code is instantiated to obtain executable code; a process is created for asynchronous tasks; if it is determined that a service request is received, the executable code associated with the service is executed; and if, while the executable code is being executed, it is determined that a first code segment in the executable code is marked with a predetermined mark, the first code segment is executed asynchronously via the created process. In this way, specially marked code segments, such as time-consuming and/or resource-consuming code segments, can be executed by an asynchronous process, enabling a fast response of the service.

Description

Method for information processing, electronic device, and storage medium
Technical Field
Embodiments of the present disclosure relate generally to the field of information processing, and more particularly, to a method, an electronic device, and a computer storage medium for information processing.
Background
With the development of web technologies, web services are becoming more and more popular. A web service implemented in a programming language such as PHP often responds to service requests from terminal devices in a synchronous, blocking manner. When the service contains a time-consuming and/or resource-consuming code segment, the response to the service request is slow, which degrades the user experience.
Disclosure of Invention
A method, an electronic device, and a computer storage medium for information processing are provided, which make it possible to execute specially marked code segments, such as time-consuming and/or resource-consuming code segments, in asynchronous processes, thereby achieving a fast response of the service.
According to a first aspect of the present disclosure, a method for information processing is provided. The method comprises: if it is determined that a service is started, reading a source code file associated with the service from a disk, the source code file being associated with a first programming language; parsing the source code file to obtain intermediate code; instantiating the intermediate code to obtain executable code; creating a process for asynchronous tasks; executing the executable code if it is determined that a service request is received; and if it is determined, while the executable code is being executed, that a first code segment in the executable code is marked with a predetermined mark, asynchronously executing the first code segment via the created process.
According to a second aspect of the present disclosure, an electronic device is provided. The electronic device includes: at least one processor, and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, the instructions being executable by the at least one processor to enable the at least one processor to perform the method according to the first aspect.
In a third aspect of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements a method according to the first aspect of the present disclosure.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. In the drawings, like or similar reference characters designate like or similar elements.
FIG. 1 is a schematic diagram of an information processing environment 100 according to an embodiment of the present disclosure.
Fig. 2 is a schematic diagram of a method 200 for information processing, according to an embodiment of the present disclosure.
FIG. 3 is a schematic diagram of a method 300 for storing result data associated with executable code, in accordance with an embodiment of the present disclosure.
FIG. 4 is a schematic diagram of a method 400 for sending result data associated with executable code to a cache server, according to an embodiment of the disclosure.
Fig. 5 is a block diagram of an electronic device for implementing a method for information processing of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The term "include" and variations thereof as used herein is meant to be inclusive in an open-ended manner, i.e., "including but not limited to". Unless specifically stated otherwise, the term "or" means "and/or". The term "based on" means "based at least in part on". The terms "one example embodiment" and "one embodiment" mean "at least one example embodiment". The term "another embodiment" means "at least one additional embodiment". The terms "first," "second," and the like may refer to different or the same object. Other explicit and implicit definitions are also possible below.
As described above, a web service implemented in a programming language such as PHP often responds to service requests from terminal devices in a synchronous, blocking manner, so when a time-consuming and/or resource-consuming code segment exists in the service, the response to the service request is slow, which degrades the user experience. In addition, increased disk I/O on a web server can overload the processor, reduce the service capability of the server, and even result in denial of service.
To address, at least in part, one or more of the above issues and other potential issues, an example embodiment of the present disclosure proposes a scheme for information processing. In this scheme, if it is determined that a service is started, a source code file associated with the service is read from a disk, the source code file being associated with a first programming language; the source code file is parsed to obtain intermediate code; the intermediate code is instantiated to obtain executable code; a process is created for asynchronous tasks; the executable code is executed if it is determined that a service request is received; and if it is determined, while the executable code is being executed, that a first code segment in the executable code is marked with a predetermined mark, the first code segment is executed asynchronously via the created process.
In this way, specially marked code segments, such as time-consuming and/or resource-consuming code segments, can be executed by asynchronous processes, enabling a fast response of the service.
Hereinafter, specific examples of the present scheme will be described in more detail with reference to the accompanying drawings.
FIG. 1 shows a schematic diagram of an example information processing environment 100 according to an embodiment of the present disclosure. The information processing environment 100 may include a server 110, a terminal device 120, and a cache server 130.
The server 110 includes, for example, but is not limited to, a server computer, a multiprocessor system, a mainframe computer, a distributed computing environment including any of the above systems or devices, and the like. In some embodiments, the server 110 may have one or more processing units, including special-purpose processing units such as graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs), as well as general-purpose processing units such as central processing units (CPUs). The server 110 may have, for example, at least two network ports, one of which may be connected to the external network 140, e.g., to the terminal device 120 via the external network 140, and the other of which may be connected to the internal network 150, e.g., to the cache server 130 via the internal network 150. The server 110 may, for example, measure the network bandwidth associated with each network port, such as the external network bandwidth and the internal network bandwidth.
The terminal device 120 includes, for example, but is not limited to, a smartphone, a personal computer, a desktop computer, a laptop computer, a tablet computer, a personal digital assistant, a wearable device, and the like. The terminal device 120 may run a browser, through which a user accessing the website of the server 110 may trigger the terminal device 120 to send a service request, such as an HTTP request, to the server 110.
The server 110 may implement a web service, for example, in a programming language such as PHP. The server 110 may store on its disk a source code file associated with the service, the source code file being associated with a first programming language, such as the PHP language. The server 110 may generate executable code associated with the service based on the source code file. The server 110 may receive a service request from the terminal device 120, execute the executable code associated with the service, and send the execution result to the terminal device 120.
The server 110 is configured to: if it is determined that the service is started, read a source code file associated with the service from the disk, the source code file being associated with the first programming language; parse the source code file to obtain intermediate code; instantiate the intermediate code to obtain executable code; create a process for asynchronous tasks; execute the executable code if it is determined that a service request is received; and if it is determined, while the executable code is being executed, that a first code segment in the executable code is marked with a predetermined mark, asynchronously execute the first code segment via the created process.
Fig. 2 shows a flow diagram of a method 200 for information processing according to an embodiment of the present disclosure. For example, the method 200 may be performed by the server 110 as shown in FIG. 1. It should be understood that method 200 may also include additional blocks not shown and/or may omit blocks shown, as the scope of the present disclosure is not limited in this respect.
At block 202, the server 110 determines whether the service has been started.
If the server 110 determines at block 202 that the service has been started, a source code file associated with the service is read from disk at block 204, the source code file being associated with the first programming language. The first programming language includes, for example, but is not limited to, the PHP language. In some embodiments, the source code file may include, but is not limited to, a phar file. For example, the phar file may be free of test and debug files.
At block 206, the server 110 parses the source code file to obtain intermediate code. While parsing the source code, if it is determined that a code segment in the source code file is marked with a predetermined mark, the corresponding code segment in the intermediate code may be marked with the predetermined mark.
At block 208, the server 110 instantiates the intermediate code to obtain executable code. The generated executable code is associated with the service. Taking PHP as an example, the intermediate code may be converted into executable code through PHP's underlying mechanisms. During instantiation, if a code segment in the intermediate code is marked with the predetermined mark, the corresponding code segment in the executable code may be marked with the predetermined mark in memory.
At block 210, the server 110 creates a process for the asynchronous task. The number of processes created may be one or more.
At block 212, the server 110 determines whether a service request is received. The service request may be based on the HTTP protocol, for example. The service request may come from, for example, terminal device 120.
If server 110 determines at block 212 that a service request is received, the executable code is executed at block 214.
At block 216, while executing the executable code, the server 110 determines whether a first code segment in the executable code is marked with the predetermined mark. For example, the memory may store the address of each code segment in the executable code together with a record of whether that code segment is marked with the predetermined mark, and this record may be used to make the determination.
If, at block 216, the server 110 determines while executing the executable code that the first code segment is marked with the predetermined mark, the first code segment is executed asynchronously via the created process at block 218. The first code segment marked with the predetermined mark is, for example, a time-consuming and/or resource-consuming code segment. For example, the process may be caused to execute the first code segment asynchronously by sending an asynchronous message to the created process. Otherwise, the first code segment may continue to be executed synchronously.
In this way, specially marked code segments, such as time-consuming and/or resource-consuming code segments, can be executed by an asynchronous process, enabling a fast response of the service. In addition, the executable code associated with the service is generated and kept resident in memory when the service is started, so that a service request can be answered quickly without reading, parsing, and instantiating the file after the request is received, which improves service efficiency.
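As a purely illustrative sketch of blocks 210 through 218 (written in Python rather than the PHP runtime the disclosure targets, with hypothetical names such as MARKED and heavy_report), the following shows a worker process that is created before any request arrives and that asynchronously executes segments carrying the predetermined mark, while the request handler returns immediately:

```python
import multiprocessing as mp
import time

MARKED = set()       # identifiers of code segments marked with the predetermined mark

def heavy_report(request_id):
    time.sleep(2)    # stands in for a time- and/or resource-consuming code segment
    print(f"report for {request_id} done")

def worker(task_queue):
    # block 210: this process is created once, when the service starts
    for func, args in iter(task_queue.get, None):
        func(*args)  # block 218: asynchronous execution of a marked segment

def handle_request(request_id, task_queue):
    # block 214: the synchronous part of the executable code responds quickly
    if id(heavy_report) in MARKED:
        task_queue.put((heavy_report, (request_id,)))  # asynchronous message to the worker
    else:
        heavy_report(request_id)                       # unmarked: keep executing synchronously
    return {"status": "accepted", "request": request_id}

if __name__ == "__main__":
    MARKED.add(id(heavy_report))                       # the predetermined mark
    queue = mp.Queue()
    proc = mp.Process(target=worker, args=(queue,), daemon=True)
    proc.start()
    print(handle_request("req-1", queue))              # returns before heavy_report finishes
    queue.put(None)                                    # stop the worker
    proc.join()
```

In this sketch the marked segment finishes about two seconds after the handler has already returned, which is the behavior the method aims for.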
In some embodiments, the server 110 may determine the execution duration of a second code segment in the executable code. The execution duration of the second code segment may be determined, for example, from its start and end execution times. If the server 110 determines that the execution duration of the second code segment is greater than or equal to a predetermined duration, the second code segment is marked with the predetermined mark.
In addition to or instead of the execution duration, in some embodiments, the server 110 may determine the amount of resources required by a second code segment in the executable code. The amount of resources required includes, for example, but is not limited to, the required memory size. If the server 110 determines that the amount of resources required by the second code segment is greater than or equal to a predetermined amount of resources, the second code segment is marked with the predetermined mark. After the second code segment has been marked with the predetermined mark, it is likewise executed asynchronously via the created asynchronous process the next time a service request is received. The specific process is similar to that for the first code segment and is not described again here.
In this way, time-consuming and/or resource-consuming code segments can be marked for asynchronous processing, so that service requests are answered quickly and response efficiency is improved.
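A minimal sketch of this marking step, with assumed threshold values and helper names (not taken from the patent): it measures a segment's execution duration and peak Python memory use during a synchronous run and, if either crosses its threshold, marks the segment so that later requests execute it asynchronously.

```python
import time
import tracemalloc

PREDETERMINED_DURATION_S = 0.5                 # assumed predetermined duration
PREDETERMINED_MEMORY_BYTES = 50 * 1024 * 1024  # assumed predetermined amount of resources
MARKED = set()

def run_and_maybe_mark(segment_id, segment, *args):
    tracemalloc.start()
    start = time.perf_counter()                # start execution time
    result = segment(*args)                    # this time the segment still runs synchronously
    duration = time.perf_counter() - start     # end execution time minus start time
    _, peak = tracemalloc.get_traced_memory()  # peak memory required by the segment
    tracemalloc.stop()
    if duration >= PREDETERMINED_DURATION_S or peak >= PREDETERMINED_MEMORY_BYTES:
        MARKED.add(segment_id)                 # next request: execute it asynchronously
    return result
```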
Alternatively or additionally, in some embodiments, after the service is started the server 110 determines operational information generated while the service is running, such as errors, alerts, network conditions, load, concurrency, and execution efficiency, and may notify mobile devices associated with developers and/or operations personnel through a channel such as short message or instant messaging if the operational information is determined to reach a preset threshold. In this way, development and operations can be integrated and operational issues can be handled in time.
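As an illustration only, the threshold check and notification might look like the following sketch; the metric names, threshold values, and the send_message callback are hypothetical placeholders, not part of the disclosure.

```python
# Assumed per-metric thresholds for operational information
THRESHOLDS = {"error_rate": 0.01, "load": 0.9, "concurrency": 1000}

def check_and_notify(metrics, send_message):
    # metrics: operational information collected while the service is running
    for name, limit in THRESHOLDS.items():
        value = metrics.get(name)
        if value is not None and value >= limit:
            # e.g. deliver via SMS or instant messaging to developers/operations staff
            send_message(f"service alert: {name}={value} (threshold {limit})")
```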
Alternatively or additionally, in some embodiments, server 110 may also store result data associated with the executable code. The method for storing result data associated with executable code will be described in detail below in conjunction with FIG. 3.
FIG. 3 illustrates a flow diagram of a method 300 for storing result data associated with executable code in accordance with an embodiment of the present disclosure. For example, the method 300 may be performed by the server 110 as shown in FIG. 1. It should be understood that method 300 may also include additional blocks not shown and/or may omit blocks shown, as the scope of the disclosure is not limited in this respect.
At block 302, the server 110 determines a ratio between the current bandwidth and the peak bandwidth associated with the first network port. The first network port is connected, for example, to the internal network 150. The peak bandwidth is, for example, the maximum bandwidth observed in the bandwidth history.
At block 304, the server 110 determines whether the ratio is less than or equal to a first threshold. The first threshold value is, for example, 0.8.
If the server 110 determines at block 304 that the ratio is less than or equal to the first threshold, the current storage mode is determined at block 306 to be the first storage mode. In the first storage mode, data to be stored is sent to the cache server 130 connected to the first network port for storage. That is, data storage is achieved through network IO in the first storage mode.
If the server 110 determines at block 304 that the ratio is greater than the first threshold, then at block 308 it is determined whether the ratio is less than or equal to a second threshold. The second threshold value is, for example, 0.9.
If server 110 determines at block 308 that the ratio is less than or equal to the second threshold, then the current storage mode is determined at block 310 to be the second storage mode. In the second storage mode, data to be stored is sent to the cache server 130 for storage with a first probability and stored on the local disk with a first remaining probability. That is, data storage is achieved by mixing network IO and disk IO in the second storage mode. The sum of the first probability and the first remaining probability is 100%. The first probability includes, but is not limited to, 70% for example, and the first remaining probability includes, but is not limited to, 30% for example.
If the server 110 determines at block 308 that the ratio is greater than the second threshold, the current storage mode is determined at block 312 to be the third storage mode. In the third storage mode, data to be stored is sent to the cache server 130 for storage with a second probability and stored on the local disk with a second remaining probability. Similar to the second storage mode, the third storage mode also stores data by mixing network IO and disk IO; the difference is the mixing ratio: the second probability is less than the first probability, and the second remaining probability is greater than the first remaining probability. The sum of the second probability and the second remaining probability is 100%. The second probability includes, for example, but is not limited to, 60%, and the second remaining probability includes, for example, but is not limited to, 40%.
In some embodiments, the first storage mode lasts for a first duration, the second storage mode lasts for a second duration, and the third storage mode lasts for a third duration. Specifically, the first duration may be less than the second duration, and the second duration may be less than the third duration. For example, the first duration is 1 second, the second duration is 5 minutes, and the third duration is 15 minutes. Giving the second and third storage modes longer durations avoids interference from short bursts of abnormal requests and adapts the bandwidth usage with finer granularity without affecting the service, while the short duration of the first storage mode allows service performance to be restored as soon as bandwidth conditions permit.
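The mode selection of blocks 302 to 312 can be summarized by the following sketch, which uses the example thresholds (0.8 and 0.9) and probabilities (70% and 60%) given above; the function and constant names are illustrative rather than taken from the patent.

```python
FIRST_THRESHOLD = 0.8
SECOND_THRESHOLD = 0.9

def select_storage_mode(current_bandwidth, peak_bandwidth):
    # ratio between the current bandwidth and the peak bandwidth of the first network port
    if peak_bandwidth <= 0:
        return ("network_only", 1.0)
    ratio = current_bandwidth / peak_bandwidth
    if ratio <= FIRST_THRESHOLD:
        return ("network_only", 1.0)   # first mode: all data goes to the cache server
    if ratio <= SECOND_THRESHOLD:
        return ("mixed", 0.7)          # second mode: 70% network IO, 30% disk IO
    return ("mixed", 0.6)              # third mode: 60% network IO, 40% disk IO
```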
At block 314, server 110 stores result data associated with the executable code based on the current storage mode.
Specifically, if server 110 determines that the current storage mode is the first storage mode, server 110 sends the result data associated with the executable code to cache server 130 for storage.
If the server 110 determines that the current storage mode is the second storage mode, the server 110 generates a first random number within a predetermined range, e.g., between 0 and 1, and compares the first random number with the first probability. If the first random number is determined to be less than the first probability, e.g., 0.7, the server 110 sends the result data associated with the executable code to the cache server 130 for storage; otherwise, the server 110 stores the result data on the local disk.
If the server 110 determines that the current storage mode is the third storage mode, the server 110 generates a second random number within the predetermined range, e.g., between 0 and 1, and compares the second random number with the second probability. If the second random number is determined to be less than the second probability, e.g., 0.6, the server 110 sends the result data associated with the executable code to the cache server 130 for storage; otherwise, the server 110 stores the result data on the local disk.
Therefore, when the internal network bandwidth is sufficient, data storage can be performed through network IO, avoiding the processor overload and eventual denial of service that storing data through disk IO can cause, which improves service efficiency. When the internal network bandwidth is insufficient, data is stored through a mix of network IO and disk IO, so performance can be improved as much as possible without affecting the service. In addition, when the internal network bandwidth is insufficient, the storage scheme is further divided into two hybrid storage modes based on the bandwidth situation, making data storage more fine-grained.
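Block 314 can then dispatch each piece of result data according to the selected mode, for example as in the following sketch; send_to_cache and write_to_disk are assumed placeholders for the network IO and disk IO paths.

```python
import random

def store_result(result_data, mode, probability, send_to_cache, write_to_disk):
    if mode == "network_only":
        send_to_cache(result_data)           # first storage mode
    elif random.random() < probability:      # e.g. 0.7 in the second mode, 0.6 in the third
        send_to_cache(result_data)           # network IO
    else:
        write_to_disk(result_data)           # disk IO
```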
FIG. 4 shows a flow diagram of a method 400 for sending result data associated with executable code to a cache server, according to an embodiment of the disclosure. For example, the method 400 may be performed by the server 110 as shown in FIG. 1. It should be understood that method 400 may also include additional blocks not shown and/or may omit blocks shown, as the scope of the disclosure is not limited in this respect.
At block 402, the server 110 determines whether the previous connection with the cache server 130 was broken.
If the server 110 determines at block 402 that the previous connection was not broken, the result data associated with the executable code is sent to the cache server 130 for storage via the previous connection at block 404.
If the server 110 determines at block 402 that the previous connection has been broken, a connection is established with the cache server 130 at block 406.
At block 408, the result data associated with the executable code is sent to the cache server 130 for storage via the established connection.
In this way, the result data associated with the executable code can be sent to the cache server for storage by reusing a previous connection that has not been broken, avoiding the long delay of re-establishing a connection every time data is stored and thereby reducing the latency of data storage.
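A possible sketch of this connection reuse, assuming a plain TCP connection to the cache server; the address and the raw payload handling are placeholders, and a real client would speak the cache server's actual protocol.

```python
import socket

class CacheClient:
    def __init__(self, host="cache.internal", port=6379):
        self._addr = (host, port)
        self._sock = None

    def _connect(self):
        self._sock = socket.create_connection(self._addr, timeout=3)

    def store(self, payload: bytes):
        if self._sock is None:
            self._connect()                 # no previous connection yet: establish one
        try:
            self._sock.sendall(payload)     # block 404: reuse the previous, unbroken connection
        except OSError:
            self._connect()                 # block 406: the previous connection was broken
            self._sock.sendall(payload)     # block 408: send via the newly established connection
```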
Fig. 5 illustrates a schematic block diagram of an example device 500 that may be used to implement embodiments of the present disclosure. For example, the server 110 as shown in FIG. 1 may be implemented by the device 500. As shown, device 500 includes a Central Processing Unit (CPU) 501 that may perform various appropriate actions and processes in accordance with computer program instructions stored in a Read Only Memory (ROM) 502 or loaded from a storage unit 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the device 500 can also be stored. The CPU 501, ROM 502, and RAM 503 are connected to each other via a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
A number of components in the device 500 are connected to the I/O interface 505, including: an input unit 506 such as a keyboard, a mouse, a microphone, and the like; an output unit 507 such as various types of displays, speakers, and the like; a storage unit 508, such as a magnetic disk, optical disk, or the like; and a communication unit 509 such as a network card, modem, wireless communication transceiver, etc. The communication unit 509 allows the device 500 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The various processes and methods described above, such as methods 200 to 400, may be performed by the processing unit 501. For example, in some embodiments, methods 200 to 400 may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 500 via the ROM 502 and/or the communication unit 509. When the computer program is loaded into the RAM 503 and executed by the CPU 501, one or more of the acts of methods 200 to 400 described above may be performed.
The present disclosure relates to methods, apparatuses, systems, electronic devices, computer-readable storage media and/or computer program products. The computer program product may include computer-readable program instructions for performing various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk, C++, or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, the electronic circuitry that can execute the computer-readable program instructions implements aspects of the present disclosure by utilizing the state information of the computer-readable program instructions to personalize the electronic circuitry, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA).
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terms used herein were chosen in order to best explain the principles of the embodiments, the practical application, or technical improvements to the techniques in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (8)

1. A method for information processing, comprising:
if it is determined that the service is started:
reading, from a disk, a source code file associated with the service, the source code file associated with a first programming language,
parsing the source code file to obtain an intermediate code,
instantiating the intermediate code to obtain executable code, and
creating a process for an asynchronous task;
executing the executable code if it is determined that a service request is received;
asynchronously executing a first code segment in the executable code via the created process if it is determined that the first code segment is marked with a predetermined mark while the executable code is being executed;
determining a ratio between a current bandwidth and a peak bandwidth associated with the first network port;
if the ratio is determined to be smaller than or equal to a first threshold value, determining a current storage mode as a first storage mode, wherein in the first storage mode, data to be stored is sent to a cache server connected with the first network port for storage;
determining the current storage mode as a second storage mode in which the data is sent to the cache server for storage with a first probability and stored on a local disk with a first remaining probability if it is determined that the ratio is greater than the first threshold and less than or equal to a second threshold;
determining the current storage mode as a third storage mode in which the data is sent to the cache server for storage with a second probability and stored to the local disk with a second remaining probability if it is determined that the ratio is greater than the second threshold; and
storing result data associated with the executable code based on the current storage mode.
2. The method of claim 1, further comprising:
determining at least one of an execution duration and an amount of resources required for a second code segment in the executable code; and
marking the second code segment with the predetermined mark if it is determined that at least one of the following conditions is met:
the execution duration is greater than or equal to a predetermined duration; and
the required amount of resources is greater than or equal to a predetermined amount of resources.
3. The method of claim 1, wherein the first storage mode lasts for a first length of time, the second storage mode lasts for a second length of time, and the third storage mode lasts for a third length of time.
4. The method of claim 3, wherein the first duration is less than the second duration and the second duration is less than the third duration.
5. The method of claim 1, wherein storing the result data comprises:
if the current storage mode is determined to be the first storage mode, sending the result data to the cache server for storage;
if the current storage mode is determined to be the second storage mode, then:
generating a first random number within a predetermined range,
if it is determined that the first random number is less than or equal to the first probability, sending the result data to the cache server for storage, and
if it is determined that the first random number is greater than the first probability, storing the result data on the local disk;
if the current storage mode is determined to be the third storage mode, then:
generating a second random number within the predetermined range,
if it is determined that the second random number is less than or equal to the second probability, sending the result data to the cache server for storage, and
if it is determined that the second random number is greater than the second probability, storing the result data on the local disk.
6. The method of claim 5, wherein sending the result data to the cache server for storage comprises:
determining whether a previous connection with the cache server was broken;
sending the result data to the cache server for storage via the previous connection if it is determined that the previous connection was not broken; and
if it is determined that the previous connection has been broken:
establishing a connection with the cache server, and
sending the result data to the cache server for storage via the established connection.
7. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
8. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-6.
CN202010798545.7A 2020-08-11 2020-08-11 Method for information processing, electronic device, and storage medium Active CN111679813B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010798545.7A CN111679813B (en) 2020-08-11 2020-08-11 Method for information processing, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010798545.7A CN111679813B (en) 2020-08-11 2020-08-11 Method for information processing, electronic device, and storage medium

Publications (2)

Publication Number Publication Date
CN111679813A true CN111679813A (en) 2020-09-18
CN111679813B CN111679813B (en) 2020-11-06

Family

ID=72458198

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010798545.7A Active CN111679813B (en) 2020-08-11 2020-08-11 Method for information processing, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN111679813B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113326039A (en) * 2021-06-21 2021-08-31 深圳市网通兴技术发展有限公司 Asynchronous code generation method and system for medical code flow modeling

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104396215A (en) * 2012-05-01 2015-03-04 思杰系统有限公司 Method and apparatus for bandwidth allocation and estimation
US20180063002A1 (en) * 2016-08-30 2018-03-01 Here Global B.V. Wireless network optimization
CN110209350A (en) * 2019-05-10 2019-09-06 华中科技大学 It is a kind of to mix in storage architecture HPC system using the dynamic dispatching method of I/O request
CN110267350A (en) * 2018-03-12 2019-09-20 中兴通讯股份有限公司 Downlink, ascending transmission method, device and base station, terminal, storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104396215A (en) * 2012-05-01 2015-03-04 思杰系统有限公司 Method and apparatus for bandwidth allocation and estimation
US20180063002A1 (en) * 2016-08-30 2018-03-01 Here Global B.V. Wireless network optimization
CN110267350A (en) * 2018-03-12 2019-09-20 中兴通讯股份有限公司 Downlink, ascending transmission method, device and base station, terminal, storage medium
CN110209350A (en) * 2019-05-10 2019-09-06 华中科技大学 It is a kind of to mix in storage architecture HPC system using the dynamic dispatching method of I/O request

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113326039A (en) * 2021-06-21 2021-08-31 深圳市网通兴技术发展有限公司 Asynchronous code generation method and system for medical code flow modeling
CN113326039B (en) * 2021-06-21 2022-02-18 深圳市网通兴技术发展有限公司 Asynchronous code generation method and system for medical code flow modeling

Also Published As

Publication number Publication date
CN111679813B (en) 2020-11-06

Similar Documents

Publication Publication Date Title
CN112311656B (en) Message aggregation and display method and device, electronic equipment and computer readable medium
CN110888817B (en) Code coverage rate report generation method, device and readable storage medium
CN113572560B (en) Method, electronic device, and storage medium for determining clock synchronization accuracy
CN108733449B (en) Method, apparatus, and computer-readable storage medium for managing virtual machines
US11934287B2 (en) Method, electronic device and computer program product for processing data
CN111679813B (en) Method for information processing, electronic device, and storage medium
CN112084102A (en) Interface pressure testing method and device
CN110389857B (en) Method, apparatus and non-transitory computer storage medium for data backup
CN112748962A (en) Application loading method and device, electronic equipment and computer readable medium
CN111752644A (en) Interface simulation method, device, equipment and storage medium
CN113453371B (en) Method, base station, and computer storage medium for wireless communication
CN111857546A (en) Method, network adapter and computer program product for processing data
CN112948138A (en) Method and device for processing message
US20220066792A1 (en) Methods for Information Processing and In-Vehicle Electronic Device
US11662927B2 (en) Redirecting access requests between access engines of respective disk management devices
CN114115941A (en) Resource sending method, page rendering method, device, electronic equipment and medium
CN108288135B (en) System compatibility method and device, computer readable storage medium and electronic equipment
EP4053696A1 (en) Information processing method, device, and computer storage medium
CN112929453A (en) Method and device for sharing session data
US10372436B2 (en) Systems and methods for maintaining operating consistency for multiple users during firmware updates
CN112152915A (en) Message forwarding network system and message forwarding method
US11153232B2 (en) Method, device and computer program product for backing up data
CN114816978A (en) Function enabling method and device and electronic equipment
WO2020090497A1 (en) Infection probability calculating device, infection probability calculating method, and infection probability calculating program
CN115499402A (en) Instant messaging information processing method, terminal and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant