CN113110939A - Method and device for processing running data, computer equipment and storage medium - Google Patents

Method and device for processing running data, computer equipment and storage medium

Info

Publication number
CN113110939A
CN113110939A (application number CN202110518576.7A)
Authority
CN
China
Prior art keywords
data
target
time
running
timer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110518576.7A
Other languages
Chinese (zh)
Inventor
安泰宏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Cyber Shenzhen Co Ltd
Original Assignee
Tencent Cyber Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Cyber Shenzhen Co Ltd filed Critical Tencent Cyber Shenzhen Co Ltd
Priority to CN202110518576.7A priority Critical patent/CN113110939A/en
Publication of CN113110939A publication Critical patent/CN113110939A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/505Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/546Message passing systems or structures, e.g. queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/4557Distribution of virtual machine instances; Migration and load balancing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/54Indexing scheme relating to G06F9/54
    • G06F2209/547Messaging middleware
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/54Indexing scheme relating to G06F9/54
    • G06F2209/548Queue

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The application discloses a method and device for processing running data, a computer device, and a storage medium, belonging to the field of computer technology. After the running data is acquired, a target device whose load condition satisfies a target condition is selected from the candidate devices that support the running environment of the running data. In other words, a platform supporting a cloud function service is built, and the platform provides computing resources, i.e., target devices, to execute the running data, so that conflicts caused by a mismatched running environment are avoided. By considering load conditions, a target device that tends toward load balancing can be reasonably allocated, and the running data is executed at the target time by that device. Because a load-balanced target device can be selected for each piece of running data, the resource utilization rate can be greatly improved.

Description

Method and device for processing running data, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for processing operation data, a computer device, and a storage medium.
Background
With the development of computer technology, timed tasks have become an indispensable part of application development. A timed task refers to running data that is expected to be executed at a future time, and the running data is usually implemented as code data such as script code or program code.
At present, running data is usually processed with the crontab component that ships with the Linux system. The crontab component can execute running data on a single local machine, but when application development involves the cooperation of multiple hosts, the running data on different hosts is mutually independent. As a result, some hosts are often very busy while others sit idle, i.e., the resource utilization rate is low. A method for processing running data that can improve the resource utilization rate is therefore urgently needed.
Disclosure of Invention
The embodiment of the application provides a method and a device for processing running data, computer equipment and a storage medium, which can improve the resource utilization rate in the process of processing the running data. The technical scheme is as follows:
in one aspect, a method for processing operation data is provided, and the method includes:
acquiring running data to be processed, wherein the running data is code data running at a target moment;
in response to reaching the target time, determining a plurality of candidate devices supporting the operating environment based on the operating environment of the operating data;
determining target equipment from the candidate equipment based on the load conditions of the candidate equipment, wherein the load condition of the target equipment meets a target condition;
executing, by the target device, the operational data.
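The four steps above can be sketched end-to-end as follows. This is a minimal, illustrative Python sketch; all field names (`target_time`, `env`, `envs`, `load`, `execute`) are assumptions for illustration, not identifiers from the disclosure:

```python
import time

def process(run_data, devices, now=time.time):
    # Step 1: the running data to be processed has already been acquired.
    # Step 2: wait until the target time is reached (a real system would
    # use a timer that fires, rather than polling).
    while now() < run_data["target_time"]:
        time.sleep(0.01)
    # Step 2 (cont.): determine the candidate devices that support the
    # running environment of the running data.
    candidates = [d for d in devices if run_data["env"] in d["envs"]]
    # Step 3: determine the target device whose load condition meets the
    # target condition -- here, the lowest load among the candidates.
    target = min(candidates, key=lambda d: d["load"])
    # Step 4: execute the running data through the target device.
    return target["execute"](run_data["code"])
```

Here the least-loaded candidate supporting the `python3` environment would be chosen even if another device has a lower load but lacks the environment, matching the order of the steps above.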
In one possible embodiment, the determining the target device from the plurality of candidate devices based on the load conditions of the plurality of candidate devices includes:
determining at least one first device from the plurality of candidate devices based on a geographic location associated with the operational data, the first device supporting provision of services to terminals within the geographic location;
determining the target device with the load condition meeting the target condition from the at least one first device.
In one aspect, an apparatus for processing operation data is provided, the apparatus including:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring running data to be processed, and the running data is code data running at a target moment;
a first determination module, configured to determine, in response to reaching the target time, a plurality of candidate devices that support the execution environment based on the execution environment of the execution data;
a second determining module, configured to determine a target device from the multiple candidate devices based on load conditions of the multiple candidate devices, where the load condition of the target device meets a target condition;
and the execution module is used for executing the running data through the target equipment.
In one possible implementation, the execution module is configured to:
in response to the target time being reached, adding identification information of a plurality of pieces of data periodically executed at the target time into a message queue, wherein the plurality of pieces of data comprise the running data;
and for the operating data in the message queue, sending the identification information of the operating data to the target equipment, and loading and executing the operating data by the target equipment based on the identification information.
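The enqueue-and-dispatch behavior described above can be sketched with an in-process queue standing in for a distributed message queue such as Kafka or RabbitMQ. The function names are assumptions for illustration:

```python
from collections import deque

# Stand-in for the distributed message queue of identification information.
message_queue = deque()

def on_target_time(due_ids):
    # At the target time, add the identification information of every piece
    # of data due at that time to the message queue.
    message_queue.extend(due_ids)

def dispatch(send_to_target):
    # For each piece of running data in the queue, send its identification
    # information to the target device, which loads and executes the
    # running data based on that identification information.
    while message_queue:
        run_id = message_queue.popleft()
        send_to_target(run_id)
```

Decoupling triggering (`on_target_time`) from delivery (`dispatch`) through the queue is what lets multiple execution hosts pull work independently.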
In one possible embodiment, the apparatus further comprises:
and the first setting module is used for setting the state information of the operating data into a to-be-executed state in response to the identification information of the operating data being added to the message queue, wherein the to-be-executed state is used for representing that the target time is reached but the operating data is not sent to corresponding target equipment.
In one possible embodiment, the apparatus further comprises:
and the second setting module is used for setting the state information of the running data into an executing state in response to the identification information of the running data being sent to the target equipment, wherein the executing state is used for representing that the running data is loaded and executed by the corresponding target equipment.
In one possible embodiment, the apparatus further comprises:
and the adding module is used for responding to the fact that the running data is code data which is executed circularly, and adding the identification information of the running data to the tail of the message queue.
In one possible implementation, the load condition includes at least one of a central processing unit CPU usage, a disk input/output I/O usage, or a memory occupancy, and the second determining module is configured to:
determining, from the plurality of candidate devices, the candidate device with the lowest CPU utilization rate as the target device; or
determining, from the plurality of candidate devices, the candidate device with the lowest disk I/O utilization rate as the target device; or
determining, from the plurality of candidate devices, the candidate device with the lowest memory occupancy rate as the target device.
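The three alternatives above reduce to taking the minimum of one load metric over the candidates. A hedged sketch, in which the device names and metric keys are illustrative:

```python
def pick_target(candidates, metric):
    """Return the candidate device with the lowest value for `metric`
    ("cpu", "disk_io", or "memory" in this sketch)."""
    return min(candidates, key=lambda device: device[metric])

candidates = [
    {"name": "host-a", "cpu": 0.72, "disk_io": 0.30, "memory": 0.55},
    {"name": "host-b", "cpu": 0.18, "disk_io": 0.64, "memory": 0.40},
    {"name": "host-c", "cpu": 0.45, "disk_io": 0.12, "memory": 0.80},
]

print(pick_target(candidates, "cpu")["name"])      # host-b
print(pick_target(candidates, "disk_io")["name"])  # host-c
print(pick_target(candidates, "memory")["name"])   # host-b
```

Note that the chosen target depends on which metric is used, which is why the disclosure treats the three load conditions as alternatives.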
In one possible implementation, the second determining module is configured to:
determining at least one first device from the plurality of candidate devices based on a geographic location associated with the operational data, the first device supporting provision of services to terminals within the geographic location;
determining the target device with the load condition meeting the target condition from the at least one first device.
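The two-stage selection above can be sketched as a geographic filter followed by a load comparison. The field names (`regions`, `load`) are assumptions for illustration, and the target condition is taken to be lowest load:

```python
def select_target(candidates, geo):
    # Stage 1: keep the first devices, i.e. those supporting provision of
    # services to terminals within the associated geographic location.
    first = [d for d in candidates if geo in d["regions"]]
    # Stage 2: among the first devices, pick the one whose load condition
    # meets the target condition (lowest load in this sketch).
    return min(first, key=lambda d: d["load"])

candidates = [
    {"name": "a", "regions": {"eu"}, "load": 0.1},
    {"name": "b", "regions": {"us"}, "load": 0.5},
    {"name": "c", "regions": {"us"}, "load": 0.2},
]
print(select_target(candidates, "us")["name"])  # c
```

Device "a" has the lowest load overall but is excluded in stage 1 because it does not serve the "us" region, illustrating why the geographic filter comes first.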
In one possible embodiment, the apparatus further comprises:
and the association module is used for associating the running data with a timer corresponding to the time granularity based on the time granularity of the target time, and the timer is used for determining whether the target time is reached.
In one possible implementation, the time granularity includes at least one of seconds, minutes, or hours, and the association module is to:
in response to the time granularity of the target time being seconds, associating the running data with a second timer whose minimum timing unit is 1 second; or
in response to the time granularity of the target time being minutes, associating the running data with a minute timer whose minimum timing unit is 1 minute; or
in response to the time granularity of the target time being hours, associating the running data with an hour timer whose minimum timing unit is 1 hour.
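The granularity-to-timer association above amounts to a lookup table. A minimal sketch, in which the timer names and the dictionary-based binding are assumptions for illustration:

```python
# Map each time granularity of the target time to its timer, and each timer
# to its minimum timing unit expressed in seconds.
TIMERS = {"seconds": "second_timer", "minutes": "minute_timer", "hours": "hour_timer"}
MIN_UNIT_SECONDS = {"second_timer": 1, "minute_timer": 60, "hour_timer": 3600}

def associate(run_data_id, granularity):
    """Bind the identification information of the running data to the timer
    corresponding to the time granularity of its target time."""
    timer = TIMERS[granularity]
    return {"id": run_data_id, "timer": timer,
            "min_unit_seconds": MIN_UNIT_SECONDS[timer]}

print(associate("task-42", "minutes"))
# {'id': 'task-42', 'timer': 'minute_timer', 'min_unit_seconds': 60}
```

Binding to the coarsest timer that matches the target time keeps the fine-grained timers from being scanned for tasks that fire only once an hour.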
In one possible embodiment, the association module is configured to:
and distributing identification information for the running data, and binding the identification information with the timer corresponding to the time granularity.
In one possible embodiment, the apparatus further comprises:
and a third setting module, configured to set, in response to that the running data is associated with the timer, state information of the running data to a to-be-triggered state, where the to-be-triggered state is used to represent that the running data is bound to the corresponding timer but has not yet reached the target time.
In one possible embodiment, the apparatus further comprises:
and the fourth setting module is used for setting the state information of the operating data into a to-be-issued state in response to the acquisition of the operating data, wherein the to-be-issued state is used for representing that the operating data is acquired but not bound to a corresponding timer.
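The four pieces of state information described above (to-be-issued, to-be-triggered, to-be-executed, executing) form a simple state machine. A hedged sketch in which the state and event names are assumptions for illustration, not identifiers from the disclosure:

```python
# Each transition: (current state, event) -> next state.
TRANSITIONS = {
    ("to_be_issued", "bind_timer"): "to_be_triggered",      # bound to timer
    ("to_be_triggered", "enqueue"): "to_be_executed",       # ID added to queue
    ("to_be_executed", "send_to_target"): "executing",      # sent to device
}

def advance(state, event):
    """Advance the running data's state; unknown transitions are ignored."""
    return TRANSITIONS.get((state, event), state)

state = "to_be_issued"
for event in ("bind_timer", "enqueue", "send_to_target"):
    state = advance(state, event)
print(state)  # executing
```

Modeling the states explicitly is what allows the monitoring component to report exactly where each piece of running data sits in the pipeline.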
In a possible implementation manner, the obtaining module is further configured to obtain an operation result and an operation log of the operation data from the target device;
the device further comprises: and the sending module is used for responding to the operation result as operation failure and sending alarm information to a terminal associated with the operation data, wherein the alarm information carries the operation result and the operation log.
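The fetch-and-alarm behavior above can be sketched as follows; `fetch_result` and `notify` are stand-in callables, not APIs from the disclosure:

```python
def monitor(run_id, fetch_result, notify):
    """Obtain the running result and running log from the target device;
    if the result is a failure, send alarm information carrying both to
    the terminal associated with the running data."""
    result, log = fetch_result(run_id)
    if result == "failed":
        notify({"run_id": run_id, "result": result, "log": log})
    return result
```

Carrying the log alongside the result in the alarm is what lets the developer locate the BUG without logging in to the execution host.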
In one aspect, a computer device is provided, the computer device comprising one or more processors and one or more memories, wherein at least one computer program is stored in the one or more memories, loaded by the one or more processors and executed to implement the method for processing operation data according to any one of the possible implementations described above.
In one aspect, a storage medium is provided, in which at least one computer program is stored, the at least one computer program being loaded and executed by a processor to implement the processing method of the operation data according to any one of the above possible implementations.
In one aspect, a computer program product or computer program is provided that includes one or more program codes stored in a computer readable storage medium. The one or more processors of the computer device can read the one or more program codes from the computer-readable storage medium, and the one or more processors execute the one or more program codes, so that the computer device can execute the processing method of the operation data of any one of the above-mentioned possible embodiments.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
After the running data is acquired, a target device whose load condition satisfies the target condition is selected from the candidate devices that support the running environment of the running data, which avoids conflicts caused by a mismatched running environment. By considering load conditions, a target device that tends toward load balancing can be reasonably allocated, and the running data is executed at the target time by that device. Because a load-balanced target device can be selected for each piece of running data, the resource utilization rate can be greatly improved.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed for the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an implementation environment of a method for processing operation data according to an embodiment of the present application;
fig. 2 is a flowchart of a method for processing operation data according to an embodiment of the present application;
fig. 3 is an interaction flowchart of a method for processing operation data according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a timer for a time wheel provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of a second timer provided in an embodiment of the present application;
FIG. 6 is a schematic diagram of a minute timer provided in an embodiment of the present application;
FIG. 7 is a schematic diagram of an hour timer provided in an embodiment of the present application;
fig. 8 is a schematic diagram illustrating a principle of adding identification information to a message queue according to an embodiment of the present application;
FIG. 9 is a diagram of dynamic changes in CPU utilization of two candidate devices according to an embodiment of the present disclosure;
fig. 10 is a schematic diagram illustrating a target device loading operation data according to an embodiment of the present application;
fig. 11 is a schematic diagram of a method for processing operation data according to an embodiment of the present application;
FIG. 12 is a schematic interface diagram of a monitoring management statistics interface according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of an apparatus for processing operation data according to an embodiment of the present application;
FIG. 14 is a schematic structural diagram of a computer device according to an embodiment of the present disclosure;
fig. 15 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The terms "first", "second", and the like in this application are used to distinguish items that are identical or similar in function; it should be understood that "first", "second", and "nth" imply no logical or temporal dependency and no limitation on the number or order of execution.
The term "at least one" in this application means one or more, and the meaning of "a plurality" means two or more, for example, a plurality of first locations means two or more first locations.
Before the embodiments of the present application are described, some basic concepts in the field of cloud technology are introduced below.
Cloud Technology: a management technology that unifies hardware, software, network, and other resources in a wide area network or local area network to realize the computation, storage, processing, and sharing of data. It is a general term for the network technologies, information technologies, integration technologies, management platform technologies, application technologies, and the like applied on the basis of the cloud computing business model; these resources can form a pool and be used on demand, which is flexible and convenient. Cloud computing technology will become an important support in the field of cloud technology. The background services of technical network systems, such as video websites, picture websites, and web portals, require a large amount of computing and storage resources. With the development of the internet industry, each article may carry its own identification mark that needs to be transmitted to a background system for logical processing; data at different levels is processed separately, and all kinds of industry data need strong backend system support, which can be realized through cloud computing.
Cloud Computing: in the narrow sense, a delivery and usage mode of IT (Internet Technology) infrastructure, referring to obtaining the required resources through a network in an on-demand, easily extensible manner; in the broad sense, a delivery and usage mode of services, referring to obtaining the required services through a network in an on-demand, easily extensible manner. Such services may be IT and software services, internet-related services, or other services. Cloud computing is a product of the development and fusion of traditional computing and network technologies such as Grid Computing, Distributed Computing, Parallel Computing, Utility Computing, Network Storage, Virtualization, and Load Balancing.
In the cloud computing mode, computing tasks are distributed on a resource pool formed by a large number of computers, so that various application systems can acquire computing power, storage space and information services according to needs. The network that provides the resources is referred to as the "cloud". Resources in the "cloud" appear to the user as being infinitely expandable and available at any time, available on demand, expandable at any time, and paid for on-demand.
As a basic capability provider of cloud computing, a cloud computing resource pool (cloud platform for short, generally referred to as an IaaS, Infrastructure as a Service, platform) is established, and multiple types of virtual resources are deployed in the pool for external clients to use. The cloud computing resource pool mainly includes computing devices (virtualized machines, including operating systems), storage devices, and network devices.
According to logical function division, a Platform as a Service (PaaS) layer can be deployed on the IaaS layer, and a Software as a Service (SaaS) layer on the PaaS layer; the SaaS layer can also be deployed directly on the IaaS layer. PaaS is a platform on which software runs, such as a database or a web container. SaaS is business software of various kinds, such as web portals and bulk SMS senders. Generally speaking, SaaS and PaaS are upper layers relative to IaaS.
With the development of diversification of internet, real-time data stream and connecting equipment and the promotion of demands of search service, social network, mobile commerce, open collaboration and the like, cloud computing is rapidly developed. Different from the prior parallel distributed computing, the generation of cloud computing can promote the revolutionary change of the whole internet mode and the enterprise management mode in concept.
Cloud Function (Function as a Service, FaaS): cloud functions are a new way of providing computing power, namely functions that run in the cloud (on the server side). In terms of physical design, a cloud function can be composed of multiple files and occupies a certain amount of computing resources such as CPU (Central Processing Unit) and memory; each cloud function is completely independent and may be deployed in different regions. A developer does not need to purchase and build a server: the developer only needs to write the function code and deploy it to the cloud to call it from the applet side, and cloud functions can call each other. When using a cloud function, the developer only needs to write the core code in a language supported by the platform and set the conditions under which the code runs, and the code can then run on the cloud infrastructure flexibly, safely, and monitorably. Cloud functions free developers from configuring and managing servers: only the core service code (i.e., the running data) needs to be written and uploaded to obtain the corresponding data result (i.e., the running result). By using cloud functions, developers can avoid all operation and maintenance work, concentrate on developing the core service, go online and iterate quickly, and keep control of the pace of service development.
Microservice: a software development technique, a variant of the Service-Oriented Architecture (SOA) style, that structures an application as a set of loosely coupled services. In a microservice architecture, services are fine-grained and protocols are lightweight. In other words, microservices are a framework and organizational method for developing software in which an application (i.e., software) is composed, within a microservice framework, of small independent services that communicate through well-defined APIs (Application Programming Interfaces), making the application easier to expand and faster to develop, thereby accelerating innovation and shortening the time to market of new functions.
Time wheel timer: a time wheel is a circular queue for storing delayed messages, implemented at the bottom layer with an array, which allows efficient circular traversal. Each element of the circular queue corresponds to a delayed task list; the list is a bidirectional circular linked list, and each item in it represents a delayed task to be executed. A delayed task, that is, a timed task, refers to running data executed at some future time, usually implemented as code data such as script code or program code.
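The time wheel described above can be sketched minimally as follows. For simplicity this sketch uses plain Python lists for the per-slot task lists (the disclosure describes bidirectional circular linked lists) and only supports delays shorter than the wheel; a production wheel would track remaining rounds per task:

```python
class TimeWheel:
    def __init__(self, slots):
        # Circular queue backed by an array; each element holds a list of
        # delayed tasks due when the cursor reaches that slot.
        self.slots = [[] for _ in range(slots)]
        self.cursor = 0

    def add(self, delay_ticks, task):
        # Place the task in the slot `delay_ticks` positions ahead of the
        # cursor, wrapping around the ring (delay_ticks < len(slots) here).
        index = (self.cursor + delay_ticks) % len(self.slots)
        self.slots[index].append(task)

    def tick(self):
        # Advance one slot per timing unit and run every task that is due.
        self.cursor = (self.cursor + 1) % len(self.slots)
        due, self.slots[self.cursor] = self.slots[self.cursor], []
        for task in due:
            task()
```

Because `tick` touches only one slot, firing due tasks is O(1) in the number of slots, which is the efficiency the circular-array layout buys.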
Distributed message queue: a message queue with fault tolerance, such as Kafka (a high-throughput distributed publish-subscribe messaging system) or RabbitMQ (an open-source message queue that provides a method for programs to communicate with each other).
POD: the POD is the smallest management element in a K8s (Kubernetes, an open-source container cluster management system) cluster; that is, the smallest unit of management, creation, and planning in a K8s cluster is not an individual independent container but a POD. K8s runs on a server cluster, and the running data involved in the embodiments of the present application can be processed in a K8s cluster: each script is packaged into a docker image, and after the target device pulls the docker image from the cloud database, the script is run in a POD. A POD is a logical host in the container context and may contain one or more closely coupled applications, which may be on the same physical host or virtual machine.
Timed tasks are inevitably used in application development. A timed task refers to running data that is expected to be executed at a future time, usually implemented as code data such as script code or program code. At present, the main way to implement a timed task is the crontab component that ships with the Linux system. Since the crontab component executes on a single local machine, the running data has no high availability. In addition, when multiple hosts cooperate in application development, the timed tasks on different hosts are mutually independent, so the running data cannot be reused and lacks expansibility. Moreover, in the multi-host case, different running data usually consume different computing resources; for example, some running data is CPU-intensive and some is memory-intensive, so some hosts are often very busy while others sit idle, i.e., the resource utilization rate is low. Finally, when execution of the running data fails, the problem (BUG) is often difficult to locate, making the locating process complex; and if the single machine crashes, the running data cannot be executed normally and the developer cannot be warned in time.
In view of this, the embodiments of the present application provide a method for processing running data that improves the availability and expansibility of the running data, improves processing performance, and raises the resource utilization rate of the whole cloud function cluster. The state information of the running data is synchronized to the developer in time; if execution fails, the developer is alerted promptly, and by sending the running result and the running log, the developer can accurately locate the BUG in the running data, which simplifies the locating process, reduces its difficulty, and makes the application development process simpler, more convenient, and more transparent.
In an exemplary scenario, taking script code as the running data, a distributed function-computing microservice script system is built. By using a cloud function computing platform and microservices, the reusability and expansibility of the script code can be improved along with development efficiency; a distributed message queue and multiple execution PODs ensure the high availability of the script code; a microservice monitoring component alerts the developer in time about the execution state of the script code; and by obtaining the load condition of each machine (i.e., candidate device) and letting each machine preemptively pull the script code to be executed, every machine reaches load balance as far as possible and cluster resources are used reasonably. Finally, a microservice monitoring technology (a metrics class library of monitoring indicators) makes the execution process more transparent.
Fig. 1 is a schematic diagram of an implementation environment of a method for processing operation data according to an embodiment of the present application. Referring to fig. 1, in this implementation environment, a terminal 110 and a server 120 are included, and the terminal 110 and the server 120 are exemplary illustrations of computer devices.
The terminal 110 is configured to provide running data to be processed, where the running data refers to code data run at a target time, and the target time refers to any time after the current time, that is, the target time is a future time. The code data may be script code written in a scripting language, program code written in a high-level programming language, and the like, which is not specifically limited in the embodiments of the present application.
In some embodiments, a user (usually a developer) logs in to an account of the cloud platform on the terminal 110, writes the to-be-processed running data in an application program associated with the cloud platform, and uploads the running data to the cloud platform, so that after receiving the running data, the server 120 triggers, at the scheduled time, a target device deployed on the cloud platform to execute it. For example, the application program may be a browser application: after the developer writes running data (such as script code segments) to be executed periodically in an online IDE (Integrated Development Environment) of the browser application, the developer triggers a publish or upload function option in the online IDE, so that the terminal sends the running data to the server 120. Optionally, besides the browser application, the application may also be another client that supports uploading of running data, for example, an application corresponding to the cloud platform, which is not specifically limited in the embodiments of the present application.
The terminal 110 and the server 120 can be directly or indirectly connected through wired or wireless communication, and the application is not limited thereto.
The server 120 may be configured to provide a processing service of the operation data to each terminal 110, that is, after receiving the operation data issued by each terminal 110, for each operation data, when a target time at which the operation data needs to be executed is reached, the server 120 allocates a target device serving as an execution host to the operation data from a server cluster or a distributed system, and executes the operation data on time through the target device.
Optionally, when allocating the target device, the load condition of each candidate device in the distributed system may be considered comprehensively, and a candidate device with a lighter current load (i.e., fewer occupied computing resources) is selected as the target device, alleviating the low resource utilization and load imbalance that arise when each terminal 110 executes its own running data locally.
Optionally, the server 120 may further collect the running result and running log of each piece of running data through a log platform. For any running data, if its running result is a success, the server 120 sends a confirmation message to the terminal associated with that running data, the confirmation message indicating that execution of the running data is complete; if its running result is a failure, the server 120 sends an alarm message to the associated terminal, the alarm message indicating that execution of the running data has failed.
It is noted that each piece of pending running data, as disclosed herein, may be stored on a blockchain. In other words, the plurality of servers in the server cluster or distributed system that executes the running data can form a blockchain, with each server being a node on the blockchain.
The server 120 may include at least one of a server, a plurality of servers, a cloud computing platform, or a virtualization center. Alternatively, the server 120 may undertake primary computational tasks and the terminal 110 may undertake secondary computational tasks; alternatively, the server 120 undertakes the secondary computing work and the terminal 110 undertakes the primary computing work; alternatively, the terminal 110 and the server 120 perform cooperative computing by using a distributed computing architecture.
Optionally, the server 120 is an independent physical server, or a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud service, cloud database, cloud computing, cloud function, cloud storage, web service, cloud communication, middleware service, domain name service, security service, CDN (Content Delivery Network), big data and artificial intelligence platform, and the like.
Optionally, the terminal 110 is a smartphone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, a smart watch, a vehicle-mounted terminal, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, an e-book reader, and the like, but is not limited thereto.
Those skilled in the art will appreciate that the terminal 110 may refer broadly to one of a plurality of terminals, and the number of terminals may be greater or smaller. For example, there may be only one terminal, or tens or hundreds of terminals, or more. The number of terminals and the type of device are not limited in the embodiments of the present application.
Fig. 2 is a flowchart of a method for processing operation data according to an embodiment of the present application. Referring to fig. 2, the embodiment is applied to a computer device, and the following description takes the computer device as a server as an example, and the embodiment includes the following steps:
201. the server acquires the running data to be processed, wherein the running data is code data running at the target moment.
Optionally, the running data refers to code data running at a target time, and the target time refers to any time after the current time, that is, the target time is a future time. Optionally, the code data may be a script code written in a script language, a program code written in a high-level programming language, or a code implemented in other forms, which is not specifically limited in this embodiment of the present application.
Optionally, the running data is code data executed once, for example, code data executed at 3 pm, or the running data is code data executed in a loop, that is, the running data is executed periodically, for the code data executed in a loop, the number of times of loop execution or the stop time of loop execution may be set, so that the code data is no longer executed after the number of times of loop execution is exceeded or the stop time is reached, for example, the code data executed at 3 pm daily stops after 7 days of loop execution, and this embodiment of the present application does not specifically limit whether the code data is executed once or executed periodically.
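The one-shot versus periodic distinction above can be made concrete as a small data structure; the `Schedule` class, its field names, and the run-count/stop-time checks below are illustrative assumptions, not part of the filing:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class Schedule:
    """Illustrative schedule for running data: one-shot or periodic."""
    first_run: datetime
    period: Optional[timedelta] = None    # None => execute only once
    max_runs: Optional[int] = None        # stop after this many loop executions
    stop_time: Optional[datetime] = None  # or stop once this time is reached

    def next_run(self, runs_done: int, last_run: datetime) -> Optional[datetime]:
        """Return the next execution time, or None when no further run is due."""
        if self.period is None:                      # one-shot data
            return None if runs_done >= 1 else self.first_run
        if self.max_runs is not None and runs_done >= self.max_runs:
            return None                              # loop count exhausted
        candidate = last_run + self.period if runs_done else self.first_run
        if self.stop_time is not None and candidate > self.stop_time:
            return None                              # stop time reached
        return candidate

# Mirrors the "3 pm daily, stops after 7 days of loop execution" example.
s = Schedule(first_run=datetime(2021, 5, 7, 15, 0),
             period=timedelta(days=1), max_runs=7)
```

A one-shot schedule is simply a `Schedule` with no `period`; it yields its `first_run` once and `None` thereafter.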
In some embodiments, the server receives the running data uploaded or published by the terminal on the cloud platform. In other words, when the server receives any data transmission message, it parses the header field of the message; when the transmission data type information carried in the header field indicates running data, it determines that the message is a transmission message of running data and parses the data field of the message to obtain the running data to be processed. Optionally, the data field carries a terminal identifier or account identifier, the target time, and the running data.
The cloud platform is a platform for providing cloud function services, namely, the server provides computing power, and the terminal can execute the running data at the cloud end only by uploading or releasing the running data to be processed to the cloud platform without occupying local computing resources. For a developer, when using a cloud function, the developer only needs to write operation data in a language supported by a cloud platform and set operation conditions of the operation data (i.e., set target time), so that the operation data can be flexibly, safely and monitorably executed on a cloud infrastructure (i.e., target equipment).
Optionally, the transmission data type information may be an identifier, for example, 00 represents running data, 01 represents service data, and 10 represents control data; or the transmission data type information may be the target time itself, that is, the execution condition of the running data serves as its transmission data type information.
In an exemplary embodiment, taking the example that the transmission data type information is a target time as an example, when a server receives any data transmission message, parsing a header field of the data transmission message, when the header field carries the target time, determining that the data transmission message is a transmission message of operation data, parsing a data field of the data transmission message to obtain the operation data to be processed, and optionally, carrying a terminal identifier or an account identifier and the operation data in the data field.
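The header-based dispatch in the two variants above can be sketched as follows; the JSON wire format, field names (`header`, `target_time`, `running_data`), and type codes are assumptions for illustration only:

```python
import json

RUN_DATA = "00"      # transmission data type codes from the embodiment
SERVICE_DATA = "01"
CONTROL_DATA = "10"

def parse_transmission_message(message: bytes):
    """Parse a data transmission message; return pending running data or None.

    Variant 1: an explicit type identifier in the header marks running data.
    Variant 2: the presence of a target time itself marks running data.
    """
    msg = json.loads(message)
    header = msg["header"]
    if header.get("type") == RUN_DATA or "target_time" in header:
        data = msg["data"]  # data field: identifiers, target time, running data
        return {
            "account_id": data.get("account_id") or data.get("terminal_id"),
            "target_time": header.get("target_time") or data.get("target_time"),
            "running_data": data["running_data"],
        }
    return None  # not a transmission message of running data

msg = json.dumps({
    "header": {"type": "00", "target_time": "2021-05-07T15:00:00"},
    "data": {"account_id": "dev-42", "running_data": "print('hello')"},
}).encode()
parsed = parse_transmission_message(msg)
```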
In an exemplary scenario, a developer logs in an account of a cloud platform on a terminal, writes to-be-processed running data in an application program associated with the cloud platform, triggers the terminal to send the to-be-processed running data to a server through triggering operation of a release or upload function option in the application program, and the server receives the to-be-processed running data. For example, the application is a software development application, a writing function of an online IDE is provided in the software development application, a developer writes the to-be-processed running data (such as script code segments, program code segments, function functions, and the like) in the online IDE, and the developer clicks or triggers a release or upload function option through a shortcut key, so that the terminal sends the to-be-processed running data to the server.
202. The server determines, in response to reaching the target time, a plurality of candidate devices that support the running environment based on the running environment of the running data.
In some embodiments, since the server is used to provide computing power to each terminal through the cloud platform, the server may be a server cluster (or a distributed system or a blockchain system) composed of a plurality of computing devices, that is, equivalent to a plurality of cloud infrastructures of the cloud platform.
In some embodiments, the server identifies, for the running data, the running environment of the running data, and determines, from among the plurality of computing devices of the server cluster, a plurality of candidate devices that support the running environment. In one example, the running data is JavaScript script code, the server identifies the running environment of the running data as a JavaScript environment, and among all computing devices of the server cluster, the computing devices supporting the JavaScript environment are determined as the plurality of candidate devices.
In some embodiments, the server may intelligently recognize the operating environment of the operating data through a machine learning model, for example, recognize the operating environment through a classification model, or, since the operating data of different operating environments generally have different programming grammars, the server may recognize the operating environment of the operating data through the programming grammar of the operating data, or may also recognize the operating environment of the operating data manually, which is not specifically limited in the embodiment of the present application.
In some embodiments, when the developer uploads or publishes the running data to the server through the terminal, the terminal may also carry running-environment indication information of the running data, the indication information being used to indicate the running environment of the running data. The server then does not need to spend effort identifying the running environment; instead, the plurality of candidate devices supporting the running environment can be determined from the plurality of computing devices of the server cluster directly based on the running-environment indication information sent by the terminal.
Optionally, the running environment indication information may be a running environment (e.g., a Java environment) set by a developer, or may be a programming language (e.g., a JavaScript scripting language) for writing the running data, which is not specifically limited in this embodiment of the application.
In some embodiments, if the operating environment indication information is the operating environment itself, the server may pre-store association information of the computing devices and the operating environment, where the association information is used to record each computing device and each supported operating environment in the server cluster (or in a distributed system or a blockchain system), and the server selects, based on the association information, each computing device supporting the operating environment as the multiple candidate devices.
Optionally, the association information may be a plurality of key value pairs, each key value pair takes one operating environment as a key name, and the device identifiers of the computing devices supporting the operating environment are taken as key values, the server takes the operating environment as an index, queries index content corresponding to the index, and determines the index content, that is, the computing device corresponding to each device identifier stored in the key values, as the plurality of candidate devices. Optionally, besides key value pairs, the associated information may also be in the form of a list, or may also be a linked list, an array, a hash table, a bitmap, and the like.
In some embodiments, if the operating environment indication information is a programming language for writing the operating data, the server may determine an operating environment corresponding to the programming language according to a mapping relationship between the programming language and the operating environment, and select each computing device supporting the operating environment as the plurality of candidate devices based on the associated information of the computing device and the operating environment pre-stored in the server, where the associated information is already described above and is not described herein again.
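Both lookups above (environment indication as the environment itself, or as a programming language mapped to an environment) can be sketched together; the environment names, device identifiers, and table contents below are hypothetical:

```python
# Association info: each key is a running environment, each value the list of
# device identifiers of computing devices that support it (key-value pairs,
# as in the embodiment; a list, hash table, etc. would also work).
ENV_DEVICES = {
    "node": ["dev-a", "dev-b", "dev-d"],
    "python3": ["dev-b", "dev-c"],
    "jvm": ["dev-a", "dev-c", "dev-e"],
}

# Mapping relationship between programming language and running environment.
LANG_TO_ENV = {"javascript": "node", "python": "python3", "java": "jvm"}

def candidate_devices(indication: str):
    """Resolve candidate devices from the running-environment indication:
    either the environment itself, or the language used to write the data."""
    env = indication if indication in ENV_DEVICES else LANG_TO_ENV.get(indication)
    return ENV_DEVICES.get(env, [])
```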
203. The server determines a target device from the candidate devices based on the load conditions of the candidate devices, wherein the load condition of the target device meets a target condition.
The load condition is used for characterizing the usage of the computing resources of the candidate device, and optionally, the load condition includes at least one of a central processing unit (CPU) usage rate, a disk input/output (I/O) utilization rate, or a memory occupancy rate.
In some embodiments, when the load condition includes CPU usage, the target condition may be that CPU usage is lowest, that is, the server determines, from the plurality of candidate devices, a candidate device with the lowest CPU usage as the target device. Optionally, the server may obtain the CPU utilization of the multiple candidate devices, rank the candidate devices in the order from low to high according to the CPU utilization, and determine the candidate device ranked first as the target device.
In some embodiments, when the load condition includes a disk I/O utilization rate, the target condition may be that the disk I/O utilization rate is the lowest, that is, the server determines, from the plurality of candidate devices, the candidate device with the lowest disk I/O utilization rate as the target device. Optionally, the server may obtain the disk I/O utilization rates of the multiple candidate devices, rank the candidate devices in the order from low to high according to the disk I/O utilization rates, and determine the candidate device ranked at the top as the target device.
In some embodiments, when the load condition includes memory occupancy, the target condition may be that the memory occupancy is lowest, that is, the server determines, from the plurality of candidate devices, the candidate device with the lowest memory occupancy as the target device. Optionally, the server may obtain the memory occupancy rates of the multiple candidate devices, rank the candidate devices in the order from low to high according to the memory occupancy rates, and determine the candidate device ranked first as the target device.
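The three selection rules above share one shape: rank the candidates from lightest to heaviest on the relevant metric and take the first. A minimal sketch, with hypothetical device names and load figures:

```python
def pick_target(loads: dict, metric: str) -> str:
    """Pick the candidate whose given load metric is lowest.

    `loads` maps device id -> {"cpu": ..., "disk_io": ..., "mem": ...}
    (percentages); `metric` names whichever target condition applies.
    """
    # Rank from low to high on the chosen metric; the first-ranked
    # candidate device becomes the target device.
    ranked = sorted(loads, key=lambda dev: loads[dev][metric])
    return ranked[0]

loads = {
    "dev-a": {"cpu": 72.0, "disk_io": 10.0, "mem": 55.0},
    "dev-b": {"cpu": 31.0, "disk_io": 48.0, "mem": 60.0},
    "dev-c": {"cpu": 45.0, "disk_io": 22.0, "mem": 35.0},
}
```

Note that each metric can yield a different target: here CPU picks one device, disk I/O another, memory a third.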
In this process, a candidate device with a lighter load is selected from the candidate devices as the target device, so that the load of each device in the whole server cluster is balanced and the situation in which some devices in the cluster are very busy while others are very idle is avoided, which improves the resource utilization of the server cluster.
In some embodiments, in allocating target devices, geographical locations may be considered in addition to load conditions, since computing devices in a server cluster (or in a distributed system or a blockchain system) are typically deployed in different geographical locations, so that the computing devices can provide computing power to terminals in a nearby geographical area in the near vicinity to improve response speed of the entire server cluster. In this case, the server may determine, based on a geographic location associated with the operational data, at least one first device from the plurality of candidate devices, the first device supporting provision of a service to a terminal within the geographic location; from the at least one first device, the target device is determined for which the load situation complies with the target condition.
Optionally, the geographic location may be a location of a terminal (generally, a terminal corresponding to a developer) that uploads the operation data, or the geographic location may also be a location of a terminal (generally, a terminal corresponding to a consumer) that is operated by the operation data, which is not specifically limited in this embodiment of the present application.
In an exemplary scenario, assuming that a geographic location associated with the operation data is city a, the server determines, from the multiple candidate devices, at least one first device that supports providing services to terminals in city a, and then determines, according to a load condition of the at least one first device, the target device whose load condition meets the target condition.
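The two-stage selection (geographic filter, then load condition) can be sketched as below; the fallback to all candidates when no device serves the region is an added assumption, and CPU usage stands in for the target condition:

```python
def pick_target_by_geo_and_load(candidates, geo: str) -> str:
    """Two-stage selection: first keep the devices that support providing
    services to terminals within the geographic location, then take the
    least-loaded of those (CPU usage here)."""
    first_devices = [c for c in candidates if geo in c["regions"]]
    pool = first_devices or candidates  # assumed fallback: no regional match
    return min(pool, key=lambda c: c["cpu"])["id"]

candidates = [
    {"id": "dev-a", "regions": {"city-a", "city-b"}, "cpu": 40.0},
    {"id": "dev-b", "regions": {"city-c"}, "cpu": 10.0},
    {"id": "dev-c", "regions": {"city-a"}, "cpu": 25.0},
]
```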
In the process, when the target equipment is selected, the load condition and the geographic position are considered, so that the whole server cluster can provide services to the corresponding terminals nearby by the target equipment, the response speed of the server cluster can be increased on the premise of balancing the load, and the processing performance of the cloud platform on the operation data is optimized.
204. The server executes the operational data through the target device.
In some embodiments, since there may be multiple data to be executed within the server cluster at the target time, including the running data, the server may orchestrate management of the various data that are executed periodically at the same time through the message queue. That is, in response to the target time being reached, the server adds identification information of a plurality of pieces of data periodically executed at the target time to the message queue, wherein the plurality of pieces of data include the running data; and for the running data in the message queue, sending the identification information of the running data to the target equipment, and loading and executing the running data by the target equipment based on the identification information.
In an exemplary scenario, a server maintains a message queue for each time, and when the target time is reached, determines a plurality of pieces of data that need to be executed at regular time at the target time, and adds identification information of the plurality of pieces of data to a message queue corresponding to the target time, then, for running data corresponding to each piece of identification information in the message queue, determines candidate devices based on a running environment, and allocates target devices based on a load condition, then, sends each piece of identification information to each corresponding target device, and loads and executes each piece of running data based on each piece of received identification information by each target device.
In some embodiments, the target device may adopt preemptive loading or distributed loading when loading the running data. Preemptive loading means that after receiving the identification information, the target device actively pulls the running data corresponding to the identification information from the server and executes the running data after receiving it. Distributed loading means that the server, while distributing the identification information to the target device, also issues the corresponding running data to the target device, and the target device executes the running data after receiving it; the server may adopt synchronous or asynchronous distribution when distributing the identification information and the running data, which is not specifically limited in the embodiments of the present application.
In the above process, the server can reasonably pool, through the message queue, the plurality of pieces of data scheduled to execute at the same time; in a distributed system this message queue is also called a distributed message queue. If too much data needs to be executed at the same moment, the rapid growth in the number of pending running data causes a thundering-herd effect, in which many pieces of data wait for the same target time and the processes corresponding to them are all awakened simultaneously when that time arrives. With the distributed message queue, the amount of running data to be executed at the target time (i.e., the length of the message queue) can be determined before the target time is reached, so resources can be distributed in advance, computing resources can be arranged reasonably and comprehensively, and the thundering-herd effect can be mitigated.
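A single-process stand-in for the distributed message queue described above might look like this; the class and method names are illustrative, and a real deployment would use a distributed queue service rather than in-memory deques:

```python
from collections import defaultdict, deque

class TimedMessageQueue:
    """Per-time-slot queue holding identification info of the data
    scheduled at each target time (stand-in for the distributed queue)."""

    def __init__(self):
        self._queues = defaultdict(deque)

    def schedule(self, target_time: str, ident: str):
        """Add identification information to the target time's queue."""
        self._queues[target_time].append(ident)

    def pending(self, target_time: str) -> int:
        # Queue length is known before the target time arrives, so
        # resources can be allocated in advance (easing thundering herd).
        return len(self._queues[target_time])

    def drain(self, target_time: str):
        """Yield identifiers for dispatch to their target devices."""
        q = self._queues.pop(target_time, deque())
        while q:
            yield q.popleft()

mq = TimedMessageQueue()
mq.schedule("15:00", "id-001")
mq.schedule("15:00", "id-002")
```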
All the above optional technical solutions can be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
According to the method provided by the embodiments of the present application, after the running data is obtained, a target device whose load condition meets the target condition is selected from the candidate devices that support the running environment of the running data. This avoids conflicts that could be caused by an unsuitable running environment and allows a target device that better tends toward load balance to be allocated reasonably according to the load condition. The running data is then executed at the target time by the target device, and since such a load-balancing target device can be selected for each piece of running data, resource utilization can be greatly improved.
Fig. 3 is an interaction flowchart of a method for processing operation data according to an embodiment of the present application. Referring to fig. 3, the embodiment is applied to a computer device, and the following description takes the computer device as a server as an example, and the embodiment includes the following steps:
301. the server acquires the running data to be processed, wherein the running data is code data running at the target moment.
Step 301 is similar to step 201 and will not be described herein.
In an exemplary scenario, taking the running data as script code, an example of the running data to be processed is given in the original filing as figure images (not reproduced here).
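Since the concrete script exists only as images in the filing, a hypothetical stand-in for such to-be-processed running data (a small periodic cleanup script) might look like:

```python
# Hypothetical running data: a periodically executed cleanup script.
# The real example in the filing is shown only as figure images.
from datetime import datetime

def load_records():
    # Stand-in data source; real running data would query a database or API.
    return [{"id": 1, "expired": True}, {"id": 2, "expired": False}]

def run():
    """Entry point invoked by the target device at the target time."""
    now = datetime.now().isoformat(timespec="seconds")
    stale = [r for r in load_records() if r["expired"]]
    print(f"[{now}] purged {len(stale)} stale records")  # goes to the run log
    return len(stale)
```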
302. The server allocates identification information to the running data in response to acquiring the running data.
In some embodiments, since the server is usually a server cluster (or a distributed system or blockchain system) formed by a plurality of servers, the device that receives the running data, the device that allocates a target device to the running data, and the target device that executes the running data are usually not integrated on the same physical machine; that is, the three roles are performed by different physical machines. The execution flow of the running data therefore involves transmission inside the cluster, but only the target device needs to load and execute the full running data. The server can thus allocate unique identification information to each piece of running data upon receipt, and represent the running data by this identification information in subsequent flows such as allocating the target device and adding to the message queue, which greatly saves communication overhead inside the cluster.
In some embodiments, when the server allocates the identification information, it may be ensured that the identification information is incremented according to the sequence of the received operation data, that is, the server allocates larger identification information for the operation data received later, so that each operation data to be processed by the cluster may be conveniently managed according to the time sequence. Optionally, the identification information may be an incremented serial number, or may also be an incremented serial number that carries timestamp information, and the like.
In other embodiments, rather than incrementing the identification information in the order the running data is received, the server may concatenate the terminal identifier with the timestamp at which the running data was received to form a serial number, encrypt the serial number, and use the encrypted result as the identification information uniquely identifying the running data, improving the security of the identification-generation process. Optionally, the encryption algorithm applied to the serial number may be a hash algorithm, a digest algorithm, or the like, which is not particularly limited in the embodiments of the present application.
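The two identifier schemes above (monotonically increasing serial numbers, and a digest of terminal identifier plus receive timestamp) can be sketched as follows; SHA-256 stands in for the unspecified hash/digest algorithm:

```python
import hashlib
import itertools

_counter = itertools.count(1)

def next_serial_id() -> int:
    """Incrementing identification info: running data received later
    always gets a larger identifier, easing time-ordered management."""
    return next(_counter)

def hashed_id(terminal_id: str, received_at: float) -> str:
    """Alternative scheme: concatenate terminal id and receive timestamp
    into a serial number, then digest it so the identifier does not
    reveal its inputs (SHA-256 is an assumed choice of algorithm)."""
    serial = f"{terminal_id}:{received_at:.6f}"
    return hashlib.sha256(serial.encode()).hexdigest()

a, b = next_serial_id(), next_serial_id()
```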
303. The server sets the state information of the running data to a to-be-issued state, where the to-be-issued state is used for representing that the running data has been acquired but has not been bound to the corresponding timer.
In some embodiments, the server may set, for each piece of operation data uploaded to the cloud platform, state information used for tracking an execution state of the operation data, where the state information optionally includes the following types:
a) the to-be-issued state, used for representing that the running data has been acquired but has not been bound to the corresponding timer;
b) the to-be-triggered state, used for representing that the running data has been bound to the corresponding timer but the target time has not been reached;
c) the to-be-executed state, used for representing that the target time has been reached but the running data has not been sent to the corresponding target device;
d) the executing state, used for representing that the running data has been loaded and is being executed by the corresponding target device.
In some embodiments, the state information may be a state variable that needs to be synchronized among the devices involved with the running data within the server cluster. Optionally, when the identification information of the running data is transmitted inside the server cluster, the state variable of the running data may be transmitted along with it. For example, after the server allocates a target device to the running data, it sends the identification information and the state variable to the target device; the target device compares the received state variable with the locally stored (synchronized) state variable. If the two are consistent, verification passes and the target device loads and executes the running data; if they are inconsistent, the running data may have a leakage risk or the state variable was not updated in time, in which case verification fails and the target device may alert the server so that a technician can troubleshoot the specific problem according to the alert.
Optionally, because the state information includes 4 possible states, 4 possible values may be set for the state variable, for example, 00 represents a to-be-issued state, 01 represents a to-be-triggered state, 10 represents a to-be-executed state, 11 represents an executing state, and for example, 1 represents the to-be-issued state, 2 represents the to-be-triggered state, 3 represents the to-be-executed state, and 4 represents the executing state.
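The four states and their lifecycle order can be sketched as an enum using the example codes from the text; the transition table is an inferred assumption (the filing lists the states but does not formalize legal transitions):

```python
from enum import Enum

class RunState(Enum):
    """The four states tracked for each piece of running data."""
    TO_BE_ISSUED = "00"     # acquired, not yet bound to a timer
    TO_BE_TRIGGERED = "01"  # bound to a timer, target time not reached
    TO_BE_EXECUTED = "10"   # target time reached, not yet sent to target device
    EXECUTING = "11"        # loaded and being executed by the target device

# Assumed legal transitions following the lifecycle described above.
TRANSITIONS = {
    RunState.TO_BE_ISSUED: {RunState.TO_BE_TRIGGERED},
    RunState.TO_BE_TRIGGERED: {RunState.TO_BE_EXECUTED},
    RunState.TO_BE_EXECUTED: {RunState.EXECUTING},
    RunState.EXECUTING: set(),
}

def advance(state: RunState, new: RunState) -> RunState:
    """Move the state variable forward, rejecting out-of-order updates."""
    if new not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new}")
    return new
```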
In an exemplary scenario, after receiving the running data, the server allocates identification information to it, creates a State variable for it, initializes the State variable to the to-be-issued state (e.g., State = 00), and then stores the running data, the identification information, and the State variable in a database, for example, using the identification information as the key name and the running data and the State variable as the key values, thereby forming a set of key-value pairs.
In some embodiments, since the state information can reflect the flow of the running data when executed in the cloud platform in real time, the state information can also be fed back to a terminal (usually a terminal of a developer) that uploads or publishes the running data by a server, so that the developer can conveniently track the running data written by the developer in real time.
It should be noted that the foregoing steps 302 and 303 are not limited to a particular execution order. That is, in response to receiving the running data, the server may allocate the identification information first and then modify the state information to the to-be-issued state, or modify the state information to the to-be-issued state first and then allocate the identification information, or modify the state information to the to-be-issued state while allocating the identification information; the execution order between steps 302 and 303 is not specifically limited in the embodiments of the present application.
304. The server binds the identification information to a timer corresponding to the time granularity of the target time, where the timer is used for determining whether the target time has been reached.
In some embodiments, the server identifies a time granularity for the target time and binds the identification information to a timer corresponding to the time granularity.
Optionally, the time granularity includes at least one of seconds, minutes, or hours, each of which may correspond to one or more timers. Assuming that each time granularity corresponds to one timer, the timer may be divided into a second timer having a minimum timing unit of 1 second, a minute timer having a minimum timing unit of 1 minute, and an hour timer having a minimum timing unit of 1 hour. Alternatively, each time granularity may also correspond to a plurality of timers, for example, a 1 second timer (the minimum timing unit is 1 second), a 2 second timer (the minimum timing unit is 2 seconds), a 5 second timer (the minimum timing unit is 5 seconds) and the like are provided, and this is not particularly limited in the embodiments of the present application.
In some embodiments, when identifying the time granularity of the target time, the server may take the smallest non-zero time unit of the target time as its time granularity. For example, if the target time is 14:00 on May 7, the time granularity is 1 hour, and if the target time is 14:30 on May 7, the time granularity is 1 minute, which is not specifically limited in this embodiment of the application.
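The granularity identification described above might be sketched as follows, under the assumption that the rule is simply "smallest non-zero time unit of the target time"; the function name and the datetime-based interface are illustrative:

```python
from datetime import datetime

def time_granularity(target: datetime) -> str:
    """Map a target time to the timer it should be bound to:
    a non-zero seconds field -> second timer, a non-zero minutes
    field -> minute timer, otherwise -> hour timer."""
    if target.second != 0:
        return "second"
    if target.minute != 0:
        return "minute"
    return "hour"

assert time_granularity(datetime(2021, 5, 7, 14, 0, 0)) == "hour"
assert time_granularity(datetime(2021, 5, 7, 14, 30, 0)) == "minute"
assert time_granularity(datetime(2021, 5, 7, 14, 30, 15)) == "second"
```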
Step 304 above shows a possible implementation in which only the identification information of the running data is bound to the timer. In some embodiments, the full running data may instead be bound to the timer corresponding to the time granularity, which can simplify the flow of associating the running data with the timer.
In some embodiments, the server associates the operational data with a second timer in response to the target time of day having a time granularity of seconds, optionally binding identification information of the operational data with the second timer, wherein the second timer has a minimum timing unit of 1 second.
In some embodiments, the server associates the running data with a minute timer in response to the target time being at a time granularity of minutes, optionally binding identification information of the running data with the minute timer, wherein a minimum timing unit of the minute timer is 1 minute.
In some embodiments, the server associates the running data with an hour timer in response to the target time being hour in time granularity, optionally binding identification information of the running data with the hour timer, wherein the minimum timing unit of the hour timer is 1 hour.
In the above process, the identification information of the running data is bound to the timer corresponding to the time granularity of the running data, so that the corresponding running data can be executed at target times of various granularities. By contrast, in the related art, the crontab component of the Linux system only supports a minute timer; in other words, a timed task (running data) at second granularity cannot be realized, and a timed task at hour granularity is redundant.
In the following, the principle of the time wheel timer is explained. Fig. 4 is a schematic diagram of a time wheel timer provided in the embodiment of the present application. As shown in 400, the minimum timing unit is referred to as a Tick (which can be understood as the minimum time interval represented by one hop of the timer pointer), where a Tick means 1 second in the second timer, 1 minute in the minute timer, and 1 hour in the hour timer. Assume that one full revolution of the timer includes N Tick units, where N is an integer greater than or equal to 1; since each Tick unit of the time wheel timer maintains one message queue, N message queues need to be maintained in total. If, after S full revolutions, the pointer currently points to element i (the i-th Tick unit, where i is greater than or equal to 1 and less than or equal to N), then the current time Tc can be represented as Tc = S × N + i. If a piece of running data whose target time is Ti Ticks away from the current time is bound, the identification information of that running data is added to the message queue of element n, where n = (Tc + Ti) mod N = (S × N + i + Ti) mod N = (i + Ti) mod N.
Fig. 5 is a schematic diagram of a second timer provided in an embodiment of the present application. As shown in 500, the minimum timing unit Tick in the second timer is 1 second, so the second timer includes 60 Ticks. Assume the pointer points to the scale Tick = 2 at the current time and a piece of running data to be executed after a delay of 4 seconds is bound (or associated); the target time then falls on the scale Tick = 6, and the identification information of the running data is inserted into the message queue under Tick = 6.
Fig. 6 is a schematic diagram of a minute timer provided in this embodiment of the present application. As shown in 600, the minimum timing unit Tick in the minute timer is 1 minute, so the minute timer includes 60 Ticks. Assume the pointer points to the scale Tick = 1 at the current time and a piece of running data to be executed after a delay of 5 minutes is bound; the target time then falls on the scale Tick = 6, and the identification information of the running data is inserted into the message queue under Tick = 6.
Fig. 7 is a schematic diagram of an hour timer provided in an embodiment of the present application. As shown in 700, the minimum timing unit Tick in the hour timer is 1 hour, so the hour timer includes 24 Ticks. Assume the pointer points to the scale Tick = 1 at the current time and a piece of running data to be executed after a delay of 5 hours is bound; the target time then falls on the scale Tick = 6, and the identification information of the running data is inserted into the message queue under Tick = 6.
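The slot arithmetic behind Figs. 5-7 reduces to n = (i + Ti) mod N, which can be checked with a one-line sketch (the function name and 0-based indexing are illustrative):

```python
def wheel_slot(current_slot: int, ticks_until_target: int, n_slots: int) -> int:
    """Slot index n = (i + Ti) mod N for a time wheel with N slots,
    pointer currently at slot i, target Ti ticks away."""
    return (current_slot + ticks_until_target) % n_slots

# Second timer (N = 60): pointer at Tick 2, delay 4 seconds -> Tick 6
assert wheel_slot(2, 4, 60) == 6
# Minute timer (N = 60): pointer at Tick 1, delay 5 minutes -> Tick 6
assert wheel_slot(1, 5, 60) == 6
# Hour timer (N = 24): pointer at Tick 1, delay 5 hours -> Tick 6
assert wheel_slot(1, 5, 24) == 6
```

The modulo makes the wheel wrap around, e.g. a 4-second delay from Tick 59 of a second timer lands on Tick 3.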
305. And the server responds to the fact that the running data is associated with the timer, and the state information of the running data is set to be in a state to be triggered, wherein the state to be triggered is used for representing that the running data is bound to the corresponding timer but does not reach the target time.
Step 305 is similar to step 303 and will not be described herein.
Based on the example provided in step 303 above, it is assumed that the state information is implemented in the form of a state variable, where the state variable corresponds to 4 possible values, for example, 00 represents a to-be-issued state, 01 represents a to-be-triggered state, 10 represents a to-be-executed state, and 11 represents an executing state.
Then, in response to that the running data is associated with the timer (that is, the identification information of the running data is bound to the timer of the corresponding time granularity), the server queries, in the database, index content stored corresponding to the index, where the index content includes the running data and the state variable, and at this time, modifies the state variable from 00 (to-be-issued state) to 01 (to-be-triggered state), that is, assigns the state variable to 01.
In the process, the state information can reflect the flow of the running data in the cloud platform in real time, so that the state information can be fed back to a terminal (usually a terminal of a developer) uploading or publishing the running data by a server, and the developer can conveniently track the running data compiled by the developer in real time.
306. And the server adds identification information of a plurality of data periodically executed at the target time to a message queue in response to reaching the target time, wherein the plurality of data comprise the running data.
In some embodiments, the server maintains one message queue for each Tick unit in the timer. Based on the target time corresponding to the running data, the server obtains the time interval between the target time and the current time, and based on this time interval determines the number of Tick units in the timer corresponding to the interval. Adding this number of Tick units to the Tick-unit scale pointed to by the pointer at the current time yields the Tick-unit scale the pointer will point to at the target time. Therefore, when the pointer points to the Tick-unit scale corresponding to the target time, the target time is determined to be reached, and the identification information of the plurality of pieces of data to be executed at the target time is added to the message queue corresponding to that Tick unit. The running data is included in the plurality of pieces of data because the running data is also scheduled for execution at the target time.
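A minimal sketch of this per-Tick message-queue design, under the simplifying assumptions of a single-threaded wheel, 0-based slots, and delays shorter than one revolution (no revolution counter); the class and method names are illustrative:

```python
from collections import deque

class TimeWheel:
    """Minimal time-wheel sketch: one message queue per Tick unit;
    advancing the pointer onto a slot drains that slot's queue."""
    def __init__(self, n_slots: int):
        self.n_slots = n_slots
        self.slots = [deque() for _ in range(n_slots)]  # one queue per Tick
        self.pointer = 0

    def schedule(self, data_id: str, ticks_until_target: int) -> None:
        # n = (i + Ti) mod N: queue the ID at the slot the pointer will
        # reach at the target time (assumes ticks_until_target < n_slots)
        slot = (self.pointer + ticks_until_target) % self.n_slots
        self.slots[slot].append(data_id)

    def tick(self) -> list:
        """Advance one Tick; return the IDs whose target time is reached."""
        self.pointer = (self.pointer + 1) % self.n_slots
        fired = list(self.slots[self.pointer])
        self.slots[self.pointer].clear()
        return fired

wheel = TimeWheel(60)                    # e.g. a second timer
wheel.schedule("script-42", 2)           # run 2 ticks from now
assert wheel.tick() == []                # tick 1: nothing due yet
assert wheel.tick() == ["script-42"]     # tick 2: target time reached
```

In the patent's design the drained IDs would be published to an external queue such as Kafka rather than returned directly.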
Optionally, the message queue may be a Kafka message queue, or may also be a RabbitMQ (a high-throughput distributed publish-subscribe message system), an ActiveMQ (active message queue), and the like, and the embodiment of the present application does not specifically limit the type of the message queue.
In some embodiments, in addition to writing the identification information of each data into the message queue, parameters such as the last execution time of each data, the execution time expected this time, and the like may also be written simultaneously, so as to maintain more comprehensive meta information of each data in the message queue.
Fig. 8 is a schematic diagram of adding identification information to a message queue according to an embodiment of the present application. As shown in 800, assume the timer pointer points to the scale Tick = 5 at the target time; this triggers writing the identification information of all pieces of running data bound at the Tick = 5 scale into the message queue at the Tick = 5 scale, which is schematically illustrated by taking writing into the Kafka message queue at the Tick = 5 scale as an example.
307. And the server responds to the fact that the identification information of the running data is added to the message queue, and the state information of the running data is set to be a to-be-executed state, wherein the to-be-executed state is used for representing that the target time is reached but the running data is not sent to the corresponding target equipment.
Step 307 is similar to step 303 and will not be described herein.
Based on the example provided in step 303 above, it is assumed that the state information is implemented in the form of a state variable, where the state variable corresponds to 4 possible values, for example, 00 represents a to-be-issued state, 01 represents a to-be-triggered state, 10 represents a to-be-executed state, and 11 represents an executing state.
Then, in response to that the identification information of the running data is added to the message queue, the server uses the identification information as an index in the database to query the index content stored corresponding to the index, where the index content includes the running data and the state variable, and at this time, the state variable is modified from the original 01 (state to be triggered) to 10 (state to be executed), that is, the state variable is assigned to 10.
In the process, the state information can reflect the flow of the running data in the cloud platform in real time, so that the state information can be fed back to a terminal (usually a terminal of a developer) uploading or publishing the running data by a server, and the developer can conveniently track the running data compiled by the developer in real time.
308. The server determines a plurality of candidate devices supporting the running environment based on the running environment of the running data for the running data in the message queue.
Step 308 is similar to step 202, and is not described in detail here.
It should be noted that, in the embodiment of the present application, only the operation data in the message queue is taken as an example, and how to allocate the target device and execute the operation data by the target device is described, but the operation data may be any data in the message queue, that is, the server may execute similar operations on each data in the message queue, which is not described herein again.
309. The server determines a target device from the candidate devices based on the load conditions of the candidate devices, wherein the load condition of the target device meets a target condition.
Step 309 is similar to step 203, and will not be described herein.
Fig. 9 is a dynamic variation diagram of the CPU utilization of each of two candidate devices according to the embodiment of the present application. As shown in fig. 9, a load condition including CPU utilization is taken as an example. The upper half 901 shows the CPU utilization of candidate device A, whose latest value is 14 and which reached a peak value of 40 at 12:13:14 on XX day of XXXX year; the lower half 902 shows the CPU utilization of candidate device B, whose latest value is 27 and which reached a peak value of 40 at 12:14:14 on XX day of XXXX year. The server may select candidate device A, which has the smaller latest CPU utilization, as the target device.
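The selection rule illustrated by Fig. 9 (pick the candidate with the smallest latest CPU utilization) can be sketched as follows; the function name and the dict-based interface are assumptions:

```python
def pick_target_device(latest_cpu: dict) -> str:
    """Return the candidate device whose latest CPU utilization is
    lowest -- the target condition used in this example."""
    return min(latest_cpu, key=latest_cpu.get)

# Latest CPU utilization (%) of the two candidates from Fig. 9
cpu_usage = {"device_a": 14, "device_b": 27}
assert pick_target_device(cpu_usage) == "device_a"
```

The same one-liner applies to the other load metrics mentioned later (disk I/O utilization, memory usage) by swapping the metric dict.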
310. The server sends the identification information of the operation data to the target device.
Optionally, the server compresses and encrypts only the identification information before sending it to the target device, which saves communication overhead during data transmission. Optionally, the server compresses and encrypts the identification information and the state information together before sending them to the target device, which makes it convenient to verify, through the state information, whether the running data has been leaked or the state information tampered with. Optionally, the server compresses and encrypts the identification information, the state information, and the running data together before sending them to the target device, which amounts to a distributed loading mode of the running data, i.e., the server actively distributes the running data to the target device. This is not specifically limited in this embodiment of the present application.
311. And the server responds to the identification information of the running data sent to the target device, and sets the state information of the running data to an executing state, wherein the executing state is used for representing that the running data is loaded and executed by the corresponding target device.
Step 311 is similar to step 303, and is not described herein.
Based on the example provided in step 303 above, it is assumed that the state information is implemented in the form of a state variable, where the state variable corresponds to 4 possible values, for example, 00 represents a to-be-issued state, 01 represents a to-be-triggered state, 10 represents a to-be-executed state, and 11 represents an executing state.
Then, in response to that the identification information of the operation data is sent to the target device, the server uses the identification information as an index in the database, and queries index content stored corresponding to the index, where the index content includes the operation data and the state variable, and at this time, changes the state variable from original 10 (to-be-executed state) to 11 (executing state), that is, assigns the state variable to 11.
In the process, the state information can reflect the flow of the running data in the cloud platform in real time, so that the state information can be fed back to a terminal (usually a terminal of a developer) uploading or publishing the running data by a server, and the developer can conveniently track the running data compiled by the developer in real time.
312. The target device loads and executes the run data based on the identification information in response to receiving the identification information.
In some embodiments, two loading manners of the running data are provided. One is distributed loading: the server issues the running data together with the identification information in step 310, and the target device directly executes the running data; of course, if a state variable issued by the server is received, a verification process based on the state variable may be performed, and this embodiment of the present application does not specifically limit whether the verification process needs to be executed. The other is preemptive loading: after receiving the identification information, the target device actively queries, using the identification information as an index, the index content stored under that index in the (cloud) database, where the index content at least includes the running data; the target device downloads the running data from the database and then executes it, and optionally, if the index content further includes a state variable, a verification process based on the state variable is also performed.
In the following, a state variable based verification process is schematically introduced: the target device compares the state variable read from the database (or sent by the server) with the state variable stored locally, if the two are consistent, the verification is passed, the running data is executed normally, and if the two are inconsistent, the verification fails, and the abnormal condition is reported to a technician.
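The preemptive loading with state-variable verification described above might look like the following sketch, with an in-memory dict standing in for the (cloud) database; all names are illustrative:

```python
def preemptive_load(db: dict, data_id: str, local_state: str) -> str:
    """Preemptive loading: query the database with the ID as index,
    verify the state variable against the locally stored one, and
    return the running data only if verification passes."""
    entry = db[data_id]                 # index content: running data + state
    if entry["state"] != local_state:   # state-variable verification
        raise RuntimeError("verification failed: report to a technician")
    return entry["running_data"]

db = {"script-42": {"running_data": "print('hi')", "state": "11"}}
assert preemptive_load(db, "script-42", "11") == "print('hi')"
```

A mismatch raises instead of executing, matching the "report the abnormal condition" branch of the verification process.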
In steps 310-312, a possible implementation manner in which the server executes the running data through the target device is shown: the identification information of each piece of data that needs to be executed at the target time is stored in the message queue; when the target time is reached, each piece of identification information stored in the message queue is distributed to its target device, and each target device loads and executes the corresponding running data. This mitigates the thundering-herd effect caused by suddenly waking up a large number of timed tasks (i.e., executing excessive running data) at the target time.
Fig. 10 is a schematic diagram of loading running data by a target device according to an embodiment of the present application. As shown in 1000, assume that N (N ≥ 1) pieces of identification information (IDs) are recorded in the Kafka message queue at the target time. The server monitors the load condition of each candidate device in real time and pulls, in real time, the IDs of the running data to be executed from the Kafka message queue. Based on the load condition of each candidate device, a corresponding target device (i.e., an execution unit, or Worker) is allocated to each piece of running data stored in the Kafka message queue. Optionally, in distributed loading, the server actively distributes the running data to the target device; in preemptive loading, the server only distributes the ID of the running data to the target device, and the target device actively queries, using the ID as an index, the index content stored under that index in the (cloud) database, where the index content includes the running data and the state variable. The target device then compares the state variable read from the database with the state variable stored locally: if the two are consistent, verification passes, and the running data is downloaded and executed; if the two are inconsistent, verification fails, and the abnormal condition is reported to a technician. Fig. 10 schematically illustrates the preemptive loading manner, where each target device pulls a mirror task from the cloud database through the received ID, and the cloud database returns the corresponding running data (i.e., the running data written by the developer on the IDE online platform) to the target device.
313. And the target equipment responds to the completion of the execution of the running data and sends the running result and the running log of the running data to the server.
In some embodiments, while the running data is executed, a running log of the running data is generated at the same time; the running log records the operations of the running data during execution. When execution completes, a running result of the running data is also generated; for example, the running result indicates execution success or execution failure. The reasons for an execution failure can vary: for example, the code of the running data itself contains a BUG, the target device is down, or the network of the target device crashed at the target time. When the target device finishes executing the running data, it may compress and encrypt the identification information, the running log, and the running result and send them to the server.
314. And the server acquires the operation result and the operation log of the operation data from the target equipment.
Optionally, the server receives the identification information, the running result, and the running log sent by the target device. For running data whose running result is a success, the server may send confirmation information to the terminal associated with the running data, where the confirmation information indicates that the running data has completed (without reporting an error); for running data whose running result is a failure, the server may execute step 315 described below.
315. And the server responds to the operation result as operation failure and sends alarm information to the terminal associated with the operation data, wherein the alarm information carries the operation result and the operation log.
Optionally, in response to the running result being a failure, the server may send alarm information to a terminal associated with the running data (which may be the terminal that issued the running data, a terminal used to debug the BUG, or the like), where the alarm information at least carries the running result and the running log. Optionally, the alarm information may also carry the original running data and the identification information of the running data, which is not specifically limited in this embodiment of the present application.
In the above process, by collecting the running log and running result of the running data in time, the server can promptly track running data that failed to run and report it to the terminal for the user to check, which improves the timeliness of troubleshooting and makes it convenient for the user to quickly locate the cause of the fault from the running log.
In some embodiments, since the running data may be divided into code data executed once and code data executed cyclically (i.e., in a loop), for code data executed once the server may end the flow and proceed to the next Tick unit of the timer; for code data executed cyclically, the server may, in response to the running data being cyclically executed code data, add the identification information of the running data to the tail of the message queue and repeat operations similar to steps 307-315 above, which are not described herein again.
In the above process, for code data to be executed cyclically, the identification information is directly added to the tail of the current message queue, which avoids repeatedly calculating which Tick unit's message queue the identification information should be added to, thereby saving the computing resources of the server.
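The tail re-enqueueing of cyclic code data can be sketched as follows; the function name and the `is_cyclic` flag are illustrative assumptions:

```python
from collections import deque

def on_execution_finished(queue: deque, data_id: str, is_cyclic: bool) -> None:
    """Once-off data simply ends its flow; cyclic data is appended to the
    tail of the current message queue, so the target Tick slot need not
    be recomputed."""
    if is_cyclic:
        queue.append(data_id)

q = deque()
on_execution_finished(q, "script-1", is_cyclic=True)   # re-enqueued at tail
on_execution_finished(q, "script-2", is_cyclic=False)  # flow ends
assert list(q) == ["script-1"]
```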
All the above optional technical solutions can be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
According to the method provided by the embodiment of the application, after the running data is obtained, a target device whose load condition meets the target condition is selected from the candidate devices supporting the running environment of the running data. This avoids conflicts that could be caused by a mismatched running environment and allows target devices to be allocated reasonably, in a way that tends toward load balance, according to the load conditions. Since the running data is executed at the target time by the target device, and a target device tending toward load balance can be selected for each piece of running data, the resource utilization rate can be greatly improved.
Fig. 11 is a schematic diagram of a processing method of running data according to an embodiment of the present application, and as shown in fig. 11, a cloud platform 1100 supporting cloud function services includes a script management end 1101, a time wheel control module 1102, and a monitoring management platform 1103, taking the running data as a script code as an example.
The script management terminal 1101 is configured to receive the script codes uploaded by each terminal, allocate a script ID (identification information) to each script code, and initialize the state information of the script code to the to-be-issued state. Optionally, when issuing the script codes to be processed to the time wheel control module 1102, the script management terminal 1101 may package each script code into a Docker (an open-source application container engine) container and deploy it on the microservice platform.
After receiving the script code issued by the script management terminal 1101, the time wheel control module 1102 binds the script ID of the script code to the timer with the corresponding time granularity, and updates the state information of the script code to the state to be triggered; when the timer indicates that the target time is reached, writing the script ID of the script code into a message queue corresponding to the target time, and updating the state information of the script code into a state to be executed; after the target device is allocated to the script code based on the operating environment and the load condition, the script ID of the script code is sent to the target device, the target device pulls the complete script code from the database of the script management terminal 1101 based on the script ID, executes the script code, and updates the state information of the script code to the executing state. For the script code executed once, the flow is ended at this time, and for the script code executed circularly, the script ID of the script code is added to the tail of the current message queue in the timer.
The monitoring management platform 1103 is also called a log platform. The script management terminal 1101 and the time wheel control module 1102 synchronize the state information of each script code to the monitoring management platform 1103 in real time, so that the monitoring management platform 1103 can check and view the state information of each script code and, optionally, the execution time consumption of each script code. Further, after finishing executing each script code, each target device reports the running result of the script code and the log ID of the running log to the monitoring management platform 1103. If the running result is a success, the monitoring management platform 1103 may ignore it or send confirmation information to the terminal associated with the script code; if the running result is a failure, the monitoring management platform may pull the complete running log from the target device according to the log ID, which facilitates fault location for the script code.
Exemplary code of the running result and the log ID reported by the target device to the monitoring management platform 1103 is shown below; optionally, the target device also reports the execution time consumption of the script code to the monitoring management platform 1103:
[Image: exemplary code of the reported running result and log ID]
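Since the exemplary code above is only available as an image, a plausible shape of such a report is sketched below; every field name here is hypothetical and not taken from the patent's figure:

```python
# Hypothetical report payload -- field names are illustrative only,
# not reproduced from the patent's (image-only) exemplary code.
report = {
    "script_id": "script-42",       # identification information
    "result": "failure",            # running result: "success" / "failure"
    "log_id": "log-20210507-001",   # ID used to pull the full running log
    "cost_ms": 153,                 # execution time consumption
}

assert report["result"] in ("success", "failure")
```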
Fig. 12 is an interface schematic diagram of a monitoring management statistical interface provided in an embodiment of the present application. As shown in 1200, after collecting the running results and log IDs of the script codes, the monitoring management platform 1103 may also count, for each time period, the number of script codes whose running result is a failure, which facilitates real-time monitoring and alarming; through the log IDs and execution time consumption, the execution process and time cost of a script can be traced back, which facilitates locating and troubleshooting faults. The lower part of fig. 12 shows that the number of script codes whose running result is a failure in a specified time period is 186, and lists the log ID of each script code, the account ID of the terminal operating the script code, the error field, the error code, and the log storage location.
In the embodiment of the application, the cloud platform supporting the cloud function service can provide an efficient scheme for running timed scripts based on a distributed function-computing time wheel algorithm. Compared with the related art, which suffers from inflexible single-machine deployment, complex running-environment dependencies, and inability to balance load, a user can rapidly develop scripts on the cloud platform, and reuse of other co-built cloud service interfaces is also supported. The scheme guarantees high availability and high extensibility of script execution, offers high performance, balances cluster load, supports multi-granularity timed tasks, and allows the complete process of script execution to be tracked.
Fig. 13 is a schematic structural diagram of an apparatus for processing operation data according to an embodiment of the present application, please refer to fig. 13, where the apparatus includes:
an obtaining module 1301, configured to obtain running data to be processed, where the running data is code data that runs at a target time;
a first determining module 1302, configured to determine, in response to reaching the target time, a plurality of candidate devices supporting the runtime environment based on the runtime environment of the runtime data;
a second determining module 1303, configured to determine, based on load conditions of the multiple candidate devices, a target device from the multiple candidate devices, where the load condition of the target device meets a target condition;
an executing module 1304, configured to execute the operation data by the target device.
According to the apparatus provided by the embodiment of the application, after the running data is obtained, a target device whose load condition meets the target condition is selected from the candidate devices supporting the running environment of the running data. This avoids conflicts that could be caused by a mismatched running environment and allows target devices to be allocated reasonably, in a way that tends toward load balance, according to the load conditions. Since the running data is executed at the target time by the target device, and a target device tending toward load balance can be selected for each piece of running data, the resource utilization rate can be greatly improved.
In one possible implementation, the execution module 1304 is configured to:
in response to reaching the target time, adding identification information of a plurality of pieces of data periodically executed at the target time to a message queue, where the plurality of pieces of data include the running data;
for the running data in the message queue, sending the identification information of the running data to the target device, and loading and executing, by the target device, the running data based on the identification information.
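The enqueue-IDs-then-load-by-ID flow above can be sketched as follows; the in-memory queue and the `scripts` lookup table are illustrative assumptions standing in for the platform's real message queue and code store:

```python
# Hypothetical sketch: at the target time, only identification
# information (IDs) is enqueued; the target device later loads the
# code data by ID and executes it.
from collections import deque

scripts = {"job-1": "print('hello')", "job-2": "print('world')"}  # id -> code
queue = deque()

def on_target_time(ids_due):
    # Enqueue identification information, not the code itself.
    queue.extend(ids_due)

def dispatch(execute_on_device):
    while queue:
        job_id = queue.popleft()
        # The target device loads and executes the code based on the ID.
        execute_on_device(job_id, scripts[job_id])

on_target_time(["job-1", "job-2"])
ran = []
dispatch(lambda job_id, code: ran.append(job_id))
print(ran)  # -> ['job-1', 'job-2']
```

Passing IDs rather than code keeps queue messages small; the code is fetched only by the device that actually runs it.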
In a possible embodiment, based on the apparatus composition of fig. 13, the apparatus further comprises:
a first setting module, configured to set state information of the running data to a to-be-executed state in response to the identification information of the running data being added to the message queue, where the to-be-executed state indicates that the target time has been reached but the running data has not been sent to the corresponding target device.
In a possible embodiment, based on the apparatus composition of fig. 13, the apparatus further comprises:
a second setting module, configured to set the state information of the running data to an executing state in response to the identification information of the running data being sent to the target device, where the executing state indicates that the running data is being loaded and executed by the corresponding target device.
In a possible embodiment, based on the apparatus composition of fig. 13, the apparatus further comprises:
an adding module, configured to add the identification information of the running data to the tail of the message queue in response to the running data being cyclically executed code data.
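For cyclically executed code data, re-appending the ID to the tail of the queue keeps the job cycling without re-registering it; a minimal sketch (queue contents and the `cyclic` set are illustrative assumptions):

```python
# Hypothetical sketch of re-enqueueing cyclic jobs after dispatch.
from collections import deque

queue = deque(["job-1", "job-2"])
cyclic = {"job-1"}  # job-1 is cyclically executed code data

job_id = queue.popleft()   # dispatch job-1 for execution
if job_id in cyclic:
    queue.append(job_id)   # add its ID back to the tail of the queue
print(list(queue))  # -> ['job-2', 'job-1']
```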
In one possible embodiment, the load condition includes at least one of CPU utilization, disk I/O utilization, or memory usage, and the second determining module 1303 is configured to:
determining, from the plurality of candidate devices, the candidate device with the lowest CPU utilization as the target device; or
determining, from the plurality of candidate devices, the candidate device with the lowest disk I/O utilization as the target device; or
determining, from the plurality of candidate devices, the candidate device with the lowest memory occupancy as the target device.
In one possible implementation, the second determining module 1303 is configured to:
determining at least one first device from the plurality of candidate devices based on a geographic location associated with the operational data, the first device supporting provision of services to terminals within the geographic location;
determining, from the at least one first device, the target device whose load condition meets the target condition.
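The geographic pre-filter described above can be sketched as a first pass before the load-based pick; the dictionary fields (`regions`, `cpu`) are illustrative assumptions, not disclosed field names:

```python
# Hypothetical sketch of the two-step selection: filter candidate
# devices by the geographic location associated with the running data,
# then apply the load condition among the remaining "first devices".
devices = [
    {"name": "a", "regions": {"east"}, "cpu": 0.10},
    {"name": "b", "regions": {"east", "west"}, "cpu": 0.40},
    {"name": "c", "regions": {"west"}, "cpu": 0.05},
]

def pick_target(devices, geo):
    # Step 1: first devices are those serving terminals within `geo`.
    first = [d for d in devices if geo in d["regions"]]
    # Step 2: among them, the lowest CPU utilization meets the target condition.
    return min(first, key=lambda d: d["cpu"])

print(pick_target(devices, "west")["name"])  # -> c
```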
In a possible embodiment, based on the apparatus composition of fig. 13, the apparatus further comprises:
an association module, configured to associate the running data with a timer corresponding to the time granularity of the target time, the timer being used to determine whether the target time is reached.
In one possible embodiment, the time granularity includes at least one of seconds, minutes, or hours, the association module is to:
in response to the time granularity of the target time being seconds, associating the running data with a second timer, the second timer having a minimum timing unit of 1 second; or
in response to the time granularity of the target time being minutes, associating the running data with a minute timer, the minute timer having a minimum timing unit of 1 minute; or
in response to the time granularity of the target time being hours, associating the running data with an hour timer, the hour timer having a minimum timing unit of 1 hour.
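Routing each piece of running data to the timer matching its time granularity can be sketched as a simple registry; the granularity names and binding table are illustrative assumptions:

```python
# Hypothetical sketch: associate running data with the timer whose
# minimum timing unit matches the time granularity of the target time.
TIMERS = {"second": 1, "minute": 60, "hour": 3600}  # granularity -> unit (s)

bindings = {}  # timer granularity -> list of bound job IDs

def associate(job_id, granularity):
    if granularity not in TIMERS:
        raise ValueError(f"unsupported time granularity: {granularity}")
    bindings.setdefault(granularity, []).append(job_id)

associate("job-1", "second")
associate("job-2", "hour")
print(bindings)  # -> {'second': ['job-1'], 'hour': ['job-2']}
```

Keeping coarse-grained jobs off the seconds timer means the most frequently firing timer only scans the jobs that actually need second-level precision.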
In one possible embodiment, the association module is configured to:
allocating identification information to the running data, and binding the identification information to the timer corresponding to the time granularity.
In a possible embodiment, based on the apparatus composition of fig. 13, the apparatus further comprises:
a third setting module, configured to set the state information of the running data to a to-be-triggered state in response to the running data being associated with the timer, where the to-be-triggered state indicates that the running data has been bound to the corresponding timer but the target time has not been reached.
In a possible embodiment, based on the apparatus composition of fig. 13, the apparatus further comprises:
a fourth setting module, configured to set the state information of the running data to a to-be-issued state in response to the running data being obtained, where the to-be-issued state indicates that the running data has been obtained but has not been bound to the corresponding timer.
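Taken together, the setting modules above imply a linear state machine for each piece of running data; a minimal sketch (state names follow the description, the transition table itself is an illustrative assumption):

```python
# Hypothetical sketch of the state progression implied by the four
# setting modules: obtained -> bound to timer -> enqueued -> dispatched.
TRANSITIONS = {
    None: "to-be-issued",                 # data obtained, not yet bound to a timer
    "to-be-issued": "to-be-triggered",    # bound to timer, target time not reached
    "to-be-triggered": "to-be-executed",  # ID added to the message queue
    "to-be-executed": "executing",        # ID sent to the target device
}

state = None
history = []
for _ in range(4):
    state = TRANSITIONS[state]
    history.append(state)
print(history)  # -> ['to-be-issued', 'to-be-triggered', 'to-be-executed', 'executing']
```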
In a possible implementation manner, the obtaining module 1301 is further configured to obtain an operation result and an operation log of the operation data from the target device;
based on the apparatus composition of fig. 13, the apparatus further comprises: and the sending module is used for responding to the operation result as operation failure and sending alarm information to the terminal associated with the operation data, wherein the alarm information carries the operation result and the operation log.
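The result-collection and alarm step can be sketched as below; the result values, log format, and `send_alarm` callback are illustrative assumptions rather than the disclosed interface:

```python
# Hypothetical sketch of the failure-alarm step: after execution, the
# operation result and operation log are collected from the target
# device; on failure, alarm information carrying both is sent to the
# terminal associated with the running data.
def collect_and_alert(result, log, send_alarm):
    if result == "failure":
        send_alarm({"result": result, "log": log})
        return True
    return False

alarms = []
fired = collect_and_alert("failure", "traceback: ...", alarms.append)
print(fired, alarms[0]["result"])  # -> True failure
```

Carrying the operation log in the alarm lets the terminal's user diagnose the failed script without a separate round trip to the monitoring platform.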
All the above optional technical solutions can be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
It should be noted that, when the apparatus for processing running data provided in the above embodiment processes running data, the division into the above functional modules is merely used as an example for description. In practical applications, the above functions can be allocated to different functional modules as required, that is, the internal structure of the computer device is divided into different functional modules to complete all or some of the functions described above. In addition, the apparatus for processing running data provided in the above embodiment and the embodiments of the method for processing running data belong to the same concept; for the specific implementation process, refer to the method embodiments, and details are not described herein again.
Fig. 14 is a schematic structural diagram of a computer device according to an embodiment of the present application. As shown in fig. 14, the terminal 1400 is used as an example of the computer device. The terminal 1400 may be the script management end in a server cluster, that is, the device in the server cluster for receiving running data; or the monitoring management platform in the server cluster, that is, the device in the server cluster for collecting the operation result and operation log of each piece of running data; or a candidate device or target device in the server cluster for executing running data. This is not specifically limited in this embodiment of the present application.
Optionally, the device type of the terminal 1400 includes: a smartphone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 1400 may also be referred to as user equipment, a portable terminal, a laptop terminal, a desktop terminal, or by other names.
In general, terminal 1400 includes: a processor 1401, and a memory 1402.
Optionally, the processor 1401 includes one or more processing cores, for example, a 4-core processor or an 8-core processor. Optionally, the processor 1401 is implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). In some embodiments, the processor 1401 includes a main processor and a coprocessor. The main processor is a processor for processing data in an awake state, and is also referred to as a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 1401 is integrated with a GPU (Graphics Processing Unit), and the GPU is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 1401 further includes an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
In some embodiments, memory 1402 includes one or more computer-readable storage media, which are optionally non-transitory. Optionally, memory 1402 also includes high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1402 is used to store at least one program code for execution by processor 1401 to implement the methods of processing operational data provided by the various embodiments herein.
In some embodiments, terminal 1400 may further optionally include: a peripheral device interface 1403 and at least one peripheral device. The processor 1401, the memory 1402, and the peripheral device interface 1403 can be connected by buses or signal lines. Each peripheral device can be connected to the peripheral device interface 1403 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1404, a display 1405, a camera assembly 1406, audio circuitry 1407, a positioning assembly 1408, and a power supply 1409.
The peripheral device interface 1403 can be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 1401 and the memory 1402. In some embodiments, the processor 1401, memory 1402, and peripheral interface 1403 are integrated on the same chip or circuit board; in some other embodiments, any one or both of the processor 1401, the memory 1402, and the peripheral device interface 1403 are implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 1404 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 1404 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 1404 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1404 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. Optionally, the radio frequency circuit 1404 communicates with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1404 further includes NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1405 is used to display a UI (User Interface). Optionally, the UI includes graphics, text, icons, video, and any combination thereof. When the display screen 1405 is a touch display screen, the display screen 1405 also has the ability to collect touch signals on or above its surface. The touch signal can be input to the processor 1401 as a control signal for processing. Optionally, the display screen 1405 is also used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there is one display screen 1405, disposed on the front panel of the terminal 1400; in other embodiments, there are at least two display screens 1405, respectively disposed on different surfaces of the terminal 1400 or in a folded design; in still other embodiments, the display screen 1405 is a flexible display screen disposed on a curved surface or a folded surface of the terminal 1400. Optionally, the display screen 1405 is even set in a non-rectangular irregular shape, that is, an irregularly shaped screen. Optionally, the display screen 1405 is made of a material such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 1406 is used to capture images or video. Optionally, camera assembly 1406 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1406 also includes a flash. Optionally, the flash is a monochrome temperature flash, or a bi-color temperature flash. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp and is used for light compensation under different color temperatures.
In some embodiments, the audio circuitry 1407 includes a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1401 for processing or inputting the electric signals to the radio frequency circuit 1404 to realize voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones are respectively disposed at different positions of the terminal 1400. Optionally, the microphone is an array microphone or an omni-directional pick-up microphone. The speaker is then used to convert electrical signals from the processor 1401 or the radio frequency circuit 1404 into sound waves. Alternatively, the speaker is a conventional membrane speaker, or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to human, but also the electric signal can be converted into a sound wave inaudible to human for use in distance measurement or the like. In some embodiments, the audio circuit 1407 also includes a headphone jack.
The positioning component 1408 is used to locate the current geographic position of the terminal 1400 to implement navigation or LBS (Location Based Service). Optionally, the positioning component 1408 is a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
Power supply 1409 is used to power the various components of terminal 1400. Optionally, the power source 1409 is alternating current, direct current, a disposable battery, or a rechargeable battery. When the power source 1409 comprises a rechargeable battery, the rechargeable battery supports either wired or wireless charging. The rechargeable battery is also used to support fast charge technology.
In some embodiments, terminal 1400 also includes one or more sensors 1410. The one or more sensors 1410 include, but are not limited to: acceleration sensor 1411, gyroscope sensor 1412, pressure sensor 1413, fingerprint sensor 1414, optical sensor 1415, and proximity sensor 1416.
In some embodiments, acceleration sensor 1411 detects acceleration magnitudes on three coordinate axes of a coordinate system established with terminal 1400. For example, the acceleration sensor 1411 is used to detect components of the gravitational acceleration in three coordinate axes. Alternatively, the processor 1401 controls the display 1405 to display a user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1411. The acceleration sensor 1411 is also used for acquisition of motion data of a game or a user.
In some embodiments, the gyro sensor 1412 detects a body direction and a rotation angle of the terminal 1400, and the gyro sensor 1412 and the acceleration sensor 1411 cooperate to acquire a 3D motion of the user on the terminal 1400. The processor 1401 realizes the following functions according to the data collected by the gyro sensor 1412: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Optionally, pressure sensors 1413 are disposed on the side frame of terminal 1400 and/or under display 1405. When the pressure sensor 1413 is disposed on the side frame of the terminal 1400, the user can detect the holding signal of the terminal 1400, and the processor 1401 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 1413. When the pressure sensor 1413 is disposed at the lower layer of the display screen 1405, the processor 1401 controls the operability control on the UI interface according to the pressure operation of the user on the display screen 1405. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1414 is used for collecting a fingerprint of a user, and the processor 1401 identifies the user according to the fingerprint collected by the fingerprint sensor 1414, or the fingerprint sensor 1414 identifies the user according to the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, processor 1401 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying for, and changing settings, etc. Optionally, fingerprint sensor 1414 is disposed on the front, back, or side of terminal 1400. When a physical button or vendor Logo is provided on terminal 1400, fingerprint sensor 1414 can be integrated with the physical button or vendor Logo.
The optical sensor 1415 is used to collect ambient light intensity. In one embodiment, processor 1401 controls the display brightness of display 1405 based on the ambient light intensity collected by optical sensor 1415. Specifically, when the ambient light intensity is high, the display luminance of the display screen 1405 is increased; when the ambient light intensity is low, the display brightness of the display screen 1405 is reduced. In another embodiment, the processor 1401 also dynamically adjusts the shooting parameters of the camera assembly 1406 based on the intensity of ambient light collected by the optical sensor 1415.
The proximity sensor 1416, also referred to as a distance sensor, is typically disposed on the front panel of the terminal 1400. The proximity sensor 1416 is used to collect the distance between the user and the front surface of the terminal 1400. In one embodiment, when the proximity sensor 1416 detects that the distance between the user and the front surface of the terminal 1400 gradually decreases, the processor 1401 controls the display screen 1405 to switch from a bright-screen state to a dark-screen state; when the proximity sensor 1416 detects that the distance between the user and the front surface of the terminal 1400 gradually increases, the processor 1401 controls the display screen 1405 to switch from the dark-screen state to the bright-screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 14 is not intended to be limiting with respect to terminal 1400 and can include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be employed.
Fig. 15 is a schematic structural diagram of a computer device 1500 according to an embodiment of the present application. The computer device 1500 may vary greatly due to differences in configuration or performance. The computer device 1500 includes one or more processors (CPUs) 1501 and one or more memories 1502, where the memory 1502 stores at least one computer program, and the at least one computer program is loaded and executed by the one or more processors 1501 to implement the method for processing running data provided in the foregoing embodiments. Optionally, the computer device 1500 further has components such as a wired or wireless network interface, a keyboard, and an input/output interface to facilitate input and output, and the computer device 1500 further includes other components for implementing device functions, which are not described herein again.
In an exemplary embodiment, a computer-readable storage medium, such as a memory including at least one computer program, which is executable by a processor in a terminal to perform the processing method of the operation data in the above-described embodiments, is also provided. For example, the computer-readable storage medium includes a ROM (Read-Only Memory), a RAM (Random-Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product or computer program is also provided, comprising one or more program codes, the one or more program codes being stored in a computer readable storage medium. The one or more processors of the computer apparatus can read the one or more program codes from the computer-readable storage medium, and the one or more processors execute the one or more program codes, so that the computer apparatus can execute to perform the processing method of the operation data in the above-described embodiment.
Those skilled in the art will appreciate that all or part of the steps for implementing the above embodiments can be implemented by hardware, or can be implemented by a program instructing relevant hardware, and optionally, the program is stored in a computer readable storage medium, and optionally, the above mentioned storage medium is a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (15)

1. A method for processing operational data, the method comprising:
acquiring running data to be processed, wherein the running data is code data running at a target moment;
in response to reaching the target time, determining a plurality of candidate devices supporting the operating environment based on the operating environment of the operating data;
determining target equipment from the candidate equipment based on the load conditions of the candidate equipment, wherein the load condition of the target equipment meets a target condition;
executing, by the target device, the operational data.
2. The method of claim 1, wherein the executing the operational data by the target device comprises:
in response to the target time being reached, adding identification information of a plurality of pieces of data periodically executed at the target time into a message queue, wherein the plurality of pieces of data comprise the running data;
and for the operating data in the message queue, sending the identification information of the operating data to the target equipment, and loading and executing the operating data by the target equipment based on the identification information.
3. The method according to claim 2, wherein after the adding the identification information of the plurality of data periodically executed at the target time to a message queue, the method further comprises:
and in response to the identification information of the running data being added to the message queue, setting the state information of the running data to a to-be-executed state, wherein the to-be-executed state is used for representing that the target time is reached but the running data is not sent to corresponding target equipment.
4. The method of claim 2, wherein after sending the identification information of the operational data to the target device, the method further comprises:
and responding to the identification information of the running data sent to the target equipment, and setting the state information of the running data to be in an execution state, wherein the execution state is used for representing that the running data is loaded and executed by the corresponding target equipment.
5. The method of any of claims 2 to 4, wherein after the executing the operational data by the target device, the method further comprises:
and in response to the running data being circularly executed code data, adding the identification information of the running data to the tail of the message queue.
6. The method of claim 1, wherein the load condition comprises at least one of central processor CPU usage, disk input/output I/O usage, or memory occupancy, and wherein determining the target device from the plurality of candidate devices based on the load conditions of the plurality of candidate devices comprises:
determining, from the plurality of candidate devices, a candidate device with the lowest CPU utilization as the target device; or
determining, from the plurality of candidate devices, a candidate device with the lowest disk I/O utilization as the target device; or
determining, from the plurality of candidate devices, a candidate device with the lowest memory occupancy as the target device.
7. The method of claim 1, wherein prior to determining a plurality of candidate devices that support the runtime environment based on the runtime environment of the runtime data in response to reaching the target time, the method further comprises:
associating the operational data with a timer corresponding to the time granularity based on the time granularity of the target time, the timer being used to determine whether the target time is reached.
8. The method of claim 7, wherein the time granularity comprises at least one of seconds, minutes, or hours, and wherein associating the operational data with a timer corresponding to the time granularity based on the time granularity for the target time comprises:
in response to the time granularity of the target time being seconds, associating the operational data with a second timer, the second timer having a minimum timing unit of 1 second; or
in response to the time granularity of the target time being minutes, associating the operational data with a minute timer, the minute timer having a minimum timing unit of 1 minute; or
in response to the time granularity of the target time being hours, associating the operational data with an hour timer, the hour timer having a minimum timing unit of 1 hour.
9. The method of claim 7 or 8, wherein associating the operational data with a timer corresponding to the time granularity comprises:
and distributing identification information for the running data, and binding the identification information with the timer corresponding to the time granularity.
10. The method of claim 7, wherein after associating the operational data with a timer corresponding to the time granularity based on the time granularity of the target time, the method further comprises:
and in response to the running data being associated with the timer, setting the state information of the running data to a to-be-triggered state, wherein the to-be-triggered state is used for representing that the running data is bound to the corresponding timer but does not reach the target time.
11. The method of claim 1, wherein after obtaining the operational data to be processed, the method further comprises:
and in response to the acquisition of the running data, setting the state information of the running data to be in a state to be issued, wherein the state to be issued is used for representing that the running data is acquired but not bound to a corresponding timer.
12. The method of claim 1, wherein after the executing the operational data by the target device, the method further comprises:
acquiring an operation result and an operation log of the operation data from the target equipment;
and responding to the operation result as operation failure, and sending alarm information to a terminal associated with the operation data, wherein the alarm information carries the operation result and the operation log.
13. An apparatus for processing operational data, the apparatus comprising:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring running data to be processed, and the running data is code data running at a target moment;
a first determination module, configured to determine, in response to reaching the target time, a plurality of candidate devices that support the execution environment based on the execution environment of the execution data;
a second determining module, configured to determine a target device from the multiple candidate devices based on load conditions of the multiple candidate devices, where the load condition of the target device meets a target condition;
and the execution module is used for executing the running data through the target equipment.
14. A computer device, characterized in that the computer device comprises one or more processors and one or more memories in which at least one computer program is stored, the at least one computer program being loaded and executed by the one or more processors to implement a method of processing operational data according to any one of claims 1 to 12.
15. A storage medium, in which at least one computer program is stored, the at least one computer program being loaded and executed by a processor to implement a method of processing operational data according to any one of claims 1 to 12.
CN202110518576.7A 2021-05-12 2021-05-12 Method and device for processing running data, computer equipment and storage medium Pending CN113110939A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110518576.7A CN113110939A (en) 2021-05-12 2021-05-12 Method and device for processing running data, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113110939A true CN113110939A (en) 2021-07-13

Family

ID=76722114

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110518576.7A Pending CN113110939A (en) 2021-05-12 2021-05-12 Method and device for processing running data, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113110939A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113792192A (en) * 2021-08-09 2021-12-14 万翼科技有限公司 Open source service function support system and service function control method
CN113792192B (en) * 2021-08-09 2022-12-30 万翼科技有限公司 Open source service function support system and service function control method
CN114996117A (en) * 2022-03-28 2022-09-02 湖南智擎科技有限公司 Client GPU application evaluation system and method for SaaS mode
CN114996117B (en) * 2022-03-28 2024-02-06 湖南智擎科技有限公司 Client GPU application evaluation system and method for SaaS mode
CN115048087A (en) * 2022-08-15 2022-09-13 江苏博云科技股份有限公司 Method, equipment and storage medium for realizing online IDE tool in Kubernetes environment
CN115150467A (en) * 2022-09-01 2022-10-04 武汉绿色网络信息服务有限责任公司 Data access method and device and electronic equipment
CN116048643A (en) * 2023-03-08 2023-05-02 苏州浪潮智能科技有限公司 Equipment operation method, system, device, storage medium and electronic equipment

Similar Documents

Publication Publication Date Title
US11899687B2 (en) Elastic in-memory database provisioning on database-as-a-service
CN113110939A (en) Method and device for processing running data, computer equipment and storage medium
US11954122B2 (en) In-memory database-managed container volume replication
CN111930521A (en) Method and device for deploying application, electronic equipment and readable storage medium
WO2018213311A1 (en) Distributed versioning of applications using cloud-based systems
US11803514B2 (en) Peer-to-peer delta image dispatch system
CN111083042B (en) Template message pushing method, device, equipment and storage medium
US11310289B2 (en) Systems and methods for generating a shortcut associated with a rich communication services messaging session
WO2021244267A1 (en) Application program transplantation method and apparatus, device, and medium
CN111090687A (en) Data processing method, device and system and computer readable storage medium
CN111338910A (en) Log data processing method, log data display method, log data processing device, log data display device, log data processing equipment and log data storage medium
CN114205365B (en) Application interface migration system, method and related equipment
CN112162843A (en) Workflow execution method, device, equipment and storage medium
CN113742366A (en) Data processing method and device, computer equipment and storage medium
CN113553178A (en) Task processing method and device and electronic equipment
CN116541142A (en) Task scheduling method, device, equipment, storage medium and computer program product
CN110995842A (en) Method, device and equipment for downloading service data and storage medium
CN113138771A (en) Data processing method, device, equipment and storage medium
CN111125602A (en) Page construction method, device, equipment and storage medium
CN113190362B (en) Service calling method and device, computer equipment and storage medium
CN113986825B (en) System, method and device for data migration, electronic equipment and readable storage medium
CN112995587B (en) Electronic equipment monitoring method, system, computer equipment and storage medium
CN111522798B (en) Data synchronization method, device, equipment and readable storage medium
CN113935427A (en) Training task execution method and device, electronic equipment and storage medium
CN112148499A (en) Data reporting method and device, computer equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination