CN115689405B - Data processing method, device and system and computer storage medium - Google Patents


Info

Publication number
CN115689405B
CN115689405B (application CN202310005073.9A)
Authority
CN
China
Prior art keywords
data
simulation
shared
configuration
simulation calculation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310005073.9A
Other languages
Chinese (zh)
Other versions
CN115689405A (en)
Inventor
朱雨童
庄晓天
吴盛楠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Zhenshi Information Technology Co Ltd
Original Assignee
Beijing Jingdong Zhenshi Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Zhenshi Information Technology Co Ltd filed Critical Beijing Jingdong Zhenshi Information Technology Co Ltd
Priority to CN202310005073.9A priority Critical patent/CN115689405B/en
Publication of CN115689405A publication Critical patent/CN115689405A/en
Application granted granted Critical
Publication of CN115689405B publication Critical patent/CN115689405B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The present disclosure relates to data processing methods, apparatuses, systems, and computer-readable storage media, and relates to the field of computer technology. The data processing method comprises the following steps: in response to the triggering of a target simulation calculation task in a simulation experiment, acquiring, from a service data model, the simulation data shared by a plurality of simulation calculation tasks in the simulation experiment as shared simulation data; storing the shared simulation data in a memory database corresponding to the simulation experiment; and performing simulation calculation on the target simulation calculation task and on a subsequent simulation calculation task of the simulation experiment according to the shared simulation data corresponding to the simulation experiment in the memory database, wherein the subsequent simulation calculation task is performed after the target simulation calculation task. The method and the device can improve data processing efficiency in the simulation calculation process and increase the resource utilization rate.

Description

Data processing method, device and system and computer storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a data processing method, apparatus and system, and a computer-readable storage medium.
Background
With the rise of digital twin technology, in logistics network scenarios the twin technology enables digital mapping of logistics network configuration, state, and load, so that a logistics network simulation model integrating the characteristics of each network link can be constructed. Before a network planning or operational scheme is issued, the logistics network simulation model can verify the operating effect of the whole scheme in advance, providing a virtual test and strategy-effect verification tool in the twin scenario for applications such as logistics node layout and line strategy adjustment.
In the related art, the multiple simulation calculation tasks of each simulation experiment are performed independently, and each simulation calculation task separately acquires basic configuration data and service load data from the twin data model in order to perform its simulation calculation.
Disclosure of Invention
In the related art, the basic configuration data and service load data that the multiple simulation calculation tasks of each simulation experiment acquire from the twin data model are identical, so tasks belonging to the same simulation experiment repeatedly fetch the same data from the twin data model. Operations such as accessing and querying the twin data model are therefore executed repeatedly, resulting in high data acquisition cost, low data processing efficiency, and wasted computing resources.
To address these technical problems, the present disclosure provides a solution that can improve data processing efficiency in the simulation calculation process and increase the resource utilization rate.
According to a first aspect of the present disclosure, there is provided a data processing method comprising: in response to the triggering of a target simulation calculation task in a simulation experiment, acquiring, from a service data model, simulation data shared by a plurality of simulation calculation tasks in the simulation experiment as shared simulation data; storing the shared simulation data in a memory database corresponding to the simulation experiment; and performing simulation calculation on the target simulation calculation task and a subsequent simulation calculation task of the simulation experiment according to the shared simulation data corresponding to the simulation experiment in the memory database, wherein the subsequent simulation calculation task is performed after the target simulation calculation task.
In some embodiments, the shared simulation data includes traffic load data, the traffic load data being used to drive simulation calculations of a simulation calculation task, and obtaining simulation data shared by a plurality of simulation calculation tasks in the simulation experiment includes: dividing the load time period of the business load data into a plurality of sub time periods; and for each sub-time period, acquiring the business load data corresponding to each sub-time period from the business data model.
In some embodiments, the service data model is generated based on a service network including a plurality of service nodes, each sub-period corresponds to a plurality of pieces of service load data, each piece of service load data includes a load time, an associated service node, and service load content, and storing the shared simulation data includes: classifying the service load data corresponding to each sub-time period according to a preset time interval and an associated service node to obtain a plurality of load data sets, wherein the service load data in each load data set comprises the same associated service node, and the time difference between the earliest load time and the latest load time is the duration of the time interval, and the duration of the time interval is smaller than the duration of the sub-time period; and storing a plurality of load data sets corresponding to each sub-time period in a memory database corresponding to the simulation experiment.
In some embodiments, storing the plurality of load data sets corresponding to each sub-period includes: carrying out serialization processing on each load data group corresponding to each sub-time period; and storing the plurality of load data sets after the serialization processing in a memory database corresponding to the simulation experiment.
In some embodiments, performing the simulation calculation on the target simulation calculation task and a subsequent simulation calculation task of the simulation experiment includes: determining the ending time of the latest sub-time period corresponding to the stored business load data in the memory database as the reference time in response to the trigger of the event for indicating to acquire the business load data; acquiring a serialized load data set which corresponds to the event and belongs to the associated service node from the memory database under the condition that the execution time stamp of the event is smaller than or equal to the reference time; performing deserialization processing on the load data set subjected to the serialization processing and corresponding to the associated service node to obtain the load data set corresponding to the event; and according to the load data set obtained by the deserialization processing, performing simulation calculation on the target simulation calculation task and the subsequent simulation calculation task.
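The event-driven load access described above can be sketched in a few lines of Python. This is an illustrative sketch only: a plain dict stands in for the memory database, pickle for the (de)serialization, and integer timestamps for load moments; all names are assumptions, not the patent's implementation.

```python
import pickle

memory_db = {}              # stand-in for the memory database (e.g. a Redis cache)
loaded_subperiod_ends = []  # end times of sub-periods whose load data is already stored

def store_load_group(node, bucket, records, subperiod_end):
    """Serialize one load data set (for one associated service node) and store it."""
    memory_db[f"load:{node}:{bucket}"] = pickle.dumps(records)
    loaded_subperiod_ends.append(subperiod_end)

def fetch_load_for_event(node, bucket, event_timestamp):
    """Serve an 'acquire load data' event only if cached data already covers it."""
    reference_time = max(loaded_subperiod_ends)  # end of the latest stored sub-period
    if event_timestamp <= reference_time:
        raw = memory_db.get(f"load:{node}:{bucket}")
        return pickle.loads(raw) if raw is not None else []
    return None  # the event's time is not yet covered by cached load data
```

A caller would compare the event's execution timestamp against the reference time before deserializing, exactly as the embodiment describes.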
In some embodiments, obtaining simulation data shared by a plurality of simulation computing tasks in the simulation experiment comprises: and under the condition that the acquisition state of the shared simulation data is not written in the memory database corresponding to the simulation experiment, acquiring the shared simulation data from the service data model, wherein the acquisition state represents whether the shared simulation data is being acquired or has been acquired.
In some embodiments, obtaining the shared simulation data includes: under the condition that the acquisition state of the shared simulation data is not written in a memory database corresponding to the simulation experiment, writing the acquisition state of the shared simulation data into the memory database to be a first acquisition state, wherein the first acquisition state represents that the corresponding shared simulation data is being acquired; acquiring the shared simulation data from the service data model; and under the condition that all the shared simulation data are stored in a memory database corresponding to the simulation experiment, writing the acquisition state of the shared simulation data into the memory database as a second acquisition state, wherein the second acquisition state represents that the corresponding shared simulation data are acquired.
In some embodiments, the shared simulation data includes a plurality of pieces of first configuration data belonging to different configuration categories, the plurality of pieces of first configuration data being used for configuration initialization of a simulation model that performs simulation calculations, and storing the shared simulation data includes: carrying out serialization processing on the first configuration data corresponding to each configuration type; and storing the first configuration data after the serialization processing corresponding to each configuration type in a memory database corresponding to the simulation experiment.
In some embodiments, performing the simulation calculation on the target simulation calculation task and a subsequent simulation calculation task of the simulation experiment includes: for the target simulation calculation task and the subsequent simulation calculation task, acquiring serialized first configuration data corresponding to the simulation experiment from the memory database; performing deserialization processing on the serialized first configuration data acquired from the memory database to obtain first configuration data corresponding to the simulation experiment; according to first configuration data obtained through deserialization processing, carrying out configuration initialization on a simulation model corresponding to the simulation experiment; and performing simulation calculation on the target simulation calculation task and the subsequent simulation calculation task by using the simulation model after configuration initialization.
In some embodiments, configuring and initializing a simulation model corresponding to the simulation experiment includes: according to the first configuration data obtained through the deserialization processing and the second configuration data corresponding to the target simulation calculation task, carrying out configuration initialization on a simulation model corresponding to the target simulation calculation task, wherein the second configuration data corresponding to the target simulation calculation task is incremental data of the first configuration data; and carrying out configuration initialization on a simulation model corresponding to the follow-up simulation calculation task according to the first configuration data and the second configuration data corresponding to the follow-up simulation calculation task, wherein the second configuration data corresponding to the follow-up simulation calculation task is incremental data of the first configuration data.
In some embodiments, after the simulation calculation of the simulation experiment is finished, the shared simulation data corresponding to the simulation experiment in the memory database is cleared.
In some embodiments, the in-memory database includes a cache.
In some embodiments, the business data model comprises a twin data model; or the business data model comprises a twin data model of a logistics network, the logistics network comprises a plurality of logistics nodes and a plurality of logistics lines connected between the logistics nodes, the shared simulation data comprises first configuration data and business load data, the first configuration data comprises logistics line configuration data and logistics node configuration data, and the business load data comprises package data of the logistics nodes.
According to a second aspect of the present disclosure, there is provided a data processing apparatus comprising: the acquisition module is configured to respond to the triggering of a target simulation calculation task in a simulation experiment, and acquire simulation data shared by a plurality of simulation calculation tasks in the simulation experiment from a service data model as shared simulation data; a storage module configured to store the shared simulation data in a memory database corresponding to the simulation experiment; and the simulation calculation module is configured to perform simulation calculation on the target simulation calculation task and a subsequent simulation calculation task of the simulation experiment according to the shared simulation data corresponding to the simulation experiment in the memory database, wherein the subsequent simulation calculation task is performed after the target simulation calculation task.
According to a third aspect of the present disclosure, there is provided a data processing apparatus comprising: a memory; and a processor coupled to the memory, the processor configured to perform the data processing method of any of the embodiments described above based on instructions stored in the memory.
According to a fourth aspect of the present disclosure there is provided a data processing system comprising: the data processing apparatus as in any above embodiments.
In some embodiments, the data processing system further comprises: and the memory database is configured to store shared simulation data corresponding to the simulation experiment.
In some embodiments, the memory database includes a plurality of data storage areas, each data storage area configured to store shared simulation data corresponding to a simulation experiment corresponding to each data storage area, each data storage area corresponding to a simulation experiment.
According to a fifth aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement a data processing method according to any of the embodiments described above.
In the embodiment, the data processing efficiency in the simulation calculation process can be improved, and the resource utilization rate is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description, serve to explain the principles of the disclosure.
The disclosure may be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a flow chart illustrating a data processing method according to some embodiments of the present disclosure;
FIG. 2 is a schematic diagram illustrating a logistic network simulation according to some embodiments of the present disclosure;
FIG. 3 is a schematic diagram illustrating a data processing method according to some embodiments of the present disclosure;
FIG. 4 is a flow diagram illustrating configuration cache creation or access according to some embodiments of the present disclosure;
FIG. 5 is a flow diagram illustrating a load data cache creation thread job according to some embodiments of the present disclosure;
FIG. 6 is a flow diagram illustrating load cache access in a simulation calculation in accordance with some embodiments of the present disclosure;
FIG. 7 is a block diagram illustrating a data processing apparatus according to some embodiments of the present disclosure;
FIG. 8 is a block diagram illustrating a data processing apparatus according to further embodiments of the present disclosure;
FIG. 9 is a block diagram illustrating a data processing system according to some embodiments of the present disclosure;
FIG. 10 is a schematic diagram illustrating a relationship between a simulation experiment, an in-memory database, and a business data model, according to some embodiments of the present disclosure;
FIG. 11 is a block diagram illustrating a computer system for implementing some embodiments of the present disclosure.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless it is specifically stated otherwise.
Meanwhile, it should be understood that the sizes of the respective parts shown in the drawings are not drawn in actual scale for convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any specific values should be construed as merely illustrative, and not a limitation. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
Fig. 1 is a flow chart illustrating a data processing method according to some embodiments of the present disclosure.
As shown in fig. 1, the data processing method includes: step S110, in response to the triggering of a target simulation calculation task in a simulation experiment, acquiring, from a service data model, simulation data shared by a plurality of simulation calculation tasks in the simulation experiment as shared simulation data; step S120, storing the shared simulation data in a memory database corresponding to the simulation experiment; and step S130, performing simulation calculation on the target simulation calculation task and a subsequent simulation calculation task of the simulation experiment according to the shared simulation data corresponding to the simulation experiment in the memory database, wherein the subsequent simulation calculation task is performed after the target simulation calculation task. In some embodiments, the data processing method is performed by a data processing apparatus. In the field of simulation computing, a simulation calculation task may also be referred to as a simulation sample.
In the above embodiment, for the simulation experiment of multiple simulation calculation tasks, by storing the simulation data shared by multiple simulation calculation tasks into the corresponding memory database, the data required by the simulation calculation of the simulation calculation tasks can be quickly acquired from the memory database, so that multiple simulation calculation tasks of the same simulation experiment can be prevented from repeatedly acquiring the shared simulation data from the service data model, the cost of data acquisition is reduced, the data processing efficiency is improved, the consumption of calculation resources in the simulation process is reduced, and the utilization rate of the calculation resources is improved. The memory database reads and writes the memory, so that the speed is higher, and the data processing efficiency can be improved.
In some embodiments, the in-memory database includes a cache. The cache may be a local cache or a distributed cache. For example, the cache is a cloud cache. For another example, the cache is a Redis cache.
In some embodiments, the business data model comprises a twin data model, i.e., a digital twin data model.
In some embodiments, obtaining simulation data shared by a plurality of simulation computing tasks in a simulation experiment includes: and under the condition that the acquisition state of the simulation data is not written in the memory database corresponding to the simulation experiment, acquiring the shared simulation data from the service data model. The acquisition status characterizes whether the shared simulation data is being acquired or has been acquired. For example, the target simulation calculation task is a simulation calculation task for judging an acquisition state of unwritten simulation data in a memory database corresponding to a simulation experiment. For another example, the target simulation calculation task may be a simulation calculation task whose execution time satisfies a preset condition among a plurality of simulation calculation tasks. The preset condition may be the earliest or next earliest execution time or other conditions.
In the above embodiment, whether the shared simulation data is acquired is determined by determining whether the acquisition state is written, so that multiple simulation calculation tasks in the same simulation experiment can be prevented from repeatedly acquiring the shared simulation data, thereby further reducing the consumption of simulation resources and improving the data processing efficiency of the simulation calculation.
In some embodiments, obtaining shared simulation data includes the following steps.
First, in a case where the acquisition state of the shared simulation data has not been written in the memory database corresponding to the simulation experiment, the acquisition state of the shared simulation data is written into the memory database as a first acquisition state, the first acquisition state representing that the corresponding shared simulation data is being acquired.
Then, shared simulation data is acquired from the business data model.
Finally, in a case where all the shared simulation data has been stored in the memory database corresponding to the simulation experiment, the acquisition state of the shared simulation data is written into the memory database as a second acquisition state, the second acquisition state representing that the corresponding shared simulation data has been acquired.
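The acquisition-state handshake above can be sketched as follows (illustrative only; a dict stands in for the memory database, and the state values and key names are assumptions). With a real shared cache such as Redis, the check-then-write of the state flag would need to be atomic, e.g. via `SET ... NX`, to be safe under concurrent tasks.

```python
FETCHING, READY = "fetching", "ready"   # first and second acquisition states
memory_db = {}                          # stand-in for the memory database

def acquire_shared_data(experiment_id, fetch_fn):
    """Write a 'fetching' flag, pull the data once, then mark it 'ready'."""
    state_key = f"exp:{experiment_id}:state"
    data_key = f"exp:{experiment_id}:data"
    if state_key not in memory_db:        # acquisition state not yet written
        memory_db[state_key] = FETCHING   # first state: being acquired
        memory_db[data_key] = fetch_fn()  # fetch from the service data model
        memory_db[state_key] = READY      # second state: fully stored
    return memory_db.get(data_key), memory_db[state_key]
```

Any task that finds a state already written skips the fetch, which is how duplicate acquisition is avoided.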
In some embodiments, the shared simulation data includes a plurality of pieces of first configuration data belonging to different configuration categories for configuration initialization of a simulation model that performs the simulation calculations. The method for acquiring the simulation data shared by a plurality of simulation calculation tasks in the simulation experiment comprises the following steps: for each configuration type, when the acquisition state of the first configuration data of each configuration type is not written in the memory database corresponding to the simulation experiment, the first configuration data corresponding to each configuration type is acquired from the service data model. The acquisition status characterizes whether the first configuration data of each configuration class is being acquired or has been acquired. The acquisition state here refers to a state in which the first configuration data is acquired from the service data model.
In the above embodiment, whether the first configuration data is acquired is determined by determining whether the acquisition state is written, so that a plurality of simulation calculation tasks in the same simulation experiment can be prevented from repeatedly acquiring the first configuration data, thereby further reducing the consumption of simulation resources and improving the data processing efficiency of the simulation calculation.
In some embodiments, taking the example that the shared simulation data includes a plurality of pieces of first configuration data, storing the shared simulation data may be achieved as follows.
First, the first configuration data corresponding to each configuration type is subjected to serialization processing.
Then, in the memory database corresponding to the simulation experiment, the first configuration data after the serialization processing corresponding to each configuration type is stored.
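The per-category serialization and storage can be sketched like this, using Python's pickle as one possible serialization format (an assumption; the patent does not name one) and a dict as a stand-in for the memory database. The function and key names are illustrative.

```python
import pickle

memory_db = {}  # stand-in for the memory database

def store_first_config(experiment_id, config_by_category):
    """Serialize the first configuration data of each category and store it under its own key."""
    for category, config in config_by_category.items():
        memory_db[f"exp:{experiment_id}:cfg:{category}"] = pickle.dumps(config)

def load_first_config(experiment_id, category):
    """Deserialize one category of first configuration data from the cache."""
    return pickle.loads(memory_db[f"exp:{experiment_id}:cfg:{category}"])
```

Keying each configuration category separately lets a task fetch and deserialize only the categories it needs.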
In some embodiments, performing the simulation calculation on the target simulation calculation task and the subsequent simulation calculation task of the simulation experiment includes the following steps.
Firstly, for a target simulation calculation task and a subsequent simulation calculation task, serialized first configuration data corresponding to a simulation experiment is obtained from a memory database.
And secondly, performing deserialization processing on the serialized first configuration data acquired from the memory database to obtain the first configuration data corresponding to the simulation experiment.
Then, according to the first configuration data obtained through the deserialization processing, the simulation model corresponding to the simulation experiment is configured and initialized.
And finally, performing simulation calculation on the target simulation calculation task and the subsequent simulation calculation task by using the simulation model after configuration initialization.
In some embodiments, configuring and initializing a simulation model corresponding to a simulation experiment includes: according to the first configuration data obtained through the deserialization processing and the second configuration data corresponding to the target simulation calculation task, carrying out configuration initialization on a simulation model corresponding to the target simulation calculation task; and carrying out configuration initialization on the simulation model corresponding to the subsequent simulation calculation task according to the first configuration data obtained through the deserialization processing and the second configuration data corresponding to the subsequent simulation calculation task. The second configuration data corresponding to the target simulation computing task is delta data of the first configuration data. The second configuration data corresponding to the subsequent simulation calculation task is incremental data of the first configuration data.
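The overlay of a task's incremental (second) configuration data on the shared first configuration data might look like the following sketch, assuming flat dictionary-shaped configuration (an illustrative assumption; real configuration data would likely be richer).

```python
def init_model_config(first_config, second_config):
    """Build a task's effective model configuration from shared plus incremental data."""
    merged = dict(first_config)   # shared config, common to all tasks in the experiment
    merged.update(second_config)  # per-task incremental data overrides or extends it
    return merged
```

Each task thus reuses the cached first configuration data and only supplies its own delta.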
In some embodiments, obtaining the first configuration data corresponding to each configuration category includes the following steps.
First, for each configuration type, when the acquisition state of the first configuration data of each configuration type is not written in the memory database corresponding to the simulation experiment, the acquisition state of the first configuration data of each configuration type is written in the memory database as the first acquisition state. The first acquisition state characterizes the corresponding first configuration data being acquired.
Then, first configuration data corresponding to each configuration category is acquired from the business data model.
Finally, when the first configuration data of all the configuration types has been stored in the memory database corresponding to the simulation experiment, the acquisition state of the first configuration data of each configuration type is written as the second acquisition state. The second acquisition state characterizes that the corresponding first configuration data has been acquired.
In some embodiments, the shared simulation data includes traffic load data for driving simulation computations of the simulation computation task.
In some embodiments, the traffic load data is obtained from the traffic data model without writing the obtaining state of the traffic load data in the memory database corresponding to the simulation experiment. The acquisition status characterizes whether traffic load data is being acquired or has been acquired. The acquisition state herein refers to a state in which traffic load data is acquired from the traffic data model.
In the above embodiment, whether the service load data is acquired is determined by determining whether the acquisition state is written, so that repeated acquisition of the service load data by a plurality of simulation calculation tasks in the same simulation experiment can be avoided, thereby further reducing the consumption of simulation resources and improving the data processing efficiency of the simulation calculation.
In some embodiments, in a case where the acquisition state of the traffic load data has not been written in the memory database corresponding to the simulation experiment, the acquisition state of the traffic load data is written into the memory database as a first acquisition state. The first acquisition state characterizes that the corresponding traffic load data is being acquired. The traffic load data is then acquired from the traffic data model. When all the traffic load data has been stored in the memory database corresponding to the simulation experiment, the acquisition state of the traffic load data is written into the memory database as a second acquisition state. The second acquisition state characterizes that the corresponding traffic load data has been acquired.
In some embodiments, taking the example that the shared simulation data includes service load data, obtaining the simulation data shared by a plurality of simulation computing tasks in the simulation experiment may be achieved as follows.
First, a load period of traffic load data is divided into a plurality of sub-periods.
Then, for each sub-time period, acquiring service load data corresponding to each sub-time period from the service data model. By dividing the load time period into a plurality of sub-time periods and acquiring the business load data by taking the sub-time period as a unit, the business load data are acquired and stored in batches, and the efficiency of data acquisition and storage can be improved, so that the data processing efficiency in the simulation calculation process is improved.
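Splitting the load time period into sub-periods for batched acquisition can be sketched as follows (one straightforward half-open-interval split; the function name and the use of `datetime` are illustrative assumptions).

```python
from datetime import datetime, timedelta

def split_load_period(start, end, step):
    """Divide the load time period [start, end) into sub-periods of at most `step`."""
    sub_periods = []
    t = start
    while t < end:
        sub_periods.append((t, min(t + step, end)))  # clamp the final sub-period at `end`
        t += step
    return sub_periods
```

Each resulting sub-period then corresponds to one batch fetched from the service data model.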
In some embodiments, taking the example that the shared simulation data includes first configuration data and service load data, and the first configuration data includes service nodes of the service network and service lines between the service nodes, after the first configuration data is acquired, the service load data may be acquired according to the service nodes and the service lines in the first configuration data. In other embodiments, the service load data may be obtained in other manners, which will not be described herein.
In some embodiments, a traffic data model is generated based on a traffic network comprising a plurality of traffic nodes, each sub-period corresponding to a plurality of pieces of traffic load data, each piece of traffic load data comprising a load moment, an associated traffic node, and traffic load content. In this case, storing the shared simulation data includes the following steps.
First, the business load data corresponding to each sub-period is classified according to a preset time interval and the associated business node, yielding multiple load data groups. The business load data in each load data group share the same associated business node, and the time difference between the earliest and latest load moments is at most the duration of the time interval. The associated business node of a piece of business load data is the business node associated with that data. The duration of the time interval is less than the duration of the sub-period.
Then, in the memory database corresponding to the simulation experiment, a plurality of load data sets corresponding to each sub-period are stored. By dividing the business load data corresponding to each sub-time period into finer granularity, the load data groups with finer granularity relative to each sub-time period are stored in the memory database, so that the memory pressure brought when the business load data are taken out from the memory database in the simulation calculation process can be reduced, and the data processing efficiency of the simulation process is improved. In addition, by storing the traffic load data in fine granularity, the difficulty of memory reclamation can be reduced.
In some embodiments, storing the multiple load data groups corresponding to each sub-period includes: serializing each load data group corresponding to the sub-period; and storing the serialized load data groups in the in-memory database corresponding to the simulation experiment. Storing the serialized data in the in-memory database enables cross-platform storage and network transmission of objects such as the business load data.
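A minimal sketch of this serialize-then-store step, assuming a dict in place of the Redis cache and Python's pickle in place of the protocol-buffer serialization named later in the text; the key layout ([ExpID]:[FlowID]:[TimeSlot]:[NodeID]) loosely follows the KEY composition described in later embodiments.

```python
import pickle

def store_groups(cache: dict, exp_id: str, flow_id: str,
                 time_slot: int, groups_by_node: dict) -> None:
    """Serialize each load data group and store it under a composite key."""
    for node_id, group in groups_by_node.items():
        key = f"{exp_id}:{flow_id}:{time_slot}:{node_id}"
        cache[key] = pickle.dumps(group)   # byte array, ready for a cache SET

def load_group(cache: dict, exp_id: str, flow_id: str,
               time_slot: int, node_id: str):
    """Fetch and deserialize one load data group."""
    return pickle.loads(cache[f"{exp_id}:{flow_id}:{time_slot}:{node_id}"])

cache = {}
# one group for originating node "N7" in the time slot starting at 0
store_groups(cache, "EXP1", "F1", 0, {"N7": [(5, "N7", "pkg-a")]})
assert load_group(cache, "EXP1", "F1", 0, "N7") == [(5, "N7", "pkg-a")]
```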
In some embodiments, performing the simulation calculation on the target simulation calculation task and the subsequent simulation calculation task of the simulation experiment includes the following steps.
First, in response to the triggering of an event indicating acquisition of business load data, the end time of the latest sub-period whose business load data is stored in the in-memory database is determined as the reference time. The event indicating acquisition of business load data is a periodically triggered event carrying an execution timestamp. The execution timestamp characterizes the trigger time of the event.
Second, when the execution timestamp of the event is less than or equal to the reference time, the serialized load data group that corresponds to the event and belongs to the associated business node is acquired from the in-memory database. An execution timestamp less than or equal to the reference time means that the corresponding business load data has been fully stored in the in-memory database. When the execution timestamp of the event is greater than the reference time, the comparison is repeated until the execution timestamp becomes less than or equal to the reference time.
And then, performing deserialization processing on the load data set subjected to the serialization processing and corresponding to the associated service node to obtain the load data set corresponding to the event.
And finally, according to the load data set obtained by the deserialization processing, performing simulation calculation on the target simulation calculation task and the subsequent simulation calculation task.
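The timestamp-versus-reference-time gate in the steps above can be sketched as a simple predicate. This is an illustrative fragment, not the patent's implementation; the names (try_process, reference_time) are assumptions, and in practice the "wait" branch would re-query the reference time from the cache.

```python
def try_process(event_ts: float, reference_time: float) -> str:
    """Decide whether a load event's data is ready in the cache.

    The cache-filling thread advances reference_time as sub-periods finish;
    an event may only fetch its load data group once the reference time has
    caught up with the event's execution timestamp.
    """
    if event_ts <= reference_time:
        return "fetch-and-simulate"   # data covering this timestamp is cached
    return "wait"                     # keep polling the reference time

assert try_process(10.0, 24.0) == "fetch-and-simulate"
assert try_process(30.0, 24.0) == "wait"
```

Because the simulation only blocks when it outruns the loader, the loading thread and the simulation calculation overlap rather than running strictly one after the other.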
In this embodiment, the business load data is fetched from the in-memory database at load-data-group granularity for simulation calculation, which relieves the pressure of pulling cached data into the simulation calculation, ensures data loading efficiency, and reduces the resource consumption of the simulation calculation. In addition, by comparing the execution timestamp of the event with the reference time to decide whether business load data can be loaded from the in-memory database, the simulation calculation and the loading of business load data can overlap (execute in parallel), hiding the latency of data loading and reducing simulation resource consumption.
In some embodiments, after all simulation calculations of the simulation experiment are completed, the shared simulation data corresponding to the simulation experiment in the memory database is purged.
In some embodiments, the data processing method of the present disclosure may be applied to the field of logistics network simulation. In this case, the business data model includes a twin data model (digital twin data model) of a logistics network including a plurality of logistics nodes and a plurality of logistics lines connecting the logistics nodes, the shared simulation data includes first configuration data including logistics line configuration data and logistics node configuration data, and the business load data includes package data of the logistics nodes. The logistics network belongs to one of the service networks, the logistics node belongs to the service node of the service network, and the logistics line belongs to the service line of the service network.
The data processing method of the present disclosure will be described in detail below taking a physical distribution network simulation application in a digital twin scenario as an example.
Fig. 2 is a schematic diagram illustrating a logistic network simulation according to some embodiments of the present disclosure.
As shown in FIG. 2, other external services invoke the simulation calculation service through external simulation sample calls to input task requirements. The task requirements input to the simulation calculation service in the external simulation sample call process include simulation calculation task data and a network incremental configuration. According to the simulation calculation task data in the external simulation sample call, the simulation calculation service acquires the business load stimulus and the network base configuration corresponding to the simulation calculation task data from the digital twin data model of the logistics network. The simulation calculation service then performs simulation calculation according to the business load stimulus, the network base configuration, and the network incremental configuration, and outputs a simulation conclusion. The business load stimulus is the business load data, the network base configuration is the first configuration data in some embodiments of the present disclosure, and the network incremental configuration is the second configuration data in some embodiments of the present disclosure.
In the field of simulation calculation, a simulation sample characterizes one simulation calculation run, equivalent to a simulation calculation task in the foregoing embodiments. The simulation application in the twin scenario obtains the network base configuration and the business load stimulus of logistics nodes, transportation lines, and the like from the twin data model, constructs a virtual simulation model of the logistics network, treats the business to be verified as the network incremental configuration and integrates it into the simulation model, completes the initialization of the simulation model based on the base data, drives the simulation calculation, and predicts policy effects.
Fig. 3 is a schematic diagram illustrating a data processing method according to some embodiments of the present disclosure.
As shown in FIG. 3, on top of the underlying simulation computing framework of FIG. 2, the simulation sample calculation service provides an external call interface that other external services can invoke to input task requirements.
FIG. 3 illustrates the task requirements transmitted through the external call interface, including a multi-sample simulation experiment ID, the total sample size of the experiment run, a logistics network baseline configuration ID, a simulation sample ID, simulation start and stop times, the simulation-sample-specific logistics incremental configuration, a business load data packet ID, business load data loading start and stop times, and other related initial configurations.
The multi-sample simulation experiment ID corresponds to the identification of the simulation experiment of the foregoing embodiments. The total sample size of the experiment run is the total number of simulation samples (simulation calculation tasks) in the simulation experiment. The logistics network baseline configuration ID identifies the first configuration data of the foregoing embodiments. The simulation sample ID identifies a simulation calculation task in the simulation experiment of the foregoing embodiments. The simulation start and stop times define the start time and the stop time of the simulation and are used to control the simulation calculation run. The simulation-sample-specific logistics network incremental configuration corresponds to the second configuration data of the foregoing embodiments. The business load data packet ID identifies the business load data. The business load data loading start and stop times define the start time and the stop time for loading business load data from the twin data model; together these two times constitute the load period (also referred to as the validity period) of the business load data. Other related initial configurations may be configured by the other external services.
The simulation calculation service (also called the simulation service) receives the task requirements transmitted through the external call interface and performs operations such as configuration cache creation or access, the load data cache creation thread job, and load cache access during simulation calculation. For example, after sample execution ends, the simulation sample calculation service may also check and clear the cache to support the continued reliable service capability of the cache system.
Configuration cache creation or access includes retrieving base configurations from the twin data model and storing the retrieved base configurations in corresponding Redis caches (also referred to as injection caches). Configuration cache creation or access also includes obtaining a corresponding base configuration by accessing the Redis cache and merging the obtained base configuration with the delta configuration to generate an initial network configuration.
Configuration cache creation or access also includes creating a load initialization thread to initiate load data cache creation thread jobs after the initial network configuration is generated. The load data cache creation thread job includes acquiring the traffic load data from the twin data model and storing the acquired traffic load data in the Redis cache.
During load data cache creation thread jobs, load cache accesses in emulated computations may be performed. Load cache access in simulation calculation comprises access to a Redis cache, and service load data is acquired from the Redis cache to perform simulation calculation.
Fig. 4 is a flow diagram illustrating configuration cache creation or access according to some embodiments of the present disclosure.
As shown in fig. 4, the configuration buffer creation or access includes step S401 to step S410.
In step S401, when the simulation instance is started, the configuration categories [ConfType] of the base configuration are traversed according to the network lines, logistics nodes, transportation vehicle types, areas, and other requirements loaded by the initial network configuration. [ConfType] is a variable characterizing the configuration category (also referred to as load data category) ID. For example, the configuration categories include the logistics node category "LogNode", the line information category "lineif", and the like.
Step S402 to step S410 are performed for each configuration type.
In step S402, the Redis cache connected to the simulation experiment is queried for a KEY representing the state of a configuration category of the multi-sample simulation experiment. Each KEY is formed from [ExpID] characterizing the unique multi-sample simulation experiment ID, [ConfType] characterizing the configuration category ID, and the state variable STATE.
In step S403, it is determined whether or not there is a VALUE corresponding to the KEY in the Redis cache. If so, step S401 is performed, i.e., the sequential traversal of the configuration classes continues.
If no such VALUE exists, no simulation calculation task has yet written the base configuration of the corresponding configuration category into the Redis cache. In this case, step S404 is performed.
In step S404, the VALUE of the KEY is set and written to the Redis cache as a first acquisition state indicating that the base configuration of the corresponding configuration category is being acquired. For example, the first acquisition state is denoted "LOADING"; from this point the simulation calculation service holds the right to load the initial configuration (base configuration).
In step S405, the base configuration corresponding to the configuration category [ConfType] is obtained from the twin data model through the interface provided by the twin data model. For example, for the line configuration category "lineif", data such as the origin and destination node IDs of the transportation line, the load capacity, the vehicle model, and the line shift times and transit times are obtained.
In step S406, after serializing the acquired basic configuration, the serialized basic configuration is stored in the Redis cache. In some embodiments, the loaded data objects (base configuration) may be serialized into byte arrays by a protocol buffer. Protocol buffer is a format for data exchange, which is independent of language and platform.
In some embodiments, [ ExpID ] of the ID of the unique multi-sample simulation experiment and [ ConType ] of the ID of the characterization configuration type jointly form a KEY, the byte array after serialization is a VALUE, and the basic configuration is stored in a Redis cache.
In step S407, the VALUE of the KEY is set and written to the Redis cache as a second acquisition state indicating that the base configuration of the corresponding configuration category has been acquired. For example, the second acquisition state is denoted "FINISH"; at this point the initial configuration loading and cache sharing process is complete.
In step S408, it is determined whether or not the traversal of all the configuration categories [ ConfType ] is completed. In some embodiments, it may be determined whether traversal of all configuration types [ ConfType ] is complete by determining whether the values of the KEY indexes corresponding to the configuration types [ ConfType ] in the Redis cache are all "FINISH".
In the case where the traversal of all the configuration categories [ ConfType ] is not completed, the traversal of the configuration category [ ConfType ] is continued.
In the case where all traversals of [ ConfType ] are completed, step S409 is performed. In step S409, all the base configurations are acquired from the Redis cache and deserialized. For example, a binary VALUE (VALUE) field corresponding to a KEY composed of [ ExpID ] and [ ConfType ] together is deserialized by a protocol buffer.
In step S410, the deserialized base configuration and the incremental configuration are merged to generate the initial simulation configuration, completing the configuration initialization of the logistics network simulation model. The incremental configuration is specific to each simulation sample and is not reused across sample executions. It includes the change information to be verified, such as incremental adjustments to the logistics network and adjustments to logistics nodes.
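The merge in step S410 can be sketched as a dictionary merge in which the sample-specific delta overrides or extends the shared baseline. This is an illustrative sketch; the category and field names (LogNode, lineif, cap) are assumptions, not data structures defined by the patent.

```python
def merge_config(base: dict, delta: dict) -> dict:
    """Merge a shared base configuration with a sample-specific delta.

    base and delta map configuration category -> {entry ID -> entry data}.
    Delta entries replace or add to the corresponding base entries.
    """
    merged = {k: dict(v) for k, v in base.items()}   # copy each category dict
    for category, entries in delta.items():
        merged.setdefault(category, {}).update(entries)
    return merged

base = {"LogNode": {"N1": {"cap": 100}}, "lineif": {"L1": {"dur": 5}}}
delta = {"LogNode": {"N1": {"cap": 150}, "N9": {"cap": 40}}}  # change under test
init = merge_config(base, delta)
assert init["LogNode"]["N1"] == {"cap": 150}   # delta overrides the baseline
assert init["LogNode"]["N9"] == {"cap": 40}    # delta adds a new node
assert init["lineif"]["L1"] == {"dur": 5}      # untouched baseline survives
```

The base configuration stays untouched in the cache, so every sample in the experiment can apply its own delta to the same shared baseline.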
After the initial configuration loading is completed, the business load information (data) related to each logistics node is loaded. The business load data mainly takes the form of order package data of the logistics nodes and is unrelated to the initialization logic of the logistics network model. In the initial loading phase, only the creation of the business-load-data loading thread is completed; the main simulation process can then jump to the simulation run logic, with the discrete event scheduling mechanism of the simulation run coordinating the progress of load loading and simulation time advancement.
The business load is typically represented as a timestamped dataset identified by a logistics node ID and a timestamp. To relieve the access pressure on the twin data model during initial loading, the load period (effective period) [T_L_START, T_L_END] is usually divided into sub-periods of a fixed duration T_load (i.e., the multiple sub-periods), and the data model is accessed serially in units of these sub-periods. T_load is typically set to 24 hours, matching the data access mechanism provided by the twin data model. The simulation sample call can then calculate the number N of sub-periods required for the load.
The N sub-periods are A_0 = [T_L_START, T_L_START + T_load), A_1 = [T_L_START + T_load, T_L_START + 2×T_load), A_2 = [T_L_START + 2×T_load, T_L_START + 3×T_load), and so on. Based on these sub-period boundaries, the segmentation of the business load loading task is completed.
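The segmentation arithmetic above can be written out directly. A minimal sketch, with times expressed in hours and the function name an assumption:

```python
import math

def split_load_period(t_start: int, t_end: int, t_load: int):
    """Cut [t_start, t_end] into N half-open sub-periods of length t_load.

    N = ceil((t_end - t_start) / t_load); sub-period i is
    A_i = [t_start + i*t_load, t_start + (i+1)*t_load).
    """
    n = math.ceil((t_end - t_start) / t_load)
    return [(t_start + i * t_load, t_start + (i + 1) * t_load)
            for i in range(n)]

# 3 days of load split into 24-hour sub-periods (times in hours)
subs = split_load_period(0, 72, 24)
assert subs == [(0, 24), (24, 48), (48, 72)]
assert len(subs) == 3   # N = ceil((T_L_END - T_L_START) / T_load)
```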
FIG. 5 is a flow diagram illustrating a load data cache creation thread job according to some embodiments of the present disclosure.
As shown in FIG. 5, the load data cache creation thread operation includes steps S501-S510.
In step S501, taking the example that each of the above-described sub-periods has one period number [ Offset ], all the sub-periods [ Offset ] are traversed.
In step S502, for each sub-period currently traversed, the Redis cache is queried for a KEY characterizing the state of the business load data of that sub-period used by the simulation samples in the multi-sample simulation experiment. Each KEY is formed from [ExpID] characterizing the unique multi-sample simulation experiment ID, [FlowID] characterizing the identification of the business load data used by the simulation sample, [Offset] characterizing the sub-period sequence number, and the state variable STATE.
In step S503, it is determined whether there is a VALUE of the queried KEY KEY in the Redis cache.
If the VALUE of the queried KEY exists in the Redis cache, step S501 is continued, i.e., the next sub-period is traversed.
If the VALUE of the queried KEY does not exist in the Redis cache, step S504 is performed.
In step S504, the VALUE of the KEY is set and written to Redis as a state indicating that the business load data of the corresponding sub-period is being acquired, for example, the state "LOADING".
In step S505, the traffic load data corresponding to the sub-period [ Offset ] is acquired from the digital twin data model.
In step S506, the service load data corresponding to the acquired sub-period is classified to obtain a load data set, and after the load data set is serialized, the serialized load data set is stored in the Redis.
Each piece of traffic load data may be represented in the form of a triplet, i.e. < moment of load action, originating logistics node index, traffic load content >.
In some embodiments, the business load data corresponding to the sub-period is classified according to a preset fixed time interval T_pack and the originating logistics node index NodeID, yielding multiple load data groups, which may be represented as an array of load data groups. The load data groups are serialized in the protocol buffer format, and each serialized binary object is stored in the Redis cache under a KEY formed from [ExpID] characterizing the unique multi-sample simulation experiment ID, [FlowID] characterizing the identification of the business load data used by the simulation sample, [TimeSlot] characterizing the start time of the time interval corresponding to the load data group, and [NodeID] characterizing the index of the originating logistics node of the load data group. Here the originating logistics node serves as the associated business node; other logistics nodes may also serve as the associated business node.
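The classification step can be sketched as bucketing each triple (load moment, originating node, payload) by a fixed interval T_pack and by node, producing one load data group per (TimeSlot, NodeID) key. The function and variable names are illustrative; moments are in minutes and T_pack is 60 minutes here purely for the example.

```python
from collections import defaultdict

def group_loads(triples, t_pack):
    """Bucket load triples by fixed interval T_pack and originating node.

    Returns {(time_slot, node_id): [triples...]}, where time_slot is the
    start of the interval containing the load moment.
    """
    groups = defaultdict(list)
    for moment, node_id, payload in triples:
        time_slot = (moment // t_pack) * t_pack   # start of the interval
        groups[(time_slot, node_id)].append((moment, node_id, payload))
    return dict(groups)

triples = [(5, "N1", "p1"), (40, "N1", "p2"), (65, "N1", "p3"), (10, "N2", "p4")]
g = group_loads(triples, 60)   # T_pack = 60 minutes
assert g[(0, "N1")] == [(5, "N1", "p1"), (40, "N1", "p2")]
assert g[(60, "N1")] == [(65, "N1", "p3")]
assert g[(0, "N2")] == [(10, "N2", "p4")]
```

Each resulting group would then be serialized and stored under its (TimeSlot, NodeID) key as described above.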
In some embodiments, T_pack is determined by the number of load data packets that can occur within the interval, and is matched to the speed at which the simulation sample calculation fetches data from the cache and the speed at which the simulation event queue consumes data. T_pack is typically 1 hour. For convenient handling, T_pack is less than T_load and has an integer-multiple relationship with T_load.
In step S507, after the serialized load data groups corresponding to the sub-period are stored in the Redis cache, the VALUE of the KEY is set and written to the Redis cache to indicate that the business load data of the corresponding sub-period has been acquired, for example using the "FINISH" field.
In step S508, starting from Offset = 0, the maximum Offset is determined for which the VALUEs of the KEYs contiguously indicate that acquisition of the business load data of the corresponding sub-periods is complete. This maximum Offset characterizes the latest sub-period whose business load data is stored in the in-memory database. For example, the 24-hour sub-period of August 2 is later than the 24-hour sub-period of August 1.
In step S509, the maximum time MAX_TIME of the multi-sample simulation experiment [ExpID] is recorded in the Redis cache as T_L_START + (Offset + 1) × T_load. That is, according to the determined maximum Offset, the end time of the latest sub-period whose business load data is stored in the in-memory database is determined and used as the reference time, i.e., the maximum time of the multi-sample simulation experiment.
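Steps S508 and S509 can be sketched together: scan the per-sub-period states contiguously from Offset = 0, and convert the largest finished Offset into the reference time. The function name and the dict-of-states representation are assumptions for illustration.

```python
def max_ready_time(states: dict, t_start: int, t_load: int) -> int:
    """Return T_start + (Offset + 1) * T_load for the largest Offset whose
    sub-periods 0..Offset are all marked FINISH; t_start if none are."""
    offset = -1
    while states.get(offset + 1) == "FINISH":   # contiguous from Offset = 0
        offset += 1
    return t_start + (offset + 1) * t_load

states = {0: "FINISH", 1: "FINISH", 2: "LOADING"}
assert max_ready_time(states, 0, 24) == 48   # sub-periods 0 and 1 are ready
assert max_ready_time({}, 0, 24) == 0        # nothing stored yet
```

Note that the scan stops at the first gap: a later sub-period finishing out of order does not advance the reference time, which keeps the guarantee that everything before the reference time is fully stored.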
In step S510, it is determined whether or not the traversal of all sub-periods [ Offset ] is completed.
If the VALUE of the KEY corresponding to all the offsets is 'FINISH', the process of acquiring and storing all the business load data of the simulation calculation task from the data model to the Redis cache is completed. In the case where all the traversals of [ Offset ] are completed, the thread job ends. In the case where the traversal of all [ Offset ] is not completed, step S501 is continued to be executed.
The business load data is mainly used to inject external events, such as package arrivals, into the simulation model, driving the logistics network simulation model to sort and transport the external packages. Block storage of these events by the period T_pack has already been performed in the initial call phase. Since logistics network simulation typically takes the form of discrete event simulation, a time-ordered event sequence, such as a simulation model event queue, can be constructed. In the simulation initialization phase, a timed load event whose time equals the initial time of the first load is created and injected into the event queue.
The flow of load cache access in simulation calculations based on simulation model event queues will be described in detail below in connection with FIG. 6.
FIG. 6 is a flow diagram illustrating load cache access in a simulation calculation in accordance with some embodiments of the present disclosure.
As shown in fig. 6, the load cache access in the simulation calculation includes steps S601 to S607.
In step S601, a load-loading event in the simulation model event queue is scheduled. The load-loading event indicates that business load data should be acquired and carries an execution timestamp TimeStamp; it is a periodically executed event. After an event in the simulation model event queue is scheduled, it is removed from the queue.
In step S602, a reference time, that is, a maximum time of the multi-sample simulation experiment, is queried in the Redis.
In step S603, it is determined whether the reference time is less than the TimeStamp TimeStamp.
In the case where the reference time is less than the TimeStamp, step S602 is continued to be performed.
In the case where the reference time is greater than or equal to the TimeStamp TimeStamp, step S604 is performed.
In step S604, the relevant logistics nodes are acquired. In some embodiments, the simulation run can be implemented in single-threaded or multi-threaded/multi-process fashion. In single-threaded (serial) mode, the calculations of all logistics nodes run in the same thread, i.e., the same scheduling unit, so the relevant logistics nodes comprise all logistics nodes. In parallel (multi-process or multi-threaded) mode, each scheduling unit is responsible for calculating a different subset of logistics nodes, which together cover all logistics nodes; each scheduling unit only needs to process the load injection originating at its relevant logistics nodes.
In step S605, according to the relevant logistics node, the load data group corresponding to the KEY formed from [ExpID] characterizing the unique multi-sample simulation experiment ID, [FlowID] characterizing the identification of the business load data used by the simulation sample, [TimeSlot] characterizing the start time of the time interval of the load data group, and [NodeID] characterizing the index of the originating logistics node, i.e., the VALUE corresponding to that KEY, is queried and obtained from the Redis cache.
In step S606, the acquired load data set is subjected to a deserialization process. For example, the deserialization is performed by a protocol buffer.
In step S607, a traffic load event is generated and injected into the simulation model event queue. For example, according to each triplet < load acting time, originating logistics node index, service load data > in the load data set obtained by deserialization, a load injection event, i.e. a service load event, for each logistics node in a time period corresponding to [ TimeSlot ] is constructed. The business load event is injected into an event queue of discrete event simulation to drive the simulation system to run.
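Steps S601 through S607 revolve around a discrete-event queue. A hedged sketch, using Python's heapq as a stand-in for the simulation model event queue; the event tuple layout and names (inject_load_group, "LOAD", "TIMER") are illustrative assumptions.

```python
import heapq

event_queue = []   # (timestamp, kind, payload), ordered by timestamp

def inject_load_group(group):
    """Turn each deserialized triple into a business-load event and push it
    into the discrete-event queue (step S607)."""
    for moment, node_id, payload in group:
        heapq.heappush(event_queue, (moment, "LOAD", (node_id, payload)))

# events from one cached load data group
inject_load_group([(8, "N1", "pkg-a"), (3, "N2", "pkg-b")])
# the next periodic load-loading event (step S601) shares the same queue
heapq.heappush(event_queue, (5, "TIMER", None))

# the simulation consumes events strictly in timestamp order
order = [heapq.heappop(event_queue)[0] for _ in range(len(event_queue))]
assert order == [3, 5, 8]
```

Interleaving the periodic load-loading events with the business-load events in one queue is what lets the discrete-event scheduler coordinate load loading with simulation time advancement.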
For cache clearing, after the simulation run finishes and the data output completes, Redis is queried for the KEY formed from [ExpID] characterizing the unique multi-sample simulation experiment ID and FinishedSamples characterizing the completed samples. If the KEY does not exist, its VALUE is set to 1 in Redis; otherwise, the VALUE is incremented by 1. If the VALUE of the KEY equals the agreed total number of simulation runs of the multi-sample experiment, all calculation tasks using the experiment cache have completed, and all cached objects whose KEYs start with the [ExpID] characterizing the unique multi-sample simulation experiment are deleted from Redis.
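The cleanup bookkeeping can be sketched as a reference count over finished samples, again with a dict modeling Redis; the function name is an assumption, and a real deployment would use an atomic INCR to avoid races between samples.

```python
def on_sample_finished(cache: dict, exp_id: str, total_samples: int) -> bool:
    """Count a finished sample; purge the experiment's cache when all done.

    Returns True if this call performed the purge.
    """
    counter_key = f"{exp_id}:FinishedSamples"
    cache[counter_key] = cache.get(counter_key, 0) + 1   # INCR-style update
    if cache[counter_key] == total_samples:
        for key in [k for k in cache if k.startswith(exp_id)]:
            del cache[key]                               # drop all shared data
        return True
    return False

cache = {"EXP1:ConfType:STATE": "FINISH", "EXP1:F1:0:N1": b"..."}
assert on_sample_finished(cache, "EXP1", 2) is False     # first sample done
assert on_sample_finished(cache, "EXP1", 2) is True      # last sample done
assert cache == {}                                       # cache fully purged
```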
The above description of fig. 2 to 6 only take the application of the logistics network simulation in the digital twin scenario as an example to describe some embodiments, which are only illustrative and not representative of the only implementation. For example, the embodiments of the present disclosure may also be applied to other service data models other than the twin data model, and may also be applied to a service network simulation scenario composed of service nodes and service lines.
Fig. 7 is a block diagram illustrating a data processing apparatus according to some embodiments of the present disclosure.
As shown in fig. 7, the data processing apparatus 71 includes an acquisition module 711, a storage module 712, and a simulation calculation module 713.
The acquisition module 711 is configured to acquire, as shared simulation data, simulation data shared by a plurality of simulation calculation tasks in a simulation experiment from the service data model in response to triggering of a target simulation calculation task in the simulation experiment, for example, to perform step S110 shown in fig. 1.
The storage module 712 is configured to store the shared simulation data in the memory database corresponding to the simulation experiment, for example, to perform step S120 shown in fig. 1.
The simulation calculation module 713 is configured to perform simulation calculation on the target simulation calculation task of the simulation experiment and the subsequent simulation calculation task according to the shared simulation data corresponding to the simulation experiment in the in-memory database, wherein the subsequent simulation calculation task is performed after the target simulation calculation task, for example, step S130 shown in fig. 1 is performed.
Fig. 8 is a block diagram illustrating a data processing apparatus according to further embodiments of the present disclosure.
As shown in fig. 8, the data processing device 81 includes a memory 811; and a processor 812 coupled to the memory 811. The memory 811 is used for storing instructions for executing corresponding embodiments of the data processing method. The processor 812 is configured to perform the data processing methods in any of the embodiments of the present disclosure based on instructions stored in the memory 811.
FIG. 9 is a block diagram illustrating a data processing system according to some embodiments of the present disclosure.
As shown in fig. 9, the data processing system 9 includes a data processing device 91. The data processing apparatus 91 is, for example, a data processing apparatus in any embodiment of the present disclosure, and is configured to perform the data processing method in any embodiment of the present disclosure.
In some embodiments, data processing system 9 also includes in-memory database 92. The in-memory database 92 is configured to store the shared simulation data corresponding to a simulation experiment.
In some embodiments, memory database 92 includes a plurality of data storage areas, each configured to store shared simulation data corresponding to a simulation experiment corresponding to each data storage area, each data storage area corresponding to a simulation experiment.
Fig. 10 is a schematic diagram illustrating a relationship between a simulation experiment, an in-memory database, and a business data model according to some embodiments of the present disclosure.
As shown in fig. 10, the simulation experiment 1 includes a simulation calculation task 11 and a simulation calculation task 12. Simulation experiment 2 includes a simulation calculation task 21, a simulation calculation task 22, and a simulation calculation task 23.
In the simulation experiment 1, a plurality of simulation calculation tasks acquire shared simulation data from a business data model and store the shared simulation data in an experiment task cache 1 of a memory database.
The multiple simulation calculation tasks of the simulation experiment 2 acquire shared simulation data from the business data model and store the shared simulation data in the experiment task cache 2 of the memory database. The experiment task caches 1 and 2 are different data storage areas in the memory database.
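The separation into experiment task caches 1 and 2 can be sketched as a key-prefixing scheme: each experiment's data lives under its own prefix in a single in-memory database. The following is a hypothetical illustration (class and method names are assumptions, not from the patent; a real deployment would use Redis as described below):

```python
# Hypothetical sketch: isolating per-experiment "experiment task caches"
# inside one in-memory database by prefixing keys with the experiment ID.
class InMemoryCache:
    """A minimal stand-in for an in-memory database such as Redis."""

    def __init__(self):
        self._store = {}

    def put(self, experiment_id: str, key: str, value: bytes) -> None:
        # Prefixing keys with the experiment ID gives each simulation
        # experiment its own logical data storage area.
        self._store[f"exp:{experiment_id}:{key}"] = value

    def get(self, experiment_id: str, key: str):
        return self._store.get(f"exp:{experiment_id}:{key}")

    def clear_experiment(self, experiment_id: str) -> None:
        # Clears one experiment's storage area after its simulation
        # calculation completes, without touching other experiments.
        prefix = f"exp:{experiment_id}:"
        for k in [k for k in self._store if k.startswith(prefix)]:
            del self._store[k]


cache = InMemoryCache()
cache.put("exp1", "network_config", b"base config of experiment 1")
cache.put("exp2", "network_config", b"base config of experiment 2")
```

Because simulation calculation tasks of experiment 1 only read and write under the `exp:exp1:` prefix, they cannot interfere with the cached data of experiment 2.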
In some embodiments, each simulation calculation task corresponds to one deployment of the simulation service. The in-memory database may be a Redis cache. The business data model may be a twin data model, for example, a twin data model of a logistics network.
In some embodiments, the shared simulation data includes the network basic configuration and the service load data of the service network.
In some embodiments, the Redis cache is constructed to multiplex the service load data and the network basic configuration between different simulation calculation tasks (also called simulation samples) in the same simulation experiment. All sample runs belonging to the same multi-sample simulation experiment carry the same simulation experiment ID.
When the cache is constructed, the ID of the simulation experiment is used as the identification, so that the cache space is shared among the multiple simulation calculation task runs of the same simulation experiment. Because the incremental service configuration specific to each sample run in a multi-sample simulation experiment is small, and the initialization data of different samples is highly similar, the network basic data acquired from the business data model is placed in the shared cache space while the loading mode of incremental configuration during simulation calculation task execution is retained, which reduces the preprocessing cost of the network initial configuration data.
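The base-plus-increment idea above can be sketched as follows: the large network basic configuration is fetched from the business data model once per experiment and cached in serialized form; each sample run then deserializes the shared base and overlays only its small incremental configuration. All function and key names here are illustrative assumptions:

```python
# Hedged sketch of caching the shared base configuration once per experiment
# and overlaying each sample's small incremental configuration on top of it.
import pickle

_cache = {}  # stands in for the in-memory database, keyed by experiment ID


def fetch_base_config_from_business_model():
    # Placeholder for the expensive query against the business data model.
    return {"nodes": ["A", "B", "C"], "lines": [("A", "B"), ("B", "C")], "speed": 1.0}


def init_sample_config(experiment_id, incremental):
    key = f"{experiment_id}:base_config"
    if key not in _cache:
        # The first sample run of the experiment pays the preprocessing
        # cost once and stores the serialized result in the cache.
        _cache[key] = pickle.dumps(fetch_base_config_from_business_model())
    base = pickle.loads(_cache[key])  # later samples just deserialize
    base.update(incremental)          # overlay the sample-specific delta
    return base


cfg1 = init_sample_config("exp-42", {"speed": 1.5})
cfg2 = init_sample_config("exp-42", {"speed": 0.8})
```

Each call deserializes a fresh copy of the base configuration, so one sample's incremental overlay never leaks into another sample of the same experiment.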
For example, in the initial data loading step of each simulation calculation task run, whether the required network basic configuration and service load have been loaded is checked. If loading has not completed, the initial data parsing is completed in cooperation with the active simulation processes under the same simulation experiment, and the result is injected into the cache. If a copy of the relevant data already exists in the cache, it is used directly after deserialization, avoiding the cost of acquiring the data from the business data model again. For the specific processing procedure, refer to other embodiments of the present disclosure; details are not repeated here.
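The loading check described above can be illustrated with an acquisition-state flag, so that when several task runs of the same experiment start concurrently, only one of them parses the initial data while the others wait and then reuse the cached copy. This is a simplified sketch, not the patent's exact protocol; state names such as `LOADING`/`READY` and the lock standing in for an atomic write are assumptions:

```python
# Illustrative sketch of the initial-data loading check with an
# acquisition-state flag shared through the in-memory database.
import pickle
import threading
import time

LOADING, READY = "loading", "ready"
_db = {}                   # stands in for the in-memory database
_lock = threading.Lock()   # stands in for an atomic write (e.g. Redis SET NX)


def get_initial_data(experiment_id, parse_initial_data):
    state_key = f"{experiment_id}:state"
    data_key = f"{experiment_id}:data"
    with _lock:
        if _db.get(state_key) is None:
            _db[state_key] = LOADING   # claim this experiment's load
            owner = True
        else:
            owner = False
    if owner:
        # Only one task run parses the initial data and fills the cache.
        _db[data_key] = pickle.dumps(parse_initial_data())
        _db[state_key] = READY         # publish: data is fully stored
    else:
        # Other runs wait for the active process, then reuse the copy.
        while _db.get(state_key) != READY:
            time.sleep(0.01)
    return pickle.loads(_db[data_key])  # deserialize the cached copy
```

The two-valued state distinguishes "being acquired" from "acquired", so a waiting task run never deserializes a half-written cache entry.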
FIG. 11 is a block diagram illustrating a computer system for implementing some embodiments of the present disclosure.
As shown in FIG. 11, computer system 110 may be in the form of a general purpose computing device. Computer system 110 includes a memory 1110, a processor 1120, and a bus 1100 that connects the various system components.
The memory 1110 may include, for example, system memory, nonvolatile storage media, and the like. The system memory stores, for example, an operating system, application programs, boot Loader (Boot Loader), and other programs. The system memory may include volatile storage media, such as Random Access Memory (RAM) and/or cache memory. The non-volatile storage medium stores, for example, instructions for performing a corresponding embodiment of at least one of the data processing methods. Non-volatile storage media include, but are not limited to, disk storage, optical storage, flash memory, and the like.
Processor 1120 may be implemented as discrete hardware components such as a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gates, or transistors. Accordingly, each module, such as the acquisition module, the storage module, and the simulation calculation module, may be implemented by a Central Processing Unit (CPU) executing instructions of the corresponding steps in the memory, or may be implemented by a dedicated circuit that performs the corresponding steps.
Bus 1100 may employ any of a variety of bus architectures. For example, bus structures include, but are not limited to, an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, and a Peripheral Component Interconnect (PCI) bus.
Computer system 110 may also include input/output interfaces 1130, network interfaces 1140, storage interfaces 1150, and the like. These interfaces 1130, 1140, 1150 and the memory 1110 and processor 1120 may be connected by a bus 1100. The input/output interface 1130 may provide a connection interface for input/output devices such as a display, mouse, keyboard, etc. The network interface 1140 provides a connection interface for a variety of networking devices. The storage interface 1150 provides a connection interface for external storage devices such as a floppy disk, a USB flash disk, an SD card, and the like.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable apparatus to produce a machine, such that the instructions, which execute via the processor, create means for implementing the functions specified in the flowchart and/or block diagram block or blocks.
These computer readable program instructions may also be stored in a computer readable memory that can direct a computer to function in a particular manner, such that the instructions stored in the computer readable memory produce an article of manufacture including instructions which implement the function specified in the flowchart and/or block diagram block or blocks.
The present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects.
With the data processing method, apparatus, system, and computer storage medium of the above embodiments, the data processing efficiency of the simulation calculation process can be improved, and the resource utilization rate can be increased.
Thus far, the data processing method, apparatus and system, computer-readable storage medium according to the present disclosure have been described in detail. In order to avoid obscuring the concepts of the present disclosure, some details known in the art are not described. How to implement the solutions disclosed herein will be fully apparent to those skilled in the art from the above description.

Claims (19)

1. A method of data processing, comprising:
responding to the trigger of a target simulation calculation task in a simulation experiment, and acquiring simulation data shared by a plurality of simulation calculation tasks in the simulation experiment from a service data model as shared simulation data, wherein the service data model comprises a twin data model of a service network;
storing the shared simulation data in a memory database corresponding to the simulation experiment;
and performing simulation calculation on the target simulation calculation task and a subsequent simulation calculation task of the simulation experiment according to the shared simulation data corresponding to the simulation experiment in the memory database, wherein the subsequent simulation calculation task is performed after the target simulation calculation task.
2. The method according to claim 1, wherein the shared simulation data includes traffic load data, the traffic load data being used for driving simulation calculation of a simulation calculation task, and obtaining the simulation data shared by a plurality of simulation calculation tasks in the simulation experiment includes:
dividing the load time period of the business load data into a plurality of sub time periods;
and for each sub-time period, acquiring the business load data corresponding to each sub-time period from the business data model.
3. The data processing method of claim 2, wherein the traffic data model is generated based on a traffic network comprising a plurality of traffic nodes, each sub-time period corresponding to a plurality of pieces of traffic load data, each piece of traffic load data comprising a load time, an associated traffic node, and traffic load content, and wherein storing the shared simulation data comprises:
classifying the service load data corresponding to each sub-time period according to a preset time interval and an associated service node to obtain a plurality of load data sets, wherein the service load data in each load data set comprises the same associated service node, and the time difference between the earliest load time and the latest load time is the duration of the time interval, and the duration of the time interval is smaller than the duration of the sub-time period;
and storing a plurality of load data sets corresponding to each sub-time period in a memory database corresponding to the simulation experiment.
4. A data processing method according to claim 3, wherein storing a plurality of load data sets corresponding to each sub-period comprises:
carrying out serialization processing on each load data group corresponding to each sub-time period;
and storing the plurality of load data sets after the serialization processing in a memory database corresponding to the simulation experiment.
5. The data processing method according to claim 4, wherein performing simulation calculation on the target simulation calculation task and a subsequent simulation calculation task of the simulation experiment includes:
determining the ending time of the latest sub-time period corresponding to the stored business load data in the memory database as the reference time in response to the trigger of the event for indicating to acquire the business load data;
acquiring a serialized load data set which corresponds to the event and belongs to the associated service node from the memory database under the condition that the execution time stamp of the event is smaller than or equal to the reference time;
performing deserialization processing on the load data set subjected to the serialization processing and corresponding to the associated service node to obtain the load data set corresponding to the event;
and according to the load data set obtained by the deserialization processing, performing simulation calculation on the target simulation calculation task and the subsequent simulation calculation task.
6. The data processing method according to claim 1, wherein acquiring simulation data shared by a plurality of simulation calculation tasks in the simulation experiment includes:
and under the condition that the acquisition state of the shared simulation data is not written in the memory database corresponding to the simulation experiment, acquiring the shared simulation data from the service data model, wherein the acquisition state represents whether the shared simulation data is being acquired or has been acquired.
7. The data processing method of claim 6, wherein obtaining the shared simulation data comprises:
under the condition that the acquisition state of the shared simulation data is not written in a memory database corresponding to the simulation experiment, writing the acquisition state of the shared simulation data into the memory database to be a first acquisition state, wherein the first acquisition state represents that the corresponding shared simulation data is being acquired;
acquiring the shared simulation data from the service data model;
and under the condition that all the shared simulation data are stored in a memory database corresponding to the simulation experiment, writing the acquisition state of the shared simulation data into the memory database as a second acquisition state, wherein the second acquisition state represents that the corresponding shared simulation data are acquired.
8. The data processing method according to any one of claims 1 to 7, wherein the shared simulation data includes a plurality of pieces of first configuration data belonging to different configuration categories, the plurality of pieces of first configuration data being used for configuration initialization of a simulation model that performs simulation calculation, and storing the shared simulation data includes:
carrying out serialization processing on the first configuration data corresponding to each configuration type;
and storing the first configuration data after the serialization processing corresponding to each configuration type in a memory database corresponding to the simulation experiment.
9. The data processing method according to claim 8, wherein performing simulation calculation on the target simulation calculation task and a subsequent simulation calculation task of the simulation experiment includes:
for the target simulation calculation task and the subsequent simulation calculation task, acquiring serialized first configuration data corresponding to the simulation experiment from the memory database;
performing deserialization processing on the serialized first configuration data acquired from the memory database to obtain first configuration data corresponding to the simulation experiment;
according to first configuration data obtained through deserialization processing, carrying out configuration initialization on a simulation model corresponding to the simulation experiment;
and performing simulation calculation on the target simulation calculation task and the subsequent simulation calculation task by using the simulation model after configuration initialization.
10. The data processing method according to claim 9, wherein initializing the simulation model corresponding to the simulation experiment includes:
according to the first configuration data obtained through the deserialization processing and the second configuration data corresponding to the target simulation calculation task, carrying out configuration initialization on a simulation model corresponding to the target simulation calculation task, wherein the second configuration data corresponding to the target simulation calculation task is incremental data of the first configuration data;
and carrying out configuration initialization on a simulation model corresponding to the follow-up simulation calculation task according to the first configuration data and the second configuration data corresponding to the follow-up simulation calculation task, wherein the second configuration data corresponding to the follow-up simulation calculation task is incremental data of the first configuration data.
11. The data processing method according to claim 1, wherein the shared simulation data corresponding to the simulation experiment in the memory database is cleared after the simulation calculation of the simulation experiment is completed.
12. The method of claim 1, wherein the in-memory database comprises a cache.
13. A data processing method according to claim 1, wherein,
the business data model comprises a twin data model of a logistics network, the logistics network comprises a plurality of logistics nodes and a plurality of logistics lines connected between the logistics nodes, the shared simulation data comprise first configuration data and business load data, the first configuration data comprise logistics line configuration data and logistics node configuration data, and the business load data comprise package data of the logistics nodes.
14. A data processing apparatus, comprising:
the system comprises an acquisition module, a service data model and a control module, wherein the acquisition module is configured to respond to the triggering of a target simulation calculation task in a simulation experiment and acquire simulation data shared by a plurality of simulation calculation tasks in the simulation experiment from the service data model as shared simulation data, and the service data model comprises a twin data model of a service network;
a storage module configured to store the shared simulation data in a memory database corresponding to the simulation experiment;
and the simulation calculation module is configured to perform simulation calculation on the target simulation calculation task and a subsequent simulation calculation task of the simulation experiment according to the shared simulation data corresponding to the simulation experiment in the memory database, wherein the subsequent simulation calculation task is performed after the target simulation calculation task.
15. A data processing apparatus, comprising:
a memory; and
a processor coupled to the memory, the processor configured to perform the data processing method of any of claims 1 to 13 based on instructions stored in the memory.
16. A data processing system, comprising:
a data processing apparatus as claimed in claim 14 or 15.
17. The data processing system of claim 16, further comprising:
and the memory database is configured to store shared simulation data corresponding to the simulation experiment.
18. The data processing system of claim 17, wherein the memory database comprises a plurality of data storage areas, each data storage area configured to store shared simulation data corresponding to a simulation experiment corresponding to each data storage area, each data storage area corresponding to a simulation experiment.
19. A computer-readable storage medium, having stored thereon computer program instructions which, when executed by a processor, implement a data processing method according to any of claims 1 to 13.
CN202310005073.9A 2023-01-04 2023-01-04 Data processing method, device and system and computer storage medium Active CN115689405B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310005073.9A CN115689405B (en) 2023-01-04 2023-01-04 Data processing method, device and system and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310005073.9A CN115689405B (en) 2023-01-04 2023-01-04 Data processing method, device and system and computer storage medium

Publications (2)

Publication Number Publication Date
CN115689405A CN115689405A (en) 2023-02-03
CN115689405B CN115689405B (en) 2023-05-30

Family

ID=85057271

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310005073.9A Active CN115689405B (en) 2023-01-04 2023-01-04 Data processing method, device and system and computer storage medium

Country Status (1)

Country Link
CN (1) CN115689405B (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10261811B2 (en) * 2015-03-10 2019-04-16 Sphere 3D Inc. Systems and methods for contextually allocating emulation resources
CN107515922A (en) * 2017-08-23 2017-12-26 北京汽车研究总院有限公司 A kind of data managing method and system
CN111966748B (en) * 2020-07-30 2023-02-24 西南电子技术研究所(中国电子科技集团公司第十研究所) Distributed space-based simulation operation control management method
CN114741847A (en) * 2022-03-15 2022-07-12 中国人民解放军军事科学院战争研究院 Resource loosely-coupled black box simulation model platform

Also Published As

Publication number Publication date
CN115689405A (en) 2023-02-03

Similar Documents

Publication Publication Date Title
US11487698B2 (en) Parameter server and method for sharing distributed deep learning parameter using the same
WO2022037337A1 (en) Distributed training method and apparatus for machine learning model, and computer device
US9996394B2 (en) Scheduling accelerator tasks on accelerators using graphs
CN104965757A (en) Virtual machine live migration method, virtual machine migration management apparatus, and virtual machine live migration system
CN102760176B (en) Hardware transaction level simulation method, engine and system
CN103699441B (en) The MapReduce report task executing method of task based access control granularity
CN103593242A (en) Resource sharing control system based on Yarn frame
US20230351145A1 (en) Pipelining and parallelizing graph execution method for neural network model computation and apparatus thereof
US11762760B1 (en) Scalable test workflow service
US20210390405A1 (en) Microservice-based training systems in heterogeneous graphic processor unit (gpu) cluster and operating method thereof
CN115689405B (en) Data processing method, device and system and computer storage medium
US20240028423A1 (en) Synchronization Method and Apparatus
Lin et al. A configurable and executable model of Spark Streaming on Apache YARN
KR20210103393A (en) System and method for managing conversion of low-locality data into high-locality data
CN115630937B (en) Logistics network simulation time synchronization method, device and storage medium
RU2643622C1 (en) Computer module
US20220300322A1 (en) Cascading of Graph Streaming Processors
CN116414581A (en) Multithreading time synchronization event scheduling system based on thread pool and Avl tree
US20170004232A9 (en) Device and method for accelerating the update phase of a simulation kernel
JP2023544911A (en) Method and apparatus for parallel quantum computing
KR20220071895A (en) Method for auto scaling, apparatus and system thereof
CN112416539B (en) Multi-task parallel scheduling method for heterogeneous many-core processor
CN116185497B (en) Command analysis method, device, computer equipment and storage medium
CN117724874B (en) Method, computer device and medium for managing shared receive queues
WO2022261867A1 (en) Task scheduling method and apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant