WO2024021524A1 - Data processing method, apparatus, electronic device and storage medium - Google Patents

Data processing method, apparatus, electronic device and storage medium Download PDF

Info

Publication number
WO2024021524A1
Authority
WO
WIPO (PCT)
Prior art keywords
main process
algorithm model
memory
sub-process
preloaded
Prior art date
Application number
PCT/CN2022/143511
Other languages
English (en)
French (fr)
Inventor
耿雷明
吕旭涛
王辰
Original Assignee
深圳云天励飞技术股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳云天励飞技术股份有限公司
Publication of WO2024021524A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/485 Task life-cycle, e.g. stopping, restarting, resuming execution
    • G06F 9/4856 Task life-cycle, resumption being on a different machine, e.g. task migration, virtual machine migration
    • G06F 9/4862 Task life-cycle, the task being a mobile agent, i.e. specifically designed to migrate
    • G06F 9/4868 Task life-cycle, the task being a mobile agent, with creation or replication
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/54 Interprogram communication

Definitions

  • the present invention relates to the field of computers, and in particular to a data processing method, apparatus, electronic device and storage medium.
  • although the multi-process solution can make full use of the multi-core capabilities of the computer for computationally intensive programs, with each process corresponding to one core of the computer, it has a drawback: the algorithm model must be loaded in every process before that process can run predictions, which wastes a large amount of memory and slows down the data processing performance of the computer. Therefore, existing multi-process data processing suffers from high memory usage, resulting in low data processing performance.
  • Embodiments of the present invention provide a data processing method, aiming to solve the problem that existing multi-process data processing has high memory usage, which results in low data processing performance.
  • the preset algorithm model is preloaded through the created main process, a sub-process identical to the main process is created, and the sub-process is used for data processing; the sub-process obtains the corresponding preloaded algorithm model by accessing the memory of the main process, so the sub-process no longer needs to load the algorithm model itself, which improves data processing speed.
  • the sub-process does not need additional memory resources to load the algorithm model, saving a lot of memory resources.
  • in a first aspect, an embodiment of the present invention provides a data processing method, the method including: preloading a preset algorithm model through a created main process to obtain a preloaded algorithm model, and saving the preloaded algorithm model in the main process memory; creating a preset number of sub-processes based on the main process, the sub-processes sharing the main process memory; and obtaining, through the sub-processes, the preloaded algorithm model from the main process for data processing.
  • optionally, the method further includes: freezing the main process memory when all the preloaded algorithm models have been saved in it.
  • creating a preset number of sub-processes based on the main process includes: creating a preset number of new processes; allocating hardware resources to the new processes; and copying the values of the main process to the new processes, with the memory mapping path of the main process added, to obtain the sub-processes.
  • obtaining, through the sub-process, the preloaded algorithm model from the main process for data processing includes: obtaining a task to be processed, the task including data to be processed and algorithm requirements; reading the corresponding preloaded algorithm model read-only from the main process memory based on the algorithm requirements; and processing the data to be processed through the read-only preloaded algorithm model.
  • before the step of preloading the preset algorithm model through the created main process to obtain the preloaded algorithm model and saving it in the main process memory, the method further includes: obtaining a task to be processed, the task including algorithm requirements; and obtaining, based on the algorithm requirements, the corresponding algorithm model from the algorithm model library as the preset algorithm model.
  • an embodiment of the present invention provides a data processing device, which includes:
  • the preloading module is used to preload the preset algorithm model through the created main process, obtain the preloaded algorithm model, and save the preloaded algorithm model in the main process memory;
  • a sub-process creation module configured to create a preset number of sub-processes based on the main process, and the sub-processes share the memory of the main process;
  • a processing module configured to obtain the preloaded algorithm model in the main process through the sub-process for data processing.
  • the device also includes:
  • a freezing module is configured to freeze the memory of the main process when all the preloaded algorithm models are saved in the memory of the main process.
  • the sub-process creation module includes:
  • a sub-process creation submodule, used to create a preset number of new processes;
  • an allocation submodule, used to allocate hardware resources to the new processes;
  • an adding submodule, used to copy the values of the main process to the new processes and add the memory mapping path of the main process to obtain the sub-processes.
  • embodiments of the present invention provide an electronic device, including: a memory, a processor, and a computer program stored in the memory and executable on the processor.
  • when the processor executes the computer program, the steps in the data processing method provided by the embodiments of the present invention are implemented.
  • embodiments of the present invention provide a computer-readable storage medium.
  • a computer program is stored on the computer-readable storage medium.
  • when the computer program is executed by a processor, the steps of the data processing method provided by the embodiments of the invention are implemented.
  • the preset algorithm model is preloaded through the created main process to obtain the preloaded algorithm model, and the preloaded algorithm model is saved in the memory of the main process; based on the main process, a preset number of sub-processes are created, and the sub-processes share the memory of the main process; the sub-processes obtain the preloaded algorithm model from the main process for data processing.
  • the preset algorithm model is preloaded through the created main process, a sub-process identical to the main process is created, and the sub-process is used for data processing; the sub-process obtains the corresponding preloaded algorithm model by accessing the memory of the main process, so the sub-process no longer needs to load the algorithm model itself, which improves data processing speed.
  • the sub-process does not need additional memory resources to load the algorithm model, saving a lot of memory resources.
  • Figure 1 is a flow chart of a data processing method provided by an embodiment of the present invention.
  • Figure 2 is a schematic structural diagram of a data processing model provided by an embodiment of the present invention.
  • Figure 3 is a schematic structural diagram of a data processing device provided by an embodiment of the present invention.
  • Figure 4 is a schematic structural diagram of an electronic device provided by an embodiment of the present invention.
  • Figure 1 is a flow chart of a data processing method provided by an embodiment of the present invention. As shown in Figure 1, the data processing method includes the following steps:
  • the above-mentioned data processing method is applied to the cloud service process of the machine learning algorithm model, and the above-mentioned data processing method is deployed in a server, and the above-mentioned server may be an http server.
  • the above-mentioned server can use Python to provide cloud service of machine learning algorithm model, or can use other programming language platforms to provide cloud service of machine learning algorithm model.
  • the above algorithm model can be a machine-learning-based algorithm model such as face recognition, target detection, target tracking, or image segmentation.
  • the above data processing can be the processing of data such as images or videos with the corresponding algorithms for face recognition, target detection, target tracking, image segmentation, and so on.
  • the main process includes code, data and allocated resources.
  • the above code is used to maintain the startup, management and shutdown of the main process.
  • the data is used for the calculation of the main process.
  • the allocated resources include computing resources and memory resources.
  • the above data is the preloaded algorithm model, and the above memory resources are used to store the preloaded algorithm model. It should be noted that after preloading is completed, the above main process does not process data directly.
  • the above preset algorithm models can be all the algorithm models usable for data processing. After the main process is created, all of these algorithm models are preloaded into the main process to obtain the preloaded algorithm models, and the preloaded models are stored in the main process memory.
  • a new process that is the same as the main process can be created as a child process.
  • Each child process is managed through a child process identifier.
  • Each child process corresponds to a unique child process identifier.
  • the above child process identifier can be a child process ID; the above child process shares the memory of the main process.
  • the main process manages the sub-process through the sub-process identification, and the sub-process is used to process connection requests and request responses.
  • the above connection request can be a connection request based on an http server.
  • the user can submit tasks such as face recognition, target detection, target tracking, and image segmentation as connection requests through the http server.
  • the above request response includes calculation results; for example, the calculation results of tasks such as face recognition, target detection, target tracking, and image segmentation are returned to the upper-layer application of the http server for users to view.
  • the preset number of the above-mentioned sub-processes may be the number of cores of the computer.
  • Each sub-process corresponds to one core of the computer, and corresponding computing resources are provided for the sub-process through the core.
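The patent does not prescribe how a sub-process is bound to a core; one assumed possibility on Linux is to set CPU affinity explicitly, as in this minimal sketch:

```python
import os

def pin_to_core(core_id):
    # Linux-only sketch of the "one sub-process per core" arrangement:
    # restrict the calling process (pid 0 means "this process") to one core.
    os.sched_setaffinity(0, {core_id})
```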
  • after receiving the connection request, the sub-process accesses the memory of the main process, obtains the corresponding preloaded algorithm model, and performs data processing through the preloaded algorithm model.
  • the preloaded algorithm models are stored in the main process memory in the form of linked lists.
  • each preloaded algorithm model corresponds to a segment of a linked list, and the child process obtains a preloaded algorithm model by copying the corresponding linked list.
  • after obtaining the preloaded algorithm model, the sub-process processes the data directly according to it, so that the corresponding calculation results can be obtained.
  • the preset algorithm model is preloaded through the created main process to obtain the preloaded algorithm model, and the preloaded algorithm model is saved in the memory of the main process; based on the main process, a preset number of sub-processes are created, and the sub-processes share the memory of the main process; the sub-processes obtain the preloaded algorithm model from the main process for data processing.
  • the preset algorithm model is preloaded through the created main process, a sub-process identical to the main process is created, and the sub-process is used for data processing; the sub-process obtains the corresponding preloaded algorithm model by accessing the memory of the main process, so the sub-process no longer needs to load the algorithm model itself, which improves data processing speed.
  • the sub-process does not need additional memory resources to load the algorithm model, saving a lot of memory resources.
  • optionally, after the preloading step, the main process memory can be frozen once all the preloaded algorithm models have been saved in it.
  • after preloading is completed, the preloaded algorithm models are no longer modified, in order to guarantee their correctness; to prevent any modification, the main process memory can be frozen as soon as all the preloaded algorithm models have been saved, keeping their loaded data unchanged.
  • the cloud service of the above machine learning algorithm model can be implemented with Python 3.7 or above; the main process memory is frozen by calling Python's gc.freeze() function.
  • Python's garbage collector mainly uses reference counting to track and reclaim garbage; it resolves the circular references that container objects may create through mark-and-sweep; and it further improves collection efficiency through generational collection, trading space for time.
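For orientation, Python's gc module exposes this machinery directly; a tiny sketch (the printed counts are runtime-dependent):

```python
import gc

print(gc.get_count())       # allocations tracked per generation, e.g. (451, 3, 0)
unreachable = gc.collect()  # force a full mark-and-sweep over all generations
print(unreachable)          # number of unreachable objects the pass found
```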
  • however, Python's garbage collection mechanism can trigger copy-on-write, which would alter the loaded data in the main process memory and thereby change the preloaded algorithm model.
  • calling the gc.freeze() function prevents Python's garbage collection mechanism from triggering this copy-on-write problem.
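A minimal sketch of the preload-then-freeze step (load_model and the model names are illustrative stand-ins, not an API named by the patent):

```python
import gc

def load_model(name):
    # Stand-in for deserializing a real machine-learning model.
    return {"name": name, "weights": bytearray(1024)}

PRELOADED_MODELS = {}  # held by the main process, shared with forked children

def preload_all(model_names):
    for name in model_names:
        PRELOADED_MODELS[name] = load_model(name)

preload_all(["face_recognition", "target_detection"])  # illustrative names
# Python 3.7+: move every object tracked so far into the permanent
# generation, so later collections never rescan (and so never write to)
# these objects, avoiding copy-on-write in forked sub-processes.
gc.freeze()
```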
  • in the step of creating a preset number of child processes based on the main process, a preset number of new processes can be created; hardware resources are allocated to the new processes; and the values of the main process are copied to each new process, with the memory mapping path of the main process added, to obtain the child processes.
  • the above-mentioned preset number can be determined according to the number of cores of the computer, with each core corresponding to one process. If the computer has N cores, the preset number can be N in compute-intensive scenarios; in non-intensive scenarios, the preset number can be determined according to the algorithm models required by the specific algorithm tasks.
  • the above hardware resources refer to the computing resources corresponding to the computer's core.
  • one core is allocated to each sub-process, and a sub-process can be executed on one core, so that in highly parallel computing scenarios the computer's multi-core performance can be fully exploited and data processing speed improved.
  • Each child process has a process identifier (process ID), which can be obtained through the getpid() function.
  • the main process can manage the child process through the process identifier, including starting, monitoring and closing the child process.
  • the cloud service of the above machine learning algorithm model can be implemented based on Python.
  • a sub-process that is the same as the main process can be created through the fork() function.
  • the sub-process and the main process can perform exactly the same tasks, or they can perform different tasks based on different initial parameters or passed-in variables.
  • in the embodiments of the present invention, however, the main process is only used to manage the sub-processes and does not execute any task corresponding to an algorithm model.
  • specifically, after the main process calls the fork() function, the system first allocates resources to the new process, such as space to store data and code, and then copies all the values of the main process into it, which is equivalent to duplicating the main process as a child process.
  • the child process can only use the algorithm model that has been loaded by the main process.
  • the algorithm model loaded by the main process can be controlled through parameters and environment variables. This allows the main process to load on demand and only load the required models, further saving memory.
  • the main process is allocated a separate core during the preloading of the algorithm models; after preloading is completed, the corresponding core resources are released, leaving only the memory resources and the resources needed to manage the sub-processes, so that the released core resources can be used to create child processes.
  • the main process only preloads the algorithm models and provides management functions for the sub-processes; it does not take part in the actual algorithm model computation, which is performed by the sub-processes. Since the main process does not take part in the computation and the sub-processes do not load the algorithm models, fewer memory resources can be allocated to the sub-processes, which avoids sub-process creation failures caused by excessive memory requirements and improves the success rate of sub-process creation.
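Continuing the sketch above, worker creation with os.fork() might look as follows; serve_requests is a placeholder, since the patent names fork() and getpid() but no further API:

```python
import os
import signal

def serve_requests(models):
    # Placeholder request loop: a real worker would accept http connections
    # and answer them using the shared, already-loaded models.
    pass

def spawn_workers(num_workers):
    """Fork one worker per core (Unix only); children inherit the parent's
    pages copy-on-write, so PRELOADED_MODELS is usable without reloading."""
    pids = []
    for _ in range(num_workers):
        pid = os.fork()
        if pid == 0:                  # child branch
            serve_requests(PRELOADED_MODELS)
            os._exit(0)
        pids.append(pid)              # parent keeps PIDs to manage its children
    return pids

def stop_worker(pid):
    # The main process manages a sub-process through its process identifier.
    os.kill(pid, signal.SIGTERM)
    os.waitpid(pid, 0)
```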
  • optionally, in the step of obtaining the preloaded algorithm model from the main process through the sub-process for data processing, a task to be processed can be obtained; the task to be processed includes the data to be processed and the algorithm requirements; based on the algorithm requirements, the corresponding preloaded algorithm model is read, read-only, from the main process memory; and the data to be processed is processed through the read-only preloaded algorithm model.
  • the above-mentioned tasks to be processed can be obtained according to the connection request of the http server.
  • the user can submit the connection request through the http server.
  • the connection request can include at least one task such as face recognition, target detection, target tracking, or image segmentation as the task to be processed.
  • the above-mentioned data to be processed may be image data, and the above-mentioned image data may be data uploaded by the user or uploaded by connecting to an image device designated by the user.
  • the above connection request can be a connection request based on an http server.
  • after receiving the connection request, the child process finds the corresponding preloaded algorithm model in the main process memory according to the algorithm requirements in the request and reads it read-only; tasks such as face recognition, target detection, target tracking, and image segmentation each correspond to a preloaded algorithm model read in this way.
  • specifically, the preloaded algorithm models are stored in the main process memory in the form of linked lists; the child process can read, read-only, the corresponding part of the linked list in the main process memory, thereby reading only the corresponding preloaded algorithm model.
  • since the child process has the same code structure as the main process, once the preloaded algorithm model is obtained it can be run directly, without spending additional memory resources to load the algorithm model again, saving a large amount of memory.
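The worker's request path can then stay read-only with respect to the shared models; a minimal sketch reusing the stub models above (the request field names are assumptions):

```python
def handle_request(request, models):
    # `request` is assumed to look like
    # {"task": "face_recognition", "data": b"...image bytes..."}.
    model = models[request["task"]]  # read-only lookup in shared memory
    # A real preloaded model would run inference here; with the stub models
    # above we simply report which model served the task.
    return {"task": request["task"], "served_by": model["name"]}
```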
  • optionally, before the preloading step, a task to be processed can be obtained; the task to be processed includes algorithm requirements; based on the algorithm requirements, the corresponding algorithm model is obtained from the algorithm model library as the preset algorithm model.
  • the above-mentioned tasks to be processed can be obtained according to the connection request of the http server.
  • the user can submit the connection request through the http server.
  • the connection request can include at least one task such as face recognition, target detection, target tracking, or image segmentation as the task to be processed.
  • the connection request can be a connection request based on an http server.
  • after receiving the connection request, the main process obtains the corresponding algorithm model from the algorithm model library according to the task to be processed and preloads it.
  • All algorithm models are stored in the above algorithm model library.
  • the above algorithm model library is connected to the http server over a communication link; the http server itself hosts no algorithm model library.
  • the http server only stores the preloaded algorithm models through the main process.
  • the storage method is a key-value mapping: each preloaded algorithm model corresponds to a key-value pair, which can greatly reduce the storage footprint of the algorithm models.
  • the sub-process can obtain the mapped "value" through its "key" in the main process memory, thereby reading the corresponding preloaded algorithm model read-only.
  • the sub-process is created through the main process. During the creation process, the page table corresponding to the memory of the main process is copied. The memory sharing between the main process and the sub-process is realized through the page table. The corresponding key-value mapping relationship is recorded in the page table.
  • in one possible embodiment, the types of algorithm models are determined according to the algorithm requirements in the task to be processed; the algorithm models corresponding to those types are obtained from the algorithm model library and preloaded through the main process. Selecting only the types of algorithm models the requirements call for reduces the amount of model data the main process preloads, and thereby the main process's memory resource needs.
  • in another possible embodiment, based on historical call data, the M most frequently called models can be selected for preloading; this likewise reduces the amount of model data the main process preloads and the main process's memory resource needs.
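A sketch of that selection step, assuming the call history is available as a plain list of model names:

```python
from collections import Counter

def select_models_to_preload(call_history, m):
    # Pick the M most frequently called models from historical call data.
    return [name for name, _ in Counter(call_history).most_common(m)]

# select_models_to_preload(
#     ["face_recognition", "target_detection", "face_recognition"], m=1)
# -> ["face_recognition"]
```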
  • since the algorithm models have already been preloaded, the sub-process can obtain the loaded model from the main process for algorithm prediction, with no need to load the corresponding algorithm model for every request in the current sub-process; this improves the computer's concurrent data processing performance in dense scenarios and saves a large amount of memory.
  • the data processing method provided by the embodiments of the present invention can be applied to devices capable of data processing, such as smartphones, computers, and servers.
  • Figure 2 is a schematic diagram of the principle of a data processing method provided by an embodiment of the present invention.
  • the cloud service of the machine learning algorithm model is implemented with Python 3.7 or above.
  • the algorithm models cannot be modified after they have been preloaded.
  • when creating the http server, the pre-fork mode is used to create a main process in advance. After the main process is created and initialized, all algorithm models are immediately preloaded, and the loaded algorithm models are saved into memory. After all algorithm models are preloaded, Python's gc.freeze() function is called to freeze the memory and prevent Python's garbage collection mechanism from causing copy-on-write problems. The main process is then used to fork out a batch of sub-processes.
  • the main process is used to manage sub-processes, and the sub-processes are used to process connection requests and responses.
  • the child process can obtain the algorithm model that has been loaded in the main process for algorithm prediction, without the need to load the model for every request in the current process, which improves data processing performance and saves a lot of memory.
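Putting the earlier sketches together, the described pre-fork lifecycle reduces to roughly the following, still under the same illustrative assumptions:

```python
import os

if __name__ == "__main__":
    # 1. preload_all(...) and gc.freeze() have run in the main process.
    # 2. Fork one worker per core; workers serve requests with the
    #    already-loaded models.
    worker_pids = spawn_workers(os.cpu_count())
    # 3. The main process never runs predictions; it only manages workers.
    for pid in worker_pids:
        os.waitpid(pid, 0)
```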
  • the data processing method provided by the embodiments of the present invention can be applied to devices capable of data processing, such as smartphones, computers, and servers.
  • Figure 3 is a schematic structural diagram of a data processing device provided by an embodiment of the present invention. As shown in Figure 3, the device includes:
  • the preloading module 301 is used to preload the preset algorithm model through the created main process, obtain the preloaded algorithm model, and save the preloaded algorithm model in the main process memory;
  • the sub-process creation module 302 is used to create a preset number of sub-processes based on the main process, and the sub-processes share the memory of the main process;
  • the processing module 303 is configured to obtain the preloaded algorithm model in the main process through the sub-process for data processing.
  • the device also includes:
  • a freezing module is configured to freeze the memory of the main process when all the preloaded algorithm models are saved in the memory of the main process.
  • the sub-process creation module includes:
  • a sub-process creation submodule, used to create a preset number of new processes;
  • an allocation submodule, used to allocate hardware resources to the new processes;
  • an adding submodule, used to copy the values of the main process to the new processes and add the memory mapping path of the main process to obtain the sub-processes.
  • the processing module 303 includes:
  • Acquisition sub-module used to obtain tasks to be processed, which include data to be processed and algorithm requirements;
  • the read-only submodule is used to read only the corresponding preloaded algorithm model in the memory of the main process based on the algorithm requirements;
  • the processing sub-module is used to process the data to be processed through the read-only preloaded algorithm model.
  • the device also includes:
  • the first acquisition module is used to acquire tasks to be processed, where the tasks to be processed include algorithm requirements;
  • the second acquisition module is used to acquire the corresponding algorithm model as a preset algorithm model in the algorithm model library based on the algorithm requirements.
  • the data processing device provided by the embodiment of the present invention can be applied to devices such as smartphones, computers, and servers that can perform data processing.
  • the data processing device provided by the embodiment of the present invention can implement each process implemented by the data processing method in the above method embodiment, and can achieve the same beneficial effects. To avoid repetition, they will not be repeated here.
  • Figure 4 is a schematic structural diagram of an electronic device provided by an embodiment of the present invention. As shown in Figure 4, it includes: a memory 402, a processor 401, and a computer program for a data processing method stored in the memory 402 and executable on the processor 401, wherein:
  • the processor 401 is used to call the computer program stored in the memory 402 and perform the following steps:
  • the preset algorithm model is preloaded through the created main process to obtain the preloaded algorithm model, which is saved in the main process memory; a preset number of sub-processes are created based on the main process, the sub-processes sharing the main process memory; and the preloaded algorithm model is obtained from the main process through the sub-processes for data processing.
  • after the step of preloading the preset algorithm model through the created main process to obtain the preloaded algorithm model and saving the preloaded algorithm model in the main process memory, the method performed by the processor 401 further includes:
  • the main process memory is frozen.
  • the creation of a preset number of sub-processes based on the main process, performed by the processor 401, includes: creating a preset number of new processes; allocating hardware resources to the new processes; and copying the values of the main process to the new processes, with the memory mapping path of the main process added, to obtain the sub-processes.
  • the obtaining, performed by the processor 401, of the preloaded algorithm model from the main process through the sub-processes for data processing includes: obtaining a task to be processed, the task including data to be processed and algorithm requirements; reading the corresponding preloaded algorithm model read-only from the main process memory based on the algorithm requirements; and processing the data to be processed through the read-only preloaded algorithm model.
  • before the step of preloading the preset algorithm model through the created main process to obtain the preloaded algorithm model and saving the preloaded algorithm model in the main process memory, the method performed by the processor 401 further includes: obtaining a task to be processed, the task including algorithm requirements; and obtaining, based on the algorithm requirements, the corresponding algorithm model from the algorithm model library as the preset algorithm model.
  • the electronic device provided by the embodiment of the present invention can implement each process implemented by the data processing method in the above method embodiment, and can achieve the same beneficial effects. To avoid repetition, they will not be repeated here.
  • Embodiments of the present invention also provide a computer-readable storage medium.
  • a computer program is stored on the computer-readable storage medium.
  • when the computer program is executed by a processor, each process of the data processing method or the application-side data processing method provided by the embodiments of the present invention is implemented, achieving the same technical effects; to avoid duplication, it is not described again here.
  • those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be completed by instructing the relevant hardware through a computer program; the program can be stored in a computer-readable storage medium, and when executed may include the processes of the embodiments of each of the above methods.
  • the storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), or a random access memory (Random Access Memory, RAM), etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A data processing method, comprising: preloading a preset algorithm model through a created main process to obtain a preloaded algorithm model, and saving the preloaded algorithm model in the main process memory; creating a preset number of sub-processes based on the main process, the sub-processes sharing the main process memory; and obtaining, through the sub-processes, the preloaded algorithm model from the main process for data processing. The preset algorithm model is preloaded through the created main process, corresponding sub-processes are created from the main process, and the sub-processes are used for data processing; a sub-process obtains the corresponding preloaded algorithm model by accessing the main process memory, so the sub-process no longer needs to load the algorithm model, which improves data processing speed; at the same time, the sub-process needs no additional memory resources to load the algorithm model, saving a large amount of memory.

Description

Data processing method, apparatus, electronic device and storage medium. Technical Field
This application claims priority to the Chinese patent application filed with the China National Intellectual Property Administration on July 28, 2022, with application number 202210900030.2 and the invention title "Data processing method, apparatus, electronic device and storage medium", the entire contents of which are incorporated into this application by reference.
The present invention relates to the field of computers, and in particular to a data processing method, apparatus, electronic device and storage medium.
Background Art
In the process of providing machine learning algorithm models as cloud services, multi-threading, coroutine, and multi-process schemes are often adopted to improve service throughput. However, the drawback of the multi-threading scheme is that, in Python-based cloud services, the GIL (Global Interpreter Lock) limits the parallel performance of multi-threading, so the multi-core capability of the computer cannot be exploited. Similarly, the drawback of the coroutine scheme is that, in Python-based cloud services, coroutines are not essentially different from multi-threading for compute-intensive programs: they are equally constrained by the GIL, and switching between coroutines actually slows the computer down. The drawback of the multi-process scheme is that, although it can make full use of the computer's multi-core capability for compute-intensive programs, with each process corresponding to one core of the computer in multi-process mode, the algorithm model has to be loaded in every process before that process can run predictions, which wastes a large amount of memory and drags down the computer's data processing performance. Therefore, existing multi-process data processing suffers from high memory usage, resulting in low data processing performance.
Technical Solution
Embodiments of the present invention provide a data processing method, aiming to solve the problem that existing multi-process data processing has high memory usage, resulting in low data processing performance. The preset algorithm model is preloaded through the created main process, a sub-process identical to the main process is created, and the sub-process is used for data processing; the sub-process obtains the corresponding preloaded algorithm model by accessing the main process memory, so the sub-process no longer needs to load the algorithm model, which improves data processing speed; at the same time, the sub-process needs no additional memory resources to load the algorithm model, saving a large amount of memory.
In a first aspect, an embodiment of the present invention provides a data processing method, the method including:
preloading a preset algorithm model through a created main process to obtain a preloaded algorithm model, and saving the preloaded algorithm model in the main process memory;
creating a preset number of sub-processes based on the main process, the sub-processes sharing the main process memory;
obtaining, through the sub-processes, the preloaded algorithm model from the main process for data processing.
Optionally, after the step of preloading the preset algorithm model through the created main process to obtain the preloaded algorithm model and saving the preloaded algorithm model in the main process memory, the method further includes:
when all the preloaded algorithm models have been saved in the main process memory, freezing the main process memory.
Optionally, creating a preset number of sub-processes based on the main process includes:
creating a preset number of new processes;
allocating hardware resources to the new processes;
copying the values of the main process to the new processes and adding the memory mapping path of the main process to obtain the sub-processes.
Optionally, obtaining, through the sub-processes, the preloaded algorithm model from the main process for data processing includes:
obtaining a task to be processed, the task to be processed including data to be processed and algorithm requirements;
based on the algorithm requirements, reading the corresponding preloaded algorithm model read-only from the main process memory;
processing the data to be processed through the read-only preloaded algorithm model.
Optionally, before the step of preloading the preset algorithm model through the created main process to obtain the preloaded algorithm model and saving the preloaded algorithm model in the main process memory, the method further includes:
obtaining a task to be processed, the task to be processed including algorithm requirements;
based on the algorithm requirements, obtaining the corresponding algorithm model from an algorithm model library as the preset algorithm model.
In a second aspect, an embodiment of the present invention provides a data processing apparatus, the apparatus including:
a preloading module, used to preload a preset algorithm model through a created main process to obtain a preloaded algorithm model, and save the preloaded algorithm model in the main process memory;
a sub-process creation module, used to create a preset number of sub-processes based on the main process, the sub-processes sharing the main process memory;
a processing module, used to obtain, through the sub-processes, the preloaded algorithm model from the main process for data processing.
Optionally, the apparatus further includes:
a freezing module, used to freeze the main process memory when all the preloaded algorithm models have been saved in the main process memory.
Optionally, the sub-process creation module includes:
a sub-process creation submodule, used to create a preset number of new processes;
an allocation submodule, used to allocate hardware resources to the new processes;
an adding submodule, used to copy the values of the main process to the new processes and add the memory mapping path of the main process to obtain the sub-processes.
In a third aspect, an embodiment of the present invention provides an electronic device, including: a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the steps in the data processing method provided by the embodiments of the present invention are implemented.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the steps in the data processing method provided by the embodiments of the invention are implemented.
In the embodiments of the present invention, a preset algorithm model is preloaded through a created main process to obtain a preloaded algorithm model, and the preloaded algorithm model is saved in the main process memory; a preset number of sub-processes are created based on the main process, the sub-processes sharing the main process memory; and the preloaded algorithm model is obtained from the main process through the sub-processes for data processing. The preset algorithm model is preloaded through the created main process, sub-processes identical to the main process are created, and the sub-processes are used for data processing; a sub-process obtains the corresponding preloaded algorithm model by accessing the main process memory, so the sub-process no longer needs to load the algorithm model, which improves data processing speed; at the same time, the sub-process needs no additional memory resources to load the algorithm model, saving a large amount of memory.
Description of Drawings
The drawings needed in the embodiments of the present application are introduced below.
Figure 1 is a flow chart of a data processing method provided by an embodiment of the present invention;
Figure 2 is a schematic structural diagram of a data processing model provided by an embodiment of the present invention;
Figure 3 is a schematic structural diagram of a data processing apparatus provided by an embodiment of the present invention;
Figure 4 is a schematic structural diagram of an electronic device provided by an embodiment of the present invention.
Embodiments of the Invention
The embodiments of the present application are described below with reference to the drawings.
Referring to Figure 1, Figure 1 is a flow chart of a data processing method provided by an embodiment of the present invention. As shown in Figure 1, the data processing method includes the following steps:
101. Preload a preset algorithm model through the created main process to obtain a preloaded algorithm model, and save the preloaded algorithm model in the main process memory.
In the embodiments of the present invention, the above data processing method is applied to the process of providing machine learning algorithm models as cloud services; the data processing method is deployed in a server, and the server may be an http server. The server can provide the machine learning algorithm model cloud service using Python, or using other programming language platforms.
The above algorithm model may be a machine-learning-based algorithm model such as face recognition, target detection, target tracking, or image segmentation, and the above data processing may be the processing of data such as images or videos with the corresponding algorithms.
Specifically, a main process can be created in advance when the http server is created. The main process includes code, data and allocated resources, where the code maintains the startup, management and shutdown of the main process, the data is used for the main process's computation, and the allocated resources include computing resources and memory resources. In the embodiments of the present invention, the data is the preloaded algorithm model, and the memory resources are used to store the preloaded algorithm model. It should be noted that after preloading is completed, the main process does not process data directly.
The above preset algorithm models may be all the algorithm models that can be used for data processing; after the main process is created, all of these algorithm models are preloaded into the main process to obtain the preloaded algorithm models, which are stored in the main process memory.
102. Create a preset number of sub-processes based on the main process.
In the embodiments of the present invention, a new process identical to the main process can be created as a sub-process. Each sub-process is managed through a sub-process identifier, each sub-process corresponds to a unique sub-process identifier, the identifier may be a sub-process ID, and the sub-processes share the main process memory.
The main process manages the sub-processes through their identifiers, and the sub-processes are used to process connection requests and request responses. The above connection request may be a connection request based on the http server; for example, a user can submit tasks such as face recognition, target detection, target tracking, and image segmentation as connection requests through the http server. The above request response includes calculation results, such as the results of face recognition, target detection, target tracking, and image segmentation tasks, and the calculation results are returned to the upper-layer application of the http server for the user to view.
Specifically, the code to be executed by the main process can be copied to obtain a new process, an identifier is assigned to the new process, and a parent-child relationship between the main process and the sub-process is established. One main process can be copied to obtain multiple sub-processes.
The preset number of sub-processes may be the number of cores of the computer; each sub-process corresponds to one core of the computer, and the core provides the corresponding computing resources for the sub-process.
103. Obtain the preloaded algorithm model from the main process through the sub-process for data processing.
In the embodiments of the present invention, after receiving a connection request, the sub-process accesses the main process memory, obtains the corresponding preloaded algorithm model, and performs data processing through the preloaded algorithm model.
Specifically, the preloaded algorithm models are stored in the main process memory in the form of linked lists; each preloaded algorithm model corresponds to a segment of a linked list, and the sub-process obtains a preloaded algorithm model by copying the corresponding linked list.
After obtaining the preloaded algorithm model, the sub-process processes the data directly according to it, so that the corresponding calculation results can be obtained.
In the embodiments of the present invention, a preset algorithm model is preloaded through a created main process to obtain a preloaded algorithm model, and the preloaded algorithm model is saved in the main process memory; a preset number of sub-processes are created based on the main process, the sub-processes sharing the main process memory; and the preloaded algorithm model is obtained from the main process through the sub-processes for data processing. The preset algorithm model is preloaded through the created main process, sub-processes identical to the main process are created, and the sub-processes are used for data processing; a sub-process obtains the corresponding preloaded algorithm model by accessing the main process memory, so the sub-process no longer needs to load the algorithm model, which improves data processing speed; at the same time, the sub-process needs no additional memory resources to load the algorithm model, saving a large amount of memory.
Optionally, after the step of preloading the preset algorithm model through the created main process to obtain the preloaded algorithm model and saving it in the main process memory, the main process memory can be frozen once all the preloaded algorithm models have been saved in it.
In the embodiments of the present invention, after preloading is completed, the preloaded algorithm models are no longer modified, in order to guarantee their correctness. Further, to avoid any modification, the main process memory can be frozen as soon as all the preloaded algorithm models have been saved, keeping the loaded data of the preloaded algorithm models unchanged.
Specifically, the machine learning algorithm model cloud service can be implemented with Python 3.7 or above, and the main process memory is frozen by calling Python's gc.freeze() function. Python's garbage collector mainly uses reference counting to track and reclaim garbage; it resolves the circular references that container objects may create through mark-and-sweep; and it further improves collection efficiency through generational collection, trading space for time. However, Python's garbage collection mechanism can trigger copy-on-write, which would alter the loaded data in the main process memory and thereby change the preloaded algorithm model; calling the gc.freeze() function prevents Python's garbage collection mechanism from triggering this copy-on-write problem.
Optionally, in the step of creating a preset number of sub-processes based on the main process, a preset number of new processes can be created; hardware resources are allocated to the new processes; and the values of the main process are copied to the new processes, with the memory mapping path of the main process added, to obtain the sub-processes.
In the embodiments of the present invention, the preset number can be determined according to the number of cores of the computer, with each core corresponding to one process. If the computer has N cores, the preset number can be N in compute-intensive scenarios; in non-intensive scenarios, the preset number can be determined according to the algorithm models required by the specific algorithm tasks.
The above hardware resources refer to the computing resources corresponding to the computer's cores. One core is allocated to each sub-process, and one sub-process can be executed on one core, so that in highly parallel computing scenarios the computer's multi-core performance can be fully exploited and data processing speed improved.
The sub-processes need to share the main process memory, so the memory mapping path of the main process must be added for each sub-process. In this way, a sub-process does not need to load the algorithm model into its own memory; it only needs to fetch the data of the corresponding preloaded algorithm model from the main process memory, which requires far less memory than loading the model. Each sub-process has a process identifier (process ID), obtainable through the getpid() function, and the main process can manage the sub-processes through these identifiers, including starting, monitoring and shutting them down.
Further, the machine learning algorithm model cloud service can be implemented in Python; after the main process is created, a sub-process identical to the main process can be created through the fork() function. The sub-process and the main process may perform exactly the same tasks, or different tasks depending on initial parameters or passed-in variables; in the embodiments of the present invention, however, the main process is only used to manage the sub-processes and does not execute any task corresponding to an algorithm model. Specifically, after the main process calls the fork() function, the system first allocates resources to the new process, such as space for storing data and code, and then copies all the values of the main process into the new process, which is equivalent to duplicating the main process as a sub-process. The sub-process can only use the algorithm models already loaded by the main process; which models the main process loads can be controlled through parameters and environment variables, so the main process can load on demand, loading only the required models, further saving memory.
Still further, the main process is allocated a separate core during the preloading of the algorithm models; after preloading is completed, the corresponding core resources are released, leaving only the memory resources and the resources needed to manage the sub-processes, so that the released core resources can be used to create sub-processes.
It should be noted that, in the embodiments of the present invention, the main process only preloads the algorithm models and provides management functions for the sub-processes; it does not take part in the actual algorithm model computation, which is performed by the sub-processes. Since the main process does not take part in the computation and the sub-processes do not load the algorithm models, fewer memory resources can be allocated to the sub-processes; this avoids sub-process creation failures caused by excessive memory requirements and improves the success rate of sub-process creation.
Optionally, in the step of obtaining the preloaded algorithm model from the main process through the sub-process for data processing, a task to be processed can be obtained, the task including data to be processed and algorithm requirements; based on the algorithm requirements, the corresponding preloaded algorithm model is read, read-only, from the main process memory; and the data to be processed is processed through the read-only preloaded algorithm model.
In the embodiments of the present invention, the task to be processed can be obtained from a connection request of the http server; the user can submit the connection request through the http server, and the request can include at least one task such as face recognition, target detection, target tracking, or image segmentation as the task to be processed. The data to be processed may be image data, which may be uploaded by the user or uploaded from an imaging device the user designates.
The connection request may be a connection request based on the http server. After receiving it, the sub-process finds the corresponding preloaded algorithm model in the main process memory, according to the algorithm requirements in the request, and reads it read-only; tasks such as face recognition, target detection, target tracking, and image segmentation each correspond to a preloaded algorithm model read in this way. Specifically, the preloaded algorithm models are stored in the main process memory as linked lists, and the sub-process can read, read-only, the corresponding part of the linked list in the main process memory, thereby reading only the corresponding preloaded algorithm model.
Since the sub-process has the same code structure as the main process, once the preloaded algorithm model is obtained it can be run directly, without spending additional memory resources to load the algorithm model again, saving a large amount of memory.
Optionally, before the step of preloading the preset algorithm model through the created main process to obtain the preloaded algorithm model and saving it in the main process memory, a task to be processed can be obtained, the task including algorithm requirements; based on the algorithm requirements, the corresponding algorithm model is obtained from the algorithm model library as the preset algorithm model.
In the embodiments of the present invention, the task to be processed can be obtained from a connection request of the http server; the user can submit the connection request through the http server, and the request can include at least one task such as face recognition, target detection, target tracking, or image segmentation as the task to be processed.
The connection request may be a connection request based on the http server; after receiving it, the main process obtains the corresponding algorithm model from the algorithm model library according to the task to be processed and preloads it.
All algorithm models are stored in the algorithm model library, which is connected to the http server over a communication link; the http server itself hosts no algorithm model library and only stores the preloaded algorithm models through the main process. The storage method is a key-value mapping: each preloaded algorithm model corresponds to a key-value pair, which can greatly reduce the storage footprint of the algorithm models. A sub-process can obtain the mapped "value" through its "key" in the main process memory, thereby reading the corresponding preloaded algorithm model read-only. The sub-process is created through the main process; during creation, the page table corresponding to the main process memory is copied, memory sharing between the main process and the sub-process is achieved through the page table, and the corresponding key-value mapping relationships are recorded in the page table.
In one possible embodiment, the types of algorithm models are determined according to the algorithm requirements in the task to be processed; the algorithm models corresponding to all of those types are obtained from the algorithm model library and preloaded through the main process. Selecting only the types of algorithm models the requirements call for reduces the amount of model data the main process preloads, and thereby the main process's memory resource needs.
In another possible embodiment, based on historical call data of the algorithm models, the M most frequently called models can be selected for preloading; this likewise reduces the amount of model data the main process preloads and the main process's memory resource needs.
In the embodiments of the present invention, since the algorithm models have already been preloaded into the main process memory, a sub-process can obtain the loaded models from the main process for algorithm prediction, without loading the corresponding model for every request in the current sub-process; this improves the computer's concurrent data processing performance in dense scenarios and saves a large amount of memory.
It should be noted that the data processing method provided by the embodiments of the present invention can be applied to devices capable of data processing, such as smartphones, computers, and servers.
Optionally, referring to Figure 2, Figure 2 is a schematic diagram of the principle of a data processing method provided by an embodiment of the present invention. The machine learning algorithm model cloud service is implemented with Python 3.7 or above, and the algorithm models cannot be modified after preloading. When the http server is created, the pre-fork mode is used to create a main process in advance. After the main process is created and initialized, all algorithm models are immediately preloaded, and the loaded algorithm models are saved into memory. Once all algorithm models are preloaded, Python's gc.freeze() function is called to freeze the memory and prevent Python's garbage collection mechanism from causing copy-on-write problems. The main process then forks out a batch of sub-processes; the main process manages the sub-processes, and the sub-processes process connection requests and responses. A sub-process can obtain the algorithm models already loaded in the main process for algorithm prediction, without loading the model on every request in the current process, which improves data processing performance and saves a large amount of memory.
It should be noted that the data processing method provided by the embodiments of the present invention can be applied to devices capable of data processing, such as smartphones, computers, and servers.
Optionally, referring to Figure 3, Figure 3 is a schematic structural diagram of a data processing apparatus provided by an embodiment of the present invention. As shown in Figure 3, the apparatus includes:
a preloading module 301, used to preload a preset algorithm model through a created main process to obtain a preloaded algorithm model, and save the preloaded algorithm model in the main process memory;
a sub-process creation module 302, used to create a preset number of sub-processes based on the main process, the sub-processes sharing the main process memory;
a processing module 303, used to obtain, through the sub-processes, the preloaded algorithm model from the main process for data processing.
Optionally, the apparatus further includes:
a freezing module, used to freeze the main process memory when all the preloaded algorithm models have been saved in the main process memory.
Optionally, the sub-process creation module includes:
a sub-process creation submodule, used to create a preset number of new processes;
an allocation submodule, used to allocate hardware resources to the new processes;
an adding submodule, used to copy the values of the main process to the new processes and add the memory mapping path of the main process to obtain the sub-processes.
Optionally, the processing module 303 includes:
an acquisition submodule, used to obtain a task to be processed, the task including data to be processed and algorithm requirements;
a read-only submodule, used to read, read-only, the corresponding preloaded algorithm model from the main process memory based on the algorithm requirements;
a processing submodule, used to process the data to be processed through the read-only preloaded algorithm model.
Optionally, the apparatus further includes:
a first acquisition module, used to obtain a task to be processed, the task including algorithm requirements;
a second acquisition module, used to obtain, based on the algorithm requirements, the corresponding algorithm model from the algorithm model library as the preset algorithm model.
It should be noted that the data processing apparatus provided by the embodiments of the present invention can be applied to devices capable of data processing, such as smartphones, computers, and servers.
The data processing apparatus provided by the embodiments of the present invention can implement each process implemented by the data processing method in the above method embodiments and achieve the same beneficial effects; to avoid repetition, details are not repeated here.
Referring to Figure 4, Figure 4 is a schematic structural diagram of an electronic device provided by an embodiment of the present invention. As shown in Figure 4, it includes: a memory 402, a processor 401, and a computer program for the data processing method stored in the memory 402 and executable on the processor 401, wherein:
the processor 401 is used to call the computer program stored in the memory 402 and perform the following steps:
preloading a preset algorithm model through a created main process to obtain a preloaded algorithm model, and saving the preloaded algorithm model in the main process memory;
creating a preset number of sub-processes based on the main process, the sub-processes sharing the main process memory;
obtaining, through the sub-processes, the preloaded algorithm model from the main process for data processing.
Optionally, after the step of preloading the preset algorithm model through the created main process to obtain the preloaded algorithm model and saving the preloaded algorithm model in the main process memory, the method performed by the processor 401 further includes:
when all the preloaded algorithm models have been saved in the main process memory, freezing the main process memory.
Optionally, the creation of a preset number of sub-processes based on the main process, performed by the processor 401, includes:
creating a preset number of new processes;
allocating hardware resources to the new processes;
copying the values of the main process to the new processes and adding the memory mapping path of the main process to obtain the sub-processes.
Optionally, the obtaining, performed by the processor 401, of the preloaded algorithm model from the main process through the sub-processes for data processing includes:
obtaining a task to be processed, the task to be processed including data to be processed and algorithm requirements;
based on the algorithm requirements, reading the corresponding preloaded algorithm model read-only from the main process memory;
processing the data to be processed through the read-only preloaded algorithm model.
Optionally, before the step of preloading the preset algorithm model through the created main process to obtain the preloaded algorithm model and saving the preloaded algorithm model in the main process memory, the method performed by the processor 401 further includes:
obtaining a task to be processed, the task to be processed including algorithm requirements;
based on the algorithm requirements, obtaining the corresponding algorithm model from the algorithm model library as the preset algorithm model.
The electronic device provided by the embodiments of the present invention can implement each process implemented by the data processing method in the above method embodiments and achieve the same beneficial effects; to avoid repetition, details are not repeated here.
An embodiment of the present invention also provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, each process of the data processing method or the application-side data processing method provided by the embodiments of the present invention is implemented, achieving the same technical effects; to avoid repetition, details are not repeated here.
Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be completed by instructing the relevant hardware through a computer program; the program can be stored in a computer-readable storage medium, and when executed may include the processes of the embodiments of each of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), or a random access memory (Random Access Memory, RAM), etc.
What is disclosed above is only the preferred embodiments of the present invention, which of course cannot be used to limit the scope of rights of the present invention; therefore, equivalent changes made according to the claims of the present invention still fall within the scope covered by the present invention.

Claims (10)

  1. A data processing method, characterized by comprising the following steps:
    preloading a preset algorithm model through a created main process to obtain a preloaded algorithm model, and saving the preloaded algorithm model in the main process memory;
    creating a preset number of sub-processes based on the main process, the sub-processes sharing the main process memory;
    obtaining, through the sub-processes, the preloaded algorithm model from the main process for data processing.
  2. The data processing method according to claim 1, characterized in that after the step of preloading the preset algorithm model through the created main process to obtain the preloaded algorithm model and saving the preloaded algorithm model in the main process memory, the method further comprises:
    when all the preloaded algorithm models have been saved in the main process memory, freezing the main process memory.
  3. The data processing method according to claim 1, characterized in that creating a preset number of sub-processes based on the main process comprises:
    creating a preset number of new processes;
    allocating hardware resources to the new processes;
    copying the values of the main process to the new processes and adding the memory mapping path of the main process to obtain the sub-processes.
  4. The data processing method according to claim 1, characterized in that obtaining, through the sub-processes, the preloaded algorithm model from the main process for data processing comprises:
    obtaining a task to be processed, the task to be processed comprising data to be processed and algorithm requirements;
    based on the algorithm requirements, reading the corresponding preloaded algorithm model read-only from the main process memory;
    processing the data to be processed through the read-only preloaded algorithm model.
  5. The data processing method according to any one of claims 1 to 4, characterized in that before the step of preloading the preset algorithm model through the created main process to obtain the preloaded algorithm model and saving the preloaded algorithm model in the main process memory, the method further comprises:
    obtaining a task to be processed, the task to be processed comprising algorithm requirements;
    based on the algorithm requirements, obtaining the corresponding algorithm model from an algorithm model library as the preset algorithm model.
  6. A data processing apparatus, characterized in that the data processing apparatus comprises:
    a preloading module, used to preload a preset algorithm model through a created main process to obtain a preloaded algorithm model, and save the preloaded algorithm model in the main process memory;
    a sub-process creation module, used to create a preset number of sub-processes based on the main process, the sub-processes sharing the main process memory;
    a processing module, used to obtain, through the sub-processes, the preloaded algorithm model from the main process for data processing.
  7. The data processing apparatus according to claim 6, characterized in that the apparatus further comprises:
    a freezing module, used to freeze the main process memory when all the preloaded algorithm models have been saved in the main process memory.
  8. The data processing apparatus according to claim 6, characterized in that the sub-process creation module comprises:
    a sub-process creation submodule, used to create a preset number of new processes;
    an allocation submodule, used to allocate hardware resources to the new processes;
    an adding submodule, used to copy the values of the main process to the new processes and add the memory mapping path of the main process to obtain the sub-processes.
  9. An electronic device, characterized by comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein when the processor executes the computer program, the steps in the data processing method according to any one of claims 1 to 5 are implemented.
  10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps in the data processing method according to any one of claims 1 to 5 are implemented.
PCT/CN2022/143511 2022-07-28 2022-12-29 Data processing method, apparatus, electronic device and storage medium WO2024021524A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210900030.2A CN115309523A (zh) 2022-07-28 2022-07-28 Data processing method, apparatus, electronic device and storage medium
CN202210900030.2 2022-07-28

Publications (1)

Publication Number Publication Date
WO2024021524A1 true WO2024021524A1 (zh) 2024-02-01

Family

ID=83858204

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/CN2022/141552 2022-07-28 2022-12-23 Data processing method, apparatus, electronic device and storage medium WO2024021477A1 (zh)
PCT/CN2022/143511 2022-07-28 2022-12-29 Data processing method, apparatus, electronic device and storage medium WO2024021524A1 (zh)

Family Applications Before (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/141552 2022-07-28 2022-12-23 Data processing method, apparatus, electronic device and storage medium WO2024021477A1 (zh)

Country Status (2)

Country Link
CN (1) CN115309523A (zh)
WO (2) WO2024021477A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115309523A (zh) * 2022-07-28 2022-11-08 青岛云天励飞科技有限公司 数据处理方法、装置、电子设备及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113157452A (zh) * 2021-04-20 2021-07-23 腾讯科技(深圳)有限公司 Application service request method, apparatus, computer device and storage medium
CN113326139A (zh) * 2021-06-28 2021-08-31 上海商汤科技开发有限公司 Task processing method, apparatus, device and storage medium
US20210389982A1 (en) * 2020-06-10 2021-12-16 Q2 Software, Inc. System and method for process and data isolation in a networked service environment
CN114443177A (zh) * 2020-10-30 2022-05-06 腾讯科技(深圳)有限公司 Application running method, apparatus, server and storage medium
CN115309523A (zh) * 2022-07-28 2022-11-08 青岛云天励飞科技有限公司 Data processing method, apparatus, electronic device and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9552239B2 (en) * 2013-08-09 2017-01-24 Oracle International Corporation Using sub-processes across business processes in different composites
CN108121594B (zh) * 2016-11-29 2020-10-20 阿里巴巴集团控股有限公司 Process management method and apparatus
CN112231121A (zh) * 2020-10-20 2021-01-15 北京金山云网络技术有限公司 Method, apparatus and electronic device for creating a process
CN112256394B (zh) * 2020-10-23 2022-11-18 海光信息技术股份有限公司 Process security method, apparatus, CPU, chip and computer device

Also Published As

Publication number Publication date
CN115309523A (zh) 2022-11-08
WO2024021477A1 (zh) 2024-02-01

Similar Documents

Publication Publication Date Title
US20190220418A1 (en) Memory Management Method and Apparatus
US9058212B2 (en) Combining memory pages having identical content
US20120227038A1 (en) Lightweight on-demand virtual machines
TWI574202B Memory management model and interface for new applications
US20180136842A1 (en) Partition metadata for distributed data objects
US9983642B2 (en) Affinity-aware parallel zeroing of memory in non-uniform memory access (NUMA) servers
TWI539280B Method for analyzing applications not specifically designed to provide memory allocation information and for capturing such information, and computer system and computer-readable storage medium thereof
US20100083272A1 (en) Managing pools of dynamic resources
US11360884B2 (en) Reserved memory in memory management system
JP7304119B2 Method and apparatus for representing activation frames for pauseless garbage collection
CN111737168A Cache system, cache processing method, apparatus, device and medium
WO2024021524A1 (zh) Data processing method, apparatus, electronic device and storage medium
EP3991097A1 (en) Managing workloads of a deep neural network processor
CN113722114A Data service processing method, apparatus, computing device and storage medium
US20180232304A1 (en) System and method to reduce overhead of reference counting
CN113157428A Container-based resource scheduling method and apparatus, and container cluster management apparatus
CN116107731A Method and apparatus for controlling distributed cluster load
CN108139983A Method and apparatus for pinning memory pages in multi-level system memory
US11868805B2 (en) Scheduling workloads on partitioned resources of a host system in a container-orchestration system
CN110447019B Memory allocation manager and method performed thereby for managing memory allocation
CN114968482A Serverless processing method, apparatus and network device
CN114518962A Memory management method and apparatus
CN112114959B Resource scheduling method, distributed system, computer device and storage medium
US10783291B2 (en) Hybrid performance of electronic design automation (EDA) procedures with delayed acquisition of remote resources
Oh et al. HybridHadoop: CPU-GPU hybrid scheduling in hadoop

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22952953

Country of ref document: EP

Kind code of ref document: A1