CN109918233B - Data processing method and device, computing equipment and storage medium - Google Patents

Data processing method and device, computing equipment and storage medium

Info

Publication number
CN109918233B
CN109918233B
Authority
CN
China
Prior art keywords
data
gpu
memory
instruction
designated area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910167772.7A
Other languages
Chinese (zh)
Other versions
CN109918233A (en)
Inventor
谭贤亮
杨林
李晶晶
程佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Kingsoft Digital Network Technology Co Ltd
Original Assignee
Zhuhai Kingsoft Online Game Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Kingsoft Online Game Technology Co Ltd filed Critical Zhuhai Kingsoft Online Game Technology Co Ltd
Priority to CN201910167772.7A priority Critical patent/CN109918233B/en
Publication of CN109918233A publication Critical patent/CN109918233A/en
Application granted granted Critical
Publication of CN109918233B publication Critical patent/CN109918233B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present specification provides a data processing method, an apparatus, a computing device, and a storage medium. The data processing method includes: submitting, through a first worker thread, a data backup instruction for backing up designated-area data to a GPU-side graphics card driver; querying, through a second worker thread, the execution state of the data backup instruction in the GPU-side graphics card driver; and, when the query shows that the GPU-side graphics card driver has completed the data backup instruction, releasing the storage space occupied by the designated-area data in the GPU-side video memory.

Description

Data processing method and device, computing equipment and storage medium
Technical Field
The present disclosure relates to the field of internet technologies, and in particular, to a data processing method, an apparatus, a computing device, and a storage medium.
Background
In the prior art, most editing operations related to graphics processing are performed at the GPU side. To guard against misoperation during GPU processing, undo and redo (UNDO/REDO) operations must be provided, which requires the GPU to frequently back up and store rollback data. UNDO/REDO can be implemented in two ways: recording data or recording operations. Restoring by replaying recorded operations in reverse is unsuitable for complex graphics processing, because the algorithms involved are complex and performing the restoration of the graph by reverse operation is difficult. Complex graphics processing therefore generally records data: the original data is saved when an information editing window is opened, and the result data is saved after each user operation, where the data refers to whatever can be changed in the information editing window. During an UNDO operation, the program writes the data as it was before the user's last operation back to the corresponding controls of the editing window. This trades space for time: the program does not need to track which data the user actually changed, and simply replaces all data that may have changed each time. When the amount of data saved each time is small, this is convenient and fast; when it is large, for example when it includes graphics or video information, it consumes a great deal of memory.
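The record-data style of UNDO described above can be illustrated with a short sketch. This is not part of the patent; the class and field names are illustrative assumptions. The sketch simply snapshots the whole editable state before each operation and restores the previous snapshot on undo, i.e. the "space for time" approach.

// Illustrative sketch only: record-data UNDO that snapshots all data the
// editing window may change before each operation.
#include <cstdint>
#include <vector>

struct EditableState {                    // everything the editing window can change
    std::vector<std::uint8_t> pixels;     // e.g. image data
};

class UndoStack {
    std::vector<EditableState> snapshots_;
public:
    // Called before every user operation: save a full copy of the state.
    void recordBefore(const EditableState& s) { snapshots_.push_back(s); }

    // UNDO: hand the previous snapshot back to the editing window's controls.
    bool undo(EditableState& out) {
        if (snapshots_.empty()) return false;
        out = snapshots_.back();
        snapshots_.pop_back();
        return true;
    }
};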
Managing this rollback data is difficult because video memory is small and can hardly hold a large amount of rollback data, so the retention time and update frequency of the rollback data are necessarily limited, and efficient rollback operations are hard to support.
Disclosure of Invention
In view of this, embodiments of the present specification provide a data processing method, an apparatus, a computing device, and a storage medium, so as to solve technical defects in the prior art.
According to a first aspect of embodiments herein, there is provided a data processing method including:
submitting, through a first worker thread, a data backup instruction for backing up designated-area data to a GPU-side graphics card driver;
querying, through a second worker thread, the execution state of the data backup instruction in the GPU-side graphics card driver;
and, when the query shows that the GPU-side graphics card driver has completed the data backup instruction, releasing the storage space occupied by the designated-area data in the GPU-side video memory.
Optionally, submitting, through the first worker thread, the data backup instruction for backing up designated-area data to the GPU-side graphics card driver includes:
when there are multiple data backup instructions, backing up, through the first worker thread, the multiple pieces of designated-area data corresponding to the multiple data backup instructions from the GPU-side video memory to the CPU-side memory in the order in which the instructions were submitted.
Optionally, the data processing method further includes:
deleting the designated-area data that has been stored longest in the CPU-side memory when the CPU-side memory reaches its storage upper limit.
Optionally, the data processing method further includes:
receiving a data rollback instruction from the GPU side, wherein the data rollback instruction comprises an instruction to roll back designated-area data in the CPU-side memory;
and rolling back, according to the data rollback instruction, the designated-area data in the CPU-side memory into the GPU-side video memory.
Optionally, the first worker thread and the second worker thread are the same worker thread or different worker threads.
According to a second aspect of embodiments herein, there is provided a data processing apparatus comprising:
a submitting module configured to submit, through a first worker thread, a data backup instruction for backing up designated-area data to a GPU-side graphics card driver;
a query module configured to query, through a second worker thread, the execution state of the data backup instruction in the GPU-side graphics card driver;
and a releasing module configured to release the storage space occupied by the designated-area data in the GPU-side video memory when the query shows that the GPU-side graphics card driver has completed the data backup instruction.
Optionally, the submitting module is further configured to:
when there are multiple data backup instructions, back up, through the first worker thread, the multiple pieces of designated-area data corresponding to the multiple data backup instructions from the GPU-side video memory to the CPU-side memory in the order in which the instructions were submitted.
Optionally, the submitting module is further configured to:
delete the designated-area data that has been stored longest in the CPU-side memory when the CPU-side memory reaches its storage upper limit.
Optionally, the data processing apparatus further comprises:
a receiving module configured to receive a data rollback instruction from the GPU side, wherein the data rollback instruction comprises an instruction to roll back designated-area data in the CPU-side memory;
and a rollback module configured to roll back, according to the data rollback instruction, the designated-area data in the CPU-side memory into the GPU-side video memory.
Optionally, the first worker thread and the second worker thread are the same worker thread or different worker threads.
According to a third aspect of embodiments herein, there is provided a computing device comprising a memory, a processor and computer instructions stored on the memory and executable on the processor, the processor implementing the steps of the data processing method when executing the instructions.
According to a fourth aspect of embodiments herein, there is provided a computer readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the data processing method.
In the embodiments of the present specification, a data backup instruction for backing up designated-area data is submitted to a GPU-side graphics card driver through a first worker thread, the execution state of the data backup instruction in the GPU-side graphics card driver is queried through a second worker thread, and the storage space occupied by the designated-area data in the GPU-side video memory is released when the query shows that the GPU-side graphics card driver has completed the data backup instruction. Releasing the GPU-side video memory in this way is convenient and fast, supports the storage of a large amount of rollback data, and, by storing the data in the CPU-side memory, improves the retention time and update frequency of the data, ensuring efficient rollback operations.
Drawings
FIG. 1 is a block diagram of a computing device provided by an embodiment of the present application;
FIG. 2 is a flow chart of a data processing method provided by an embodiment of the present application;
FIG. 3 is an interaction diagram of a data processing method provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of a data processing method provided by an embodiment of the present application;
fig. 5 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. The present application may, however, be implemented in many ways other than those described herein, and those skilled in the art can make similar modifications without departing from the spirit of the application; the application is therefore not limited to the specific implementations disclosed below.
The terminology used in the description of the one or more embodiments is for the purpose of describing the particular embodiments only and is not intended to be limiting of the description of the one or more embodiments. As used in one or more embodiments of the present specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present specification refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It will be understood that, although the terms first, second, etc. may be used in one or more embodiments herein to describe various information, the information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of one or more embodiments of the present specification, first information could also be referred to as second information and, similarly, second information could be referred to as first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
First, the terms used in one or more embodiments of the present specification are explained.
Rollback: the act of restoring a program or data to the last correct state after a program error or data processing error; rollback includes program rollback and data rollback.
In the present application, a data processing method, an apparatus, a computing device, and a storage medium are provided, and detailed descriptions are individually provided in the following embodiments.
FIG. 1 shows a block diagram of a computing device 100, according to an embodiment of the present description. The components of the computing device 100 include, but are not limited to, memory 110 and processor 120. The processor 120 is coupled to the memory 110 via a bus 130 and a database 150 is used to store data.
Computing device 100 also includes access device 140, access device 140 enabling computing device 100 to communicate via one or more networks 160. Examples of such networks include the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), or a combination of communication networks such as the internet. Access device 140 may include one or more of any type of network interface (e.g., a Network Interface Card (NIC)), whether wired or wireless, such as an IEEE 802.11 Wireless Local Area Network (WLAN) wireless interface, a Worldwide Interoperability for Microwave Access (WiMAX) interface, an Ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth interface, a Near Field Communication (NFC) interface, and so forth.
In one embodiment of the present description, the above-described components of computing device 100 and other components not shown in FIG. 1 may also be connected to each other, such as by a bus. It should be understood that the block diagram of the computing device architecture shown in FIG. 1 is for purposes of example only and is not limiting as to the scope of the description. Those skilled in the art may add or replace other components as desired.
Computing device 100 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), a mobile phone (e.g., smartphone), a wearable computing device (e.g., smartwatch, smartglasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 100 may also be a mobile or stationary server.
Wherein the processor 120 may perform the steps of the data processing method shown in fig. 2. Fig. 2 shows a flow chart of a data processing method according to an embodiment of the present description, including step 202 to step 206.
Step 202: submit, through a first worker thread, a data backup instruction for backing up designated-area data to a GPU-side graphics card driver.
In an embodiment of this specification, submitting, through the first worker thread, the data backup instruction for backing up designated-area data to the GPU-side graphics card driver includes:
when there are multiple data backup instructions, backing up, through the first worker thread, the multiple pieces of designated-area data corresponding to the multiple data backup instructions from the GPU-side video memory to the CPU-side memory in the order in which the instructions were submitted.
Specifically, before an editing operation is executed, the editing system submits, through the first worker thread, a data backup instruction for backing up designated-area data to the GPU-side graphics card driver; when there are multiple data backup instructions, the editing system calls the GPU-side graphics card driver to execute them in the order in which they were submitted. For example, suppose a circle and then a line are drawn in computer drawing software. Before the circle and the line are drawn, the data already present in a first area corresponding to the circle and in a second area corresponding to the line must be backed up; following the drawing order, the backup instruction for the pre-existing data in the first area is submitted to the GPU-side graphics card driver first, and the backup instruction for the pre-existing data in the second area is submitted afterwards.
The editing system may be, for example, a graphics processing system; this specification does not limit it.
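The patent does not name a specific graphics API or driver interface. As a purely illustrative sketch, assuming a CUDA-style runtime where each designated area is a separately allocated device buffer, the first worker thread's ordered submission of backup instructions might look as follows; the struct, stream, and event names are assumptions, not the patent's API.

// Sketch (assumption: CUDA-style driver interface; names are illustrative).
// The first worker thread submits backup "instructions" as asynchronous
// device-to-host copies, in submission order, on one stream, and records an
// event after each so another thread can poll for completion.
#include <cstddef>
#include <cuda_runtime.h>

struct BackupJob {
    void*       gpuRegion;   // designated-area data in GPU-side video memory
    void*       cpuBackup;   // CPU-side memory receiving the backup
    std::size_t bytes;
    cudaEvent_t done;        // created with cudaEventCreate beforehand
};

void submitBackups(cudaStream_t stream, BackupJob* jobs, int count) {
    for (int i = 0; i < count; ++i) {           // submission order is preserved
        cudaMemcpyAsync(jobs[i].cpuBackup, jobs[i].gpuRegion, jobs[i].bytes,
                        cudaMemcpyDeviceToHost, stream);
        cudaEventRecord(jobs[i].done, stream);  // marks "this backup completed"
    }
}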
Step 204: query, through a second worker thread, the execution state of the data backup instruction in the GPU-side graphics card driver.
In an embodiment of the present specification, the first worker thread and the second worker thread are the same worker thread or different worker threads.
Specifically, when the first worker thread and the second worker thread are the same worker thread, after the editing system submits the data backup instruction for backing up designated-area data to the GPU-side graphics card driver through the first worker thread, that same thread then queries the execution state of the data backup instruction in the GPU-side graphics card driver, performing the work in sequence.
When the first worker thread and the second worker thread are different worker threads, the editing system submits the data backup instruction to the GPU-side graphics card driver through the first worker thread while the second worker thread continuously queries the execution state of the data backup instruction in the GPU-side graphics card driver; the two threads work concurrently.
Specifically, the editing system queries, through the worker thread, the execution state of the data backup instruction in the GPU-side graphics card driver; the execution state may be that the GPU-side graphics card driver is executing the data backup instruction, has not yet executed it, or has finished executing it.
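Continuing the same illustrative CUDA-style assumptions as the sketch above, the second worker thread's query can be sketched as polling the event recorded after the backup copy: cudaEventQuery returns cudaErrorNotReady while the copy is still executing and cudaSuccess once it has completed, and whether the instruction was submitted at all distinguishes the "not yet executed" case.

// Sketch: the second worker thread polls the execution state of one backup.
#include <cuda_runtime.h>

enum class BackupState { NotStarted, Executing, Completed };

BackupState queryBackupState(cudaEvent_t backupDone, bool submitted) {
    if (!submitted) return BackupState::NotStarted;
    // cudaSuccess means all work recorded before the event has finished.
    return (cudaEventQuery(backupDone) == cudaSuccess) ? BackupState::Completed
                                                       : BackupState::Executing;
}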
Step 206: when the query shows that the GPU-side graphics card driver has completed the data backup instruction, release the storage space occupied by the designated-area data in the GPU-side video memory.
Specifically, when the editing system's query shows that the GPU-side graphics card driver has completed the data backup instruction, the designated-area data has been backed up from the GPU-side video memory to the CPU-side memory, so the storage space occupied by the designated-area data in the GPU-side video memory can be released. This improves the fluency of the GPU-side graphics card driver's execution and ensures the data update frequency.
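Under the same illustrative assumptions (each designated area being a separately allocated device buffer), releasing the area's storage once the backup instruction has completed is a single free of that buffer; this is a sketch, not the patent's implementation.

// Sketch: release the designated area's GPU video memory once backed up.
#include <cuda_runtime.h>

void releaseIfBackedUp(void* gpuRegion, cudaEvent_t backupDone) {
    if (cudaEventQuery(backupDone) == cudaSuccess) {
        cudaFree(gpuRegion);   // free the designated-area storage on the GPU
    }
}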
In an embodiment of this specification, the data processing method further includes:
deleting the designated-area data that has been stored longest in the CPU-side memory when the CPU-side memory reaches its storage upper limit.
Specifically, the storage space of the CPU-side memory may be set according to the memory occupancy of the editing system, or no upper limit on the storage space may be set; it is generally recommended to build the program as 64-bit and provide sufficient memory, and this is not limited here.
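A minimal sketch of this optional behaviour, assuming the CPU-side rollback data is kept in an ordinary host container with a configurable byte limit (the container and the limit are illustrative assumptions, not specified by the patent): when the limit is exceeded, the entry stored longest is deleted first.

// Sketch: CPU-side rollback store with a storage upper limit; oldest first.
#include <cstddef>
#include <cstdint>
#include <deque>
#include <vector>

class RollbackStore {
    std::deque<std::vector<std::uint8_t>> entries_;  // oldest entry at the front
    std::size_t bytesUsed_ = 0;
    std::size_t bytesLimit_;                         // storage upper limit
public:
    explicit RollbackStore(std::size_t limit) : bytesLimit_(limit) {}

    void add(std::vector<std::uint8_t> data) {
        bytesUsed_ += data.size();
        entries_.push_back(std::move(data));
        while (bytesUsed_ > bytesLimit_ && !entries_.empty()) {
            bytesUsed_ -= entries_.front().size();   // evict the longest-stored data
            entries_.pop_front();
        }
    }
};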
In an embodiment of this specification, the data processing method further includes:
receiving a data rollback instruction from the GPU side, wherein the data rollback instruction comprises an instruction to roll back designated-area data in the CPU-side memory;
and rolling back, according to the data rollback instruction, the designated-area data in the CPU-side memory into the GPU-side video memory.
Specifically, when the editing system performs an undo operation, it needs to roll back to the previous operation; upon receiving the data rollback instruction from the GPU side, the data to be rolled back as specified in the instruction is copied from the CPU-side memory back into the GPU-side video memory.
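Under the same illustrative CUDA-style assumptions as the earlier sketches, handling the rollback instruction amounts to copying the backed-up data from CPU-side memory back into GPU-side video memory; since the original device storage was released in step 206, the sketch reallocates the area first.

// Sketch: roll designated-area data back from CPU-side memory to the GPU.
#include <cstddef>
#include <cuda_runtime.h>

void* rollbackRegion(const void* cpuBackup, std::size_t bytes) {
    void* gpuRegion = nullptr;
    cudaMalloc(&gpuRegion, bytes);                            // reallocate the area
    cudaMemcpy(gpuRegion, cpuBackup, bytes, cudaMemcpyHostToDevice);
    return gpuRegion;                                         // restored data
}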
In the embodiments of the present specification, a data backup instruction for backing up designated-area data is submitted to a GPU-side graphics card driver through a first worker thread, the execution state of the data backup instruction in the GPU-side graphics card driver is queried through a second worker thread, and the storage space occupied by the designated-area data in the GPU-side video memory is released when the query shows that the GPU-side graphics card driver has completed the data backup instruction. Releasing the GPU-side video memory in this way is convenient and fast and supports the storage of a large amount of rollback data; because the CPU-side memory is much larger than the GPU-side video memory, far more data can be stored on the CPU side, so storing the data in the CPU-side memory improves the retention time and update frequency of the data and ensures efficient rollback operations.
Fig. 3 shows an interaction diagram of a data processing method according to an embodiment of the present specification, and fig. 4 shows a diagram of a data processing method according to an embodiment of the present specification.
Referring to fig. 3, the data processing method includes steps 302 to 318 and involves a GPU side, an editing system, and a CPU side, where the editing system is a graphics processing system, the GPU side includes video memory, and the CPU side includes memory.
Step 302: the graphics processing system submits, through the first worker thread, a data backup instruction for backing up designated-area data to the GPU-side graphics card driver.
Referring to fig. 4, the GPU-side video memory contains designated-area data 1 and designated-area data 2, and the size of the CPU-side memory space may be set according to the memory occupancy of the graphics processing system.
Specifically, before the graphics processing system performs an operation on the GPU side, it sends data backup instructions to the GPU-side graphics card driver through the first worker thread, where the instructions include a first data backup instruction for backing up designated-area data 1 and a second data backup instruction for backing up designated-area data 2. The graphics processing system submits the first data backup instruction to the GPU-side graphics card driver first and then the second.
Step 304: the GPU-side graphics card driver executes the data backup instructions.
Step 306: the graphics processing system queries, through the second worker thread, the execution state of the data backup instructions in the GPU-side graphics card driver.
Specifically, the GPU-side graphics card driver executes the first and second data backup instructions, executing the first one preferentially according to the submission order; while the instructions are being executed, the graphics processing system continuously queries, through the second worker thread, the execution states of the first and second data backup instructions in the driver.
If the GPU-side graphics card driver has not yet completed a data backup instruction, the graphics processing system keeps waiting until the driver has completed it.
When the graphics processing system's query shows that the GPU-side graphics card driver has completed a data backup instruction, the possible cases are: the driver has completed the first data backup instruction but not the second, or the driver has completed all data backup instructions.
Step 308: the graphics processing system's query shows that the GPU-side graphics card driver has completed a data backup instruction.
Step 310: the graphics processing system backs up the designated-area data to the CPU-side memory.
Step 312: the graphics processing system releases the storage space occupied by the designated-area data in the GPU-side video memory.
In practical application, the order of step 310 and step 312 is not limited: the graphics processing system may back up the designated-area data from the GPU-side video memory to the CPU-side memory and release the storage space of that data in the GPU-side video memory at the same time, or it may first back the data up from the GPU-side video memory to the CPU-side memory and then release the storage space of that data in the GPU-side video memory.
Referring to fig. 4, designated-area data 1 and designated-area data 2 in the GPU-side video memory are backed up to the CPU-side memory.
Specifically, when the graphics processing system's query shows that the GPU-side graphics card driver has completed the first data backup instruction for designated-area data 1 but has not completed the second data backup instruction for designated-area data 2, the graphics processing system releases the storage space of designated-area data 1 on the GPU side, the CPU-side memory holds the backup of designated-area data 1, and the graphics processing system continues to wait for the driver to finish executing the second data backup instruction.
When the query shows that the GPU-side graphics card driver has completed both the first data backup instruction for designated-area data 1 and the second data backup instruction for designated-area data 2, the graphics processing system, following the submission order, releases the storage space of designated-area data 1 on the GPU side first and then that of designated-area data 2, and the CPU-side memory backs up designated-area data 1 before designated-area data 2.
Step 314: the graphics processing system sends a data rollback instruction to the CPU side.
Step 316: the graphics processing system receives, from the CPU-side memory, the designated-area data corresponding to the data rollback instruction.
Referring to fig. 4, the CPU-side memory contains designated-area data 3 and designated-area data 4, which were previously backed up from designated areas of the GPU-side video memory.
Specifically, when a misoperation occurs, the graphics processing system needs to roll back to the previous step to obtain the earlier data. It sends a data rollback instruction to the CPU side; the instruction includes a first data rollback instruction for rolling back designated-area data 3 and a second data rollback instruction for rolling back designated-area data 4. Designated-area data 3, corresponding to the first rollback instruction, is rolled back preferentially, and the graphics processing system receives designated-area data 3 and designated-area data 4 from the CPU-side memory.
Step 318: the graphics processing system rolls back the designated-area data from the CPU-side memory into the GPU-side video memory.
Specifically, according to the undo operation, the graphics processing system rolls back designated-area data 3, corresponding to the first data rollback instruction, and designated-area data 4, corresponding to the second data rollback instruction, from the CPU-side memory into the GPU-side video memory, and the storage space occupied by designated-area data 3 and designated-area data 4 in the CPU-side memory is then released.
In the embodiments of the present specification, the method is used to release the storage space of the GPU-side video memory and to roll data back; it is convenient and fast, supports the storage of a large amount of rollback data, and, by storing the data in the CPU-side memory, improves the retention time and update frequency of the data, ensuring efficient rollback operations.
Corresponding to the above method embodiments, the present specification further provides data processing apparatus embodiments, and fig. 5 shows a schematic structural diagram of a data processing apparatus according to an embodiment of the present specification. As shown in fig. 5, the apparatus 500 includes:
a submitting module 502 configured to submit, through a first worker thread, a data backup instruction for backing up designated-area data to a GPU-side graphics card driver;
a query module 504 configured to query, through a second worker thread, the execution state of the data backup instruction in the GPU-side graphics card driver;
a releasing module 506 configured to release the storage space occupied by the designated-area data in the GPU-side video memory when the query shows that the GPU-side graphics card driver has completed the data backup instruction.
In an optional embodiment, the submitting module 502 is further configured to:
when there are multiple data backup instructions, back up, through the first worker thread, the multiple pieces of designated-area data corresponding to the multiple data backup instructions from the GPU-side video memory to the CPU-side memory in the order in which the instructions were submitted.
In an optional embodiment, the submitting module 502 is further configured to:
delete the designated-area data that has been stored longest in the CPU-side memory when the CPU-side memory reaches its storage upper limit.
In an optional embodiment, the data processing apparatus further comprises:
a receiving module configured to receive a data rollback instruction from the GPU side, wherein the data rollback instruction comprises an instruction to roll back designated-area data in the CPU-side memory;
and a rollback module configured to roll back, according to the data rollback instruction, the designated-area data in the CPU-side memory into the GPU-side video memory.
In an optional embodiment, the first worker thread and the second worker thread are the same worker thread or different worker threads.
In the above embodiments, a data backup instruction for backing up designated-area data is submitted to the GPU-side graphics card driver through a first worker thread, the execution state of the data backup instruction in the GPU-side graphics card driver is queried through a second worker thread, and the storage space occupied by the designated-area data in the GPU-side video memory is released when the query shows that the GPU-side graphics card driver has completed the data backup instruction. Releasing the GPU-side video memory in this way is convenient and fast, supports the storage of a large amount of rollback data, and, by storing the data in the CPU-side memory, improves the retention time and update frequency of the data, ensuring efficient rollback operations.
There is also provided in an embodiment of the present specification a computing device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, the processor implementing the steps of the data processing method when executing the instructions.
An embodiment of the present application further provides a computer readable storage medium, which stores computer instructions, and the instructions, when executed by a processor, implement the steps of the data processing method as described above.
The above is an illustrative scheme of a computer-readable storage medium of the present embodiment. It should be noted that the technical solution of the storage medium belongs to the same concept as the technical solution of the data processing method, and details that are not described in detail in the technical solution of the storage medium can be referred to the description of the technical solution of the data processing method.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The computer instructions comprise computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, computer memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
It should be noted that, for the sake of simplicity, the above-mentioned method embodiments are described as a series of acts or combinations, but those skilled in the art should understand that the present application is not limited by the described order of acts, as some steps may be performed in other orders or simultaneously according to the present application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The preferred embodiments of the present application disclosed above are intended only to aid in the explanation of the application. Alternative embodiments are not exhaustive and do not limit the invention to the precise embodiments described. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the application and the practical application, to thereby enable others skilled in the art to best understand and utilize the application. The application is limited only by the claims and their full scope and equivalents.

Claims (10)

1. A data processing method, comprising:
submitting, through a first worker thread, a data backup instruction for backing up designated-area data to a GPU-side graphics card driver;
querying, through a second worker thread, the execution state of the data backup instruction in the GPU-side graphics card driver;
when the query shows that the GPU-side graphics card driver has completed the data backup instruction, releasing the storage space occupied by the designated-area data in the GPU-side video memory;
receiving a data rollback instruction from the GPU side, wherein the data rollback instruction comprises an instruction to roll back the designated-area data in a CPU-side memory;
and rolling back, according to the data rollback instruction, the designated-area data in the CPU-side memory into the GPU-side video memory, and releasing the storage space occupied by the designated-area data in the CPU-side memory.
2. The method of claim 1, wherein submitting, through the first worker thread, the data backup instruction for backing up the designated-area data to the GPU-side graphics card driver comprises:
when there are multiple data backup instructions, backing up, through the first worker thread, the multiple pieces of designated-area data corresponding to the multiple data backup instructions from the GPU-side video memory to the CPU-side memory in the order in which the instructions were submitted.
3. The method of claim 2, further comprising:
deleting the designated-area data that has been stored longest in the CPU-side memory when the CPU-side memory reaches its storage upper limit.
4. The method of claim 1, wherein the first worker thread and the second worker thread are the same worker thread or different worker threads.
5. A data processing apparatus, comprising:
a submitting module configured to submit, through a first worker thread, a data backup instruction for backing up designated-area data to a GPU-side graphics card driver;
a query module configured to query, through a second worker thread, the execution state of the data backup instruction in the GPU-side graphics card driver;
a releasing module configured to release the storage space occupied by the designated-area data in the GPU-side video memory when the query shows that the GPU-side graphics card driver has completed the data backup instruction;
a receiving module configured to receive a data rollback instruction from the GPU side, wherein the data rollback instruction comprises an instruction to roll back designated-area data in a CPU-side memory;
and a rollback module configured to roll back, according to the data rollback instruction, the designated-area data in the CPU-side memory into the GPU-side video memory, and to release the storage space occupied by the designated-area data in the CPU-side memory.
6. The apparatus of claim 5, wherein the submitting module is further configured to:
when there are multiple data backup instructions, back up, through the first worker thread, the multiple pieces of designated-area data corresponding to the multiple data backup instructions from the GPU-side video memory to the CPU-side memory in the order in which the instructions were submitted.
7. The apparatus of claim 6, wherein the submitting module is further configured to:
delete the designated-area data that has been stored longest in the CPU-side memory when the CPU-side memory reaches its storage upper limit.
8. The apparatus of claim 5, wherein the first worker thread and the second worker thread are the same worker thread or different worker threads.
9. A computing device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, wherein the processor implements the steps of the method of any one of claims 1-4 when executing the instructions.
10. A computer-readable storage medium storing computer instructions, which when executed by a processor, perform the steps of the method of any one of claims 1 to 4.
CN201910167772.7A 2019-03-06 2019-03-06 Data processing method and device, computing equipment and storage medium Active CN109918233B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910167772.7A CN109918233B (en) 2019-03-06 2019-03-06 Data processing method and device, computing equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910167772.7A CN109918233B (en) 2019-03-06 2019-03-06 Data processing method and device, computing equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109918233A CN109918233A (en) 2019-06-21
CN109918233B true CN109918233B (en) 2021-02-26

Family

ID=66963573

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910167772.7A Active CN109918233B (en) 2019-03-06 2019-03-06 Data processing method and device, computing equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109918233B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114217976B (en) * 2021-12-23 2023-02-28 北京百度网讯科技有限公司 Task processing method, device, equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102810199A (en) * 2012-06-15 2012-12-05 成都平行视野科技有限公司 Image processing method based on GPU (Graphics Processing Unit)
CN105955687A (en) * 2016-04-29 2016-09-21 华为技术有限公司 Image processing method, apparatus and system
CN106293953A (en) * 2015-06-08 2017-01-04 龙芯中科技术有限公司 A kind of method and system accessing shared video data
CN107506261A (en) * 2017-08-01 2017-12-22 北京丁牛科技有限公司 Adapt to the cascade fault-tolerance processing method of CPU, GPU isomeric group

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9129416B2 (en) * 2012-11-14 2015-09-08 Microsoft Technology Licensing, Llc Digital art undo and redo
CN105658146B (en) * 2013-10-24 2019-04-02 佳能株式会社 Information processing unit, information processing method and control device
CN107180405A (en) * 2016-03-10 2017-09-19 阿里巴巴集团控股有限公司 A kind of image processing method, device and intelligent terminal
WO2017166272A1 (en) * 2016-04-01 2017-10-05 Intel Corporation Method and apparatus of periodic snapshotting in graphics processing environment
CN106991007B (en) * 2017-03-31 2019-09-03 青岛大学 A kind of data processing method and equipment based on GPU on piece
CN109448078B (en) * 2018-10-19 2022-11-04 珠海金山数字网络科技有限公司 Image editing system, method and equipment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102810199A (en) * 2012-06-15 2012-12-05 成都平行视野科技有限公司 Image processing method based on GPU (Graphics Processing Unit)
CN106293953A (en) * 2015-06-08 2017-01-04 龙芯中科技术有限公司 A kind of method and system accessing shared video data
CN105955687A (en) * 2016-04-29 2016-09-21 华为技术有限公司 Image processing method, apparatus and system
CN107506261A (en) * 2017-08-01 2017-12-22 北京丁牛科技有限公司 Adapt to the cascade fault-tolerance processing method of CPU, GPU isomeric group

Also Published As

Publication number Publication date
CN109918233A (en) 2019-06-21

Similar Documents

Publication Publication Date Title
CN109271450B (en) Database synchronization method, device, server and storage medium
CN106610876B (en) Data snapshot recovery method and device
US8516210B2 (en) Application consistent snapshots of a shared volume
US20140317369A1 (en) Snapshot creation from block lists
CN105955843B (en) A kind of method and apparatus for database recovery
CN111767047A (en) Micro-service component management method and device
US20130297577A1 (en) Database element versioning system and method
CN114661248B (en) Data processing method and device
CN109918233B (en) Data processing method and device, computing equipment and storage medium
CN111666266A (en) Data migration method and related equipment
CN106874343B (en) Data deletion method and system for time sequence database
CN115617571A (en) Data backup method, device, system, equipment and storage medium
CN109829678B (en) Rollback processing method and device and electronic equipment
CN116610636A (en) Data processing method and device of file system, electronic equipment and storage medium
CN116541469A (en) Method, device, equipment and storage medium for realizing data synchronization
US8726147B1 (en) Systems and methods for restoring web parts in content management systems
US8346723B2 (en) Consolidation of patch transformations for database replication
US20230281187A1 (en) Method for keeping data consistent across different storage systems, computing device, and storage medium
CN115858486A (en) Data processing method and related equipment
CN113835835B (en) Method, device and computer readable storage medium for creating consistency group
CN109918178B (en) Transaction submitting method and related device
CN111158871B (en) Data packet concurrent request processing method and device, electronic equipment and storage medium
CN111078641B (en) File processing method and device and electronic equipment
CN112546617B (en) Task processing method and device
CN111368250A (en) Data processing system, method and device based on Fourier transform/inverse transform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 519000 Room 102, 202, 302 and 402, No. 325, Qiandao Ring Road, Tangjiawan Town, high tech Zone, Zhuhai City, Guangdong Province, Room 102 and 202, No. 327 and Room 302, No. 329

Patentee after: Zhuhai Jinshan Digital Network Technology Co.,Ltd.

Address before: Room 102, Room 202, Room 302, Room 402, Room 327, Room 102, Room 202, Room 329, Room 302, No. 325, Qiandao Ring Road, Tangjiawan Town, High-tech Zone

Patentee before: ZHUHAI KINGSOFT ONLINE GAME TECHNOLOGY Co.,Ltd.