CN115953445A - Texture data processing method and device - Google Patents


Info

Publication number
CN115953445A
CN115953445A
Authority
CN
China
Prior art keywords
texture data
data
storage location
texture
buffered
Prior art date
Legal status
Pending
Application number
CN202211637398.0A
Other languages
Chinese (zh)
Inventor
贾辰
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority application: CN202211637398.0A
Publication of CN115953445A

Abstract

The present disclosure provides a texture data processing method and apparatus, relating to the technical field of artificial intelligence, in particular to augmented reality, virtual reality, and computer vision, and applicable to scenes such as the metaverse and virtual digital humans. The scheme comprises the following steps: binding a first storage location of an image processor (GPU) of the texture data processing apparatus with a second storage location of a central processor (CPU) of the texture data processing apparatus, such that texture data stored in the first storage location is associated with buffered data stored in the second storage location; writing first texture data to be processed into the first storage location to generate first buffered data associated with the first texture data in the second storage location; and in response to determining that the first buffered data has been edited into second buffered data, obtaining second texture data currently stored in the first storage location as a processing result.

Description

Texture data processing method and device
Technical Field
The present disclosure relates to the field of artificial intelligence technologies, in particular to augmented reality, virtual reality, and computer vision, applicable to scenes such as the metaverse and virtual digital humans, and specifically to a texture data processing method and apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
Background
Some cloud vendors currently provide services such as virtual makeup try-on or beautification to customers. Existing beautification/virtual makeup schemes perform algorithmic recognition on an image supplied by the cloud vendor and then beautify the original image according to the recognition result. At present, a cloud vendor can provide texture data captured by a camera, and a downstream texture data processing apparatus can edit that texture data: the apparatus performs operations such as face recognition on the image to be processed, and secondary features such as beauty filters are then developed on top of the landmark (point location) results of the recognition algorithm.
This model presents challenges when a cloud vendor provides only a texture data interface. Because texture data is an image handle on the GPU, it can be used only within the current context and cannot be freely accessed like data in memory. Texture data in the GPU therefore needs to be converted into image buffer data in memory, and the buffer data must be imported back into the GPU after drawing. This data conversion between the memory unit and the GPU is generally time-consuming, making it difficult to meet the requirements of high-frame-rate scenes such as live streaming.
The approaches described in this section are not necessarily approaches that have been previously conceived or pursued. Unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, unless otherwise indicated, the problems mentioned in this section should not be considered as having been acknowledged in any prior art.
Disclosure of Invention
The disclosure provides a texture data processing method and device, an electronic device, a computer readable storage medium and a computer program product.
According to an aspect of the present disclosure, there is provided a texture data processing method applied to a texture data processing apparatus, the method comprising: binding a first storage location of an image processor of the texture data processing apparatus with a second storage location of a central processor of the texture data processing apparatus, such that texture data stored in the first storage location is associated with buffered data stored in the second storage location; writing first texture data to be processed into the first storage location to generate first buffered data associated with the first texture data in the second storage location; and in response to determining that the first buffered data has been edited into second buffered data, obtaining second texture data currently stored in the first storage location as a processing result.
According to another aspect of the present disclosure, there is provided a texture data processing apparatus comprising: a first binding module configured to bind a first storage location of an image processor of the texture data processing apparatus with a second storage location of a central processor of the texture data processing apparatus, such that texture data stored in the first storage location is associated with buffered data stored in the second storage location; a first write module configured to write first texture data to be processed into the first storage location to generate first buffered data associated with the first texture data in the second storage location; and an obtaining module configured to obtain, as a processing result, second texture data currently stored in the first storage location in response to determining that the first buffered data has been edited into second buffered data.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method described above.
According to yet another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the above-described method.
According to yet another aspect of the disclosure, a computer program product is provided, comprising a computer program, wherein the computer program realizes the above-mentioned method when executed by a processor.
According to one or more embodiments of the present disclosure, a first storage location of an image processor of a texture data processing apparatus is bound to a second storage location of the central processor, such that texture data stored in the first storage location is associated with buffered data stored in the second storage location, and when the buffered data stored in the second storage location is edited, the texture data stored in the first storage location changes accordingly. In the end, only the texture data stored in the first storage location needs to be output; the method of this embodiment requires no conversion between texture data and buffer data, thereby improving the speed of texture data processing.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the present disclosure and, together with the description, serve to explain exemplary implementations of the embodiments. The illustrated embodiments are for purposes of example only and do not limit the scope of the claims. Throughout the drawings, identical reference numbers designate similar, but not necessarily identical, elements.
FIG. 1 shows a schematic configuration diagram of a texture data processing system according to the related art;
FIG. 2 shows a flow diagram of a method of processing texture data according to an embodiment of the present disclosure;
FIG. 3 shows a block diagram of a texture data processing system according to an embodiment of the present disclosure;
FIG. 4 illustrates a flow chart of a method of binding storage locations according to the present disclosure;
FIG. 5 shows a flow diagram of a method of writing first texture data into a first storage location, according to an embodiment of the present disclosure;
FIG. 6 shows a flow diagram of a method of processing texture data according to another embodiment of the present disclosure;
FIG. 7 is a schematic diagram of a texture data processing apparatus according to an embodiment of the present disclosure;
FIG. 8 shows a flow diagram of a method of generating third buffered data based on second texture data in accordance with an embodiment of the present disclosure;
FIG. 9 shows a block diagram of a texture data processing apparatus according to an embodiment of the present disclosure;
FIG. 10 is a block diagram showing a configuration of a texture data processing apparatus according to another embodiment of the present disclosure; and
FIG. 11 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the present disclosure, unless otherwise specified, the use of the terms "first", "second", and the like to describe various elements is not intended to limit the positional relationship, the temporal relationship, or the importance relationship of the elements, and such terms are used only to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, based on the context, they may also refer to different instances.
The terminology used in the description of the various examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, if the number of elements is not specifically limited, the elements may be one or more. Furthermore, the term "and/or" as used in this disclosure is intended to encompass any and all possible combinations of the listed items.
Before describing the solution of the present application in detail, a brief description will be given of the related art.
Texture data typically resides in the GPU (image processor) of a computer. Because texture data cannot be transmitted directly between texture data processing apparatuses (for example, computers), data editing such as applying an algorithm cannot be performed on it directly. Therefore, when performing interaction or data editing, texture data stored in the GPU (image processor) must be converted into binary buffer data in the CPU (central processing unit), and the interaction or editing is performed through that buffer data.
FIG. 1 shows a schematic configuration diagram of a texture data processing system 100 according to the related art. As shown in FIG. 1, the texture data processing system 100 includes a server 110 and a texture data processing device 120. The texture data processing device 120 first receives first texture data to be processed (texture data 1 in the figure), which it can receive from the associated server 110. The server 110 may be, for example, a server of a cloud vendor that sells makeup products, clothing, and the like online; the cloud vendor's server provides online services such as virtual makeup try-on and beautification to the user, and the server 110 may edit a face picture uploaded by the user. As shown in FIG. 1, the cloud vendor server 110 acquires an image to be processed using the image acquiring unit 111, and the scheduling unit 112 converts the image to be processed into first texture data to be processed and transmits it to the associated texture data processing device 120.
The texture data processing apparatus 120 first shares a context with the server 110 through the sharing unit 121 to ensure that the texture data is edited under the same OpenGL version. Because the beautification or virtual makeup application cannot apply its algorithm directly to texture data, the texture data processing device 120 converts the first texture data into first buffer data. The application edits the first buffer data into second buffer data, and the conversion unit 124 then converts the second buffer data back into second texture data, thereby realizing the operation on the first texture data. The conversion unit 124 may include an associated texture rendering engine. The resulting second texture data is sent back to the server 110, and the server 110 performs an on-screen or stream-pushing operation via the on-screen/stream-pushing unit 113.
In the related art, however, the cloud vendor server needs to perform multiple virtual makeup editing operations on the same image, so the conversion between texture data and buffer data may have to be repeated, especially when the texture data is modified many times in real time (for example, in a live-streaming scene). Because data conversion between the GPU and the CPU is complex and time-consuming, such frequent conversion limits the speed of texture data processing. In addition, the texture data processing apparatus 120 in FIG. 1 must additionally provide a sharing unit 121 to share the context of the server 110; otherwise, the OpenGL version of the server 110 may be incompatible with that of the texture data processing apparatus 120.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings. FIG. 2 shows a flow diagram of a method 200 of processing texture data according to an embodiment of the present disclosure. The processing method 200 may be applied to a texture data processing apparatus for receiving first texture data to be processed, for example, texture data of a face image of a user.
As shown in fig. 2, the method 200 includes:
step 210, binding a first storage location of an image processor of the texture data processing apparatus with a second storage location of a central processor of the texture data processing apparatus, so that texture data stored in the first storage location is associated with buffer data stored in the second storage location;
step 220, writing the first texture data to be processed into the first storage location to generate first buffer data associated with the first texture data in the second storage location; and
step 230, in response to determining that the first buffered data has been edited into second buffered data, obtaining second texture data currently stored in the first storage location as a processing result.
The method of this embodiment of the disclosure binds a first storage location of an image processor of a texture data processing device with a second storage location of a central processing unit, so that texture data stored in the first storage location is associated with buffer data stored in the second storage location, and when the buffer data stored in the second storage location is edited, the texture data stored in the first storage location changes correspondingly. In the end, only the texture data stored in the first storage location needs to be output; the method requires no conversion between texture data and buffer data, thereby improving the speed of texture data processing.
In addition, since the finally output texture data is directly obtained from the first storage location of the GPU, the method of the present disclosure may obtain the second texture data having the same OpenGL version as the first texture data to be processed without the aid of the sharing unit 121 of the texture data processing device 120.
Since there is a corresponding relationship between the data in the GPU (image processor) and the CPU (central processing unit) in the computer as the texture data processing device, in step 210, after the first storage location of the image processor is bound to the second storage location of the central processing unit of the texture data processing device, the texture data stored in the first storage location and the buffer data stored in the second storage location will be associated. When the buffered data stored in the second storage location is edited, the texture data stored in the first storage location is changed accordingly.
FIG. 3 shows a schematic diagram of a texture data processing system 300 according to an embodiment of the present disclosure, including a server 310 and a texture data processing device 320. The server 310 is similar to the server 110 shown in FIG. 1 and includes an image obtaining unit 311, a scheduling unit 312, and an on-screen/stream-pushing unit 313, which are not described again here. In the texture data processing device 320, the first storage location 321 in the GPU and the second storage location 322 in the CPU are bound to each other, so the buffer data stored in the second storage location 322 of the CPU and the texture data stored in the first storage location 321 of the GPU are linked: when the buffer data stored in the second storage location 322 is edited, the texture data stored in the first storage location 321 changes correspondingly, and the texture data and the corresponding buffer data always represent the same image data.
In step 220, the first texture data obtained from the server 310 is written into the first storage location 321 of the GPU, which may be implemented using the drawing unit 323; the writing process is described in detail in conjunction with FIG. 5 and is not repeated here. Since the first storage location 321 and the second storage location 322 were bound to each other in step 210, first buffer data associated with the first texture data is automatically generated in the second storage location 322 once the first texture data is written into the first storage location 321. At this point, the first texture data and the first buffer data both describe a face image that has not yet been beautified or virtually made up.
As shown in FIG. 3, the first buffer data in the second storage location 322 is then sent by the scheduling unit 324 to the editing unit 325 for editing. The editing unit 325 may include a beautification SDK (software development kit). The beautification SDK edits the first buffer data into second buffer data using a beautification or virtual makeup algorithm; the second buffer data represents the image after beautification or virtual makeup. In step 230, because the first storage location 321 and the second storage location 322 are bound, once the first buffer data in the second storage location 322 has been edited into the second buffer data, the texture data in the first storage location 321 automatically changes as well and is associated with the second buffer data. The second texture data currently stored in the first storage location 321 can then be obtained directly as the processing result: the texture data processing device 320 returns the second texture data (the image data obtained after virtual makeup or beautification of the face) to the server 310, and the server 310 displays it on screen or pushes it as a stream. It can be understood that the second texture data and the first texture data are texture data of the same OpenGL version, which ensures that the returned texture data version is consistent with the received one and prevents incompatibility problems.
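The patent describes this bind-write-edit-fetch flow abstractly, without a concrete implementation. As a purely conceptual illustration (the `BoundStorage` class and every name in it are hypothetical, not part of the patent), the following Python sketch models the bound first and second storage locations as two views over the same backing memory, so that an edit made through the buffer view is immediately visible through the texture view with no conversion step:

```python
# Conceptual sketch only: models the GPU "first storage location" and the
# CPU "second storage location" as two views over one shared backing store,
# so editing buffer data changes the bound texture data with no conversion.
class BoundStorage:
    def __init__(self, size):
        self._memory = bytearray(size)                 # shared backing store
        self.texture_view = memoryview(self._memory)   # first storage location
        self.buffer_view = memoryview(self._memory)    # second storage location

    def write_texture(self, data):
        # Step 220: writing the first texture data also populates the
        # associated first buffer data, because the views share memory.
        self.texture_view[:len(data)] = data

    def edit_buffer(self, index, value):
        # Editing the buffer data (e.g. a beauty filter) changes the texture.
        self.buffer_view[index] = value

    def read_texture(self):
        # Step 230: the texture already reflects the edit; just read it out.
        return bytes(self.texture_view)

storage = BoundStorage(4)
storage.write_texture(b"\x10\x20\x30\x40")  # first texture data
storage.edit_buffer(0, 0xFF)                # first -> second buffered data
assert storage.read_texture() == b"\xff\x20\x30\x40"
```

In a real system the same effect would rely on a GPU/CPU shared-memory mechanism rather than a Python object; the sketch only shows why no texture-to-buffer conversion is needed once the two locations are bound.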
FIG. 4 shows a flowchart of a method 400 of binding storage locations. The method 400 binds a first storage location of an image processor of the texture data processing apparatus with a second storage location of a central processor of the texture data processing apparatus, and comprises:
step 410, creating empty texture data in a first storage location;
step 420, creating empty buffer data in a second storage location; and
step 430, the handle representing the address of the null texture data in the image processor is associated with the pointer representing the address of the null buffer data in the central processor.
In steps 410 and 420, empty texture data and empty buffer data are created in the GPU first storage location and the CPU second storage location of the texture data processing device 320, respectively, to facilitate the subsequent import of the first texture data into the empty texture data.
In the GPU, the address of texture data is indicated by a handle; in the CPU, the address of buffer data is indicated by a pointer. In step 430, the handle of the empty texture data and the pointer to the address of the empty buffer data are mapped to each other, implementing the binding between the first storage location and the second storage location. Given the handle of the texture data in the first storage location, the buffer data in the second storage location can be queried via this correspondence.
In this embodiment, empty texture data is used for the binding, which makes it convenient to import the subsequent first texture data into the first storage location and prevents the first texture data from being mixed with other texture data.
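The handle-to-pointer correspondence of step 430 can be pictured as a two-way lookup table. The sketch below is a hypothetical illustration (the integer handle and the dictionaries are invented for this example; the patent does not specify a data structure): given either side of the binding, the other can be found and used.

```python
# Hypothetical sketch of step 430: associate a GPU texture handle with the
# CPU-side "pointer" (modeled here as a memoryview) of the empty buffer data.
shared = bytearray(8)      # empty buffer data in the second storage location
texture_handle = 7         # handle of the empty texture (illustrative id only)

handle_to_buffer = {texture_handle: memoryview(shared)}  # handle -> pointer
buffer_to_handle = {id(shared): texture_handle}          # pointer -> handle

# Given the handle of the texture data in the first storage location, the
# bound buffer data in the second storage location can be queried and edited:
handle_to_buffer[texture_handle][0] = 0xAB
assert shared[0] == 0xAB
assert buffer_to_handle[id(shared)] == 7
```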
FIG. 5 shows a flow diagram of a method 500 of writing first texture data into a first storage location, according to an embodiment of the present disclosure. As shown in FIG. 5, the method 500 includes:
step 510, creating a first frame buffer object in an image processor;
step 520, attaching null texture data to the first frame buffer object; and
step 530, drawing the content of the first texture data onto the first frame buffer object to copy it into the first storage location where the empty texture data is located.
When a texture needs to be rendered multiple times and the intermediate results need not be displayed, a separate Frame Buffer Object (FBO) can be used to store the off-screen rendering results, which are displayed in the window after processing is finished. In other words, a frame buffer object corresponds to a temporary canvas: texture data can be drawn onto it, enabling temporary storage of texture data and the exchange of data between texture data in different storage locations.
In step 510, a first frame buffer object is created; in the subsequent step 530, the content of the first texture data can be drawn onto it for temporary storage. In addition, before drawing, the empty texture data is attached to the first frame buffer object in step 520, so that after the content of the first texture data is drawn onto the first frame buffer object, that content is copied into the first storage location where the empty texture data resides, thereby writing the first texture data into the first storage location.
Because texture data cannot be written directly to a storage location in the GPU, the writing process must be completed with the help of other objects. In this embodiment, using a frame buffer object makes it possible to copy the first texture data into the storage location of the empty texture data, thereby implementing the write of the texture data.
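Method 500 can be illustrated with a deliberately simplified sketch, in which the frame buffer object is modeled as a plain dictionary and all functions are hypothetical stand-ins (no real graphics API is invoked): attaching the empty texture and then drawing the source texture onto the FBO amounts to copying the source contents into the destination storage location.

```python
# Illustrative sketch of method 500, not a real OpenGL implementation:
# the FBO acts as a temporary canvas whose attached texture receives pixels.
def create_framebuffer():
    return {"attachment": None}              # step 510: first frame buffer object

def attach_texture(fbo, texture):
    fbo["attachment"] = texture              # step 520: attach empty texture

def draw_to_framebuffer(fbo, source):
    # step 530: drawing writes pixels into whatever texture is attached,
    # so the source contents land in the attached texture's storage location.
    fbo["attachment"][:] = source

empty_texture = [0, 0, 0, 0]     # empty texture in the first storage location
first_texture = [9, 8, 7, 6]     # first texture data to be processed

fbo = create_framebuffer()
attach_texture(fbo, empty_texture)
draw_to_framebuffer(fbo, first_texture)
assert empty_texture == [9, 8, 7, 6]  # contents copied in place
```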
FIG. 6 shows a flow diagram of a method 600 of processing texture data according to another embodiment of the present disclosure. The method 600 is used to implement co-address rendering of texture data. As shown in fig. 6, the method 600 includes:
step 610, binding a third storage location of an image processor of the texture data processing device with a fourth storage location of a central processor of the texture data processing device, so that texture data stored in the third storage location is associated with buffer data stored in the fourth storage location, wherein the third storage location is the storage location of the first texture data to be processed, as acquired by the texture data processing device;
step 620, generating third buffer data based on the second texture data;
step 630, writing the third buffered data into the fourth storage location to generate third texture data associated with the third buffered data in the third storage location; and
step 640, outputting the third texture data.
In some scenes, the cloud vendor requires co-address rendering, i.e., the edited texture data must be stored in the same storage location as the original texture data. For example, when the first texture data to be processed acquired by the texture data processing device 320 is stored in the third storage location of the GPU, the processed texture data must also overwrite the original first texture data in that third storage location, so as to achieve same-address rendering.
In step 610, a binding between a third storage location in the GPU and a fourth storage location in the CPU may additionally be established such that texture data stored in the third storage location is associated with buffered data stored in the fourth storage location. The binding process described above is similar to the binding process described in step 210 of method 200. Specifically, the binding may be performed by creating null texture data in the third storage location and null buffer data in the fourth storage location, respectively, and then establishing a corresponding relationship between a handle of the null texture data and a pointer of an address of the null buffer data.
Fig. 7 shows a block diagram of a texture data processing apparatus 700 according to an embodiment of the present disclosure. As shown in fig. 7, a binding is established between a third memory location 701 in the GPU and a fourth memory location 702 in the CPU. The output texture shown in fig. 7 may be the second texture data shown in fig. 3.
In step 620, the second texture data may be converted into third buffer data by means of rendering. In other embodiments, the buffered data edited by the editing unit 325 (i.e., the second buffered data shown in fig. 3) can also be directly used as the third buffered data.
In step 630, after the third buffer data is written into the fourth storage location 702, third texture data associated with the third buffer data is automatically generated in the third storage location 701, since a binding has been established between the third storage location 701 and the fourth storage location 702. Because the third buffer data was converted from the second texture data, the third texture data associated with it is substantially the same as the second texture data. Through the method 600, the texture data resulting from processing the first texture data is generated in the third storage location 701.
In this embodiment, by additionally establishing a binding between the third storage location 701 and the fourth storage location 702, and writing the third buffer data into the fourth storage location 702, the edited texture data can be generated in the third storage location 701, thereby implementing same-address rendering.
Fig. 8 shows a flow diagram of a method 800 of generating third buffered data based on second texture data according to an embodiment of the disclosure. As shown in fig. 8, the method 800 includes:
step 810, creating a second frame buffer object in the image processor;
step 820, drawing the content of the second texture data to the second frame buffer object; and
step 830, generating third buffered data according to the second frame buffer object.
In step 810, as shown with reference to fig. 7, a second Frame Buffer Object (FBO) may be created in the GPU. Subsequently, in step 820, the content of the second texture data is rendered onto the second frame buffer object to enable temporary storage of the second texture data. Third buffer data is then generated from the second frame buffer object in step 830.
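Method 800 follows the same off-screen pattern as method 500. The sketch below is again a hypothetical stand-in (a dictionary models the second FBO; no graphics API is called): the edited second texture is drawn onto the second frame buffer object, whose contents are then read back as the third buffer data.

```python
# Sketch of method 800: draw the second texture onto a second FBO for
# temporary storage, then read the FBO contents out as buffer data.
def create_framebuffer():
    return {"pixels": None}

def draw_texture(fbo, texture):
    fbo["pixels"] = list(texture)       # step 820: temporary storage on the FBO

def read_pixels(fbo):
    return bytes(fbo["pixels"])         # step 830: generate the buffer data

second_texture = b"\x11\x22\x33"
fbo2 = create_framebuffer()             # step 810: second frame buffer object
draw_texture(fbo2, second_texture)
third_buffer = read_pixels(fbo2)
assert third_buffer == second_texture
```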
It should be noted that, in the above embodiments, the first texture data is described as a face image, and the second texture data as the data obtained by virtual makeup/beautification editing of that face image. In other embodiments, however, the first texture data may be other data, such as a human body image, in which case the corresponding second texture data would be the data obtained by virtual fitting/dressing editing of the body image. In short, embodiments of the present disclosure do not limit the types of objects that texture data represents in an image.
According to another aspect of the present disclosure, a texture data processing apparatus is also provided. FIG. 9 shows a block diagram of a texture data processing apparatus 900 according to an embodiment of the present disclosure. As shown in FIG. 9, the apparatus 900 includes: a first binding module 910 configured to bind a first storage location of an image processor of the texture data processing apparatus with a second storage location of a central processor of the texture data processing apparatus, such that texture data stored in the first storage location is associated with buffer data stored in the second storage location; a first write module 920 configured to write the first texture data to be processed into the first storage location to generate first buffered data associated with the first texture data in the second storage location; and an obtaining module 930 configured to obtain, as a processing result, the second texture data currently stored in the first storage location in response to determining that the first buffered data has been edited into the second buffered data.
Fig. 10 is a block diagram illustrating a structure of a texture data processing apparatus 1000 according to an embodiment of the present disclosure. As shown in fig. 10, the first binding module 1010 includes: a first creation sub-module 1011 configured to create null texture data at a first storage location; a second creation sub-module 1012 configured to create empty buffer data at a second storage location; and an establishing sub-module 1013 configured to establish a correspondence between a handle representing the address of the null texture data in the image processor and a pointer representing the address of the empty buffer data in the central processor.
In some embodiments, the first write module 1020 includes: a third creating submodule 1021 configured to create a first frame buffer object in the image processor; an attaching sub-module 1022 configured to attach null texture data to the first frame buffer object; and a first drawing sub-module 1023 configured to draw the contents of the first texture data onto the first frame buffer object to copy the contents of the first texture data into the first storage location where the empty texture data is located.
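The write path of sub-modules 1021-1023 can be sketched schematically. The key idea is that attaching the empty texture to a frame buffer object makes drawing onto the FBO write into the texture's storage, which is how the first texture's content lands in the bound first storage location. The class and names below are illustrative assumptions, not actual GPU API.

```python
# Model of sub-modules 1021-1023: create FBO, attach empty texture, draw.

class FrameBufferObject:
    def __init__(self):
        self.attachment = None          # sub-module 1021: created empty

    def attach(self, texture):
        self.attachment = texture       # sub-module 1022: attach empty texture

    def draw(self, source_texture_data):
        # sub-module 1023: drawing onto the FBO writes into the attachment,
        # copying the source content into the empty texture's storage
        self.attachment[:] = source_texture_data

empty_texture = bytearray(8)            # texture created at the first location
first_texture_data = bytes(range(8))    # the texture data to be processed

fbo = FrameBufferObject()
fbo.attach(empty_texture)
fbo.draw(first_texture_data)
assert bytes(empty_texture) == first_texture_data
```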
In some embodiments, the apparatus 1000 further comprises: a second binding module 1040 configured to bind a third storage location of an image processor of the texture data processing apparatus with a fourth storage location of a central processor of the texture data processing apparatus, so that texture data stored in the third storage location is associated with buffer data stored in the fourth storage location, where the third storage location is the storage location of the first texture data to be processed, as acquired by the texture data processing apparatus; a generating module 1050 configured to generate third buffer data based on the second texture data; a second write module 1060 configured to write the third buffered data into the fourth storage location to generate third texture data associated with the third buffered data in the third storage location; and an output module 1070 configured to output the third texture data.
In some embodiments, the generation module 1050 includes: a fourth creating sub-module 1051 configured to create a second frame buffer object in the image processor; a second rendering sub-module 1052 configured to render the contents of the second texture data onto a second frame buffer object; and a generating submodule 1053 configured to generate third buffer data from the second frame buffer object.
It should be understood that the various modules of the apparatus 900 shown in fig. 9 may correspond to the various steps in the method 200 described with reference to fig. 2. The various modules of the apparatus 1000 shown in fig. 10 may correspond to the various steps in the methods 300-700 described with reference to fig. 3-7. Thus, the operations, features and advantages described above with respect to the methods 300-700 are equally applicable to the apparatus 900, the apparatus 1000 and the units and modules included therein. Certain operations, features and advantages may not be described in detail herein for the sake of brevity.
In the technical solutions of the present disclosure, the collection, storage, use, processing, transmission, provision, and disclosure of the personal information involved all comply with applicable laws and regulations and do not violate public order and good customs.
According to an embodiment of the present disclosure, an electronic device, a readable storage medium, and a computer program product are also provided.
Referring to fig. 11, a block diagram of an electronic device 1100, which may be a server or a client of the present disclosure, will now be described. The electronic device 1100 is an example of a hardware device that may be applied to aspects of the present disclosure; for example, the texture data processing apparatus described above has a structure similar to that of the electronic device 1100. The electronic device is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only and are not intended to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 11, the electronic device 1100 includes a computing unit 1101, which can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1102 or a computer program loaded from a storage unit 1108 into a Random Access Memory (RAM) 1103. The RAM 1103 may also store various programs and data necessary for the operation of the electronic device 1100. The computing unit 1101, the ROM 1102, and the RAM 1103 are connected to one another by a bus 1104. An input/output (I/O) interface 1105 is also connected to the bus 1104.
A number of components in the electronic device 1100 are connected to the I/O interface 1105, including an input unit 1106, an output unit 1107, a storage unit 1108, and a communication unit 1109. The input unit 1106 may be any type of device capable of inputting information to the electronic device 1100; it may receive input numeric or character information and generate key signal inputs related to user settings and/or function control of the electronic device, and may include, but is not limited to, a mouse, a keyboard, a touch screen, a track pad, a track ball, a joystick, a microphone, and/or a remote control. The output unit 1107 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, a video/audio output terminal, a vibrator, and/or a printer. The storage unit 1108 may include, but is not limited to, a magnetic disk or an optical disk. The communication unit 1109 allows the electronic device 1100 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers, and/or chipsets, such as Bluetooth™ devices, 802.11 devices, Wi-Fi devices, WiMax devices, cellular communication devices, and/or the like.
The computing unit 1101 can be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 1101 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and the like. The computing unit 1101 performs the methods and processes described above, such as the texture data processing method. For example, in some embodiments, the texture data processing method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 1108. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 1100 via the ROM 1102 and/or the communication unit 1109. When the computer program is loaded into the RAM 1103 and executed by the computing unit 1101, one or more steps of the texture data processing method described above may be performed. Alternatively, in other embodiments, the computing unit 1101 may be configured to perform the texture data processing method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, that receives data and instructions from, and transmits data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable texture data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM or flash memory), an optical fiber, a Compact Disc Read Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), the internet, and blockchain networks.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be performed in parallel, sequentially, or in a different order; the order is not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the above-described methods, systems and apparatus are merely exemplary embodiments or examples and that the scope of the present invention is not limited by these embodiments or examples, but only by the claims as issued and their equivalents. Various elements in the embodiments or examples may be omitted or may be replaced with equivalents thereof. Further, the steps may be performed in an order different from that described in the present disclosure. Further, various elements in the embodiments or examples may be combined in various ways.

Claims (15)

1. A texture data processing method, applied to a texture data processing apparatus, the method comprising:
binding a first memory location of an image processor of the texture data processing apparatus with a second memory location of a central processor of the texture data processing apparatus such that texture data stored in the first memory location is associated with buffered data stored in the second memory location;
writing first texture data to be processed into the first storage location to generate first buffer data associated with the first texture data in the second storage location; and
in response to determining that the first buffered data is edited as second buffered data, second texture data currently stored in the first storage location is obtained as a processing result.
2. The method of claim 1, wherein the binding a first memory location of an image processor of the texture data processing apparatus to a second memory location of a central processor of the texture data processing apparatus comprises:
creating null texture data at the first storage location;
creating empty buffer data at the second storage location; and
establishing a correspondence between a handle representing an address of the null texture data in the image processor and a pointer representing an address of the empty buffer data in the central processor.
3. The method of claim 2, wherein the writing the first texture data to be processed into the first storage location comprises:
creating a first frame buffer object in the image processor;
appending the empty texture data to the first frame buffer object; and
drawing the content of the first texture data onto the first frame buffer object to copy the content of the first texture data into a first storage location where the empty texture data is located.
4. The method of claim 1, wherein, after the obtaining of the second texture data currently stored in the first storage location as the processing result, the method further comprises:
binding a third storage position of an image processor of the texture data processing device with a fourth storage position of a central processor of the texture data processing device, so that texture data stored in the third storage position is associated with buffer data stored in the fourth storage position, wherein the third storage position is a storage position of first texture data to be processed, which is acquired by the texture data processing device;
generating third buffer data based on the second texture data;
writing the third buffered data into the fourth storage location to generate third texture data associated with the third buffered data in the third storage location; and
outputting the third texture data.
5. The method of claim 4, wherein the generating third buffered data based on the second texture data comprises:
creating a second frame buffer object in the image processor;
rendering the content of the second texture data onto the second frame buffer object; and
generating the third buffer data from the second frame buffer object.
6. The method of any of claims 1-5, wherein the second texture data and the first texture data are the same version of texture data.
7. The method according to any one of claims 1 to 5, wherein the editing includes virtual makeup or beautification, the first texture data is image data of a human face obtained from a cloud vendor, and the second texture data is image data of the human face after virtual makeup or beautification.
8. A texture data processing apparatus comprising:
a first binding module configured to bind a first memory location of an image processor of the texture data processing device with a second memory location of a central processor of the texture data processing device such that texture data stored in the first memory location is associated with buffered data stored in the second memory location;
a first write module configured to write first texture data to be processed into the first storage location to generate first buffered data associated with the first texture data in the second storage location; and
a fetch module configured to fetch second texture data currently stored in the first storage location as a processing result in response to determining that the first buffered data is edited as second buffered data.
9. The apparatus of claim 8, wherein the first binding module comprises:
a first creation sub-module configured to create null texture data at the first storage location;
a second creating submodule configured to create empty buffer data at the second storage location; and
an establishing submodule configured to establish correspondence between a handle indicating an address of the empty texture data in the image processor and a pointer indicating an address of the empty buffer data in the central processor.
10. The apparatus of claim 9, wherein the first write module comprises:
a third creating sub-module configured to create a first frame buffer object in the image processor;
an appending sub-module configured to append the null texture data to the first frame buffer object; and
a first rendering sub-module configured to render contents of the first texture data onto the first frame buffer object to copy the contents of the first texture data into a first storage location where the null texture data is located.
11. The apparatus of claim 8, further comprising:
a second binding module, configured to bind a third storage location of an image processor of the texture data processing apparatus with a fourth storage location of a central processor of the texture data processing apparatus, so that texture data stored in the third storage location is associated with buffer data stored in the fourth storage location, where the third storage location is a storage location of first texture data to be processed, acquired by the texture data processing apparatus;
a generating module configured to generate third buffer data based on the second texture data;
a second write module configured to write the third buffered data into the fourth storage location to generate third texture data associated with the third buffered data in the third storage location; and
an output module configured to output the third texture data.
12. The apparatus of claim 11, wherein the generating module comprises:
a fourth creating sub-module configured to create a second frame buffer object in the image processor;
a second rendering sub-module configured to render the contents of the second texture data onto the second frame buffer object; and
a generating sub-module configured to generate the third buffered data from the second frame buffer object.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
14. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-7.
15. A computer program product comprising a computer program, wherein the computer program realizes the method of any one of claims 1-7 when executed by a processor.
CN202211637398.0A 2022-12-16 2022-12-16 Texture data processing method and device Pending CN115953445A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211637398.0A CN115953445A (en) 2022-12-16 2022-12-16 Texture data processing method and device


Publications (1)

Publication Number Publication Date
CN115953445A true CN115953445A (en) 2023-04-11

Family

ID=87286988

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211637398.0A Pending CN115953445A (en) 2022-12-16 2022-12-16 Texture data processing method and device

Country Status (1)

Country Link
CN (1) CN115953445A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination