CN117788609A - Method, device, equipment and storage medium for picking up interface graphic elements - Google Patents

Method, device, equipment and storage medium for picking up interface graphic elements

Info

Publication number
CN117788609A
CN117788609A
Authority
CN
China
Prior art keywords
primitive
current interface
target
color
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311746726.5A
Other languages
Chinese (zh)
Inventor
Liu Sitong (刘思彤)
Zhou Bin (周斌)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202311746726.5A priority Critical patent/CN117788609A/en
Publication of CN117788609A publication Critical patent/CN117788609A/en
Pending legal-status Critical Current

Landscapes

  • Image Generation (AREA)

Abstract

The application provides a method, an apparatus, a device, and a storage medium for picking up an interface graphic element, relating to the field of computer technology. The color value corresponding to a click position is combined with a depth value: the picked-up object is determined from the color value, and the specific click position on the object's surface is determined from the depth value. The method includes: performing off-screen rendering on the current interface to obtain off-screen rendering data corresponding to the current interface, the off-screen rendering data including depth texture data and color texture data; in response to a primitive selection operation in the current interface, determining, according to the click position corresponding to the primitive selection operation, a target color value corresponding to the click position from the color texture data, and determining a target primitive corresponding to the target color value; and determining, according to the click position, a target depth value corresponding to the click position from the depth texture data, and determining world coordinates corresponding to the click position according to the target depth value, the click position, and the camera coordinate system of the current interface.

Description

Method, device, equipment and storage medium for picking up interface graphic elements
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a storage medium for picking up an interface primitive.
Background
Primitive picking is one of the key technologies for implementing a graphic editing system, and is a process of selecting a primitive from numerous primitives, so as to perform interactive operations such as translation, modification, scaling, rotation, deletion, and the like on the selected primitive.
In the conventional primitive picking method, the identifier (ID) value of each object is generally converted into a color value. When a mouse click initiates a pick, each object is drawn into an off-screen buffer with its corresponding color value; after drawing finishes, the off-screen buffer is read back on the central processing unit (CPU), and the pixel color at the mouse click position is obtained, so that the clicked object is found. However, this picking method can only identify which object was clicked; it cannot obtain the position on the object's surface where the mouse clicked, and therefore cannot provide an accurate feedback effect for the user.
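Color-coded picking of this kind relies on an invertible mapping between object IDs and color values. As a minimal sketch (the 24-bit RGB packing below is one common convention, not necessarily the encoding used by this application):

```python
def id_to_color(primitive_id):
    """Pack a 24-bit primitive ID into an (r, g, b) byte triple."""
    return ((primitive_id >> 16) & 0xFF,
            (primitive_id >> 8) & 0xFF,
            primitive_id & 0xFF)

def color_to_id(r, g, b):
    """Recover the primitive ID from the pixel color read back at the click."""
    return (r << 16) | (g << 8) | b
```

Because each object is flat-shaded with its unique color in the off-screen pass, reading one pixel and inverting the mapping identifies the clicked object.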
Disclosure of Invention
In view of the above technical problems, the present application provides a method, an apparatus, a device, and a storage medium for picking up an interface primitive, in which the color value corresponding to a click position is combined with a depth value: the object is picked up via the color value, and the specific click position on the object's surface is determined via the depth value.
In a first aspect, the present application provides a method for picking up an interface primitive, where the method includes: performing off-screen rendering on the current interface to obtain off-screen rendering data corresponding to the current interface; the off-screen rendering data comprises depth texture data and color texture data, wherein the depth texture data is used for reflecting depth values corresponding to different positions in the current interface, and the color texture data is used for reflecting color values corresponding to different positions in the current interface; wherein, the color values corresponding to the same primitive in the color texture data are the same, and the color values corresponding to different primitives are different; responding to the primitive selection operation in the current interface, determining a target color value corresponding to the clicking position from the color texture data according to the clicking position corresponding to the primitive selection operation in the current interface, and determining a target primitive corresponding to the target color value; determining a target depth value corresponding to the click position from the depth texture data according to the click position, and determining world coordinates corresponding to the click position according to the target depth value, the click position and a camera coordinate system of the current interface; and determining the target primitive as a picked primitive, and determining world coordinates as position information corresponding to the primitive selection operation on the picked primitive to obtain a picking result.
After the off-screen rendering is carried out on the current interface, the obtained off-screen rendering data not only comprises color texture data, but also comprises depth texture data. And when the user performs clicking operation on the current interface, the method and the device can determine a target depth value corresponding to the clicking position from the depth texture data according to the clicking position, combine the clicking position and the target depth value with a camera coordinate system of the current interface to obtain world coordinates corresponding to the clicking position, determine a target color value corresponding to the clicking position from the color texture data according to the clicking position, and determine a target primitive corresponding to the target color value. In this way, the method and the device can determine not only the pick-up primitive (i.e. the target primitive) corresponding to the clicking operation, but also the position information (i.e. the world coordinates) on the primitive, and further provide accurate pick-up response for the user aiming at the specific position on the pick-up primitive.
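Conceptually, the two per-click lookups read the same texel from the two off-screen textures. The sketch below is purely illustrative (the texture layout and the color-to-primitive table are hypothetical stand-ins):

```python
def pick(click, color_tex, depth_tex, color_to_primitive):
    """Look up the target primitive and target depth value for one click.

    color_tex and depth_tex are row-major 2-D arrays standing in for the
    color texture data and depth texture data of the off-screen pass.
    """
    x, y = click
    target_color = color_tex[y][x]   # target color value at the click position
    target_depth = depth_tex[y][x]   # target depth value at the click position
    return color_to_primitive[target_color], target_depth
```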
In one possible implementation, performing off-screen rendering on a current interface includes: obtaining vertex information, depth values and mapping relations between different primitives and different color values of each primitive in a current interface; performing off-screen rendering on the current interface based on vertex information and depth values of each primitive to obtain depth texture data; and performing off-screen rendering on the current interface based on vertex information of each primitive and mapping relations between different primitives and different color values to obtain color texture data.
In a possible implementation manner, determining world coordinates corresponding to the click position according to the target depth value, the click position, and a camera coordinate system of the current interface includes: determining pixel point coordinates corresponding to the click position according to the interface pixels of the current interface and the pixel points covered by the click position, and converting the pixel point coordinates into two-dimensional normalized device coordinates (NDC); taking the target depth value as a new dimension value and adding it to the two-dimensional NDC to obtain a three-dimensional NDC; and transforming the three-dimensional NDC according to the camera view-projection matrix information corresponding to the camera coordinate system to obtain the world coordinates corresponding to the click position.
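The three steps above (pixel coordinates to 2-D NDC, the depth value appended as a third dimension, then a transform by the camera view-projection information) can be sketched in pure Python as follows. The top-left pixel origin, the [0, 1] depth-buffer range, the use of the inverse view-projection matrix, and the row-major matrix layout are all assumptions for illustration:

```python
def mat4_mul_vec4(m, v):
    """Multiply a row-major 4x4 matrix by a 4-component vector."""
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(4))

def pixel_to_ndc(px, py, width, height):
    """Map pixel coordinates to 2-D NDC in [-1, 1] (y flipped: top-left origin)."""
    return (2.0 * px / width - 1.0, 1.0 - 2.0 * py / height)

def unproject(px, py, depth, width, height, inv_view_proj):
    """Append the target depth value to the 2-D NDC and return world coordinates."""
    nx, ny = pixel_to_ndc(px, py, width, height)
    nz = depth * 2.0 - 1.0            # [0, 1] depth buffer -> [-1, 1] NDC z
    x, y, z, w = mat4_mul_vec4(inv_view_proj, (nx, ny, nz, 1.0))
    return (x / w, y / w, z / w)      # perspective divide yields world coordinates
```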
In a possible implementation manner, determining world coordinates corresponding to the click position according to the target depth value, the click position and a camera coordinate system of the current interface includes: and in a computing shader of the GPU, according to the target depth value, the click position and a camera coordinate system of the current interface, calculating to obtain world coordinates corresponding to the click position.
In a possible implementation manner, according to a click position corresponding to a primitive selection operation in a current interface, determining a target color value corresponding to the click position from color texture data includes: in a computing shader of the graphic processor GPU, determining a target color value corresponding to the clicking position from the color texture data according to the clicking position corresponding to the primitive selection operation in the current interface.
In a possible implementation manner, after determining, by the graphics processor GPU, a target color value corresponding to the click position, determining a target primitive corresponding to the target color value includes: and acquiring a target color value in the GPU through a CPU, and determining a target graphic element corresponding to the target color value through the CPU reading the mapping relation between the graphic element identifiers and the color values.
In a possible implementation manner, the method further includes: creating a storage buffer area in a video memory of the GPU; after world coordinates corresponding to the clicking positions and target color values corresponding to the clicking positions are obtained, storing the world coordinates corresponding to the clicking positions and the target color values corresponding to the clicking positions into a storage buffer area; the data in the created memory buffer is sent to the CPU.
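The application does not specify the layout of the storage buffer. As a hedged sketch, the CPU-side view of one record (the world coordinates plus the target color value) might be packed and unpacked as below; the little-endian field order is an assumption, and a real shader-side layout may require extra alignment padding:

```python
import struct

def pack_pick_result(world, color):
    """Pack world (x, y, z) floats and an RGBA8 color into a 16-byte record."""
    x, y, z = world
    r, g, b, a = color
    return struct.pack("<3f4B", x, y, z, r, g, b, a)

def unpack_pick_result(buf):
    """Inverse of pack_pick_result, as the CPU would decode the read-back data."""
    x, y, z, r, g, b, a = struct.unpack("<3f4B", buf)
    return (x, y, z), (r, g, b, a)
```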
In a second aspect, the present application provides a pickup device for an interface primitive, the device including a processing unit and a determining unit; the processing unit is used for performing off-screen rendering on the current interface to obtain off-screen rendering data corresponding to the current interface; the off-screen rendering data comprises depth texture data and color texture data, wherein the depth texture data is used for reflecting depth values corresponding to different positions in the current interface, and the color texture data is used for reflecting color values corresponding to different positions in the current interface; wherein, the color values corresponding to the same primitive in the color texture data are the same, and the color values corresponding to different primitives are different; the determining unit is used for responding to the primitive selection operation in the current interface, determining a target color value corresponding to the clicking position from the color texture data according to the clicking position corresponding to the primitive selection operation in the current interface, and determining a target primitive corresponding to the target color value; the determining unit is further used for determining a target depth value corresponding to the clicking position from the depth texture data according to the clicking position, and determining world coordinates corresponding to the clicking position according to the target depth value, the clicking position and a camera coordinate system of the current interface; and the determining unit is also used for determining the target primitive as a picked-up primitive and determining the world coordinate as position information corresponding to the primitive selecting operation on the picked-up primitive to obtain a picking-up result.
In a possible implementation manner, the processing unit is specifically configured to: obtaining vertex information, depth values and mapping relations between different primitives and different color values of each primitive in a current interface; performing off-screen rendering on the current interface based on vertex information and depth values of each primitive to obtain depth texture data; and performing off-screen rendering on the current interface based on vertex information of each primitive and mapping relations between different primitives and different color values to obtain color texture data.
In a possible implementation manner, the determining unit is specifically configured to: determine pixel point coordinates corresponding to the click position according to the interface pixels of the current interface and the pixel points covered by the click position, and convert the pixel point coordinates into two-dimensional normalized device coordinates (NDC); take the target depth value as a new dimension value and add it to the two-dimensional NDC to obtain a three-dimensional NDC; and transform the three-dimensional NDC according to the camera view-projection matrix information corresponding to the camera coordinate system to obtain the world coordinates corresponding to the click position.
In a possible implementation manner, the determining unit is specifically configured to: and in a computing shader of the GPU, according to the target depth value, the click position and a camera coordinate system of the current interface, calculating to obtain world coordinates corresponding to the click position.
In a possible implementation manner, the determining unit is specifically configured to: in a computing shader of the graphic processor GPU, determining a target color value corresponding to the clicking position from the color texture data according to the clicking position corresponding to the primitive selection operation in the current interface.
In a possible implementation manner, after determining, by the graphics processor GPU, the target color value corresponding to the click position, the determining unit is specifically configured to: and acquiring a target color value in the GPU through a CPU, and determining a target graphic element corresponding to the target color value through the CPU reading the mapping relation between the graphic element identifiers and the color values.
In a possible implementation, the processing unit is further configured to: creating a storage buffer area in a video memory of the GPU; after world coordinates corresponding to the clicking positions and target color values corresponding to the clicking positions are obtained, storing the world coordinates corresponding to the clicking positions and the target color values corresponding to the clicking positions into a storage buffer area; the data in the created memory buffer is sent to the CPU.
In a third aspect, the present application provides an electronic device, comprising: a processor and a memory; the memory stores instructions executable by the processor; the processor is configured to execute the instructions to cause the electronic device to implement the method of the first aspect as described above.
In a fourth aspect, the present application provides a computer program product which, when run on an electronic device, causes the electronic device to perform the method of the first aspect described above.
In a fifth aspect, the present application provides a computer readable storage medium comprising: a software instruction; the software instructions, when executed in an electronic device, cause the electronic device to implement the method of the first aspect described above.
For the advantageous effects of the second to fifth aspects described above, reference may be made to the first aspect; they are not repeated here.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic structural diagram of a pickup system for interface primitives according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a pickup device according to an embodiment of the present application;
Fig. 3 is a schematic diagram of the composition of an electronic device according to an embodiment of the present application;
FIG. 4 is a flowchart of a method for picking up interface primitives according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a process for performing normal rendering in a GPU according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a process for performing off-screen rendering in a GPU according to an embodiment of the present application;
FIG. 7 is a second flowchart of a method for picking up interface primitives according to an embodiment of the present disclosure;
FIG. 8 is a flowchart of performing a pick information computation within a GPU computing shader, according to an embodiment of the present application;
fig. 9 is a schematic diagram of cooperation between a CPU and a GPU according to an embodiment of the present application;
fig. 10 is a schematic diagram of the composition of a pickup device according to an embodiment of the present application.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the accompanying claims.
In addition, in the description of the embodiments of the present application, "/" means or, unless otherwise indicated, for example, a/B may mean a or B. "and/or" herein is merely an association relationship describing an association object, and means that three relationships may exist, for example, a and/or B may mean: a exists alone, A and B exist together, and B exists alone. In addition, in the description of the embodiments of the present application, "plurality" means two or more than two.
Before explaining the embodiments of the present application in detail, some related terms and related techniques related to the embodiments of the present application are described.
With the rapid development of three-dimensional technology and the growing demand for three-dimensional visualization across industries, higher requirements are placed on three-dimensional visualization technology: it is no longer limited to providing simple three-dimensional visualization capability, but must also allow the user to interact with the three-dimensional virtual environment and deliver an excellent user experience.
Object positioning and picking in a three-dimensional environment are among the most important parts of three-dimensional interaction technology, and the limited speed and accuracy of object picking in a three-dimensional environment has long been a major problem. Existing three-dimensional picking techniques and algorithms have formed established systems; the current mainstream algorithms fall into the following two classes:
One class is object-space picking based on ray casting. The basic idea is to convert the mouse click position on the screen into a pick ray using a transformation matrix, and to calculate whether the pick ray intersects any triangle of any three-dimensional (3D) object, thereby selecting the three-dimensional object.
Specifically, this technique transforms the mouse click position on the screen into coordinates in the world coordinate system and, together with the viewpoint, forms the pick ray, which is then tested against the objects in the scene. To judge intersection, the pick ray is converted by matrix transformation into the local coordinate system of each candidate object, and the triangular faces of the object are traversed and intersected with the pick ray, thereby determining the pick point and the selected object.
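The per-triangle intersection test that this class of algorithms repeats for every face is commonly implemented with the Möller-Trumbore algorithm; the application names no specific algorithm, so the following is a representative sketch:

```python
EPS = 1e-9

def ray_triangle_intersect(origin, direction, v0, v1, v2):
    """Möller-Trumbore test: return the hit distance t along the ray, or None."""
    def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
    def cross(a, b): return (a[1] * b[2] - a[2] * b[1],
                             a[2] * b[0] - a[0] * b[2],
                             a[0] * b[1] - a[1] * b[0])

    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < EPS:               # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    t_vec = sub(origin, v0)
    u = dot(t_vec, p) * inv_det      # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(t_vec, e1)
    v = dot(direction, q) * inv_det  # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv_det         # distance along the ray to the pick point
    return t if t > EPS else None
```

Running this test against every triangle of every object is exactly the traversal whose cost the next paragraph criticizes.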
Because this technique traverses every object in the scene and every triangular face of each object, the amount of computation is large and resource-intensive, so it is not suitable for picking in large scenes or on complex objects.
The other class is image-space picking algorithms, such as GPU-based redraw picking, which pick and display objects through two draw passes in Open Graphics Library (OpenGL) drawing mode. However, because the off-screen buffer is read back on the CPU and the pixel color at the corresponding position is computed there, this technique occupies CPU memory and spends time on data transfer and CPU processing. Moreover, it can only identify the clicked object and cannot obtain the position on the object's surface where the mouse clicked.
In view of the above problems, an embodiment of the present application provides a method for picking up an interface primitive, in which the color value corresponding to the click position is combined with a depth value: the picked-up object is determined from the color value, and the specific click position on the object's surface is determined from the depth value. Thus, when the picked-up primitive is determined, the position information corresponding to the primitive selection operation on that primitive can also be determined, and different feedback effects can then be shown to the user according to the different position information.
The following describes in detail the method for picking up the interface primitive provided in the embodiment of the present application with reference to the accompanying drawings.
The method for picking up interface primitives provided by the embodiment of the present application can be applied to a pickup system for interface primitives; fig. 1 shows a schematic structural diagram of such a system. As shown in fig. 1, the pickup system 10 for interface primitives includes a display device 11 and a pickup device 12, where the display device 11 is connected to the pickup device 12. The display device 11 and the pickup device 12 may be connected by a wired connection or a wireless connection, which is not limited in the embodiment of the present application.
The display device 11 is used for displaying a visual interface and providing visual interaction for a user.
The pickup device 12 may render a visual interface for the display device 11, and may perform pickup confirmation based on a related operation of the user on the visual interface, so as to obtain a pickup result. The specific picking process may refer to the method for picking the interface primitive described in the method embodiment described below, which is not described herein.
The display device 11 may be any device capable of outputting an image or tactile information, and for example, the display device 11 may be a television display, a computer display, or the like.
The pickup device 12 may be any electronic device having an image processing function, for example, the pickup device 12 may be a mobile phone, a tablet computer, a desktop, a laptop, a notebook, an Ultra-mobile personal computer (UMPC), a handheld computer, a netbook, a personal digital assistant (Personal Digital Assistant, PDA), a wearable electronic device, a smart watch, or the like. The specific form of the electronic device is not particularly limited in this application.
As shown in fig. 2, the pickup device 12 may include: a central processor 121, a graphics processor 122. The pickup device 12 may perform the pickup method of the interface primitive of the embodiment of the present application through the central processor 121 and the graphic processor 122.
In some embodiments, the pickup system 10 further includes an input device for enabling information exchange between the user and the pickup system 10. For example, the input device may be a keyboard, a mouse, a camera, a scanner, a light pen, a handwriting tablet, a joystick, a voice input device, or the like. An input device is a means by which a person or an external system interacts with the computer, used to input raw data and the programs that process the data. Through different types of input devices, the computer can receive various kinds of data, both numerical data and non-numerical data such as graphics, images, and sound, for storage, processing, and output.
In fig. 1, the display device 11 and the pickup device 12 are described as separate devices, and alternatively, the display device 11 and the pickup device 12 may be combined into one device. For example, the display device 11 or its corresponding function, and the pickup device 12 or its corresponding function may be integrated in one device. The embodiments of the present application are not limited in this regard.
The main execution body of the method for picking up interface primitives provided in the embodiment of the present application may be the above-mentioned pickup device 12. As described above, the pickup device 12 may be an electronic apparatus having an image processing function, such as a computer or a server. Alternatively, the pickup device 12 may be a processor (e.g., a central processing unit (CPU)) in the aforementioned electronic apparatus; alternatively, the pickup device 12 may be an application (APP) installed in the aforementioned electronic apparatus that implements the picking function; alternatively, the pickup device 12 may be a functional module of the electronic apparatus that implements the picking function. The embodiments of the present application are not limited in this regard.
For simplicity of description, the pickup device 12 will be described as an example of an electronic apparatus.
Fig. 3 is a schematic diagram of the composition of an electronic device according to an embodiment of the present application. As shown in fig. 3, the electronic device may include: processor 20, memory 21, communication line 22, and communication interface 23, and input-output interface 24.
The processor 20, the memory 21, the communication interface 23, and the input/output interface 24 may be connected by a communication line 22.
The processor 20 is configured to execute instructions stored in the memory 21 to implement the method for picking up interface primitives provided in the following embodiments of the present application. The processor 20 may be a CPU, a general-purpose processor, a network processor (NP), a digital signal processor (DSP), a microprocessor, a microcontroller (micro control unit, MCU), a programmable logic device (PLD), or any combination thereof. The processor 20 may also be any other apparatus having a processing function, such as a circuit, a device, or a software module, which is not limited in this embodiment. In one example, the processor 20 may include one or more CPUs, such as CPU0 and CPU1 in fig. 3. As an alternative implementation, the electronic device may include multiple processors, for example, a processor 25 (illustrated in phantom in fig. 3) in addition to the processor 20.
A memory 21 for storing instructions. For example, the instructions may be a computer program. Alternatively, the memory 21 may be a read-only memory (ROM) or another type of static storage device capable of storing static information and/or instructions, a random access memory (RAM) or another type of dynamic storage device capable of storing information and/or instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium, or another magnetic storage device, which is not limited in this embodiment.
It should be noted that, the memory 21 may exist separately from the processor 20 or may be integrated with the processor 20. The memory 21 may be located inside the electronic device or may be located outside the electronic device, which is not limited in the embodiment of the present application.
Communication lines 22 for conveying information between components included in the electronic device.
A communication interface 23 for communicating with other devices or other communication networks. The other communication network may be an Ethernet, a radio access network (RAN), a wireless local area network (WLAN), etc. The communication interface 23 may be a module, a circuit, a transceiver, or any device capable of enabling communication.
And an input-output interface 24 for enabling human-machine interaction between the user and the electronic device. Such as enabling action interactions or information interactions between a user and an electronic device.
The input/output interface 24 may be a mouse, a keyboard, a display screen, or a touch-sensitive display screen, for example. The action interaction or information interaction between the user and the electronic equipment can be realized through a mouse, a keyboard, a display screen, a touch display screen or the like.
It should be noted that the structure shown in fig. 3 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than those shown in fig. 3, a combination of some components, or a different arrangement of components.
The following describes a method for picking up interface primitives provided in the embodiments of the present application.
Fig. 4 is a flowchart of a method for picking up an interface primitive according to an embodiment of the present application. Optionally, the method may be performed by an electronic device having the hardware structure shown in fig. 3 above. As shown in fig. 4, the method includes S301 to S302.
S301, performing off-screen rendering on the current interface to obtain off-screen rendering data corresponding to the current interface.
The off-screen rendering data comprises depth texture data and color texture data, wherein the depth texture data is used for reflecting depth values corresponding to different positions in the current interface, and the color texture data is used for reflecting color values corresponding to different positions in the current interface. The color values corresponding to the same primitive in the color texture data are the same, and the color values corresponding to different primitives are different.
As a possible implementation manner, the electronic device may first perform normal rendering of the display desktop to generate the current display interface, and display the current interface to the user through the display screen. Further, the electronic device performs off-screen rendering according to the current interface displayed on the display screen to obtain the off-screen rendering data corresponding to the current interface.
It should be noted that the specific timing of off-screen rendering is not limited in this embodiment of the present application. For example, after the current display interface is generated, the electronic device may start off-screen rendering of the current interface in response to a screen wakeup operation, or may start off-screen rendering of the current interface in response to a click operation when the user clicks a primitive on the current interface.
The current interface may generally include a plurality of primitives, where a primitive may be a two-dimensional image, a three-dimensional image, a point, or a line; the number of primitives and the styles of the primitives in the current interface are not limited in this embodiment of the present application. Different primitives may have different styles (e.g., colors, shapes, etc.). For example, the current interface may be the desktop displayed on the display screen, and a primitive may be an icon on the desktop.
In some embodiments, the electronic device may obtain vertex information, a depth value, and a mapping relationship between different primitives and different color values in each primitive in the current interface. Further, the electronic device performs off-screen rendering on the current interface based on the vertex information and the depth value of each primitive to obtain depth texture data, and performs off-screen rendering on the current interface based on the vertex information of each primitive and the mapping relation between different primitives and different color values to obtain color texture data.
Optionally, the electronic device generates, in the CPU, a unique ID for each primitive to be drawn, converts the ID value into a color value in the RGBA color space according to a conversion rule, and generates and stores a mapping relationship between the primitive ID and the color value, where the rule for converting a primitive ID into a color value is as follows:
hex=Math.floor(id);
r=(hex>>16&255)/255;
g=(hex>>8&255)/255;
b=(hex&255)/255;
r represents the color of the red channel, g represents the color of the green channel, b represents the color of the blue channel, >> represents the bitwise right-shift operator, and & represents the bitwise AND operator.
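The conversion rule above can be sketched as a round trip: the forward direction mirrors the rule as written, and the inverse direction (re-scaling each channel and repacking the bits) is an assumed counterpart that would be used when a picked color must be mapped back to a primitive ID. Function names are illustrative, not from the source.

```javascript
// Forward conversion, mirroring the rule above: pack a primitive ID into
// normalized RGB channels (each divided by 255, matching the rule).
function idToColor(id) {
  const hex = Math.floor(id);
  return {
    r: (hex >> 16 & 255) / 255,
    g: (hex >> 8 & 255) / 255,
    b: (hex & 255) / 255,
  };
}

// Assumed inverse: undo the /255 normalization, then repack the three
// 8-bit channels into a 24-bit ID.
function colorToId(color) {
  const r = Math.round(color.r * 255);
  const g = Math.round(color.g * 255);
  const b = Math.round(color.b * 255);
  return (r << 16) | (g << 8) | b;
}
```

Because three 8-bit channels are used, this scheme can distinguish at most 2^24 primitive IDs; the alpha channel is left unused by the rule as given.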
After the primitive ID is converted into a color value, the electronic device may transmit the primitive-related data and the color value corresponding to the primitive ID to the GPU, and perform off-screen rendering (also referred to as off-screen drawing) in the GPU.
In some embodiments, a graphics processing unit (GPU) is included in the electronic device, with a vertex shader and a fragment shader deployed in the GPU. The vertex shader is a set of instruction code executed when vertices are rendered. The fragment shader is another set of instruction code used to calculate the final color of each pixel in the 3D scene. The fragment shader typically uses texture, illumination, and other data to calculate the color of a pixel, and can be used to achieve various effects such as shading, reflection, and antialiasing.
In practical application, the electronic device may implement normal rendering and off-screen rendering of the display desktop through the vertex shader and the fragment shader in the GPU.
Illustratively, as shown in fig. 5, a process of normal rendering (i.e., on-screen rendering) is performed within the GPU. To obtain the current interface, the electronic device normally draws the to-be-displayed primitives in the current interface. Specifically, the electronic device transmits data such as object positions (e.g., vertices) and transformation information into the vertex shader, transforms each vertex to determine the rendering position of the object on the screen, calculates each fragment color in the fragment shader according to the received information such as object color and illumination, and finally generates an image that is output and drawn to the screen.
Optionally, when the electronic device performs off-screen rendering through the GPU, the GPU may open up a new buffer area outside the current screen buffer area to perform the off-screen rendering operation.
In some embodiments, the electronic device may also create a color texture buffer and a depth texture buffer within the GPU for storing the results of the off-screen rendering. For example, the electronic device may store the color texture data obtained by off-screen rendering in the color texture buffer, and store the depth texture data obtained by off-screen rendering in the depth texture buffer, so as to facilitate classification and subsequent use of the different data.
Illustratively, as shown in fig. 6, off-screen rendering (i.e., off-screen drawing) is performed within the GPU. When the mouse clicks the screen, the electronic device performs off-screen rendering. When performing off-screen rendering, the electronic device uses the same vertex shader and data as in normal rendering to transform each vertex, so that the drawn primitives correspond one-to-one with the pixels in normal drawing. Optionally, the fragment shader used for off-screen drawing can directly take the color corresponding to the received primitive ID as the output color without complex color calculation, thereby ensuring that the color values corresponding to the same primitive are the same and the color values corresponding to different primitives are different; the output color does not need to be consistent with the actual displayed color of the primitive in the current interface. Through this process, the electronic device finally generates an image corresponding to the current interface, draws it into the created color texture buffer and depth texture buffer, stores the color value of each pixel of the image in the color texture buffer, and stores the depth value of each pixel of the image in the depth texture buffer.
S302, responding to the primitive selection operation in the current interface, determining a target color value corresponding to the clicking position from the color texture data according to the clicking position corresponding to the primitive selection operation in the current interface, and determining a target primitive corresponding to the target color value.
The primitive selection operation may be generated by a mouse or by a keyboard, and the generation mode of the primitive selection operation in the embodiment of the present application is not limited. For example, the primitive selection operation may be generated by clicking a primitive in the current interface by the user through the mouse, and accordingly, the clicking position corresponding to the primitive selection operation in the current interface is the clicking position of the user mouse on the current interface.
As a possible implementation manner, after obtaining the color texture data corresponding to the current interface through off-screen rendering, the electronic device may determine, in a computing shader of the graphics processor GPU, the target color value corresponding to the click position from the color texture data according to the click position corresponding to the primitive selection operation in the current interface. Further, the CPU of the electronic device obtains the target color value from the GPU, reads the mapping relationships between the plurality of primitive identifiers and the plurality of color values, and determines the target primitive corresponding to the target color value.
In some embodiments, the electronic device determines the target color value corresponding to the click position by using the memory and computing resources of the GPU, and feeds back the target color value to the CPU. Correspondingly, after receiving the target color value fed back by the GPU, the CPU directly reads the mapping relationships between the plurality of primitive identifiers and the plurality of color values, so that the target primitive corresponding to the target color value can be determined, and the target primitive is taken as the primitive picked up by the user.
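The CPU-side bookkeeping described above can be sketched as follows. This is a hypothetical illustration, not the source's implementation: the mapping between primitive IDs and color values is kept in a Map keyed by the packed 24-bit color, so the normalized color value fed back by the GPU resolves to the picked primitive in a single lookup. All names here are assumptions.

```javascript
// Hypothetical CPU-side registry: maps packed 24-bit colors to primitives,
// so a color read back from the off-screen render identifies the pick.
class PickRegistry {
  constructor() {
    this.colorToPrimitive = new Map(); // packed 24-bit color -> primitive
  }

  register(primitive) {
    // Pack the primitive ID into a 24-bit key (same rule as the ID-to-color
    // conversion used for off-screen drawing).
    const key = Math.floor(primitive.id) & 0xffffff;
    this.colorToPrimitive.set(key, primitive);
  }

  // r, g, b are the normalized [0, 1] channel values fed back by the GPU.
  lookup(r, g, b) {
    const key =
      (Math.round(r * 255) << 16) | (Math.round(g * 255) << 8) | Math.round(b * 255);
    return this.colorToPrimitive.get(key); // undefined -> background, no primitive hit
  }
}
```

A miss (e.g., a click on empty background) simply returns `undefined`, which the CPU can treat as "no primitive picked".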
It can be understood that compared with the mode of directly reading the off-screen rendering result at the CPU end and then calculating the pick-up information according to the mouse position in the related art, the implementation method fully utilizes GPU resources, saves CPU memory and shortens data transmission and CPU processing time.
In one design, in order to determine a specific position of a mouse click on a primitive, as shown in fig. 7, the method for picking up an interface primitive provided in the embodiment of the present application may further include, after S302 above:
S401, determining a target depth value corresponding to the click position from the depth texture data according to the click position, and determining world coordinates corresponding to the click position according to the target depth value, the click position and a camera coordinate system of the current interface.
As a possible implementation manner, the electronic device determines, according to the click position, a target depth value corresponding to the click position from the depth texture data, and further calculates, in a calculation shader of the graphics processor GPU, world coordinates corresponding to the click position according to the target depth value, the click position, and a camera coordinate system of the current interface.
In some embodiments, calculating, in a computing shader of the GPU, the world coordinates corresponding to the click position may include: determining pixel point coordinates corresponding to the click position according to the interface pixels of the current interface and the pixel points covered by the click position, and converting the pixel point coordinates into two-dimensional normalized device coordinates (NDC). Further, the electronic device adds the target depth value as a new dimension value to the two-dimensional NDC to obtain a three-dimensional NDC, and transforms the three-dimensional NDC according to the camera view projection matrix information corresponding to the camera coordinate system to obtain the world coordinates corresponding to the click position.
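The final transform step above can be sketched as follows — a minimal illustration, assuming the inverse of the camera view-projection matrix has already been computed and is supplied as a row-major 4×4 array (the matrix layout and function names are assumptions, not from the source): lift the 2D NDC plus depth into a 3D NDC point, apply the inverse view-projection matrix, and divide by w to recover world coordinates.

```javascript
// Multiply a 3D point (treated as homogeneous [x, y, z, 1]) by a row-major
// 4x4 matrix; returns the homogeneous result [x, y, z, w].
function transformPoint(m, p) {
  const [x, y, z] = p;
  const out = [];
  for (let row = 0; row < 4; row++) {
    out.push(m[4 * row] * x + m[4 * row + 1] * y + m[4 * row + 2] * z + m[4 * row + 3]);
  }
  return out;
}

// Transform a 3D NDC point into world coordinates using the inverse of the
// camera view-projection matrix, finishing with the perspective divide.
function ndcToWorld(invViewProj, ndcX, ndcY, ndcDepth) {
  const [x, y, z, w] = transformPoint(invViewProj, [ndcX, ndcY, ndcDepth]);
  return [x / w, y / w, z / w];
}
```

With an identity matrix the point passes through unchanged, which is a quick sanity check that the homogeneous divide is wired correctly.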
Fig. 8 is a flowchart of performing the pickup information computation within the GPU computing shader. After the off-screen rendering is finished, the electronic device may pass the color texture buffer and the depth texture buffer output by the off-screen rendering into a computing shader of the GPU, and receive the mouse click position, the screen resolution data, and the camera view projection matrix data in the computing shader. A depth texture buffer, a color texture buffer, and a storage buffer are created in the GPU. The depth texture buffer is used for storing the depth texture data corresponding to the current interface obtained by off-screen rendering, the color texture buffer is used for storing the color texture data corresponding to the current interface obtained by off-screen rendering, and the storage buffer is used for storing the pickup calculation results of the GPU. The electronic device may calculate the texture coordinates of the mouse click on the screen texture object based on the proportion of the mouse click position within the screen pixels and the texture size. Further, the electronic device may acquire the color value data of the color texture at the corresponding texture coordinates and record it in the storage buffer. Similarly, the electronic device may also acquire the depth value data of the depth texture at the corresponding texture coordinates, and generate three-dimensional NDC coordinates according to the mouse click position and the depth value data. The electronic device may convert the three-dimensional NDC coordinates into world coordinates according to the camera view projection matrix information, and record the obtained world coordinates in the storage buffer.
Optionally, the electronic device may calculate the size of the color texture buffer, and calculate the texel coordinates of the mouse on the screen texture object according to the proportion of the mouse position within the screen pixels, where the corresponding instructions are as follows:
// Get the color texture buffer size
let texSize = textureDimensions(visibleMap).xy;
// Calculate the proportion of the mouse position within the screen pixels
let screenPoint = vec2<f32>(screenInfo.x / screenInfo.w, screenInfo.y / screenInfo.h);
// Calculate the texel coordinates of the mouse on the screen texture object
let mouseUV = screenPoint * vec2<f32>(texSize.xy);
Further, the electronic device obtains the color value of the color texture at the corresponding texel coordinates and records it in the storage buffer, and obtains the depth value of the depth texture at the corresponding texel coordinates. The electronic device then generates the three-dimensional NDC coordinates, converts them into world coordinates in combination with the camera view projection matrix, and records the world coordinates in the storage buffer, completing the acquisition and storage of all the pickup information.
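The texel-coordinate computation shown in the WGSL snippet above can be sketched in JavaScript as follows — an assumed counterpart for illustration (function and parameter names are not from the source): the mouse position is first expressed as a proportion of the screen size, then scaled by the texture size.

```javascript
// Assumed JavaScript counterpart of the WGSL texel-coordinate computation:
// mouse position -> proportion within the screen -> texel coordinates.
function mouseToTexel(mouseX, mouseY, screenW, screenH, texW, texH) {
  const u = mouseX / screenW; // proportion across the screen width
  const v = mouseY / screenH; // proportion across the screen height
  return { x: u * texW, y: v * texH };
}
```

When the off-screen texture has the same size as the screen, the texel coordinates equal the pixel coordinates; the scaling only matters when the two sizes differ.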
In the process of generating the three-dimensional NDC coordinates, since the depth value obtained from the depth texture at the corresponding texel coordinates is already an NDC-space coordinate, only the mouse click position needs to be converted into two-dimensional NDC coordinates. The specific conversion is as follows:
ndc_x = (2 * p_x) / width - 1
ndc_y = 1 - (2 * p_y) / height
where p_x and p_y are the click position of the mouse in the screen coordinate system, width and height are the width and height of the screen, and ndc_x and ndc_y are the converted mouse position in NDC.
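The two formulas above can be transcribed directly (the function name is illustrative): x and y each land in [-1, 1], with the y axis flipped because screen y grows downward while NDC y grows upward.

```javascript
// Direct transcription of the conversion formulas above: screen (pixel)
// coordinates -> two-dimensional NDC, with the y axis flipped.
function screenToNdc(px, py, width, height) {
  return {
    x: (2 * px) / width - 1,
    y: 1 - (2 * py) / height,
  };
}
```

The screen center maps to (0, 0), the top-left corner to (-1, 1), and the bottom-right corner to (1, -1).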
In other embodiments, the electronic device may also create a storage buffer in the video memory of the GPU. After obtaining the world coordinates corresponding to the click position and the target color value corresponding to the click position, the electronic device stores the world coordinates and the target color value into the storage buffer, and further sends the data in the created storage buffer to the CPU.
As shown in fig. 9, the pickup method of the present application is performed in the CPU and GPU of the electronic device. The CPU of the electronic device may convert different primitive IDs into different color values and maintain the mapping relationships between the primitive IDs and the color values. The CPU may transmit the data for normal drawing to the GPU, so that the GPU normally draws and outputs the display desktop to the display screen. After acquiring an event in which the user clicks the screen with the mouse, the CPU may transmit the mouse click position, the screen resolution, the camera view projection matrix data, and the like to the GPU, so as to instruct the GPU to perform the pickup information calculation. Accordingly, the GPU may create a color texture buffer and a depth texture buffer. After the electronic device performs off-screen drawing through the GPU, the obtained color texture data is stored in the color texture buffer, and the obtained depth texture data is stored in the depth texture buffer. The electronic device may also create a storage buffer in the GPU to store the calculated pickup information. Specifically, the electronic device calculates the picked-up color value and world coordinates in a computing shader of the GPU, stores the calculated color value and world coordinates in the storage buffer, and feeds back the data in the storage buffer to the CPU. The CPU can determine the target primitive from the mapping relationship comprising the primitive IDs and color values according to the color value calculated by the GPU, and feed back the target primitive and the world coordinates to the user.
It can be understood that, in the embodiment of the application, when picking up and performing off-screen rendering, the depth information is saved, and the three-dimensional world coordinates of the pick-up point are calculated in the GPU calculation shader according to the depth information and the mouse position, so that compared with the existing mode of only picking up an object, more pick-up information is provided, and more pick-up scenes are supported.
S402, determining the target primitive as a picked primitive, and determining world coordinates as position information corresponding to the primitive selecting operation on the picked primitive, so as to obtain a picking result.
As one possible implementation manner, the electronic device determines the target primitive as the picked primitive, and determines the world coordinates as the position information corresponding to the primitive selection operation on the picked primitive, so as to obtain the pick result. The electronic device may then feed back the picked primitive and the world coordinates to the user.
After the off-screen rendering is carried out on the current interface, the obtained off-screen rendering data not only comprises color texture data, but also comprises depth texture data. And when the user performs clicking operation on the current interface, the method and the device can determine a target depth value corresponding to the clicking position from the depth texture data according to the clicking position, combine the clicking position and the target depth value with a camera coordinate system of the current interface to obtain world coordinates corresponding to the clicking position, determine a target color value corresponding to the clicking position from the color texture data according to the clicking position, and determine a target primitive corresponding to the target color value. In this way, the method and the device can determine not only the pick-up primitive (i.e. the target primitive) corresponding to the clicking operation, but also the position information (i.e. the world coordinates) on the primitive, and further provide accurate pick-up response for the user aiming at the specific position on the pick-up primitive.
The foregoing description of the solution provided in the embodiments of the present application has been mainly presented in terms of a method. To achieve the above functions, it includes corresponding hardware structures and/or software modules that perform the respective functions. Those of skill in the art will readily appreciate that the elements and algorithm steps of the various examples described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer-software-driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of the present application.
In an exemplary embodiment, the present application further provides a pickup apparatus. Fig. 10 is a schematic diagram of the composition of a pickup device according to an embodiment of the present application. As shown in fig. 10, the pickup apparatus includes: a processing unit 501 and a determination unit 502.
The processing unit 501 is configured to perform off-screen rendering on the current interface to obtain off-screen rendering data corresponding to the current interface; the off-screen rendering data comprises depth texture data and color texture data, wherein the depth texture data is used for reflecting depth values corresponding to different positions in the current interface, and the color texture data is used for reflecting color values corresponding to different positions in the current interface; wherein, the color values corresponding to the same primitive in the color texture data are the same, and the color values corresponding to different primitives are different; a determining unit 502, configured to determine, in response to a primitive selection operation in the current interface, a target color value corresponding to the click position from the color texture data according to the click position corresponding to the primitive selection operation in the current interface, and determine a target primitive corresponding to the target color value; the determining unit 502 is further configured to determine, according to the click position, a target depth value corresponding to the click position from the depth texture data, and determine world coordinates corresponding to the click position according to the target depth value, the click position, and a camera coordinate system of the current interface; the determining unit 502 is further configured to determine the target primitive as a picked-up primitive, and determine the world coordinate as position information corresponding to the primitive selection operation on the picked-up primitive, so as to obtain a pick-up result.
In a possible implementation manner, the processing unit 501 is specifically configured to: obtaining vertex information, depth values and mapping relations between different primitives and different color values of each primitive in a current interface; performing off-screen rendering on the current interface based on vertex information and depth values of each primitive to obtain depth texture data; and performing off-screen rendering on the current interface based on vertex information of each primitive and mapping relations between different primitives and different color values to obtain color texture data.
In a possible implementation manner, the determining unit 502 is specifically configured to: determining pixel point coordinates corresponding to the clicking position according to interface pixels of the current interface and pixel points covered by the clicking position, and converting the pixel point coordinates into two-dimensional standardized device coordinates NDC; taking the target depth value as a new dimension value, and adding the new dimension value into the two-dimensional NDC to obtain a three-dimensional NDC; and transforming the three-dimensional NDC according to the camera view projection matrix information corresponding to the camera coordinate system to obtain world coordinates corresponding to the clicking position.
In a possible implementation manner, the determining unit 502 is specifically configured to: and in a computing shader of the GPU, according to the target depth value, the click position and a camera coordinate system of the current interface, calculating to obtain world coordinates corresponding to the click position.
In a possible implementation manner, the determining unit 502 is specifically configured to: in a computing shader of the graphic processor GPU, determining a target color value corresponding to the clicking position from the color texture data according to the clicking position corresponding to the primitive selection operation in the current interface.
In a possible implementation manner, after the graphics processor GPU determines the target color value corresponding to the click position, the determining unit 502 is specifically configured to: obtain the target color value from the GPU through the CPU, and determine the target primitive corresponding to the target color value by reading, through the CPU, the mapping relationships between the plurality of primitive identifiers and the plurality of color values.
In a possible implementation, the processing unit 501 is further configured to: creating a storage buffer area in a video memory of the GPU; after world coordinates corresponding to the clicking positions and target color values corresponding to the clicking positions are obtained, storing the world coordinates corresponding to the clicking positions and the target color values corresponding to the clicking positions into a storage buffer area; the data in the created memory buffer is sent to the CPU.
It should be noted that the division of the modules in fig. 10 is illustrative, and is merely a logic function division, and other division manners may be implemented in practice. For example, two or more functions may also be integrated in one processing module. The integrated modules may be implemented in hardware or in software functional units.
In an exemplary embodiment, a computer readable storage medium is also provided, comprising software instructions which, when run on an electronic device, cause the electronic device to perform any of the methods provided by the above embodiments.
In an exemplary embodiment, the present application also provides a computer program product comprising computer-executable instructions, which, when run on an electronic device, cause the electronic device to perform any of the methods provided by the above embodiments.
In the above embodiments, the solution may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented using a software program, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer-executable instructions. When the computer-executable instructions are loaded and executed on a computer, the processes or functions in accordance with embodiments of the present application are fully or partially produced. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer-executable instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave, etc.) means. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a DVD), a solid state disk (SSD), etc.
Although the present application has been described herein in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed application, from a review of the figures, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
Although the present application has been described in connection with specific features and embodiments thereof, it will be apparent that various modifications and combinations can be made without departing from the spirit and scope of the application. Accordingly, the specification and drawings are merely exemplary illustrations of the present application as defined in the appended claims and are considered to cover any and all modifications, variations, combinations, or equivalents that fall within the scope of the present application. It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the spirit or scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims and the equivalents thereof, the present application is intended to cover such modifications and variations.
The foregoing is merely a specific embodiment of the present application, but the protection scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered in the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method for picking up an interface primitive, the method comprising:
performing off-screen rendering on the current interface to obtain off-screen rendering data corresponding to the current interface; the off-screen rendering data comprises depth texture data and color texture data, wherein the depth texture data is used for reflecting depth values corresponding to different positions in the current interface, and the color texture data is used for reflecting color values corresponding to different positions in the current interface; wherein, the color values corresponding to the same primitive in the color texture data are the same, and the color values corresponding to different primitives are different;
responding to a primitive selection operation in the current interface, determining a target color value corresponding to a click position from the color texture data according to the click position corresponding to the primitive selection operation in the current interface, and determining a target primitive corresponding to the target color value;
determining a target depth value corresponding to the click position from the depth texture data according to the click position, and determining world coordinates corresponding to the click position according to the target depth value, the click position and a camera coordinate system of the current interface;
and determining the target primitive as a picked primitive, and determining the world coordinates as position information corresponding to the primitive selection operation on the picked primitive to obtain a picking result.
2. The method of claim 1, wherein the off-screen rendering of the current interface comprises:
obtaining vertex information, depth values and mapping relations between different primitives and different color values of each primitive in the current interface;
performing off-screen rendering on the current interface based on vertex information and depth values of each primitive to obtain the depth texture data;
and performing off-screen rendering on the current interface based on vertex information of each primitive and mapping relations between different primitives and different color values to obtain the color texture data.
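One common way to guarantee the "same primitive, same color; different primitives, different colors" property required by claim 2 is to derive the flat color directly from the primitive identifier. A minimal sketch follows; the 24-bit RGB packing is an assumption for illustration, since the patent does not specify an encoding.

```python
def id_to_color(pid: int) -> tuple:
    """Pack a primitive identifier into a 24-bit RGB color (one unique color per primitive)."""
    return ((pid >> 16) & 0xFF, (pid >> 8) & 0xFF, pid & 0xFF)

def color_to_id(rgb: tuple) -> int:
    """Recover the primitive identifier from a picked color value."""
    r, g, b = rgb
    return (r << 16) | (g << 8) | b
```

Because the mapping is a bijection over 2^24 identifiers, the color value read back at the click position decodes unambiguously to the target primitive.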
3. The method of claim 1, wherein the determining the world coordinates corresponding to the click position according to the target depth value, the click position, and a camera coordinate system of the current interface comprises:
determining pixel point coordinates corresponding to the click position according to the interface pixels of the current interface and the pixel points covered by the click position, and converting the pixel point coordinates into two-dimensional normalized device coordinates (NDC);
adding the target depth value to the two-dimensional NDC as a new dimension to obtain a three-dimensional NDC;
and transforming the three-dimensional NDC according to camera view-projection matrix information corresponding to the camera coordinate system to obtain the world coordinates corresponding to the click position.
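The three steps of claim 3 can be sketched as a single unprojection function. The conventions here are assumptions (OpenGL-style NDC with a depth texture value in [0, 1] and a top-left pixel origin); the function name and signature are illustrative, not taken from the patent.

```python
import numpy as np

def unproject(px, py, depth, width, height, inv_view_proj):
    """Convert a pixel click plus a depth-texture sample into world coordinates.

    px, py        -- pixel coordinates of the click
    depth         -- value sampled from the depth texture (assumed in [0, 1])
    inv_view_proj -- inverse of the camera view-projection matrix
    """
    # Step 1: pixel -> two-dimensional NDC in [-1, 1] (y flipped: pixel origin is top-left).
    ndc_x = 2.0 * (px + 0.5) / width - 1.0
    ndc_y = 1.0 - 2.0 * (py + 0.5) / height
    # Step 2: add the depth value as the third NDC dimension.
    ndc_z = 2.0 * depth - 1.0      # OpenGL-style [0,1] -> [-1,1]; D3D-style depth would skip this
    # Step 3: transform by the inverse view-projection matrix, then perspective-divide.
    clip = np.array([ndc_x, ndc_y, ndc_z, 1.0])
    world = inv_view_proj @ clip
    return world[:3] / world[3]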
4. The method of claim 1, wherein the determining world coordinates corresponding to the click position according to the target depth value, the click position, and a camera coordinate system of the current interface comprises:
and in a computing shader of the GPU, according to the target depth value, the click position and a camera coordinate system of the current interface, calculating to obtain world coordinates corresponding to the click position.
5. The method according to claim 1, wherein determining the target color value corresponding to the click position from the color texture data according to the click position corresponding to the primitive selection operation in the current interface comprises:
And in a computing shader of the GPU, determining a target color value corresponding to the clicking position from the color texture data according to the clicking position corresponding to the primitive selection operation in the current interface.
6. The method of claim 5, wherein after determining, by a graphics processor GPU, a target color value corresponding to the click location, the determining a target primitive corresponding to the target color value comprises:
and acquiring a target color value in the GPU through a CPU, and reading mapping relations between the plurality of primitive identifications and the plurality of color values through the CPU to determine a target primitive corresponding to the target color value.
7. The method of claim 6, wherein the method further comprises:
creating a storage buffer area in a video memory of the GPU;
after world coordinates corresponding to the clicking positions and target color values corresponding to the clicking positions are obtained, storing the world coordinates corresponding to the clicking positions and the target color values corresponding to the clicking positions into the storage buffer;
and sending the data in the created storage buffer to the CPU.
8. A pick-up device for interface primitives, characterized in that the device comprises a processing unit and a determining unit;
the processing unit is used for performing off-screen rendering on the current interface to obtain off-screen rendering data corresponding to the current interface; the off-screen rendering data comprises depth texture data and color texture data, wherein the depth texture data is used for reflecting depth values corresponding to different positions in the current interface, and the color texture data is used for reflecting color values corresponding to different positions in the current interface; wherein, the color values corresponding to the same primitive in the color texture data are the same, and the color values corresponding to different primitives are different;
the determining unit is used for responding to the primitive selection operation in the current interface, determining a target color value corresponding to the clicking position from the color texture data according to the clicking position corresponding to the primitive selection operation in the current interface, and determining a target primitive corresponding to the target color value;
the determining unit is further configured to determine, according to the click position, a target depth value corresponding to the click position from the depth texture data, and determine world coordinates corresponding to the click position according to the target depth value, the click position, and a camera coordinate system of the current interface;
The determining unit is further configured to determine the target primitive as a picked-up primitive, and determine the world coordinate as position information corresponding to the primitive selection operation on the picked-up primitive, so as to obtain a pick-up result.
9. An electronic device, comprising: a processor and a memory;
the memory stores instructions executable by the processor;
the processor is configured to, when executing the instructions, cause the electronic device to implement the method of any one of claims 1-7.
10. A computer-readable storage medium, the readable storage medium comprising: a software instruction;
when the software instructions are run in an electronic device, the electronic device is caused to implement the method of any one of claims 1-7.
CN202311746726.5A 2023-12-18 2023-12-18 Method, device, equipment and storage medium for picking up interface graphic elements Pending CN117788609A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311746726.5A CN117788609A (en) 2023-12-18 2023-12-18 Method, device, equipment and storage medium for picking up interface graphic elements

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311746726.5A CN117788609A (en) 2023-12-18 2023-12-18 Method, device, equipment and storage medium for picking up interface graphic elements

Publications (1)

Publication Number Publication Date
CN117788609A true CN117788609A (en) 2024-03-29

Family

ID=90379235

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311746726.5A Pending CN117788609A (en) 2023-12-18 2023-12-18 Method, device, equipment and storage medium for picking up interface graphic elements

Country Status (1)

Country Link
CN (1) CN117788609A (en)

Similar Documents

Publication Publication Date Title
KR101286318B1 (en) Displaying a visual representation of performance metrics for rendered graphics elements
CN113012269A (en) Three-dimensional image data rendering method and equipment based on GPU
CN107978018B (en) Method and device for constructing three-dimensional graph model, electronic equipment and storage medium
US11120611B2 (en) Using bounding volume representations for raytracing dynamic units within a virtual space
CN111638497A (en) Radar data processing method, device, equipment and storage medium
WO2022121653A1 (en) Transparency determination method and apparatus, electronic device, and storage medium
CN108290071B (en) Media, apparatus, system, and method for determining resource allocation for performing rendering with prediction of player&#39;s intention
CN114004972A (en) Image semantic segmentation method, device, equipment and storage medium
CN107481307B (en) Method for rapidly rendering three-dimensional scene
CN115512046B (en) Panorama display method and device for points outside model, equipment and medium
CN117788609A (en) Method, device, equipment and storage medium for picking up interface graphic elements
CN113436317B (en) Image processing method and device, electronic equipment and computer readable storage medium
US11978147B2 (en) 3D rendering
Eskandari et al. Diminished reality in architectural and environmental design: Literature review of techniques, applications, and challenges
US20230206567A1 (en) Geometry-aware augmented reality effects with real-time depth map
CN114020390A (en) BIM model display method and device, computer equipment and storage medium
CN115705668A (en) View drawing method and device and storage medium
CN113643320A (en) Image processing method and device, electronic equipment and computer readable storage medium
CN112634439A (en) 3D information display method and device
Boutsi et al. Α pattern-based augmented reality application for the dissemination of cultural heritage
CN112465692A (en) Image processing method, device, equipment and storage medium
US20230008224A1 (en) Visualization of complex data
CN111385489B (en) Method, device and equipment for manufacturing short video cover and storage medium
US20230119741A1 (en) Picture annotation method, apparatus, electronic device, and storage medium
WO2022121654A1 (en) Transparency determination method and apparatus, and electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication