CN109146766B - Object selection method and device

Object selection method and device

Info

Publication number
CN109146766B
Authority
CN
China
Prior art keywords
value
image
encoding
values
color value
Prior art date
Legal status
Active
Application number
CN201811151411.5A
Other languages
Chinese (zh)
Other versions
CN109146766A (en)
Inventor
王海波
刘向辉
Current Assignee
Cognitive Computing Nanjing Information Technology Co ltd
Original Assignee
Cognitive Computing Nanjing Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Cognitive Computing Nanjing Information Technology Co ltd filed Critical Cognitive Computing Nanjing Information Technology Co ltd
Priority to CN201811151411.5A
Publication of CN109146766A
Application granted
Publication of CN109146766B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00: General purpose image data processing
    • G06T1/20: Processor architectures; Processor configuration, e.g. pipelining

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses an object selection method and apparatus. The object selection method comprises: acquiring the coordinate values of a selected object on a screen, wherein the image displayed on the screen comprises at least one object; obtaining, according to the coordinate values, the color value of the point at that position from a frame buffer of the display card, wherein the frame buffer stores a mapping relation between the position of each object in the image and its corresponding color value, and the color value is encoded from the encoding value of the object according to a preset rule; performing inverse encoding on the obtained color value according to the preset rule to recover the encoding value of the object; and obtaining, from a storage structure, the object identification information having a mapping relation with the encoding value, completing the object selection operation. Compared with prior-art schemes that select objects by collision testing, the method greatly saves memory space and time and places little load on the CPU.

Description

Object selection method and device
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an object selection method and apparatus.
Background
WebGL (Web Graphics Library) is an efficient web graphics rendering technology widely used in visualization and games. At present, in WebGL-based applications, selecting an object in an image is generally done on the CPU side: the coordinates of the mouse are obtained first, and a collision test is then performed between the mouse coordinates and the coordinates of every object in the image, comparing them one by one; the object whose coordinates match is the object selected by the mouse. This method is simple and easy to implement, and works well when the amount of data is small. When the amount of data is large, however, CPU occupation becomes excessive, CPU performance is consumed, and the frame rate drops sharply, degrading the user experience.
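For illustration only, a minimal TypeScript sketch of the kind of CPU-side collision test described above, assuming axis-aligned screen-space bounding boxes; the SceneObject shape and the hitTest name are illustrative, not taken from the patent:

```typescript
// Hypothetical CPU-side hit test: compare the mouse position against
// every object's bounds in turn. This is O(n) per click, which is why
// large scenes make it expensive.
interface SceneObject {
  id: string;
  x: number; y: number;          // top-left corner in screen space
  width: number; height: number; // axis-aligned bounding box
}

function hitTest(objects: SceneObject[], mouseX: number, mouseY: number): SceneObject | null {
  for (const obj of objects) {
    if (mouseX >= obj.x && mouseX <= obj.x + obj.width &&
        mouseY >= obj.y && mouseY <= obj.y + obj.height) {
      return obj; // first object whose bounds contain the mouse
    }
  }
  return null; // nothing under the cursor
}
```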
Disclosure of Invention
The invention aims to provide an object selection method and apparatus that effectively solve the prior-art problem that the object selection process occupies too much CPU time, causing the frame rate to drop sharply and degrading the user experience.
The technical scheme provided by the invention is as follows:
an object selection method, comprising:
acquiring the coordinate values of a selected object on a screen, wherein the image displayed on the screen comprises at least one object;
obtaining, according to the coordinate values, the color value of the point corresponding to that position from a frame buffer of the display card, wherein the frame buffer stores a mapping relation between the position of each object in the image and its corresponding color value, and the color value is encoded from the encoding value of the object according to a preset rule;
performing inverse encoding on the obtained color value according to the preset rule to obtain the encoding value of the object; and
obtaining, from a storage structure, the object identification information having a mapping relation with the encoding value, completing the object selection operation.
Further preferably, before the step of acquiring the coordinate values of the selected object on the screen, the method comprises a step of encoding the objects, which includes:
detecting the objects included in the image;
sequentially encoding the objects in the image, wherein each encoding value uniquely corresponds to one object; and
storing the mapping relation between the unique identification information of each object and its encoding value in a storage structure.
Further preferably, after storing the mapping relation between the unique identification information of the object and the encoding value in the storage structure, the method comprises a step of rendering the objects, which includes:
sequentially encoding the encoding values of the objects according to a preset rule to obtain the corresponding color values;
drawing a first image according to the color values of the objects obtained by encoding, rendering it into a frame buffer of the display card, and storing the position information of each point in the first image together with its corresponding color value; and
drawing a second image into the screen using the real color values of the objects, and establishing a position mapping relation between each point of the second image and the first image in the frame buffer.
Further preferably, after the step of sequentially encoding the encoding values of the objects according to the preset rule to obtain the corresponding color values, the method further includes:
marking each encoded object with a designated identifier that indicates an object exists at that position;
and the step of drawing the first image according to the color values of the objects obtained by encoding specifically comprises: drawing the first image according to both the color values of the objects obtained by encoding and the designated identifier.
Further preferably, after obtaining the color value of the point corresponding to the position from the frame buffer of the display card according to the coordinate values, the method further includes:
reading the designated identifier corresponding to the position and judging whether an object exists at the position; if so, proceeding to the step of performing inverse encoding on the obtained color value according to the preset rule to obtain the encoding value of the object.
The invention also provides an object selection apparatus, comprising:
a position information acquisition module, configured to acquire the coordinate values of a selected object on a screen, wherein the image displayed on the screen comprises at least one object;
a color value acquisition module, configured to obtain, according to the coordinate values, the color value of the point corresponding to that position from a frame buffer of the display card, wherein the frame buffer stores a mapping relation between the position of each object in the image and its corresponding color value, and the color value is encoded from the encoding value of the object according to a preset rule;
an encoding module, configured to perform inverse encoding on the obtained color value according to the preset rule to obtain the encoding value of the object; and
an object identification acquisition module, configured to obtain, from a storage structure, the object identification information having a mapping relation with the encoding value, completing the object selection operation.
Further preferably, the object selecting apparatus further includes a configuration module, and the configuration module includes: an object detection unit and a first encoding unit, wherein,
an object detection unit, configured to detect the objects included in the image;
a first encoding unit, configured to sequentially encode the objects detected by the object detection unit, wherein each encoding value uniquely corresponds to one object, and to store the mapping relation between the unique identification information of each object and its encoding value in a storage structure.
Further preferably, the configuration module further includes: a second encoding unit and an image rendering unit, wherein,
the second encoding unit is configured to sequentially encode the encoding values of the objects according to a preset rule to obtain the corresponding color values;
the image rendering unit is configured to render the first image into a frame buffer of the display card according to the color values of the objects obtained by encoding, and to store the position information of each point in the first image together with its corresponding color value; and is further configured to render the second image into the screen using the real color values of the objects, and to establish a position mapping relation between each point of the second image and the first image in the frame buffer.
Further preferably, the configuration module further includes an identification unit, configured to mark each encoded object with a designated identifier indicating that an object exists at that position;
and the image rendering unit draws the first image according to both the color values of the objects obtained by encoding and the designated identifier.
Further preferably, the object selection apparatus further includes a judging module, configured to read, from the frame buffer, the designated identifier corresponding to the coordinate values acquired by the position information acquisition module, and to judge whether an object exists at the position;
if it is judged that an object exists, the color value acquisition module obtains the color value of the corresponding point from the frame buffer of the display card according to the coordinate values from the position information acquisition module.
In the object selection method and apparatus provided by the invention, the encoding values of the objects in the image are further encoded into color values and stored in the frame buffer of the display card, and meanwhile the mapping relation between each object's identification information and its encoding value is stored in a storage structure. Thus, when an object is selected on the screen, its color value is obtained from the frame buffer, the encoding value is recovered by inverse encoding, and the identification information of the object is obtained from the storage structure, thereby realizing the object selection. In addition, in the invention the GPU (Graphics Processing Unit) is chosen to execute the object selection task, so CPU execution time is not occupied and the refresh frame rate of the image does not change during task execution, effectively solving the sharp frame-rate drop that may occur in the prior art.
Drawings
The above features, advantages, and their implementation will be further described in a clearly understandable manner in the following detailed description of preferred embodiments with reference to the accompanying drawings.
FIG. 1 is a flow chart of an embodiment of an object selection method according to the present invention;
FIG. 2 is a flow chart of another embodiment of the object selection method according to the present invention;
FIG. 3 is a schematic diagram of an embodiment of an object selection apparatus according to the present invention;
FIG. 4 is a schematic diagram of another embodiment of an object selection apparatus according to the present invention;
reference numerals illustrate:
100: object selection apparatus; 110: position information acquisition module; 120: color value acquisition module; 130: encoding module; 140: object identification acquisition module; 150: judging module.
Detailed Description
In order to more clearly illustrate the embodiments of the present invention and the technical solutions in the prior art, specific embodiments of the present invention are described below with reference to the accompanying drawings. It is evident that the drawings in the following description are only examples of the invention, and that other drawings and other embodiments can be obtained from them by a person skilled in the art without inventive effort.
For simplicity, only the parts relevant to the present invention are shown schematically in the figures; they do not represent the actual structure of the product. Additionally, to simplify the drawings for ease of understanding, of the components having the same structure or function in some figures, only one is shown schematically or labeled. Herein, "a" covers not only the case of "only this one" but also the case of "more than one".
Fig. 1 is a schematic flow chart of an embodiment of the object selection method according to the present invention. As can be seen from the figure, the object selection method includes: S1, acquiring the coordinate values of a selected object on a screen, wherein the image displayed on the screen comprises at least one object; S2, obtaining, according to the coordinate values, the color value of the point corresponding to that position from a frame buffer of the display card, wherein the frame buffer stores a mapping relation between the position of each object in the image and its corresponding color value, and the color value is encoded from the encoding value of the object according to a preset rule; S3, performing inverse encoding on the obtained color value according to the preset rule to obtain the encoding value of the object; S4, obtaining, from a storage structure, the object identification information having a mapping relation with the encoding value, completing the object selection operation.
This embodiment is based on the WebGL graphics rendering technology; combined with the object selection method, it implements object selection in an image and removes the impact of this process on CPU performance when a WebGL application handles a large amount of data.
Before the object selection operation, a process of configuring the objects is performed; this process specifically includes encoding and rendering the objects, wherein,
in the process of encoding the objects: after all objects contained in the image are analyzed (each object carries business-related unique identification information), the analyzed objects are encoded sequentially, each encoding value key uniquely corresponding to one object; then the mapping relation between each object's unique identification information and its encoding value key is stored in a storage structure.
When encoding the objects, each object is assigned a unique number (the encoding value key) by digital encoding. For example, in one example the objects are encoded in decimal in the CPU, incrementing from 0 in steps of 1: if the image contains 5 objects, they are encoded with 0, 1, 2, 3 and 4 in sequence, and the mapping relation between each object's encoding value key and its identification information is then stored in a pre-created storage structure. In practical applications the objects may be encoded in other ways, as long as each encoding value key uniquely corresponds to one object and does not exceed a predetermined encoding threshold (e.g., 256×256 in one example). In addition, the pre-created storage structure is a bidirectional Map structure: in this structure, the corresponding encoding value key can be obtained from an object's identification information, and the identification information can be obtained from the object's encoding value key, as sketched below.
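A minimal sketch of such a bidirectional structure, assuming string identification information and sequential decimal keys; the BiMap name, the example identifiers, and the threshold handling are illustrative assumptions:

```typescript
// Hypothetical bidirectional map between business identification info
// and sequential encoding values (keys). Keys increment from 0 and
// must stay below a preset threshold such as 256*256.
class BiMap {
  private idToKey = new Map<string, number>();
  private keyToId = new Map<number, string>();

  add(id: string, key: number): void {
    this.idToKey.set(id, key);
    this.keyToId.set(key, id);
  }
  getKey(id: string): number | undefined { return this.idToKey.get(id); }
  getId(key: number): string | undefined { return this.keyToId.get(key); }
}

const CODING_THRESHOLD = 256 * 256; // example threshold from the text
const registry = new BiMap();
["roadA", "roadB", "building1"].forEach((id, key) => {
  if (key >= CODING_THRESHOLD) throw new Error("too many objects");
  registry.add(id, key); // objects encoded 0, 1, 2, ... in sequence
});
```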
After the objects are encoded to obtain their encoding value keys, the objects are further rendered. In one embodiment, the encoding value key is encoded with a preset rule to obtain a color value, i.e., the three RGB color components (all divisions below are integer divisions):
red_value = key / 65536
green_value = (key - red_value * 65536) / 256
blue_value = key - red_value * 65536 - green_value * 256
where red_value, green_value and blue_value correspond to the red, green and blue components respectively. Each value is then divided by 255 and sent to the fragment shader in the form [red_value/255, green_value/255, blue_value/255], so that the fragment shader renders the image into the frame buffer (FrameBuffer) of the display card according to these values. Note that the radix used to encode the objects differs from the radix assumed when encoding the encoding values: before an object's encoding value is encoded into a color, it must be converted into its base-256 representation, and the color value is then obtained by applying the above rule. For example, if the objects are encoded in decimal in the CPU, an encoding value is first converted to base 256 and then encoded to obtain the color value, as in the sketch below.
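In code, the preset rule amounts to extracting the base-256 digits of key and normalizing each digit by 255. A sketch with the integer divisions made explicit (the keyToColor name is assumed):

```typescript
// Encode a decimal key into its base-256 digits (the RGB components),
// then normalize each to [0, 1] for the fragment shader.
function keyToColor(key: number): [number, number, number] {
  const red_value = Math.floor(key / 65536);
  const green_value = Math.floor((key - red_value * 65536) / 256);
  const blue_value = key - red_value * 65536 - green_value * 256;
  return [red_value / 255, green_value / 255, blue_value / 255];
}

// e.g. key 66051 = 1*65536 + 2*256 + 3 -> [1/255, 2/255, 3/255]
console.log(keyToColor(66051));
```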
In the process of rendering the objects, to make the encoding of the encoding values more efficient, the vertex shader converts the encoding values sent by the CPU into their base-256 representation, encodes them into the corresponding color values using the preset rule, and sends these to the fragment shader, which renders the image into the FrameBuffer according to the color values. Compared with the CPU, the GPU encodes the encoding values concurrently, which greatly saves time; a possible vertex shader is sketched below.
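A possible GLSL ES vertex shader performing this encoding concurrently for every vertex, with the key passed in as a per-vertex attribute; the attribute, uniform and varying names are assumptions:

```typescript
// GLSL ES vertex shader (WebGL 1): converts the decimal key attribute
// into its base-256 digits and normalizes them, so each vertex carries
// its object's picking color to the fragment shader.
const pickingVertexShader = `
  attribute vec3 aPosition;
  attribute float aKey;        // encoding value sent by the CPU
  uniform mat4 uMvpMatrix;
  varying vec3 vPickColor;

  void main() {
    float r = floor(aKey / 65536.0);
    float g = floor((aKey - r * 65536.0) / 256.0);
    float b = aKey - r * 65536.0 - g * 256.0;
    vPickColor = vec3(r, g, b) / 255.0;
    gl_Position = uMvpMatrix * vec4(aPosition, 1.0);
  }
`;
```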
When the fragment shader renders the image to the FrameBuffer, on one hand a first image is rendered into the FrameBuffer according to the color values obtained by the encoding, and the position information of each point in the first image is stored together with its corresponding color value; on the other hand, a second image is drawn on the screen using the true color values of all objects in the image, and a position mapping relation between each point of the second image and the first image is established in the FrameBuffer. The first and second images differ only in the color values used while drawing: the first image uses the color values encoded from the encoding values, while the second uses the true color values of the image. The mapping relation established between the two images is specifically a position mapping, so that when a user selects an object at some position in the second image displayed on the screen, the corresponding position in the first image, and hence the color value at that position, can be obtained from the FrameBuffer according to the established position mapping relation.
In practical applications, the vertex shader and fragment shader that perform the above process may be the shaders that perform normal object rendering in the GPU (including a vertex shader and a fragment shader), or shaders created specifically for the object selection process. If no new shader is added, the shader for normal object rendering (the fragment shader) is given 2 outputs, outputting the first image to the FrameBuffer and the second image to the screen respectively; if shaders are created for the object selection process, the created fragment shader renders the first image into the FrameBuffer, while the shader for normal object rendering (fragment shader) outputs the second image drawn to the screen. A possible arrangement with a dedicated picking program is sketched below.
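A sketch of the dedicated-picking-program variant in plain WebGL 1; the helper names and the drawScene callback (assumed to bind the given program and issue the draw calls) are illustrative:

```typescript
// Pass 1 draws the first image (encoded picking colors) into an
// offscreen framebuffer the size of the canvas; pass 2 draws the
// second image (true colors) on screen, so the two images correspond
// point for point.
function createPickFramebuffer(gl: WebGLRenderingContext): WebGLFramebuffer {
  const fb = gl.createFramebuffer()!;
  const tex = gl.createTexture()!;
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.drawingBufferWidth,
                gl.drawingBufferHeight, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
  gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                          gl.TEXTURE_2D, tex, 0);
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  return fb;
}

function renderBothImages(gl: WebGLRenderingContext, fb: WebGLFramebuffer,
                          pickProgram: WebGLProgram, normalProgram: WebGLProgram,
                          drawScene: (p: WebGLProgram) => void): void {
  gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
  gl.clearColor(0, 0, 0, 0);     // alpha 0 marks "no object at this point"
  gl.clear(gl.COLOR_BUFFER_BIT);
  drawScene(pickProgram);        // first image: encoded colors
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  drawScene(normalProgram);      // second image: true colors, on screen
}
```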
With the object configuration process completed as described above, when object selection is performed, the user first selects an object in the image output on the screen (corresponding to the second image); after the coordinate values of the selected object (the coordinate information of the mouse) are obtained, the color value of the point corresponding to that coordinate information is obtained according to the mapping relation with the data stored in the FrameBuffer; the obtained color value is then inversely encoded according to the preset rule to obtain the encoding value of the object; finally, the object identification information having a mapping relation with the encoding value is obtained from the storage structure, completing the object selection operation. Specifically, the obtained color value is inversely encoded according to the preset rule to obtain the object's encoding value key, the inverse encoding rule being:
key = r*255*65536 + g*255*256 + b*255
where r, g and b represent the values of the red, green and blue components in the obtained color value, corresponding specifically to the values red_value/255, green_value/255 and blue_value/255 respectively; a readback sketch follows.
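A readback sketch under the assumption that gl.readPixels returns 8-bit components, which folds the *255 factors of the formula into the byte values:

```typescript
// Read back the single pixel of the first image under the mouse.
// gl.readPixels yields 8-bit components (0..255), i.e. the normalized
// values already multiplied back by 255, so the key is recovered as
// key = r*65536 + g*256 + b, matching the formula above.
function readPickPixel(gl: WebGLRenderingContext, fb: WebGLFramebuffer,
                       x: number, y: number): Uint8Array {
  const rgba = new Uint8Array(4);
  gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
  // WebGL's pixel origin is the bottom-left corner; mouse coordinates
  // are measured from the top-left, hence the y flip.
  gl.readPixels(x, gl.drawingBufferHeight - y - 1, 1, 1,
                gl.RGBA, gl.UNSIGNED_BYTE, rgba);
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  return rgba;
}

// Inverse encoding, with the *255 factors folded into the byte values:
const decodeKey = (rgba: Uint8Array): number =>
  rgba[0] * 65536 + rgba[1] * 256 + rgba[2];
```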
In another embodiment of the rendering operation for the objects, after the encoding value is encoded according to the preset rule to obtain the color value, the RGBA value sent to the fragment shader is [red_value/255, green_value/255, blue_value/255, 1]; that is, each encoded object is marked with a designated identifier (the Alpha value). When the designated identifier is 1, an object exists at that position; when it is 0, no object exists there. In this way the fragment shader draws the first image according to both the color values of the objects obtained by encoding and the designated identifier, and renders it into the FrameBuffer (the color values are stored in RGBA form), as in the fragment shader sketch below.
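The matching fragment shader is then trivial: it writes the interpolated picking color with alpha 1 as the designated identifier, while cleared background pixels keep alpha 0. A sketch:

```typescript
// GLSL ES fragment shader: writes the picking color with alpha = 1.0
// as the "object present" flag; pixels never drawn keep the clear
// color's alpha of 0.
const pickingFragmentShader = `
  precision mediump float;
  varying vec3 vPickColor;

  void main() {
    gl_FragColor = vec4(vPickColor, 1.0);
  }
`;
```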
Based on this, as shown in fig. 2, the object selection method includes: S1, acquiring the coordinate values of a selected object on a screen, wherein the image displayed on the screen comprises at least one object; S2, obtaining, according to the coordinate values, the color value of the point corresponding to that position from a frame buffer of the display card, wherein the frame buffer stores a mapping relation between the position of each object in the image and its corresponding color value, and the color value is encoded from the encoding value of the object according to a preset rule; S5, reading the designated identifier corresponding to the position and judging whether an object exists at the position, and if so, jumping to step S3; S3, performing inverse encoding on the obtained color value according to the preset rule to obtain the encoding value of the object; S4, obtaining, from a storage structure, the object identification information having a mapping relation with the encoding value, completing the object selection operation. If it is judged that no object exists, no object is output.
Specifically, after the color value rgba of the point corresponding to the position is obtained from the frame buffer of the display card according to the coordinate values, the designated identifier value is extracted from the color value rgba. If the designated identifier value is 255 (corresponding to the designated identifier 1 in the GPU), an object exists at the point corresponding to that position; if it is 0, no object exists at that position. Since this process is performed in the CPU, there is also a scale conversion between the color value rgba read back (8-bit components in the range 0 to 255) and the normalized data stored in the frame buffer.
In the present embodiment, the point selected by the user on the screen is not limited to positions whose designated identifier is 1; it may be any point in the image. During image rendering, although the RGB values of points not covered by an object are not explicitly defined, their designated identifiers are 0. Therefore, when the user selects an arbitrary point of the image on the screen and the corresponding color value rgba is obtained from the frame buffer according to the point's position, if the designated identifier read at that point is 0, it is judged that there is no object there and the user is prompted to reselect; if the designated identifier read is 255, it is judged to be an object, the obtained color value is inversely encoded according to the preset rule to obtain the encoding value, and the object identification information having a mapping relation with the encoding value is then obtained from the storage structure, completing the object selection operation, as in the click-handler sketch below.
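Putting the pieces together, a hypothetical click handler reusing the readPickPixel helper and the registry map from the sketches above:

```typescript
// On click: read rgba at the mouse position, treat the alpha byte as
// the designated identifier (255 = object present), inverse-encode the
// key, and look the identification info up in the bidirectional map.
declare const canvas: HTMLCanvasElement;
declare const gl: WebGLRenderingContext;
declare const pickFramebuffer: WebGLFramebuffer;

canvas.addEventListener("click", (ev: MouseEvent) => {
  const rect = canvas.getBoundingClientRect();
  const x = ev.clientX - rect.left;
  const y = ev.clientY - rect.top;
  const rgba = readPickPixel(gl, pickFramebuffer, x, y);
  if (rgba[3] !== 255) {
    console.log("No object at this point, please reselect.");
    return;
  }
  const key = rgba[0] * 65536 + rgba[1] * 256 + rgba[2];
  console.log("Selected object:", registry.getId(key));
});
```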
Fig. 3 schematically shows an embodiment of an object selection apparatus 100 according to the present invention. As can be seen from the figure, the object selection apparatus 100 includes: a position information acquisition module 110; a color value acquisition module 120 connected to the position information acquisition module 110; an encoding module 130 connected to the color value acquisition module 120; and an object identification acquisition module 140 connected to the encoding module 130. The position information acquisition module 110 is configured to acquire the coordinate values of a selected object on a screen, the image displayed on the screen comprising at least one object; the color value acquisition module 120 is configured to obtain, according to the coordinate values, the color value of the point corresponding to that position from a frame buffer of the display card, the frame buffer storing a mapping relation between the position of each object in the image and its corresponding color value, and the color value being encoded from the encoding value of the object according to a preset rule; the encoding module 130 is configured to perform inverse encoding on the obtained color value according to the preset rule to obtain the encoding value of the object; and the object identification acquisition module 140 is configured to obtain, from a storage structure, the object identification information having a mapping relation with the encoding value, completing the object selection operation.
Specifically, before the object selection operation is performed with the object selection apparatus 100, the objects need to be configured, and the configuration process includes encoding and rendering the objects. Specifically, the configuration module includes an object detection unit, a first encoding unit and a storage structure, wherein the object detection unit is configured to detect the objects included in the image; the first encoding unit is configured to sequentially encode the objects detected by the object detection unit, each encoding value uniquely corresponding to one object; and the storage structure is configured to store the mapping relation between the unique identification information of each object and its encoding value.
In this process, after the object detection unit analyzes all objects contained in the image (each object carrying business-related unique identification information), the first encoding unit encodes the analyzed objects sequentially, each encoding value key uniquely corresponding to one object; then the mapping relation between each object's unique identification information and its encoding value key is stored in the storage structure. When encoding the objects, each object is assigned a unique number (the encoding value key) by digital encoding, e.g., 0, 1, …, n are used in the order of the objects (n not exceeding a preset encoding threshold such as 256×256), and the mapping relation between each object's encoding value key and its identification information is then stored in a pre-created storage structure. Specifically, the pre-created storage structure is a bidirectional Map structure: in this structure, the corresponding encoding value key can be obtained from an object's identification information, and the identification information can be obtained from the object's encoding value key.
After the objects are encoded to obtain their encoding value keys, the objects are further rendered. In one embodiment, in addition to the object detection unit, the first encoding unit and the storage structure, the configuration module further includes a second encoding unit and an image rendering unit, wherein the second encoding unit is configured to sequentially encode the encoding values of the objects according to a preset rule to obtain the corresponding color values; and the image rendering unit is configured to draw a first image according to the color values of the objects obtained by the second encoding unit, render it into a frame buffer of the display card, and store the position information of each point in the first image together with its corresponding color value; and to draw a second image on the screen using the real color values of the objects and establish a position mapping relation between each point of the second image and the first image in the frame buffer.
In the second encoding unit, the encoding value key is encoded with the preset rule to obtain a color value, i.e., the three RGB color components (all divisions below are integer divisions):
red_value = key / 65536
green_value = (key - red_value * 65536) / 256
blue_value = key - red_value * 65536 - green_value * 256
where red_value, green_value and blue_value correspond to the red, green and blue components respectively. Each value is then divided by 255 and sent to the image rendering unit in the form [red_value/255, green_value/255, blue_value/255], so that the image rendering unit renders the image into the frame buffer (FrameBuffer) of the display card according to these values. The radix used to encode the objects differs from the radix assumed when encoding the encoding values: before an object's encoding value is encoded into a color, it must be converted into its base-256 representation, and the color value is then obtained by applying the above rule. If the CPU encodes the objects in decimal, the second encoding unit first converts the encoding value to base 256 and then encodes it to obtain the color value.
In the process of rendering the objects, to make the encoding of the encoding values more efficient, this process is completed in the vertex shader in the GPU (corresponding to the second encoding unit). Specifically, the vertex shader converts the encoding values of the objects sent by the CPU into their base-256 representation, encodes them into the corresponding color values using the preset rule, and sends these to the fragment shader (corresponding to the image rendering unit), so that the fragment shader renders the image into the FrameBuffer according to the color values. Compared with the CPU, the GPU encodes the encoding values concurrently, which greatly saves time.
When the fragment shader renders the image to the FrameBuffer, on one hand a first image is rendered into the FrameBuffer according to the color values obtained by the encoding, and the position information of each point in the first image is stored together with its corresponding color value; on the other hand, a second image is drawn on the screen using the true color values of all objects in the image, and a position mapping relation between each point of the second image and the first image is established in the FrameBuffer. The first and second images differ only in the color values used while drawing: the first image uses the color values encoded from the encoding values, while the second uses the true color values of the image. The mapping relation established between the two images is specifically a position mapping, so that when a user selects an object at some position in the second image displayed on the screen, the corresponding position in the first image, and hence the color value at that position, can be obtained from the FrameBuffer according to the established position mapping relation.
In practical applications, the vertex shader and fragment shader that perform the above process may be the shaders that perform normal object rendering in the GPU (including a vertex shader and a fragment shader), or shaders created specifically for the object selection process. If no new shader is added, the shader for normal object rendering (the fragment shader) is given 2 outputs, outputting the first image to the FrameBuffer and the second image to the screen respectively; if shaders are created for the object selection process, the created fragment shader renders the first image into the FrameBuffer, while the shader for normal object rendering (fragment shader) outputs the second image drawn to the screen.
When the configuration module has completed the configuration of the objects and an object selection is required, the user first selects an object in the image output on the screen (corresponding to the second image); after the position information acquisition module 110 obtains the coordinate values of the selected object (the coordinate information of the mouse), the color value acquisition module 120 obtains the color value of the point corresponding to that coordinate information according to the mapping relation with the data stored in the FrameBuffer; the encoding module 130 then performs inverse encoding on the obtained color value according to the preset rule to obtain the encoding value of the object; finally, the object identification acquisition module 140 obtains the object identification information having a mapping relation with the encoding value from the storage structure, completing the object selection operation. Specifically, the obtained color value is inversely encoded according to the preset rule to obtain the object's encoding value key, the inverse encoding rule being:
key = r*255*65536 + g*255*256 + b*255
where r, g and b represent the values of the red, green and blue components in the obtained color value, corresponding specifically to the values red_value/255, green_value/255 and blue_value/255 respectively.
In this embodiment, in addition to the object detection unit, the first encoding unit, the storage structure, the second encoding unit and the image rendering unit, the configuration module includes an identification unit configured to mark each encoded object with a designated identifier indicating that an object exists at that position.
In this embodiment, after the encoding value is encoded according to the preset rule to obtain the color value, the RGBA value sent to the fragment shader is [red_value/255, green_value/255, blue_value/255, 1]; that is, each encoded object is marked with a designated identifier (the Alpha value). When the designated identifier is 1, an object exists at that position; when it is 0, no object exists there. In this way the fragment shader draws the first image according to both the color values of the objects obtained by encoding and the designated identifier, and renders it into the FrameBuffer (the color values are stored in RGBA form).
Based on this, in addition to the configuration module (not shown in the figure), the position information acquisition module 110, the color value acquisition module 120, the encoding module 130 and the object identification acquisition module 140, the object selection apparatus 100 further includes a judging module 150, as shown in fig. 4. In the object selection process, an object is selected according to the image output on the screen (corresponding to the second image); after the position information acquisition module 110 obtains the coordinate values of the selected object (the coordinate information of the mouse), the color value acquisition module 120 obtains the color value of the point corresponding to that coordinate information according to the mapping relation with the data stored in the FrameBuffer; the judging module 150 then judges, according to the designated identifier in the color value, whether an object exists at the position, and if no object exists, no object is output; if an object exists, the encoding module 130 performs inverse encoding on the obtained color value according to the preset rule to obtain the encoding value of the object; finally, the object identification acquisition module 140 obtains the object identification information having a mapping relation with the encoding value from the storage structure, completing the object selection operation.
In the present embodiment, the point selected by the user on the screen is not limited to positions whose designated identifier is 1; it may be any point in the image. During image rendering, although the RGB values of points not covered by an object are not explicitly defined, their designated identifiers are 0. Therefore, when the user selects an arbitrary point of the image on the screen and the corresponding color value rgba is obtained from the frame buffer according to the point's position, if the designated identifier read at that point is 0, it is judged that there is no object there and the user is prompted to reselect; if the designated identifier read is 255, it is judged to be an object, the obtained color value is inversely encoded according to the preset rule to obtain the encoding value, and the object identification information having a mapping relation with the encoding value is then obtained from the storage structure, completing the object selection operation.
It should be noted that the above embodiments can be freely combined as needed. The foregoing is merely a preferred embodiment of the present invention; those skilled in the art may make modifications and adaptations without departing from the principles of the present invention, and such modifications and adaptations shall also be regarded as falling within the scope of the present invention.

Claims (10)

1. An object selection method, wherein the object selection method comprises the following steps:
acquiring coordinate values of selected objects in a screen, wherein an image displayed in the screen comprises at least one object;
obtaining a color value of a corresponding point of the position from a frame buffer of a display card according to the coordinate value, wherein the frame buffer stores a mapping relation between the position of each object in the image and the corresponding color value, the color value is encoded from the encoding value of the object by a vertex shader in the GPU according to a preset rule, and a fragment shader renders the image into a frame buffer according to the color value;
performing inverse coding on the obtained color values according to a preset rule to obtain coded values of the objects; the code value is obtained by encoding an object contained in the image by the CPU;
and obtaining object identification information with a mapping relation with the coding value from the storage structure to finish the object selection operation.
2. The object selection method of claim 1, comprising the step of encoding the object before the step of acquiring the coordinate values of the selected object in the screen, comprising:
detecting an object included in the image;
sequentially encoding objects in the image, wherein each encoding value uniquely corresponds to the object;
the mapping relation between the unique identification information of the object and the coded value is stored in a storage structure.
3. The object selection method as claimed in claim 2, wherein after the mapping relationship of the unique identification information of the object and the encoded value is stored in the storage structure, the method comprises a step of rendering the object, comprising:
coding the coding values of the objects in sequence according to a preset rule to obtain corresponding color values;
rendering a first image into a frame buffer of a display card according to the color values of the objects obtained by encoding, and storing the position information of each point in the first image and the corresponding color value thereof;
and rendering the second image into the screen by using the real color value of the object, and establishing a position mapping relation of each point between the second image and the first image in the frame buffer.
4. The object selection method as claimed in claim 3, further comprising, after sequentially encoding the encoded values of the objects according to a preset rule to obtain the corresponding color values: identifying the coded object by using a designated identifier, and identifying that the object exists at the position;
in the step of drawing the first image according to the color value of each object obtained by encoding, the method specifically comprises the following steps: and drawing the first image according to the color value of each object obtained by encoding and the designated mark.
5. The object selection method according to claim 4, further comprising, after the step of obtaining the color value of the position corresponding point from the frame buffer of the graphic card according to the coordinate values:
reading a specified identifier corresponding to the position, and judging whether an object exists in the position; if so, the method proceeds to a step of performing inverse coding on the obtained color values according to a preset rule to obtain coded values of the object.
6. An object selecting apparatus, comprising:
the position information acquisition module is used for acquiring coordinate values of selected objects in a screen, and an image displayed in the screen comprises at least one object;
the color value acquisition module is used for acquiring a color value of a corresponding point at the position from a frame buffer of a display card according to the coordinate value, wherein the frame buffer stores a mapping relation between the position of each object in the image and the corresponding color value, the color value is encoded from the encoding value of the object by a vertex shader in the GPU according to a preset rule, and a fragment shader renders the image into a frame buffer according to the color value;
the coding module is used for carrying out inverse coding on the obtained color values according to a preset rule to obtain coded values of the object; the code value is obtained by encoding an object contained in the image by the CPU;
and the object identification acquisition module is used for obtaining object identification information with a mapping relation with the coding value from the storage structure and finishing the selection operation of the object.
7. The object selection device according to claim 6, further comprising a configuration module, wherein the configuration module comprises: an object detection unit and a first encoding unit, wherein the object detection unit is used for detecting an object included in an image;
the first coding unit is used for coding the objects detected by the object detection unit in sequence, and each coding value is uniquely corresponding to the object; and storing the mapping relation between the unique identification information of the object and the coded value in a storage structure.
8. The object selection apparatus of claim 7, wherein the configuration module further comprises: a second encoding unit and an image rendering unit, wherein,
the second coding unit is used for coding the coding values of the objects in sequence according to a preset rule to obtain corresponding color values;
the image rendering unit is used for rendering the first image into a frame buffer of the display card according to the color values of the objects obtained by encoding, and storing the position information of each point in the first image and the corresponding color value; and the method is used for rendering the second image into the screen by using the real color value of the object, and establishing a position mapping relation of each point between the second image and the first image in the frame buffer.
9. The object selection apparatus of claim 8, wherein the configuration module further comprises: the identification unit is used for identifying the coded object by using a designated identifier and identifying the object at the position;
in the image rendering unit, a first image is drawn according to the color value of each object obtained by encoding and the specified mark.
10. The object selecting apparatus according to claim 9, further comprising a judging module for reading a specified identifier corresponding to the coordinate value acquired by the position information acquiring module from the frame buffer, and judging whether or not an object exists at the position;
and if the object is judged to exist, the color value acquisition module acquires the color value of the corresponding point from the frame buffer of the display card according to the coordinate value of the position information acquisition module.
CN201811151411.5A 2018-09-29 2018-09-29 Object selection method and device Active CN109146766B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811151411.5A CN109146766B (en) 2018-09-29 2018-09-29 Object selection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811151411.5A CN109146766B (en) 2018-09-29 2018-09-29 Object selection method and device

Publications (2)

Publication Number Publication Date
CN109146766A CN109146766A (en) 2019-01-04
CN109146766B (en) 2023-07-07

Family

ID=64813958

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811151411.5A Active CN109146766B (en) 2018-09-29 2018-09-29 Object selection method and device

Country Status (1)

Country Link
CN (1) CN109146766B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111556277B (en) * 2020-05-19 2022-07-26 安徽听见科技有限公司 Method, device and equipment for processing identifiers of participants in video conference and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102855132B (en) * 2011-06-30 2016-01-20 大族激光科技产业集团股份有限公司 A kind of choosing method of Drawing Object and system
CN103577322B (en) * 2012-08-08 2015-08-12 腾讯科技(深圳)有限公司 A kind of hit testing method and apparatus


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant