CN113238692B - Region selection method, map division method, device and computer equipment - Google Patents


Info

Publication number
CN113238692B
Authority
CN
China
Prior art keywords: target, map, position area, color space, pixel point
Prior art date
Legal status
Active
Application number
CN202110638715.XA
Other languages
Chinese (zh)
Other versions
CN113238692A (en)
Inventor
于禧
Current Assignee
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority to CN202110638715.XA
Publication of CN113238692A
Application granted
Publication of CN113238692B

Classifications

    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T2219/2012 Colour editing, changing, or manipulating; use of colour codes

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The disclosure provides a region selection method, a map division method, an apparatus, and a computer device, wherein the region selection method includes: in response to a first trigger operation on a target map, determining a color space value of the pixel point corresponding to the first trigger operation; parsing the color space value to obtain a target position area identifier; and searching the target map for target pixel points whose position area identifier is the same as the target position area identifier, and rendering the target pixel points in a first target display state, wherein the position area formed by the target pixel points is the target position area corresponding to the first trigger operation.

Description

Region selection method, map division method, device and computer equipment
Technical Field
The present disclosure relates to the field of computer technology, and in particular, to a region selection method, a map division method, an apparatus, and a computer device.
Background
With the development of science and technology, electronic maps are used ever more widely; for example, people rely on electronic maps for navigation in daily life, and in game scenarios a game's scope can be displayed through an electronic map.
In the related art, to facilitate displaying and highlighting each position area of a map separately, mask pictures corresponding to the individual areas are generally generated in advance, and each position area is then highlighted based on its mask picture.
Disclosure of Invention
The embodiments of the disclosure provide at least a region selection method, a map division method, an apparatus, and a computer device.
In a first aspect, an embodiment of the present disclosure provides a region selection method, including:
responding to a first trigger operation aiming at a target map, and determining a color space value of a pixel point corresponding to the first trigger operation;
analyzing the color space value to obtain a target position area identifier;
and searching target pixel points with the same position area identification as the target position area identification in the target map, and rendering the target pixel points according to a first target display state, wherein the position area formed by the target pixel points is a target position area corresponding to the first trigger operation.
In one possible embodiment, the color space is an RGBA color space;
the target location area identifies an Alpha channel located in an RGBA color space of the target map.
In a possible implementation manner, the rendering the target pixel point according to the first target display state includes:
displaying the target pixel points in a target display color; or changing the display brightness of the target pixel points; or superimposing a selected-state pattern on the target pixel points.
In a possible implementation manner, the determining a color space value of a pixel point corresponding to the first trigger operation includes:
determining screen coordinates corresponding to the first trigger operation, and determining viewpoint information corresponding to the target map when a user executes the first trigger operation;
determining a three-dimensional coordinate corresponding to the first trigger operation in a three-dimensional model corresponding to the target map based on the screen coordinate and the viewpoint information;
and converting the three-dimensional coordinates into image coordinates on the target map based on the conversion relation between the target map and the three-dimensional model, and acquiring color space values of pixel points corresponding to the image coordinates.
In a possible implementation manner, the searching for a target pixel point in the target map where the location area identifier is the same as the target location area identifier includes:
under the condition that the number of position areas in the target map exceeds a first preset value, acquiring a mask picture corresponding to the target map; pixel points in the mask picture correspond to pixel points of the target map one to one;
determining a position area identifier corresponding to each pixel point based on the Alpha channel of the target map and the value of at least one channel of the mask picture;
and determining target pixel points which are the same as the target position area identifications on the basis of the position area identifications corresponding to the pixel points.
In one possible embodiment, the color space is an RGBA color space;
analyzing the color space value to obtain a target position area identifier, including:
analyzing the color space value, and acquiring a mask picture corresponding to the target map under the condition that the value of an Alpha channel of the color space is detected to be a second preset value based on an analysis result; pixel points in the mask picture correspond to pixel points of the target map one to one;
and determining the value of the pixel point corresponding to the first trigger operation on the channel of the mask picture as the target position area identifier.
In one possible embodiment, after obtaining the target location area identifier, the method further includes:
determining edge pixel points corresponding to the target position area identification;
and rendering the edge pixel points according to the second target display state.
In a second aspect, an embodiment of the present disclosure provides a map dividing method, including:
acquiring a map to be divided, and determining a position area identifier corresponding to each pixel point in the map to be divided;
and storing the position area identification in the color space of the map to be divided so that after any pixel point is triggered, based on the position area identification stored in the color space, the pixel point with the same target position area identification as the triggered pixel point is highlighted.
In a possible implementation manner, in a case that the number of the location areas corresponding to the map to be divided exceeds a first preset value, the method further includes:
acquiring a mask picture corresponding to the map to be divided, wherein pixel points in the mask picture correspond to pixel points of the map to be divided one by one;
and determining a channel value of the mask picture based on the position area identifier corresponding to each pixel point in the map to be divided, so that after any pixel point is triggered, the pixel point identical to the target position area identifier of the triggered pixel point is highlighted based on the position area identifier stored in the color space and the channel value of the mask picture.
In a possible implementation, the determining the location area identifier corresponding to each pixel point in the map to be divided includes:
acquiring regional boundary information corresponding to the map to be divided;
and determining a position area identifier corresponding to each pixel point in the map to be divided based on the area boundary information.
In a possible embodiment, the method further comprises:
determining edge pixel points among different position areas based on position area identifications corresponding to all pixel points in the map to be divided;
and marking the edge pixel points so as to display the marks of the edge pixel points when the map to be divided is displayed.
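The edge determination described above can be sketched as follows. The 4-neighborhood comparison is an assumed, common way to find edges between position areas; the disclosure does not fix the exact rule:

```python
import numpy as np

def edge_pixels(region_ids):
    """Mark pixel points whose 4-neighborhood contains a different position
    area identifier (one simple way to find edges between areas)."""
    h, w = region_ids.shape
    edges = np.zeros((h, w), dtype=bool)
    edges[:, :-1] |= region_ids[:, :-1] != region_ids[:, 1:]  # right neighbor
    edges[:, 1:]  |= region_ids[:, 1:]  != region_ids[:, :-1]  # left neighbor
    edges[:-1, :] |= region_ids[:-1, :] != region_ids[1:, :]   # lower neighbor
    edges[1:, :]  |= region_ids[1:, :]  != region_ids[:-1, :]  # upper neighbor
    return edges
```

The returned boolean mask can then be used to render the edge pixel points in a distinct display state when the map is shown.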
In a third aspect, an embodiment of the present disclosure further provides an area selection apparatus, including:
the determining module is used for responding to a first trigger operation aiming at a target map and determining a color space value of a pixel point corresponding to the first trigger operation;
the acquisition module is used for analyzing the color space value to acquire a target position area identifier;
and the rendering module is used for searching for target pixel points with the same position area identification as the target position area identification in the target map, and rendering the target pixel points according to a first target display state, wherein a position area formed by the target pixel points is a target position area corresponding to the first trigger operation.
In one possible embodiment, the color space is an RGBA color space;
the target position area identifier is located in the Alpha channel of the RGBA color space of the target map.
In a possible implementation manner, when rendering the target pixel point according to the first target display state, the rendering module is configured to:
displaying the target pixel points in a target display color; or changing the display brightness of the target pixel points; or superimposing a selected-state pattern on the target pixel points.
In a possible implementation manner, when determining the color space value of the pixel point corresponding to the first trigger operation, the determining module is configured to:
determining screen coordinates corresponding to the first trigger operation, and determining viewpoint information corresponding to the target map when a user executes the first trigger operation;
determining a three-dimensional coordinate corresponding to the first trigger operation in a three-dimensional model corresponding to the target map based on the screen coordinate and the viewpoint information;
and converting the three-dimensional coordinates into image coordinates on the target map based on the conversion relation between the target map and the three-dimensional model, and acquiring the color space value of a pixel point corresponding to the image coordinates.
In a possible implementation manner, when searching for a target pixel point in the target map whose location area identifier is the same as the target location area identifier, the rendering module is configured to:
under the condition that the number of position areas in the target map exceeds a first preset value, acquiring a mask picture corresponding to the target map; pixel points in the mask picture correspond to pixel points of the target map one to one;
determining a position area identifier corresponding to each pixel point based on the value of the Alpha channel of the target map and the value of at least one channel of the mask picture;
and determining target pixel points which are the same as the target position area identifications on the basis of the position area identifications corresponding to the pixel points.
In one possible embodiment, the color space is an RGBA color space;
when parsing the color space value to obtain the target position area identifier, the acquisition module is configured to:
analyzing the color space value, and acquiring a mask picture corresponding to the target map under the condition that the value of an Alpha channel of the color space is detected to be a second preset value based on an analysis result; pixel points in the mask picture correspond to pixel points of the target map one to one;
and determining the value of the pixel point corresponding to the first trigger operation on the channel of the mask picture as the target position area identifier.
In a possible implementation, after obtaining the target location area identifier, the rendering module is further configured to:
determining edge pixel points corresponding to the target position area identification;
and rendering the edge pixel points according to the second target display state.
In a fourth aspect, an embodiment of the present disclosure further provides a map dividing apparatus, including:
the determining module is used for acquiring a target map and determining a position area identifier corresponding to each pixel point in the target map;
and the storage module is used for storing the position area identification in the color space of the target map, so that after any pixel point is triggered, the pixel point which is the same as the target position area identification of the triggered pixel point is highlighted based on the position area identification stored in the color space.
In a possible implementation manner, in a case that the number of the location areas corresponding to the map to be divided exceeds a first preset value, the determining module is further configured to:
acquiring a mask picture corresponding to the map to be divided, wherein pixel points in the mask picture correspond to pixel points of the map to be divided one by one;
and determining a channel value of the mask picture based on the position area identifier corresponding to each pixel point in the map to be divided, so that after any pixel point is triggered, the pixel point identical to the target position area identifier of the triggered pixel point is highlighted based on the position area identifier stored in the color space and the channel value of the mask picture.
In a possible implementation manner, the determining module, when determining the location area identifier corresponding to each pixel point in the map to be divided, is configured to:
acquiring regional boundary information corresponding to the map to be divided;
and determining a position area identifier corresponding to each pixel point in the map to be divided based on the area boundary information.
In a possible implementation, the determining module is further configured to:
determining edge pixel points among different position areas based on position area identifications corresponding to all pixel points in the map to be divided;
and marking the edge pixel points so as to display the marks of the edge pixel points when the map to be divided is displayed.
In a fifth aspect, an embodiment of the present disclosure further provides a computer device, including a processor and a memory, where the memory stores machine-readable instructions executable by the processor, and the processor is configured to execute the machine-readable instructions stored in the memory; when the machine-readable instructions are executed by the processor, the steps in any one of the possible implementations of the first aspect or the second aspect are performed.
In a sixth aspect, alternative implementations of the present disclosure further provide a computer-readable storage medium having a computer program stored thereon, where the computer program, when executed, performs the steps in any one of the possible implementations of the first or second aspect.
The region selection method provided by the embodiments of the disclosure responds to a first trigger operation on a target map and determines the color space value of the pixel point corresponding to the first trigger operation; it then parses the color space value to obtain a target position area identifier, so that by storing position area identifiers in the color space, the storage problem for these identifiers is effectively solved; finally, it searches the target map for target pixel points whose position area identifier is the same as the target position area identifier and renders them in a first target display state. In this way, when the area range of any position area changes, only the position area identifiers corresponding to that position area need to be changed; compared with regenerating a mask picture, this simplifies the operation process and improves the image display efficiency when an area range changes.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required for use in the embodiments will be briefly described below. The drawings, which are incorporated in and form a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its technical solutions. It is appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art can derive additional related drawings from them without creative effort.
Fig. 1 illustrates a flow chart of a region selection method provided by an embodiment of the present disclosure;
fig. 2 is a flowchart illustrating a specific method for determining a color space value of a pixel point corresponding to a first trigger operation in the region selection method provided in the embodiment of the present disclosure;
FIG. 3 shows a schematic diagram of a mask picture provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a target location area with changed display colors provided by an embodiment of the present disclosure;
FIG. 5 is a flow chart illustrating a method of map partitioning provided by an embodiment of the present disclosure;
FIG. 6 shows a schematic diagram of a quadrilateral area provided by an embodiment of the present disclosure;
fig. 7 is a schematic diagram illustrating an architecture of a region selection apparatus provided in an embodiment of the present disclosure;
fig. 8 is a schematic diagram illustrating an architecture of a map partitioning apparatus provided in an embodiment of the present disclosure;
fig. 9 shows a schematic diagram of a computer device 900 provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of embodiments of the present disclosure, as generally described and illustrated herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure is not intended to limit the scope of the disclosure, as claimed, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
It is found through research that in order to facilitate highlighting each position area during the drawing process of the electronic map, mask pictures corresponding to each area are generally generated in advance, and then each position area is highlighted based on each mask picture.
The mask picture is generated in advance, so that the workload in the map drawing process is greatly increased, and when the position area in the map is changed, the corresponding mask picture needs to be generated again, so that the method has the disadvantages of complicated operation process and low efficiency.
Based on this research, the present disclosure provides a region selection method, a map division method, an apparatus, and a computer device. In response to a first trigger operation on a target map, the color space value of the pixel point corresponding to the first trigger operation is determined; the color space value is parsed to obtain a target position area identifier, so that by storing position area identifiers in the color space, the storage problem for these identifiers is effectively solved; the target map is then searched for target pixel points whose position area identifier is the same as the target position area identifier, and those pixel points are rendered in a first target display state. In this way, when the area range of any position area changes, only the position area identifiers corresponding to that position area need to be changed; compared with regenerating a mask picture, this simplifies the operation process and improves the image display efficiency when an area range changes.
The drawbacks described above were identified by the inventors through practice and careful study; therefore, the discovery of these problems and the solutions proposed by the present disclosure for them constitute the inventors' contribution to the present disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
To facilitate understanding of the present embodiment, first, a detailed description is given of a region selection method disclosed in an embodiment of the present disclosure, where an execution subject of the region selection method provided in the embodiment of the present disclosure is generally a computer device with certain computing capability, and the computer device includes, for example: a terminal device, which may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle mounted device, a wearable device, or a server or other processing device. In some possible implementations, the region selection method may be implemented by way of a processor calling computer readable instructions stored in a memory.
Referring to fig. 1, a flowchart of a region selection method provided in an embodiment of the present disclosure is shown, where the method includes S101 to S103, where:
s101: responding to a first trigger operation aiming at a target map, and determining a color space value of a pixel point corresponding to the first trigger operation.
S102: and analyzing the color space value to obtain a target position area identifier.
S103: and searching target pixel points with the same position area identification as the target position area identification in the target map, and rendering the target pixel points according to a first target display state, wherein a position area formed by the target pixel points is a target position area corresponding to the first trigger operation.
The following is a detailed description of the above steps:
with respect to S101,
Here, the target map may be an electronic map that has already been divided into position areas, such as a game map in an electronic game or an electronic map used for navigation; the first trigger operation may be any one of a single click, a double click, a long press, or a force press. When the target map is opened, the user can see the target map displayed on the user side, and when the user wants to know the range covered by different position areas, the user can execute the first trigger operation on the position area of interest.
In a possible implementation manner, as shown in fig. 2, the color space value of the pixel point corresponding to the first trigger operation may be determined through the following steps:
s201: and determining screen coordinates corresponding to the first trigger operation, and determining viewpoint information corresponding to the target map when a user executes the first trigger operation.
Through touch detection, screen coordinates corresponding to the first trigger operation can be determined; the viewpoint information corresponding to the target map represents the current view angle of the user viewing the target map, and can be represented by the camera view angle when the target map is displayed, and the camera view angle can be directly read from the display data of the target map.
S202: and determining a three-dimensional coordinate corresponding to the first trigger operation in a three-dimensional model corresponding to the target map based on the screen coordinate and the viewpoint information.
Here, after the screen coordinates are obtained, the screen coordinates corresponding to the first trigger operation may be converted into the three-dimensional space coordinates corresponding to the target map through the viewpoint information and a 3D picking technique.
Specifically, after the screen coordinates are obtained, the screen coordinates may be converted to coordinates in a three-dimensional space in which a map is located by a projection matrix and an observation matrix obtained based on the viewpoint information.
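As an illustrative sketch of this unprojection (not the disclosure's exact implementation), the following assumes OpenGL-style 4x4 column-vector projection and view matrices and a map lying in the z = plane_z plane:

```python
import numpy as np

def screen_to_world(screen_x, screen_y, screen_w, screen_h, view, proj, plane_z=0.0):
    """Unproject a screen point to the map plane (a sketch of 3D picking)."""
    # Screen coordinates -> normalized device coordinates in [-1, 1]
    ndx = 2.0 * screen_x / screen_w - 1.0
    ndy = 1.0 - 2.0 * screen_y / screen_h  # screen y grows downward

    inv = np.linalg.inv(proj @ view)
    # Unproject the near and far clip-plane points to obtain a pick ray
    near = inv @ np.array([ndx, ndy, -1.0, 1.0])
    far = inv @ np.array([ndx, ndy, 1.0, 1.0])
    near, far = near / near[3], far / far[3]

    direction = far[:3] - near[:3]
    t = (plane_z - near[2]) / direction[2]  # intersect the ray with the map plane
    return near[:3] + t * direction
```

With identity matrices this degenerates to a simple orthographic lookup; in a real renderer, `view` and `proj` would come from the camera state read from the target map's display data.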
S203: and converting the three-dimensional coordinates into image coordinates on the target map based on the conversion relation between the target map and the three-dimensional model, and acquiring the color space value of a pixel point corresponding to the image coordinates.
Here, after the three-dimensional coordinates are determined, UV coordinates (image coordinates) of the target map corresponding to the first trigger operation may be determined based on the three-dimensional space coordinates.
In one possible embodiment, the three-dimensional space coordinates can be transformed into the UV coordinate (image coordinate) space by a line and polygon intersection algorithm.
Illustratively, the lower left corner of the target map corresponds to the point (0,0) in the UV coordinate space and the upper right corner to the point (1,1); the spatial coordinates and the UV coordinates of points on the target map are in linear correspondence, and based on this linear correspondence, the coordinates of the point corresponding to the first trigger operation in three-dimensional space can be converted into two-dimensional coordinates on the target map.
Specifically, the conversion may be performed according to the following formulas:

u = (x − x₀) / width
v = (y − y₀) / height

where u is the abscissa in the UV coordinates; v is the ordinate in the UV coordinates; x and y are the abscissa and ordinate, in three-dimensional space, of the point corresponding to the first trigger operation; x₀ and y₀ are the abscissa and ordinate, in three-dimensional space, of the lower left corner of the target map; and width and height are the preset width and height of the three-dimensional space corresponding to the target map.
Further, after the three-dimensional coordinates are converted into image coordinates on the target map, color space values of pixel points corresponding to the image coordinates can be determined; wherein the color space may be an RGBA color space.
For S102 and S103,
Here, the color space may be an RGBA color space. The target map is a color map (that is, its corresponding color space is the RGBA color space) containing four channels R, G, B and A, which represent Red, Green, Blue and Alpha respectively; the Red, Green and Blue channels are used to represent the image colors of the map to be divided.
The Alpha channel is a channel specifically designed to hold a selected region. In computer graphics, a color picture in the RGB color model is composed of three channels of color information, Red, Green and Blue; each channel uses 8 bits of color depth, 24 bits in total, which contain all of the color information. To realize transparency effects in graphics, another 8 bits of information are added when the graphics file is processed and stored; this added channel, representing the transparency of each pixel point in the graphic, is called the Alpha channel. The Alpha channel uses an 8-bit binary number to represent 256 levels of gray scale, i.e., 256 levels of transparency. Because the target map does not need transparent display, the Alpha channel of each pixel point is free to store the location area identifier corresponding to that pixel point.
Because the Alpha channel can store only a limited amount of information, a mask picture can be used to store the location area identifiers when the target map contains more location areas. The pixel points in the mask picture correspond one-to-one with the pixel points of the target map. Illustratively, a location area identifier 1 may be added to each pixel point in area 1, and a location area identifier 2 to each pixel point in area 2. Different colors may also be used to represent different areas, where different colors correspond to different location area identifiers and pixel points in the same area share the same location area identifier, such as blue for area 1 and yellow for area 2.
In a specific implementation, the number of channels of the mask picture may be selected or adjusted based on parameter information such as the number of areas to be divided; it may be single-channel, dual-channel, triple-channel, or quadruple-channel. Each channel can store 256 location area identifiers, valued from 0 to 255.
Illustratively, the mask picture may be as shown in fig. 3. In fig. 3, the mask picture divides the map into three areas by adding location area identifications 1, 2, 3.
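The channel-capacity arithmetic above can be illustrated with a small helper; this is a sketch, and choose_channel_count is a hypothetical name not from the original:

```python
def choose_channel_count(num_regions, levels_per_channel=256):
    """Pick the smallest number of mask-picture channels (1 to 4) able
    to hold num_regions distinct location area identifiers; each 8-bit
    channel distinguishes 256 values (0 to 255)."""
    capacity = 1
    for channels in range(1, 5):
        capacity *= levels_per_channel
        if num_regions <= capacity:
            return channels
    raise ValueError("too many regions for a 4-channel mask picture")
```

With 300 regions, for example, a single channel is insufficient and two channels are chosen.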
To sum up, the location area identifier may be stored in the Alpha channel and/or the mask picture, and correspondingly, when the target location area identifier is obtained, any one of the following three ways may be used:
mode 1, location area identification is stored only in Alpha channels.
In this case, the color space value may be analyzed, and the value of the Alpha channel in the analysis result may be used as the target location area identifier.
Specifically, the Alpha channel uses an 8-bit binary number that can represent 256 levels of gray scale, so it can store at most 256 target location area identifiers; this mode is therefore suitable when the number of location area identifiers does not exceed 256.
Correspondingly, when searching the target map for target pixel points whose location area identifier is the same as the target location area identifier, the color space value of each pixel point can be parsed directly, and the pixel points whose Alpha-channel value in the parsing result equals the target location area identifier are determined to be the target pixel points.
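Mode 1 can be sketched as follows, assuming the pixel points are available as a mapping from coordinates to (r, g, b, a) tuples; find_target_pixels is a hypothetical name:

```python
def find_target_pixels(rgba_pixels, target_id):
    """Mode 1: the location area identifier lives in the Alpha channel.
    Given a dict mapping (x, y) -> (r, g, b, a), return every coordinate
    whose Alpha value equals the target location area identifier."""
    return {pos for pos, (r, g, b, a) in rgba_pixels.items()
            if a == target_id}
```

Every pixel returned belongs to the same location area as the pixel hit by the first trigger operation.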
Mode 2, the location area identification is stored only in the mask picture.
In this case, the color space value may be analyzed, and a mask picture corresponding to the target map is obtained when it is detected that the value of the Alpha channel of the color space is a second preset value based on the analysis result; and determining the value of the pixel point corresponding to the first trigger operation on the channel of the mask picture as the target position area identifier.
Here, since the Alpha channel itself has limited storage capacity, the location area identification can be stored only by a mask picture.
Specifically, since the pixel points of the mask picture correspond one-to-one with the pixel points of the target map, the pixel point of the mask picture corresponding to the first trigger operation can be determined; the value of that pixel point on the channel of the mask picture is then read, and the target location area identifier corresponding to the first trigger operation is determined from that value.
Illustratively, taking the second preset value as 0 as an example, when it is detected that the value of an Alpha channel in the analysis result of the color space is 0, a mask picture corresponding to the target map is obtained, and if the value of a pixel point corresponding to the first trigger operation on the channel of the mask picture is 4, the target position area identifier is 4.
Correspondingly, when searching the target map for target pixel points whose location area identifier is the same as the target location area identifier, the pixel points whose value on the mask picture's channel equals the target location area identifier can be found, and the corresponding pixel points on the target map are taken as the target pixel points.
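Mode 2 can be sketched in the same spirit, representing the mask picture as a row-major grid of single-channel values; both function names are hypothetical:

```python
def region_id_from_mask(mask, x, y):
    """Mode 2: the Alpha channel holds a sentinel (e.g. the second
    preset value 0) and the real location area identifier is read from
    the mask picture, whose pixels map one-to-one onto the target map.
    `mask` is a list of rows, indexed mask[y][x]."""
    return mask[y][x]

def find_targets_via_mask(mask, target_id):
    """Return every coordinate whose mask value equals target_id; the
    corresponding pixels of the target map are the target pixel points."""
    return {(x, y)
            for y, row in enumerate(mask)
            for x, value in enumerate(row)
            if value == target_id}
```

As in the example above, a mask value of 4 at the triggered coordinate yields target location area identifier 4.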
Mode 3, location area identification is stored in Alpha channel and mask picture.
In this case, the color space values and the mask picture may be parsed; and then determining the position area identifier corresponding to each pixel point based on the Alpha channel of the target map and the value of at least one channel of the mask picture.
Here, the first preset value may be 256, that is, when the number of location areas in the target map exceeds 256, theoretically, the corresponding location area identifier cannot be completely stored by using an Alpha channel, and therefore, a mask picture corresponding to the target map needs to be acquired.
Specifically, the portion of the location area identifiers beyond 256 may be stored in the mask picture; alternatively, the location area identifier may be represented in binary form jointly by a channel of the mask picture and the Alpha channel.
For example, suppose the mask picture is single-channel 8-bit (capable of representing 2^8 = 256 numbers) and the Alpha channel is also 8-bit. The channel value in the mask picture can be taken as the high bits of a binary number and the Alpha-channel value as the low bits: when the mask picture's value is binary 00000001 and the Alpha channel's value is binary 10101011, the concatenated value is 0000000110101011, whose decimal value is 427, so the target location area identifier is 427.
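The high-byte/low-byte concatenation can be written as a bit shift; a sketch with the hypothetical name combine_region_id:

```python
def combine_region_id(mask_value, alpha_value):
    """Mode 3: treat the 8-bit mask channel as the high byte and the
    8-bit Alpha channel as the low byte of a 16-bit location area
    identifier, allowing up to 65536 distinct identifiers."""
    return (mask_value << 8) | alpha_value
```

With mask value 00000001 and Alpha value 10101011, this evaluates to 427, matching the example above.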
Correspondingly, when target pixel points with the same position area identification as the target position area identification in the target map are searched, under the condition that the number of the position areas in the target map exceeds a first preset value, a mask picture corresponding to the target map is obtained; and determining a position area identifier corresponding to each pixel point based on the value of the Alpha channel of the target map and the value of at least one channel of the mask picture, and taking the pixel point with the target position area identifier in the target map as the target pixel point.
Specifically, when the target pixel point is rendered according to the first target display state, at least one of the following methods may be used:
the method 1 displays the target pixel points according to the target display color.
Here, the target display color may be a preset color such as light yellow, light red, or the like; or, the display color may be calculated according to the pixel point corresponding to the first trigger operation and a preset formula.
And 2, changing the display brightness of the target pixel point.
Here, changing the presentation brightness may be increasing/decreasing the presentation brightness of the target pixel point.
And 3, adding the selected pattern on the target pixel point.
Here, the selected pattern may be a pattern that enables the target pixel point to have a highlight effect.
Hereinafter, the method 1 will be described in detail, and the methods 2 and 3 will not be described.
In a specific implementation, when displaying the target pixel points in the target display color, new rendering parameters (including the value of each pixel point on each color channel) for the pixel points in the target location area are first set after the first trigger operation, based on a map rendering tool; then, in the next picture frame, the pixel points in the target location area are rendered according to the new rendering parameters, while pixel points in areas other than the target location area are rendered according to the original rendering parameters.
In a possible implementation manner, the target value of each pixel point in the target position area in the next frame of picture frame on the R, G, B channel can be determined in advance, and then the target position area is rendered based on the target value of each pixel point in the target position area on the R, G, B channel.
When determining the target value of each pixel point in the target position area on the R, G, B channel, for any pixel point, the following formula can be exemplarily referred to for calculation:
R = r*(1-A) + r_t*A
G = g*(1-A) + g_t*A
B = b*(1-A) + b_t*A
wherein R, G and B refer to the target values of a pixel point on the R, G, B channels; A refers to the Alpha value; r, g and b refer to the original values of the pixel point on the R, G, B channels; and r_t, g_t and b_t refer to the values of the target display color on the R, G, B channels.
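Under the reading that each channel is blended from its original value toward the target display color by the Alpha value, the per-pixel computation can be sketched as follows; the function names are hypothetical:

```python
def blend_channel(original, target, alpha):
    """Blend one color channel of a selected pixel toward the target
    display color; alpha in [0, 1] plays the role of A in the formula."""
    return original * (1 - alpha) + target * alpha

def highlight_pixel(rgb, target_rgb, alpha):
    """Apply the per-channel blend to an (r, g, b) triple, producing the
    target values rendered in the next picture frame."""
    return tuple(blend_channel(o, t, alpha)
                 for o, t in zip(rgb, target_rgb))
```

With alpha = 0.5, a gray pixel (100, 100, 100) blended toward red (200, 0, 0) becomes (150, 50, 50).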
Illustratively, the resulting presentation effect may be as shown in fig. 4. In fig. 4, the target position area changes its display color.
In a possible implementation manner, after the target location area identifier is obtained, edge pixel points corresponding to the target location area identifier may also be determined, and the edge pixel points are rendered according to a second target display state.
Here, the edge pixel points are pixel points located at the edge of the target area, and when the edge pixel points are rendered according to the second target display state, the edge pixel points can be rendered by using a striking color. Illustratively, after the target position area identifier is acquired as an area 1, each edge pixel point of the area 1 may be rendered by using a striking color such as yellow in a unified manner.
Specifically, when determining edge pixel points of edges of each region, an image edge detection algorithm may be used to detect edges of different regions, where the image edge detection algorithm includes a Canny algorithm, a Sobel algorithm, a Laplace algorithm, and the like.
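Where the location area identifier of every pixel is already known, region edges can also be found by a simple neighbour comparison rather than a full Canny/Sobel/Laplace pass; a sketch with the hypothetical name edge_pixels:

```python
def edge_pixels(labels):
    """Find the edge pixel points of each region in a label map: a pixel
    is an edge pixel if any 4-connected neighbour carries a different
    location area identifier. `labels` is a list of rows, labels[y][x]."""
    h, w = len(labels), len(labels[0])
    edges = set()
    for y in range(h):
        for x in range(w):
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h and labels[ny][nx] != labels[y][x]:
                    edges.add((x, y))
                    break
    return edges
```

The returned coordinates can then be rendered in a striking color according to the second target display state.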
In a possible implementation manner, after the target pixel point in the target location area is rendered according to the first target display state, the display of the target pixel point in the target location area in the first target display state may be cancelled in response to a second trigger operation for the target location area after the display state is changed.
Here, the second trigger operation may include any one of the following operations:
click, long press, re-press, single click, double click.
Specifically, the target pixel point is canceled from being displayed according to the target display color; or, canceling to change the display brightness of the target pixel point; or, the selected pattern is not added to the target pixel point.
Referring to fig. 5, a flowchart of a map division method provided in the embodiment of the present disclosure is shown, where the method includes S501 to S502, where:
s501: obtaining a map to be divided, and determining a position area identifier corresponding to each pixel point in the map to be divided.
S502: and storing the position area identification in the color space of the map to be divided, so that after any pixel point is triggered, based on the position area identification stored in the color space, the pixel point with the same target position area identification as the triggered pixel point is highlighted.
The following is a detailed description of the above steps:
for S501,
Here, when determining a location area identifier corresponding to each pixel point in the map to be divided, area boundary information corresponding to the map to be divided may be obtained first; and then determining a position area identifier corresponding to each pixel point in the map to be divided based on the area boundary information.
Wherein the region boundary information may be region boundary coordinate point data.
Specifically, in a two-dimensional plane, sequentially connecting three region boundary coordinate points that are not on the same straight line yields a region. On this basis, as the number of region boundary coordinate points increases, more complex and finer geometric figures can be obtained, so accurate region division information corresponding to each pixel point can be obtained by using a sufficient number of region boundary coordinate points; for example, the four region boundary coordinate points (0,1), (2,4), (6,8) and (7,0) yield a well-divided quadrilateral region.
Illustratively, the quadrilateral area may be as shown in fig. 6. In fig. 6, the four region boundary coordinate points "(0,1), (2,4), (6,8), and (7,0)" are sequentially connected to each other, and then a single quadrangular region is defined in a plane.
After the area boundary information is obtained, it may be determined that all pixel points within the area boundary are divided into the same area based on the area boundary information, so as to determine a location area identifier corresponding to each pixel point in the map to be divided.
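Deciding which pixel points fall inside a boundary polygon can be done with a standard even-odd ray-casting test; a sketch, with point_in_polygon as a hypothetical name, using the quadrilateral example above:

```python
def point_in_polygon(px, py, vertices):
    """Even-odd ray-casting test: cast a ray from (px, py) toward +x and
    count crossings with the polygon's edges; an odd count means the
    point is inside. Each map pixel inside the boundary is then assigned
    that region's location area identifier."""
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        # Edge straddles the ray's y level; find where it crosses.
        if (y1 > py) != (y2 > py):
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside
```

For the quadrilateral defined by (0,1), (2,4), (6,8) and (7,0), the point (4,3) tests inside and (8,8) tests outside.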
For S502,
Here, the color space may be a color space such as an RGBA color space, and when the color space is the RGBA color space, specific contents when the position region identifier is stored may refer to the above related description, which is not described herein again, and after the position region identifier is stored in the color space of the map to be divided, the target map after the map to be divided is divided can be obtained.
In a possible implementation manner, under the condition that the number of position areas corresponding to the map to be divided exceeds a first preset value, a mask picture corresponding to the map to be divided can be obtained, and pixel points in the mask picture correspond to pixel points of the map to be divided one by one; and determining a channel value of the mask picture based on the position area identifier corresponding to each pixel point in the map to be divided, so that after any pixel point is triggered, based on the position area identifier stored in the color space and the channel value of the mask picture, the pixel point with the same target position area identifier as the triggered pixel point is highlighted.
Here, the specific content related to the mask picture may refer to fig. 3 (mask picture) and the description of the content related to the mask picture, and is not repeated herein.
In a possible implementation manner, edge pixel points between different position areas can be determined based on position area identifiers corresponding to all pixel points in the map to be divided; and then marking the edge pixel points so as to display the marks of the edge pixel points when the map to be divided is displayed.
The area selection method provided by the embodiment of the disclosure responds to a first trigger operation aiming at a target map, and determines a color space value of a pixel point corresponding to the first trigger operation; analyzing the color space value to obtain a target position area identifier, thus effectively solving the storage problem of the position area identifier by using the color space to store the position area identifier; and searching target pixel points with the same position area identification as the target position area identification in the target map, and rendering the target pixel points according to a first target display state. Based on the mode, when the area range of any position area changes, only the position area identification corresponding to the position area needs to be changed, compared with the mode of regenerating the mask picture, the method simplifies the operation process and improves the image display efficiency when the area range changes.
It will be understood by those skilled in the art that in the method of the present invention, the order of writing the steps does not imply a strict order of execution and any limitations on the implementation, and the specific order of execution of the steps should be determined by their function and possible inherent logic.
Based on the same inventive concept, an area selection device corresponding to the area selection method is also provided in the embodiments of the present disclosure, and since the principle of solving the problem of the device in the embodiments of the present disclosure is similar to the area selection method in the embodiments of the present disclosure, the implementation of the device may refer to the implementation of the method, and repeated details are not repeated.
Referring to fig. 7, there is shown a schematic structural diagram of a region selection apparatus according to an embodiment of the present disclosure, where the apparatus includes: a determining module 701, an obtaining module 702, and a rendering module 703; wherein:
a determining module 701, configured to determine, in response to a first trigger operation for a target map, a color space value of a pixel point corresponding to the first trigger operation;
an obtaining module 702, configured to analyze the color space value to obtain a target location area identifier;
and a rendering module 703, configured to search for a target pixel point in the target map whose location area identifier is the same as the target location area identifier, and render the target pixel point according to a first target display state, where a location area formed by the target pixel point is a target location area corresponding to the first trigger operation.
In one possible embodiment, the color space is an RGBA color space;
the target location area identifies an Alpha channel located in an RGBA color space of the target map.
In a possible implementation manner, when rendering the target pixel point according to the first target display state, the rendering module 703 is configured to:
displaying the target pixel points according to the target display color; or changing the display brightness of the target pixel point; or adding the selected pattern on the target pixel point.
In a possible implementation manner, when determining the color space value of the pixel point corresponding to the first trigger operation, the determining module 701 is configured to:
determining screen coordinates corresponding to the first trigger operation, and determining viewpoint information corresponding to the target map when a user executes the first trigger operation;
determining a three-dimensional coordinate corresponding to the first trigger operation in a three-dimensional model corresponding to the target map based on the screen coordinate and the viewpoint information;
and converting the three-dimensional coordinates into image coordinates on the target map based on the conversion relation between the target map and the three-dimensional model, and acquiring the color space value of a pixel point corresponding to the image coordinates.
In a possible implementation manner, when searching for a target pixel point in the target map whose location area identifier is the same as the target location area identifier, the rendering module 703 is configured to:
under the condition that the number of position areas in the target map exceeds a first preset value, acquiring a mask picture corresponding to the target map; pixel points in the mask picture correspond to pixel points of the target map one to one;
determining a position area identifier corresponding to each pixel point based on the Alpha channel of the target map and the value of at least one channel of the mask picture;
and determining target pixel points which are the same as the target position area identifications on the basis of the position area identifications corresponding to the pixel points.
In one possible embodiment, the color space is an RGBA color space;
the obtaining module 702, when analyzing the color space value and obtaining the target location area identifier, is configured to:
analyzing the color space value, and acquiring a mask picture corresponding to the target map under the condition that the value of an Alpha channel of the color space is detected to be a second preset value based on an analysis result; the pixel points in the mask picture correspond to the pixel points of the target map one by one;
and determining the value of the pixel point corresponding to the first trigger operation on the channel of the mask picture as the target position area identifier.
In a possible implementation, after obtaining the target location area identifier, the rendering module 703 is further configured to:
determining edge pixel points corresponding to the target position area identification;
and rendering the edge pixel points according to the second target display state.
The area selection device provided by the embodiment of the disclosure responds to a first trigger operation for a target map, and determines a color space value of a pixel point corresponding to the first trigger operation; analyzing the color space value to obtain a target position area identifier, thus effectively solving the storage problem of the position area identifier by using the color space to store the position area identifier; and searching target pixel points with the same position area identification as the target position area identification in the target map, and rendering the target pixel points according to a first target display state. Based on the mode, when the area range of any position area changes, only the position area identification corresponding to the position area needs to be changed, compared with the mode of regenerating the mask picture, the method simplifies the operation process and improves the image display efficiency when the area range changes.
Referring to fig. 8, a schematic architecture diagram of a map partitioning apparatus provided in an embodiment of the present disclosure is shown, where the apparatus includes: a determination module 801 and a storage module 802; wherein:
a determining module 801, configured to acquire a target map and determine a location area identifier corresponding to each pixel point in the target map;
a storage module 802, configured to store the location area identifier in the color space of the target map, so that after any pixel point is triggered, based on the location area identifier stored in the color space, a pixel point that is the same as the target location area identifier of the triggered pixel point is highlighted.
In a possible implementation manner, in a case that the number of the location areas corresponding to the map to be divided exceeds a first preset value, the determining module 801 is further configured to:
acquiring a mask picture corresponding to the map to be divided, wherein pixel points in the mask picture correspond to pixel points of the map to be divided one by one;
and determining a channel value of the mask picture based on the position area identifier corresponding to each pixel point in the map to be divided, so that after any pixel point is triggered, based on the position area identifier stored in the color space and the channel value of the mask picture, the pixel point with the same target position area identifier as the triggered pixel point is highlighted.
In a possible implementation, the determining module 801, when determining the location area identifier corresponding to each pixel point in the map to be divided, is configured to:
acquiring regional boundary information corresponding to the map to be divided;
and determining a position area identifier corresponding to each pixel point in the map to be divided based on the area boundary information.
In a possible implementation, the determining module 801 is further configured to:
determining edge pixel points among different position areas based on position area identifications corresponding to all pixel points in the map to be divided;
and marking the edge pixel points so as to display the marks of the edge pixel points when the map to be divided is displayed.
Based on the same technical concept, the embodiment of the disclosure also provides computer equipment. Referring to fig. 9, a schematic structural diagram of a computer device 900 provided in the embodiment of the present disclosure includes a processor 901, a memory 902, and a bus 903. The memory 902 is used for storing execution instructions, and includes a memory 9021 and an external memory 9022; the memory 9021 is also referred to as an internal memory, and is configured to temporarily store operation data in the processor 901 and data exchanged with an external memory 9022 such as a hard disk, the processor 901 exchanges data with the external memory 9022 through the memory 9021, and when the computer device 900 is operated, the processor 901 communicates with the memory 902 through the bus 903, so that the processor 901 executes the following instructions:
responding to a first trigger operation aiming at a target map, and determining a color space value of a pixel point corresponding to the first trigger operation;
analyzing the color space value to obtain a target position area identifier;
and searching target pixel points with the same position area identification as the target position area identification in the target map, and rendering the target pixel points according to a first target display state, wherein a position area formed by the target pixel points is a target position area corresponding to the first trigger operation.
In a possible embodiment, in the instructions executed by the processor 901, the color space is an RGBA color space;
the target location area identifies an Alpha channel located in an RGBA color space of the target map.
In a possible implementation manner, in an instruction executed by the processor 901, the rendering the target pixel point according to the first target display state includes:
displaying the target pixel points according to the target display color; or changing the display brightness of the target pixel point; or adding the selected pattern on the target pixel point.
In a possible implementation manner, in an instruction executed by the processor 901, the determining a color space value of a pixel point corresponding to the first trigger operation includes:
determining screen coordinates corresponding to the first trigger operation, and determining viewpoint information corresponding to the target map when a user executes the first trigger operation;
determining a three-dimensional coordinate corresponding to the first trigger operation in a three-dimensional model corresponding to the target map based on the screen coordinate and the viewpoint information;
and converting the three-dimensional coordinates into image coordinates on the target map based on the conversion relation between the target map and the three-dimensional model, and acquiring the color space value of a pixel point corresponding to the image coordinates.
In a possible implementation manner, in an instruction executed by the processor 901, the finding a target pixel point in the target map where the location area identifier is the same as the target location area identifier includes:
under the condition that the number of position areas in the target map exceeds a first preset value, acquiring a mask picture corresponding to the target map; pixel points in the mask picture correspond to pixel points of the target map one to one;
determining a position area identifier corresponding to each pixel point based on the Alpha channel of the target map and the value of at least one channel of the mask picture;
and determining target pixel points which are the same as the target position area identifications on the basis of the position area identifications corresponding to the pixel points.
In a possible embodiment, in the instructions executed by the processor 901, the color space is an RGBA color space;
analyzing the color space value to obtain a target position area identifier, including:
analyzing the color space value, and acquiring a mask picture corresponding to the target map under the condition that the value of an Alpha channel of the color space is detected to be a second preset value based on an analysis result; the pixel points in the mask picture correspond to the pixel points of the target map one by one;
and determining the value of the pixel point corresponding to the first trigger operation on the channel of the mask picture as the target position area identifier.
In a possible implementation manner, the instructions executed by the processor 901 further include, after obtaining the target location area identifier:
determining edge pixel points corresponding to the target position area identification;
and rendering the edge pixel points according to the second target display state.
Alternatively, the processor 901 is caused to execute the following instructions:
acquiring a map to be divided, and determining a position area identifier corresponding to each pixel point in the map to be divided;
and storing the position area identification in the color space of the map to be divided, so that after any pixel point is triggered, based on the position area identification stored in the color space, the pixel point with the same target position area identification as the triggered pixel point is highlighted.
In a possible implementation manner, in the instruction executed by the processor 901, when the number of the location areas corresponding to the map to be divided exceeds a first preset value, the method further includes:
acquiring a mask picture corresponding to the map to be divided, wherein pixel points in the mask picture correspond to pixel points of the map to be divided one by one;
and determining a channel value of the mask picture based on the position area identifier corresponding to each pixel point in the map to be divided, so that after any pixel point is triggered, the pixel point identical to the target position area identifier of the triggered pixel point is highlighted based on the position area identifier stored in the color space and the channel value of the mask picture.
In a possible implementation manner, the instructions executed by the processor 901 for determining the location area identifier corresponding to each pixel point in the map to be divided includes:
acquiring regional boundary information corresponding to the map to be divided;
and determining a position area identifier corresponding to each pixel point in the map to be divided based on the area boundary information.
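The specification does not fix a representation for the area boundary information. Assuming each position area is delimited by a closed polygon, one plausible sketch rasterises identifiers with an even-odd ray-casting test at each pixel centre; all names and tie-breaking choices here are assumptions, not the patented method.

```python
def point_in_polygon(x, y, polygon):
    """Even-odd ray-casting test; polygon is a list of (x, y) vertices."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where the polygon edge crosses the horizontal ray
            cross_x = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < cross_x:
                inside = not inside
    return inside

def assign_region_ids(width, height, boundaries):
    """boundaries: dict region_id -> polygon. Returns a 2D grid of position
    area identifiers, with 0 where no area's boundary contains the pixel."""
    grid = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            for region_id, poly in boundaries.items():
                # Sample at the pixel centre to avoid edge ambiguity.
                if point_in_polygon(x + 0.5, y + 0.5, poly):
                    grid[y][x] = region_id
                    break
    return grid
```

Real map data would need care at shared borders and holes; the even-odd rule is only the simplest choice.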
In a possible implementation manner, the instructions executed by the processor 901 further include:
determining edge pixel points among different position areas based on position area identifications corresponding to all pixel points in the map to be divided;
and marking the edge pixel points so as to display the marks of the edge pixel points when the map to be divided is displayed.
The embodiments of the present disclosure also provide a computer-readable storage medium having a computer program stored thereon, where the computer program, when executed by a processor, performs the steps of the region selection method described in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The embodiments of the present disclosure also provide a computer program product carrying program code, where the instructions included in the program code may be used to execute the steps of the region selection method or the map division method described in the foregoing method embodiments; for details, reference may be made to the foregoing method embodiments, which are not repeated here.
The computer program product may be implemented by hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium; in another alternative embodiment, the computer program product is embodied as a software product, such as a Software Development Kit (SDK).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working process of the apparatus described above may refer to the corresponding process in the foregoing method embodiments and is not repeated here. It should be understood that, in the several embodiments provided in the present disclosure, the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division of the units is only one logical division, and other divisions are possible in actual implementation; for example, a plurality of units or components may be combined, or some features may be omitted or not implemented. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection of devices or units through communication interfaces, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored as processor-executable instructions in a non-transitory computer-readable storage medium. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above-mentioned embodiments are merely specific embodiments of the present disclosure, used to illustrate the technical solutions of the present disclosure rather than to limit them, and the scope of protection of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that modifications or changes may still be made to the technical solutions described in the foregoing embodiments, or equivalent substitutions may be made for some of their technical features, within the technical scope of the disclosure; such modifications, changes, and substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present disclosure, and shall all be covered within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (15)

1. A method of region selection, comprising:
responding to a first trigger operation aiming at a target map, and determining a color space value of a pixel point corresponding to the first trigger operation; the color space of the pixel point comprises an Alpha channel for storing the position area identification;
analyzing the color space value to obtain a target position area identifier;
and searching target pixel points with the same position area identification as the target position area identification in the target map, and rendering the target pixel points according to a first target display state, wherein the position area formed by the target pixel points is a target position area corresponding to the first trigger operation.
2. The method of claim 1, wherein the color space is an RGBA color space;
the target location area identifies an Alpha channel located in an RGBA color space of the target map.
3. The method of claim 1, wherein the rendering the target pixel point according to the first target display state comprises:
displaying the target pixel points according to the target display color; or changing the display brightness of the target pixel point; or adding the selected pattern on the target pixel point.
4. The method according to claim 1, wherein the determining a color space value of a pixel corresponding to the first trigger operation includes:
determining screen coordinates corresponding to the first trigger operation, and determining viewpoint information corresponding to the target map when a user executes the first trigger operation;
determining a three-dimensional coordinate corresponding to the first trigger operation in a three-dimensional model corresponding to the target map based on the screen coordinate and the viewpoint information;
and converting the three-dimensional coordinates into image coordinates on the target map based on the conversion relation between the target map and the three-dimensional model, and acquiring the color space value of a pixel point corresponding to the image coordinates.
5. The method of claim 2, wherein the searching, in the target map, for the target pixel points with the same position area identification as the target position area identification comprises:
under the condition that the number of position areas in the target map exceeds a first preset value, acquiring a mask picture corresponding to the target map; pixel points in the mask picture correspond to pixel points of the target map one to one;
determining a position area identifier corresponding to each pixel point based on the Alpha channel of the target map and the value of at least one channel of the mask picture;
and determining target pixel points which are the same as the target position area identifications on the basis of the position area identifications corresponding to the pixel points.
6. The method of claim 1, wherein the color space is an RGBA color space;
analyzing the color space value to obtain a target position area identifier, including:
analyzing the color space value, and acquiring a mask picture corresponding to the target map under the condition that the value of an Alpha channel of the color space is detected to be a second preset value based on an analysis result; pixel points in the mask picture correspond to pixel points of the target map one to one;
and determining the value of the pixel point corresponding to the first trigger operation on the channel of the mask picture as the target position area identifier.
7. The method of claim 1, wherein, after the target position area identifier is obtained, the method further comprises:
determining edge pixel points corresponding to the target position area identification;
and rendering the edge pixel points according to the second target display state.
8. A map division method, comprising:
acquiring a map to be divided, and determining a position area identifier corresponding to each pixel point in the map to be divided;
storing the position area identification in the color space of the map to be divided, so that after any pixel point is triggered, based on the position area identification stored in the color space, the pixel point with the same target position area identification as the triggered pixel point is highlighted; wherein, the color space comprises an Alpha channel for storing the position area identification.
9. The method according to claim 8, wherein when the number of position areas corresponding to the map to be divided exceeds a first preset value, the method further comprises:
acquiring a mask picture corresponding to the map to be divided, wherein pixel points in the mask picture correspond to pixel points of the map to be divided one to one;
and determining a channel value of the mask picture based on the position area identifier corresponding to each pixel point in the map to be divided, so that after any pixel point is triggered, the pixel point identical to the target position area identifier of the triggered pixel point is highlighted based on the position area identifier stored in the color space and the channel value of the mask picture.
10. The method according to claim 8, wherein the determining the position area identifier corresponding to each pixel point in the map to be divided comprises:
acquiring regional boundary information corresponding to the map to be divided;
and determining a position area identifier corresponding to each pixel point in the map to be divided based on the area boundary information.
11. The method of claim 8, further comprising:
determining edge pixel points among different position areas based on position area identifications corresponding to all pixel points in the map to be divided;
and marking the edge pixel points so as to display the marks of the edge pixel points when the map to be divided is displayed.
12. An area selection device, comprising:
the determining module is used for responding to a first trigger operation aiming at a target map and determining a color space value of a pixel point corresponding to the first trigger operation; the color space of the pixel point comprises an Alpha channel for storing the position area identification;
the acquisition module is used for analyzing the color space value to acquire a target position area identifier;
and the rendering module is used for searching for target pixel points with the same position area identification as the target position area identification in the target map, and rendering the target pixel points according to a first target display state, wherein a position area formed by the target pixel points is a target position area corresponding to the first trigger operation.
13. A map division apparatus, characterized by comprising:
the determining module is used for acquiring a target map and determining a position area identifier corresponding to each pixel point in the target map;
the storage module is used for storing the position area identification in the color space of the target map, so that after any pixel point is triggered, the pixel point which is the same as the target position area identification of the triggered pixel point is highlighted based on the position area identification stored in the color space; wherein, the color space comprises an Alpha channel for storing the position area identification.
14. A computer device, comprising: a processor, a memory storing machine-readable instructions executable by the processor for executing the machine-readable instructions stored in the memory, the machine-readable instructions, when executed by the processor, causing the processor to perform the steps of the region selection method of any one of claims 1 to 7 or the steps of the map division method of any one of claims 8 to 11.
15. A computer-readable storage medium, characterized in that a computer program is stored thereon, which, when executed by a computer device, performs the steps of the region selection method of any one of claims 1 to 7, or performs the steps of the map division method of any one of claims 8 to 11.
CN202110638715.XA 2021-06-08 2021-06-08 Region selection method, map division method, device and computer equipment Active CN113238692B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110638715.XA CN113238692B (en) 2021-06-08 2021-06-08 Region selection method, map division method, device and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110638715.XA CN113238692B (en) 2021-06-08 2021-06-08 Region selection method, map division method, device and computer equipment

Publications (2)

Publication Number Publication Date
CN113238692A CN113238692A (en) 2021-08-10
CN113238692B true CN113238692B (en) 2023-04-18

Family

ID=77137162

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110638715.XA Active CN113238692B (en) 2021-06-08 2021-06-08 Region selection method, map division method, device and computer equipment

Country Status (1)

Country Link
CN (1) CN113238692B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117251081A (en) * 2022-08-31 2023-12-19 腾讯科技(深圳)有限公司 Method, apparatus, computer device and storage medium for detecting picked-up object
CN115984264B (en) * 2023-03-16 2023-08-01 北京路凯智行科技有限公司 Static obstacle filtering method based on map and map filter

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140123325A (en) * 2013-04-12 2014-10-22 삼성전자주식회사 Method and apparatus for filling color of image
CN106296773B (en) * 2016-08-12 2019-11-29 中国科学院计算技术研究所 A kind of method and apparatus generating multi-level polar plot
CN108581212A (en) * 2018-04-23 2018-09-28 长沙八思量信息技术有限公司 Fill method, device, equipment and the readable storage medium storing program for executing of laser marking
CN110276816A (en) * 2019-06-27 2019-09-24 北京金山安全软件有限公司 Regional color filling method and device, electronic equipment and computer readable storage medium
CN110852938B (en) * 2019-10-28 2024-03-19 腾讯科技(深圳)有限公司 Display picture generation method, device and storage medium
CN111127592B (en) * 2019-12-25 2023-11-28 超级魔方(北京)科技有限公司 Picture color filling method and device, electronic equipment and readable storage medium
CN112241714B (en) * 2020-10-22 2024-04-26 北京字跳网络技术有限公司 Method and device for identifying designated area in image, readable medium and electronic equipment

Also Published As

Publication number Publication date
CN113238692A (en) 2021-08-10

Similar Documents

Publication Publication Date Title
KR101579876B1 (en) Image processing apparatus, image processing method, and computer readable medium
CN113238692B (en) Region selection method, map division method, device and computer equipment
US9245043B2 (en) Embedded media markers and systems and methods for generating and using them
US11308655B2 (en) Image synthesis method and apparatus
KR101579873B1 (en) Image processing apparatus, image processing method, and computer readable medium
CN113344998B (en) Depth detection method and device, computer equipment and storage medium
US20120268464A1 (en) Method and device for processing spatial data
CN104428792A (en) Parameter selection and coarse localization of regions of interest for MSER processing
JPWO2013145295A1 (en) Color chart detection apparatus, color chart detection method, and computer program for color chart detection
CN113238829B (en) Background color generation method and device, electronic equipment and storage medium
JP2022511319A (en) Distance field color palette
CN112069339A (en) Background picture processing and search result display method, device, equipment and medium
US8830251B2 (en) Method and system for creating an image
CN112069341A (en) Background picture generation and search result display method, device, equipment and medium
CN112927307A (en) Calibration method, calibration device, electronic equipment and storage medium
CN110852132B (en) Two-dimensional code space position confirmation method and device
CN116862813A (en) Color calibration method and system for augmented reality technology
CN110457098B (en) Page local image sharing method, device, equipment and storage medium
CN104917963A (en) Image processing method and terminal
CN114119778A (en) Deep color mode generation method of user interface, electronic equipment and storage medium
CN116758206A (en) Vector data fusion rendering method and device, computer equipment and storage medium
CN113935891B (en) Pixel-style scene rendering method, device and storage medium
CN112150527B (en) Measurement method and device, electronic equipment and storage medium
CN111552755B (en) Drawing method, device and equipment for three-dimensional place name label and storage medium
JP5240305B2 (en) Subject identification program and camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant