WO2023179465A1 - Image texture extraction method and device, and computer readable storage medium - Google Patents


Info

Publication number
WO2023179465A1
Authority
WO
WIPO (PCT)
Prior art keywords
texture
window frame
pixel
color
comparison group
Prior art date
Application number
PCT/CN2023/082070
Other languages
French (fr)
Chinese (zh)
Inventor
张国流
Original Assignee
张国流
Priority date
Filing date
Publication date
Application filed by 张国流 filed Critical 张国流
Publication of WO2023179465A1 publication Critical patent/WO2023179465A1/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/40: Analysis of texture
    • G06T 7/49: Analysis of texture based on structural texture description, e.g. using primitives or placement rules
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/90: Determination of colour characteristics
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the present invention relates to the technical field of image processing, and in particular to an image texture extraction method, device, and computer-readable storage medium.
  • Texture extraction refers to the process of using certain technologies and methods to extract the target texture from an image containing a target and a background, while ignoring the influence of background and noise interference.
  • Edge detection refers to the process of identifying the points in an image where the brightness or color changes sharply, typically marking the boundaries of objects.
  • The traditional texture extraction method uses traditional edge detection operators to detect edge information with significant grayscale changes in the image.
  • Traditional edge detection technology mainly uses grayscale as the detection index, and often needs to convert complex images into grayscale images before performing edge detection.
  • the purpose of the present invention is to provide an image texture extraction method, device and computer-readable storage medium, which can effectively improve the completeness and precision of image texture extraction.
  • embodiments of the present invention provide an image texture extraction method, including:
  • All the sub-texture maps are superimposed by weights to synthesize a total texture map of the target image.
  • the color value of each pixel of the target image is converted into base N to obtain a digital array, including:
  • the color-scale values of the R, G, and B color channels of each pixel are expressed in base-N form, and the RGB value of each pixel is expressed as a 3 × M-digit array; here, base N is smaller than the base in which the color value is currently expressed, and M represents the number of digits corresponding to the maximum color-scale value of a color channel in base N.
  • the color values of each pixel of the target image are converted into base N to obtain a digital array for each pixel, including:
  • the target image is decomposed into several layers, including:
  • the digits at the same position of each color channel, across all pixels, are clustered and merged into one layer.
  • the window frame mask is used to extract texture information for each layer, and the sub-texture map of each layer is obtained, including:
  • Each layer is covered with the window frame mask; wherein the window frame mask is composed of several window frames;
  • each window frame uses at least two comparison groups to extract the texture of the layer area where it is located, obtaining the texture information of the layer area where each window frame is located; each comparison group consists of two detection points on opposite sides of the window frame;
  • the texture information of all window frames in the window frame mask is spliced to obtain a sub-texture map of the corresponding layer.
  • a first comparison group and a second comparison group are set on the boundary of each window frame, wherein the angle between the line connecting the two detection points of the first comparison group and the line connecting the two detection points of the second comparison group is 360°/2n, where n represents the number of comparison groups;
  • each window frame uses at least two pairs of comparison groups to extract the texture of the layer area where it is located, and obtain the texture information of the layer area where each window frame is located, including:
  • the window frame is divided into two according to the mid-perpendicular line of the first comparison group; the half of the window frame where the detection point with a value of 1 in the first comparison group is located is filled with color, and the other half is left uncolored;
  • the window frame is divided into two parts by the mid-perpendicular line of any two detection points with different values among the four detection points of the first and second comparison groups; the half of the window frame where the two detection points with a value of 1 are located is filled with color, and the half where the two detection points with a value of 0 are located is not filled with color;
  • the color filling of the window frame is the texture information of the corresponding window frame.
  • the texture information of all window frames in the window frame mask is spliced to obtain a sub-texture map of the corresponding layer, including:
  • the texture information of all window frames is merged according to its position in the window frame mask to form a sub-texture map of the layer where it is located.
  • the weight superposition of all sub-texture maps to synthesize the total texture of the target image includes:
  • All the sub-texture maps are superimposed by weights to synthesize a total texture map of the target image.
  • the method also includes:
  • texture color blocks are filtered on the total texture map to obtain the final texture map output result.
  • an image texture extraction device including:
  • the base conversion module is used to convert the color value of each pixel of the target image into a lower base to obtain a digital array for each pixel; the base of the color value of each pixel after conversion is lower than the base of the color value before conversion;
  • the layer decomposition module is used to decompose the target image into several layers according to the digital array;
  • a sub-texture map extraction module used to extract texture information using a window frame mask for each of the layers, and obtain a sub-texture map of each of the layers;
  • the total texture extraction module is used to superimpose the weights of all the sub-texture maps to synthesize the total texture map of the target image.
  • embodiments of the present invention provide an image texture extraction device, including: a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor.
  • the processor, when executing the computer program, implements the image texture extraction method as described in any one of the first aspects.
  • embodiments of the present invention provide a computer-readable storage medium.
  • the computer-readable storage medium includes a stored computer program; when the computer program runs, the device where the computer-readable storage medium is located is controlled to execute the image texture extraction method as described in any one of the first aspects.
  • the beneficial effect of the embodiments of the present invention is that, by performing base conversion on the color value of each pixel of the target image, a digital array of each pixel is obtained, and according to the digital array the target image is decomposed into several layers; a window frame mask is used to extract texture information for each layer to obtain a sub-texture map of each layer; the weights of all the sub-texture maps are then superimposed to synthesize the total texture map of the target image. The present invention increases the number of layers into which the target image is decomposed by lowering the base of the color values of its pixels, thereby effectively improving the completeness and accuracy of image texture extraction.
  • Figure 1 is a flow chart of an image texture extraction method provided by an embodiment of the present invention.
  • Figure 2 is a schematic diagram of image layer decomposition provided by an embodiment of the present invention.
  • Figure 3 is a schematic diagram of a window frame mask provided by an embodiment of the present invention.
  • Figure 4 is a schematic diagram of the detection point of the window frame provided by the embodiment of the present invention.
  • Figure 5 is a schematic diagram of texture filling provided by an embodiment of the present invention.
  • Figure 6 is a schematic diagram of layer merging provided by an embodiment of the present invention.
  • Figure 7 is a schematic diagram of image texture information provided by an embodiment of the present invention.
  • Figure 8 is a schematic diagram of layer weight overlay provided by an embodiment of the present invention.
  • Figure 9 is a schematic diagram of an image texture extraction device provided by an embodiment of the present invention.
  • Figure 10 is a schematic structural diagram of an image texture extraction device provided by an embodiment of the present invention.
  • An embodiment of the present invention provides an image texture extraction method, which can be called the Multi-layer Grids Contralateral Opponent extraction method (MGCO), which specifically includes:
  • S1: Convert the color value of each pixel of the target image into a lower base to obtain a digital array for each pixel, where the base of the color value after conversion is lower than the base before conversion;
  • Texture is essentially the difference in optical properties between two adjacent surfaces; texture exists if and only if there is a color difference between the two surfaces. Based on this, texture can be detected by detecting the color difference between two adjacent surfaces.
  • images can be represented in color using a variety of different color spaces, such as HSV, HSB, CMY, L*a*b*, etc.
  • RGB color space is used as an example to present the process steps of the present invention, but in principle, the method implemented in the present invention can be applied to any other color space to realize texture information extraction of images.
  • the color value of each pixel of the target image is converted into base N to obtain a digital array, including:
  • the color-scale values of the R, G, and B color channels of each pixel are expressed in base-N form, and the RGB value of each pixel is expressed as a 3 × M-digit array; here, base N is smaller than the base in which the color value is currently expressed, and M represents the number of digits corresponding to the maximum color-scale value of a color channel in base N.
  • the color value of each pixel of the target image can be converted from decimal to base N, with N < 10, thereby improving the layer separation of the image, for example to binary, ternary, or quaternary. The smaller the selected base, the more layers the target image is decomposed into, and the higher the accuracy of texture extraction.
  • each pixel of the target image is converted into base N to obtain a digital array of each pixel, including:
  • the decimal color value of each pixel is converted into binary. Since the RGB color space consists of three color channels (red, green, and blue) and the value of each color channel is an integer in [0, 255], there are 256 color-scale values per channel, so each color channel can be represented by 8 binary digits. If all three color channels are represented in binary, the RGB value of each pixel can be expressed as a digital array of 3 × 8 = 24 binary (0/1) signals. The color comparison between two surfaces can therefore be understood, in computer language, as the numerical comparison between two 3 × 8 digital arrays; the difference between the two digital arrays is the texture value between the two surfaces.
  • Embodiments of the present invention take advantage of the simplicity and clear contrast of binary signals to convert the task of identifying complete texture information into a task adapted to computer language, so that the computer can more simply and accurately extract all texture information in the target image.
  • the binary digital array corresponding to each color value follows the standard decimal-to-binary conversion; for example, the color value 0 corresponds to 00000000, 90 to 01011010, 200 to 11001000, and 255 to 11111111.
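As an illustrative sketch (not part of the patent text), the conversion of one RGB pixel into the 3 × 8 binary digital array described above can be written as follows; the function name and the digit ordering (digit 1 taken as the most significant bit) are assumptions:

```python
def pixel_to_digit_array(r, g, b):
    """Convert one RGB pixel (each channel a decimal integer 0-255)
    into the 3 x 8 = 24-digit binary array R1..R8, G1..G8, B1..B8.
    Digit 1 is assumed here to be the most significant bit."""
    def to_bits(value):
        # 8 binary digits, most significant first
        return [(value >> (7 - i)) & 1 for i in range(8)]
    return to_bits(r) + to_bits(g) + to_bits(b)

# Example: the pixel RGB = (200, 90, 20) used in the later window-frame example
digits = pixel_to_digit_array(200, 90, 20)
# R channel 200 -> 1 1 0 0 1 0 0 0
```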
  • the digits at the same position of each color channel, across all pixels, are clustered and merged into one layer each, resulting in a total of 24 layers.
  • each binary digit of each color channel generates one layer.
  • the color value of each pixel is composed of three color channels: R, G, and B.
  • Each color channel has an 8-bit 0/1 number. Name each number according to its "channel + binary digit". For example, the number located at the 6th digit of the R channel is named R6, the number located at the 4th digit of the G channel is named G4, and so on.
  • Each pixel contains R1~R8, G1~G8, and B1~B8, a total of 24 0/1 digits. The digits at the same position of each color channel, across all pixels, are clustered and merged separately.
  • the R1 digits of all pixels are clustered into one layer;
  • the R2 digits of all pixels are clustered into a second layer, and so on;
  • Each layer has the same number of pixels as the original image, but the pixel values only have two values: 0 and 1.
  • each layer is named after the digit that composes it; for example, the layer composed of the G5 digits of each pixel is called the G5 layer, and so on.
  • the 24 layers are layers R1~R8, G1~G8, and B1~B8, as shown in Figure 2.
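The 24-layer decomposition above can be sketched with NumPy bit-plane slicing; treating digit 1 as the most significant bit is an assumption, since the text does not fix the ordering:

```python
import numpy as np

def decompose_into_layers(image):
    """Decompose an H x W x 3 uint8 RGB image into the 24 binary layers
    R1~R8, G1~G8, B1~B8, one layer per bit position of each channel."""
    layers = {}
    for channel_index, channel_name in enumerate("RGB"):
        channel = image[:, :, channel_index].astype(np.uint8)
        for digit in range(1, 9):  # digit 1 = most significant bit (assumed)
            layers[f"{channel_name}{digit}"] = (channel >> (8 - digit)) & 1
    return layers

# Each layer has the same size as the image, with pixel values only 0 or 1.
```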
  • S3 Use a window frame mask to extract texture information for each layer, and obtain a sub-texture map of each layer;
  • weight values of all the sub-texture maps are superimposed to synthesize a total texture map of the target image.
  • Assign a weight to each sub-texture map according to the binary digit reflected in its name.
  • All the sub-texture maps are superimposed according to their weights to form a total texture map of the target image.
  • the total texture map is obtained by superimposing the weights of the sub-texture maps corresponding to the 24 layers, as shown in Figures 7 and 8.
  • the total texture map obtained at this time will describe the texture of the target image, and intuitively use RGB values to express the color difference value of each texture point.
  • Taking Figure 4 as an example, the principle of texture weight superposition in the window frame is illustrated below.
  • the RGB values on the left side of the window frame in Figure 8 are 200, 90, and 20, and the RGB values on the right side of the window frame are 210, 220, and 240.
  • after the sub-texture maps corresponding to the 24 layers are weighted and their values superimposed, the total texture color difference between the left and right sides of the window frame can be obtained.
  • the target image is divided into 24 layers, the texture of each layer is extracted, and the weights of the sub-texture maps of the layers are superimposed to obtain the total texture map, which can effectively improve the accuracy of image texture extraction and ensure that the extracted texture is complete and refined.
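To illustrate the superposition numerically (a sketch, assuming the digit at position k of a channel carries weight 2^(8−k), which the text implies through the binary naming but does not state explicitly), summing the signed, weighted per-digit differences of the 24 layers reconstructs the per-channel color difference between the two sides of the window frame:

```python
def total_color_difference(left_rgb, right_rgb):
    """Sum the weighted 0/1-digit differences of the 24 layers to get the
    total texture color difference per channel between two window-frame sides.
    The weight of digit k (1 = most significant, assumed) is 2**(8 - k)."""
    difference = []
    for left, right in zip(left_rgb, right_rgb):
        total = 0
        for k in range(1, 9):
            weight = 2 ** (8 - k)
            total += weight * (((right >> (8 - k)) & 1) - ((left >> (8 - k)) & 1))
        difference.append(total)
    return difference

# Window-frame sides from the example in the text:
print(total_color_difference((200, 90, 20), (210, 220, 240)))  # [10, 130, 220]
```

Under this weighting the 24 per-layer differences recombine exactly into the per-channel RGB difference, which matches the text's claim that the total texture map expresses color differences directly as RGB values.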
  • S3 Use a window frame mask to extract texture information for each layer, and obtain a sub-texture map of each layer, including:
  • Each layer is covered with the window frame mask; wherein the window frame mask is composed of several window frames;
  • each window frame uses at least two comparison groups to extract the texture of the layer area where it is located, obtaining the texture information of the layer area where each window frame is located; each comparison group consists of two detection points on opposite sides of the window frame;
  • points located on the boundary of the window frame are taken as detection points to detect the pixel value of the layer at their locations, and two detection points on opposite sides of the window frame form one comparison group.
  • the texture information of all window frames in the window frame mask is spliced to obtain a sub-texture map of the corresponding layer.
  • a first comparison group and a second comparison group are set at the boundary of each window frame, wherein the angle between the line connecting the two detection points of the first comparison group and the line connecting the two detection points of the second comparison group is 360°/2n, where n represents the number of comparison groups;
  • each window frame uses at least two pairs of comparison groups to extract the texture of the layer area where it is located, and obtain the texture information of the layer area where each window frame is located, including:
  • the window frame is divided into two according to the mid-perpendicular line of the first comparison group; the half of the window frame where the detection point with a value of 1 in the first comparison group is located is filled with color, and the other half is left uncolored;
  • the window frame is divided into two parts by the mid-perpendicular line of any two detection points with different values among the four detection points of the first and second comparison groups; the half of the window frame where the two detection points with a value of 1 are located is filled with color, and the half where the two detection points with a value of 0 are located is not filled with color;
  • the color filling of the window frame is the texture information of the corresponding window frame.
  • the window frame mask (contralateral antagonistic window frame mask) is composed of several window frames of customized size. For example, in the embodiment of the invention, if the window frame is set to a size of 3 × 3 pixels, then for an original target image of 11 × 11 pixels, each layer is covered by a mask composed of 5 × 5 window frames, and each window frame covers 3 × 3 = 9 pixels of the image, as shown in Figure 3.
  • each window frame has at least two comparison groups.
  • the characteristics of the comparison groups are: 1) a comparison group consists of two detection points located on the boundary of the window frame, symmetric about the window-frame center and located on opposite sides of the window frame; 2) the angle between the orientations of two comparison groups is 360°/2n, where n represents the number of comparison groups.
  • Texture recognition: extract the values of the pixels where the detection points of comparison groups A and B are located, and compare them. If the two detection points of comparison group A or B have the same value (both 0 or both 1), it is determined that there is no texture between the two detection points of that group; if the two detection points have different values (one 0 and one 1), it is determined that there is texture between them.
  • Texture extraction: the window frame uses the 4 detection points of comparison groups A and B to extract texture from the covered pixel area. According to the detection results of comparison groups A and B, the following texture results are output:
  • the window frame has no texture and no color filling.
  • the window frame is divided into two parts by the mid-perpendicular line of comparison group A, the half of the window frame where the detection point with a value of 1 in comparison group A is located is colored, and the other half is left uncolored;
  • the window frame is divided into two parts by the mid-perpendicular line of comparison group B, the half of the window frame where the detection point with a value of 1 in comparison group B is located is colored, and the other half is left uncolored;
  • the window frame is divided into two parts by the mid-perpendicular line of any two detection points with different values among the four detection points; the half of the window frame where the two detection points with a value of 1 are located is filled with color, and the other half is not filled;
  • comparison group A is horizontally oriented
  • comparison group B is vertically oriented
  • the window frame is divided into two parts by the mid-perpendicular line of comparison group A.
  • if the left detection point of comparison group A is 1 and the right detection point is 0, the left half of the window frame is filled with color and the right half remains colorless; if the 1 value is at the right detection point, the right half of the window frame is filled with color and the left half remains colorless;
  • the window frame is divided into two parts by the mid-perpendicular line of comparison group B.
  • if the upper detection point of group B is 1 and the lower detection point is 0, the upper half of the window frame is filled with color and the lower half remains colorless; otherwise, the lower half is filled with color and the upper half remains colorless;
  • the window frame is divided into two by the mid-perpendicular line of the two detection points with different values among the four detection points, and the half where the detection points with a value of 1 are located is filled with color.
  • if the left detection point and the upper detection point are 1, the upper-left half of the window frame is filled with color and the lower-right half remains colorless; if the right detection point and the lower detection point are 1, the lower-right half of the window frame is filled with color and the upper-left half remains colorless;
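The coloring rules for comparison groups A (horizontal) and B (vertical) described above can be collected into one decision function. This is a sketch: the function name and the string labels for the filled half are illustrative, not the patent's notation.

```python
def window_fill_half(a_left, a_right, b_top, b_bottom):
    """Decide which half of a window frame is color-filled, given the 0/1
    layer values under the four detection points of comparison groups
    A (left/right) and B (top/bottom). Returns None when there is no texture."""
    a_differs = a_left != a_right
    b_differs = b_top != b_bottom
    if not a_differs and not b_differs:
        return None  # both groups have equal values: no texture, no filling
    if a_differs and not b_differs:
        return "left" if a_left == 1 else "right"
    if b_differs and not a_differs:
        return "top" if b_top == 1 else "bottom"
    # both groups differ: fill the half containing the two 1-valued points,
    # bounded by the diagonal of the window frame
    if a_left == 1 and b_top == 1:
        return "top-left"
    if a_right == 1 and b_bottom == 1:
        return "bottom-right"
    if a_left == 1 and b_bottom == 1:
        return "bottom-left"
    return "top-right"
```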
  • the four detection points of the window frame can be divided into a pair in the vertical direction and a pair in the horizontal direction. Compare the values of the pixels where each pair of detection points is located. If either pair has differing values, there must be texture between those two detection points, that is, there is texture in the window frame; if neither pair has differing values, there is no texture in the window frame.
  • if the values of the pixels at a pair of detection points in the vertical direction of a window frame on layer B1 are 1 and 0, and the values at the pair of detection points in the horizontal direction are 0 and 0, then there is texture in the window frame;
  • if the values of the pixels at a pair of detection points in the vertical direction of another window frame on layer B1 are 0 and 0, and the values at the pair in the horizontal direction are 0 and 0, then there is no texture in the window frame.
  • the window frame can be divided into two parts using the mid-perpendicular line of a pair of detection points with different pixel values as the dividing line; the half of the window frame on the side with pixel value 1 is filled with color, and the half on the side with pixel value 0 is not filled. The window frame is filled with color according to the color weight of the corresponding layer.
  • the color weight depends on the color channel corresponding to the layer and the weight corresponding to its digit position; its weight y determines the fill color of the window frame.
  • the texture within the window frame not only has color difference attributes, but also orientation attributes.
  • the angle between the two pairs is also set to 360°/2n, where n represents the number of comparison groups, to help determine the orientation of the texture.
  • the color-filling process on the window frame specifically includes:
  • for layer R1, a horizontal comparison group and a vertical comparison group are set for each window frame.
  • the mid-perpendicular line of a pair of detection points with differing values gives the orientation of the texture; when both pairs of detection points have differing values, the orientation of the texture is the diagonal of the window frame.
  • when both pairs of comparison groups have the same values, that is, both are 0 or both are 1, there is no texture in the window frame.
  • the texture orientation within the window frame is shown in Figure 5.
  • the texture information of all window frames in the window frame mask is spliced to obtain a sub-texture map of the corresponding layer, including:
  • the texture information of all window frames is merged according to its position in the window frame mask to form a sub-texture map of the layer where it is located.
  • the sub-texture map is named according to the layer in which it is located.
  • the sub-texture map of the R1 layer is called the R1 sub-texture map, and so on. As shown in Figure 6.
  • the total texture map of the target image is the result of the weighted superposition of the textures in the window frames of the 24 layers. Since the texture direction of the window frame on each layer has 8 possibilities, and the texture direction of the same window frame may differ across layers, after the layers are superimposed the texture of a window frame in the total texture map often takes the form of an eight-direction "rice-character" (米) grid.
  • if the area of the window frame is S, the minimum color-block unit of the total texture map is a right triangle with an area of S/8. Therefore, although the present invention extracts the texture of the target image using the window frame as the unit, the resolution of the obtained total texture map is 8 times higher than that of the window frame mask.
  • when the window frame is set to 3 × 3 pixels, the area of each pixel is S/9 and the minimum resolvable color-patch area of the total texture map is S/8, so the minimum color patch of the total texture map is 1.125 times the area of a single pixel. Thus, with a 3 × 3 window frame, the final texture image approaches single-pixel detail resolution, similar to that of the original image, giving the extracted texture information good completeness and precision.
  • the present invention has excellent texture extraction performance for various image types, including abstract images, scatter plots, black and white images, color images, character images, environment images, etc.
  • the total texture map extracted by the present invention not only retains all texture information, but also uses RGB values to intuitively reflect the color-difference characteristics of each texture point. The rule that textures of the same object often have similar color-difference properties can therefore be used to separate the textures of different objects in the target image. This has great application prospects in subsequent image processing technologies such as object separation and object recognition.
  • the method further includes:
  • texture color blocks are filtered on the total texture map to obtain the final texture map output result.
  • This method can flexibly filter the texture information of the total texture map by setting the RGB value range, thereby obtaining texture results with different levels of simplicity.
  • the total texture map contains all the texture information of the target image and uses intuitive RGB values to display the color-difference characteristics of each texture point, giving the texture data contained in the total texture map a large mining space and excellent operability.
  • the total texture map is filtered through RGB value filtering conditions, thereby obtaining different texture results.
  • the color blocks in the total texture map that do not meet the RGB value filtering conditions are deleted directly, and the image formed by the remaining color blocks is a result image with clear texture direction, giving the texture result customizable characteristics.
  • the total texture map in the embodiment of the present invention includes all texture information with color-difference values of 1 to 255, so even textures that are easily overlooked or difficult to recognize with the naked eye can be detected.
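A minimal sketch of the RGB-range filtering described above; the inclusive per-channel range semantics and the deletion-to-zero behavior are assumptions for illustration:

```python
import numpy as np

def filter_by_rgb_range(total_texture_map, min_rgb, max_rgb):
    """Delete color blocks of the total texture map whose RGB color-difference
    values fall outside [min_rgb, max_rgb] in any channel; keep the rest."""
    texture = np.asarray(total_texture_map, dtype=np.int32)
    low = np.asarray(min_rgb, dtype=np.int32)
    high = np.asarray(max_rgb, dtype=np.int32)
    # keep a pixel only if every channel is inside the filtering range
    keep = np.all((texture >= low) & (texture <= high), axis=-1)
    filtered = np.zeros_like(texture)
    filtered[keep] = texture[keep]
    return filtered
```

Tightening or loosening `min_rgb`/`max_rgb` yields texture results with different levels of simplicity, as the text describes.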
  • a sub-regional ranking filtering method can also be used to filter the total texture map, specifically including:
  • n and m are preset constants.
  • Filtering the total texture map through the sub-region ranking screening method ensures that each sub-region retains a certain amount of texture, so that relatively strong textures in weak-texture areas are preserved. These relatively strong textures are often important textures that are difficult to extract due to light, shadow, and other environmental interference, so interference from ambient lighting on texture extraction is avoided.
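Since the text leaves the roles of the preset constants n and m unspecified, the following sketch adopts one plausible reading: divide a texture-strength map into an n × n grid of sub-regions and retain the m strongest texture values in each.

```python
import numpy as np

def subregion_ranking_filter(strength_map, n, m):
    """Keep the m strongest texture values in each of n x n sub-regions.
    The grid/top-m interpretation of the constants n and m is an assumption."""
    strength = np.asarray(strength_map, dtype=float)
    filtered = np.zeros_like(strength)
    height, width = strength.shape
    step_h = max(height // n, 1)
    step_w = max(width // n, 1)
    for i in range(0, height, step_h):
        for j in range(0, width, step_w):
            block = strength[i:i + step_h, j:j + step_w]
            k = min(m, block.size)
            threshold = np.sort(block, axis=None)[-k]  # m-th largest value
            mask = block >= threshold
            filtered[i:i + step_h, j:j + step_w][mask] = block[mask]
    return filtered
```

Because the threshold is computed per sub-region, a weak-texture region keeps its own strongest responses even when they would fall below a global cutoff, which is the behavior the text attributes to this method.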
  • an embodiment of the present invention provides an image texture extraction device, including:
  • the base conversion module 1 is used to convert the color value of each pixel of the target image into a lower base to obtain a digital array for each pixel; the base of the color value of each pixel after conversion is lower than the base of the color value before conversion;
  • the layer decomposition module 2 is used to decompose the target image into several layers according to the digital array;
  • the sub-texture map extraction module 3 is used to extract texture information using a window frame mask for each of the layers, and obtain the sub-texture map of each of the layers;
  • the total texture extraction module 4 is used to superpose the weights of all the sub-texture maps to synthesize the total texture map of the target image.
  • the base conversion module is used to convert the color values of each pixel of the target image into base N to obtain a digital array for each pixel;
  • the color-scale values of the R, G, and B color channels of each pixel are expressed in base-N form respectively, and the RGB value of each pixel is expressed as a 3 × M-digit array, where M represents the number of digits corresponding to the maximum color value of a pixel in base N.
  • system conversion module includes
  • a binary conversion unit used to perform binary conversion on the color values of each pixel of the target image to obtain a 3 ⁇ 8-bit digital array for each pixel;
  • the layer decomposition module includes:
  • the same digit clustering unit is used to cluster the digits of each color channel in all pixels into one layer according to the digital array of each pixel.
  • the sub-texture map extraction module includes:
  • a layer texture extraction unit is used to cover each layer with the window frame mask; wherein the window frame mask is composed of several window frames;
  • the window frame texture extraction unit is used, for each layer and each window frame, to extract the texture of the layer area where the window frame is located using at least two comparison groups, obtaining the texture information of that layer area; wherein each comparison group consists of two detection points on opposite sides of the window frame;
  • the window frame splicing unit is used to splice the texture information of all window frames in the window frame mask to obtain the sub-texture map of the corresponding layer.
  • a first comparison group and a second comparison group are set on the boundary of each window frame, wherein the angle between the line connecting the two detection points of the first comparison group and the line connecting the two detection points of the second comparison group is 360°/2n, where n represents the number of comparison groups;
  • the window frame texture extraction unit includes:
  • a numerical comparison unit used to respectively extract the layer pixel values of the locations where the detection points of the first comparison group and the second comparison group are located, and perform numerical comparison;
  • the window frame is divided in two along the perpendicular bisector of the first comparison group; the half of the window frame containing the detection point of the first comparison group with value 1 is filled with color, and the other half is not colored;
  • the color filling of the window frame is the texture information of the corresponding window frame.
  • the window frame splicing unit is specifically configured, after each window frame completes the color filling of its area, to merge the texture information of all window frames according to their positions in the window frame mask to form the sub-texture map of the layer.
  • the total texture extraction module is configured to superpose weights of all the sub-texture maps to synthesize a total texture map of the target image.
  • the device further includes:
  • the texture filtering module is used to filter the texture information of the total texture map according to the preset RGB value range to obtain the final total texture map.
  • an embodiment of the present invention provides an image texture extraction device, including at least one processor 11, such as a CPU, at least one network interface 14 or other user interface 13, a memory 15, and at least one communication bus 12.
  • the communication bus 12 is used to implement connection communication between these components.
  • the user interface 13 may optionally include a USB interface, other standard interfaces, and wired interfaces.
  • the network interface 14 may optionally include a Wi-Fi interface and other wireless interfaces.
  • the memory 15 may include high-speed RAM memory, and may also include non-volatile memory, such as at least one disk memory.
  • the memory 15 may optionally include at least one storage device located remotely from the aforementioned processor 11.
  • memory 15 stores the following elements, executable modules or data structures, or a subset thereof, or an extended set thereof:
  • the operating system 151, including various system programs, is used to implement various basic services and process hardware-based tasks;
  • the processor 11 is configured to call the program 152 stored in the memory 15 to execute the image texture extraction method described in the above embodiment, such as step S1 shown in FIG. 1.
  • the processor executes the computer program, it implements the functions of each module/unit in each of the above device embodiments, such as a binary conversion module.
  • the computer program may be divided into one or more modules/units, and the one or more modules/units are stored in the memory and executed by the processor to complete the present invention.
  • the one or more modules/units may be a series of computer program instruction segments capable of completing specific functions.
  • the instruction segments are used to describe the execution process of the computer program in the image texture extraction device.
  • the image texture extraction device may be a computing device such as VCU, ECU, BMS, etc.
  • the image texture extraction device may include, but is not limited to, a processor and a memory.
  • the schematic diagram is only an example of the image texture extraction device and does not constitute a limitation on the image texture extraction device; it may include more or fewer components than shown, combine certain components, or have different components.
  • the so-called processor 11 can be a micro control unit (Microcontroller Unit, MCU), a central processing unit (Central Processing Unit, CPU), or another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • the general-purpose processor can be a microprocessor, or the processor can be any conventional processor, etc.
  • the processor 11 is the control center of the image texture extraction device and uses various interfaces and lines to connect the various parts of the entire image texture extraction device.
  • the memory 15 may be used to store the computer programs and/or modules.
  • the processor 11 implements the various functions of the image texture extraction device by running or executing the computer programs and/or modules stored in the memory and calling the data stored in the memory.
  • the memory 15 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and the application programs required for at least one function (such as a sound playback function, an image playback function, etc.), and the data storage area may store data created based on the use of the device (such as audio data, a phone book, etc.).
  • the memory 15 may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, memory, plug-in hard disk, smart media card (Smart Media Card, SMC), secure digital (Secure Digital, SD) card, flash card, at least one disk storage device, flash memory device, or other non-volatile solid-state storage device.
  • if the integrated modules/units of the image texture extraction device are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium.
  • the present invention can implement all or part of the processes in the above embodiment methods by means of a computer program instructing the relevant hardware to complete them.
  • the computer program can be stored in a computer-readable storage medium. When executed by a processor, the computer program can implement the steps of each of the above method embodiments.
  • the computer program includes computer program code, which may be in the form of source code, object code, executable file or some intermediate form.
  • the computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electrical carrier signal, a telecommunications signal, a software distribution medium, etc.
  • Embodiments of the present invention provide a computer-readable storage medium.
  • the computer-readable storage medium includes a stored computer program.
  • when the computer program runs, the device where the computer-readable storage medium is located is controlled to execute the image texture extraction method described in the first aspect.
  • the device embodiments described above are only illustrative.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units.
  • a unit can be located in one place or distributed across multiple network units; some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • the connection relationships between modules indicate that communication connections exist between them, which can be specifically implemented as one or more communication buses or signal lines; persons of ordinary skill in the art can understand and implement this without creative effort.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

Disclosed in the present invention are an image texture extraction method and device, and a computer readable storage medium, the method comprising: performing number system conversion on color values of each pixel in a target image to obtain a digital array of each pixel, wherein the base to which the color values of each pixel are converted is less than the base from which the color values of the pixel are converted; according to the number of digits of each digital array, decomposing the target image into a plurality of image layers; performing texture information extraction on the image layers respectively by using a grid mask, so as to obtain a sub-texture map of each image layer; and performing weight superposition on all the sub-texture maps to synthesize an overall texture map of the target image. Compared with the traditional edge detection technique, the present invention can greatly improve the texture information extraction rate and texture information extraction precision for the target image, thus providing more complete texture and edge information for a downstream image information processing procedure.

Description

Image texture extraction method, device, and computer-readable storage medium

Technical Field
The present invention relates to the technical field of image processing, and in particular to an image texture extraction method, device, and computer-readable storage medium.
Background Art
Image edge and texture extraction plays an important role in the fields of human vision and computer vision. Texture extraction (edge detection) refers to the process of extracting the texture of a target from an image containing both target and background, using certain techniques and methods while ignoring the background and the influence of noise interference. At present, traditional texture extraction methods use classical edge detection operators to detect edge information with significant grayscale changes in the image. Traditional edge detection techniques mainly use grayscale as the detection index, and often need to convert a complex image into a grayscale image before performing edge detection. As a result, their output is often monotonous, the edge information is severely homogenized, and further subdivision is difficult. Moreover, because traditional edge detection algorithms respond well only to peaks in grayscale change, they usually detect only strong textures well, while their extraction of medium or weak textures is poor, losing a large amount of weaker but equally important detail information in the image. All of this results in traditional edge detection algorithms performing poorly in both the completeness and the fineness of texture information extraction.
Summary of the Invention
In view of the above problems, the purpose of the present invention is to provide an image texture extraction method, device, and computer-readable storage medium that can effectively improve the completeness and fineness of image texture extraction. In a first aspect, an embodiment of the present invention provides an image texture extraction method, including:
performing base conversion on the color value of each pixel of the target image to obtain a digital array for each pixel, where the base of a pixel's color value after conversion is lower than the base of its color value before conversion;
decomposing the target image into several layers according to the digital arrays;
extracting texture information from each layer using a window frame mask to obtain a sub-texture map of each layer;
superimposing the weights of all the sub-texture maps to synthesize the total texture map of the target image.
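The four steps above can be outlined end-to-end as follows. This is an illustrative sketch, not the patent's reference implementation: it assumes binary conversion (so 3×8 = 24 layers), substitutes a simple neighbour-difference check for the window frame mask of step three, and assumes equal layer weights in the superposition:

```python
import numpy as np

def extract_total_texture(image):
    """Sketch of the four-step pipeline on an H x W x 3 uint8 image."""
    # Steps 1 and 2: binary conversion, then decomposition into 24 bit layers.
    layers = [(image[:, :, ch] >> bit) & 1
              for ch in range(3) for bit in range(7, -1, -1)]
    # Step 3: extract a sub-texture map from each layer; here a simple
    # stand-in marks pixels whose right or down neighbour differs (the
    # patent uses a window frame mask with opposing detection points).
    subs = []
    for layer in layers:
        diff = np.zeros(layer.shape, dtype=np.uint8)
        diff[:, :-1] |= layer[:, :-1] != layer[:, 1:]
        diff[:-1, :] |= layer[:-1, :] != layer[1:, :]
        subs.append(diff)
    # Step 4: weighted superposition of all sub-texture maps; equal
    # weights are assumed here for illustration.
    return np.sum(subs, axis=0)
```

With a lower base such as ternary, the comprehension in the first step would emit base-N digit planes instead of bit planes, and the rest of the pipeline would be unchanged.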
As an improvement of the above solution, performing base conversion on the color value of each pixel of the target image to obtain a digital array includes:
converting the color value of each pixel of the target image to base N to obtain a digital array for each pixel;
where the color-scale values of the R, G, and B color channels of each pixel are each expressed in base-N form, so that the RGB value of each pixel is expressed as a 3×M-digit array; where base N is lower than the current base of the color values, and M represents the number of digits corresponding to the maximum color-scale value of a color channel in base N.
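The base-N conversion above can be sketched directly; the function names `to_base_n_digits` and `pixel_to_array` are illustrative, not from the patent:

```python
def to_base_n_digits(value, n):
    """Convert one color-scale value (0-255) to its fixed-width list of
    base-n digits, most significant first. M is the number of digits the
    maximum color-scale value 255 needs in base n."""
    m = 1
    while n ** m <= 255:
        m += 1
    digits = []
    for _ in range(m):
        digits.append(value % n)
        value //= n
    return digits[::-1]

def pixel_to_array(r, g, b, n=2):
    """Express one pixel's RGB value as a 3 x M digital array."""
    return [to_base_n_digits(c, n) for c in (r, g, b)]
```

For n = 2 this gives M = 8, so each pixel becomes a 3×8 array of 0/1 digits; for n = 3, M = 6, and so on.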
As an improvement of the above solution, converting the color value of each pixel of the target image to base N to obtain a digital array for each pixel includes:
performing base conversion on the color value of each pixel of the target image to obtain a 3×M-digit array for each pixel;
then, decomposing the target image into several layers according to the digital arrays includes:
clustering and merging, according to the digital array of each pixel, each digit of each color channel across all pixels into one layer.
As an improvement of the above solution, extracting texture information from each layer using a window frame mask to obtain the sub-texture map of each layer includes:
covering each layer with the window frame mask, where the window frame mask is composed of several window frames;
for each layer, using at least two comparison groups per window frame to extract the texture of the layer area where the window frame is located, obtaining the texture information of that layer area; where each comparison group consists of two detection points on opposite sides of the window frame;
splicing the texture information of all window frames in the window frame mask to obtain the sub-texture map of the corresponding layer.
As an improvement of the above solution, a first comparison group and a second comparison group are set on the boundary of each window frame, where the angle between the line connecting the two detection points of the first comparison group and the line connecting the two detection points of the second comparison group is 360°/2n, with n representing the number of comparison groups;
then, each window frame using at least two comparison groups to extract the texture of the layer area where it is located, obtaining the texture information of the layer area of each window frame, includes:
extracting the layer pixel values at the locations of the detection points of the first comparison group and the second comparison group respectively, and performing a numerical comparison;
when the two detection points of the first comparison group have the same value and the two detection points of the second comparison group have the same value, determining that no texture exists across either the first comparison group or the second comparison group, and performing no color filling;
when the two detection points of the first comparison group have different values and the two detection points of the second comparison group have the same value, determining that texture exists across the first comparison group and no texture exists across the second comparison group; the window frame is divided in two along the perpendicular bisector of the first comparison group, the half of the window frame containing the detection point of the first comparison group with value 1 is filled with color, and the other half is not filled;
when the two detection points of the first comparison group have different values and the two detection points of the second comparison group have different values, determining that a common texture exists between the first comparison group and the second comparison group; the window frame is divided in two along the perpendicular bisector of any two detection points with mutually different values among the four detection points of the first and second comparison groups, the half of the window frame containing the two detection points with value 1 is filled with color, and the half containing the two detection points with value 0 is not filled;
where the color filling of the window frame is the texture information of the corresponding window frame.
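The three-case decision above can be sketched for a single window frame. The mapping of detection points to halves of the frame, and the choice of which differing pair's bisector to use when both groups differ, are illustrative assumptions:

```python
def window_frame_texture(a1, a2, b1, b2):
    """Decide the fill for one window frame from its two comparison groups.

    (a1, a2) are the two opposing detection points of the first comparison
    group and (b1, b2) those of the second; all values are 0 or 1. Returns
    (fill_first_half, fill_second_half), where the frame is split along the
    perpendicular bisector of a pair with differing values. The concrete
    geometry of which half is "first" is an assumption for illustration.
    """
    if a1 == a2 and b1 == b2:
        # case 1: no texture crosses either comparison group, no fill
        return (0, 0)
    if a1 != a2:
        # cases 2 and 3: texture crosses the first group (possibly a common
        # texture with the second); fill the half holding the value-1 point
        return (1, 0) if a1 == 1 else (0, 1)
    # only the second group differs: split along its bisector instead
    return (1, 0) if b1 == 1 else (0, 1)
```

The filled/unfilled halves of all window frames then form the sub-texture map of the layer once spliced together.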
As an improvement of the above solution, splicing the texture information of all window frames in the window frame mask to obtain the sub-texture map of the corresponding layer includes:
after each window frame completes the color filling of its area, merging the texture information of all window frames according to their positions in the window frame mask to form the sub-texture map of the layer.
As an improvement of the above solution, superimposing the weights of all the sub-texture maps to synthesize the total texture of the target image includes:
superimposing the weights of all the sub-texture maps to synthesize the total texture map of the target image.
As an improvement of the above solution, the method further includes:
filtering the texture color blocks of the total texture map by setting an RGB value range, to obtain the final texture map output.
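A minimal sketch of the RGB-range screening, assuming the range is given as per-channel lower and upper bounds; the default bounds here are placeholders, not values from the patent:

```python
import numpy as np

def filter_texture_by_rgb(total, lo=(30, 30, 30), hi=(255, 255, 255)):
    """Keep only the texture color blocks of the total texture map whose
    RGB values fall inside the preset range [lo, hi]; blocks outside the
    range are deleted (set to black)."""
    lo = np.array(lo, dtype=np.uint8)
    hi = np.array(hi, dtype=np.uint8)
    # a pixel passes only if all three channels are inside the range
    mask = np.all((total >= lo) & (total <= hi), axis=-1)
    out = np.zeros_like(total)
    out[mask] = total[mask]
    return out
```

Tightening `lo` suppresses weak color-difference textures, while narrowing the band isolates textures of a chosen strength, which is what gives the output its customized character.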
In a second aspect, an embodiment of the present invention provides an image texture extraction apparatus, including:
a base conversion module, used to perform base conversion on the color value of each pixel of the target image to obtain a digital array for each pixel, where the base of a pixel's color value after conversion is lower than the base of its color value before conversion;
a layer decomposition module, used to decompose the target image into several layers according to the digital arrays;
a sub-texture map extraction module, used to extract texture information from each layer using a window frame mask, obtaining a sub-texture map of each layer;
a total texture extraction module, used to superimpose the weights of all the sub-texture maps to synthesize the total texture map of the target image.
In a third aspect, an embodiment of the present invention provides an image texture extraction device, including a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, where the processor, when executing the computer program, implements the image texture extraction method described in any one of the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, the computer-readable storage medium including a stored computer program, where, when the computer program runs, the device where the computer-readable storage medium is located is controlled to execute the image texture extraction method described in any one of the first aspect.
Compared with the prior art, the beneficial effects of the embodiments of the present invention are as follows: by performing base conversion on the color value of each pixel of the target image, a digital array of each pixel is obtained; according to the digital arrays, the target image is decomposed into several layers; texture information is extracted from each layer using a window frame mask to obtain a sub-texture map of each layer; and all the sub-texture maps are superimposed by weight to synthesize the total texture map of the target image. By lowering the base of the color values of the pixels of the target image, the present invention increases the number of layers into which the target image is decomposed, and can thereby effectively improve the completeness and precision of image texture extraction.
Brief Description of the Drawings
In order to explain the technical solution of the present invention more clearly, the drawings needed in the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; a person of ordinary skill in the art can obtain other drawings from them without creative effort.
Figure 1 is a flow chart of an image texture extraction method provided by an embodiment of the present invention;
Figure 2 is a schematic diagram of image decomposition provided by an embodiment of the present invention;
Figure 3 is a schematic diagram of a window frame mask provided by an embodiment of the present invention;
Figure 4 is a schematic diagram of the window frame detection points provided by an embodiment of the present invention;
Figure 5 is a schematic diagram of texture filling provided by an embodiment of the present invention;
Figure 6 is a schematic diagram of layer merging provided by an embodiment of the present invention;
Figure 7 is a schematic diagram of image texture information provided by an embodiment of the present invention;
Figure 8 is a schematic diagram of layer weight superposition provided by an embodiment of the present invention;
Figure 9 is a schematic diagram of an image texture extraction apparatus provided by an embodiment of the present invention;
Figure 10 is a schematic structural diagram of an image texture extraction device provided by an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the scope of protection of the present invention.
Referring to Figure 1, an embodiment of the present invention provides an image texture extraction method, which may be called the Multi-layer Grids Contralateral Opponent extraction method (MGCO), specifically including:
S1: performing base conversion on the color value of each pixel of the target image to obtain a digital array for each pixel, where the base of a pixel's color value after conversion is lower than the base of its color value before conversion;
Texture is in fact the difference in optical properties between two surfaces: texture exists if and only if a color difference exists between the two surfaces. Based on this, texture can be detected by detecting the color difference value between two surfaces. Specifically, an image can be represented in many different color spaces, such as HSV, HSB, CMY, or L*a*b*. In the implementation of the present invention, the RGB color space is used as an example to present the process steps, but in principle the method implemented in the present invention can be applied with any other color space to extract the texture information of an image.
Further, performing base conversion on the color value of each pixel of the target image to obtain a digital array includes:
converting the color value of each pixel of the target image to base N to obtain a digital array for each pixel;
where the color-scale values of the R, G, and B color channels of each pixel are each expressed in base-N form, so that the RGB value of each pixel is expressed as a 3×M-digit array; where base N is lower than the current base of the color values, and M represents the number of digits corresponding to the maximum color-scale value of a color channel in base N.
In the embodiment of the present invention, the color value of each pixel of the target image can be converted from decimal to base N with N < 10, for example to binary, ternary, or quaternary, thereby improving the layer separability of the image. The smaller the chosen base, the more layers the target image is decomposed into, and the higher the precision of texture extraction.
Further, converting the color value of each pixel of the target image to base N to obtain a digital array for each pixel includes:
performing binary conversion on the color value of each pixel of the target image to obtain a 3×8-bit digital array for each pixel;
In the embodiment of the present invention, the decimal color values of the pixels are preferably converted to binary. Since the RGB color space consists of the three color channels red, green, and blue, and each color channel takes integer values in [0, 255], a total of 256 color-scale values, each color channel can be represented by 8 binary bits. If all three color channels are represented in binary, the RGB value of each pixel can be expressed as a digital array of 3×8 = 24 0/1 signals. Thus, in computer language, the comparison of color values between two surfaces can be understood as a numerical comparison between two 3×8 digital arrays, and the difference between the two digital arrays is the texture value between the two surfaces.
The embodiment of the present invention takes advantage of the simplicity and clear contrast of binary signals to convert the task of identifying complete texture information into a task suited to computer language, so that the computer can extract all the texture information in the target image more simply and accurately.
示例性的,对于颜色值为RGB(50,120,200)的像素,其颜色值对应的二进制数字阵列如下表所呈现。
For example, for a pixel with a color value of RGB (50, 120, 200), the binary number array corresponding to the color value is presented in the following table.
where:
R50 = 128×0 + 64×0 + 32×1 + 16×1 + 8×0 + 4×0 + 2×1 + 1×0;
G120 = 128×0 + 64×1 + 32×1 + 16×1 + 8×1 + 4×0 + 2×0 + 1×0;
B200 = 128×1 + 64×1 + 32×0 + 16×0 + 8×1 + 4×0 + 2×0 + 1×0;
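The decimal-to-binary decomposition above can be sketched in Python (a minimal illustration only; the function name `rgb_to_bit_array` is ours, not the patent's):

```python
def rgb_to_bit_array(r, g, b):
    """Convert an RGB color value into a 3x8 array of 0/1 digits.

    Position 1 of the returned lists is the most significant bit
    (weight 128) and position 8 the least significant (weight 1)."""
    def to_bits(v):
        # Extract the 8 bits of v, most significant first
        return [(v >> (8 - i)) & 1 for i in range(1, 9)]
    return [to_bits(r), to_bits(g), to_bits(b)]

# The example pixel RGB(50, 120, 200):
bits = rgb_to_bit_array(50, 120, 200)
# Reconstruct each channel from its bits to confirm the decomposition
weights = [128, 64, 32, 16, 8, 4, 2, 1]
assert sum(w * d for w, d in zip(weights, bits[0])) == 50
assert sum(w * d for w, d in zip(weights, bits[1])) == 120
assert sum(w * d for w, d in zip(weights, bits[2])) == 200
```

The bit with weight 2^(n−1) corresponds to the digit named n in the R1–R8, G1–G8, B1–B8 convention described in the text.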
S2: Decomposing the target image into several layers according to the digit array;
Further, after the binary conversion, according to the digit array of each pixel, the digits at each position of each color channel across all pixels are clustered and merged into one layer each, yielding 24 layers in total.
Each bit position of each color channel generates one layer. The color value of every pixel is composed of the three color channels R, G, and B, and each channel has 8 binary (0/1) digits. Each digit is named by its channel plus its binary position: for example, the digit at position 6 of the R channel is named R6, the digit at position 4 of the G channel is named G4, and so on. Each pixel therefore contains 24 binary digits: R1–R8, G1–G8, and B1–B8. The digits at the same position of the same channel are clustered and merged across all pixels; for example, the R1 digits of all pixels merge into one layer, the R2 digits of all pixels merge into a second layer, and so on, finally yielding 24 layers. Each layer has the same number of pixels as the original image, but each of its pixels takes only the value 0 or 1.
Each layer is named after the bit position that composes it; for example, the layer merged from the G5 digits of all pixels is called the G5 layer, and so on. The 24 layers are thus layers R1–R8, G1–G8, and B1–B8, as shown in Figure 2.
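The clustering of same-position digits into 24 binary layers can be sketched as follows (an illustrative implementation under the assumption that the image is a 2-D list of (R, G, B) tuples; the function name is ours):

```python
def decompose_into_layers(image):
    """Split an image (2-D list of (R, G, B) tuples) into 24 binary layers.

    Layer names follow the text's convention: R1..R8, G1..G8, B1..B8,
    where the digit at position n has weight 2**(n-1)."""
    layers = {}
    for ci, channel in enumerate("RGB"):
        for n in range(1, 9):  # binary position, weight 2**(n-1)
            layers[f"{channel}{n}"] = [
                [(pixel[ci] >> (n - 1)) & 1 for pixel in row]
                for row in image
            ]
    return layers

image = [[(50, 120, 200), (0, 255, 128)]]   # a 1x2 toy image
layers = decompose_into_layers(image)
assert len(layers) == 24
assert layers["R2"][0][0] == 1   # 50 = 0b00110010, the weight-2 bit is 1
```

Each resulting layer has the same dimensions as the input image and holds only 0/1 values, as the text describes.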
S3: Extracting texture information from each layer using a window frame mask to obtain a sub-texture map of each layer;
Further, all the sub-texture maps are superimposed by weight to synthesize the total texture map of the target image.
Assigning weights to the sub-texture maps: each sub-texture map is assigned a weight according to the binary position reflected in its name. The relationship between the weight y and the binary position n is y = 2^(n-1). For example, the weight of the G5 sub-texture map is y(G5) = 2^4 = 16; that is, the weight of the G5 sub-texture map is a green value of 16.
All the sub-texture maps are superimposed according to their weights to form the total texture map of the target image.
For example, the total texture map obtained by superimposing the weighted sub-texture maps of the 24 layers is shown in Figures 7 and 8. The total texture map obtained in this way traces the texture of the target image and expresses the color difference of each texture point directly as RGB values.
The principle of superimposing texture weights within a window frame is explained taking Figure 4 as an example. In Figure 8, the RGB values on the left side of the window frame are 200, 90, 20, and those on the right side are 210, 220, 240; after the sub-texture maps of the 24 layers are superimposed by weight, the total texture color difference between the left and right sides of the window frame is obtained as:
R = layer R5 + layer R4 + layer R2 = -16 + 8 - 2 = -10;
G = layer G8 + layer G3 + layer G2 = -128 - 4 + 2 = -130;
B = layer B8 + layer B7 + layer B6 + layer B3 = -128 - 64 - 32 + 4 = -220.
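These three totals can be reproduced by accumulating the signed per-bit contributions of the 24 layers; because each binary position n contributes ±2^(n−1) wherever the two sides disagree, the weighted superposition necessarily equals the direct channel difference (a sketch; the function name is ours):

```python
def bitwise_color_difference(left, right):
    """Signed per-channel difference accumulated bit by bit.

    Each binary position n contributes +/- 2**(n-1) wherever the two
    sides disagree, which is exactly the per-layer weight superposition
    described in the text."""
    diffs = []
    for lv, rv in zip(left, right):
        total = 0
        for n in range(1, 9):
            weight = 2 ** (n - 1)
            total += (((lv >> (n - 1)) & 1) - ((rv >> (n - 1)) & 1)) * weight
        diffs.append(total)
    return diffs

# The window-frame example: left side RGB(200, 90, 20), right side RGB(210, 220, 240)
print(bitwise_color_difference((200, 90, 20), (210, 220, 240)))  # [-10, -130, -220]
```

The output matches the R, G, and B totals computed above.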
In the implementation of the present invention, the target image is divided into 24 layers based on the 8 binary digits of each of the R, G, and B channels of the digit array; texture is then extracted from each layer separately, and the sub-texture maps of the layers are superimposed by weight to obtain the total texture map. This effectively improves the precision of image texture extraction and ensures that the extracted texture is complete and fine.
In an optional embodiment, S3, extracting texture information from each layer using a window frame mask to obtain a sub-texture map of each layer, includes:
Covering each layer with the window frame mask, where the window frame mask is composed of several window frames;
For each layer, each window frame uses at least two comparison groups to extract texture from the layer region it covers, obtaining the texture information of the layer region where each window frame is located; each comparison group is composed of two detection points on opposite sides of the window frame;
Points located on the boundary of the window frame are taken as detection points, used to read the layer pixel value at their positions, and two detection points on opposite sides of the window frame form one comparison group.
Splicing the texture information of all window frames in the window frame mask to obtain the sub-texture map of the corresponding layer.
Further, a first comparison group and a second comparison group are set on the boundary of each window frame, where the angle between the line connecting the two detection points of the first comparison group and the line connecting the two detection points of the second comparison group is 360°/2n, with n denoting the number of comparison groups;
Then, each window frame uses at least two comparison groups to extract texture from the layer region it covers, obtaining the texture information of the layer region where each window frame is located, which includes:
Extracting the layer pixel values at the positions of the detection points of the first and second comparison groups, respectively, and comparing the values;
When the two detection points of the first comparison group have the same value and the two detection points of the second comparison group have the same value, it is determined that no texture exists within either the first or the second comparison group, and no color filling is performed;
When the two detection points of the first comparison group have different values and the two detection points of the second comparison group have the same value, it is determined that texture exists within the first comparison group and no texture exists within the second; the window frame is divided in two along the perpendicular bisector of the first comparison group, the half containing the detection point whose value is 1 in the first comparison group is color-filled, and the other half is left unfilled;
When the two detection points of the first comparison group have different values and the two detection points of the second comparison group also have different values, it is determined that a common texture exists across the first and second comparison groups; the window frame is divided in two along the perpendicular bisector of any two mutually different-valued detection points among the four detection points of the two comparison groups, the half containing the two detection points whose value is 1 is color-filled, and the half containing the two detection points whose value is 0 is left unfilled;
The color filling of a window frame constitutes the texture information of the corresponding window frame.
In this embodiment of the present invention, the window frame mask (a contralateral-antagonistic window frame mask) is composed of several window frames of user-defined size. For example, if the window frame is set to a size of 3×3 pixels, then for an original target image of 11×11 pixels each layer is covered by a mask composed of 5×5 window frames, each window frame covering 3×3 = 9 pixels of the image, as shown in Figure 3.
Structure of the window frame: as shown in Figure 4, each window frame is given at least two comparison groups, whose characteristics are: 1) a comparison group consists of two detection points located on the boundary of the window frame, symmetric about its midpoint and located on opposite sides of the frame; 2) the angle between the orientations of two comparison groups is 360°/2n, where n denotes the number of comparison groups. Texture information extraction for a window frame is explained taking a first comparison group A and a second comparison group B as an example:
Texture recognition: the values of the pixels at the detection points of comparison groups A and B are extracted and compared. If the two detection points of group A (or B) have the same value (both 0, or both 1), it is determined that no texture exists within that comparison group; if the two detection points of group A (or B) have different values (one 0 and one 1), it is determined that texture exists within that comparison group;
Texture extraction: the window frame uses the four detection points of comparison groups A and B together to extract texture from the covered pixel region. According to the different detection results of groups A and B, the texture results are output as follows:
1) If the two detection points of group A have the same value and those of group B have the same value: the window frame has no texture and is not filled;
2) If the detection points of group A have different values and those of group B have the same value: the window frame is divided in two along the perpendicular bisector of group A, the half containing the group-A detection point whose value is 1 is color-filled, and the other half is left unfilled;
3) If the detection points of group A have the same value and those of group B have different values: the window frame is divided in two along the perpendicular bisector of group B, the half containing the group-B detection point whose value is 1 is color-filled, and the other half is left unfilled;
4) If the detection points of group A have different values and those of group B have different values: the window frame is divided in two along the perpendicular bisector of any two mutually different-valued detection points among the four, the half containing the two detection points whose value is 1 is color-filled, and the other half is left unfilled;
For example, assume that comparison group A is oriented horizontally and comparison group B vertically.
When the detection points of group A have the same value and those of group B also have the same value, it is determined that no texture exists within the window frame;
When the detection points of group A have different values and those of group B have the same value, the window frame is divided in two along the perpendicular bisector of group A. If the left detection point of group A is 1 and the right is 0, the left half of the window frame is color-filled and the right half remains uncolored; if the value 1 is at the right detection point, the right half is color-filled and the left half remains uncolored;
When the detection points of group A have the same value and those of group B have different values, the window frame is divided in two along the perpendicular bisector of group B. If the upper detection point of group B is 1 and the lower is 0, the upper half of the window frame is color-filled and the lower half remains uncolored; otherwise the lower half is color-filled and the upper half remains uncolored;
When the detection points of group A have different values and those of group B also have different values, the window frame is divided in two along the perpendicular bisector of two mutually different-valued detection points among the four, and the half containing the two detection points whose value is 1 is color-filled. If the left and upper detection points are 1, the upper-left side of the window frame is color-filled and the lower-right side remains uncolored; if the right and lower detection points are 1, the lower-right side is color-filled and the upper-left side remains uncolored;
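The four cases above, for a horizontal group A and a vertical group B, can be sketched as a small decision function (an illustrative encoding in which the fill result is returned as a string naming the 1-valued side; the function name and representation are assumptions, not the patent's):

```python
def window_frame_fill(a_pair, b_pair):
    """Decide the fill of a window frame from two comparison groups.

    a_pair: (left, right) detection-point values of the horizontal group A.
    b_pair: (top, bottom) detection-point values of the vertical group B.
    Returns None when no texture is present, otherwise the half of the
    frame to fill, described by the side(s) holding the 1-valued points."""
    a_diff = a_pair[0] != a_pair[1]
    b_diff = b_pair[0] != b_pair[1]
    if not a_diff and not b_diff:
        return None                                    # case 1: no texture, no fill
    if a_diff and not b_diff:
        return "left" if a_pair[0] == 1 else "right"   # case 2: split along A's bisector
    if not a_diff and b_diff:
        return "top" if b_pair[0] == 1 else "bottom"   # case 3: split along B's bisector
    # case 4: both groups differ -> diagonal split toward the two 1-valued points
    vert = "top" if b_pair[0] == 1 else "bottom"
    horiz = "left" if a_pair[0] == 1 else "right"
    return vert + "-" + horiz

assert window_frame_fill((0, 0), (1, 1)) is None
assert window_frame_fill((1, 0), (0, 0)) == "left"
assert window_frame_fill((1, 1), (0, 1)) == "bottom"
assert window_frame_fill((1, 0), (1, 0)) == "top-left"
```

The string result stands in for the actual color filling of the corresponding half of the frame.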
Taking the midpoints of the four sides of the window frame as the four detection points, with each pair of mutually opposite points forming one comparison group, the four detection points of the window frame can be divided into a vertical pair and a horizontal pair. The pixel values at each pair of detection points are compared: if any pair shows different values in the comparison, texture must exist between those two detection points, i.e. texture exists within the window frame; if neither pair shows different values, no texture exists within the window frame. For example, if the pixel values at the vertical pair of detection points of one window frame on layer B1 are 1 and 0 while those at the horizontal pair are 0 and 0, texture exists within that window frame; if, for another window frame on layer B1, the vertical pair reads 0 and 0 and the horizontal pair also reads 0 and 0, no texture exists within that window frame. Specifically, the window frame can be divided in two using the perpendicular bisector of a pair of detection points with different pixel values as the dividing line, the half of the window frame on the side whose pixel value is 1 being color-filled and the half on the side whose pixel value is 0 left unfilled. Each window frame is color-filled according to the color weight of its layer, which is determined by the color channel corresponding to that layer and the weight of its binary position; the relationship between the weight y and the binary position n is y = 2^(n-1). For example, the window frames of layer R1 are filled with R = 1, those of layer G5 with G = 16, and those of layer B8 with B = 128.
Considering that the texture within a window frame has not only a color-difference attribute but also an orientation attribute, the implementation of the present invention also sets the angle between the two pairs of detection points to 360°/2n, with n denoting the number of comparison groups, to help determine the orientation of the texture. The color filling of the window frame then specifically includes:
Taking layer R1 as an example, a horizontal comparison group and a vertical comparison group are set for every window frame in layer R1. When only one pair of detection points has different values, i.e. one is 1 and the other 0, the perpendicular bisector of that pair gives the texture orientation; when both pairs of detection points have different values, the texture orientation is a diagonal of the window frame. The two pairs of detection points give 16 possible combinations in total. When both pairs have equal values, i.e. both 0 or both 1, there is no texture within the window frame. The texture orientations within the window frame are shown in Figure 5. Comparing pixel values across multiple pairs of detection points improves the accuracy of the texture orientation within the window frame.
In an optional embodiment, splicing the texture information of all window frames in the window frame mask to obtain the sub-texture map of the corresponding layer includes:
After each window frame completes the color filling of its region, merging the texture information of all window frames according to their positions in the window frame mask, forming the sub-texture map of the layer.
A sub-texture map is named after the layer it belongs to; for example, the sub-texture map of layer R1 is called the R1 sub-texture map, and so on, as shown in Figure 6.
In this embodiment of the present invention, the total texture map of the target image is the result of superimposing the weighted in-frame textures of the 24 layers. Since the texture direction within a window frame has 8 possibilities on each layer, and the texture direction of the same window frame may differ across layers, after the layers are superimposed the texture of a window frame in the total texture map often takes a "rice-character" (米) grid form. Let the area of a window frame be S; the minimum color-block unit of the total texture map is then a right triangle of area S/8. Therefore, by extracting the texture of the target image in units of window frames, the present invention obtains a total texture map whose resolution is 8 times that of the window frame mask. When the window frame is set to a size of 3×3 pixels, the area of each pixel is S/9 while the minimum resolvable color-block area of the total texture map is S/8, from which it can be inferred that the minimum color-block area of the total texture map is 1.125 times that of a single pixel. Thus, with a 3×3-pixel window frame, the final total texture map approaches single-pixel detail resolution, close to that of the original image, so that the extracted texture information is complete and fine.
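The resolution figures above can be checked with a line of arithmetic:

```python
# With a 3x3-pixel window frame, take one pixel's area as the unit:
S = 9.0                 # window-frame area (9 pixel-areas)
min_block = S / 8       # smallest color block: right triangle of area S/8
pixel_area = S / 9      # area of a single pixel
assert min_block / pixel_area == 1.125   # minimum block = 1.125 pixel areas
```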
Traditional edge-detection algorithms perform well on grayscale images with pronounced texture, but poorly on color images or on natural-scene images whose texture is weak or complex. By contrast, the present invention shows excellent texture-extraction performance on all image types, including abstract images, scatter plots, black-and-white images, color images, portraits, environmental scenes, and so on.
Since the total texture map extracted by the present invention not only retains all texture information but also expresses the color-difference character of each texture point directly as RGB values, the rule that textures of the same object tend to have similar color-difference characteristics can be exploited to separate the textures of different objects in the target image (illustration required). This also holds great promise for subsequent image-processing techniques such as object separation and object recognition.
In an optional embodiment, the method further includes:
Screening texture color blocks of the total texture map by setting an RGB value range, so as to obtain the final texture-map output.
The method can flexibly filter the texture information of the total texture map by setting RGB value ranges, thereby obtaining texture results at different levels of simplification.
In this embodiment of the present invention, the total texture map contains all the texture information of the target image and displays the color-difference character of each texture point with intuitive RGB values, so the texture data contained in the total texture map offers great room for mining together with very simple operability. For example, RGB-value screening conditions with different numerical ranges can be set and used to filter the total texture map, yielding different texture results. Setting the screening condition R, G, B ∈ (8, 255] filters out all texture color blocks whose R, G, and B values are simultaneously less than 8; alternatively, the screening condition can be tightened to R, G, B ∈ (16, 255], filtering out texture color blocks whose RGB values are simultaneously less than 16 and further simplifying the texture. By setting different RGB-value screening conditions, the color blocks of the total texture map that do not satisfy the condition are deleted directly, and the image formed by the remaining color blocks is a result map with clear texture trends, giving the texture result a customizable character.
Moreover, since the total texture map of this embodiment of the present invention includes all texture information with color-difference values from 1 to 255, it can detect textures that even the naked eye easily overlooks or finds hard to identify.
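One plausible reading of the R, G, B ∈ (8, 255] screening condition — removing any block whose three color-difference values all fall at or below the threshold — can be sketched as follows (the function name and the representation of a block as an (R, G, B) tuple are our own assumptions):

```python
def filter_texture_blocks(blocks, low=8):
    """Keep only texture color blocks satisfying R, G, B in (low, 255].

    A block is represented here as an (R, G, B) color-difference tuple;
    a block is removed when all three of its values are at or below the
    threshold, mirroring the R, G, B in (8, 255] condition in the text."""
    return [rgb for rgb in blocks if not all(v <= low for v in rgb)]

blocks = [(10, 130, 220), (3, 5, 7), (20, 2, 1)]
print(filter_texture_blocks(blocks, low=8))   # the (3, 5, 7) block is removed
```

Raising `low` to 16 reproduces the tightened R, G, B ∈ (16, 255] condition described above.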
In other embodiments, the total texture map may also be filtered using a region-wise ranking screening method, specifically including:
Dividing the total texture map into i columns and j rows, i×j sub-regions in total, and screening color blocks independently within each sub-region. The screening condition is set to delete all color blocks whose color-difference value ranks in the bottom n% within the sub-region, and to filter out extremely weak textures whose RGB values are less than m, where n and m are preset constants.
Filtering the total texture map with the region-wise ranking screening method guarantees that some texture is retained in every sub-region, so that the relatively strong textures within weak-texture regions are preserved; these relatively strong textures are often precisely the important textures that are hard to extract because of interference from the lighting environment, and the method thus avoids interference of ambient illumination with texture extraction.
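The region-wise ranking screen can be sketched for a single sub-region as follows (the function name and the representation of blocks as scalar color-difference strengths are our own assumptions; the parameters n and m follow the description above, and the values used are illustrative):

```python
def region_rank_filter(region_blocks, n=30, m=4):
    """Region-wise ranking filter applied to one sub-region.

    region_blocks: list of scalar color-difference strengths for the
    blocks in one of the i x j sub-regions. Blocks whose strength ranks
    in the bottom n percent of the sub-region are deleted, and extremely
    weak textures with strength below m are filtered out as well."""
    if not region_blocks:
        return []
    ranked = sorted(region_blocks)
    cutoff_index = int(len(ranked) * n / 100)   # boundary of the bottom n%
    cutoff = ranked[min(cutoff_index, len(ranked) - 1)]
    return [v for v in region_blocks if v >= cutoff and v >= m]

strengths = [1, 2, 5, 9, 12, 30, 40, 55, 60, 80]
print(region_rank_filter(strengths, n=30, m=4))   # [9, 12, 30, 40, 55, 60, 80]
```

Because the cutoff is computed per sub-region, weak-texture regions keep their relatively strongest blocks rather than being emptied by a global threshold.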
Embodiment 2
Referring to Figure 9, an embodiment of the present invention provides an image texture extraction device, including:
A base-conversion module 1, configured to perform base conversion on the color value of each pixel of the target image to obtain a digit array for each pixel, wherein the base of the pixel color values after conversion is lower than the base of the pixel color values before conversion;
A layer decomposition module 2, configured to decompose the target image into several layers according to the digit array;
A sub-texture-map extraction module 3, configured to extract texture information from each layer using a window frame mask, obtaining a sub-texture map of each layer;
A total-texture extraction module 4, configured to superimpose all the sub-texture maps by weight to synthesize the total texture map of the target image.
In an optional embodiment, the base-conversion module is configured to convert the color value of each pixel of the target image into base N to obtain a digit array for each pixel;
wherein the level values of the R, G, and B color channels of each pixel are each expressed in base-N form, so that the RGB value of each pixel is expressed as a 3×M-digit array, where M denotes the number of digits of a pixel's maximum color value in base N.
In an optional embodiment, the base-conversion module includes:
A binary conversion unit, configured to perform binary conversion on the color value of each pixel of the target image, obtaining a 3×8-bit digit array for each pixel;
The layer decomposition module then includes:
A same-digit clustering unit, configured to cluster and merge, according to the digit array of each pixel, the digits at each position of each color channel across all pixels into one layer each.
In an optional embodiment, the sub-texture-map extraction module includes:
A layer texture extraction unit, configured to cover each layer with the window frame mask, the window frame mask being composed of several window frames;
A window-frame texture extraction unit, configured so that, for each layer, each window frame uses at least two comparison groups to extract texture from the layer region it covers, obtaining the texture information of the layer region where each window frame is located, each comparison group being composed of two detection points on opposite sides of the window frame;
A window-frame splicing unit, configured to splice the texture information of all window frames in the window frame mask to obtain the sub-texture map of the corresponding layer.
在一种可选的实施例中,在每个窗框的边界设置第一对比组和第二对比组,其中,所述第一对比组的两个检测点的连线与所述第二对比组中两个检测点的连线之间的夹角为360°/2n,n表示对比组数量;In an optional embodiment, a first comparison group and a second comparison group are set at the boundary of each window frame, wherein a line connecting the two detection points of the first comparison group and the second comparison group are The angle between the lines connecting the two detection points in the group is 360°/2n, where n represents the number of comparison groups;
The window frame texture extraction unit includes:
A value comparison unit, configured to extract the layer pixel values at the positions of the detection points of the first comparison group and the second comparison group, respectively, and to compare the values;
When the two detection points of the first comparison group have equal values and the two detection points of the second comparison group have equal values, it is determined that no texture exists across either the first comparison group or the second comparison group, and no color filling is performed;
When the two detection points of the first comparison group have different values and the two detection points of the second comparison group have equal values, it is determined that texture exists across the first comparison group and that no texture exists across the second comparison group; the window frame is divided in two along the perpendicular bisector of the line connecting the first comparison group's detection points, the half of the window frame containing the detection point whose value is 1 is color-filled, and the other half is left unfilled;
When the two detection points of the first comparison group have different values and the two detection points of the second comparison group also have different values, it is determined that texture exists across both the first comparison group and the second comparison group;
Wherein the color filling of the window frame constitutes the texture information of the corresponding window frame.
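The three cases above form a small decision table over the binary values at the four detection points. A hedged sketch of that table, assuming 0/1 layer values and returning a symbolic fill decision rather than painting pixels (the return encoding is an invention of this sketch, not of the specification):

```python
def window_fill_decision(p1a, p1b, p2a, p2b):
    """Classify texture in one window frame from two comparison groups.

    p1a/p1b are the binary layer values at the first group's opposite
    detection points, p2a/p2b the second group's. Returns None when no
    texture is detected, otherwise which half of the window should be
    colour-filled, following the three cases in the text.
    """
    g1_differs = p1a != p1b
    g2_differs = p2a != p2b
    if not g1_differs and not g2_differs:
        return None                      # no texture: leave window unfilled
    if g1_differs and not g2_differs:
        # texture crosses group 1: fill the half holding its 1-valued point
        return ("group1", "a" if p1a == 1 else "b")
    if g1_differs and g2_differs:
        # common texture: fill the half holding the two 1-valued points
        return ("both", {"g1": "a" if p1a == 1 else "b",
                         "g2": "a" if p2a == 1 else "b"})
    # group 2 differs alone: symmetric to the group-1 case
    return ("group2", "a" if p2a == 1 else "b")
```

The actual split line (the perpendicular bisector of the differing group's axis) and the painting of the chosen half are left to the caller, since they depend on the frame geometry.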
In an optional embodiment, the window frame splicing unit is specifically configured to, after each window frame has completed color filling of its region, merge the texture information of all window frames according to their positions in the window frame mask to form the sub-texture map of the corresponding layer.
In an optional embodiment, the total texture extraction module is configured to perform weighted superposition of all the sub-texture maps to synthesize the total texture map of the target image.
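The specification says only that the sub-texture maps are combined by weighted superposition, without fixing the weights. One natural choice, shown here purely as an assumption, is to weight each layer's sub-texture map by its digit significance base**d and normalize:

```python
import numpy as np

def merge_subtextures(sub_maps, weights=None):
    """Weighted superposition of per-layer sub-texture maps.

    sub_maps: list of H x W arrays. weights defaults to the digit
    significance base**d of each layer (base=2 here) -- an assumption,
    since the text leaves the weighting scheme open.
    """
    if weights is None:
        weights = [2 ** d for d in range(len(sub_maps))]
    total = np.zeros_like(sub_maps[0], dtype=np.float64)
    for m, w in zip(sub_maps, weights):
        total += w * np.asarray(m, dtype=np.float64)
    return total / sum(weights)           # normalize to keep values bounded
```

Under this choice, merging the binary layers of a single channel exactly reconstructs that channel up to the normalization factor.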
In an optional embodiment, the apparatus further includes:
A texture filtering module, configured to filter the texture information of the total texture map according to a preset RGB value range to obtain the final total texture map.
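The RGB-range filter can be sketched as a per-pixel mask over the total texture map; the bounds below are illustrative placeholders, since the specification leaves the preset range open:

```python
import numpy as np

def filter_texture(texture_rgb, lo=(0, 0, 0), hi=(255, 255, 255)):
    """Keep texture pixels whose RGB values fall inside [lo, hi].

    Pixels with any channel outside the preset range are zeroed out.
    The default bounds are placeholders, not values from the text.
    """
    t = np.asarray(texture_rgb)
    lo_a = np.array(lo)
    hi_a = np.array(hi)
    mask = np.all((t >= lo_a) & (t <= hi_a), axis=-1)  # per-pixel pass/fail
    return np.where(mask[..., None], t, 0)
```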
It should be noted that the implementation principles and technical effects of this embodiment are the same as those of the first embodiment and are not repeated here.
Embodiment 3
Referring to Fig. 10, an embodiment of the present invention provides an image texture extraction device, including at least one processor 11 (for example a CPU), at least one network interface 14 or other user interface 13, a memory 15, and at least one communication bus 12, the communication bus 12 being used to implement connection and communication among these components. The user interface 13 may optionally include a USB interface as well as other standard and wired interfaces. The network interface 14 may optionally include a Wi-Fi interface and other wireless interfaces. The memory 15 may contain high-speed RAM and may also include non-volatile memory, for example at least one disk memory. The memory 15 may optionally contain at least one storage device located remotely from the aforementioned processor 11.
In some implementations, the memory 15 stores the following elements, executable modules or data structures, or a subset or an extended set thereof:
An operating system 151, containing various system programs, used to implement various basic services and to handle hardware-based tasks;
A program 152.
Specifically, the processor 11 is configured to call the program 152 stored in the memory 15 to execute the image texture extraction method described in the above embodiment, for example step S1 shown in Fig. 1. Alternatively, when the processor executes the computer program, it implements the functions of the modules/units in the above device embodiments, for example the base conversion module.
Exemplarily, the computer program may be divided into one or more modules/units, which are stored in the memory and executed by the processor to complete the present invention. The one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, the instruction segments being used to describe the execution process of the computer program in the image texture extraction device.
The image texture extraction device may be a computing device such as a VCU, an ECU, or a BMS. The image texture extraction device may include, but is not limited to, a processor and a memory. Those skilled in the art will understand that the schematic diagram is merely an example of the image texture extraction device and does not constitute a limitation on it; the device may include more or fewer components than shown, combine certain components, or use different components.
The processor 11 may be a microcontroller unit (MCU) or a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor or any conventional processor. The processor 11 is the control center of the image texture extraction device and connects the various parts of the entire device through various interfaces and lines.
The memory 15 may be used to store the computer program and/or modules, and the processor 11 implements the various functions of the image texture extraction device by running or executing the computer programs and/or modules stored in the memory and by calling the data stored in the memory. The memory 15 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and the application programs required for at least one function (for example a sound playback function or an image playback function), and the data storage area may store data created according to the use of the device (for example audio data or a phone book). In addition, the memory 15 may include high-speed random access memory and may also include non-volatile memory, such as a hard disk, memory, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, at least one disk storage device, a flash memory device, or another non-volatile solid-state storage device.
If the modules/units integrated in the image texture extraction device are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the present invention may implement all or part of the processes of the above method embodiments by instructing the relevant hardware through a computer program; the computer program may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of each of the above method embodiments. The computer program includes computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like.
Embodiment 4
An embodiment of the present invention provides a computer-readable storage medium, the computer-readable storage medium including a stored computer program, wherein, when the computer program runs, a device on which the computer-readable storage medium is located is controlled to execute the image texture extraction method according to any implementation of the first embodiment.
It should be noted that the device embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. In addition, in the drawings of the device embodiments provided by the present invention, the connection relationships between modules indicate communication connections between them, which may be specifically implemented as one or more communication buses or signal lines. Those of ordinary skill in the art can understand and implement this without creative effort.
The above are preferred embodiments of the present invention. It should be noted that those of ordinary skill in the art may make several improvements and refinements without departing from the principles of the present invention, and these improvements and refinements are also regarded as within the protection scope of the present invention.

Claims (11)

  1. An image texture extraction method, characterized by comprising:
    performing base conversion on the color value of each pixel of a target image to obtain a digit array for each pixel, wherein the base of the color value after conversion is lower than the base of the color value before conversion;
    decomposing the target image into several layers according to the digit arrays;
    extracting texture information from each layer using a window frame mask to obtain a sub-texture map of each layer;
    performing weighted superposition of all the sub-texture maps to synthesize a total texture map of the target image.
  2. The image texture extraction method according to claim 1, wherein performing base conversion on the color value of each pixel of the target image to obtain a digit array comprises:
    performing base-N conversion on the color value of each pixel of the target image to obtain a digit array for each pixel;
    wherein the level value of each of the R, G, and B color channels of each pixel is expressed in base-N form, so that the RGB value of each pixel is expressed as a 3×M-digit array, where base N is lower than the current base of the color value and M is the number of digits needed to express the maximum level value of a color channel in base N.
  3. The image texture extraction method according to claim 2, wherein performing base-N conversion on the color value of each pixel of the target image to obtain a digit array for each pixel comprises:
    performing base conversion on the color value of each pixel of the target image to obtain a 3×M-digit array for each pixel;
    then, decomposing the target image into several layers according to the digit arrays comprises:
    clustering and merging each digit position of each color channel across all pixels into a separate layer according to the digit array of each pixel.
  4. The image texture extraction method according to claim 3, wherein extracting texture information from each layer using a window frame mask to obtain a sub-texture map of each layer comprises:
    covering each layer with the window frame mask, wherein the window frame mask is composed of several window frame units;
    for each window frame, using at least two comparison groups to extract texture from the layer region covered by the window frame, obtaining the texture information of the layer region where each window frame is located, wherein each comparison group consists of two detection points located on opposite sides of the window frame;
    splicing the texture information of all window frames in the window frame mask to obtain the sub-texture map of the corresponding layer.
  5. The image texture extraction method according to claim 4, wherein a first comparison group and a second comparison group are set on the boundary of each window frame, and the angle between the line connecting the two detection points of the first comparison group and the line connecting the two detection points of the second comparison group is 360°/2n, where n denotes the number of comparison groups;
    then, using at least two comparison groups per window frame to extract texture from the layer region covered by the window frame, obtaining the texture information of the layer region where each window frame is located, comprises:
    extracting the layer pixel values at the positions of the detection points of the first comparison group and the second comparison group, respectively, and comparing the values;
    when the two detection points of the first comparison group have equal values and the two detection points of the second comparison group have equal values, determining that no texture exists across either the first comparison group or the second comparison group, and performing no color filling of the window frame;
    when the two detection points of the first comparison group have different values and the two detection points of the second comparison group have equal values, determining that texture exists across the first comparison group and that no texture exists across the second comparison group; the window frame is divided in two along the perpendicular bisector of the line connecting the first comparison group's detection points, the half of the window frame containing the detection point whose value is 1 is color-filled, and the other half is left unfilled;
    when the two detection points of the first comparison group have different values and the two detection points of the second comparison group also have different values, determining that a common texture exists across the first comparison group and the second comparison group; the window frame is divided in two along the perpendicular bisector of the line connecting any two detection points of different values among the four detection points of the first and second comparison groups, the half of the window frame containing the two detection points whose value is 1 is color-filled, and the half containing the two detection points whose value is 0 is left unfilled;
    wherein the color filling result of the window frame is the texture information of the corresponding window frame.
  6. The image texture extraction method according to claim 5, wherein splicing the texture information of all window frames in the window frame mask to obtain the sub-texture map of the corresponding layer comprises:
    after each window frame has completed color filling of its region, merging the texture information of all window frames according to their positions in the window frame mask to form the sub-texture map of the layer.
  7. The image texture extraction method according to claim 1, wherein performing weighted superposition of all the sub-texture maps to synthesize the total texture map of the target image comprises:
    performing weighted superposition of all the sub-texture maps to synthesize the total texture map of the target image.
  8. The image texture extraction method according to claim 1, further comprising:
    filtering texture color blocks of the total texture map by setting an RGB value range, so as to obtain the final texture map output result.
  9. An image texture extraction apparatus, characterized by comprising:
    a base conversion module, configured to perform base conversion on the color value of each pixel of a target image to obtain a digit array for each pixel, wherein the base of the color value after conversion is lower than the base of the color value before conversion;
    a layer decomposition module, configured to decompose the target image into several layers according to the digit arrays;
    a sub-texture map extraction module, configured to extract texture information from each layer using a window frame mask to obtain a sub-texture map of each layer;
    a total texture extraction module, configured to perform weighted superposition of all the sub-texture maps to synthesize the total texture map of the target image.
  10. An image texture extraction device, characterized by comprising a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, wherein the processor, when executing the computer program, implements the image texture extraction method according to any one of claims 1-9.
  11. A computer-readable storage medium, characterized in that the computer-readable storage medium includes a stored computer program, wherein, when the computer program runs, a device on which the computer-readable storage medium is located is controlled to execute the image texture extraction method according to any one of claims 1-9.
PCT/CN2023/082070 2022-03-24 2023-03-17 Image texture extraction method and device, and computer readable storage medium WO2023179465A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210295310.5A CN115965671A (en) 2022-03-24 2022-03-24 Image texture extraction method, device and computer readable storage medium
CN202210295310.5 2022-03-24

Publications (1)

Publication Number Publication Date
WO2023179465A1 true WO2023179465A1 (en) 2023-09-28

Family

ID=87356582

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/082070 WO2023179465A1 (en) 2022-03-24 2023-03-17 Image texture extraction method and device, and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN115965671A (en)
WO (1) WO2023179465A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102262778A (en) * 2011-08-24 2011-11-30 重庆大学 Method for enhancing image based on improved fractional order differential mask
CN102779277A (en) * 2012-06-08 2012-11-14 中山大学 Main vein extracting method based on image processing
US20150339827A1 (en) * 2014-05-26 2015-11-26 Canon Kabushiki Kaisha Image processing apparatus, method, and medium
CN111243071A (en) * 2020-01-08 2020-06-05 叠境数字科技(上海)有限公司 Texture rendering method, system, chip, device and medium for real-time three-dimensional human body reconstruction
US20210185285A1 (en) * 2018-09-18 2021-06-17 Zhejiang Uniview Technologies Co., Ltd. Image processing method and apparatus, electronic device, and readable storage medium

Also Published As

Publication number Publication date
CN115965671A (en) 2023-04-14

Legal Events

121 EP: the EPO has been informed by WIPO that EP was designated in this application (ref document number: 23773713; country of ref document: EP; kind code of ref document: A1)