CN112489142A - Color identification method, device, equipment and storage medium - Google Patents

Color identification method, device, equipment and storage medium Download PDF

Info

Publication number
CN112489142A
CN112489142A (application CN202011374500.3A; granted as CN112489142B)
Authority
CN
China
Prior art keywords
color
image
sub
image block
image blocks
Prior art date
Legal status: Granted (the legal status is an assumption and is not a legal conclusion)
Application number
CN202011374500.3A
Other languages
Chinese (zh)
Other versions
CN112489142B (en)
Inventor
余永龙
谢会斌
李聪廷
Current Assignee (the listed assignees may be inaccurate)
Jinan Boguan Intelligent Technology Co Ltd
Original Assignee
Jinan Boguan Intelligent Technology Co Ltd
Application filed by Jinan Boguan Intelligent Technology Co Ltd filed Critical Jinan Boguan Intelligent Technology Co Ltd
Priority to CN202011374500.3A priority Critical patent/CN112489142B/en
Publication of CN112489142A publication Critical patent/CN112489142A/en
Application granted granted Critical
Publication of CN112489142B publication Critical patent/CN112489142B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20021 Dividing image into blocks, subimages or windows

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application discloses a color identification method, device, equipment and storage medium, comprising the following steps: segmenting an original image with an image segmentation model to obtain an image of each segmented region; partitioning each segmented-region image according to a preset rule to obtain sub-image blocks, and determining the main color category of each sub-image block from the color values of the pixel points at preset positions within it; classifying the sub-image blocks by main color category, and screening several sub-image blocks from each category to obtain the first image blocks corresponding to that category; and determining, from the first image blocks, a second image block that reflects their main color category, then determining the color category sequence of the segmented-region image from the second image blocks. Because the color category sequence containing the main color categories of the segmented-region image is determined with an adaptive partitioning strategy, the accuracy and completeness of color identification are improved.

Description

Color identification method, device, equipment and storage medium
Technical Field
The present invention relates to the field of computer vision technologies, and in particular, to a color recognition method, device, apparatus, and storage medium.
Background
Computer vision is now widely applied in fields such as face recognition, security, and autonomous driving. Image recognition is an important branch of computer vision, and color is one of the most salient distinguishing attributes in an image, so color recognition is particularly important within image recognition; in practice, however, color recognition is disturbed by many factors and its accuracy is often low. For example, when extracting human-body structural features, identifying the colors of the parts of a pedestrian's body is interfered with not only by illumination but mainly by the small size of the object to be identified, its irregular boundary, and the blending of multiple colors within the same part, as with color-matched, striped, checked, or printed clothing; this contrasts with color judgment on a vehicle, a large target with a regular boundary and a single color. These factors directly cause the overall accuracy of per-part pedestrian color recognition to be low in pedestrian structural-feature extraction, and indirectly degrade the execution efficiency of services such as image search and pedestrian Re-identification (ReID).
Disclosure of Invention
In view of the above, the present invention provides a color recognition method, device, apparatus and storage medium that can improve the accuracy and completeness of color recognition. The specific scheme is as follows:
a first aspect of the present application provides a color recognition method, including:
segmenting the original image by using an image segmentation model to obtain images of each segmentation area;
partitioning the image of the partitioned area based on a preset rule to obtain each sub image block, and determining the main color category of the sub image block according to the color value of a pixel point at a preset position in the sub image block;
classifying the sub image blocks according to the main color categories of the sub image blocks, and respectively screening a plurality of sub image blocks from each category of the sub image blocks to obtain first image blocks corresponding to each category of the sub image blocks;
determining a second image block for reflecting the main color category of the first image block based on the first image block to obtain a second image block corresponding to each type of the sub image blocks, and determining a color category sequence of the divided area image according to each second image block; wherein the color class sequence includes a dominant color class of the segmented region image.
Optionally, the partitioning the image of the partitioned area based on a preset rule to obtain each sub image block, and determining the main color category of the sub image block according to the color value of a pixel point at a preset position in the sub image block includes:
partitioning the image of the division area according to a preset size to obtain sub image blocks;
extracting a plurality of target pixel points at preset positions in the sub-image blocks and calculating the RGB value of each target pixel point to obtain the color category of each target pixel point;
counting the proportion of the target pixel points of different color categories in all the target pixel points to obtain target proportions corresponding to different color categories;
and if the color category with the maximum target ratio is a single color category, determining the color category as the main color category of the sub image block.
Optionally, if the color category with the largest target ratio is a single color category, before determining the color category as the main color category of the sub image block, the method further includes:
judging whether the maximum of all the target ratios is larger than a preset threshold value, and if not, discarding the sub image block.
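The dominant-color determination in the steps above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the five-entry palette, the sampling positions, and the 0.5 threshold are all assumptions, since the text fixes neither a concrete color set nor a threshold value.

```python
from collections import Counter

# Hypothetical coarse palette; the patent does not specify the color categories.
PALETTE = {
    "red":   (255, 0, 0),
    "green": (0, 255, 0),
    "blue":  (0, 0, 255),
    "white": (255, 255, 255),
    "black": (0, 0, 0),
}

def nearest_color(rgb):
    """Map an RGB triple to the nearest palette entry (Euclidean distance)."""
    return min(PALETTE,
               key=lambda name: sum((a - b) ** 2 for a, b in zip(rgb, PALETTE[name])))

def dominant_color(block, sample_positions, min_ratio=0.5):
    """Vote over the pixel points at preset positions; return the winning color
    category, or None when no category exceeds min_ratio (block is discarded)."""
    votes = Counter(nearest_color(block[y][x]) for y, x in sample_positions)
    name, count = votes.most_common(1)[0]
    if count / len(sample_positions) <= min_ratio:
        return None  # discard: no sufficiently dominant color category
    return name
```

Sampling only preset positions, as the claim describes, trades a little accuracy for far less computation than classifying every pixel point.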
Optionally, the screening out a plurality of sub image blocks from each type of the sub image blocks respectively to obtain a first image block corresponding to each type of the sub image blocks includes:
respectively screening a plurality of sub image blocks from each type of sub image blocks according to the color saturation of the sub image blocks to obtain first image blocks corresponding to each type of sub image blocks;
correspondingly, the determining a second image block based on the first image block for reflecting the dominant color category of the first image block includes:
and splicing the first image blocks based on a preset splicing rule to obtain second image blocks for reflecting the main color categories of the first image blocks.
Optionally, the splicing the first image block based on a preset splicing rule includes:
and if the number of the first image blocks is smaller than the number actually required in the preset splicing rule, determining the corresponding shortage number, creating the shortage number of filling sub-image blocks based on the color value of the first image blocks, and splicing the first image blocks and the filling sub-image blocks according to the preset splicing rule.
Optionally, the creating the missing number of padding sub image blocks based on the color value of the first image block includes:
calculating an average value of the color values of the first image block to obtain an average color value;
and creating the missing number of image blocks, in which the color value of every pixel point is set to the average color value, as the filling sub-image blocks.
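The padding step above admits a short sketch; representing blocks as H x W x 3 NumPy arrays is an assumption for illustration only.

```python
import numpy as np

def make_padding_blocks(first_blocks, shortage):
    """Create `shortage` filling sub-image blocks whose pixel points are all
    set to the average color value of the given first image blocks
    (each an H x W x 3 uint8 array)."""
    # Average per-block mean colors, then average across blocks.
    mean_color = np.mean([blk.reshape(-1, 3).mean(axis=0) for blk in first_blocks],
                         axis=0)
    h, w, _ = first_blocks[0].shape
    filler = np.full((h, w, 3), mean_color.round(), dtype=np.uint8)
    return [filler.copy() for _ in range(shortage)]
```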
Optionally, before the step of respectively screening a plurality of sub image blocks from each type of the sub image blocks, the method further includes:
prioritizing each type of the sub image blocks according to the number of the sub image blocks in each type of the sub image blocks to obtain the priority of each type of the sub image blocks;
correspondingly, the color category sequence of the segmented area image is determined according to each second image block; wherein the color category sequence includes a dominant color category of the segmented region image, including:
inputting each second image block into a trained color recognition model so that the color recognition model can output the color category and the corresponding confidence coefficient of each second image block; the color recognition model is obtained by training a blank model constructed based on a deep learning algorithm by using a training set, wherein the training set comprises sample main color image blocks and corresponding color classes serving as sample labels;
acquiring the color category and the corresponding confidence of each second image block output by the color identification model;
judging whether the confidence coefficient is greater than a preset threshold, if so, determining the color category of the second image block output by the color recognition model as the color category of the second image block, and if not, calculating the color value of the second image block to obtain the color category of the second image block;
and determining the priority of each second image block according to the priority of the sub image block corresponding to each second image block, and sequencing the color class of each second image block according to the priority of each second image block to obtain the color class sequence of the divided area image.
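The confidence-gated choice between the model's prediction and the color-value fallback described above might be sketched as follows; `model_predict` and `fallback_by_value` are hypothetical stand-ins for the trained color recognition model and the color-value computation, and the 0.8 threshold is illustrative.

```python
def classify_second_block(block, model_predict, fallback_by_value, conf_threshold=0.8):
    """Use the recognition model's color category when its confidence clears
    the preset threshold; otherwise fall back to the category obtained by
    calculating the block's color value."""
    label, confidence = model_predict(block)
    if confidence > conf_threshold:
        return label
    return fallback_by_value(block)
```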
A second aspect of the present application provides a color recognition apparatus comprising:
the segmentation module is used for segmenting the original image by using the image segmentation model to obtain images of all segmentation areas;
the determining module is used for partitioning the image of the partitioned area based on a preset rule to obtain each sub image block, and determining the main color category of the sub image block according to the color value of a pixel point at a preset position in the sub image block;
the acquisition module is used for classifying the sub image blocks according to the main color categories of the sub image blocks and respectively screening a plurality of sub image blocks from each category of the sub image blocks to obtain first image blocks corresponding to each category of the sub image blocks;
the identification module is used for determining a second image block for reflecting the main color category of the first image block based on the first image block so as to obtain a second image block corresponding to each type of the sub image blocks, and determining a color category sequence of the divided area image according to each second image block; wherein the color class sequence includes a dominant color class of the segmented region image.
A third aspect of the application provides an electronic device comprising a processor and a memory; wherein the memory is for storing a computer program that is loaded and executed by the processor to implement the aforementioned color recognition method.
A fourth aspect of the present application provides a computer-readable storage medium having stored therein computer-executable instructions that, when loaded and executed by a processor, implement the aforementioned color recognition method.
In the application, an original image is first segmented with an image segmentation model to obtain an image of each segmented region. Each segmented-region image is then partitioned according to a preset rule to obtain sub image blocks, and the main color category of each sub image block is determined from the color values of the pixel points at preset positions within it. Next, the sub image blocks are classified according to their main color categories, and several sub image blocks are screened from each category to obtain the first image blocks corresponding to that category. Finally, a second image block reflecting the main color category of the first image blocks is determined from them, and the color category sequence of the segmented-region image, which contains its main color categories, is determined from the second image blocks. The method thus partitions the segmented-region images of the original image with an adaptive partitioning strategy to obtain sub image blocks; classifies, stitches, and supplements the sub image blocks to obtain the main-color image blocks of each segmented-region image; and determines the color category sequence of the segmented-region image, containing its main color categories, from the color categories of those main-color image blocks, improving the accuracy and completeness of color identification.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow chart of a color recognition method provided herein;
FIG. 2 is a schematic diagram of a specific color recognition method provided herein;
FIG. 3 is a schematic diagram of a specific image segmentation process provided in the present application;
FIG. 4 is a schematic diagram of an image of a segmentation region obtained by preprocessing the image according to the present disclosure;
FIG. 5 is a flow chart of a specific color recognition method provided herein;
FIG. 6 is a schematic diagram illustrating a process for determining a dominant color class of a sub-image block provided by the present application;
FIG. 7 is a flow chart of a specific color recognition method provided herein;
FIG. 8 is a flow chart of a specific color recognition method provided herein;
FIG. 9 is a flow chart of color recognition of an original image provided herein;
FIG. 10 is a schematic structural diagram of a color recognition device according to the present application;
fig. 11 is a structural diagram of a color recognition electronic device provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings in those embodiments. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments derived by a person skilled in the art from the given embodiments without creative effort shall fall within the protection scope of the present invention.
To overcome the above technical problems, the present application provides a color identification scheme that determines the main-color image blocks of a segmented-region image based on an adaptive partitioning strategy and determines the color category sequence of the segmented-region image, which contains its main color categories, from the color categories of those main-color image blocks; this can effectively improve the accuracy and completeness of color identification.
Fig. 1 is a flowchart of a color identification method according to an embodiment of the present disclosure. Referring to fig. 1, the color recognition method includes:
s11: and segmenting the original image by using the image segmentation model to obtain images of each segmentation area.
In this embodiment, after an original image is obtained, it must first be segmented so that target objects in different regions can be accurately located: the image is divided into mutually disjoint regions. A conventional image segmentation method may be used, but more commonly the original image is processed by semantic segmentation or the like using a deep learning algorithm. It should be noted that, to prevent the background color from interfering with the color recognition result, the background of each segmented-region image obtained by preprocessing the original image is a pure color corresponding to a preset color value; this reduces the difficulty of subsequent color recognition and improves the accuracy of the result. The preset color value is the color value assigned to the background of the segmented-region image. It is understood that each segmented-region image contains the pixel points of a single region of the original image.
S12: and partitioning the image of the partitioned area based on a preset rule to obtain each sub image block, and determining the main color category of the sub image block according to the color value of a pixel point at a preset position in the sub image block.
In this embodiment, the segmented-region image is partitioned based on an adaptive partitioning policy to obtain its sub image blocks. Feeding the entire segmented-region image into a color recognition model, or computing its color value directly, identifies the color category with relatively high accuracy when the region is a pure color, but for a region with a complex color composition it is difficult to identify the correct color or main color categories this way. This embodiment therefore further divides a segmented-region image with a complex color composition into smaller units, the sub image blocks, according to a preset rule, and then determines the main color category of each sub image block from the color values of the pixel points at preset positions within it. To reduce computation time, only the color categories of the pixel points at the preset positions need to be counted; the preset positions are chosen according to business requirements, provided they are distributed uniformly. Of course, where computing capacity allows, a main color category determined from the color values of all pixel points in the sub image block best represents its color category. It should be noted that the determined main color category of a sub image block must follow the same standard as the color categories predicted by the color recognition model and determined by calculating color values in the subsequent steps.
It is easy to understand that the smaller a sub image block is, the fewer pixel points it contains; and the fewer pixel points it contains, the more accurately its main color category can be determined from the color values of those pixel points.
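The partitioning by preset size can be sketched minimally as follows, assuming the image is an H x W x C array and that edge remainders smaller than a full block are simply dropped (the patent leaves edge handling unspecified).

```python
def split_into_blocks(image, block_h, block_w):
    """Partition an H x W x C array into non-overlapping block_h x block_w
    sub image blocks, dropping partial blocks at the right/bottom edges."""
    h, w = image.shape[:2]
    return [image[y:y + block_h, x:x + block_w]
            for y in range(0, h - block_h + 1, block_h)
            for x in range(0, w - block_w + 1, block_w)]
```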
S13: classifying the sub image blocks according to the main color categories of the sub image blocks, and respectively screening a plurality of sub image blocks from each category of the sub image blocks to obtain first image blocks corresponding to each category of the sub image blocks.
In this embodiment, the same segmented-region image may correspond to multiple sub image blocks, and the main color categories of different sub image blocks may be the same or different. After the sub image blocks of the segmented-region image and their main color categories are obtained, the sub image blocks are therefore classified by main color category: sub image blocks with the same color category are grouped into one class, so the number of classes matches the number of distinct main color categories. Several sub image blocks are then screened from each class to obtain the first image blocks corresponding to that class; a first image block is in essence still a sub image block, one whose color is representative of its class. When the sub image blocks contain many pixel points, only one sub image block could be screened from each class, but this yields lower color-identification accuracy than screening several, so in this embodiment several sub image blocks are screened from each class to obtain its first image blocks.
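The classify-then-screen step could look like the following sketch. It assumes, as one of the optional claims suggests, that screening keeps the most color-saturated blocks of each class; the value of `k` and the nested-list block representation are illustrative.

```python
import colorsys
from collections import defaultdict

def screen_first_blocks(blocks_with_class, k=3):
    """Group sub image blocks by main color category, then keep the k most
    saturated blocks of each class as that class's first image blocks."""
    def mean_saturation(block):
        h, w = len(block), len(block[0])
        sats = [colorsys.rgb_to_hsv(*(c / 255 for c in block[y][x]))[1]
                for y in range(h) for x in range(w)]
        return sum(sats) / len(sats)

    groups = defaultdict(list)
    for block, color_class in blocks_with_class:
        groups[color_class].append(block)
    # Most saturated blocks are the most representative of their class's color.
    return {cls: sorted(blks, key=mean_saturation, reverse=True)[:k]
            for cls, blks in groups.items()}
```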
S14: determining a second image block for reflecting the main color category of the first image block based on the first image block to obtain a second image block corresponding to each type of the sub image blocks, and determining a color category sequence of the divided area image according to each second image block; wherein the color class sequence includes a dominant color class of the segmented region image.
In the present embodiment, a second image block reflecting the main color category of the first image blocks is determined from them; the color category of a second image block is determined by the color categories of its corresponding first image blocks and reflects them to a certain extent. In this embodiment, the second image block is obtained mainly by stitching the first image blocks together. When a segmented-region image corresponds to several second image blocks whose color categories are not a single category, the segmented-region image is not a pure color but has multiple color categories, and its color category sequence is determined from the color categories of those second image blocks: the combination of the color categories of the second image blocks is the color category sequence of the segmented-region image. Further, the dominant color category of the segmented-region image can also be determined. It is understood that the color category of a second image block may be predicted by a pre-trained color recognition model, or determined by calculating its color value.
It can be seen that, in this embodiment, the segmented-region images of the original image are partitioned into sub image blocks based on an adaptive partitioning strategy; the second image blocks of each segmented-region image, i.e. its main-color image blocks, are obtained by classifying, stitching, and otherwise processing the sub image blocks; and the color category sequence of the segmented-region image, containing its main color categories, is determined from the color category of each second image block, improving the accuracy and completeness of color identification.
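The stitching of first image blocks into a second image block admits a minimal sketch, assuming equally sized blocks whose count is a multiple of the row width; the padding mechanism described in the claims would guarantee this in practice.

```python
import numpy as np

def stitch_blocks(first_blocks, cols=2):
    """Stitch equally sized first image blocks row by row into one second
    image block whose overall appearance reflects their color class.
    Assumes len(first_blocks) is a multiple of cols."""
    rows = [np.hstack(first_blocks[i:i + cols])
            for i in range(0, len(first_blocks), cols)]
    return np.vstack(rows)
```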
Fig. 2 is a flowchart of a specific color recognition method according to an embodiment of the present disclosure. Referring to fig. 2, the color recognition method includes:
s21: segmenting an original image by using an image segmentation model constructed based on a U-Net network to obtain a segmentation result; the segmentation result is each segmentation area obtained by setting different gray values for different areas of the original image.
In this embodiment, an original image is segmented by an image segmentation model constructed on a U-Net network to obtain each segmented region of the original image with a pixel class label, the pixel class label being a gray value. Of course, besides the U-Net network, other image segmentation networks, such as the JPP-Net network, may also be used to construct the image segmentation model. In addition, this embodiment may segment the original image with an existing, pre-constructed image segmentation model, or with one constructed in real time according to business requirements.
In this embodiment, taking the construction of a semantic segmentation model for human-image scenes as an example, a large number of images whose target is a human body, such as surveillance images of pedestrians, are prepared in advance. According to the requirements of the segmentation task, regions such as the hat, hair, glasses, mask, scarf, jacket, lower garment, shoes, various bags, and umbrella are then annotated. The semantic segmentation label is a grayscale map of the same size as the original image, with a different gray value for each region: for example, the hat, hair, …, umbrella are set to 1, 2, …, 10 respectively, and the gray value of the background is generally set to 0 or 255 (0 in this embodiment), so that the different segmented regions can be observed visually. The annotated images are then fed into the designed U-Net network for semantic segmentation training. After the semantic segmentation model is trained, an original pedestrian image to be recognized can be fed into it for semantic segmentation. Fig. 3 shows the process by which a pedestrian image is processed by the semantic segmentation model to obtain the segmented regions, with the segmentation result marked on the original image.
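The gray-value labelling scheme described above can be illustrated as follows; the region names and values are examples drawn from the text, and `masks_from_label_map` is a hypothetical helper, not part of the patent.

```python
import numpy as np

# Example label table following the scheme above: background 0, then
# consecutive gray values for each annotated body region (illustrative subset).
REGION_GRAY = {"background": 0, "hat": 1, "hair": 2, "jacket": 6, "umbrella": 10}

def masks_from_label_map(label_map, region_gray):
    """Split a semantic-segmentation gray-value label map into one boolean
    mask per region; gray value 0 is the background and is skipped."""
    return {name: label_map == g for name, g in region_gray.items() if g != 0}
```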
S22: and acquiring a circumscribed rectangle of the segmentation region, and intercepting a circumscribed rectangle image corresponding to the circumscribed rectangle on the original image based on the circumscribed rectangle.
S23: and setting three components in the RGB values of the areas outside the segmentation areas corresponding to the external rectangles in the external rectangle images as preset values to obtain the segmentation area images.
In this embodiment, the circumscribed rectangle of each segmented region is computed from the distribution of the region's pixel points, and the corresponding circumscribed-rectangle image is then cropped from the original image; that is, the original image is matted based on the circumscribed rectangle. The circumscribed-rectangle image contains all pixel points of the segmented region, but because the boundary of a segmented region is generally irregular, it may also include pixels of other segmented regions. To avoid interference from the background and those other regions, when the circumscribed-rectangle image is acquired the three RGB components of the area outside the segmented region are set to a preset value, for example 255, i.e. the background and other segmented regions of the circumscribed-rectangle image are set to white, yielding segmented-region images that contain only the region's own pixel points. The value may also be set to 0, corresponding to black; the background color actually used should of course follow the specific service. This process is shown in fig. 4: based on the human-body image of fig. 4(a) and its segmentation result map in fig. 4(b), fig. 4(c) shows the segmented-region images, containing only the region pixel points, obtained by resetting the color values of the background of each circumscribed-rectangle image.
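A NumPy-only sketch of S22 and S23 follows, computing the tight bounding rectangle from the region's boolean mask and whitening everything outside the region; the function name is hypothetical and the white background value matches the example in the text.

```python
import numpy as np

def crop_region(image, region_mask, bg_value=255):
    """Crop the tight circumscribed rectangle of a segmented region from the
    original image, then set every pixel outside the region to bg_value
    (255 on all three RGB components, i.e. white) so only the region's own
    pixel points keep their colors."""
    ys, xs = np.where(region_mask)
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    crop = image[y0:y1, x0:x1].copy()
    crop[~region_mask[y0:y1, x0:x1]] = bg_value  # broadcasts over RGB channels
    return crop
```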
S24: and partitioning the image of the partitioned area based on a preset rule to obtain each sub image block, and determining the main color category of the sub image block according to the color value of a pixel point at a preset position in the sub image block.
S25: classifying the sub image blocks according to the main color categories of the sub image blocks, and respectively screening a plurality of sub image blocks from each category of the sub image blocks to obtain first image blocks corresponding to each category of the sub image blocks.
S26: determining a second image block for reflecting the main color category of the first image block based on the first image block to obtain a second image block corresponding to each type of the sub image blocks, and determining the color class sequence of the divided area image according to each second image block; wherein the color class sequence includes a dominant color class of the segmented region image.
In this embodiment, regarding the specific processes from the step S24 to the step S26, reference may be made to the corresponding contents disclosed in the foregoing embodiments, and details are not repeated herein.
Therefore, in the embodiment of the application, the original image is segmented by using the image segmentation model constructed based on the U-Net network to obtain segmented regions with different gray values; the circumscribed rectangle image of each segmented region is obtained from the original image; and the circumscribed rectangle image is subjected to background removal and related processing to obtain a segmented region image containing only the pixel points of that region. Preprocessing the original image in this way avoids, to a certain extent, the influence of the background color and of the colors of other regions on the color identification result, thereby improving the accuracy of color identification.
Fig. 5 is a flowchart of a specific color recognition method according to an embodiment of the present disclosure. Referring to fig. 5, the color recognition method includes:
S31: and segmenting the original image by using the image segmentation model to obtain images of each segmentation area.
In this embodiment, as to the specific process of the step S31, reference may be made to corresponding contents disclosed in the foregoing embodiments, and details are not repeated herein.
S32: and partitioning the image of the divided area according to a preset size to obtain each sub image block.
In this embodiment, each segmented region image is blocked based on an adaptive blocking strategy: based on the size of the segmented region image, the size of the sub image blocks obtained after division is determined according to the service requirement to obtain a preset size, and the segmented region image is then blocked according to that preset size to obtain sub image blocks of the required size. When determining the preset size, both the size of the segmented region image and the influence of the number of pixel points per sub image block on the color identification result should be considered, so as to obtain sub image blocks of an appropriate number and size.
FIG. 6(b) shows the result of adaptive blocking applied to fig. 6(a) in this embodiment. Specifically, the preset size is set to a width W0 and a height H0 of 10 pixels each; that is, the width W0 and the height H0 of every sub image block in fig. 6(b) are both 10 pixels. In practical applications, an image of width W0 and height H0 of 10 pixels each can be regarded as the smallest unit of color display, which is advantageous for improving the accuracy of color recognition. Each sub image block is named Bij according to the row and column of its position (i is the row number of the sub image block, j is the column number, i ≤ n, j ≤ n), where n can be calculated by the following formula, and W and H are respectively the width and height of the segmented region image. It should be noted that in some extreme cases, for example when W is less than W0 or H is less than H0, the segmented region is padded with white pixel points so that W and H reach W0 and H0; in this case n = 1.
[Formula image in the original (BDA0002807830590000111): the expression from which n is calculated in terms of W, H, W0 and H0.]
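A sketch of the blocking step, under the assumption that any remainder is padded with white pixels up to a multiple of the block size (the patent only describes padding in the undersized case W < W0 or H < H0; padding every remainder is a generalization made here):

```python
import numpy as np

BLOCK_W, BLOCK_H = 10, 10  # preset sub-block size used in the embodiment

def split_into_blocks(region, bw=BLOCK_W, bh=BLOCK_H, fill=255):
    """Pad the region image with white pixels up to a multiple of the block
    size, then cut it into bw x bh sub image blocks keyed as B[i, j]."""
    h, w = region.shape[:2]
    ph = -h % bh  # rows of white padding needed at the bottom
    pw = -w % bw  # columns of white padding needed at the right
    padded = np.pad(region, ((0, ph), (0, pw), (0, 0)), constant_values=fill)
    blocks = {}
    for i in range(padded.shape[0] // bh):
        for j in range(padded.shape[1] // bw):
            blocks[(i, j)] = padded[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
    return blocks
```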
S33: extracting a plurality of target pixel points at preset positions in the sub-image blocks and calculating the RGB value of each target pixel point to obtain the color category of each target pixel point.
In this embodiment, to obtain a more accurate main color category for each sub image block, a plurality of target pixel points at preset positions is extracted from each sub image block obtained by blocking the segmented region image. For each 10 × 10 (pix) sub image block in fig. 6(b), only 25 pixel points are extracted as target pixel points, and to ensure uniformity, the pixels at odd positions of each row and each column are taken for color-value extraction, as shown by the dark squares in fig. 6(c). Of course, pixels at other positions may be extracted in actual operation, and the number of extracted target pixels may differ; when time consumption is not a concern, the color values of all pixels in the sub image block may be calculated, i.e., 10 × 10 calculations. The RGB value of each extracted target pixel point is then calculated. Judging the color category directly from RGB values is relatively complex; after converting RGB to HSV, the judgment becomes much simpler, since the color category can be determined from the value ranges of H (hue), S (saturation) and V (brightness). In this embodiment, the calculated RGB value of each target pixel point is therefore converted into an HSV color-space value, and the color category of the target pixel point is determined against HSV reference colors. It should be noted that if a color identification model is used in a subsequent step to predict the color category, or a color value is calculated and the category then determined, the final categories should be consistent with these HSV reference colors, so that inconsistent reference colors are avoided.
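The sampling and HSV-based classification above can be sketched as follows; the H/S/V cut-offs are illustrative stand-ins for the patent's HSV reference-color table, and reading "odd positions" as 1-indexed odd rows and columns is an assumption:

```python
import colorsys

def pixel_color_class(r, g, b):
    """Classify one RGB pixel via HSV. The cut-offs below are illustrative
    placeholders, not the patent's actual HSV reference-colour ranges."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    if v < 0.2:
        return "black"
    if s < 0.15:
        return "white" if v > 0.85 else "gray"
    if h < 1 / 12 or h >= 11 / 12:
        return "red"
    if h < 1 / 4:
        return "yellow"
    if h < 5 / 12:
        return "green"
    if h < 3 / 4:
        return "blue"
    return "purple"

def sample_targets(block):
    """Take the odd-position pixels of every row and column of a 10x10
    block, i.e. positions (1,3,5,7,9) x (1,3,5,7,9) -> 25 target pixels."""
    return [block[i][j] for i in range(1, 10, 2) for j in range(1, 10, 2)]
```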
In an embodiment, considering that the segmented region image may have been subjected to background removal, or that other segmented regions within it have been set to white, i.e., the R, G and B values of background pixels are all 255, target pixel points of this kind do not participate in the statistics during actual processing.
S34: counting the ratios of the target pixel points of different color categories in all the target pixel points to obtain target ratios corresponding to the different color categories, and if the color category with the maximum target ratio is a single color category, determining the color category as the main color category of the sub-image block.
In this embodiment, after the color value of each extracted target pixel point has been calculated and its color category determined, statistics can be performed. The target pixel points are classified by color category, and the ratio of the number of target pixel points in each color category to the number of all extracted target pixel points is calculated to obtain the target ratios corresponding to the different color categories. It is then judged whether the color category corresponding to the largest of the target ratios is a single color category. If so, that color category is determined as the main color category of the sub image block; if not, i.e., several color categories share the largest target ratio, the sub image block is discarded and excluded from the subsequent color identification process. It should be noted that for some sub image blocks whose colors are too mixed for their main color category to be determined clearly, the block may simply be discarded; therefore, in one class of embodiments, before judging whether the color category with the largest target ratio is a single category, it is further judged whether the largest of all the target ratios is greater than a preset threshold, and if not, the sub image block is discarded.
The preset threshold is set according to the service requirement; in this embodiment it is set to 0.8. That is, only when at least 20 of the 25 extracted target pixel points belong to the color category with the maximum target ratio, and that category is a single color category, is it determined as the main color category of the sub image block.
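The ratio statistic with the 0.8 threshold can be sketched as follows (the function name and the use of `None` to signal a discarded block are illustrative):

```python
from collections import Counter

RATIO_THRESHOLD = 0.8  # embodiment's preset threshold: >= 20 of 25 pixels

def main_color_of_block(target_classes):
    """Return the block's main colour category, or None when the block is
    discarded (largest ratio below threshold, or a tie between categories).
    Note the tie check only matters in variants where the threshold check
    is disabled or set at 0.5 or below."""
    counts = Counter(target_classes)
    top_class, top_count = counts.most_common(1)[0]
    if top_count / len(target_classes) < RATIO_THRESHOLD:
        return None  # colours too mixed -> discard the sub image block
    ties = [c for c, n in counts.items() if n == top_count]
    if len(ties) > 1:
        return None  # no single dominant category -> discard
    return top_class
```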
S35: classifying the sub image blocks according to the main color categories of the sub image blocks, and respectively screening a plurality of sub image blocks from each category of the sub image blocks to obtain first image blocks corresponding to each category of the sub image blocks.
S36: determining a second image block for reflecting the main color category of the first image block based on the first image block to obtain a second image block corresponding to each type of the sub image blocks, and determining a color category sequence of the divided area image according to each second image block; wherein the color class sequence includes a dominant color class of the segmented region image.
In this embodiment, as to the specific processes of the steps S35 and S36, reference may be made to the corresponding contents disclosed in the foregoing embodiments, and details are not repeated here.
As can be seen, in the embodiment of the present application, the segmented region image is first blocked according to a preset size to obtain the sub image blocks; the target pixel points at preset positions in each sub image block are then extracted and their color values calculated to determine their color categories; and finally the main color category of each sub image block is determined from its target pixel points and their color categories. Thus, on the basis of blocking the segmented region image, this embodiment further performs statistics on the pixel points of the corresponding sub image blocks to obtain a more accurate main color category for each sub image block.
Fig. 7 is a flowchart of a specific color recognition method according to an embodiment of the present application. Referring to fig. 7, the color recognition method includes:
S41: and segmenting the original image by using the image segmentation model to obtain images of each segmentation area.
S42: and partitioning the image of the partitioned area based on a preset rule to obtain each sub image block, and determining the main color category of the sub image block according to the color value of a pixel point at a preset position in the sub image block.
In this embodiment, as to the specific processes of the steps S41 and S42, reference may be made to the corresponding contents disclosed in the foregoing embodiments, and details are not repeated here.
S43: and classifying the sub image blocks according to the main color categories of the sub image blocks, and respectively screening a plurality of sub image blocks from each category of the sub image blocks according to the color saturation of the sub image blocks to obtain first image blocks corresponding to each category of the sub image blocks.
S44: and if the number of the first image blocks is smaller than the number actually required in the preset splicing rule, determining the corresponding shortfall number.
S45: and calculating the average of the color values of the first image blocks to obtain an average color value, and creating the shortfall number of image blocks, whose pixel color values are set to the average color value, as the padding sub image blocks.
S46: and splicing the first image blocks and the padding sub image blocks according to the preset splicing rule to obtain second image blocks for reflecting the main color category of the first image blocks, and determining the color class sequence of the segmented region image according to each second image block; wherein the color class sequence includes a dominant color class of the segmented region image.
In this embodiment, sub image blocks of the same color category may still differ because of differences in the hue, lightness and saturation of colors in the image. Each category of sub image blocks is therefore sorted by saturation from strong to weak, and a plurality of sub image blocks is then screened out of each category according to that order to obtain the first image blocks corresponding to each category. It should be noted that the number of screened sub image blocks may be determined by the number actually required by the subsequent preset splicing rule. For some color categories, however, the number of available sub image blocks is smaller than that actually required by the preset splicing rule; in that case padding sub image blocks need to be created based on the color values of the first image blocks, and the first image blocks and the padding sub image blocks are then spliced according to the preset splicing rule to obtain a second image block for reflecting the main color category of the first image blocks.
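The saturation-based screening can be sketched as follows, assuming each block is given as a flat list of RGB tuples and that 9 blocks (the 3 × 3 splicing rule) are needed per category; the helper names are illustrative:

```python
import colorsys

def mean_saturation(block):
    """Average HSV saturation of a block given as a flat list of RGB tuples."""
    sats = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)[1]
            for r, g, b in block]
    return sum(sats) / len(sats)

def screen_first_blocks(blocks_of_class, needed=9):
    """Keep at most `needed` blocks of one colour class, strongest
    saturation first (9 = the 3x3 splicing rule of the embodiment)."""
    ranked = sorted(blocks_of_class, key=mean_saturation, reverse=True)
    return ranked[:needed]
```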
As shown in fig. 9(d), the image blocks in the first column are square spliced image blocks obtained by splicing the first image blocks of each category according to a 3 × 3 splicing rule, i.e., three sub image blocks per row and per column. For the sub image blocks whose color category is black, the number available is less than 9 and the shortfall is 2. In this case the average of the color values of the first image blocks of the black category is calculated to obtain an average color value; 2 image blocks whose pixel color values are set to that average color value are created as padding sub image blocks; and finally the 2 created padding sub image blocks are spliced with the corresponding first image blocks to obtain a complete spliced image block, namely the second image block for reflecting the main color category of the first image blocks. It should be noted that when the number of sub image blocks of a certain color category is small, whether to generate the corresponding second image block is optional and depends on the actual situation. For example, as shown in fig. 9(c), only two sub image blocks have the color category white; in that case no second image block need be generated for them, and only the sub image blocks whose color categories are red and black are screened, spliced and so on to obtain the second image blocks corresponding to red and black.
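A sketch of the pad-and-splice step, assuming NumPy uint8 blocks and a 3 × 3 grid; filling the shortfall with the per-channel mean color follows the description above, while the helper name is illustrative:

```python
import numpy as np

def stitch_second_block(first_blocks, grid=3):
    """Splice first image blocks into a grid x grid second image block,
    filling any shortfall with blocks of the average colour."""
    need = grid * grid - len(first_blocks)
    blocks = list(first_blocks)
    if need > 0:
        # per-channel mean colour over all first image blocks
        avg = np.stack(first_blocks).mean(axis=(0, 1, 2)).astype(np.uint8)
        pad = np.broadcast_to(avg, first_blocks[0].shape).astype(np.uint8)
        blocks += [pad] * need
    rows = [np.concatenate(blocks[r * grid:(r + 1) * grid], axis=1)
            for r in range(grid)]
    return np.concatenate(rows, axis=0)
```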
It should be noted that, the step of determining the color class of the divided area image according to the second image block may refer to corresponding contents disclosed in the foregoing embodiments, and details are not repeated here.
As can be seen, in the embodiment of the present application, a plurality of sub image blocks is screened from each category according to color saturation to obtain the first image blocks corresponding to each category of sub image blocks. When the number of first image blocks is smaller than the number actually required by the preset splicing rule, padding sub image blocks are created from pixel points whose color value is the average of the color values of the first image blocks, and the first image blocks and the padding sub image blocks are spliced according to the preset splicing rule. Obtaining the second image block, which reflects the main color category of the first image blocks, by this splice-and-pad method can effectively improve the accuracy of color identification.
Fig. 8 is a flowchart of a specific color recognition method according to an embodiment of the present disclosure. Referring to fig. 8, the color recognition method includes:
S51: and segmenting the original image by using the image segmentation model to obtain images of each segmentation area.
S52: and partitioning the image of the partitioned area based on a preset rule to obtain each sub image block, and determining the main color category of the sub image block according to the color value of a pixel point at a preset position in the sub image block.
In this embodiment, as to the specific processes of the steps S51 and S52, reference may be made to the corresponding contents disclosed in the foregoing embodiments, and details are not repeated here.
S53: classifying the sub image blocks according to the main color categories of the sub image blocks, and prioritizing each category of the sub image blocks according to the number of the sub image blocks in each category of the sub image blocks to obtain the priority of each category of the sub image blocks.
In this embodiment, every counted sub image block has a corresponding color category. The sub image blocks are classified according to their color categories, those with the same color category being grouped into one category, and the common color category can serve as the category name. Each category of sub image blocks is then prioritized according to the number of sub image blocks it contains; that is, the category names, i.e., the color categories of the sub image blocks in each category, are sorted by the number of sub image blocks in the category to obtain the main color category ranking top_1, top_2, …, top_N of the segmented region image, with earlier positions having higher priority and later positions lower priority. It is not difficult to see that the priority of a main color category is the priority of the sub image blocks corresponding to that category, namely the priority with which those sub image blocks are spliced and padded to obtain the second image block.
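The priority ranking by block count can be sketched as follows (illustrative helper; ties are left in encounter order, which the patent does not specify):

```python
from collections import Counter

def rank_color_classes(block_classes):
    """Order colour classes top_1, top_2, ... by how many sub image blocks
    fall in each class (more blocks = higher priority)."""
    counts = Counter(block_classes)
    return [c for c, _ in counts.most_common()]
```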
S54: and respectively screening a plurality of sub image blocks from each type of sub image blocks to obtain first image blocks corresponding to each type of sub image blocks.
S55: and determining a second image block for reflecting the main color category of the first image block based on the first image block to obtain the second image block corresponding to each type of the sub image blocks.
In this embodiment, as to the specific processes of the steps S54 and S55, reference may be made to the corresponding contents disclosed in the foregoing embodiments, and details are not repeated here.
S56: and inputting each second image block into the trained color recognition model so that the color recognition model can output the color category and the corresponding confidence coefficient of each second image block.
In this embodiment, after each second image block is obtained, its color category needs to be determined. A color identification model is used to predict the color category of the second image block and output a corresponding confidence. The color identification model is obtained by training, with a training set, a blank model constructed based on a deep learning algorithm, the training set comprising sample image blocks and the corresponding color categories serving as sample labels. In this embodiment, before the second image blocks are input to the trained color recognition model, the model is constructed in advance: based on the manner of obtaining second image blocks in the above embodiment, a large number of human target images are collected, a large number of sample main-color image blocks of different colors are produced, and labeling and training are then performed. It should be noted that the color category labels must correspond to the HSV reference color categories of the above embodiment; m may be used to denote the number of color category labels, which correspondingly take the values 1, 2, …, m.
S57: and acquiring the color category and the corresponding confidence of the second image block output by the color recognition model, judging whether the confidence is greater than a preset threshold, and if the confidence is greater than the preset threshold, determining the color category of the second image block output by the color recognition model as the color category of the second image block.
S58: and if the confidence coefficient is less than or equal to the preset threshold, calculating the color value of the second image block to obtain the color category of the second image block.
In this embodiment, after the color category and corresponding confidence of a second image block output by the color identification model are obtained, it is first judged whether the confidence is greater than a preset threshold. If so, the color category predicted by the color identification model is considered trustworthy, and the category output by the model is determined as the color category of the second image block. If the confidence is less than or equal to the preset threshold, the color category predicted by the model is not considered reliable; the color value of the second image block is then calculated to obtain its color category, and the category so determined is taken as the color category of the second image block. The specific process is shown in fig. 9. In this embodiment, the RGB values of the second image block are calculated and converted into HSV color-space values, and the color category of the second image block is determined according to the HSV reference color categories, which are consistent with those of the foregoing embodiments.
It should be noted that in this embodiment the color category of a second image block is consistent with that of its corresponding sub image blocks. That is, following the foregoing embodiment, the color value of each sub image block is calculated by the conventional method to obtain its color category, and the sub image blocks are classified accordingly; the color category of the sub image blocks corresponding to a second image block may therefore be determined directly as the color category of that second image block, without recalculating its color value. It should also be noted that the value range of the preset threshold is not limited in this embodiment and may be set according to specific service requirements; in this embodiment it is set to 0.95.
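A sketch of the confidence-gated fallback with the 0.95 threshold; `hsv_fallback` is an illustrative stand-in for the conventional RGB-to-HSV color-value calculation:

```python
CONF_THRESHOLD = 0.95  # embodiment's preset confidence threshold

def decide_color(model_class, confidence, hsv_fallback):
    """Trust the deep-learning prediction only above the threshold;
    otherwise fall back to the conventional HSV colour-value calculation.
    `hsv_fallback` is a callable standing in for that computation."""
    if confidence > CONF_THRESHOLD:
        return model_class
    return hsv_fallback()
```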
S59: determining the priority of each second image block according to the priority of the sub image block corresponding to each second image block, and sequencing the color class of each second image block according to the priority of each second image block to obtain a color class sequence of the divided area image; wherein the color class sequence includes a dominant color class of the segmented region image.
In this embodiment, the color category of each second image block is obtained through the above steps. It will be understood that the combination of the color categories of the second image blocks constitutes the color categories of the segmented region image: the existence of several second image blocks for one segmented region image indicates that the image is not a pure color but contains several color categories. In step S53, priorities top_1, top_2, …, top_N were assigned to the categories of sub image blocks by counting the number of sub image blocks in each category; the priority of a second image block is that of its corresponding sub image blocks. Sorting the color categories of the second image blocks by these priorities yields the color category sequence of the segmented region image, i.e., an arrangement of color categories by the number of sub image blocks contained in the category corresponding to each second image block, with higher priorities placed earlier. The color category in the first position of the sequence is the main color category of the segmented region image.
Therefore, the color category of the segmented region image is judged comprehensively by combining the deep learning algorithm with the conventional color-value calculation method, which effectively improves the accuracy of color identification. Further, in this embodiment, the color category sequence of the segmented region image is determined by prioritizing each category of sub image blocks and their corresponding color categories, thereby determining the main color category of the segmented region image.
Referring to fig. 10, an embodiment of the present application further discloses a color identification apparatus, which includes:
the segmentation module 11 is configured to segment the original image by using an image segmentation model to obtain images of each segmented area;
the determining module 12 is configured to block the segmented region image based on a preset rule to obtain each sub image block, and determine a main color category of the sub image block according to a color value of a pixel point at a preset position in the sub image block;
the obtaining module 13 is configured to classify the sub image blocks according to main color categories of the sub image blocks, and screen a plurality of the sub image blocks from each category of the sub image blocks respectively to obtain first image blocks corresponding to each category of the sub image blocks;
an identifying module 14, configured to determine, based on the first image block, a second image block for reflecting a main color category of the first image block, to obtain a second image block corresponding to each category of the sub image blocks, and determine a color category sequence of the divided area image according to each second image block; wherein the color class sequence includes a dominant color class of the segmented region image.
Therefore, the method and the device for identifying the color of the image in the divided area determine the main color image block of the image in the divided area based on the self-adaptive blocking strategy, and determine the color category sequence of the image in the divided area, which contains the main color category of the image in the divided area, according to the color category of the main color image block, so that the accuracy and the integrity of color identification are further improved.
In some specific embodiments, the segmentation module 11 specifically includes:
the segmentation result acquisition unit is used for segmenting the original image by utilizing an image segmentation model constructed based on the U-Net network to obtain a segmentation result; the segmentation result is each segmentation region image obtained by setting different gray values for different regions of the original image;
the preprocessing unit is used for acquiring a circumscribed rectangle of the segmentation area and intercepting a circumscribed rectangle image corresponding to the circumscribed rectangle on the original image based on the circumscribed rectangle; and setting three components in the RGB values of the areas outside the segmentation areas corresponding to the external rectangles in the external rectangle images as preset values to obtain the segmentation area images.
In some specific embodiments, the determining module 12 specifically includes:
the partitioning unit is used for partitioning the image of the divided area according to a preset size to obtain each sub-image block;
the extraction unit is used for extracting a plurality of target pixel points at preset positions in the sub-image blocks and calculating the RGB value of each target pixel point to obtain the color category of each target pixel point;
the statistical unit is used for counting the proportion of the target pixel points of different color categories in all the target pixel points to obtain target proportions corresponding to different color categories;
and the judging unit is used for determining, if the color category with the maximum target ratio is a single color category, that color category as the main color category of the sub image block.
In some specific embodiments, the obtaining module 13 specifically includes:
the screening unit is used for screening a plurality of sub image blocks from each type of sub image blocks according to the color saturation of the sub image blocks to obtain first image blocks corresponding to each type of sub image blocks;
the splicing unit is used for splicing the first image block based on a preset splicing rule to obtain a second image block for reflecting the main color category of the first image block;
the filling unit is used for determining the corresponding shortage quantity if the quantity of the first image blocks is smaller than the quantity actually required in the preset splicing rule, creating the shortage quantity of filling sub-image blocks based on the color value of the first image blocks, and splicing the first image blocks and the filling sub-image blocks according to the preset splicing rule;
in some specific embodiments, the color identification device further includes:
the dividing module is used for dividing the priority of each type of the sub image blocks according to the number of the sub image blocks in each type of the sub image blocks so as to obtain the priority of each type of the sub image blocks;
in some specific embodiments, the identification module 14 specifically includes:
the input unit is used for inputting the second image block to the trained color recognition model so that the color recognition model can output the color category and the corresponding confidence coefficient of the second image block;
the judging unit is used for judging whether the confidence coefficient is larger than a preset threshold value or not; if the confidence is greater than the preset threshold, determining the color category of the second image block output by the color recognition model as the color category of the second image block, and if the confidence is less than or equal to the preset threshold, calculating the color value of the second image block to obtain the color category of the second image block.
The calculation unit is used for calculating the RGB value of the second image block, converting the RGB value of the second image block into corresponding HSV color space values, and determining a color category corresponding to the HSV color space values according to HSV reference colors so as to determine the color category of the second image block;
the sorting unit is used for determining the priority of each second image block according to the priority of the sub image block corresponding to each second image block and sorting the color category of each second image block according to the priority of each second image block so as to obtain a color category sequence of the divided area image; wherein the color class sequence includes a dominant color class of the segmented region image.
Further, the embodiment of the present application also provides an electronic device. Fig. 11 is a schematic structural diagram of an electronic device 20 according to an exemplary embodiment, and nothing in the figure should be taken as a limitation on the scope of use of the present application. The electronic device 20 may specifically include: at least one processor 21, at least one memory 22, a power supply 23, a communication interface 24, an input/output interface 25, and a communication bus 26. The memory 22 is used for storing a computer program, which is loaded and executed by the processor 21 to implement the relevant steps of the color identification method disclosed in any of the foregoing embodiments. In addition, the electronic device 20 in this embodiment may specifically be a server.
In this embodiment, the power supply 23 is configured to provide a working voltage for each hardware device on the electronic device 20; the communication interface 24 can create a data transmission channel between the electronic device 20 and an external device, and a communication protocol followed by the communication interface is any communication protocol applicable to the technical solution of the present application, and is not specifically limited herein; the input/output interface 25 is configured to obtain external input data or output data to the outside, and a specific interface type thereof may be selected according to specific application requirements, which is not specifically limited herein.
In addition, the memory 22, as a carrier for resource storage, may be a read-only memory, a random access memory, a magnetic disk, an optical disk, or the like; the resources stored thereon may include an operating system 221, a computer program 222, image data 223, etc., and the storage may be transient or permanent.
The operating system 221 is used for managing and controlling each hardware device and the computer program 222 on the electronic device 20, so as to realize the operation and processing by the processor 21 of the massive image data 223 in the memory 22; it may be Windows Server, Netware, Unix, Linux, or the like. In addition to the computer program that can be used by the electronic device 20 to perform the color recognition method disclosed in any of the foregoing embodiments, the computer program 222 may further include a computer program that can be used to perform other specific tasks. The image data 223 may include various images collected by the electronic device 20.
Further, an embodiment of the present application further discloses a storage medium, in which a computer program is stored, and when the computer program is loaded and executed by a processor, the steps of the color identification method disclosed in any of the foregoing embodiments are implemented.
The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The color recognition method, device, equipment and storage medium provided by the present invention are described in detail above. Specific examples are applied herein to explain the principle and implementation of the present invention, and the description of the above embodiments is only intended to help understand the method and its core idea. Meanwhile, for a person skilled in the art, there may be variations in the specific embodiments and the application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

1. A color recognition method, comprising:
segmenting the original image by using an image segmentation model to obtain images of each segmentation area;
partitioning the image of the partitioned area based on a preset rule to obtain each sub image block, and determining the main color category of the sub image block according to the color value of a pixel point at a preset position in the sub image block;
classifying the sub image blocks according to the main color categories of the sub image blocks, and respectively screening a plurality of sub image blocks from each category of the sub image blocks to obtain first image blocks corresponding to each category of the sub image blocks;
determining a second image block for reflecting the main color category of the first image block based on the first image block to obtain a second image block corresponding to each type of the sub image blocks, and determining a color category sequence of the divided area image according to each second image block; wherein the color class sequence includes a dominant color class of the segmented region image.
2. The color identification method according to claim 1, wherein the partitioning the divided area image based on a preset rule to obtain each sub image block, and determining a main color category of the sub image block according to a color value of a pixel point at a preset position in the sub image block comprises:
partitioning the image of the division area according to a preset size to obtain sub image blocks;
extracting a plurality of target pixel points at preset positions in the sub-image blocks and calculating the RGB value of each target pixel point to obtain the color category of each target pixel point;
counting the proportion of the target pixel points of different color categories in all the target pixel points to obtain target proportions corresponding to different color categories;
and if the color category with the maximum target ratio is a single color category, determining the color category as the main color category of the sub image block.
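The per-sub-block voting of claim 2 can be sketched as follows. The pixel representation, the sample positions, and the classifier passed in are all illustrative; the claim only fixes the overall scheme of classifying target pixels at preset positions and keeping the category only when a single one dominates:

```python
from collections import Counter

def dominant_color(block_pixels, classify, sample_positions):
    """Classify the target pixels at the preset sample positions and take
    the most common category as the sub image block's main color category.
    Returns None when the largest category is tied with another, i.e. the
    top category is not a single color category."""
    votes = Counter(classify(*block_pixels[pos]) for pos in sample_positions)
    top = votes.most_common(2)
    if len(top) > 1 and top[0][1] == top[1][1]:
        return None  # no single dominant category
    return top[0][0]
```

Claim 3's additional check would compare the winning category's share of the votes against a preset threshold and discard the sub image block when the share is too small.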
3. The color identification method according to claim 2, wherein if the color class with the largest target ratio is a single color class, before determining the color class as the main color class of the sub image block, the method further comprises:
and judging whether the maximum target ratio among all the target ratios is greater than a preset threshold value, and if not, discarding the sub image block.
4. The color identification method according to claim 1, wherein the respectively screening out a plurality of the sub image blocks from each type of the sub image blocks to obtain a first image block corresponding to each type of the sub image blocks comprises:
respectively screening a plurality of sub image blocks from each type of sub image blocks according to the color saturation of the sub image blocks to obtain first image blocks corresponding to each type of sub image blocks;
correspondingly, the determining a second image block based on the first image block for reflecting the dominant color category of the first image block includes:
and splicing the first image blocks based on a preset splicing rule to obtain second image blocks for reflecting the main color categories of the first image blocks.
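The saturation-based screening of claim 4 can be sketched as follows, assuming the sub image blocks are NumPy RGB arrays and approximating per-block saturation as the mean of (max − min)/max over the RGB channels of each pixel; both the saturation formula and the top-k selection are illustrative choices:

```python
import numpy as np

def screen_by_saturation(blocks, keep):
    """Keep the `keep` most saturated sub image blocks of one category,
    which become the first image blocks for that category."""
    def mean_saturation(block):
        px = block.reshape(-1, 3).astype(np.float64)
        mx = px.max(axis=1)
        mn = px.min(axis=1)
        # Guard against division by zero for pure-black pixels.
        sat = np.where(mx > 0, (mx - mn) / np.where(mx > 0, mx, 1), 0.0)
        return float(sat.mean())
    return sorted(blocks, key=mean_saturation, reverse=True)[:keep]
```

Screening by saturation favors blocks with vivid, unambiguous color, which makes the stitched second image block more representative of the category.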
5. The color identification method according to claim 4, wherein the stitching the first image block based on a preset stitching rule comprises:
and if the number of the first image blocks is smaller than the number actually required in the preset splicing rule, determining the corresponding shortage number, creating the shortage number of filling sub-image blocks based on the color value of the first image blocks, and splicing the first image blocks and the filling sub-image blocks according to the preset splicing rule.
6. The color identification method according to claim 5, wherein the creating the shortage quantity of filling sub-image blocks based on the color values of the first image blocks comprises:
calculating an average value of the color values of the first image blocks to obtain an average color value;
and creating the shortage quantity of image blocks as the filling sub-image blocks by setting the color value of each pixel point to the average color value.
7. The color identification method according to any one of claims 1 to 6, wherein before the respectively screening a plurality of the sub image blocks from each type of the sub image blocks, the method further comprises:
prioritizing each type of the sub image blocks according to the number of the sub image blocks in each type of the sub image blocks to obtain the priority of each type of the sub image blocks;
correspondingly, the determining the color category sequence of the divided region image according to each second image block includes:
inputting each second image block into a trained color recognition model so that the color recognition model can output the color category and the corresponding confidence coefficient of each second image block; the color recognition model is obtained by training a blank model constructed based on a deep learning algorithm by using a training set, wherein the training set comprises sample image blocks and corresponding color classes serving as sample labels;
acquiring the color category and the corresponding confidence of each second image block output by the color identification model;
judging whether the confidence coefficient is greater than a preset threshold, if so, determining the color category of the second image block output by the color recognition model as the color category of the second image block, and if not, calculating the color value of the second image block to obtain the color category of the second image block;
and determining the priority of each second image block according to the priority of the sub image block corresponding to each second image block, and sequencing the color class of each second image block according to the priority of each second image block to obtain the color class sequence of the divided area image.
8. A color identifying device, comprising:
the segmentation module is used for segmenting the original image by using the image segmentation model to obtain images of all segmentation areas;
the determining module is used for partitioning the image of the partitioned area based on a preset rule to obtain each sub image block, and determining the main color category of the sub image block according to the color value of a pixel point at a preset position in the sub image block;
the acquisition module is used for classifying the sub image blocks according to the main color categories of the sub image blocks and respectively screening a plurality of sub image blocks from each category of the sub image blocks to obtain first image blocks corresponding to each category of the sub image blocks;
the identification module is used for determining a second image block for reflecting the main color category of the first image block based on the first image block so as to obtain a second image block corresponding to each type of the sub image blocks, and determining a color category sequence of the divided area image according to each second image block; wherein the color class sequence includes a dominant color class of the segmented region image.
9. An electronic device, comprising a processor and a memory; wherein the memory is for storing a computer program that is loaded and executed by the processor to implement the color recognition method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing computer-executable instructions which, when loaded and executed by a processor, implement a color recognition method as claimed in any one of claims 1 to 7.
CN202011374500.3A 2020-11-30 2020-11-30 Color recognition method, device, equipment and storage medium Active CN112489142B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011374500.3A CN112489142B (en) 2020-11-30 2020-11-30 Color recognition method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112489142A true CN112489142A (en) 2021-03-12
CN112489142B CN112489142B (en) 2024-04-09

Family

ID=74937626

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011374500.3A Active CN112489142B (en) 2020-11-30 2020-11-30 Color recognition method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112489142B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113239939A (en) * 2021-05-12 2021-08-10 北京杰迈科技股份有限公司 Track signal lamp identification method, module and storage medium
CN114511770A (en) * 2021-12-21 2022-05-17 武汉光谷卓越科技股份有限公司 Road sign plate identification method
WO2024050760A1 (en) * 2022-09-08 2024-03-14 Intel Corporation Image processing with face mask detection

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103955952A (en) * 2014-05-04 2014-07-30 电子科技大学 Extraction and description method for garment image color features
CN107358242A (en) * 2017-07-11 2017-11-17 浙江宇视科技有限公司 Target area color identification method, device and monitor terminal
CN110826418A (en) * 2019-10-15 2020-02-21 深圳和而泰家居在线网络科技有限公司 Face feature extraction method and device
CN111062993A (en) * 2019-12-12 2020-04-24 广东智媒云图科技股份有限公司 Color-merged drawing image processing method, device, equipment and storage medium


Similar Documents

Publication Publication Date Title
CN112489142B (en) Color recognition method, device, equipment and storage medium
KR101640998B1 (en) Image processing apparatus and image processing method
CN111738064B (en) Haze concentration identification method for haze image
CN109657715B (en) Semantic segmentation method, device, equipment and medium
CN112489143A (en) Color identification method, device, equipment and storage medium
CN103617432A (en) Method and device for recognizing scenes
US8135216B2 (en) Systems and methods for unsupervised local boundary or region refinement of figure masks using over and under segmentation of regions
CN110443212B (en) Positive sample acquisition method, device, equipment and storage medium for target detection
CN106295645B (en) A kind of license plate character recognition method and device
CN110399842B (en) Video processing method and device, electronic equipment and computer readable storage medium
CN109472193A (en) Method for detecting human face and device
CN106815587B (en) Image processing method and device
CN110163822A (en) The netted analyte detection and minimizing technology and system cut based on super-pixel segmentation and figure
CN108961250A (en) A kind of object statistical method, device, terminal and storage medium
CN111274964B (en) Detection method for analyzing water surface pollutants based on visual saliency of unmanned aerial vehicle
CN113065568A (en) Target detection, attribute identification and tracking method and system
CN109740527B (en) Image processing method in video frame
CN113743378B (en) Fire monitoring method and device based on video
CN108769543B (en) Method and device for determining exposure time
CN112749696B (en) Text detection method and device
CN115620259A (en) Lane line detection method based on traffic off-site law enforcement scene
JP6855175B2 (en) Image processing equipment, image processing methods and programs
CN111083468B (en) Short video quality evaluation method and system based on image gradient
CN109727218B (en) Complete graph extraction method
US20160187637A1 (en) Image processing apparatus, storage medium, and image processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant