CN112489142B - Color recognition method, device, equipment and storage medium

Color recognition method, device, equipment and storage medium

Info

Publication number
CN112489142B
Authority
CN
China
Prior art keywords
image
color
sub
image blocks
image block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011374500.3A
Other languages
Chinese (zh)
Other versions
CN112489142A (en)
Inventor
余永龙
谢会斌
李聪廷
Current Assignee
Jinan Boguan Intelligent Technology Co Ltd
Original Assignee
Jinan Boguan Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Jinan Boguan Intelligent Technology Co Ltd
Priority to CN202011374500.3A
Publication of CN112489142A
Application granted
Publication of CN112489142B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/90 - Determination of colour characteristics
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20021 - Dividing image into blocks, subimages or windows

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application discloses a color recognition method, apparatus, device and storage medium. The method comprises: segmenting an original image with an image segmentation model to obtain the image of each segmented region; partitioning each segmented-region image into sub-image blocks based on a preset rule, and determining the dominant color class of each sub-image block from the color values of the pixels at preset positions within it; classifying the sub-image blocks by their dominant color classes, and screening several sub-image blocks from each class to obtain the first image blocks corresponding to that class; determining, based on the first image blocks, a second image block that reflects their dominant color class, and determining the color class sequence of the segmented-region image from the second image blocks. By determining, with an adaptive blocking strategy, a color class sequence that contains the dominant color classes of the segmented-region image, the accuracy and completeness of color recognition are improved.

Description

Color recognition method, device, equipment and storage medium
Technical Field
The present invention relates to the field of computer vision, and in particular, to a color recognition method, apparatus, device, and storage medium.
Background
Computer vision is now widely applied in fields such as face recognition, security and autonomous driving. Image recognition is an important branch of computer vision, and color is one of the most distinctive attributes in an image, so color recognition is particularly important within image recognition. In practice, however, color recognition is disturbed by many factors, and its accuracy is often low. In human-body attribute extraction, for example, besides illumination, color recognition of each body part is mainly affected by the small size of the object to be recognized, irregular boundaries, and the mixing of several colors in one part (color-matched garments, stripes, checks, patterns and so on), in contrast to color judgment for targets such as vehicles, which are large, regularly bounded and single-colored. In the pedestrian attribute-extraction scenario, the accuracy of color recognition for each body part is therefore low overall, which in turn degrades services such as image-to-image search and pedestrian re-identification (ReID).
Disclosure of Invention
Accordingly, the present invention provides a color recognition method, apparatus, device and storage medium that can improve the accuracy and completeness of color recognition. The specific scheme is as follows:
A first aspect of the present application provides a color recognition method, including:
segmenting an original image by using an image segmentation model to obtain the image of each segmented region;
partitioning the segmented-region image into blocks based on a preset rule to obtain sub-image blocks, and determining the dominant color class of each sub-image block according to the color values of the pixels at preset positions within the sub-image block;
classifying the sub-image blocks according to their dominant color classes, and screening several sub-image blocks from each class of sub-image blocks to obtain the first image blocks corresponding to each class;
determining, based on the first image blocks, second image blocks for reflecting the dominant color classes of the first image blocks so as to obtain the second image block corresponding to each class of sub-image blocks, and determining a color class sequence of the segmented-region image according to the second image blocks; wherein the color class sequence includes the dominant color classes of the segmented-region image.
Optionally, the partitioning of the segmented-region image based on a preset rule to obtain the sub-image blocks, and the determining of the dominant color class of a sub-image block according to the color values of the pixels at preset positions within it, include:
partitioning the segmented-region image according to a preset size to obtain the sub-image blocks;
extracting several target pixels at preset positions within the sub-image block and computing the RGB value of each target pixel to obtain the color class of each target pixel;
counting the proportion of the target pixels of each color class among all target pixels to obtain the target proportion corresponding to each color class;
and, if the color class with the largest target proportion is a single color class, determining that color class as the dominant color class of the sub-image block.
Optionally, before determining that color class as the dominant color class of the sub-image block, the method further includes:
judging whether the largest of all the target proportions is greater than a preset threshold, and discarding the sub-image block if it is not.
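As a hedged sketch of the steps above: the RGB-to-class palette, the uniform sampling grid standing in for the "preset positions", and the 0.5 threshold are all illustrative assumptions, since the patent specifies none of them.

```python
# Sketch of the per-sub-block dominant-color test. The palette, the
# sampling step and the threshold are assumptions, not the patent's values.
PALETTE = {
    "black": (0, 0, 0), "white": (255, 255, 255),
    "red": (255, 0, 0), "green": (0, 255, 0), "blue": (0, 0, 255),
}

def color_class(rgb):
    """Nearest palette entry by squared Euclidean distance in RGB."""
    return min(PALETTE, key=lambda k: sum((a - b) ** 2
                                          for a, b in zip(rgb, PALETTE[k])))

def dominant_class(block, step=2, threshold=0.5):
    """block: 2-D list of (r, g, b) tuples. Sample a uniform grid
    (the 'preset positions'), tally per-class proportions, and keep the
    block only if the top proportion exceeds `threshold`."""
    samples = [block[y][x]
               for y in range(0, len(block), step)
               for x in range(0, len(block[0]), step)]
    counts = {}
    for px in samples:
        c = color_class(px)
        counts[c] = counts.get(c, 0) + 1
    top, n = max(counts.items(), key=lambda kv: kv[1])
    if n / len(samples) > threshold:
        return top
    return None  # proportion too low: discard the sub-image block
```

A denser grid (smaller `step`) trades computation for accuracy, which is the trade-off the claim describes.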
Optionally, the screening of several sub-image blocks from each class to obtain the first image blocks corresponding to that class includes:
screening several sub-image blocks from each class according to the color saturation of the sub-image blocks, to obtain the first image blocks corresponding to each class;
correspondingly, the determining, based on the first image blocks, of a second image block for reflecting their dominant color class includes:
stitching the first image blocks according to a preset stitching rule to obtain the second image block for reflecting the dominant color class of the first image blocks.
Optionally, the stitching of the first image blocks according to the preset stitching rule includes:
if the number of first image blocks is smaller than the number actually required by the preset stitching rule, determining the corresponding shortfall, creating that number of patch sub-image blocks based on the color values of the first image blocks, and stitching the first image blocks and the patch sub-image blocks according to the preset stitching rule.
Optionally, the creating of the missing number of patch sub-image blocks based on the color values of the first image blocks includes:
computing the average of the color values of the first image blocks to obtain an average color value;
and creating the missing image blocks as patch sub-image blocks by setting the color value of every pixel in them to the average color value.
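A minimal sketch of this patch-block creation, assuming blocks are 2-D lists of (r, g, b) tuples and that the "average color value" is the channel-wise mean over all pixels of all first image blocks:

```python
# Hedged sketch: when fewer first image blocks exist than the stitching
# rule needs, fill the gap with solid blocks of their average color.
# The block representation and output size are assumptions.
def make_patch_blocks(first_blocks, missing, size=8):
    """first_blocks: list of 2-D lists of (r, g, b) tuples.
    Returns `missing` solid-color blocks of `size` x `size` pixels whose
    color is the channel-wise mean over all pixels of all first blocks."""
    pixels = [px for blk in first_blocks for row in blk for px in row]
    mean = tuple(sum(ch) // len(pixels) for ch in zip(*pixels))
    return [[[mean] * size for _ in range(size)] for _ in range(missing)]
```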
Optionally, before the screening of several sub-image blocks from each class, the method further includes:
ranking the classes of sub-image blocks according to the number of sub-image blocks each class contains, to obtain the priority of each class;
correspondingly, the determining of the color class sequence of the segmented-region image according to the second image blocks, the sequence including the dominant color classes of the segmented-region image, includes:
inputting each second image block into a trained color recognition model so that the model outputs the color class of each second image block and a corresponding confidence; the color recognition model is obtained by using a training set to train a blank model built on a deep-learning algorithm, the training set comprising sample dominant-color image blocks and their color classes as sample labels;
acquiring the color class and corresponding confidence output by the color recognition model for each second image block;
judging whether the confidence is greater than a preset threshold: if so, determining the class output by the model as the color class of the second image block; if the confidence is less than or equal to the threshold, computing the color value of the second image block to obtain its color class;
and determining the priority of each second image block according to the priority of its corresponding class of sub-image blocks, and sorting the color classes of the second image blocks by priority to obtain the color class sequence of the segmented-region image.
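The confidence-gated selection and priority ordering above can be sketched as follows; the record layout, and the rule that a class backed by more sub-image blocks (higher priority) comes first in the sequence, are assumptions drawn from the text:

```python
# Sketch of the final sequencing step: trust the recognition model's
# prediction only above a confidence threshold, otherwise fall back to a
# class computed from color values, then order classes by block priority.
def color_sequence(blocks, threshold=0.8):
    """blocks: list of dicts with keys
    'model_class', 'confidence', 'value_class', 'priority'."""
    picked = []
    for b in blocks:
        cls = b["model_class"] if b["confidence"] > threshold else b["value_class"]
        picked.append((b["priority"], cls))
    picked.sort(key=lambda t: t[0], reverse=True)  # dominant class first
    return [cls for _, cls in picked]
```

The first element of the returned sequence is then the dominant color class of the segmented-region image.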
A second aspect of the present application provides a color recognition apparatus, comprising:
a segmentation module, configured to segment an original image by using an image segmentation model to obtain the image of each segmented region;
a determining module, configured to partition the segmented-region image into sub-image blocks based on a preset rule, and determine the dominant color class of each sub-image block according to the color values of the pixels at preset positions within it;
an acquisition module, configured to classify the sub-image blocks according to their dominant color classes, and screen several sub-image blocks from each class to obtain the first image blocks corresponding to that class;
an identification module, configured to determine, based on the first image blocks, a second image block that reflects their dominant color class so as to obtain the second image block corresponding to each class, and determine a color class sequence of the segmented-region image according to the second image blocks; wherein the color class sequence includes the dominant color classes of the segmented-region image.
A third aspect of the present application provides an electronic device comprising a processor and a memory; wherein the memory is for storing a computer program that is loaded and executed by the processor to implement the aforementioned color recognition method.
A fourth aspect of the present application provides a computer-readable storage medium having stored therein computer-executable instructions that, when loaded and executed by a processor, implement the foregoing color recognition method.
In the present application, an original image is first segmented with an image segmentation model to obtain the image of each segmented region. Each segmented-region image is then partitioned based on a preset rule into sub-image blocks, and the dominant color class of each sub-image block is determined from the color values of the pixels at preset positions within it. The sub-image blocks are classified by dominant color class, and several are screened from each class to obtain the first image blocks corresponding to that class. Finally, a second image block reflecting the dominant color class of the first image blocks is determined for each class, and the color class sequence of the segmented-region image, which contains its dominant color classes, is determined from the second image blocks. In this way, the segmented-region images of the original image are partitioned into sub-image blocks with an adaptive blocking strategy, the dominant-color image blocks of each segmented-region image are obtained by classifying, stitching and patching the sub-image blocks, and a color class sequence containing the dominant color classes of the segmented-region image is determined from the color classes of those dominant-color image blocks, which improves both the accuracy and the completeness of color recognition.
Drawings
To explain the embodiments of the present invention or the prior art more clearly, the drawings required by the description of the embodiments are briefly introduced below. The drawings described here show only some embodiments of the invention; those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of a color identification method provided in the present application;
FIG. 2 is a schematic diagram of a specific color recognition method provided in the present application;
FIG. 3 is a schematic diagram of a specific image segmentation process provided in the present application;
FIG. 4 is a schematic diagram of preprocessing an image to obtain a segmented region image;
FIG. 5 is a flowchart of a specific color recognition method provided in the present application;
FIG. 6 is a schematic diagram of a process for determining the dominant color class of a sub-image block provided in the present application;
FIG. 7 is a flowchart of a specific color recognition method provided in the present application;
FIG. 8 is a flowchart of a specific color recognition method provided in the present application;
FIG. 9 is a flowchart of color recognition of an original image provided in the present application;
FIG. 10 is a schematic structural diagram of a color recognition apparatus provided in the present application;
FIG. 11 is a block diagram of a color recognition electronic device provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art from these embodiments without inventive effort fall within the scope of the invention.
To overcome the technical problems above, the present application provides a color recognition scheme that determines the dominant-color image blocks of a segmented-region image with an adaptive blocking strategy and then determines, from the color classes of those blocks, a color class sequence that includes the dominant color classes of the segmented-region image, effectively improving the accuracy and completeness of color recognition.
Fig. 1 is a flowchart of a color recognition method according to an embodiment of the present application. Referring to fig. 1, the color recognition method includes:
s11: and dividing the original image by using the image division model to obtain images of each divided area.
In this embodiment, after an original image is obtained, in order to accurately locate target objects in different regions in the original image, the original image needs to be segmented, the image is divided into regions that are mutually disjoint, and a conventional image segmentation method may be adopted, but more, the original image is subjected to processing such as semantic segmentation by using a deep learning algorithm, in this embodiment, the original image is segmented by using an image segmentation model constructed based on a segmentation algorithm, so as to obtain a segmentation result of the original image, and the original image is preprocessed based on positions of different segmentation regions in the segmentation result, so as to obtain images of each segmentation region of the original image. In order to avoid the result of background color interference color recognition, the background of the segmented region image obtained after preprocessing the original image is a solid color corresponding to a preset color value, so that the difficulty of subsequent color recognition is reduced, and the accuracy of the recognition result is improved, wherein the preset color value is the color value corresponding to the segmented region image background. It will be appreciated that each of the segmented region images contains a single region of pixels on the original image.
S12: and dividing the divided area image into blocks based on a preset rule to obtain each sub-image block, and determining the main color category of the sub-image block according to the color value of the pixel point at the preset position in the sub-image block.
In this embodiment, the segmentation area image is subjected to a segmentation process based on an adaptive segmentation policy, so as to obtain each sub-image block of the segmentation area image. The method for identifying the color class of the divided area by directly sending the whole image into the color identification model to carry out color prediction or calculating the color value of the divided area has relatively high identification accuracy on the divided area image with pure color, but is difficult to effectively identify the correct color or main color class for the divided area image with complex color composition. According to the embodiment, the segmented area image with complex structure is further divided into smaller units, namely the sub-image blocks, and then the main color categories of the sub-image blocks are determined according to the color values of the pixel points at the preset positions in the sub-image blocks, wherein in order to reduce calculation time consumption, only the color categories of the pixel points at the preset positions need to be counted, the preset positions are set according to service requirements on the premise of ensuring uniformity, and of course, the main color categories determined by utilizing the color values of all the pixel points in the sub-image blocks can represent the color categories of the sub-image blocks most under the condition of allowing calculation capacity. It should be noted that, the determined main color class of the sub-image block needs to be consistent with the color class predicted by the color recognition model in the subsequent step and the standard of the color class determined by calculating the color value. 
It will be appreciated that the smaller each sub-image block, the fewer pixels it contains, and that the higher the accuracy with which the color values of the pixels on that sub-image block are used to determine the primary color class of that sub-image block when the sub-image block contains sufficiently few pixels.
S13: classifying the sub-image blocks according to the main color types of the sub-image blocks, and screening a plurality of sub-image blocks from each type of sub-image blocks respectively to obtain first image blocks corresponding to each type of sub-image blocks.
In this embodiment, the same divided area image may correspond to a plurality of sub-image blocks, and main color categories corresponding to different sub-image blocks may be different or the same, so after the sub-image blocks and the main color categories thereof of the divided area image are acquired, the sub-image blocks need to be classified according to the main color categories of the sub-image blocks, the sub-image blocks with the same color categories are classified into one category, and the number of categories of the main color categories is identical and corresponds to the number of categories of the sub-image blocks. And then screening a plurality of sub-image blocks from each type of sub-image blocks respectively to obtain first image blocks corresponding to each type of sub-image blocks, wherein the essence of the first image blocks is still different sub-image blocks with color representativeness in each type of sub-image blocks, when the sub-image blocks contain more pixel points, only one sub-image block can be screened out of each type of sub-image blocks, and compared with the method of screening a plurality of sub-image blocks, the method of screening a plurality of sub-image blocks from each type of sub-image blocks is lower in color identification accuracy, so that the first image blocks corresponding to each type of sub-image blocks are obtained.
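A sketch of this classify-then-screen step, under two stated assumptions: "saturation" is approximated by the max-min spread of a block's mean color, and the top-k most saturated blocks per class are kept as the first image blocks.

```python
# Hedged sketch of S13: group sub-image blocks by dominant color class,
# then keep the k most saturated blocks per class. The saturation proxy
# and the value of k are assumptions.
def mean_color(block):
    px = [p for row in block for p in row]
    return tuple(sum(ch) / len(px) for ch in zip(*px))

def screen_first_blocks(blocks_with_class, k=4):
    """blocks_with_class: list of (block, dominant_class) pairs.
    Returns {class: [up to k most saturated blocks]}."""
    groups = {}
    for blk, cls in blocks_with_class:
        groups.setdefault(cls, []).append(blk)
    def saturation(blk):
        m = mean_color(blk)
        return max(m) - min(m)   # chroma spread of the mean color
    return {cls: sorted(blks, key=saturation, reverse=True)[:k]
            for cls, blks in groups.items()}
```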
S14: determining second image blocks for reflecting the main color categories of the first image blocks based on the first image blocks to obtain the second image blocks corresponding to each type of sub-image blocks, and determining a color category sequence of the segmentation area image according to each second image block; wherein the color class sequence includes a dominant color class of the segmented region image.
In this embodiment, a second image block for reflecting the main color class of the first image block is determined based on the first image block, the color class of the second image block is determined by the color class of the corresponding first image block, the color class of the corresponding first image block is reflected to a certain extent, the corresponding second image block is obtained mainly by stitching the first image block, when the color classes of the second image block and the second image block are not single color classes, it is indicated that the split area image is not solid color, the split area image has multiple color classes, and the color class sequence of the split area image can be determined according to the color classes of the second image block, and at this time, the combination of the color classes of the second image blocks is the color class sequence of the split area image. Still further, a dominant color class of the segmented region image may also be determined. It will be appreciated that the color class of the second image block may be predicted by a pre-trained color recognition model, or the color value of the second image block may be calculated to determine the color class thereof, and the method for determining the color class of the second image block is not limited in this embodiment.
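The stitching itself can be sketched as follows, assuming the preset rule tiles same-size blocks row-major on a square grid; any shortfall is padded here with white blocks purely to keep the example self-contained (the patent instead pads with average-color patch sub-image blocks, as described in the claims):

```python
# Minimal stitching sketch for S14: tile same-size first image blocks
# into one larger "second image block" on an n x n grid.
import math

def stitch(blocks):
    """blocks: list of 2-D lists of pixels, all the same size.
    Returns one 2-D list tiling them row-major on a square grid."""
    n = math.ceil(math.sqrt(len(blocks)))
    h, w = len(blocks[0]), len(blocks[0][0])
    blank = [[(255, 255, 255)] * w for _ in range(h)]  # pad with white
    grid = blocks + [blank] * (n * n - len(blocks))
    out = []
    for r in range(n):
        for y in range(h):
            row = []
            for c in range(n):
                row.extend(grid[r * n + c][y])
            out.append(row)
    return out
```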
Therefore, according to the method, the sub-image blocks are obtained by partitioning the partitioned area image of the original image based on the self-adaptive partitioning strategy, the second image blocks of the partitioned area image, namely the main color image blocks of the area image, are obtained by classifying, splicing and the like the sub-image blocks, and the color class sequence of the partitioned area image containing the main color class of the partitioned area image is determined according to the color class of each second image block.
Fig. 2 is a flowchart of a specific color recognition method according to an embodiment of the present application. Referring to fig. 2, the color recognition method includes:
s21: dividing an original image by using an image division model constructed based on a U-Net network to obtain a division result; the segmentation result is each segmentation area obtained by setting different gray values for different areas of the original image.
In this embodiment, an original image is segmented by using an image segmentation model constructed based on a U-Net network, so as to obtain each segmented region of the original image with a pixel class label, where the pixel class label is a gray value. Of course, instead of using a U-Net network to construct the image segmentation model, other image segmentation networks, such as a JPP-Net network, may be used in this embodiment. In addition, the embodiment can divide the original image by using the existing image division model which is built in advance, and can divide the image by using the image division model which is built in real time according to the service requirement.
Taking an example of constructing an image semantic segmentation model under a portrait scene, a large number of images with a human body as a target, such as a monitoring image of a pedestrian, are prepared in advance, then according to the needs of segmentation tasks, regions of hats, hairs, glasses, masks, scarves, coats, shoes, various bags, umbrellas and the like of the human body are marked, the semantic segmentation labels are gray images, the size of the semantic segmentation labels is consistent with that of the original image, different gray values are set in different regions, such as hats, hairs, … and umbrellas are set to 1, 2, … and 10 respectively, the gray value of a background is set to 0 or 255, the gray value of the background is set to 0, the result after the processing of fig. 3 (a) is displayed as shown in fig. 3 (c), so that the result effect graph of the result of fig. 3 (a) shown in fig. 3 (d) can be obtained after the gray values of the different segmentation regions in fig. 3 (c) are set to different pixel values. And then sending the marked image into a designed U-Net network for semantic segmentation training. After the semantic segmentation model is trained, the original pedestrian image to be identified can be sent into the model for semantic segmentation. As shown in fig. 3, the process of obtaining each segmented region after the pedestrian image is processed by the semantic segmentation model is performed, and the segmentation result is marked and illustrated on the original image.
S22: and acquiring an external rectangle of the segmentation area, and intercepting an external rectangle image corresponding to the external rectangle on an original image based on the external rectangle.
S23: and setting three components in RGB values of an area outside a partition area corresponding to the circumscribed rectangle in the circumscribed rectangle image as preset values to obtain each partition area image.
In this embodiment, the circumscribed rectangles of the respective divided regions are calculated based on the distribution of the pixels in the divided regions, and then the circumscribed rectangle image corresponding to the circumscribed rectangles is cut out on the original image, that is, the original image is scratched based on the circumscribed rectangles to obtain the image corresponding to the circumscribed rectangles. The border of the division area is basically irregular, and the border of the division area may also include pixels of other division areas, so that, in order to avoid the influence caused by the background and other division areas, three components in RGB values of an area outside the division area corresponding to the external rectangle in the external rectangle image may be set to preset values, for example, may be set to 255, from a color perspective, that is, the background and other division areas of the external rectangle image are set to white, so as to obtain each division area image including only the pixels of the division area, and may also be set to 0, and the corresponding color is black. Specifically, as shown in fig. 4, fig. 4 (c) shows a rectangular image circumscribed by different divided areas in fig. 4 (a) obtained based on the human body image shown in fig. 4 (a) and the division result diagram of the human body image shown in fig. 4 (b), and the divided area image containing only the pixels of the divided areas is obtained after resetting the color value of the background of each rectangular image circumscribed in fig. 4 (c).
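Steps S22 and S23 can be sketched with NumPy as follows; the (H, W, 3) array layout and the boolean region mask are assumptions about the data representation, not something the patent specifies.

```python
# Sketch of S22/S23: crop the bounding rectangle of a segmentation mask
# from the original image, then set every pixel outside the mask to a
# preset value (255 -> white) so only the region's own pixels remain.
import numpy as np

def crop_region(image, mask, fill=255):
    """image: (H, W, 3) uint8 array; mask: (H, W) bool array."""
    ys, xs = np.nonzero(mask)
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    crop = image[y0:y1, x0:x1].copy()
    crop[~mask[y0:y1, x0:x1]] = fill   # whiten background + other regions
    return crop
```

Using `fill=0` instead yields the black-background variant mentioned in the text.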
S24: and dividing the divided area image into blocks based on a preset rule to obtain each sub-image block, and determining the main color category of the sub-image block according to the color value of the pixel point at the preset position in the sub-image block.
S25: classifying the sub-image blocks according to the main color types of the sub-image blocks, and screening a plurality of sub-image blocks from each type of sub-image blocks respectively to obtain first image blocks corresponding to each type of sub-image blocks.
S26: determining second image blocks for reflecting the main color categories of the first image blocks based on the first image blocks to obtain the second image blocks corresponding to each type of sub-image blocks, and determining the color categories of the segmented area images according to the second image blocks; wherein the color class sequence includes a dominant color class of the segmented region image.
In this embodiment, for the specific process from step S24 to step S26, reference may be made to the corresponding content disclosed in the foregoing embodiment, and no further description is given here.
Therefore, in this embodiment of the application, the original image is segmented with an image segmentation model built on the U-Net network to obtain segmented regions of the original image with different gray values. The circumscribed-rectangle image of each region's circumscribed rectangle is then cut out of the original image, and background removal and similar processing are applied to obtain a segmented-region image containing only the pixels of that region. Preprocessing the original image in this way avoids, to a certain extent, the influence of the background color and of other regions' colors on the recognition result, thereby improving the accuracy of color recognition.
Fig. 5 is a flowchart of a specific color recognition method according to an embodiment of the present application. Referring to fig. 5, the color recognition method includes:
s31: and dividing the original image by using the image division model to obtain images of each divided area.
In this embodiment, regarding the specific process of step S31, reference may be made to the corresponding content disclosed in the foregoing embodiment, and no further description is given here.
S32: and dividing the image of the dividing region into blocks according to a preset size to obtain each sub-image block.
In this embodiment, each segmented region image is partitioned mainly according to an adaptive blocking policy: based on the size of the segmented region image, the size of the sub-image blocks to be produced is determined according to the service requirement, giving a preset size, and the region image is then partitioned at that preset size to obtain sub-image blocks of the required size. When determining the preset size, both the size of the segmented region image and the possible influence of the number of pixels contained in each sub-image block on the color recognition result should be considered, so that a suitable number and size of sub-image blocks is obtained.
FIG. 6 (b) shows the result of adaptively blocking fig. 6 (a) in this embodiment. Specifically, the preset size is set so that both the width W0 and the height H0 are 10 pixels; that is, every sub-image block in fig. 6 (b) is 10 × 10 pixels. In practical application, a block of width W0 and height H0 of 10 pixels can be regarded as the smallest unit of color display, which helps improve the accuracy of color recognition. The sub-image blocks are named Bij by the row and column of their location (i is the row number and j is the column number of the sub-image block, with i ≤ n and j ≤ n), where the value of n can be calculated according to the following formula, and W and H are the width and height of the segmented region image, respectively. It should be noted that in some extreme cases, for example when W is less than W0 or H is less than H0, the segmented region of width W and height H is padded with white pixels up to W0 and H0, in which case n is 1.
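A minimal sketch of the adaptive blocking described above, using the same nested-list image layout assumed earlier. The white padding covers the extreme case W < W0 or H < H0 by rounding both dimensions up to a multiple of the block size (how the patent pads internally is not disclosed; this rounding-up behavior is an assumption):

```python
W0, H0 = 10, 10  # preset block width and height from the embodiment

def pad_to_multiple(image, w0=W0, h0=H0, fill=(255, 255, 255)):
    """Pad the region image with white pixels so its width/height become
    multiples of w0/h0."""
    h, w = len(image), len(image[0])
    new_w = -(-w // w0) * w0   # ceiling division
    new_h = -(-h // h0) * h0
    padded = [row + [list(fill)] * (new_w - w) for row in image]
    padded += [[list(fill)] * new_w for _ in range(new_h - h)]
    return padded

def split_blocks(image, w0=W0, h0=H0):
    """Return sub-image blocks indexed as B[i][j] by row i and column j."""
    padded = pad_to_multiple(image, w0, h0)
    return [
        [[row[j:j + w0] for row in padded[i:i + h0]]
         for j in range(0, len(padded[0]), w0)]
        for i in range(0, len(padded), h0)
    ]
```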
S33: and extracting a plurality of target pixel points at preset positions in the sub-image block, and calculating the RGB value of each target pixel point to obtain the color category of each target pixel point.
In this embodiment, in order to obtain a more accurate main color class for each sub-image block, a plurality of target pixels at preset positions within the sub-image block are extracted after the segmented region image has been partitioned. For each 10 × 10 (pix) sub-image block in fig. 6 (b), only 25 pixels are extracted as target pixels; to ensure uniformity, the pixels at odd positions of each row and each column are taken for color value extraction, as shown by the dark blocks in fig. 6 (c). Of course, in actual operation pixels at other positions may be extracted instead, and the number of extracted target pixels may differ; when time consumption is not a concern, the color values of all pixels in the sub-image block may be calculated, i.e. 10 × 10 calculations. The RGB value of each extracted target pixel is then calculated. Determining a color class directly from RGB values is complicated, whereas after conversion to HSV, color matching is much simpler and the color class can be determined from the value ranges of H (hue), S (saturation) and V (brightness). In this embodiment, therefore, the calculated RGB value of each target pixel is converted into an HSV color space value, and the HSV reference colors are used to determine the pixel's color class. It should be noted that if a color recognition model is used to predict the color class in a subsequent step, or the color class is determined by calculating color values, the final color classes should be consistent with the HSV reference colors, so as to avoid inconsistent reference colors.
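The odd-position sampling and RGB-to-HSV conversion above can be sketched with the standard library. The patent does not publish its HSV reference-color thresholds, so the cut-offs in `color_class` below are illustrative assumptions only:

```python
import colorsys

def sample_targets(block):
    """Pixels at odd row/column positions: indices 1, 3, 5, 7, 9 in a
    10 x 10 block, giving 5 x 5 = 25 target pixels."""
    return [block[r][c]
            for r in range(1, len(block), 2)
            for c in range(1, len(block[0]), 2)]

def rgb_to_hsv_deg(r, g, b):
    """RGB in 0..255 -> (H in degrees, S and V in 0..1)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0, s, v

def color_class(r, g, b):
    """Illustrative HSV reference ranges (assumed thresholds, not the patent's)."""
    h, s, v = rgb_to_hsv_deg(r, g, b)
    if v < 0.2:
        return "black"
    if s < 0.15:
        return "white" if v > 0.8 else "gray"
    if h < 15 or h >= 345:
        return "red"
    # ... further hue bands (orange, yellow, green, blue, ...) would follow
    return "other"
```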
In one embodiment, considering that the segmented region image has undergone background removal, i.e. the background and the other segmented regions have been set to white and the R, G and B values of their pixels are all 255, target pixels for which this is the case do not participate in the statistics during actual processing.
S34: and counting the duty ratios of the target pixel points of different color categories in all the target pixel points to obtain target duty ratios corresponding to different color categories, and if the color category with the largest target duty ratio is a single color category, determining the color category as the main color category of the sub-image block.
In this embodiment, after the color value of each extracted target pixel has been calculated and its color class determined, statistics can be performed. First, the target pixels are grouped by color class; then the ratio of the number of target pixels of each color class to the total number of extracted target pixels is calculated, yielding the target ratio of each color class. Next it is determined whether the color class with the largest target ratio is a single color class. If so, that color class is taken as the main color class of the sub-image block; if not, i.e. several color classes share the largest target ratio, the sub-image block is discarded and excluded from the subsequent color recognition process. It should be noted that sub-image blocks whose main color is too mixed to be determined clearly may simply be discarded; therefore, in one class of embodiments, before judging whether the color class with the largest target ratio is a single color class, it is further judged whether the largest of all target ratios exceeds a preset threshold, and if not, the sub-image block is discarded. The preset threshold is set according to the service requirement; in this embodiment it is set to 0.8, i.e. only when at least 20 of the 25 extracted target pixels belong to the color class with the largest target ratio is it further judged whether that class is a single color class and, if so, taken as the main color class of the sub-image block.
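The decision procedure above, with the 0.8 threshold and both discard rules, can be sketched as follows (a minimal sketch; the function names are placeholders, not the patent's):

```python
from collections import Counter

def main_color_class(target_classes, threshold=0.8):
    """Return the dominant color class of a sub-image block, or None if the
    block should be discarded (largest ratio below threshold, or a tie)."""
    counts = Counter(target_classes)
    total = len(target_classes)
    top_ratio = max(counts.values()) / total
    if top_ratio < threshold:
        return None                  # too mixed: discard the sub-image block
    winners = [c for c, n in counts.items() if n / total == top_ratio]
    if len(winners) > 1:
        return None                  # not a single color class: discard
    return winners[0]
```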
S35: classifying the sub-image blocks according to the main color types of the sub-image blocks, and screening a plurality of sub-image blocks from each type of sub-image blocks respectively to obtain first image blocks corresponding to each type of sub-image blocks.
S36: determining second image blocks for reflecting the main color categories of the first image blocks based on the first image blocks to obtain the second image blocks corresponding to each type of sub-image blocks, and determining a color category sequence of the segmentation area image according to each second image block; wherein the color class sequence includes a dominant color class of the segmented region image.
In this embodiment, regarding the specific processes of steps S35 and S36, reference may be made to the corresponding contents disclosed in the foregoing embodiment, and a detailed description is omitted herein.
As can be seen, in this embodiment of the application, the segmented region image is first partitioned at a preset size to obtain sub-image blocks; target pixels at preset positions within each sub-image block are then extracted and their color values calculated to determine their color classes; finally, the main color class of the sub-image block is determined from the target pixels and their color classes. By further counting the pixels of the sub-image blocks on top of partitioning the segmented region image, this embodiment makes the main color class of each sub-image block more accurate.
Fig. 7 is a flowchart of a specific color recognition method according to an embodiment of the present application. Referring to fig. 7, the color recognition method includes:
s41: and dividing the original image by using the image division model to obtain images of each divided area.
S42: and dividing the divided area image into blocks based on a preset rule to obtain each sub-image block, and determining the main color category of the sub-image block according to the color value of the pixel point at the preset position in the sub-image block.
In this embodiment, regarding the specific procedures of steps S41 and S42, reference may be made to the corresponding contents disclosed in the foregoing embodiment, and a detailed description is omitted herein.
S43: classifying the sub-image blocks according to the main color types of the sub-image blocks, and respectively screening a plurality of sub-image blocks from each type of sub-image blocks according to the color saturation of the sub-image blocks to obtain first image blocks corresponding to each type of sub-image blocks.
S44: if the number of the first image blocks is smaller than the number actually required in the preset splicing rule, determining the corresponding shortage number.
S45: and calculating the average value of the color values of the first image blocks to obtain average color values, and creating the deficient image blocks as the complement sub-image blocks by setting the color values of the pixel points as the image creation mode of the average color values.
S46: splicing the first image block and the supplementary sub-image block according to the preset splicing rule to obtain second image blocks for reflecting the main color types of the first image block, and determining a color type sequence of the segmented region image according to each second image block; wherein the color class sequence includes a dominant color class of the segmented region image.
In this embodiment, sub-image blocks of the same color class may still differ because of differences in hue, brightness and saturation within the image. This embodiment therefore sorts the sub-image blocks of each class from strong to weak saturation and then screens a plurality of sub-image blocks from each class according to this ordering, obtaining the first image blocks corresponding to each class of sub-image blocks. It should be noted that the number of sub-image blocks selected may be determined by the number actually required in the subsequent preset stitching rule. For some color classes, however, the class contains fewer sub-image blocks than the stitching rule requires; in that case patch sub-image blocks are created based on the color values of the first image blocks, and the first image blocks and the patch sub-image blocks are stitched together according to the preset stitching rule to obtain a second image block reflecting the main color class of the first image blocks.
As shown in fig. 9 (d), the first column of image blocks in the figure is a square stitched image block obtained by stitching the first image blocks of each class of sub-image blocks according to a 3 × 3 stitching rule, i.e. 3 sub-image blocks per row and per column. For the black sub-image blocks, whose number is less than 9 (the class is short by 2), the average of the color values of the black first image blocks is calculated to obtain an average color value; 2 image blocks whose pixel color values are all set to that average are created as patch sub-image blocks, and finally these 2 patch sub-image blocks are stitched with the corresponding first image blocks to obtain a complete patched stitched image block, i.e. the second image block reflecting the main color class of those first image blocks. It should be noted that when a color class contains only very few sub-image blocks, second image blocks may be generated selectively according to the actual situation. For example, among the classes of sub-image blocks listed in fig. 9 (c), only two sub-image blocks have the white color class; in that case no second image block need be generated for them, and only the red and black classes are screened, stitched and patched to obtain their second image blocks.
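The screen-patch-stitch pipeline can be sketched as follows. This is a minimal sketch under stated assumptions: blocks are nested pixel lists as before, and `saturation` is passed in as a callable (e.g. the mean HSV saturation of a block), since the patent does not specify the exact measure:

```python
def average_color(blocks):
    """Mean R, G, B over all pixels of the given blocks."""
    pixels = [p for b in blocks for row in b for p in row]
    return [sum(p[ch] for p in pixels) // len(pixels) for ch in range(3)]

def build_second_block(blocks, saturation, grid=3):
    """Pick the grid*grid most saturated first image blocks; if fewer exist,
    create solid-colored patch blocks from the average color, then stitch."""
    chosen = sorted(blocks, key=saturation, reverse=True)[:grid * grid]
    need = grid * grid - len(chosen)
    if need:
        h, w = len(chosen[0]), len(chosen[0][0])
        patch = [[average_color(chosen)] * w for _ in range(h)]
        chosen += [patch] * need
    # stitch row-wise: concatenate the pixel rows of each group of `grid` blocks
    stitched = []
    for g in range(0, grid * grid, grid):
        group = chosen[g:g + grid]
        for r in range(len(group[0])):
            stitched.append(sum((blk[r] for blk in group), []))
    return stitched
```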
It should be noted that, the step of determining the color class of the segmented area image according to the second image block may refer to the corresponding content disclosed in the foregoing embodiment, and will not be described herein again.
As can be seen, in this embodiment of the application, a plurality of sub-image blocks are screened from each class of sub-image blocks according to their color saturation to obtain the first image blocks of each class. When the number of first image blocks is smaller than the number actually required by the preset stitching rule, patch sub-image blocks whose pixel color values are set to the average color value of the first image blocks are created, and the first image blocks and the patch sub-image blocks are stitched according to the preset stitching rule. Obtaining the second image block that reflects the main color class of the first image blocks by this stitch-and-patch method can effectively improve the accuracy of color recognition.
Fig. 8 is a flowchart of a specific color recognition method according to an embodiment of the present application. Referring to fig. 8, the color recognition method includes:
s51: and dividing the original image by using the image division model to obtain images of each divided area.
S52: and dividing the divided area image into blocks based on a preset rule to obtain each sub-image block, and determining the main color category of the sub-image block according to the color value of the pixel point at the preset position in the sub-image block.
In this embodiment, regarding the specific procedures of steps S51 and S52, reference may be made to the corresponding contents disclosed in the foregoing embodiment, and a detailed description is omitted herein.
S53: classifying the sub-image blocks according to the main color types of the sub-image blocks, and prioritizing each type of sub-image blocks according to the number of the sub-image blocks in each type of sub-image blocks to obtain the priority of each type of sub-image blocks.
In this embodiment, each counted sub-image block has a corresponding color class. The sub-image blocks are classified by color class, blocks with the same color class being placed in the same class, whose color class can serve as the class name. At the same time, the classes are ranked by the number of sub-image blocks they contain; that is, the class names, i.e. the color classes, are prioritized according to the number of sub-image blocks, giving the priorities top_1, top_2, …, top_n for the main color classes of the segmented region image. A class containing more sub-image blocks ranks higher, and a class containing fewer ranks correspondingly lower. It is not difficult to see that the priority of a main color class is also the priority of the class of sub-image blocks corresponding to it, and hence of the second image block generated from that class.
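The priority ranking above amounts to sorting color classes by how many sub-image blocks carry them; a minimal sketch (the function name is a placeholder):

```python
from collections import Counter

def rank_color_classes(block_classes):
    """Order color classes by sub-image-block count, descending:
    index 0 is top_1 (the candidate dominant color), then top_2, ..."""
    counts = Counter(block_classes)
    return [c for c, _ in counts.most_common()]
```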
S54: and respectively screening a plurality of sub-image blocks from each type of sub-image blocks to obtain first image blocks corresponding to each type of sub-image blocks.
S55: and determining a second image block for reflecting the main color category of the first image block based on the first image block so as to obtain the second image block corresponding to each sub-image block.
In this embodiment, regarding the specific processes of steps S54 and S55, reference may be made to the corresponding contents disclosed in the foregoing embodiments, and a detailed description is omitted herein.
S56: and inputting each second image block into the trained color recognition model so that the color recognition model outputs the color category and the corresponding confidence of each second image block.
In this embodiment, after the second image blocks are obtained, the color class of each must be determined. A color recognition model is used to predict the color class of each second image block and output a corresponding confidence. The color recognition model is obtained by training, on a training set, a blank model built with a deep learning algorithm; the training set consists of sample image blocks and their color classes as sample labels. Before the second image blocks are input into the trained model, the model is built in advance: based on the way second image blocks are acquired in the embodiments above, a large number of human body target images are collected, a large number of sample main-color image blocks of different colors are produced, and labeling and training are performed. It should be noted that the color class labels must correspond to the HSV reference color classes described in the foregoing embodiments; m color class labels may be represented by the values 1, 2, …, m.
S57: and acquiring the color category of the second image block output by the color recognition model and the corresponding confidence coefficient, judging whether the confidence coefficient is larger than a preset threshold value, and if so, determining the color category of the second image block output by the color recognition model as the color category of the second image block.
S58: and if the confidence is smaller than or equal to the preset threshold, calculating the color value of the second image block to obtain the color class of the second image block.
In this embodiment, after the color class of a second image block and its confidence have been output by the color recognition model, it is first judged whether the confidence exceeds a preset threshold. If so, the color class predicted by the model is trusted and is determined as the color class of the second image block. If the confidence is less than or equal to the preset threshold, the prediction is not trusted, and the color value of the second image block must be further calculated to obtain its color class; the class thus determined is taken as the color class of the second image block. The specific process is shown in fig. 9. In this embodiment, the RGB value of the second image block is calculated, the result is converted into an HSV color space value, and the color class of the second image block is determined from the HSV reference color classes, which are consistent with those of the foregoing embodiments.
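The confidence gate above can be sketched as follows. This is a minimal sketch, not the patent's implementation: `hsv_classify` stands in for the HSV reference-color lookup, and reducing the block to its mean RGB before classifying is an assumed simplification:

```python
def decide_color(model_class, confidence, second_block, hsv_classify,
                 threshold=0.95):
    """Trust the recognition model above the confidence threshold; otherwise
    fall back to classifying the block's mean RGB value in HSV space."""
    if confidence > threshold:
        return model_class
    pixels = [p for row in second_block for p in row]
    mean_rgb = tuple(sum(p[ch] for p in pixels) // len(pixels)
                     for ch in range(3))
    return hsv_classify(*mean_rgb)
```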
It should be noted that in this embodiment the color class of a second image block is consistent with the color class of the sub-image blocks it corresponds to; that is, since the foregoing embodiments already calculated the color values of the sub-image blocks by the conventional method to obtain and classify their color classes, the color class of the corresponding sub-image blocks may here be determined directly as the color class of the second image block, without recalculating its color value. It should also be noted that this embodiment does not limit the value range of the preset threshold, which may be set according to specific service requirements; in this embodiment it is set to 0.95.
S59: determining the priority of each second image block according to the priority of the sub-image block corresponding to each second image block, and sorting the color categories of each second image block according to the priority of each second image block to obtain a color category sequence of the segmented region image; wherein the color class sequence includes a dominant color class of the segmented region image.
In this embodiment, the color classes of the second image blocks are obtained through the steps above, and it is easy to see that the combination of these color classes constitutes the color classes of the segmented region image; when one segmented region image corresponds to several second image blocks, the region is not a solid color and contains several color classes. In step S53, the classes of sub-image blocks were ranked into the priorities top_1, top_2, …, top_n by counting their sub-image blocks, and the priority of a second image block is the priority of the class of sub-image blocks it corresponds to. The color classes of the second image blocks are sorted by these priorities, and the resulting color class sequence of the segmented region image is the arrangement of color classes by the number of sub-image blocks contained in the class corresponding to each second image block. Since higher priority sorts first, the color class in the first position of the sequence is the main color class of the segmented region image.
It can thus be seen that in this embodiment of the application, the color class of the segmented region image is judged comprehensively by deeply combining a deep learning algorithm with the conventional method of calculating color values, which effectively improves the accuracy of color recognition. Further, this embodiment determines the color class sequence of the segmented region image, and hence its main color class, by prioritizing each class of sub-image blocks and its corresponding color class.
Referring to fig. 10, the embodiment of the application further correspondingly discloses a color identification device, which includes:
the segmentation module 11 is used for segmenting the original image by utilizing the image segmentation model to obtain images of all the segmentation areas;
a determining module 12, configured to divide the divided area image into sub-image blocks based on a preset rule, and determine a main color class of the sub-image block according to a color value of a pixel point at a preset position in the sub-image block;
the obtaining module 13 is configured to classify the sub-image blocks according to main color categories of the sub-image blocks, and screen a plurality of sub-image blocks from each type of sub-image blocks respectively, so as to obtain first image blocks corresponding to each type of sub-image blocks;
An identifying module 14, configured to determine, based on the first image block, a second image block for reflecting a main color class of the first image block, so as to obtain second image blocks corresponding to each type of the sub-image blocks, and determine a color class sequence of the segmented region image according to each second image block; wherein the color class sequence includes a dominant color class of the segmented region image.
Therefore, the embodiment of the application determines the main color image block of the divided area image based on the self-adaptive block strategy, and determines the color class sequence of the main color class of the divided area image containing the divided area image according to the color class of the main color image block, thereby further improving the accuracy and the completeness of color recognition.
In some embodiments, the dividing module 11 specifically includes:
the segmentation result acquisition unit is used for segmenting the original image by utilizing an image segmentation model constructed based on the U-Net network so as to obtain a segmentation result; the segmentation result is each segmented region image obtained by setting different gray values for different regions of the original image;
the preprocessing unit is used for acquiring an external rectangle of the segmentation area, and intercepting an external rectangle image corresponding to the external rectangle on an original image based on the external rectangle; and setting three components in RGB values of an area outside a partition area corresponding to the circumscribed rectangle in the circumscribed rectangle image as preset values to obtain each partition area image.
In some embodiments, the determining module 12 specifically includes:
the partitioning unit is used for partitioning the partitioned area image according to a preset size to obtain each sub-image block;
the extraction unit is used for extracting a plurality of target pixel points at preset positions in the sub-image block and calculating the RGB value of each target pixel point so as to obtain the color category of each target pixel point;
the statistics unit is used for counting the duty ratios of the target pixel points of different color categories in all the target pixel points to obtain target duty ratios corresponding to the different color categories;
and the judging unit is used for determining the color category as the main color category of the sub-image block if the color category with the maximum target duty ratio is a single color category.
In some embodiments, the obtaining module 13 specifically includes:
the screening unit is used for screening a plurality of sub-image blocks from each type of sub-image blocks according to the color saturation of the sub-image blocks so as to obtain first image blocks corresponding to each type of sub-image blocks;
the splicing unit is used for splicing the first image blocks based on a preset splicing rule to obtain second image blocks used for reflecting the main color types of the first image blocks;
The patching unit is used for determining the corresponding shortfall if the number of the first image blocks is smaller than the number actually required by the preset stitching rule, creating that number of patch sub-image blocks based on the color values of the first image blocks, and stitching the first image blocks and the patch sub-image blocks according to the preset stitching rule;
in some specific embodiments, the color recognition device further comprises:
the dividing module is used for dividing the priorities of the sub-image blocks of each type according to the number of the sub-image blocks in the sub-image blocks of each type so as to obtain the priorities of the sub-image blocks of each type;
in some embodiments, the identification module 14 specifically includes:
the input unit is used for inputting the second image block into the trained color recognition model so that the color recognition model outputs the color category and the corresponding confidence of the second image block;
the judging unit is used for judging whether the confidence coefficient is larger than a preset threshold value or not; and if the confidence is larger than the preset threshold, determining the color class of the second image block output by the color recognition model as the color class of the second image block, and if the confidence is smaller than or equal to the preset threshold, calculating the color value of the second image block to obtain the color class of the second image block.
The computing unit is used for computing the RGB value of the second image block, converting the RGB value of the second image block into a corresponding HSV color space value, and determining the color category corresponding to the HSV color space value according to the HSV reference color so as to determine the color category of the second image block;
the sorting unit is used for determining the priority of each second image block according to the priority of the sub-image block corresponding to each second image block, and sorting the color class of each second image block according to the priority of each second image block so as to obtain a color class sequence of the segmented region image; wherein the color class sequence includes a dominant color class of the segmented region image.
Further, the embodiment of the application also provides electronic equipment. Fig. 11 is a block diagram of an electronic device 20, according to an exemplary embodiment, and the contents of the diagram should not be construed as limiting the scope of use of the present application in any way.
Fig. 11 is a schematic structural diagram of an electronic device 20 according to an embodiment of the present application. The electronic device 20 may specifically include: at least one processor 21, at least one memory 22, a power supply 23, a communication interface 24, an input output interface 25, and a communication bus 26. Wherein the memory 22 is configured to store a computer program that is loaded and executed by the processor 21 to implement the relevant steps of the color recognition method disclosed in any of the foregoing embodiments. In addition, the electronic device 20 in the present embodiment may be a server.
In this embodiment, the power supply 23 is configured to provide an operating voltage for each hardware device on the electronic device 20; the communication interface 24 can create a data transmission channel between the electronic device 20 and an external device, and the communication protocol it follows may be any communication protocol applicable to the technical solution of the present application, which is not specifically limited herein; the input/output interface 25 is used for acquiring external input data or outputting data to external devices, and its specific interface type may be selected according to the specific application requirements, which is not limited herein.
The memory 22 may be any carrier for storing resources, such as a read-only memory, a random access memory, a magnetic disk, or an optical disk; the resources stored thereon may include an operating system 221, a computer program 222, and image data 223, and the storage may be temporary or permanent.
The operating system 221, which may be Windows Server, NetWare, Unix, Linux, etc., is used for managing and controlling the hardware devices on the electronic device 20 and the computer program 222, so as to enable the processor 21 to operate on and process the mass of image data 223 in the memory 22. In addition to the computer program that performs the color recognition method disclosed in any of the previous embodiments, the computer program 222 may further include computer programs for performing other specific tasks. The image data 223 may include various images collected by the electronic device 20.
Further, an embodiment of the present application also discloses a storage medium storing a computer program which, when loaded and executed by a processor, implements the steps of the color recognition method disclosed in any one of the previous embodiments.
In this specification, the embodiments are described in a progressive manner, with each embodiment focusing on its differences from the other embodiments; for the same or similar parts between the embodiments, reference may be made to one another. Since the device disclosed in an embodiment corresponds to the method disclosed in that embodiment, its description is relatively brief, and reference may be made to the description of the method section for relevant details.
Finally, it is further noted that relational terms such as first and second are used herein solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The color recognition method, device, equipment, and storage medium provided by the present invention have been described in detail above. Specific examples are used herein to illustrate the principles and implementations of the invention, and the description of these embodiments is intended only to help understand the method and its core ideas. Meanwhile, those skilled in the art may make changes to the specific embodiments and the application scope in accordance with the ideas of the present invention; in view of the above, the contents of this specification should not be construed as limiting the present invention.

Claims (8)

1. A color recognition method, comprising:
segmenting an original image by using an image segmentation model to obtain each segmented region image;
dividing the segmented region image into blocks based on a preset rule to obtain each sub-image block, and determining the main color class of the sub-image block according to the color values of pixel points at preset positions in the sub-image block, which comprises:
partitioning the segmented region image according to a preset size to obtain each sub-image block;
extracting a plurality of target pixel points at preset positions in the sub-image block and calculating the RGB value of each target pixel point to obtain the color class of each target pixel point;
counting the proportions of target pixel points of different color classes among all the target pixel points to obtain target proportions corresponding to the different color classes;
if the color class with the largest target proportion is a single color class, determining that color class as the main color class of the sub-image block;
classifying the sub-image blocks according to the main color classes of the sub-image blocks, and screening a plurality of sub-image blocks from each class of sub-image blocks respectively to obtain first image blocks corresponding to each class of sub-image blocks;
determining, based on the first image blocks, second image blocks for reflecting the main color classes of the first image blocks, so as to obtain the second image block corresponding to each class of sub-image blocks, and determining a color class sequence of the segmented region image according to each second image block; wherein the color class sequence comprises the main color class of the segmented region image;
wherein the screening a plurality of sub-image blocks from each class of sub-image blocks respectively to obtain the first image blocks corresponding to each class of sub-image blocks comprises:
screening a plurality of sub-image blocks from each class of sub-image blocks according to the color saturation of the sub-image blocks to obtain the first image blocks corresponding to each class of sub-image blocks;
and wherein the determining, based on the first image blocks, second image blocks for reflecting the main color classes of the first image blocks comprises:
stitching the first image blocks based on a preset stitching rule to obtain a second image block for reflecting the main color class of the first image blocks.
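As an illustration only, the screening-by-saturation and stitching steps of claim 1 might look like the sketch below. The block size, the number of blocks kept, the horizontal-strip stitching layout, and the saturation measure are all assumptions; the claim does not fix any of them.

```python
import numpy as np

def mean_saturation(block):
    """Approximate the mean HSV saturation of an RGB block as (max - min) / max per pixel."""
    b = block.astype(np.float32)
    mx = b.max(axis=2)
    mn = b.min(axis=2)
    s = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-6), 0.0)
    return float(s.mean())

def screen_and_stitch(blocks, keep=4):
    """Keep the `keep` most saturated sub-image blocks of one color class and
    stitch them into a single second image block (here: a horizontal strip)."""
    first = sorted(blocks, key=mean_saturation, reverse=True)[:keep]
    return np.concatenate(first, axis=1)

# Six hypothetical 8x8 sub-image blocks of one color class.
rng = np.random.default_rng(0)
blocks = [rng.integers(0, 256, (8, 8, 3), dtype=np.uint8) for _ in range(6)]
second = screen_and_stitch(blocks, keep=4)
print(second.shape)  # prints (8, 32, 3)
```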
2. The method according to claim 1, wherein, if the color class with the largest target proportion is a single color class, before determining that color class as the main color class of the sub-image block, the method further comprises:
judging whether the largest of all the target proportions is greater than a preset threshold, and discarding the sub-image block if not.
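The proportion counting of claim 1 and the threshold check of claim 2 can be sketched together as follows. The 0.5 threshold is an assumption, and tie handling (the "single color class" condition) is omitted for brevity:

```python
from collections import Counter

def main_color_class(pixel_classes, threshold=0.5):
    """pixel_classes: the color class of each sampled target pixel point.
    Returns the main color class, or None when the sub-image block is discarded."""
    counts = Counter(pixel_classes)
    color, n = counts.most_common(1)[0]
    proportion = n / len(pixel_classes)
    # Claim 2: discard the sub-image block when even the largest
    # target proportion does not exceed the preset threshold.
    if proportion <= threshold:
        return None
    return color

print(main_color_class(["red", "red", "red", "blue"]))   # prints "red" (0.75 > 0.5)
print(main_color_class(["red", "blue", "green", "gray"]))  # prints None
```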
3. The color recognition method according to claim 1, wherein the stitching the first image blocks based on a preset stitching rule comprises:
if the number of the first image blocks is smaller than the number actually required by the preset stitching rule, determining the corresponding number of missing blocks, creating that number of complement sub-image blocks based on the color values of the first image blocks, and stitching the first image blocks and the complement sub-image blocks according to the preset stitching rule.
4. The color recognition method according to claim 3, wherein the creating that number of complement sub-image blocks based on the color values of the first image blocks comprises:
calculating an average of the color values of the first image blocks to obtain an average color value;
and creating the missing image blocks as the complement sub-image blocks by creating images in which the color value of every pixel point is set to the average color value.
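A minimal sketch of the complement-block creation of claims 3 and 4. The block shape and the required count are invented for illustration; the claims fix neither:

```python
import numpy as np

def make_complements(first_blocks, required, block_shape=(8, 8, 3)):
    """If fewer first image blocks exist than the stitching rule requires,
    create the missing ones as solid blocks filled with the average color value."""
    missing = required - len(first_blocks)
    if missing <= 0:
        return []
    # Average color value over all pixels of all first image blocks.
    avg = np.mean([b.reshape(-1, 3) for b in first_blocks], axis=(0, 1))
    fill = np.broadcast_to(avg.astype(np.uint8), block_shape).copy()
    return [fill.copy() for _ in range(missing)]

rng = np.random.default_rng(1)
first = [rng.integers(0, 256, (8, 8, 3), dtype=np.uint8) for _ in range(2)]
comps = make_complements(first, required=4)
print(len(comps), comps[0].shape)  # prints 2 (8, 8, 3)
```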
5. The method according to any one of claims 1-4, wherein, before the screening a plurality of sub-image blocks from each class of the sub-image blocks respectively, the method further comprises:
prioritizing each class of the sub-image blocks according to the number of sub-image blocks in each class to obtain the priority of each class of sub-image blocks;
correspondingly, the determining the color class sequence of the segmented region image according to each second image block comprises:
inputting each second image block into a trained color recognition model so that the color recognition model outputs the color class and the corresponding confidence of each second image block; wherein the color recognition model is obtained by training, with a training set, a blank model constructed based on a deep learning algorithm, the training set comprising sample image blocks and corresponding color classes serving as sample labels;
acquiring the color class and the corresponding confidence of each second image block output by the color recognition model;
judging whether the confidence is greater than a preset threshold; if the confidence is greater than the preset threshold, determining the color class output by the color recognition model as the color class of the second image block, and if the confidence is less than or equal to the preset threshold, calculating the color value of the second image block to obtain the color class of the second image block;
and determining the priority of each second image block according to the priority of the sub-image blocks corresponding to that second image block, and sorting the color classes of the second image blocks according to the priorities of the second image blocks to obtain the color class sequence of the segmented region image.
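The confidence-gated fallback of claim 5 is essentially a two-branch dispatch. In the sketch below, `model_predict`, `classify_by_color_value`, and the 0.8 threshold are invented stand-ins for the trained model, the HSV-based color-value calculation, and the preset threshold, none of which the claim specifies as code:

```python
CONF_THRESHOLD = 0.8  # hypothetical preset threshold

def model_predict(block):
    # Stand-in for the trained deep-learning color recognition model:
    # returns (color class, confidence).
    return "blue", 0.65

def classify_by_color_value(block):
    # Stand-in for the RGB -> HSV reference-color fallback computation.
    return "cyan"

def recognize_color(block):
    """Trust the model only when its confidence exceeds the preset threshold;
    otherwise fall back to calculating the color value of the second image block."""
    color, confidence = model_predict(block)
    if confidence > CONF_THRESHOLD:
        return color
    return classify_by_color_value(block)

print(recognize_color(None))  # confidence 0.65 <= 0.8, so prints "cyan"
```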
6. A color recognition device, comprising:
the segmentation module is used for segmenting the original image by using the image segmentation model to obtain each segmented region image;
the determining module is used for dividing the segmented region image into blocks based on a preset rule to obtain each sub-image block, and determining the main color class of the sub-image block according to the color values of pixel points at preset positions in the sub-image block;
wherein the determining module specifically comprises:
the partitioning unit is used for partitioning the segmented region image according to a preset size to obtain each sub-image block;
the extraction unit is used for extracting a plurality of target pixel points at preset positions in the sub-image block and calculating the RGB value of each target pixel point so as to obtain the color class of each target pixel point;
the statistics unit is used for counting the proportions of target pixel points of different color classes among all the target pixel points to obtain target proportions corresponding to the different color classes;
the judging unit is used for determining a color class as the main color class of the sub-image block if the color class with the largest target proportion is a single color class;
the acquisition module is used for classifying the sub-image blocks according to the main color classes of the sub-image blocks, and screening a plurality of sub-image blocks from each class of sub-image blocks respectively to obtain the first image blocks corresponding to each class of sub-image blocks;
the identification module is used for determining, based on the first image blocks, second image blocks for reflecting the main color classes of the first image blocks, so as to obtain the second image block corresponding to each class of sub-image blocks, and determining a color class sequence of the segmented region image according to each second image block; wherein the color class sequence comprises the main color class of the segmented region image;
wherein the acquisition module is specifically configured to:
screen a plurality of sub-image blocks from each class of sub-image blocks according to the color saturation of the sub-image blocks to obtain the first image blocks corresponding to each class of sub-image blocks;
and the identification module is specifically configured to:
stitch the first image blocks based on a preset stitching rule to obtain a second image block for reflecting the main color class of the first image blocks.
7. An electronic device comprising a processor and a memory; wherein the memory is for storing a computer program to be loaded and executed by the processor to implement the color recognition method of any one of claims 1 to 5.
8. A computer readable storage medium storing computer executable instructions which, when loaded and executed by a processor, implement the color recognition method of any one of claims 1 to 5.
CN202011374500.3A 2020-11-30 2020-11-30 Color recognition method, device, equipment and storage medium Active CN112489142B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011374500.3A CN112489142B (en) 2020-11-30 2020-11-30 Color recognition method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112489142A CN112489142A (en) 2021-03-12
CN112489142B true CN112489142B (en) 2024-04-09

Family

ID=74937626

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011374500.3A Active CN112489142B (en) 2020-11-30 2020-11-30 Color recognition method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112489142B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113239939A (en) * 2021-05-12 2021-08-10 北京杰迈科技股份有限公司 Track signal lamp identification method, module and storage medium
CN114511770A (en) * 2021-12-21 2022-05-17 武汉光谷卓越科技股份有限公司 Road sign plate identification method
WO2024050760A1 (en) * 2022-09-08 2024-03-14 Intel Corporation Image processing with face mask detection

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103955952A (en) * 2014-05-04 2014-07-30 电子科技大学 Extraction and description method for garment image color features
CN107358242A (en) * 2017-07-11 2017-11-17 浙江宇视科技有限公司 Target area color identification method, device and monitor terminal
CN110826418A (en) * 2019-10-15 2020-02-21 深圳和而泰家居在线网络科技有限公司 Face feature extraction method and device
CN111062993A (en) * 2019-12-12 2020-04-24 广东智媒云图科技股份有限公司 Color-merged drawing image processing method, device, equipment and storage medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant