CN117670897A - Background color extraction method, device, equipment, storage medium and program product - Google Patents


Info

Publication number
CN117670897A
CN117670897A (application CN202211009071.9A)
Authority
CN
China
Prior art keywords
color
background
binary image
image
binary
Prior art date
Legal status
Pending
Application number
CN202211009071.9A
Other languages
Chinese (zh)
Inventor
陈相
戚嘉懿
陶祝
Current Assignee
Douyin Vision Co Ltd
Original Assignee
Douyin Vision Co Ltd
Priority date
Filing date
Publication date
Application filed by Douyin Vision Co Ltd filed Critical Douyin Vision Co Ltd
Priority to CN202211009071.9A priority Critical patent/CN117670897A/en
Publication of CN117670897A publication Critical patent/CN117670897A/en
Pending legal-status Critical Current


Abstract

The method obtains a plurality of binary images by performing color segmentation on a target picture, inputs the binary images into a pre-trained classification model, and outputs an image category for each binary image, where the image categories include a background category and other categories. When a background category exists among the image categories of the binary images, the background color of the target picture is determined based on the binary image corresponding to the background category. The embodiments of the disclosure use the classification model to identify which of the input binary images belongs to the background category, and then determine the background color of the target picture from that binary image. Because the model classifies regions rather than colors, it does not need to be retrained when new colors are added, giving the method better universality.

Description

Background color extraction method, device, equipment, storage medium and program product
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method, an apparatus, a device, a storage medium, and a program product for extracting a background color.
Background
At present, many application scenarios involve extracting the background color of a picture. For example, some User Interface (UI) automation tasks related to text background color perform a background color assertion on a test screenshot; to perform the assertion, the background color of the screenshot must first be extracted.
In the related art, a color classification model is used to obtain the background color of the test screenshot. However, such a model has poor universality: whenever a new color appears in the test screenshots, a new color classification model must be retrained.
Disclosure of Invention
According to a first aspect of the present disclosure, there is provided a method for extracting a background color, including:
performing color segmentation on a target picture based on a plurality of preset color intervals to obtain a plurality of binary images corresponding to the plurality of color intervals one by one;
inputting the plurality of binary images into a pre-trained classification model, and outputting an image category corresponding to each binary image, wherein the classification model is used to identify whether a binary image belongs to a background category; the classification model takes a binary image as input and the category of a characteristic region in the binary image as output; the output of the classification model comprises the background category and other categories; and the characteristic region is the set of pixels falling into the corresponding color interval during the color segmentation; and
determining, when a background category exists among the image categories corresponding to the plurality of binary images, the background color of the target picture based on the binary image corresponding to the background category.
According to a second aspect of the present disclosure, there is provided an extraction device of background color, comprising:
the color segmentation module is configured to perform color segmentation on the target picture based on a plurality of preset color intervals to obtain a plurality of binary images corresponding to the plurality of color intervals one by one;
the image category determining module, configured to input the plurality of binary images into a pre-trained classification model and output an image category corresponding to each binary image, wherein the classification model is used to identify whether a binary image belongs to a background category; the classification model takes a binary image as input and the category of a characteristic region in the binary image as output; the output of the classification model comprises the background category and other categories; and the characteristic region is the set of pixels falling into the corresponding color interval during the color segmentation; and
the background color determining module, configured to determine, when a background category exists among the image categories corresponding to the plurality of binary images, the background color of the target picture based on the binary image corresponding to the background category.
According to a third aspect of the present disclosure, there is provided an electronic device comprising: a processor; and a memory storing a program, wherein the program comprises instructions that when executed by the processor cause the processor to perform the method according to the first aspect of the present disclosure.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method according to the first aspect of the present disclosure.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the method according to the first aspect of the present disclosure.
According to the technical solutions provided by the embodiments of the present disclosure, a plurality of binary images are obtained by performing color segmentation on a target picture; the binary images are input into a pre-trained classification model, which outputs an image category for each binary image, the image categories comprising a background category and other categories; and when a background category exists among the image categories, the background color of the target picture is determined based on the binary image corresponding to the background category. The embodiments of the present disclosure use the classification model to identify which of the input binary images belongs to the background category, and then determine the background color of the target picture from that binary image. Compared with the related art, which determines the background color through a color classification model, the embodiments of the present disclosure do not need to retrain the model when colors are added, and therefore have better universality.
Drawings
Further details, features and advantages of the present disclosure are disclosed in the following description of exemplary embodiments, with reference to the following drawings, wherein:
fig. 1 is a flowchart one of a background color extraction method provided according to an exemplary embodiment of the present disclosure;
fig. 2 is a schematic diagram of a target picture provided in an exemplary embodiment of the present disclosure;
fig. 3 to 6 are schematic diagrams of binary images provided in exemplary embodiments of the present disclosure;
FIG. 7 is a second flowchart of a background color extraction method provided in accordance with an exemplary embodiment of the present disclosure;
FIG. 8 is a flowchart III of a background color extraction method provided in accordance with an exemplary embodiment of the present disclosure;
fig. 9 is a schematic block diagram of an extraction apparatus of background color provided by an exemplary embodiment of the present disclosure;
FIG. 10 is a schematic block diagram of a color asserting device provided by an exemplary embodiment of the present disclosure;
FIG. 11 is a schematic block diagram of a chip provided by an exemplary embodiment of the present disclosure;
fig. 12 is a schematic block diagram of an electronic device provided by an exemplary embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the accompanying drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are open-ended, i.e., "including, but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Related definitions of other terms are given in the description below. It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and do not define an order or interdependence of the functions performed by them.
It should be noted that references to "a", "an", and "a plurality" in this disclosure are illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
In the related art, an automatic color assertion scheme asserts the background color of a test screenshot obtained by a UI automation tool: a color classification model is trained in advance, the test screenshot is input into the model, and the model outputs the background color of the screenshot. However, this classification model has poor universality and must be modified and retrained whenever a new color is added to the test screenshots.
In view of this, the embodiments of the present disclosure provide a method for extracting a background color. The method may be executed by a terminal device or a server; the terminal device may be a mobile phone, a tablet computer, a desktop computer, a notebook computer, or a personal digital assistant (PDA), and the server may be a color extraction server. The terminal device may interact with the server, for example through a software application on the terminal device, to implement the background color extraction function. The terminal device and the user may perform human-machine interaction through one or more of a keyboard, a touch screen, voice interaction, or handwriting, which is not limited in the present disclosure.
The method for extracting a background color in the embodiments of the present disclosure can be applied to a test scenario, such as extracting the background color from a test screenshot, or to a verification scenario, such as logo verification by a marketing department. Without being limited thereto, the method disclosed in the embodiments of the present disclosure may be used in any scenario requiring extraction of a background color.
The following description of the embodiments of the present disclosure will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present disclosure. Based on the embodiments in this disclosure, all other embodiments that a person of ordinary skill in the art would obtain without making any inventive effort are within the scope of protection of this disclosure.
Fig. 1 is a flowchart of a background color extraction method according to an exemplary embodiment of the present disclosure, as shown in fig. 1, including the following steps:
and S101, performing color segmentation on the target picture based on a plurality of preset color intervals to obtain binary images corresponding to the color intervals one by one.
Embodiments of the present disclosure do not limit what the target picture includes; for example, it may include text, patterns, etc., where the text or pattern lies on a background area whose color is the background color of the target picture. Embodiments of the present disclosure also do not limit the type of the target picture: it may be a screenshot generated by a test, such as one obtained through a UI automation tool, or a photograph taken by a camera, such as a certificate photograph of a person.
The embodiments of the present disclosure do not limit the specific manner of performing color segmentation on the target picture based on the preset color intervals. In one possible implementation, the preset color intervals are acquired first. For each of the color intervals, it is determined whether each pixel in the target picture falls into that interval; if so, the pixel is assigned a first pixel value, and if not, a second pixel value. After all pixels in the target picture are traversed, a binary image corresponding to that color interval is obtained. It will be appreciated that the binary image contains only two pixel values, the first and the second, which differ from each other. Traversing all the color intervals yields a plurality of binary images in one-to-one correspondence with the color intervals, completing the color segmentation of the target picture.
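The per-pixel traversal described above can be sketched in a few lines (a minimal pure-Python illustration; the function and variable names, and the use of HSV triples and these particular interval bounds, are this sketch's own assumptions, not the patent's):

```python
def binarize(pixels, lo, hi, first=255, second=0):
    """For each HSV pixel, emit `first` if it falls inside the closed
    interval [lo, hi] on all three channels, else `second`."""
    out = []
    for row in pixels:
        out.append([
            first if all(lo[c] <= px[c] <= hi[c] for c in range(3)) else second
            for px in row
        ])
    return out

# A 2x2 "picture": two pixels inside a green-like interval, two outside.
pic = [[(60, 200, 200), (0, 0, 0)],
       [(65, 180, 150), (120, 255, 255)]]
mask = binarize(pic, lo=(35, 43, 46), hi=(77, 255, 255))
print(mask)  # → [[255, 0], [255, 0]]
```

Traversing several intervals with the same picture would yield one such mask per interval, in one-to-one correspondence as the text describes.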
The embodiments of the present disclosure do not limit the specific values of the first and second pixel values. In one embodiment, the first pixel value may be set to 255 and the second pixel value to 0, in which case the obtained binary image is a black-and-white binary image in which pixels falling within the preset color interval are white and pixels outside it are black. In other embodiments, the first pixel value may be set to 0 and the second to 255; the result is likewise a black-and-white binary image, but with the assignment reversed: pixels falling within the preset color interval are black and pixels outside it are white.
The color segmentation in the embodiments of the present disclosure may be performed in the HSV (Hue, Saturation, Value) color space, a color space created according to the characteristics of human color vision. Compared with the RGB (Red, Green, Blue) color space, which is oriented toward hardware, the HSV color space is more intuitive for the user.
In the HSV color space, the color intervals may be preset by a user, the user may divide the colors in the HSV color space into a plurality of color intervals, each color interval corresponds to a common color, and it is understood that the embodiments of the present disclosure do not limit the number of color intervals, which depends on the segmentation requirements of the user.
In practical applications, the target picture is usually in RGB format. If the target picture is to be segmented in the HSV color space, its format can be converted from RGB to HSV before the color segmentation, which facilitates the segmentation.
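As a sketch of this conversion step, Python's standard `colorsys` module can compute HSV values; the scaling to an 8-bit, OpenCV-style convention (H in [0, 180), S and V in [0, 255]) is an assumption made here for illustration, not something the patent fixes:

```python
import colorsys

def rgb_to_hsv_opencv(r, g, b):
    """Convert 8-bit RGB to HSV scaled like OpenCV's 8-bit convention:
    H in [0, 180), S and V in [0, 255]. The numeric convention is an
    assumption; the patent does not specify one."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return round(h * 180), round(s * 255), round(v * 255)

# Pure red maps to hue 0 with full saturation and value.
print(rgb_to_hsv_opencv(255, 0, 0))  # → (0, 255, 255)
```

Each pixel of an RGB target picture would be passed through such a conversion before the interval tests of S101.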
In one example, the colors in the HSV color space are divided into 10 color intervals: black, gray, white, red, orange, yellow, green, cyan, blue, and purple. The value ranges of these 10 intervals in the hue (H), saturation (S), and value (V) channels are shown in Table 1.
Fig. 2 is a schematic diagram of a target picture according to an exemplary embodiment of the present disclosure. As shown in fig. 2, the target picture 20 includes text 201 and a background area 202, where the background area 202 is light green. The target picture 20 may further include a ground color region 203, formed when the screenshot is captured by the UI automation tool; the ground color region 203 is white.
Table 1: Value ranges of the different color intervals in the three channels H, S, and V
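The actual values of Table 1 are not reproduced in this text. Purely for illustration, a commonly used OpenCV-style HSV partition into the same 10 named intervals might look like the following; these bounds are NOT the patent's Table 1:

```python
# Illustrative only: the patent's Table 1 values are not reproduced here,
# so the ranges below are a widely circulated OpenCV-style HSV palette.
# Each entry is ((H_lo, S_lo, V_lo), (H_hi, S_hi, V_hi)).
COLOR_INTERVALS = {
    "black":  ((0,   0,   0),   (180, 255, 46)),
    "gray":   ((0,   0,   46),  (180, 43,  220)),
    "white":  ((0,   0,   221), (180, 30,  255)),
    "red":    ((156, 43,  46),  (180, 255, 255)),
    "orange": ((11,  43,  46),  (25,  255, 255)),
    "yellow": ((26,  43,  46),  (34,  255, 255)),
    "green":  ((35,  43,  46),  (77,  255, 255)),
    "cyan":   ((78,  43,  46),  (99,  255, 255)),
    "blue":   ((100, 43,  46),  (124, 255, 255)),
    "purple": ((125, 43,  46),  (155, 255, 255)),
}
print(len(COLOR_INTERVALS))  # → 10
```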
The target picture 20 shown in fig. 2 is color-segmented based on the above 10 color intervals, yielding binary images in one-to-one correspondence with them.
For the green interval, it is determined whether each pixel in the target picture 20 falls into the color interval corresponding to green, i.e., whether its H, S, and V values fall into the value ranges of the green interval; if so, the pixel is assigned the value 255, otherwise 0, finally yielding the black-and-white binary image shown in fig. 3. The white area in fig. 3, also called the highlight area, corresponds to the background area 202 of the target picture 20.
For the black interval, the same test is applied against the value ranges of the black interval, yielding the black-and-white binary image shown in fig. 4; its white (highlight) area corresponds to the text 201 of the target picture 20.
For the gray interval, the same test is applied against the value ranges of the gray interval, yielding the black-and-white binary image shown in fig. 5.
For the white interval, the same test is applied against the value ranges of the white interval, yielding the black-and-white binary image shown in fig. 6.
For the other color intervals, corresponding black-and-white binary images are obtained by the same process, which is not repeated here.
S102, inputting the plurality of binary images into the pre-trained classification model, and outputting the image category corresponding to each binary image. The classification model is used to identify whether a binary image belongs to the background category: it takes a binary image as input and the category of the characteristic region in the binary image as output, its output comprises the background category and other categories, and the characteristic region is the set of pixels that fell into the corresponding color interval during color segmentation.
The embodiments of the present disclosure do not limit the set of image categories output by the classification model. In one possible implementation, the model outputs three image categories: a background category, a text category, and other categories. In this implementation, the classification model identifies the category of the characteristic region in the binary image: when the characteristic region is identified as a background region, the image category output for that binary image is the background category; when it is identified as a text region, the output is the text category; and when it is identified as another kind of region, the output is the other category.
Continuing the foregoing example, the target picture 20 shown in fig. 2 is color-segmented based on the 10 color intervals listed in Table 1, resulting in 10 binary images in one-to-one correspondence with the intervals. The 10 binary images are input into the classification model, which outputs the image category of each. The binary images shown in figs. 3 to 6 are described below as examples.
The color interval corresponding to the binary image shown in fig. 3 is green; the white (highlight) region is its characteristic region, which is a background region. When this binary image is input into the classification model, the model outputs the background category.
The color interval corresponding to the binary image shown in fig. 4 is black; its characteristic region is a text region, so the model outputs the text category.
The color interval corresponding to the binary image shown in fig. 5 is gray; its characteristic region is also a text region, so the model outputs the text category.
The color interval corresponding to the binary image shown in fig. 6 is white; its characteristic region is another kind of region, so the model outputs the other category.
The embodiments of the present disclosure do not limit the manner in which the classification model is trained. In one possible implementation, the classification model is trained via a convolutional neural network. Before training, training samples are prepared; a sample may be a test screenshot. The test screenshot is color-segmented to obtain a plurality of binary images, and labels are added to the binary images: for example, if the characteristic region of a binary image is text, the label "text" is added; if the characteristic region is the background, the label "background" is added. A plurality of labeled binary image samples is thus obtained, and these labeled samples are input into a convolutional neural network for training to obtain the classification model.
The classification model in the embodiment of the disclosure identifies the category of the feature region in the binary image instead of the color, so that when a new color is added in the target image, the model does not need to be retrained, i.e., the classification model in the embodiment of the disclosure has better universality compared with the color classification model in the related art.
S103, when a background category exists among the image categories corresponding to the binary images, determining the background color of the target picture based on the binary image corresponding to the background category.
The embodiment of the disclosure is not limited to a specific manner of determining the background color of the target picture based on the binary image corresponding to the background category, and in one possible implementation, as shown in fig. 7, the determining the background color of the target picture based on the binary image corresponding to the background category includes the following steps:
S701, obtaining the confidence of each binary image corresponding to the background category.
The confidence is the probability value given by the classification model that a binary image belongs to a certain category; the higher the confidence, the more likely the binary image belongs to that category. It will be appreciated that there may be one or more binary images corresponding to the background category.
S702, determining the binary image with the highest confidence as the target binary image.
The binary image with the highest confidence is the one most likely to belong to the background category, and it is therefore taken as the target binary image. It is understood that the characteristic region of the target binary image is a background region.
S703, determining the background color of the target picture based on the target binary image.
The characteristic region of the target binary image is the background region of that binary image, and it corresponds to the background area in the target picture; therefore the background area of the target picture, and in turn its background color, can be determined from the target binary image.
It should be noted that this embodiment applies when the background area of the target picture belongs to a single color interval, so that the background area is not split during color segmentation; that is, the background area of the target picture corresponds to the characteristic region of exactly one binary image, namely the target binary image.
When the background area of the target picture spans several color intervals, it is split into multiple regions during color segmentation; that is, the background area corresponds to the characteristic regions of several binary images. In this case, determining the background color of the target picture based on the binary images corresponding to the background category comprises: obtaining the confidence of each binary image corresponding to the background category; determining the binary images whose confidence is greater than or equal to a first threshold as target binary images; and determining the background color of the target picture based on the target binary images. The embodiments of the present disclosure do not limit the magnitude of the first threshold; it may be, for example, 80%, 85%, 90%, or 95%.
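The selection logic of S701 to S703 and the multi-interval variant above can be sketched as follows (the function name, the tuple layout of the classifier results, and the 0.9 default threshold are illustrative assumptions, not the patent's):

```python
def select_background_masks(results, first_threshold=0.9):
    """results: list of (mask_id, category, confidence) tuples from the
    classifier. Returns the background masks to use: all those at or
    above the threshold when several intervals share the background,
    otherwise the single most confident background mask (S702)."""
    bg = [r for r in results if r[1] == "background"]
    if not bg:
        return []
    above = [r[0] for r in bg if r[2] >= first_threshold]
    if above:
        return above
    # Fall back to the single highest-confidence background mask.
    return [max(bg, key=lambda r: r[2])[0]]

results = [("green", "background", 0.97),
           ("black", "text", 0.99),
           ("white", "other", 0.88)]
print(select_background_masks(results))  # → ['green']
```

When no background category exists at all, the empty result would route the flow to the grayscale fallback described later (S901 to S903).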
The embodiment of the disclosure is not limited to a specific manner of determining the background color of the target picture according to the target binary image, and in one possible implementation, as shown in fig. 8, the determining the background color of the target picture based on the target binary image includes the following steps:
s801, determining a mask image corresponding to a characteristic region of the target binary image.
In this step, a mask image is constructed in which the pixel value of the region corresponding to the feature region of the target binary image is 1 and the pixel value of the other regions in the mask image is 0.
S802, fusing the mask image and the target picture to obtain a background area of the target picture.
In this step, the mask image is fused with the target picture to obtain a new image. The new image comprises only two areas: one corresponds to the background area of the target picture and has the same color as that background, and the other corresponds to the remaining area of the target picture and is black.
S803, extracting the color of the background area to obtain the background color.
In this step, the color of the background area of the target picture is extracted, and the background color of the target picture is obtained.
Specifically, a color histogram of the new image is constructed; the color other than black in the histogram is the background color of the target picture. In this way the background color is obtained simply and quickly, improving extraction efficiency.
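Steps S801 to S803 can be sketched together: keep only the pixels selected by the mask, then read off the dominant histogram bin. This is a pure-Python sketch; the function name and the representation of the picture as a grid of RGB tuples are assumptions made here:

```python
from collections import Counter

def extract_background_color(picture, mask):
    """Fuse a 0/1 mask with the picture (S802) and take the most common
    color among the kept pixels, i.e. the dominant non-black bin of the
    fused image's color histogram (S803)."""
    kept = [px for row_p, row_m in zip(picture, mask)
            for px, m in zip(row_p, row_m) if m == 1]
    histogram = Counter(kept)  # color histogram of the fused image
    return histogram.most_common(1)[0][0]

pic = [[(200, 255, 200), (0, 0, 0)],
       [(200, 255, 200), (200, 255, 200)]]
mask = [[1, 0], [1, 1]]
print(extract_background_color(pic, mask))  # → (200, 255, 200)
```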
In one embodiment, after the plurality of binary images is obtained, the binary images may be further filtered: any binary image whose characteristic parameter is smaller than a second threshold is removed, where the characteristic parameter is one of the following: the total area of the pixels in the characteristic region of the binary image, the number of pixels in the characteristic region, or the sum of the pixel values in the characteristic region. The purpose is to filter out binary images whose characteristic regions are small: since the background area in the target picture is large, binary images with small characteristic regions cannot correspond to it and can be discarded in advance, making the subsequent model classification more targeted and improving classification accuracy.
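The filtering step could look like the following, using the pixel-count variant of the characteristic parameter (a hedged sketch; the function name, the dict layout, and the threshold value are illustrative):

```python
def filter_masks(masks, second_threshold):
    """Drop binary images whose characteristic region (count of
    255-valued pixels here) is below the threshold; the background
    region is expected to be large, so small regions cannot be it."""
    kept = {}
    for name, mask in masks.items():
        size = sum(px == 255 for row in mask for px in row)
        if size >= second_threshold:
            kept[name] = mask
    return kept

masks = {"green": [[255, 255], [255, 0]],  # 3 foreground pixels
         "red":   [[0, 0], [255, 0]]}      # 1 foreground pixel
print(sorted(filter_masks(masks, second_threshold=2)))  # → ['green']
```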
In one embodiment, as shown in fig. 9, when no background category exists in the image categories corresponding to the plurality of binary images, the method for extracting the background color further includes the following steps:
S901, performing graying processing on the target picture to obtain a grayscale image.
The embodiments of the present disclosure do not limit the specific manner of graying processing; for example, OpenCV may be used to convert the target picture to grayscale.
In one possible implementation, if the target picture is in RGB format, the format of the target picture may be converted to HSV format before the graying process is performed.
In one possible embodiment, the target picture is cropped before the graying processing to remove the ground color area at the edge of the target picture. Referring to fig. 2, the ground color region 203 of the target picture 20 is removed. This ensures that the maximum connected domain obtained subsequently is the background region of the target picture, which improves the accuracy of background color extraction.
S902, binarizing the gray level image to obtain a binary image corresponding to the gray level image.
The embodiments of the present disclosure do not limit the specific manner of binarization processing; in one possible implementation, the grayscale image is binarized using the Otsu threshold segmentation algorithm to obtain the binary image.
S903, determining the background color of the target picture based on the maximum connected domain in the binary image corresponding to the gray level image.
In this step, the maximum connected domain in the binary image corresponding to the grayscale image is determined, and the color contained in the maximum connected domain is then extracted to obtain the background color of the target picture.
The embodiments of the present disclosure do not limit the method of computing the maximum connected domain; in one possible implementation, the maximum connected domain is determined using the connected component analysis function connectedComponentsWithStats in OpenCV.
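Steps S901 to S903 can be sketched end to end as follows. To keep the example self-contained and runnable, pure-NumPy stand-ins are used for the OpenCV calls mentioned above (cv2.threshold with THRESH_OTSU and connectedComponentsWithStats); the function names and the synthetic grayscale image are assumptions:

```python
import numpy as np
from collections import deque

def otsu_threshold(gray):
    """Minimal Otsu threshold (stand-in for cv2.threshold with
    cv2.THRESH_OTSU): pick the level maximizing between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    cum = np.cumsum(hist)
    cum_mean = np.cumsum(hist * np.arange(256))
    best_t, best_var = 0, -1.0
    for t in range(255):
        w0, w1 = cum[t], total - cum[t]
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_mean[t] / w0
        m1 = (cum_mean[-1] - cum_mean[t]) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def largest_connected_region(binary):
    """Largest 4-connected True region (stand-in for OpenCV's
    connectedComponentsWithStats), found by breadth-first search."""
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    best = np.zeros((h, w), dtype=bool)
    best_size = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and not seen[sy, sx]:
                component, queue = [], deque([(sy, sx)])
                seen[sy, sx] = True
                while queue:
                    y, x = queue.popleft()
                    component.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                if len(component) > best_size:
                    best_size = len(component)
                    best = np.zeros((h, w), dtype=bool)
                    for y, x in component:
                        best[y, x] = True
    return best

# S901: the "grayscale image" is synthesized directly here -- a bright
# background (value 200) surrounding a small dark object (value 30).
gray = np.full((6, 6), 200, dtype=np.uint8)
gray[2:4, 2:4] = 30
binary = gray > otsu_threshold(gray)            # S902: binarization
background = largest_connected_region(binary)   # S903: maximum connected domain
print(int(gray[background][0]))                 # -> 200 (the background color)
```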
Steps S901 to S903 constitute a computer vision processing method that is executed only when the classification model does not identify any binary image of the background category. It should be noted that, in the embodiments of the present disclosure, the probability that the classification model identifies a background-category image is very high, so the computer vision processing method is executed relatively rarely.
If the computer vision processing method were used to process all target pictures, the extraction efficiency would be lower, and more constraints would be placed on the target pictures; for example, a target picture would need to be cropped before extraction to ensure that its maximum connected domain is the background area, so that the background area can be determined successfully. This embodiment combines the classification model with the computer vision processing method: on the one hand, it overcomes the poor generality of the color classification model in the related art; on the other hand, it avoids the cumbersome extraction process and the excessive restrictions on the target picture imposed by the computer vision processing method. The background color extraction method provided by this embodiment therefore has good generality: the model does not need to be retrained when new colors appear in target pictures, and the target pictures are not excessively restricted.
According to the background color extraction method provided by the embodiments of the present disclosure, a target picture is color-segmented to obtain a plurality of binary images; a binary image whose image category is the background category is then identified among the plurality of binary images using the classification model, and the background color of the target picture is obtained based on that binary image. The embodiments of the present disclosure use the classification model to classify the categories of binary images; compared with the related-art technique of determining the background color through a color classification model, the model does not need to be retrained when colors are added, so the method has better generality.
In one embodiment, the method further comprises the steps of: acquiring an expected background color of a target picture; comparing the expected background color of the target picture with the extracted background color to obtain a color comparison result; and determining a color assertion result of the target picture according to the color comparison result.
A color assertion determines whether the actual background color of the target picture, obtained by the background color extraction method described in the foregoing embodiments, is similar to or consistent with the expected background color.
In one embodiment, comparing the expected background color with the extracted background color to obtain a color comparison result comprises the following steps: calculating a color distance between the expected background color and the extracted background color; and obtaining the color comparison result based on the color distance. Specifically, the Manhattan distance between the expected background color and the actual background color may be calculated and then normalized to obtain the color comparison result, which may specifically be a color similarity.
For example, assuming that the R, G, and B values of the expected background color are a1, a2, and a3 respectively, and the R, G, and B values of the extracted background color are d1, d2, and d3 respectively, the color similarity S between the expected background color and the extracted background color can be calculated using the following formula (1).
In one embodiment, when the color assertion result is determined based on the color comparison result, a threshold may be preset, and the color comparison result is compared against the threshold to determine the assertion result. For example, if the color comparison result is a color similarity, the expected background color is asserted to be dissimilar to the actual background color when the similarity is less than the threshold, and similar when the similarity is greater than or equal to the threshold. It will be appreciated that different business scenarios may set different thresholds.
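A minimal sketch of the color comparison and assertion described above; since formula (1) is not reproduced in this text, the exact normalization (one minus the Manhattan distance divided by its maximum value 3 x 255) and the default threshold are assumptions:

```python
def color_similarity(expected, actual):
    """Similarity between two RGB colors via a normalized Manhattan
    distance; the 1 - d/(3*255) normalization is an assumption, since
    the patent's formula (1) is not reproduced here."""
    distance = sum(abs(e - a) for e, a in zip(expected, actual))
    return 1.0 - distance / (3 * 255)

def color_assertion(expected, actual, threshold=0.95):
    """Assert 'similar' iff the similarity reaches the preset threshold;
    different business scenarios may choose different thresholds."""
    return color_similarity(expected, actual) >= threshold

print(color_assertion((255, 0, 0), (250, 5, 0)))   # -> True (nearly red)
print(color_assertion((255, 0, 0), (0, 0, 255)))   # -> False
```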
The foregoing description of the solution provided by the embodiments of the present disclosure has been mainly presented from the perspective of a server. It will be appreciated that the server, in order to implement the above-described functions, includes corresponding hardware structures and/or software modules that perform the respective functions. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The embodiments of the present disclosure may divide functional units of a server according to the above method examples, for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated modules may be implemented in hardware or in software functional modules. It should be noted that, in the embodiment of the present disclosure, the division of the modules is merely a logic function division, and other division manners may be implemented in actual practice.
In the case of dividing each functional module by corresponding each function, exemplary embodiments of the present disclosure provide an extraction apparatus of a background color, which may be a server or a chip applied to the server. Fig. 10 is a schematic block diagram of a background color extraction apparatus provided according to an exemplary embodiment of the present disclosure. As shown in fig. 10, the background color extraction apparatus 1000 includes:
the color segmentation module 1001 is configured to perform color segmentation on the target picture based on a plurality of preset color intervals, so as to obtain a plurality of binary images corresponding to the plurality of color intervals one by one;
the image class determining module 1002 is configured to input a plurality of binary images into a pre-trained classification model, and output an image class corresponding to each binary image; the classification model is used for identifying whether the binary image belongs to a background category, the classification model takes the binary image as input, the category of a characteristic region in the binary image as output, the output of the classification model comprises the background category and other categories, and the characteristic region is a set of pixels falling into a corresponding color interval during color segmentation;
the background color determination module 1003 is configured to determine, in response to the existence of a background category in the image categories corresponding to the plurality of binary images, a background color of the target picture based on the binary image corresponding to the background category.
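The behavior of the color segmentation module 1001 can be sketched as follows; the color intervals shown are illustrative assumptions, and the NumPy comparison is a stand-in for an OpenCV call such as cv2.inRange:

```python
import numpy as np

def color_segment(picture, color_intervals):
    """One black-and-white binary image per preset color interval: the
    white feature region is the set of pixels whose channels all fall
    inside the interval (a stand-in for cv2.inRange)."""
    masks = []
    for lower, upper in color_intervals:
        inside = np.all((picture >= lower) & (picture <= upper), axis=-1)
        masks.append(np.where(inside, 255, 0).astype(np.uint8))
    return masks

# 2x2 RGB picture: two red pixels on top, two blue pixels below.
pic = np.array([[[255, 0, 0], [255, 0, 0]],
                [[0, 0, 255], [0, 0, 255]]], dtype=np.uint8)
intervals = [((200, 0, 0), (255, 60, 60)),   # illustrative "red" interval
             ((0, 0, 200), (60, 60, 255))]   # illustrative "blue" interval
red_mask, blue_mask = color_segment(pic, intervals)
print(int(red_mask.sum() // 255), int(blue_mask.sum() // 255))   # -> 2 2
```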
In one possible implementation, in response to the absence of a background category in the image categories corresponding to the plurality of binary images, the background color determination module 1003 is further configured to:
carrying out graying treatment on the target picture to obtain a gray image;
performing binarization processing on the gray level image to obtain a binary image corresponding to the gray level image;
and determining the background color of the target picture based on the maximum connected domain in the binary image corresponding to the gray level image.
In one possible implementation, the background color determination module 1003 is further configured to:
acquiring the confidence coefficient of the binary image corresponding to the background category;
determining the binary image with the highest confidence as a target binary image;
and determining the background color of the target picture based on the target binary image.
In one possible implementation, the background color determination module 1003 is further configured to:
acquiring confidence degrees corresponding to binary images corresponding to background categories;
determining a plurality of binary images with confidence degrees larger than or equal to a first threshold value as target binary images;
and determining the background color of the target picture based on the target binary image.
In one possible implementation, the background color determination module 1003 is further configured to:
Determining a mask image corresponding to a characteristic region of the target binary image;
fusing the mask image with the target picture to obtain a background area of the target picture;
and extracting the color of the background area to obtain the background color.
In one possible implementation, the plurality of binary images are black-and-white binary images, and the characteristic areas of the plurality of binary images are white. The apparatus 1000 further includes a screening module configured to:
screen the plurality of binary images and remove the binary images whose characteristic parameter is smaller than a second threshold value; wherein the characteristic parameter includes one of the following: the sum of the areas of the pixels included in the characteristic area of the binary image, the sum of the numbers of the pixels included in the characteristic area of the binary image, and the sum of the pixel values of the pixels included in the characteristic area of the binary image.
In one possible implementation, the apparatus 1000 further includes a color asserting module configured to:
acquiring an expected background color of a target picture;
comparing the expected background color of the target picture with the background color to obtain a color comparison result;
And determining a color assertion result of the target picture according to the color comparison result.
Fig. 11 is a schematic block diagram of a chip provided according to an exemplary embodiment of the present disclosure. As shown in fig. 11, the chip 1100 includes one or more (including two) processors 1101 and a communication interface 1102. The communication interface 1102 may support the server in performing the data transceiving steps of the background color extraction method described above, and the processor 1101 may support the server in performing the data processing steps of that method.
Optionally, as shown in fig. 11, the chip 1100 further includes a memory 1103, where the memory 1103 may include a read only memory and a random access memory, and provides operating instructions and data to the processor. A portion of the memory may also include non-volatile random access memory (non-volatile random access memory, NVRAM).
In some embodiments, as shown in fig. 11, the processor 1101 performs the corresponding operations by invoking operating instructions stored in the memory (which may be stored in an operating system). The processor 1101 controls the processing operations of the terminal device and may also be referred to as a central processing unit (central processing unit, CPU). The memory 1103 may include read-only memory and random access memory, and provides instructions and data to the processor 1101; a portion of the memory 1103 may also include NVRAM. The processor, the communication interface, and the memory are coupled together by a bus system, which may include a power bus, a control bus, a status signal bus, and the like in addition to a data bus. For clarity of illustration, the various buses are labeled as bus system 1104 in fig. 11.
The methods disclosed in the embodiments of the present disclosure may be applied to, or implemented by, a processor. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above methods may be performed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general purpose processor, a digital signal processor (digital signal processing, DSP), an ASIC, a field-programmable gate array (field-programmable gate array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the various methods, steps, and logic blocks disclosed in the embodiments of the present disclosure. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed in connection with the embodiments of the present disclosure may be embodied directly in hardware, in a decoding processor, or in a combination of hardware and software modules in a decoding processor. The software modules may be located in a storage medium well known in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or registers. The storage medium is located in a memory, and the processor reads the information in the memory and performs the steps of the above methods in combination with its hardware.
The exemplary embodiments of the present disclosure also provide an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor. The memory stores a computer program executable by the at least one processor for causing the electronic device to perform a method according to embodiments of the present disclosure when executed by the at least one processor.
The present disclosure also provides a non-transitory computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor of a computer, is for causing the computer to perform a method according to an embodiment of the present disclosure.
The present disclosure also provides a computer program product comprising a computer program, wherein the computer program, when executed by a processor of a computer, is for causing the computer to perform a method according to an embodiment of the present disclosure.
With reference to fig. 12, a block diagram of an electronic device 1200 that may be a server or a client of the present disclosure, which is an example of a hardware device that may be applied to aspects of the present disclosure, will now be described. Electronic devices are intended to represent various forms of digital electronic computer devices, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 12, the electronic device 1200 includes a computing unit 1201 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1202 or a computer program loaded from a storage unit 1208 into a Random Access Memory (RAM) 1203. In the RAM 1203, various programs and data required for the operation of the device 1200 may also be stored. The computing unit 1201, the ROM 1202, and the RAM 1203 are connected to each other via a bus 1204. An input/output (I/O) interface 1205 is also connected to the bus 1204.
Various components in the electronic device 1200 are connected to the I/O interface 1205, including: an input unit 1206, an output unit 1207, a storage unit 1208, and a communication unit 1209. The input unit 1206 may be any type of device capable of inputting information to the electronic device 1200; it may receive input numeric or character information and generate key signal inputs related to user settings and/or function controls of the electronic device. The output unit 1207 may be any type of device capable of presenting information, and may include, but is not limited to, a display, speakers, video/audio output terminals, vibrators, and/or printers. The storage unit 1208 may include, but is not limited to, magnetic disks and optical disks. The communication unit 1209 allows the electronic device 1200 to exchange information/data with other devices over computer networks, such as the internet, and/or various telecommunications networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers and/or chipsets, such as Bluetooth (TM) devices, WiFi devices, WiMax devices, cellular communication devices, and/or the like.
The computing unit 1201 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 1201 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The computing unit 1201 performs the various methods and processes described above. For example, in some embodiments, the foregoing method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 1208. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 1200 via the ROM 1202 and/or the communication unit 1209. In some embodiments, the computing unit 1201 may be configured to perform the aforementioned methods by any other suitable means (e.g., by means of firmware).
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer programs or instructions. When the computer program or instructions are loaded and executed on a computer, the processes or functions of the embodiments of the present disclosure are performed in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, a terminal, a user equipment, or other programmable apparatus. The computer program or instructions may be stored in or transmitted from one computer readable storage medium to another, for example, by wired or wireless means from one website site, computer, server, or data center. Computer readable storage media can be any available media that can be accessed by a computer or data storage devices such as servers, data centers, etc. that integrate one or more available media. Usable media may be magnetic media such as floppy disks, hard disks, magnetic tape; optical media, such as digital video discs (digital video disc, DVD); but also semiconductor media such as solid state disks (solid state drive, SSD).
Although the present disclosure has been described in connection with specific features and embodiments thereof, it will be apparent that various modifications and combinations thereof can be made without departing from the spirit and scope of the disclosure. Accordingly, the specification and drawings are merely exemplary illustrations of the present disclosure as defined in the appended claims and are considered to cover any and all modifications, variations, combinations, or equivalents within the scope of the disclosure. It will be apparent to those skilled in the art that various modifications and variations can be made to the present disclosure without departing from the spirit or scope of the disclosure. Thus, the present disclosure is intended to include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (11)

1. A method for extracting background color, comprising:
performing color segmentation on a target picture based on a plurality of preset color intervals to obtain a plurality of binary images corresponding to the plurality of color intervals one by one;
inputting the plurality of binary images into a pre-trained classification model, and outputting an image category corresponding to each binary image; the classification model is used for identifying whether the binary image belongs to a background category, the classification model takes the binary image as input and takes the category of a characteristic region in the binary image as output, the output of the classification model comprises the background category and other categories, and the characteristic region is a set of pixels falling into a corresponding color interval during the color segmentation;
And determining the background color of the target picture based on the binary image corresponding to the background category when the background category exists in the image categories corresponding to the plurality of binary images.
2. The method of claim 1, wherein in response to the absence of a background category in the image categories corresponding to the plurality of binary images, the method further comprises:
carrying out graying treatment on the target picture to obtain a gray image;
performing binarization processing on the gray level image to obtain a binary image corresponding to the gray level image;
and determining the background color of the target picture based on the maximum connected domain in the binary image corresponding to the gray level image.
3. The method of claim 1, wherein determining the background color of the target picture based on the binary image corresponding to the background category comprises:
acquiring the confidence coefficient of the binary image corresponding to the background category;
determining the binary image with the highest confidence as a target binary image;
and determining the background color of the target picture based on the target binary image.
4. The method of claim 1, wherein determining the background color of the target picture based on the binary image corresponding to the background category comprises:
Acquiring the confidence coefficient corresponding to the binary image corresponding to the background category;
determining a plurality of binary images with confidence degrees larger than or equal to a first threshold value as target binary images;
and determining the background color of the target picture based on the target binary image.
5. The method of claim 3 or 4, wherein determining the background color of the target picture based on the target binary image comprises:
determining a mask image corresponding to a characteristic region of the target binary image;
fusing the mask image with the target picture to obtain a background area of the target picture;
and extracting the color of the background area to obtain the background color.
6. The method of claim 1, wherein prior to inputting the plurality of binary images into a pre-trained classification model, the method further comprises:
screening the plurality of binary images, and removing the binary images with characteristic parameters smaller than a second threshold value from the plurality of binary images; wherein the characteristic parameter includes one of the following: the sum of areas of pixels included in the feature area of the binary image, the sum of numbers of pixels included in the feature area of the binary image, and the sum of pixel values of pixels included in the feature area of the binary image.
7. The method of any one of claims 1-4, 6, wherein the method further comprises:
acquiring an expected background color of the target picture;
comparing the expected background color of the target picture with the background color to obtain a color comparison result;
and determining a color assertion result of the target picture according to the color comparison result.
8. A background color extraction device, comprising:
the color segmentation module is configured to perform color segmentation on the target picture based on a plurality of preset color intervals to obtain a plurality of binary images corresponding to the plurality of color intervals one by one;
the image category determining module is configured to input the plurality of binary images into a pre-trained classification model and output an image category corresponding to each binary image; the classification model is used for identifying whether the binary image belongs to a background category, the classification model takes the binary image as input and takes the category of a characteristic region in the binary image as output, the output of the classification model comprises the background category and other categories, and the characteristic region is a set of pixels falling into a corresponding color interval during the color segmentation;
And the background color determining module is configured to determine the background color of the target picture based on the binary image corresponding to the background category when the background category exists in the image categories corresponding to the binary images.
9. An electronic device, comprising:
a processor; and
a memory in which a program is stored,
wherein the program comprises instructions which, when executed by the processor, cause the processor to perform the method according to any of claims 1-7.
10. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-7.
11. A computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the method of any of claims 1-7.
CN202211009071.9A 2022-08-22 2022-08-22 Background color extraction method, device, equipment, storage medium and program product Pending CN117670897A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211009071.9A CN117670897A (en) 2022-08-22 2022-08-22 Background color extraction method, device, equipment, storage medium and program product


Publications (1)

Publication Number Publication Date
CN117670897A true CN117670897A (en) 2024-03-08

Family

ID=90071733

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211009071.9A Pending CN117670897A (en) 2022-08-22 2022-08-22 Background color extraction method, device, equipment, storage medium and program product

Country Status (1)

Country Link
CN (1) CN117670897A (en)

Similar Documents

Publication Publication Date Title
WO2019169772A1 (en) Picture processing method, electronic apparatus, and storage medium
WO2017121018A1 (en) Method and apparatus for processing two-dimensional code image, and terminal and storage medium
JP7051267B2 (en) Image detection methods, equipment, electronic equipment, storage media, and programs
EP3493101A1 (en) Image recognition method, terminal, and nonvolatile storage medium
US20150039637A1 (en) Systems Apparatus and Methods for Determining Computer Apparatus Usage Via Processed Visual Indicia
WO2020253127A1 (en) Facial feature extraction model training method and apparatus, facial feature extraction method and apparatus, device, and storage medium
WO2020082731A1 (en) Electronic device, credential recognition method and storage medium
WO2020253508A1 (en) Abnormal cell detection method and apparatus, and computer readable storage medium
US11341739B2 (en) Image processing device, image processing method, and program recording medium
US20150010233A1 (en) Method Of Improving Contrast For Text Extraction And Recognition Applications
CN110390327B (en) Foreground extraction method and device, computer equipment and storage medium
CN112396050B (en) Image processing method, device and storage medium
CN110582783A (en) Training device, image recognition device, training method, and program
WO2020248848A1 (en) Intelligent abnormal cell determination method and device, and computer readable storage medium
CN111460355A (en) Page parsing method and device
JP2019220014A (en) Image analyzing apparatus, image analyzing method and program
CN110895811A (en) Image tampering detection method and device
TWI671686B (en) Image data retrieving method and image data retrieving device
CN112883762A (en) Living body detection method, device, system and storage medium
CN117670897A (en) Background color extraction method, device, equipment, storage medium and program product
CN110147765A (en) A kind of image processing method and device
EP2919149A2 (en) Image processing apparatus and image processing method
JP2012003358A (en) Background determination device, method, and program
CN112288045B (en) Seal authenticity distinguishing method
TWI775038B (en) Method and device for recognizing character and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination