CN113763486B - Dominant hue extraction method, device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113763486B
CN113763486B (application CN202010485775.8A)
Authority
CN
China
Prior art keywords
image
pixel point
color value
pixel
preset number
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010485775.8A
Other languages
Chinese (zh)
Other versions
CN113763486A (en)
Inventor
杨鼎超
刘易周
汪洋
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202010485775.8A priority Critical patent/CN113763486B/en
Priority to PCT/CN2020/127558 priority patent/WO2021243955A1/en
Publication of CN113763486A publication Critical patent/CN113763486A/en
Application granted granted Critical
Publication of CN113763486B publication Critical patent/CN113763486B/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/90 - Determination of colour characteristics
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10004 - Still image; Photographic image

Abstract

The disclosure relates to a dominant hue extraction method, a dominant hue extraction device, electronic equipment and a storage medium, and belongs to the technical field of computers. The method comprises the following steps: performing the following operation simultaneously for each first pixel point: obtaining the mixed color value of a reference pixel point, which is any pixel point in the first image, according to the distance between the reference pixel point and the first pixel point, the color value of the reference pixel point and the color value of the first pixel point, which improves the efficiency of processing the image; processing the first image according to the mixed color value of each first pixel point in the first image to generate a second image, so that the color values of the pixel points in the second image are more uniform; and extracting the color value of at least one pixel point from the second image and determining it as the dominant hue of the first image. Because the extracted dominant hue accords with the characteristics of the first image, the accuracy of dominant hue extraction is improved, processing time is saved, and the efficiency of dominant hue extraction is improved.

Description

Dominant hue extraction method, device, electronic equipment and storage medium
Technical Field
The disclosure relates to the field of computer technology, and in particular, to a dominant hue extraction method, a dominant hue extraction device, electronic equipment and a storage medium.
Background
With the rapid development of computer technology, more and more image processing methods are available, and a common image processing method is to extract the dominant color tone in the image. Since the colors of the pixels in the image are different, how to extract the dominant hue in the image is a problem to be solved.
In the related art, a CPU acquires a target image, traverses each pixel point in the target image, extracts the color feature value of each pixel point, determines from these values the color feature value shared by the largest number of pixel points, and takes the color corresponding to that value as the dominant hue of the target image. However, this method requires traversing every pixel point in the target image, so the processing time is long and the efficiency of extracting the dominant hue is low.
Disclosure of Invention
The disclosure provides a dominant hue extraction method, a dominant hue extraction device, electronic equipment and a storage medium, which enable the extracted dominant hue to better conform to the characteristics of the first image, improve the accuracy of dominant hue extraction, save processing time, and improve the efficiency of dominant hue extraction.
According to a first aspect of embodiments of the present disclosure, there is provided a dominant hue extraction method, the method comprising:
Acquiring a first image, wherein the first image comprises a plurality of first pixel points;
performing the following operation simultaneously for each first pixel point: obtaining the mixed color value of a reference pixel point according to the distance between the reference pixel point and the first pixel point, the color value of the reference pixel point and the color value of the first pixel point, wherein the reference pixel point is any pixel point in the first image;
processing the first image according to the color value of each first pixel point in the first image after mixing to generate a second image;
and extracting the color value of at least one pixel point from the second image, and determining the color value of the at least one pixel point as the dominant hue of the first image.
In one possible implementation, the following operations are performed simultaneously for each first pixel point: according to the distance between the reference pixel point and the first pixel point, the color value of the reference pixel point and the color value of the first pixel point, the color value of the reference pixel point after mixing is obtained, wherein the reference pixel point is any pixel point in the first image, and the method comprises the following steps:
the following operations are simultaneously executed for each first pixel point: according to the distance between the reference pixel point and the first pixel point, respectively mixing the color value of the first pixel point with the color value of the reference pixel point, and taking the color value after mixing the first pixel point as the mixed color value of the first pixel point relative to the reference pixel point;
And taking the sum of the mixed color values of each first pixel point relative to the reference pixel point as the color value of the reference pixel point after mixing.
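The mixing described above can be sketched as follows. This is a minimal Python illustration, not the patented implementation: the inverse-distance weight and the normalization are assumptions (the disclosure only requires that the mixing depend on the distance between the reference pixel point and each first pixel point), and the function name is illustrative.

```python
def mix_colors(pixels):
    # pixels: list of (x, y, (r, g, b)) first pixel points.
    # For each reference pixel point, blend every first pixel point's
    # color with a weight that decays with distance, then normalize.
    # The 1 / (1 + d) weight is an assumption, not from the patent.
    mixed = []
    for rx, ry, _ in pixels:               # each pixel acts as reference
        weights = [1.0 / (1.0 + ((rx - px) ** 2 + (ry - py) ** 2) ** 0.5)
                   for px, py, _ in pixels]
        total = sum(weights)
        mixed.append(tuple(
            sum(w * c[ch] for w, (_, _, c) in zip(weights, pixels)) / total
            for ch in range(3)))
    return mixed
```

In the disclosure the per-reference sums are computed for all reference pixel points simultaneously (for example on a GPU); the outer loop here is only for clarity.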
In another possible implementation, the acquiring the first image includes:
dividing the third image into a first preset number of image areas, wherein the size of each image area is the same;
and simultaneously, carrying out downsampling on the first preset number of image areas to obtain the first preset number of first pixel points, and forming the first image by the first preset number of first pixel points.
In another possible implementation manner, the simultaneously performing downsampling processing on the first preset number of image areas to obtain the first preset number of first pixel points, and forming the first image by using the first preset number of first pixel points includes:
meanwhile, according to the color value of the pixel point included in each image area, determining the color value of each image area as the color value of the first pixel point corresponding to each image area;
and creating a first image containing the plurality of first pixel points according to the determined color values of the plurality of first pixel points.
In another possible implementation manner, the determining, according to the color value of the pixel point included in each image area, the color value of each image area as the color value of the first pixel point corresponding to each image area includes:
simultaneously acquiring an average value of color values of pixel points in each image area;
and taking the average value of the color values of the pixel points in each image area as the color value of the first pixel point corresponding to each image area in the first image.
In another possible implementation manner, the dividing the third image into the first preset number of image areas includes:
determining a first size of the divided image areas according to the size of the third image and the first preset number;
and dividing an image area meeting the first size from the third image according to the first size.
In another possible implementation manner, the extracting the color value of at least one pixel point from the second image includes:
dividing the second image into a second preset number of image areas, wherein the second preset number of image areas have the same size;
extracting any pixel point from each image area in the second preset number of image areas to obtain the second preset number of pixel points;
And extracting the color values of the second preset number of pixel points.
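The extraction of one pixel point per image area can be illustrated roughly as below. The choice of the centre pixel is an assumption; the claim allows any pixel point in each area, and the row/column layout and function name are illustrative.

```python
def sample_region_pixels(image, rows, cols):
    # image: 2D list (height x width) of color values. Divide it into
    # rows * cols equal areas and take the centre pixel of each area
    # (any pixel in the area would satisfy the claim; the centre is
    # an assumption made for this sketch).
    h, w = len(image), len(image[0])
    rh, cw = h // rows, w // cols
    samples = []
    for i in range(rows):
        for j in range(cols):
            samples.append(image[i * rh + rh // 2][j * cw + cw // 2])
    return samples
```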
In another possible implementation manner, the dividing the second image into a second preset number of image areas includes:
determining a second size of the divided image areas according to the size of the second image and the second preset number;
and dividing an image area meeting the second size from the second image according to the second size.
In another possible implementation manner, the second preset number of image areas has corresponding image areas in the first image, and the determining the color value of the at least one pixel point as the dominant hue of the first image includes:
and respectively taking the color value of each pixel point extracted from the second image as the dominant hue of the corresponding region of the image region where each pixel point is located in the first image.
According to a second aspect of embodiments of the present disclosure, there is provided a dominant hue extraction device, the device comprising:
an image acquisition unit configured to acquire a first image including a plurality of first pixel points;
a color value acquisition unit configured to perform the following operation simultaneously for each first pixel point: obtaining the mixed color value of a reference pixel point according to the distance between the reference pixel point and the first pixel point, the color value of the reference pixel point and the color value of the first pixel point, wherein the reference pixel point is any pixel point in the first image;
The generating unit is used for processing the first image according to the color value of each first pixel point in the first image after mixing, and generating a second image;
and the extraction unit is used for extracting the color value of at least one pixel point from the second image and determining the color value of the at least one pixel point as the dominant hue of the first image.
In another possible implementation manner, the color value obtaining unit includes:
a mixing subunit, configured to perform the following operations for each first pixel simultaneously: according to the distance between the reference pixel point and the first pixel point, respectively mixing the color value of the first pixel point with the color value of the reference pixel point, and taking the color value after mixing the first pixel point as the mixed color value of the first pixel point relative to the reference pixel point;
and the determining subunit is used for taking the sum of the mixed color values of each first pixel point relative to the reference pixel point as the color value of the reference pixel point after mixing.
In another possible implementation, the image acquisition unit includes:
a first dividing subunit, configured to divide the third image into a first preset number of image areas, where each image area has the same size;
And the processing subunit is used for simultaneously carrying out downsampling processing on the first preset number of image areas to obtain the first preset number of first pixel points, and forming the first image by the first preset number of first pixel points.
In another possible implementation, the processing subunit is configured to:
meanwhile, according to the color value of the pixel point included in each image area, determining the color value of each image area as the color value of the first pixel point corresponding to each image area;
and creating a first image containing the plurality of first pixel points according to the determined color values of the plurality of first pixel points.
In another possible implementation, the processing subunit is configured to:
simultaneously acquiring an average value of color values of pixel points in each image area;
and taking the average value of the color values of the pixel points in each image area as the color value of the first pixel point corresponding to each image area in the first image.
In another possible implementation manner, the first dividing subunit is configured to:
determining a first size of the divided image areas according to the size of the third image and the first preset number;
And dividing an image area meeting the first size from the third image according to the first size.
In another possible implementation manner, the extracting unit includes:
a second dividing subunit, configured to divide the second image into a second preset number of image areas, where the sizes of the second preset number of image areas are the same;
a pixel point extraction subunit, configured to extract any pixel point from each image area in the second preset number of image areas, to obtain the second preset number of pixel points;
and the color value extraction subunit is used for extracting the color values of the second preset number of pixel points.
In another possible implementation manner, the second dividing subunit is configured to:
determining a second size of the divided image areas according to the size of the second image and the second preset number;
and dividing an image area meeting the second size from the second image according to the second size.
In another possible implementation manner, the second preset number of image areas has corresponding image areas in the first image, and the extracting unit is configured to:
And respectively taking the color value of each pixel point extracted from the second image as the dominant hue of the corresponding region of the image region where each pixel point is located in the first image.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device comprising:
one or more processors;
volatile or non-volatile memory for storing instructions executable by the one or more processors;
wherein the one or more processors are configured to perform the dominant hue extraction method as described in the first aspect.
According to a fourth aspect of embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium storing instructions which, when executed by a processor of an electronic device, enable the electronic device to perform the dominant hue extraction method described in the first aspect.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising instructions which, when executed by a processor of an electronic device, cause the electronic device to perform the dominant hue extraction method described in the first aspect.
The technical scheme provided by the embodiment of the disclosure can comprise the following beneficial effects:
The method, the device, the electronic equipment and the storage medium provided by the embodiments of the present disclosure acquire a first image comprising a plurality of first pixel points and perform the following operation simultaneously for each first pixel point: obtaining the mixed color value of a reference pixel point, which is any pixel point in the first image, according to the distance between the reference pixel point and the first pixel point, the color value of the reference pixel point and the color value of the first pixel point. Processing the first pixel points simultaneously improves the efficiency of processing the image. The first image is then processed according to the mixed color value of each first pixel point to generate a second image, so that the color values of the pixel points in the second image are more uniform, and the color value of at least one pixel point is extracted from the second image and determined as the dominant hue of the first image. Because the color values of the pixel points of the first image are mixed, the extracted dominant hue better conforms to the characteristics of the first image, which improves the accuracy of dominant hue extraction, saves processing time, and improves the efficiency of dominant hue extraction.
In addition, obtaining the average of the color values of the pixel points in each image area makes the color value determined for the corresponding first pixel point more uniform and improves its accuracy. Moreover, obtaining the first image by downsampling the first preset number of image areas of the third image reduces the amount of data to be processed and improves the efficiency of dominant hue extraction.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart illustrating a dominant hue extraction method according to an exemplary embodiment.
Fig. 2 is a flowchart illustrating a dominant hue extraction method according to an exemplary embodiment.
Fig. 3 is a schematic diagram illustrating a division of a third image according to an exemplary embodiment.
Fig. 4 is a schematic diagram showing an acquisition of an average value of color feature values of a third image according to an exemplary embodiment.
Fig. 5 is a schematic diagram illustrating a downsampling process according to an example embodiment.
Fig. 6 is a schematic diagram illustrating an interpolation process according to an exemplary embodiment.
Fig. 7 is a schematic diagram illustrating a tone extraction according to an exemplary embodiment.
Fig. 8 is a schematic structural view of a dominant hue extraction device according to an exemplary embodiment.
Fig. 9 is a schematic structural view of another dominant hue extraction device shown according to an exemplary embodiment.
Fig. 10 is a block diagram of a terminal according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatuses and methods consistent with aspects of the disclosure as recited in the appended claims.
The embodiments of the disclosure provide a dominant hue extraction method that comprises: acquiring a first image; performing the following operation simultaneously for each first pixel point in the first image: obtaining the mixed color value of a reference pixel point according to the distance between the reference pixel point and the first pixel point, the color value of the reference pixel point and the color value of the first pixel point; processing the first image according to the mixed color value of each first pixel point to generate a second image; extracting the color value of at least one pixel point from the second image; and determining the color value of the at least one pixel point as the dominant hue of the first image. The method can be applied in various scenes.
For example, when the method provided by the embodiment of the present disclosure is applied to an image classification scene and a terminal needs to classify a plurality of images, the method provided by the embodiment of the present disclosure may be used to obtain a dominant hue of each image, and then classify the plurality of images according to the dominant hue of each image.
Or, for example, the method provided by the embodiment of the present disclosure is applied in an image searching scene, when the terminal needs to search for images, the method provided by the embodiment of the present disclosure may be used to obtain a dominant hue of each image, and find an image with the same dominant hue as the dominant hue to be searched for obtaining a search result.
The dominant hue extraction method provided by the embodiments of the disclosure is applied to a terminal. The terminal may be a mobile phone, a tablet computer, a computer, or another type of terminal.
In one possible implementation, the terminal includes a GPU (Graphics Processing Unit), a processor in the terminal that performs drawing operations and can process data in parallel, thereby increasing the rate at which data is processed.
Fig. 1 is a flowchart illustrating a dominant hue extraction method according to an exemplary embodiment, see fig. 1, the method comprising:
In step 101, a first image is acquired, the first image comprising a plurality of first pixel points.
In step 102, the following operations are performed simultaneously for each first pixel: and obtaining the color value of the mixed reference pixel point according to the distance between the reference pixel point and the first pixel point, the color value of the reference pixel point and the color value of the first pixel point, wherein the reference pixel point is any pixel point in the first image.
In step 103, the first image is processed according to the color value of each first pixel point in the first image, so as to generate a second image.
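A hedged reading of step 103: the drawings (Fig. 6) refer to an interpolation process, so the second image may be produced by interpolating between the mixed color values of the first pixel points. The sketch below is a one-dimensional linear interpolation over a row of values; the scheme, factor parameter, and function name are assumptions rather than the disclosed implementation.

```python
def upsample_linear(values, factor):
    # Linearly interpolate between adjacent mixed color values to
    # produce a longer, smoother row (a 1-D stand-in for generating
    # the second image; the interpolation scheme is an assumption).
    out = []
    for i in range(len(values) - 1):
        a, b = values[i], values[i + 1]
        for k in range(factor):
            out.append(a + (b - a) * (k / factor))
    out.append(values[-1])
    return out
```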
In step 104, color values of at least one pixel are extracted from the second image, and the color values of the at least one pixel are determined as the dominant hue of the first image.
The method provided by the embodiments of the disclosure acquires a first image comprising a plurality of first pixel points and performs the following operation simultaneously for each first pixel point: obtaining the mixed color value of a reference pixel point, which is any pixel point in the first image, according to the distance between the reference pixel point and the first pixel point, the color value of the reference pixel point and the color value of the first pixel point. This simultaneous processing improves the efficiency of processing the image. The first image is then processed according to the mixed color value of each first pixel point to generate a second image, so that the color values of the pixel points in the second image are more uniform; the color value of at least one pixel point is extracted from the second image and determined as the dominant hue of the first image. Because the color values of the pixel points of the first image are mixed, the extracted dominant hue better conforms to the characteristics of the first image, which improves the accuracy of dominant hue extraction, saves processing time, and improves the efficiency of dominant hue extraction.
In one possible implementation, the following operations are performed simultaneously for each first pixel point: according to the distance between the reference pixel point and the first pixel point, the color value of the reference pixel point and the color value of the first pixel point, the color value of the mixed reference pixel point is obtained, and the reference pixel point is any pixel point in the first image and comprises the following steps:
the following operations are performed simultaneously for each first pixel: according to the distance between the reference pixel point and the first pixel point, respectively mixing the color value of the first pixel point with the color value of the reference pixel point, and taking the color value after mixing the first pixel point as the mixed color value of the first pixel point relative to the reference pixel point;
and taking the sum of the mixed color values of each first pixel point relative to the reference pixel point as the color value of the mixed reference pixel point.
In another possible implementation, the first image is acquired as follows:
dividing the third image into a first preset number of image areas, wherein the size of each image area is the same;
and simultaneously, carrying out downsampling treatment on a first preset number of image areas to obtain a first preset number of first pixel points, and forming a first image by the first preset number of first pixel points.
In another possible implementation manner, the downsampling process is performed on the first preset number of image areas at the same time to obtain a first preset number of first pixel points, and the forming the first image by the first preset number of first pixel points includes:
meanwhile, according to the color value of the pixel point included in each image area, determining the color value of each image area as the color value of the first pixel point corresponding to each image area;
and creating a first image containing the plurality of first pixel points according to the determined color values of the plurality of first pixel points.
In another possible implementation manner, determining, simultaneously, according to a color value of a pixel point included in each image area, a color value of each image area as a color value of a first pixel point corresponding to each image area includes:
simultaneously acquiring an average value of color values of pixel points in each image area;
and taking the average value of the color values of the pixel points in each image area as the color value of the first pixel point corresponding to each image area in the first image.
In another possible implementation, dividing the third image into a first preset number of image areas includes:
determining a first size of the divided image areas according to the size of the third image and the first preset number;
An image area satisfying the first size is divided from the third image according to the first size.
In another possible implementation, extracting the color value of the at least one pixel from the second image includes:
dividing the second image into a second preset number of image areas, wherein the sizes of the second preset number of image areas are the same;
extracting any pixel point from each image area in a second preset number of image areas to obtain a second preset number of pixel points;
and extracting color values of a second preset number of pixel points.
In another possible implementation, dividing the second image into a second preset number of image areas includes:
determining a second size of the divided image areas according to the size of the second image and a second preset number;
and dividing an image area satisfying the second size from the second image according to the second size.
In another possible implementation, the second preset number of image areas has corresponding image areas in the first image, and determining the color value of the at least one pixel point as the dominant hue of the first image includes:
and respectively taking the color value of each pixel point extracted from the second image as the dominant hue of the corresponding region of the image region where each pixel point is positioned in the first image.
Fig. 2 is a flowchart illustrating a dominant hue extraction method, referring to fig. 2, applied to a terminal, according to an exemplary embodiment, the method includes:
in step 201, the third image is divided into a first preset number of image areas, each of the same size.
The third image is any image of which the dominant hue needs to be extracted. The third image may be a scenic image, a person image or other type of image, etc. In addition, the third image may be obtained by shooting, by looking up images posted by other users on a social platform, by retrieval, or by other means.
A third image is acquired and divided into a first preset number of image areas, where the first preset number of image areas have the same size. In addition, the third image includes a plurality of pixel points that are uniformly distributed, so the number of pixel points in each of the first preset number of image areas is the same.
Wherein the first preset number is set by the terminal, or by the user, or may be set in other ways. For example, the first preset number may be 4, 6, 8, or other values.
For example, the third image is divided into 1×6 image areas, 2×8 image areas, 6×1 image areas, or another number of image areas.
For example, as shown in fig. 3, a third image is divided from top to bottom into 5 image areas.
In one possible implementation, the length and width of the third image are uniformly divided, so that the third image may be divided into a first preset number of image areas.
Optionally, in the process of dividing the third image, a first size of the divided image areas is determined according to the size of the third image and the first preset number, and image areas satisfying the first size are divided from the third image according to the first size.
In the process of dividing the third image, after the size of the third image and the first preset number are determined, the size of the third image can be divided evenly by the first preset number to obtain the first size, that is, the size of each image area divided from the third image. The third image is thereby divided into uniform image areas of the same size, so dividing the third image according to the first size yields the first preset number of image areas.
Optionally, in the process of dividing the third image, the third image may be divided in a transverse manner to obtain a first preset number of areas arranged transversely, or the third image may be divided in a longitudinal manner to obtain a first preset number of areas arranged longitudinally, or the third image may be further divided in two manners including a transverse manner and a longitudinal manner to obtain a first preset number of areas arranged uniformly in the transverse and longitudinal directions.
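The division of step 201 can be sketched as follows, assuming the divided dimension is evenly divisible by the first preset number so that every area has the same first size; the rectangle representation, mode names, and function name are illustrative.

```python
def divide_image(height, width, n, mode="horizontal"):
    # Return the (top, left, area_height, area_width) rectangle of each
    # of the n equal image areas. Assumes the divided dimension is
    # evenly divisible by n, so every area has the same first size.
    if mode == "horizontal":            # n strips stacked top to bottom
        strip = height // n
        return [(i * strip, 0, strip, width) for i in range(n)]
    else:                               # n strips arranged left to right
        strip = width // n
        return [(0, i * strip, height, strip) for i in range(n)]
```

A grid division combining both directions could be obtained by applying the two modes in sequence.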
In step 202, downsampling is performed on a first preset number of image areas to obtain a first preset number of first pixel points, and the first preset number of first pixel points form a first image.
The downsampling process reduces the number of pixels of an image and creates a new image with the reduced number of pixels. For example, when the third image includes 100 pixels, downsampling the third image yields a new image including 10 pixels; the size of the first image is therefore smaller than that of the third image.
When the downsampling processing is performed on the first preset number of image areas, each image area is simultaneously fused into one pixel point, obtaining the first preset number of first pixel points, and the first preset number of first pixel points form the first image.
For example, after the third image is divided into 5 image areas, downsampling is performed on the 5 image areas to obtain 5 first pixel points, and the 5 first pixel points form the first image.
When the first preset number of first pixel points form a first image, adding the first pixel points corresponding to each image area into the first image according to the position of each image area in the third image so as to form the first image.
In one possible implementation, the color value of each image area is determined according to the color value of the pixel point included in each image area, and as the color value of the first pixel point corresponding to each image area, a first image including a plurality of first pixel points is created according to the determined color values of the plurality of first pixel points.
Since the third image includes a first preset number of image areas, each image area corresponds to one first pixel point in the first image, the first image includes the first preset number of first pixel points.
Wherein the color value is used to represent the color of the pixel point. The color value may be an RGB (Red Green Blue) value, or the color value may be a pixel value, or the color value may be another type of value, or the like.
Optionally, an average value of the color values of the pixel points in each image area is obtained, and the average value of the color values of the pixel points in each image area is used as the color value of the first pixel point corresponding to each image area in the first image.
The average value of the color values of the pixel points in each image area is obtained by adopting the following formula:

C_i = (1 / |D_i|) · Σ_{(u,v) ∈ D_i} C_(u,v)

wherein C_i is used for representing the color average value of the pixel points of the i-th image area, |D_i| is used for representing the area of the i-th image area, C_(u,v) is used for representing the color value of the pixel at coordinates (u, v), and D_i is used for representing the set of pixel points of the i-th image area.
For example, for a first image area that includes 2 pixels with color values (20, 60, 30) and (60, 80, 20), the average value of the color values of the pixel points in the first image area obtained using the above formula is (40, 70, 25).
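A minimal sketch of the per-area averaging formula, assuming pixels are RGB tuples (the helper name is illustrative); note that channel-wise, the blue values 30 and 20 average to 25:

```python
def area_average(pixels):
    """Channel-wise mean color of the pixels in one image area:
    C_i = (1/|D_i|) * sum of C_(u,v) over the area."""
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))

avg = area_average([(20, 60, 30), (60, 80, 20)])
# With these two pixels the channel-wise mean is (40.0, 70.0, 25.0).
```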
For example, as shown in fig. 4, after the third image is divided into 5 image areas from top to bottom, an average value of color values of pixels of each image area is obtained, and the color values of 5 pixels are included in the first image.
For example, after the third image is divided into 4 image areas in the order from top to bottom, the color values of the 4 image areas obtained in the order from top to bottom are (20, 40, 20), (30, 20, 30), (50, 50, 60), (100, 20, 20), respectively, and the color value of each first pixel point of the determined first image is (20, 40, 20), (30, 20, 30), (50, 50, 60), (100, 20, 20), respectively.
In addition, when the resolution of the third image is m×n, the time complexity of the downsampling process in the related art is O(m×n), whereas steps 201 to 202 provided in the embodiments of the present application reduce the third image to the first image directly and simultaneously with a time complexity of O(1), improving the efficiency of downsampling by a factor of m×n compared with the related art.
By acquiring the average value of the color values of the pixel points of each image area, the determined color value of the first pixel point corresponding to each image area is more representative of the area, and the accuracy of the determined color value of the first pixel point can be improved. In addition, obtaining the first image by downsampling the third image improves the efficiency of extracting the dominant hue of the image and reduces the amount of data to be processed.
In the embodiment of the present application, obtaining the first image by downsampling the third image is taken as an example. In another embodiment, steps 201-202 may not be executed, and the first image may be any image from which the dominant hue needs to be extracted; the manner of obtaining it is similar to that of the third image in step 201.
In one possible implementation, when steps 201-202 are not performed, the first image in embodiments of the present application may be a landscape image, a person image, or other type of image, or the like. In addition, the first image may be obtained by shooting, by looking up images published by other users on a social platform, by retrieval, or by other means.
After the first image is obtained, steps 203-205 may be performed subsequently to directly obtain the dominant hue of the first image.
In step 203, the following operations are performed simultaneously for each first pixel: and acquiring a color value after mixing the reference pixel points according to the distance between the reference pixel points and the first pixel points, the color value of the reference pixel points and the color value of the first pixel points.
The reference pixel point is any pixel point in the first image.
When each first pixel point is processed, any pixel point in the first image is used as a reference pixel point, the distance between the reference pixel point and the first pixel point is determined according to the coordinates of the reference pixel point and the coordinates of the first pixel point, and then the color value after the reference pixel point is mixed is obtained according to the obtained distance, the color value of the reference pixel point and the color value of the first pixel point.
Optionally, the following operations are performed simultaneously for each first pixel: and respectively mixing the color value of the first pixel point with the color value of the reference pixel point according to the distance between the reference pixel point and the first pixel point, taking the mixed color value of the first pixel point as the mixed color value of the first pixel point relative to the reference pixel point, and taking the sum of the mixed color values of each first pixel point relative to the reference pixel point as the mixed color value of the reference pixel point.
The mixed color value of each first pixel point relative to the reference pixel point is obtained by adopting the following formula:

y = A · e^(−k·x²)

wherein y is the mixed color value of each first pixel point relative to the reference pixel point, A is the color value of each first pixel point, k is a fixed value, and x is the distance between each first pixel point and the reference pixel point.
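A minimal sketch of this mixing step, assuming a Gaussian weight y = A·e^(−k·x²) with x taken as the 1-D distance between pixel positions (the function name, strip layout, and value of k are illustrative assumptions):

```python
import math

def mixed_color(ref_pos, pixels, k=0.5):
    """Sum each first pixel's Gaussian-weighted color contribution
    relative to the reference pixel at ref_pos.

    pixels: list of (position, (r, g, b)) pairs; the reference pixel
    itself is one of the first pixels, so its own term has weight 1.
    """
    total = [0.0, 0.0, 0.0]
    for pos, color in pixels:
        dist = abs(pos - ref_pos)            # 1-D distance for a strip image
        weight = math.exp(-k * dist * dist)  # y = A * e^(-k * x^2), per channel
        for c in range(3):
            total[c] += color[c] * weight
    return tuple(total)
```

Applying this to every reference pixel simultaneously (e.g. one GPU thread per pixel) reproduces the parallel behavior described in step 203.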
In addition, it should be noted that step 203 in the embodiment of the present application performs interpolation processing on the image: interpolation is performed on the color value of each first pixel point to obtain the interpolated second image. The color values of the second pixel points in the second image are smoother, which can improve the accuracy of the subsequently extracted dominant hue.
When the first image is processed, each first pixel point in the first image can be processed at the same time, so that the processing time can be saved, and the processing efficiency can be improved.
Optionally, in steps 202 to 203 of the present application, a GPU may be used to perform the downsampling processing on the first preset number of image areas in parallel to obtain the first preset number of first pixel points, form the first image from them, and then obtain in parallel the mixed color value of the reference pixel points according to the distance between the reference pixel points and the first pixel points, the color value of the reference pixel points, and the color value of the first pixel points. Processing in parallel on the GPU achieves the effect of simultaneously downsampling the multiple image areas in step 202 and simultaneously obtaining the mixed color values of the reference pixels in step 203.
In addition, if the resolution of the first image is m×n and a convolution kernel C is used to process the image, the time complexity of processing in the related art is O(m×n×C). In step 203 provided in the embodiment of the present application, by operating on the pixel points simultaneously, the time complexity is O(C), an improvement of m×n times over the related art, which increases the efficiency of processing the image.
In step 204, the first image is processed according to the color value of each first pixel point in the first image, so as to generate a second image.
After the mixed color value of each first pixel point in the first image is obtained, the first image can be continuously processed, the color value of each first pixel point in the first image is determined to be the mixed color value, and a second image corresponding to the first image is generated.
Note that the embodiment of the present application performs interpolation processing on the first image using a Gaussian function only as an example. In another embodiment, a linear interpolation method may be adopted instead to perform interpolation processing on the first image and obtain the interpolated second image.
In step 205, color values of at least one pixel are extracted from the second image, and the color values of the at least one pixel are determined to be the dominant hue of the first image.
The second image includes a plurality of pixels, and when determining the dominant hue of the first image, the color value of at least one pixel may be extracted from the second image as the dominant hue of the first image.
In one possible implementation manner, the second image is divided into a second preset number of image areas, any pixel point is extracted from each image area in the second preset number of image areas, the second preset number of pixel points are obtained, the color value of the second preset number of pixel points is extracted, and the main tone of the first image is determined. Wherein the second predetermined number of image areas are the same size.
The second image is an image obtained by processing the first image, and in order to improve the accuracy of the determined dominant hue, the second image is divided into a second preset number of image areas, where the second preset number of image areas may represent the dominant hue of the corresponding area in the first image, and then any pixel point is extracted from each image area in the second preset number of image areas, so as to obtain a second preset number of pixel points, and the color value of the second preset number of pixel points is extracted to determine the dominant hue of the first image.
Optionally, when extracting pixels from a second preset number of image areas of the second image, extracting a central pixel of each image area to obtain a second preset number of pixels, extracting color values of the second preset number of pixels, and determining the color values as a dominant hue of the first image.
Wherein the second predetermined number of image areas are the same size. In addition, the second preset number is set by the terminal, or by the user, or may be set in other ways. For example, the second preset number may be 5, 6, 7, or other values. The center pixel is a pixel located at the center of the image area.
For example, when the number of pixels included in the image area is 7 and the 7 pixels are arranged from top to bottom, the 4 th pixel is the center pixel of the image area.
For example, the second image is divided into 5 image areas from top to bottom, the 5 image areas having the same size; the center pixel point of each image area is then determined according to the size of the areas, and the colors corresponding to the color values of the 5 center pixel points give the dominant hue of the first image.
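As a sketch of the center-pixel extraction in step 205, the following hypothetical helper assumes a 1-D column of pixels divided into equal areas (names are illustrative); with 7 pixels per area, the 4th pixel (index 3) is the center, as in the text:

```python
def center_pixels(column, num_areas):
    """Return the center pixel of each of num_areas equal areas."""
    size = len(column) // num_areas          # the "second size" of each area
    return [column[i * size + size // 2] for i in range(num_areas)]

# 5 areas of 7 pixels each; centers fall at indices 3, 10, 17, 24, 31.
col = list(range(35))
centers = center_pixels(col, 5)
```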
Optionally, a second size of the divided image areas is determined according to a size of the second image and a second preset number, and the image areas satisfying the second size are divided from the second image according to the second size.
In the process of dividing the second image, after the size of the second image and the second preset number are determined, the size of the second image is divided evenly by the second preset number to obtain the second size, that is, the size of each image area divided from the second image. The second image is thereby divided uniformly, each divided image area having the same size, so dividing the second image according to the second size yields the second preset number of image areas.
In one possible implementation manner, the second preset number of image areas in the second image has corresponding image areas in the first image, so that the color value of each pixel point extracted in the second image is respectively used as the dominant hue of the corresponding area in the first image of the image area where each pixel point is located.
In addition, by way of example, a method of extracting a dominant hue of an image provided in the embodiments of the present application will be described. For example, as shown in fig. 5, steps 201 to 202 are performed, the third image is subjected to downsampling to obtain a first image, as shown in fig. 6, steps 203 to 204 are performed, the first image is subjected to interpolation to obtain a second image, as shown in fig. 7, step 205 is performed, 5 pixels are extracted from the second image, and color values of the 5 pixels are dominant color of the first image.
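The flow of steps 201-205 illustrated by figs. 5-7 can be sketched end to end under the same assumptions as the per-step sketches: a 1-D column image, per-area averaging for the downsampling, a Gaussian blend y = A·e^(−k·x²) for the interpolation, and one extracted hue per area (all names and the value of k are illustrative):

```python
import math

def dominant_hues(column, num_areas, k=0.5):
    size = len(column) // num_areas
    # Steps 201-202: downsample each area to its average color (first image).
    first = [tuple(sum(p[c] for p in column[i * size:(i + 1) * size]) / size
                   for c in range(3)) for i in range(num_areas)]
    # Steps 203-204: mix every first pixel into each position with a
    # Gaussian weight on the distance, forming the second image.
    second = [tuple(sum(first[i][c] * math.exp(-k * (i - ref) ** 2)
                        for i in range(num_areas)) for c in range(3))
              for ref in range(num_areas)]
    # Step 205: with one area per extracted hue here, each mixed pixel's
    # color value is taken as a dominant hue of the original image.
    return second

hues = dominant_hues([(10, 10, 10)] * 10, 5)
```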
In the embodiment of the present application, the first image is an image obtained by downsampling the third image, and the determined dominant hue of the first image may also be the dominant hue of the third image. In yet another embodiment, if steps 201-202 are not performed, the dominant hue of the first image is directly acquired.
Alternatively, step 205 in the embodiment of the present application may simultaneously extract the color values of at least one pixel point from the second image.
Alternatively, step 205 in the embodiment of the present application may be executed by the GPU, extracting the color value of at least one pixel in the second image in parallel, and then taking the color corresponding to the color value of at least one pixel as the dominant color of the first image.
According to the method provided by the embodiment of the application, a first image including a plurality of first pixel points is acquired, and the following operation is performed simultaneously for each first pixel point: the mixed color value of any reference pixel point in the first image is obtained according to the distance between the reference pixel point and the first pixel point, the color value of the reference pixel point, and the color value of the first pixel point. This simultaneous processing improves the efficiency of image processing. The first image is then processed according to the mixed color value of each first pixel point to generate a second image, in which the color values of the pixel points are more uniform. Finally, the color value of at least one pixel point is extracted from the second image and determined as the dominant hue of the first image. By mixing the color values of the pixel points of the first image, the extracted dominant hue is more consistent with the characteristics of the first image, which improves the accuracy of dominant hue extraction, saves processing time, and improves the efficiency of extraction.
In addition, by acquiring the average value of the color values of the pixel points of each image area, the determined color value of the first pixel point corresponding to each image area is more representative, and the accuracy of the determined color value of the first pixel point can be improved. Moreover, obtaining the first image by downsampling the first preset number of image areas of the third image improves the efficiency of dominant hue extraction and reduces the amount of data to be processed.
Fig. 8 is a schematic structural view of a dominant hue extraction device according to an exemplary embodiment. Referring to fig. 8, the apparatus includes:
an image acquisition unit 801 configured to acquire a first image including a plurality of first pixel points;
a color value obtaining unit 802, configured to perform the following operations for each first pixel point simultaneously: acquiring a color value of the mixed reference pixel point according to the distance between the reference pixel point and the first pixel point, the color value of the reference pixel point and the color value of the first pixel point, wherein the reference pixel point is any pixel point in the first image;
a generating unit 803, configured to process the first image according to the color value of each first pixel point in the first image after mixing, and generate a second image;
An extracting unit 804, configured to extract a color value of at least one pixel from the second image, and determine the color value of the at least one pixel as a dominant hue of the first image.
In one possible implementation, referring to fig. 9, the color value acquisition unit 802 includes:
a mixing subunit 8021, configured to perform the following operations for each first pixel simultaneously: according to the distance between the reference pixel point and the first pixel point, respectively mixing the color value of the first pixel point with the color value of the reference pixel point, and taking the color value after mixing the first pixel point as the mixed color value of the first pixel point relative to the reference pixel point;
the determining subunit 8022 is configured to use the sum of the mixed color values of each first pixel point relative to the reference pixel point as a color value after mixing the reference pixel points.
In another possible implementation, referring to fig. 9, the image acquisition unit 801 includes:
a first dividing subunit 8011, configured to divide the third image into a first preset number of image areas, where each image area has the same size;
the processing subunit 8012 is configured to perform downsampling processing on a first preset number of image areas simultaneously to obtain a first preset number of first pixel points, and form a first image with the first preset number of first pixel points.
In another possible implementation, referring to fig. 9, the processing subunit 8012 is configured to:
meanwhile, according to the color value of the pixel point included in each image area, determining the color value of each image area as the color value of the first pixel point corresponding to each image area;
and creating a first image containing the plurality of first pixel points according to the determined color values of the plurality of first pixel points.
In another possible implementation, the processing subunit 8012 is configured to:
simultaneously acquiring an average value of color values of pixel points in each image area;
and taking the average value of the color values of the pixel points in each image area as the color value of the first pixel point corresponding to each image area in the first image.
In another possible implementation, the first dividing subunit 8011 is configured to:
determining a first size of the divided image areas according to the size of the third image and the first preset number;
an image area satisfying the first size is divided from the third image according to the first size.
In another possible implementation, referring to fig. 9, the extracting unit 804 includes:
a second dividing subunit 8041, configured to divide the second image into a second preset number of image areas, where the sizes of the second preset number of image areas are the same;
A pixel point extraction subunit 8042, configured to extract any pixel point from each image area in the second preset number of image areas, so as to obtain a second preset number of pixel points;
the color value extraction subunit 8043 is configured to extract color values of a second preset number of pixel points.
In another possible implementation, the second dividing subunit 8041 is configured to:
determining a second size of the divided image areas according to the size of the second image and a second preset number;
and dividing an image area satisfying the second size from the second image according to the second size.
In another possible implementation manner, the second preset number of image areas has corresponding image areas in the first image, and the extracting unit 804 is configured to:
and respectively taking the color value of each pixel point extracted from the second image as the dominant hue of the corresponding region of the image region where each pixel point is positioned in the first image.
The specific manner in which the various modules perform the operations in the apparatus of the above embodiments have been described in detail in connection with the embodiments of the method, and will not be described in detail herein.
Fig. 10 is a block diagram of an electronic device, such as a terminal, according to an example embodiment. The terminal 1000 can be a portable mobile terminal such as: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. Terminal 1000 may also be referred to by other names such as user device, portable terminal, laptop terminal, or desktop terminal.
In general, terminal 1000 can include: one or more processors 1001 and one or more memories 1002.
The processor 1001 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1001 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor 1001 may also include a main processor and a coprocessor; the main processor, also referred to as a CPU (Central Processing Unit), is a processor for processing data in an awake state, and the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 1001 may integrate a GPU (Graphics Processing Unit) responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 1001 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1002 may include one or more computer-readable storage media, which may be non-transitory. The memory 1002 may also include volatile or non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in memory 1002 is used to store at least one instruction to be executed by the processor 1001 to implement the dominant hue extraction method provided by the method embodiments herein.
In some embodiments, terminal 1000 can optionally further include: a peripheral interface 1003, and at least one peripheral. The processor 1001, the memory 1002, and the peripheral interface 1003 may be connected by a bus or signal line. The various peripheral devices may be connected to the peripheral interface 1003 via a bus, signal wire, or circuit board. Specifically, the peripherals include: at least one of radio frequency circuitry 1004, a display 1005, a camera assembly 1006, audio circuitry 1007, a positioning assembly 1008, and a power source 1009.
Peripheral interface 1003 may be used to connect I/O (Input/Output) related at least one peripheral to processor 1001 and memory 1002. In some embodiments, processor 1001, memory 1002, and peripheral interface 1003 are integrated on the same chip or circuit board; in some other embodiments, either or both of the processor 1001, memory 1002, and peripheral interface 1003 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
Radio Frequency circuit 1004 is used to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. Radio frequency circuitry 1004 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 1004 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1004 includes: antenna systems, RF transceivers, one or more amplifiers, tuners, oscillators, digital signal processors, codec chipsets, subscriber identity module cards, and so forth. Radio frequency circuitry 1004 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: metropolitan area networks, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity ) networks. In some embodiments, the radio frequency circuitry 1004 may also include NFC (Near Field Communication ) related circuitry, which is not limited in this application.
The display screen 1005 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 1005 is a touch screen, the display 1005 also has the ability to capture touch signals at or above the surface of the display 1005. The touch signal may be input to the processor 1001 as a control signal for processing. At this time, the display 1005 may also be used to provide virtual buttons and/or virtual keyboards, also referred to as soft buttons and/or soft keyboards. In some embodiments, display 1005 may be one, disposed on the front panel of terminal 1000; in other embodiments, display 1005 may be provided in at least two, separately provided on different surfaces of terminal 1000 or in a folded configuration; in other embodiments, display 1005 may be a flexible display disposed on a curved surface or a folded surface of terminal 1000. Even more, the display 1005 may be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The display 1005 may be made of LCD (Liquid Crystal Display ), OLED (Organic Light-Emitting Diode) or other materials.
The camera assembly 1006 is used to capture images or video. Optionally, camera assembly 1006 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth camera can be fused to realize a background blurring function, or the main camera and the wide-angle camera can be fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fused shooting functions. In some embodiments, camera assembly 1006 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation under different color temperatures.
The audio circuit 1007 may include a microphone and a speaker. The microphone is used for collecting sound waves of users and environments, converting the sound waves into electric signals, and inputting the electric signals to the processor 1001 for processing, or inputting the electric signals to the radio frequency circuit 1004 for voice communication. For purposes of stereo acquisition or noise reduction, the microphone may be multiple, each located at a different portion of terminal 1000. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is used to convert electrical signals from the processor 1001 or the radio frequency circuit 1004 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, audio circuit 1007 may also include a headphone jack.
The location component 1008 is used to locate the current geographic location of terminal 1000 to enable navigation or LBS (Location Based Service). The positioning component 1008 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
Power supply 1009 is used to power the various components in terminal 1000. The power source 1009 may be alternating current, direct current, disposable battery or rechargeable battery. When the power source 1009 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 1000 can further include one or more sensors 1010. The one or more sensors 1010 include, but are not limited to: an acceleration sensor 1011, a gyroscope sensor 1012, a pressure sensor 1013, a fingerprint sensor 1014, an optical sensor 1015, and a proximity sensor 1016.
The acceleration sensor 1011 can detect the magnitudes of accelerations on three coordinate axes of the coordinate system established with the terminal 1000. For example, the acceleration sensor 1011 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 1001 may control the display screen 1005 to display a user interface in a landscape view or a portrait view according to the gravitational acceleration signal acquired by the acceleration sensor 1011. The acceleration sensor 1011 may also be used for the acquisition of motion data of a game or a user.
The gyro sensor 1012 may detect the body direction and the rotation angle of the terminal 1000, and the gyro sensor 1012 may collect the 3D motion of the user to the terminal 1000 in cooperation with the acceleration sensor 1011. The processor 1001 may implement the following functions according to the data collected by the gyro sensor 1012: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
The pressure sensor 1013 may be disposed on a side frame of the terminal 1000 and/or beneath the display screen 1005. When the pressure sensor 1013 is disposed on a side frame of the terminal 1000, it can detect the user's grip signal on the terminal 1000, and the processor 1001 performs left- and right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 1013. When the pressure sensor 1013 is disposed beneath the display screen 1005, the processor 1001 controls operability controls on the UI according to the user's pressure operations on the display screen 1005. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 1014 is used to collect the user's fingerprint; either the processor 1001 identifies the user's identity based on the fingerprint collected by the fingerprint sensor 1014, or the fingerprint sensor 1014 identifies the user's identity based on the collected fingerprint. Upon recognizing the user's identity as trusted, the processor 1001 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 1014 may be disposed on the front, back, or side of the terminal 1000. When a physical key or vendor logo is provided on the terminal 1000, the fingerprint sensor 1014 may be integrated with the physical key or vendor logo.
The optical sensor 1015 is used to collect ambient light intensity. In one embodiment, the processor 1001 may control the display brightness of the display screen 1005 based on the ambient light intensity collected by the optical sensor 1015. Specifically, when the intensity of the ambient light is high, the display brightness of the display screen 1005 is turned up; when the ambient light intensity is low, the display brightness of the display screen 1005 is turned down. In another embodiment, the processor 1001 may dynamically adjust the shooting parameters of the camera module 1006 according to the ambient light intensity collected by the optical sensor 1015.
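The brightness rule described above (brighter surroundings, brighter screen) can be sketched as a simple monotone mapping. The linear curve, lux range, and brightness bounds below are illustrative assumptions, not values from the patent.

```python
def display_brightness(ambient_lux, lo=0.2, hi=1.0, max_lux=1000.0):
    """Hypothetical ambient-light-to-brightness mapping: clamp the
    measured lux into [0, max_lux], then interpolate linearly between
    a minimum brightness `lo` and a maximum brightness `hi`."""
    frac = min(max(ambient_lux / max_lux, 0.0), 1.0)
    return lo + (hi - lo) * frac
```

For example, a dark room (0 lux) yields the minimum brightness, while full daylight at or above `max_lux` yields the maximum.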
The proximity sensor 1016, also referred to as a distance sensor, is typically disposed on the front panel of the terminal 1000. The proximity sensor 1016 is used to collect the distance between the user and the front of the terminal 1000. In one embodiment, when the proximity sensor 1016 detects that the distance between the user and the front of the terminal 1000 gradually decreases, the processor 1001 controls the display screen 1005 to switch from the screen-on state to the screen-off state; when the proximity sensor 1016 detects that the distance gradually increases, the processor 1001 controls the display screen 1005 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the structure shown in Fig. 10 is not limiting; the terminal 1000 may include more or fewer components than shown, combine certain components, or employ a different arrangement of components.
In an exemplary embodiment, a non-transitory computer-readable storage medium is also provided, storing instructions that, when executed by a processor of an electronic device, enable the electronic device to perform the steps performed by the terminal or server in the above-described dominant hue extraction method.
In an exemplary embodiment, a computer program product is also provided, comprising instructions that, when executed by a processor of an electronic device, enable the electronic device to perform the steps performed by the terminal or server in the above-described dominant hue extraction method.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following the general principles thereof and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (18)

1. A method of dominant hue extraction, the method comprising:
acquiring a first image, wherein the first image comprises a plurality of first pixel points;
performing the following operation simultaneously for each first pixel point: acquiring a color value of a reference pixel point after mixing according to the distance between the reference pixel point and the first pixel point, the color value of the reference pixel point, and the color value of the first pixel point, wherein the reference pixel point is any pixel point in the first image;
processing the first image according to the color value of each first pixel point in the first image after mixing to generate a second image;
dividing the second image into a second preset number of image areas, wherein the second preset number of image areas have the same size;
extracting any pixel point from each image area in the second preset number of image areas to obtain the second preset number of pixel points;
extracting color values of the second preset number of pixel points, and determining the color values of the second preset number of pixel points as the dominant hue of the first image.
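As a rough illustration only, the pipeline of claim 1 (distance-based color mixing over all pixels, then sampling one pixel per equally sized region) could be sketched as below. The inverse-distance weights, the decomposition of the "second preset number" into a (rows, cols) grid, and the choice of each region's center pixel as the sampled "any pixel point" are all assumptions; the claim leaves these details open.

```python
import numpy as np

def extract_dominant_hues(image, grid=(3, 3)):
    """Sketch of the claimed method on an (H, W, 3) float image.
    Step 1: mix every pixel's color into every reference pixel,
    weighted by inverse spatial distance (one plausible reading of
    "according to the distance"). Step 2: divide the mixed image into
    equally sized regions and sample one pixel per region."""
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    colors = image.reshape(-1, 3)

    # Pairwise distances between all pixels; closer pixels weigh more.
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    wgt = 1.0 / (1.0 + d)
    wgt /= wgt.sum(axis=1, keepdims=True)   # normalize so colors stay valid
    mixed = (wgt @ colors).reshape(h, w, 3)  # the "second image"

    # Sample the center pixel of each equally sized region as its hue.
    rh, rw = h // grid[0], w // grid[1]
    hues = [mixed[i * rh + rh // 2, j * rw + rw // 2]
            for i in range(grid[0]) for j in range(grid[1])]
    return np.asarray(hues)
```

The all-pairs mixing is O(N^2) in the pixel count, which is why the claims first downsample the source image to a small "first image" (claims 3-5) before mixing.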
2. The method of claim 1, wherein the performing, simultaneously for each first pixel point, the operation of acquiring the color value of the reference pixel point after mixing according to the distance between the reference pixel point and the first pixel point, the color value of the reference pixel point, and the color value of the first pixel point, the reference pixel point being any pixel point in the first image, comprises:
performing the following operation simultaneously for each first pixel point: mixing, according to the distance between the reference pixel point and the first pixel point, the color value of the first pixel point with the color value of the reference pixel point, and taking the color value obtained after mixing as the mixed color value of the first pixel point relative to the reference pixel point;
and taking the sum of the mixed color values of each first pixel point relative to the reference pixel point as the color value of the reference pixel point after mixing.
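Claim 2's two steps, for a single reference pixel, might look like the sketch below. The blend function (alpha blending with inverse-distance alpha) and the normalization of the sum by the pixel count are assumptions; the patent does not specify them.

```python
import numpy as np

def mixed_color(ref_idx, coords, colors):
    """Claim-2-style mixing for one reference pixel. `coords` is an
    (N, 2) array of pixel positions and `colors` an (N, 3) array of
    color values; both names are hypothetical. Each pixel's color is
    blended with the reference color according to their distance, and
    the per-pixel blends are then combined into one color value."""
    d = np.linalg.norm(coords - coords[ref_idx], axis=1)
    alpha = 1.0 / (1.0 + d)  # closer pixels contribute more of their own color
    blends = alpha[:, None] * colors + (1 - alpha)[:, None] * colors[ref_idx]
    return blends.mean(axis=0)  # sum of blends, normalized by count
```

Normalizing by the count keeps the result inside the valid color range; a raw sum, as the claim literally reads, would need a later rescaling step.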
3. The method of claim 1, wherein the acquiring the first image comprises:
dividing the third image into a first preset number of image areas, wherein the size of each image area is the same;
and simultaneously downsampling the first preset number of image areas to obtain the first preset number of first pixel points, and forming the first image from the first preset number of first pixel points.
4. A method according to claim 3, wherein the simultaneously downsampling the first predetermined number of image areas to obtain the first predetermined number of first pixels, and forming the first image from the first predetermined number of first pixels comprises:
simultaneously determining, according to the color values of the pixel points included in each image area, the color value of each image area as the color value of the first pixel point corresponding to that image area;
and creating a first image containing the plurality of first pixel points according to the determined color values of the plurality of first pixel points.
5. The method according to claim 4, wherein determining the color value of each image area as the color value of the first pixel corresponding to each image area according to the color value of the pixel included in each image area includes:
simultaneously acquiring an average value of color values of pixel points in each image area;
and taking the average value of the color values of the pixel points in each image area as the color value of the first pixel point corresponding to each image area in the first image.
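The downsampling of claims 3 through 5 (equally sized regions, region mean as the corresponding first pixel's color) amounts to block averaging. The sketch below assumes the "first preset number" decomposes into a (rows, cols) grid and silently crops any remainder when the image size is not an exact multiple; both are illustrative assumptions.

```python
import numpy as np

def downsample_by_regions(image, grid):
    """Split an (H, W, C) image into rows x cols equally sized regions
    and return a (rows, cols, C) thumbnail whose pixels are the mean
    color of each region (claims 3-5)."""
    h, w, c = image.shape
    rows, cols = grid
    rh, rw = h // rows, w // cols  # size of each region
    # Crop to an exact multiple, then average within each block.
    blocks = image[:rows * rh, :cols * rw].reshape(rows, rh, cols, rw, c)
    return blocks.mean(axis=(1, 3))
```

The resulting small "first image" keeps the mixing step in claim 1 cheap, since its cost grows quadratically with the number of pixels.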
6. A method according to claim 3, wherein dividing the third image into a first predetermined number of image areas comprises:
determining a first size of the divided image areas according to the size of the third image and the first preset number;
and dividing an image area meeting the first size from the third image according to the first size.
7. The method of claim 1, wherein the dividing the second image into a second predetermined number of image areas comprises:
determining a second size of the divided image areas according to the size of the second image and the second preset number;
and dividing an image area meeting the second size from the second image according to the second size.
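The region division of claims 6 and 7 (derive a region size from the image size and the preset number, then cut regions of that size) could be sketched as follows. Treating the preset number as a rows x cols grid and using integer division for the region size are assumptions.

```python
def divide_into_regions(height, width, rows, cols):
    """Compute equally sized regions for an image of the given size
    (claims 6-7). Returns (top, left, region_h, region_w) boxes; the
    tuple layout is a hypothetical convention."""
    rh, rw = height // rows, width // cols  # the "first/second size"
    return [(i * rh, j * rw, rh, rw)
            for i in range(rows) for j in range(cols)]
```

Each box then yields one sampled pixel in claim 1's final step, so the number of boxes equals the number of extracted dominant hues.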
8. The method of claim 1, wherein each of the second preset number of image areas has a corresponding image area in the first image, the method further comprising:
and respectively taking the color value of each pixel point extracted from the second image as the dominant hue of the corresponding region of the image region where each pixel point is located in the first image.
9. A dominant hue extraction device, said device comprising:
an image acquisition unit configured to acquire a first image including a plurality of first pixel points;
a color value acquisition unit configured to perform the following operation simultaneously for each first pixel point: acquiring a color value of a reference pixel point after mixing according to the distance between the reference pixel point and the first pixel point, the color value of the reference pixel point, and the color value of the first pixel point, wherein the reference pixel point is any pixel point in the first image;
a generating unit configured to process the first image according to the color value of each first pixel point in the first image after mixing, to generate a second image; and
an extraction unit configured to divide the second image into a second preset number of image areas, the second preset number of image areas having the same size; extract any pixel point from each of the second preset number of image areas to obtain the second preset number of pixel points; and extract the color values of the second preset number of pixel points and determine the color values of the second preset number of pixel points as the dominant hue of the first image.
10. The apparatus according to claim 9, wherein the color value acquisition unit includes:
a mixing subunit configured to perform the following operation simultaneously for each first pixel point: mixing, according to the distance between the reference pixel point and the first pixel point, the color value of the first pixel point with the color value of the reference pixel point, and taking the color value obtained after mixing as the mixed color value of the first pixel point relative to the reference pixel point;
and a determining subunit configured to take the sum of the mixed color values of each first pixel point relative to the reference pixel point as the color value of the reference pixel point after mixing.
11. The apparatus of claim 9, wherein the image acquisition unit comprises:
a dividing subunit, configured to divide the third image into a first preset number of image areas, where each image area has the same size;
and a processing subunit configured to simultaneously downsample the first preset number of image areas to obtain the first preset number of first pixel points, and to form the first image from the first preset number of first pixel points.
12. The apparatus of claim 11, wherein the processing subunit is configured to:
simultaneously determining, according to the color values of the pixel points included in each image area, the color value of each image area as the color value of the first pixel point corresponding to that image area;
and creating a first image containing the plurality of first pixel points according to the determined color values of the plurality of first pixel points.
13. The apparatus of claim 12, wherein the processing subunit is configured to:
simultaneously acquiring an average value of the color values of the pixel points in each image area;
and taking the average value of the color values of the pixel points in each image area as the color value of the first pixel point corresponding to each image area in the first image.
14. The apparatus of claim 11, wherein the dividing subunit is configured to:
determining a first size of the divided image areas according to the size of the third image and the first preset number;
and dividing an image area meeting the first size from the third image according to the first size.
15. The apparatus of claim 9, wherein the extraction unit is configured to:
determining a second size of the divided image areas according to the size of the second image and the second preset number;
and dividing an image area meeting the second size from the second image according to the second size.
16. The apparatus according to claim 9, wherein each of the second preset number of image areas has a corresponding image area in the first image, the extraction unit being configured to:
and respectively taking the color value of each pixel point extracted from the second image as the dominant hue of the corresponding region of the image region where each pixel point is located in the first image.
17. An electronic device, the electronic device comprising:
one or more processors;
a volatile or non-volatile memory for storing instructions executable by the one or more processors;
wherein the one or more processors are configured to perform the dominant hue extraction method of any of claims 1-8.
18. A non-transitory computer-readable storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the dominant hue extraction method of any one of claims 1-8.
CN202010485775.8A 2020-06-01 2020-06-01 Dominant hue extraction method, device, electronic equipment and storage medium Active CN113763486B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010485775.8A CN113763486B (en) 2020-06-01 2020-06-01 Dominant hue extraction method, device, electronic equipment and storage medium
PCT/CN2020/127558 WO2021243955A1 (en) 2020-06-01 2020-11-09 Dominant hue extraction method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010485775.8A CN113763486B (en) 2020-06-01 2020-06-01 Dominant hue extraction method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113763486A CN113763486A (en) 2021-12-07
CN113763486B true CN113763486B (en) 2024-03-01

Family

ID=78782666

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010485775.8A Active CN113763486B (en) 2020-06-01 2020-06-01 Dominant hue extraction method, device, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN113763486B (en)
WO (1) WO2021243955A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015039567A1 (en) * 2013-09-17 2015-03-26 Tencent Technology (Shenzhen) Company Limited Method and user apparatus for window coloring
CN105989799A (en) * 2015-02-12 2016-10-05 西安诺瓦电子科技有限公司 Image processing method and image processing device
CN106780634A (en) * 2016-12-27 2017-05-31 努比亚技术有限公司 Picture dominant tone extracting method and device
CN106898026A (en) * 2017-03-15 2017-06-27 腾讯科技(深圳)有限公司 The dominant hue extracting method and device of a kind of picture
CN110825968A (en) * 2019-11-04 2020-02-21 腾讯科技(深圳)有限公司 Information pushing method and device, storage medium and computer equipment

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100378351B1 (en) * 2000-11-13 2003-03-29 삼성전자주식회사 Method and apparatus for measuring color-texture distance, and method and apparatus for sectioning image into a plurality of regions using the measured color-texture distance
KR20070026701A (en) * 2004-06-30 2007-03-08 코닌클리케 필립스 일렉트로닉스 엔.브이. Dominant color extraction using perceptual rules to produce ambient light derived from video content
CN102523367B (en) * 2011-12-29 2016-06-15 全时云商务服务股份有限公司 Real time imaging based on many palettes compresses and method of reducing
EP2806401A1 (en) * 2013-05-23 2014-11-26 Thomson Licensing Method and device for processing a picture
CN103761303B (en) * 2014-01-22 2017-09-15 广东欧珀移动通信有限公司 The arrangement display methods and device of a kind of picture
CN109472832B (en) * 2018-10-15 2020-10-30 广东智媒云图科技股份有限公司 Color scheme generation method and device and intelligent robot

Also Published As

Publication number Publication date
CN113763486A (en) 2021-12-07
WO2021243955A1 (en) 2021-12-09

Similar Documents

Publication Publication Date Title
CN110502954B (en) Video analysis method and device
CN110992493B (en) Image processing method, device, electronic equipment and storage medium
CN109558837B (en) Face key point detection method, device and storage medium
CN112907725B (en) Image generation, training of image processing model and image processing method and device
CN111541907A (en) Article display method, apparatus, device and storage medium
CN112581358B (en) Training method of image processing model, image processing method and device
CN113763228B (en) Image processing method, device, electronic equipment and storage medium
US11386586B2 (en) Method and electronic device for adding virtual item
CN111754386B (en) Image area shielding method, device, equipment and storage medium
CN112565806B (en) Virtual gift giving method, device, computer equipment and medium
CN110705614A (en) Model training method and device, electronic equipment and storage medium
CN111105474B (en) Font drawing method, font drawing device, computer device and computer readable storage medium
CN110991445B (en) Vertical text recognition method, device, equipment and medium
CN110619614B (en) Image processing method, device, computer equipment and storage medium
CN111353946A (en) Image restoration method, device, equipment and storage medium
CN110853124B (en) Method, device, electronic equipment and medium for generating GIF dynamic diagram
CN111860064B (en) Video-based target detection method, device, equipment and storage medium
CN111639639B (en) Method, device, equipment and storage medium for detecting text area
CN113301422B (en) Method, terminal and storage medium for acquiring video cover
CN114155132A (en) Image processing method, device, equipment and computer readable storage medium
CN110942426B (en) Image processing method, device, computer equipment and storage medium
CN113763486B (en) Dominant hue extraction method, device, electronic equipment and storage medium
CN111488895B (en) Countermeasure data generation method, device, equipment and storage medium
CN112399080A (en) Video processing method, device, terminal and computer readable storage medium
CN113592874A (en) Image display method and device and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant