CN111462257A - Image processing method based on palette - Google Patents

Image processing method based on palette

Info

Publication number
CN111462257A
CN111462257A
Authority
CN
China
Prior art keywords
pixel
pixels
density
distance
coordinates
Prior art date
Legal status
Granted
Application number
CN202010241999.4A
Other languages
Chinese (zh)
Other versions
CN111462257B (en)
Inventor
赵永涛
陈瑞阳
王伟
王渊峰
Current Assignee
Glenfly Tech Co Ltd
Original Assignee
Shanghai Zhaoxin Integrated Circuit Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Zhaoxin Integrated Circuit Co Ltd
Priority to CN202010241999.4A
Publication of CN111462257A
Application granted
Publication of CN111462257B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/90: Determination of colour characteristics
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/23: Clustering techniques
    • G06F 18/232: Non-hierarchical techniques
    • G06F 18/2321: Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 9/00: Image coding

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

A palette-based image processing method, comprising the steps of: obtaining an image block, wherein the image block comprises a plurality of pixels; calculating the pixel density corresponding to each pixel according to the pixel information of the image block; calculating the density distance corresponding to each pixel according to the pixel density corresponding to the pixel; selecting a plurality of central pixels from the pixels according to the density distance corresponding to the pixels; the pixel value of the center pixel is taken as a plurality of pixel values of the palette.

Description

Image processing method based on palette
Technical Field
The present invention relates to an image processing method, and more particularly, to an image processing method based on a palette.
Background
In general, the palette method is a classical method for compressing image blocks. It encodes a set of palette pixels, fewer in number than the original pixels; each pixel to be encoded then only encodes its index into the palette pixels and is reconstructed from them.
An existing palette generation algorithm may build the palette from historical pixels. However, this approach is very limited: it wastes a lot of palette space when many similar pixels are encoded, and it cannot be updated efficiently under partial-update (regional update) image compression.
Another existing palette generation algorithm is K-means clustering, which uses the cluster centers as the palette pixels. However, this method requires many iterations, has severe data dependence, and the choice of the K value strongly affects the encoding result. How to select the center points of dense pixel groups as palette pixels therefore remains an open problem.
Disclosure of Invention
The invention provides a palette-based image processing method that can quickly select suitable pixels from an image block to serve as palette pixels, increasing convenience of use.
The invention provides an image processing method based on a palette, which comprises the following steps. An image block is obtained, wherein the image block comprises a plurality of pixels. And calculating the pixel density corresponding to each pixel according to the pixel information of the image block. And calculating the density distance corresponding to each pixel according to the pixel density corresponding to the pixel. And selecting a plurality of central pixels from the pixels according to the corresponding density distance of the pixels. The pixel value of the center pixel is taken as a plurality of pixel values of the palette.
The palette-based image processing method disclosed by the invention obtains an image block comprising a plurality of pixels. The pixel density corresponding to each pixel is calculated from the pixel information of the image block; the density distance corresponding to each pixel is calculated from the pixel densities; a plurality of center pixels are selected from the pixels according to the density distances; and the pixel values of the center pixels are taken as a plurality of pixel values of the palette. In this way, suitable pixels can be selected from the image block quickly to serve as palette pixels, increasing convenience of use.
Drawings
Fig. 1 is a flowchart of a palette-based image processing method according to an embodiment of the present invention.
FIG. 2 is a diagram illustrating a distribution of pixels of an image block according to an embodiment of the invention.
Fig. 3 is a detailed flowchart of step S104 of fig. 1.
Fig. 4 is a detailed flowchart of step S106 of fig. 1.
Fig. 5 is a detailed flowchart of step S108 of fig. 1.
Fig. 6 is a flowchart of a palette-based image processing method according to an embodiment of the present invention.
Detailed Description
In each of the embodiments listed below, the same or similar elements or components will be denoted by the same reference numerals.
Fig. 1 is a flowchart of a palette-based image processing method according to an embodiment of the present invention. The palette-based image processing method according to the embodiment of the present invention may be executed by a computer device, for example. The computer device is, for example, a personal computer, a notebook computer, a server, or the like. In addition, the computer device at least comprises a processor and a memory. The memory may store an executable program, and the processor may read the executable program stored in the memory and execute the executable program to perform the palette-based image processing method according to the embodiment of the present invention.
Please refer to fig. 1. In step S102, an image block is obtained, wherein the image block includes a plurality of pixels. For example, as shown in fig. 2, an image block comprises 32 pixels. The coordinates occurring among the 32 pixels are (4,3), (4,0), (6,0), (8,7), (8,0), (14,5), (14,8), (18,17), (18,9), (18,0), (18,12), (29,50), (34,55), (160,158), (160,153), (167,174), (167,145), (230,237), (230,225), (240,244), (235,235), (243,250) and (243,245); several pixels share the same coordinates, as the worked examples below illustrate. The 32 pixels shown in fig. 2 are an exemplary embodiment of the invention, but the invention is not limited thereto. The user can adjust the number of pixels as required and achieve the same effect.
In step S104, the pixel density corresponding to each pixel is calculated according to the pixel information of the image block. Further, step S104 may include steps S302-S306, as shown in FIG. 3. In step S302, coordinates of pixels in the image block are obtained according to the pixel information of the image block. For example, the coordinates of 32 pixels in the image block shown in fig. 2 are obtained.
In step S304, a plurality of distance differences between the coordinates of each pixel and the coordinates of the other pixels are calculated. For example, the distance differences between the coordinates (4,3) of the 1st pixel and the coordinates of the other 31 pixels are calculated; likewise for the 2nd pixel (4,3), the 3rd pixel (4,0), and so on, up to the 32nd pixel (243,245).
In the present embodiment, the distance difference between two pixels is, for example, the Euclidean distance. For two-dimensional coordinates, the Euclidean distance can be represented by the following formula (1):

ρ = √((x2 − x1)² + (y2 − y1)²)    (1)

where ρ is the distance difference, (x2 − x1) is the difference in the X coordinates of the two pixels, and (y2 − y1) is the difference in their Y coordinates.
For three-dimensional coordinates, the Euclidean distance may be represented by the following formula (2):

ρ = √((x2 − x1)² + (y2 − y1)² + (z2 − z1)²)    (2)

where ρ is the distance difference, and (x2 − x1), (y2 − y1) and (z2 − z1) are the differences in the X, Y and Z coordinates of the two pixels, respectively.
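As a concrete illustration, formulas (1) and (2) can be sketched in a few lines of Python (the function name is illustrative, not from the patent; both the 2-D and 3-D forms reduce to summing squared coordinate differences):

```python
import math

def euclidean_distance(p1, p2):
    """Distance difference per formulas (1) and (2).

    Accepts 2-D coordinates (x, y) or 3-D coordinates (x, y, z):
    the squared coordinate differences are summed and square-rooted.
    """
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

# Example from the description: the 1st pixel (4,3) and the 3rd pixel (4,0).
print(euclidean_distance((4, 3), (4, 0)))  # 3.0
```

The same function serves the later steps as well, since both the pixel-density and density-distance calculations reuse this distance difference.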
In step S306, the pixel density corresponding to each pixel is calculated according to the distance difference between the coordinates of each pixel and the coordinates of other pixels. Further, step S306 may include steps S308 and S310. In step S308, a density contribution value corresponding to each distance difference is obtained in a lookup table according to the distance difference between the coordinates of each pixel and the coordinates of other pixels.
For example, the distance difference between the 1st pixel (4,3) and the 2nd pixel (4,3) is 0. From the distance difference "0", a corresponding index value can be calculated. In the present embodiment, the index value is calculated as shown in the following equation (3):

if dist = 0, index = 0;
otherwise, index = Floor(log2(dist)) + 1,    (3)

where dist is the distance difference, index is the index value, and Floor rounds down to the nearest integer; for example, Floor(2.2) = 2, Floor(3.4) = 3, Floor(5.8) = 5, and so on.
By equation (3), the index value corresponding to the distance difference "0" is 0. According to this index value, the density contribution value is obtained from the lookup table: from Table 1, the density contribution value corresponding to index value "0" is 43. That is, for the 1st pixel, the 2nd pixel provides a density contribution of 43.
TABLE 1

Distance difference    Index value    Density contribution value
0                      0              43
1                      1              29
2~3                    2              19
4~7                    3              13
8~15                   4              7
16~31                  5              5
32~63                  6              3
64~127                 7              1
128~255                8              0
over 255               more than 8    0
In addition, the distance difference between the 1st pixel (4,3) and the 3rd pixel (4,0) is 3. By equation (3), the corresponding index value is index = Floor(log2(3)) + 1 = Floor(1.585) + 1 = 1 + 1 = 2; that is, the distance difference "3" corresponds to index value 2. From the lookup table in Table 1, the density contribution value for index value "2" is 19. That is, for the 1st pixel, the 3rd pixel provides a density contribution of 19.
Similarly, the distance difference between the 1st pixel (4,3) and the 21st pixel (29,50) is about 53. By equation (3), index = Floor(log2(53)) + 1 = Floor(5.728) + 1 = 5 + 1 = 6; that is, the distance difference "53" corresponds to index value 6. From Table 1, the density contribution value for index value "6" is 3. That is, for the 1st pixel, the 21st pixel provides a density contribution of 3.
Finally, the distance difference between the 1st pixel (4,3) and the 32nd pixel (243,245) is about 340. By equation (3), index = Floor(log2(340)) + 1 = Floor(8.410) + 1 = 8 + 1 = 9; that is, the distance difference "340" corresponds to index value 9. From Table 1, any index value greater than 8 has a density contribution value of 0, so for the 1st pixel, the 32nd pixel provides a density contribution of 0. The density contributions between the remaining pixel pairs are obtained in the same way and are therefore not repeated here.
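The index and lookup steps worked through above (distance differences 0, 3, 53 and 340) can be sketched as follows; the lookup table is transcribed from Table 1, and the function names are illustrative, not from the patent:

```python
import math

# Density contribution value for index values 0..8, transcribed from Table 1;
# index values greater than 8 contribute 0.
CONTRIBUTION_LUT = [43, 29, 19, 13, 7, 5, 3, 1, 0]

def index_of(dist):
    """Equation (3): index = 0 if dist == 0, else Floor(log2(dist)) + 1."""
    if dist == 0:
        return 0
    return math.floor(math.log2(dist)) + 1

def contribution(dist):
    """Density contribution value for one distance difference."""
    idx = index_of(dist)
    return CONTRIBUTION_LUT[idx] if idx < len(CONTRIBUTION_LUT) else 0

print(contribution(0))    # 43 (index 0)
print(contribution(3))    # 19 (index 2)
print(contribution(53))   # 3  (index 6)
print(contribution(340))  # 0  (index greater than 8)
```

Because the index is logarithmic in the distance, the table stays small while still distinguishing near neighbors sharply, which is the point of the piecewise ranges in Table 1.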
In step S310, the density contribution values corresponding to each pixel are summed to obtain the pixel density of that pixel. That is, the 31 density contribution values corresponding to the 1st pixel are summed to give the pixel density of the 1st pixel; the 31 values corresponding to the 2nd pixel give the pixel density of the 2nd pixel; and so on through the 32nd pixel.
As can be seen from the above, the pixel density of each pixel depends on the magnitudes of the distance differences: the smaller the distance difference between two pixels, the larger the density contribution each makes to the other; the larger the distance difference, the smaller the contribution.
In addition, if a pixel has several closely spaced neighbors, its pixel density is large. If a pixel has no closely spaced neighbors, or no other pixels within its predetermined area, its pixel density is small.
In this embodiment, the predetermined area can be set freely; for example, its radius may be half the distance between the two farthest-apart pixels. That is, the pixel density of a pixel is calculated only from the other pixels within the predetermined area; pixels beyond it are not counted.
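Putting steps S302 to S310 together, the pixel density of every pixel is the sum of the density contributions from the other pixels. A minimal sketch under the same assumptions (2-D coordinates, Euclidean distance, the lookup table of Table 1; the names are illustrative):

```python
import math

CONTRIBUTION_LUT = [43, 29, 19, 13, 7, 5, 3, 1, 0]  # Table 1; index > 8 -> 0

def pixel_densities(coords):
    """Pixel density of each pixel: summed density contributions of all
    other pixels, per equation (3) and Table 1 (steps S304-S310)."""
    densities = []
    for i, p in enumerate(coords):
        total = 0
        for j, q in enumerate(coords):
            if i == j:
                continue
            dist = math.hypot(p[0] - q[0], p[1] - q[1])   # formula (1)
            idx = 0 if dist == 0 else math.floor(math.log2(dist)) + 1
            total += CONTRIBUTION_LUT[idx] if idx < len(CONTRIBUTION_LUT) else 0
        densities.append(total)
    return densities

# Three pixels: two coincident at (4,3) and one at (4,0).
print(pixel_densities([(4, 3), (4, 3), (4, 0)]))  # [62, 62, 38]
```

As the text notes, coincident or closely spaced pixels reinforce each other (contributions of 43 or 29), while distant pixels add little or nothing.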
In step S106, a density distance corresponding to each pixel is calculated according to the pixel density corresponding to the pixel. Further, step S106 may include steps S402-S408, as shown in FIG. 4.
In step S402, based on the pixel densities, it is determined for each pixel whether there are other pixels whose pixel density is greater than that of the pixel. That is, after the pixel densities of the 32 pixels are obtained, each of the 32 pixels is checked for the existence of surrounding pixels with a higher pixel density.
For example, it is determined whether there are pixels around the 1st pixel whose pixel density is greater than that of the 1st pixel; likewise for the 2nd pixel; and so on through the 32nd pixel.
When it is determined that there are other pixels with a pixel density greater than that of the pixel, the process proceeds to step S404, and a plurality of distance differences between the pixel and those denser pixels are calculated from their coordinates. For example, assume that the 3rd, 5th and 20th pixels around the 1st pixel have a pixel density greater than that of the 1st pixel, although the embodiment of the invention is not limited thereto.
In this embodiment, the distance difference between two pixels can also be calculated as the Euclidean distance or the Manhattan distance. From the coordinates (4,3) of the 1st pixel and (4,0) of the 3rd pixel, their distance difference is 3; from the coordinates (4,3) of the 1st pixel and (6,0) of the 5th pixel, their distance difference is about 3.6. The distance differences for the remaining pixels are obtained in the same way and are therefore not repeated here.
In step S406, the smallest distance difference among the distance differences is selected as the density distance corresponding to the pixel. For example, if the distance difference "3" is the smallest distance difference among the distance differences corresponding to the 1 st pixel, the distance difference "3" can be selected as the density distance corresponding to the 1 st pixel. The density distances corresponding to the remaining pixels can be referred to the above description, and thus are not described herein again.
When it is determined that no other pixel has a pixel density greater than that of the pixel, the process proceeds to step S408, and the density distance of the pixel is defined as the maximum distance. For example, assume the pixel density of the 5th pixel (6,0) is the largest, meaning that no surrounding pixel is denser than the 5th pixel. Its density distance is therefore defined as the maximum distance, denoted Inf. After the above checks and calculations, the density distances corresponding to the 32 pixels are 3, 0, 2, 0, Inf, 0, 7, 2, 11, 3, 5, 9, 8, 44, 10, 5, 278, 23, 15, 7, 12, 4, 18, 5 and 147, respectively.
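Steps S402 to S408 can be sketched as follows: each pixel takes as its density distance the smallest distance to any denser pixel, and the densest pixel gets the maximum distance (Inf). The names are illustrative, and the Euclidean distance is used here, though the embodiment also allows the Manhattan distance:

```python
import math

def density_distances(coords, densities):
    """Density distance per steps S402-S408: the minimum distance to a
    pixel of higher density, or Inf when no pixel is denser."""
    result = []
    for i, p in enumerate(coords):
        higher = [q for j, q in enumerate(coords)
                  if densities[j] > densities[i]]
        if not higher:                      # densest pixel: maximum distance
            result.append(math.inf)
        else:
            result.append(min(math.hypot(p[0] - q[0], p[1] - q[1])
                              for q in higher))
    return result

# Toy example: the middle pixel is the densest, so its distance is Inf.
print(density_distances([(0, 0), (0, 3), (10, 10)], [5, 9, 1]))
```

Pixels deep inside a dense group get small density distances, while group centers and isolated pixels get large ones; this is what makes the quantity useful for picking cluster centers in the next step.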
In step S108, a plurality of central pixels are selected from the pixels according to the density distance corresponding to the pixels. Further, step S108 may include steps S502-S504, as shown in FIG. 5.
In step S502, the density distances are sorted according to the density distance corresponding to the pixel. That is, the density distances 3, 0,2, 0, Inf, 0, 7, 2, 11, 3, 5, 9, 8, 44, 10, 5, 278, 23, 15, 7, 12, 4, 18, 5, 147 corresponding to the 32 pixels are sorted.
In step S504, center pixels are selected from the pixels according to a preset selection number and the sorted magnitudes of the density distances. For example, assume the preset selection number is 4, so the 4 pixels with the largest density distances are selected as center pixels. That is, the 5th, 32nd, 24th and 21st pixels, corresponding to the density distances "Inf", "278", "147" and "44", are selected as the center pixels from among the 32 pixels.
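Steps S502 and S504 reduce to sorting the density distances and taking the largest ones. A sketch (the preset selection number is 4 in the example above; the function name is illustrative):

```python
import math

def select_centers(density_distances, k=4):
    """Steps S502-S504: indices of the k pixels with the largest
    density distances, which become the center pixels."""
    order = sorted(range(len(density_distances)),
                   key=lambda i: density_distances[i], reverse=True)
    return order[:k]

# Toy example with four pixels, selecting the top two:
print(select_centers([3.0, math.inf, 7.0, 1.0], k=2))  # [1, 2]
```

Unlike K-means, this selection needs no iteration: one density pass, one distance pass, and one sort.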
Next, in step S110, the pixel values of the center pixels are set as a plurality of pixel values of the palette. That is, the pixel values of the 5th, 32nd, 24th and 21st pixels are set as the pixel values of the palette. In this way, suitable pixels in the image block can be selected quickly as palette pixels, improving convenience of use. In addition, the compression quality of the image encoder is improved, data dependence is reduced, and the hardware implementation is simpler.
Fig. 6 is a flowchart of a palette-based image processing method according to an embodiment of the present invention. In the present embodiment, steps S102 to S110 are the same as or similar to steps S102 to S110 in fig. 1, and reference may be made to the description of the embodiment in fig. 1, so that no further description is provided herein.
In step S602, the remaining pixels are clustered according to the center pixels to generate a plurality of clusters. That is, the remaining pixels (the 1st to 4th, 6th to 20th, 22nd to 23rd and 25th to 31st pixels) are grouped according to the center pixels (the 5th, 32nd, 24th and 21st pixels) to generate clusters 210, 220, 230 and 240.
For example, cluster 210 includes the 1st to 20th pixels, cluster 220 includes the 21st and 22nd pixels, cluster 230 includes the 23rd to 26th pixels, and cluster 240 includes the 27th to 32nd pixels.
In step S604, the pixel values of the pixels in each cluster are calculated to generate a plurality of pixel values corresponding to the clusters. In the present embodiment, the pixel value corresponding to each of the clusters 210, 220, 230 and 240 is, for example, the average of the pixel values of the pixels in that cluster.
For example, the pixel value corresponding to cluster 210 is the average of the pixel values of the 1st to 20th pixels in cluster 210; the pixel value corresponding to cluster 220 is the average over the 21st and 22nd pixels; that of cluster 230 is the average over the 23rd to 26th pixels; and that of cluster 240 is the average over the 27th to 32nd pixels.
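Steps S602 to S606 can be sketched as assigning every pixel to its nearest center pixel and averaging each cluster's pixel values. This sketch treats the coordinate vector as the pixel value being averaged, which is an assumption for illustration; the names are illustrative as well:

```python
import math

def cluster_and_average(coords, center_indices):
    """S602: assign each pixel (centers included) to the nearest center,
    so no cluster is ever empty. S604-S606: average each cluster's pixel
    values to obtain the palette values."""
    clusters = {c: [] for c in center_indices}
    for p in coords:
        nearest = min(center_indices,
                      key=lambda c: math.hypot(p[0] - coords[c][0],
                                               p[1] - coords[c][1]))
        clusters[nearest].append(p)
    palette = {c: tuple(sum(axis) / len(pts) for axis in zip(*pts))
               for c, pts in clusters.items()}
    return clusters, palette

# Two tight groups with one center pixel in each:
clusters, palette = cluster_and_average([(0, 0), (0, 2), (10, 10), (10, 12)],
                                        [0, 2])
print(palette)  # {0: (0.0, 1.0), 2: (10.0, 11.0)}
```

Averaging within each cluster is what reduces the per-pixel reconstruction error compared with using the raw center-pixel values directly.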
In step S606, the pixel values corresponding to the clusters are taken as the pixel values of the palette. That is, the pixel values corresponding to clusters 210, 220, 230 and 240 become the palette's pixel values. Selecting pixels with large density distances improves the clustering, reduces the average error of the clustered pixels, and increases convenience of use.
In summary, the palette-based image processing method disclosed by the invention obtains an image block comprising a plurality of pixels; calculates the pixel density corresponding to each pixel from the pixel information of the image block; calculates the density distance corresponding to each pixel from the pixel densities; selects a plurality of center pixels according to the density distances; and takes the pixel values of the center pixels as a plurality of pixel values of the palette. In this way, suitable pixels in the image block can be selected quickly as palette pixels, improving convenience of use.
In addition, the present embodiment may also group the remaining pixels according to the center pixel to generate a plurality of clusters, and calculate the pixel values of the pixels in the clusters to generate a plurality of pixel values corresponding to the clusters, and take the pixel values corresponding to the clusters as the pixel values of the palette. In this way, the average error of the pixels classified into the clusters can be reduced, and the convenience in use can be increased.
Although the present invention has been described with reference to the above embodiments, it should be understood that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
[Description of reference numerals]
210, 220, 230, 240: clusters
X: X axis
Y: Y axis
S102 to S110, S302 to S310, S402 to S408, S502 to S504, and S602 to S606: steps

Claims (9)

1. A method of palette-based image processing, comprising:
obtaining an image block, wherein the image block comprises a plurality of pixels;
calculating the pixel density corresponding to each pixel according to the pixel information of the image block;
calculating the density distance corresponding to each pixel according to the pixel density corresponding to the pixel;
selecting a plurality of central pixels from the pixels according to the density distance corresponding to the pixels; and
and taking the pixel value of the central pixel as a plurality of pixel values of the palette.
2. The method of claim 1, wherein calculating the pixel density for each of the pixels according to the pixel information of the image block comprises:
obtaining the coordinates of the pixels in the image block according to the pixel information of the image block;
calculating a plurality of distance differences between the coordinates of each pixel and the coordinates of other pixels according to the coordinates of the pixels; and
and calculating the pixel density corresponding to each pixel according to the distance difference between the coordinate of each pixel and the coordinates of other pixels.
3. The method of claim 2, wherein calculating the pixel density for each pixel according to the distance difference between the coordinates of each pixel and the coordinates of other pixels comprises:
obtaining a density contribution value corresponding to each distance difference in a lookup table according to the distance difference between the coordinates of each pixel and the coordinates of other pixels; and
and summing the density contribution values corresponding to each pixel to calculate the pixel density corresponding to each pixel.
4. The method of claim 2, wherein the pixel density corresponding to each of the pixels is related to the magnitudes of the distance differences.
5. The method of claim 1, wherein calculating the density distance for each of the pixels according to the pixel density for the pixel comprises:
determining, according to the pixel densities corresponding to the pixels, whether there are other pixels around each pixel whose pixel density is greater than that of the pixel;
when it is determined that such other pixels exist, calculating a plurality of distance differences between the pixel and the other pixels according to the coordinates of the pixel and of the other pixels;
selecting the smallest distance difference from the distance differences as the density distance corresponding to the pixel; and
when it is determined that no such other pixel exists, defining the density distance corresponding to the pixel as a maximum distance.
6. The palette-based image processing method of claim 5, wherein the distance difference is calculated by a Euclidean distance or a Manhattan distance.
7. The method of claim 1, wherein selecting the center pixel from the pixels according to the density distance corresponding to the pixel comprises:
sorting the density distances according to the density distances corresponding to the pixels; and
and selecting the central pixel from the pixels according to a preset selection number and the sorting magnitude relation of the density distances.
8. The palette-based image processing method of claim 1, further comprising:
clustering the remaining pixels according to the center pixel to generate a plurality of clusters;
calculating pixel values of the pixels in the cluster to produce a plurality of pixel values corresponding to the cluster; and
the pixel values corresponding to the clusters are taken as the pixel values of the palette.
9. The method of palette-based image processing according to claim 8, wherein the pixel value corresponding to the cluster is an average of pixel values of the pixels in the cluster.
CN202010241999.4A 2020-03-31 2020-03-31 Palette-based image processing method Active CN111462257B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010241999.4A CN111462257B (en) 2020-03-31 2020-03-31 Palette-based image processing method


Publications (2)

Publication Number Publication Date
CN111462257A true CN111462257A (en) 2020-07-28
CN111462257B CN111462257B (en) 2023-06-23

Family

ID=71685069

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010241999.4A Active CN111462257B (en) 2020-03-31 2020-03-31 Palette-based image processing method

Country Status (1)

Country Link
CN (1) CN111462257B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040017939A1 (en) * 2002-07-23 2004-01-29 Microsoft Corporation Segmentation of digital video and images into continuous tone and palettized regions
CN104899899A (en) * 2015-06-12 2015-09-09 天津大学 Color quantification method based on density peak value
CN106303526A (en) * 2015-06-09 2017-01-04 富士通株式会社 Method for encoding images, device and image processing equipment
CN107257989A (en) * 2015-03-24 2017-10-17 英特尔公司 The palette compression of cluster


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
吴伟 (Wu Wei): "Image compression using the K-means algorithm" (K均值算法的图象压缩) *

Also Published As

Publication number Publication date
CN111462257B (en) 2023-06-23

Similar Documents

Publication Publication Date Title
Tiwari et al. AMGA: an archive-based micro genetic algorithm for multi-objective optimization
US8949092B2 (en) Method and apparatus for encoding a mesh model, encoded mesh model, and method and apparatus for decoding a mesh model
JP5755822B1 (en) Similarity calculation system, similarity calculation method, and program
CN104169946B (en) Extensible queries for visual search
US9058540B2 (en) Data clustering method and device, data processing apparatus and image processing apparatus
WO2022077863A1 (en) Visual positioning method, and method for training related model, related apparatus, and device
KR20080021592A (en) Image comparison by metric embeddings
CN109785221A (en) A kind of digital picture steganography method and secret information extraction method
CN111866518A (en) Self-adaptive three-dimensional point cloud compression method based on feature extraction
CN111259312B (en) Multi-target flow shop scheduling method and device, computer equipment and storage medium
CN114936518A (en) Method for solving design parameters of tension/compression spring
CN114186518A (en) Integrated circuit yield estimation method and memory
CN112668635B (en) Image archiving method, device, equipment and computer storage medium
CN111462257B (en) Palette-based image processing method
CN113704787A (en) Privacy protection clustering method based on differential privacy
CN117112852A (en) Large language model driven vector database retrieval method and device
CN112766348A (en) Method and device for generating sample data based on antagonistic neural network
CN111192302A (en) Feature matching method based on motion smoothness and RANSAC algorithm
CN110633386A (en) Model similarity calculation method based on genetic and acoustic mixed search
Schaefer et al. A hybrid color quantization algorithm incorporating a human visual perception model
JP7118295B1 (en) Image processing device, program and image processing method
CN114357928A (en) Photoetching model optimization method
CN112633369B (en) Image matching method and device, electronic equipment and computer-readable storage medium
CN101268623B (en) Method and device for creating shape variable blocks
CN113537308A (en) Two-stage k-means clustering processing system and method based on localized differential privacy

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
TA01: Transfer of patent application right
    Effective date of registration: 20211130
    Address after: Room 201, No. 2557, Jinke Road, pilot Free Trade Zone, Pudong New Area, Shanghai 201203
    Applicant after: Gryfield Intelligent Technology Co.,Ltd.
    Address before: Room 301, 2537 Jinke Road, Zhangjiang hi tech park, Shanghai 201203
    Applicant before: VIA ALLIANCE SEMICONDUCTOR Co.,Ltd.
GR01: Patent grant