CN115546328B - Picture mapping method, compression method, decoding method and electronic equipment - Google Patents

Picture mapping method, compression method, decoding method and electronic equipment

Info

Publication number
CN115546328B
CN115546328B (application CN202211497505.4A)
Authority
CN
China
Prior art keywords
picture
mapping
data
pixel
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211497505.4A
Other languages
Chinese (zh)
Other versions
CN115546328A (en
Inventor
张延�
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba China Co Ltd filed Critical Alibaba China Co Ltd
Priority to CN202211497505.4A priority Critical patent/CN115546328B/en
Publication of CN115546328A publication Critical patent/CN115546328A/en
Application granted granted Critical
Publication of CN115546328B publication Critical patent/CN115546328B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00 Image coding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 Proximity, similarity or dissimilarity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Graphics (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The embodiments of the present application provide a picture mapping method, a compression method, a decoding method, and an electronic device. The picture mapping method comprises the following steps: acquiring a target picture; partitioning the target picture to obtain a plurality of picture sub-blocks; for each picture sub-block, comparing the similarity between different pixels to determine a plurality of groups of similar pixels and a plurality of non-similar pixels; for any group of similar pixels in each picture sub-block, mapping the color channel values of the similar pixels based on a set mapping relation to obtain corresponding mapping data, so that the mapping data is multiplexed among the similar pixels of the same group; and mapping the color channel value of each non-similar pixel in each picture sub-block based on the set mapping relation to obtain corresponding mapping data, so that the target picture is processed losslessly.

Description

Picture mapping method, compression method, decoding method and electronic device
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a picture mapping method, a compression method, a decoding method, electronic equipment and a computer storage medium.
Background
In virtual application scenarios such as Extended Reality (XR), digital twins, and 3D games, before technical processing links such as graphics rendering (Rendering), three-dimensional modeling (3D Modeling), picture transfer (Picture Transfer), and video streaming, the original data of a picture generally needs to be compressed to obtain compressed data of the picture, and the compressed data is later decompressed to recover the original data of the picture.
However, some existing picture compression schemes go through processing stages such as color conversion, DC level offset, sub-sampling, discrete cosine transform, and quantization coding. Because there are many processing stages, data distortion is easily introduced, so these schemes are in fact lossy.
Disclosure of Invention
In view of the above, embodiments of the present application provide a picture processing scheme to at least partially solve the above problem.
According to a first aspect of an embodiment of the present application, there is provided a picture mapping method, including:
acquiring a target picture;
carrying out blocking processing on the target picture to obtain a plurality of picture sub-blocks;
for each picture sub-block, comparing the similarity between different pixels to determine a plurality of groups of similar pixels and a plurality of non-similar pixels;
for any group of similar pixels in each picture sub-block, mapping the color channel values of the similar pixels based on a set mapping relation to obtain corresponding mapping data, so that the mapping data is multiplexed among the similar pixels of the same group;
and mapping the color channel value of each non-similar pixel in each picture sub-block based on the set mapping relation to obtain corresponding mapping data.
According to a second aspect of the embodiments of the present application, there is provided a picture compression method, including:
acquiring mapping data generated for each pixel in each picture sub-block of a target picture, the mapping data being generated according to any one of the methods of the first aspect;
for each picture sub-block, encoding the corresponding picture sub-block according to the mapping data corresponding to each pixel to obtain encoded data, and allocating an encoded data storage address to store the encoded data;
and generating compressed data of the target picture according to the coded data storage address and the coded data bit width of each picture subblock and the mapping data bit width of a single pixel in each picture subblock.
According to a third aspect of the embodiments of the present application, there is provided a picture decoding method, including:
analyzing compressed data of a target picture to obtain an encoded data storage address, an encoded data bit width and a mapping data bit width of a single pixel in each picture subblock in the target picture, wherein the compressed data is generated according to the method of any one of the second aspect;
for each picture subblock, acquiring encoded data from an encoded data storage address according to the encoded data bit width;
for each picture subblock, acquiring mapping data of each pixel from the encoding data according to the bit width of the mapping data of a single pixel;
and aiming at each picture subblock, obtaining a color channel value of each pixel according to the mapping data of each pixel so as to generate decoding data of the target picture.
According to a fourth aspect of embodiments of the present application, there is provided an electronic apparatus, including: the system comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface are communicated with each other through the communication bus; the memory is used for storing at least one executable instruction, and the executable instruction causes the processor to execute the corresponding operation of the method according to the first aspect, the second aspect or the third aspect.
According to a fifth aspect of embodiments herein, there is provided a computer storage medium having stored thereon a computer program which, when executed by a processor, implements a method as described in the first or second or third aspect.
According to a sixth aspect of embodiments herein, there is provided a computer program product comprising computer instructions for instructing a computing device to perform operations corresponding to the method according to the first aspect, the second aspect or the third aspect.
According to the picture processing scheme provided by the embodiments of the present application, the mapping data of each picture sub-block is obtained by processing on a per-sub-block basis: the color channel values of any group of similar pixels in a picture sub-block are mapped based on the set mapping relation to obtain corresponding mapping data, and that mapping data is multiplexed among the similar pixels of the same group; for the non-similar pixels, the color channel value of each non-similar pixel is mapped based on the set mapping relation to obtain corresponding mapping data. The target picture is thus processed losslessly while data distortion is avoided.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some of the embodiments described in the present application, and other drawings can be obtained by those skilled in the art from these drawings.
Fig. 1 illustrates an exemplary system to which the picture processing method according to the embodiment of the present application is applied.
Fig. 2A shows a flowchart of a picture mapping method.
Fig. 2B shows a blocking result in which pixel boundary factors are considered during block processing.
Fig. 2C shows a blocking result in which pixel boundary factors are not considered during block processing.
FIG. 2D provides an illustration of a Mipmap texture map.
Fig. 3A shows a flow chart of a picture compression method.
Fig. 3B shows a schematic flow chart of an encoding process.
FIG. 3C shows a schematic flow chart for generating compressed data.
FIG. 3D shows a schematic flow chart for generating compressed data.
Fig. 4 shows a flowchart of a picture decoding method.
Fig. 5 is a schematic structural diagram of an electronic device in an embodiment of the present application.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the embodiments of the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application shall fall within the scope of the protection of the embodiments in the present application.
The following further describes specific implementations of embodiments of the present application with reference to the drawings of the embodiments of the present application.
Fig. 1 illustrates an exemplary system to which the picture processing method according to the embodiments of the present application is applied. As shown in fig. 1, the system 100 may include a cloud server 102, a communication network 104, and/or one or more user devices 106, illustrated in fig. 1 as a plurality of user devices.
The cloud server 102 may be any suitable device for storing information, data, applications, and/or any other suitable type of content, including but not limited to distributed storage system devices, server clusters, computing cloud server clusters, and the like. In some embodiments, the picture processing method provided in the following embodiments of the present application may be integrated in an application program and stored on the cloud server 102.
In some embodiments, the communication network 104 may be any suitable combination of one or more wired and/or wireless networks. For example, the communication network 104 may include, but is not limited to, the Internet, an intranet, a Wide Area Network (WAN), a Local Area Network (LAN), a wireless network, a Digital Subscriber Line (DSL) network, a frame relay network, an Asynchronous Transfer Mode (ATM) network, a Virtual Private Network (VPN), and/or any other suitable communication network. The user device 106 can be connected to the communication network 104 via one or more communication links (e.g., communication link 112), and the communication network 104 can be linked to the cloud server 102 via one or more communication links (e.g., communication link 114). A communication link may be any link suitable for communicating data between the user device 106 and the cloud server 102, such as a network link, a dial-up link, a wireless link, a hard-wired link, any other suitable communication link, or any suitable combination of such links.
The user device 106 downloads the application program locally through the communication network, so that the picture processing method provided in the following embodiments of the present application is executed locally on the user device.
In some embodiments, user devices 106 may comprise any suitable type of device. For example, in some embodiments, the user device 106 may include a mobile device, a tablet computer, a laptop computer, a desktop computer, a wearable computer, a game console, a media player, a vehicle entertainment system, and/or any other suitable type of user device.
It should be noted that the embodiment of fig. 1 is described with the picture processing method executed locally on the user device as an example, but execution is not limited to the user device. In some application scenarios, the method may also be executed on the cloud server, with the execution result then pushed to the user device.
For this reason, in the following embodiments, the picture processing schemes provided by the present application are exemplified one by one.
Fig. 2A is a flowchart illustrating a picture mapping method. As shown in fig. 2A, it includes the following steps S201-S204B:
S201, acquiring a target picture;
for example, the source, format, or size of the target picture is not limited, as long as the requirements of the application scenario can be met.
Illustratively, in a graphics rendering (Rendering) process, a three-dimensional light energy transfer process needs to be converted into a two-dimensional image. A scene and its entities are represented in three-dimensional form, and this representation generally comprises the geometric information of an object and the corresponding material information, so that it is closer to the real world and convenient for conversion processing. The material information usually includes one or more sets of texture attributes and their corresponding texture pictures, so in a specific application scenario the target picture may be a texture picture (also called a texture map).
Illustratively, the texture picture may be, for example, a multi-level gradually-distant texture picture, also referred to as a Mipmap picture.
Here, the above description of the target picture is only an example and not a limitation; in other application scenarios the target picture may be in other formats, such as JPG or PNG.
S202, carrying out blocking processing on the target picture to obtain a plurality of picture sub-blocks;
For example, step S202 of partitioning the target picture to obtain a plurality of picture sub-blocks includes: uniformly partitioning the target picture to obtain a plurality of picture sub-blocks. Each picture sub-block is composed of pixels of the target picture, and therefore a picture sub-block may also be referred to as a pixel grid.
For example, the target picture may be uniformly partitioned by a tile-based image processing algorithm, for example into picture sub-blocks whose length and width are powers of 2, according to a set sub-block size. The power of 2 is merely an example and is not a limitation.
When uniform blocking is performed, the pixel boundary of a resulting picture sub-block may exceed the pixel boundary of the target picture; for such sub-blocks, only the pixels located within the pixel boundary of the target picture may be retained. Of course, in some other application scenarios, whether the pixel boundary of the target picture is exceeded may be disregarded.
Illustratively, fig. 2B shows a blocking result in which pixel boundary factors are considered. As shown in fig. 2B, the pixel boundary of the target picture is denoted C1. Uniform blocking of the target picture yields 6 picture sub-blocks, denoted S1-S6. S1 and S2 have the same size, while S3-S6 would exceed the pixel boundary of the target picture, so only the pixels inside the target picture are retained for them. For this reason, the pixel boundaries of S3-S6 are shown by dashed lines in fig. 2B, and the overall pixel boundary of the picture sub-blocks is aligned with the pixel boundary of the target picture.
Fig. 2C shows a blocking result in which pixel boundary factors are not considered. As shown in fig. 2C, the parts of S3-S6 that exceed the pixel boundary of the target picture are not discarded; for this reason, the pixel boundaries of S3-S6 are shown by dashed lines in fig. 2C, and the overall pixel boundary of the picture sub-blocks exceeds the pixel boundary of the target picture. For S3-S6, which exceed the pixel boundary of the target picture, pixel padding may be performed so that all picture sub-blocks have the same size.
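As an illustrative, non-limiting sketch of the uniform blocking described above (the tile size, the RGBA array layout, and the function name are assumptions introduced here only for illustration), the following Python fragment partitions a picture into sub-blocks and clips boundary sub-blocks to the picture boundary in the manner of fig. 2B:
```python
# Illustrative sketch only; the tile size, array layout (H x W x 4 RGBA) and the
# choice to clip boundary tiles (fig. 2B style) are assumptions, not the
# patent's prescribed implementation.
import numpy as np

def split_into_tiles(picture: np.ndarray, tile_size: int = 4):
    """Split an H x W x C picture into sub-blocks of at most tile_size x tile_size."""
    height, width = picture.shape[:2]
    tiles = []
    for top in range(0, height, tile_size):
        for left in range(0, width, tile_size):
            bottom = min(top + tile_size, height)  # clip to the picture boundary
            right = min(left + tile_size, width)
            tiles.append(((top, left), picture[top:bottom, left:right]))
    return tiles
```
Padding the clipped sub-blocks back to tile_size x tile_size instead of clipping them would correspond to the fig. 2C variant.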
Here, it should be noted that the above-mentioned specifically used blocking algorithm and the number of picture sub-blocks obtained by performing uniform blocking processing on the target picture are merely examples and are not limited in any way. In other embodiments, other blocking algorithms may be used, and the target picture may be subjected to non-uniform blocking.
By the block processing, mapping processing can be performed on each picture subblock in the subsequent steps, so that local processing of a target picture is realized, and the speed and efficiency of data processing are improved.
S203, comparing the similarity among different pixels aiming at each picture sub-block to determine a plurality of groups of similar pixels and a plurality of non-similar pixels;
wherein each group of similar pixels comprises at least two similar pixels, but the number of the included similar pixels can be different between different groups of similar pixels.
For example, in step S203, comparing similarities between different pixels for each picture sub-block, and determining similar pixels and non-similar pixels therein includes:
and for each picture sub-block, comparing the similarity among different pixels according to the color channel value of each pixel, and determining similar pixels and non-similar pixels.
Exemplarily, if the color space of the target picture is RGBA (Red, Green, Blue, Alpha), each pixel has 4 color channels, namely the R channel, G channel, B channel, and A channel; the similarity determination may therefore be performed based on the color channel values of these four channels.
For this purpose, comparing the similarity between different pixels according to each color channel value to determine the similar pixels and non-similar pixels includes: judging whether the difference of the color channel values between different pixels falls within a set difference threshold range; if so, the pixels are determined to be similar pixels; otherwise, they are determined to be non-similar pixels.
For example, the color channel values of the R, G, B, and A channels of two pixels are compared channel by channel to obtain an R-channel difference, a G-channel difference, a B-channel difference, and an A-channel difference. If these differences fall within the set R-channel, G-channel, B-channel, and A-channel difference threshold ranges respectively, the two pixels are determined to be similar pixels satisfying the pixel similarity condition; otherwise, the two pixels are determined to be non-similar pixels.
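A minimal sketch of this per-channel threshold comparison, assuming RGBA tuples and illustrative threshold values, might look as follows:
```python
# Sketch only: the per-channel difference thresholds are assumed values;
# the patent only requires each channel difference to fall within a set range.
def pixels_similar(p1, p2, thresholds=(4, 4, 4, 4)):
    """p1, p2: (R, G, B, A) channel values; thresholds: per-channel difference limits."""
    return all(abs(a - b) <= t for a, b, t in zip(p1, p2, thresholds))
```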
Here, the above is an exemplary description of similar-pixel determination with the color space of the target picture being RGBA; the color space is not limited to RGBA. In some other application scenarios, the color space of the target picture may also be YUV or RGB.
In addition, the above describes how to determine similar pixels by directly comparing color channel values, but the determination is not limited to such comparison. In some other embodiments, a picture sub-block may be divided into a plurality of pixel regions, and similar pixels may be determined quickly based on the pixel regions. For example, the number of distinct color channel values of the same color channel in a pixel region is counted; this number reflects the richness of the color channel values in that region. If the richness of all color channels in two pixel regions is smaller than a set richness threshold, the pixels in the two regions are determined to be similar pixels; otherwise, they are determined to be non-similar pixels. For example, if in one pixel region the R channel takes only the three values 4, 68, and 237, the color channel value richness of the R channel is 3; the richness of the G, B, and A channels of that region is obtained in the same way and matched against the richness of the R, G, B, and A channels of another pixel region. If the richness of every channel of both pixel regions is smaller than the set richness threshold, the pixels of the two regions are determined to be similar pixels. Here, a uniform richness threshold may be set for the R, G, B, and A channels, or different richness thresholds may be set for each channel, as long as the precision required by the application scenario is met.
In some other examples, if the dimension of the color channel richness is high, the richness may be mapped to low-dimensional data for comparison, and the low-dimensional data is compared against a corresponding low-dimensional threshold, which serves as the set richness threshold in the low-dimensional space.
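The region-based richness comparison described above can be sketched as follows (the region representation and the single uniform richness threshold are assumptions for illustration):
```python
# Sketch only: richness of a channel = number of distinct values it takes in a
# pixel region; both regions must stay below an assumed richness threshold.
import numpy as np

def channel_richness(region: np.ndarray):
    """region: h x w x 4 RGBA block; returns the distinct-value count per channel."""
    return [len(np.unique(region[..., c])) for c in range(region.shape[-1])]

def regions_similar(region_a: np.ndarray, region_b: np.ndarray, richness_threshold: int = 4):
    return all(r < richness_threshold
               for r in channel_richness(region_a) + channel_richness(region_b))
```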
For example, the number of the included similar pixels may be different between different groups of similar pixels, such as 2 for one group of similar pixels and 3 for another group of similar pixels. Here, the specific number is merely an example and is not a limitation.
S204A, for any group of similar pixels in each picture sub-block, mapping the color channel values of the similar pixels based on a set mapping relation to obtain corresponding mapping data, so that the mapping data is multiplexed among the similar pixels of the same group;
and S204B, mapping the color channel value of each non-similar pixel in each picture sub-block based on the set mapping relation to obtain corresponding mapping data.
For example, in step S204A, the mapping may be performed for every similar pixel in a group, with only one copy kept when the mapping data is stored. Alternatively, one similar pixel is selected from the group and only its color channel values are mapped; the other similar pixels in the group need not be mapped, which improves the speed and efficiency of data processing. All similar pixels in the group multiplex the same mapping data, thereby saving storage space.
For example, in the foregoing steps S204A and S204B, when mapping the color channel values of one of the similar pixels or of each non-similar pixel based on the set mapping relation, the color channel value corresponding to each color channel of the similar or non-similar pixel is mapped to obtain corresponding mapping data.
For example, the color channel values of the R channel, G channel, B channel, and A channel are mapped according to the set mapping relation to obtain the mapping data corresponding to the R channel, G channel, B channel, and A channel, respectively.
Further, for example, the mapping the color channel value corresponding to each color channel of the similar pixel or the non-similar pixel to obtain the corresponding mapping data may include: and performing dimension reduction mapping processing on the color channel value corresponding to each color channel of the similar pixels or the non-similar pixels based on the set data dimension reduction mapping relation to obtain corresponding mapping data.
Alternatively, in some other examples, the color channel value corresponding to each color channel of the similar or non-similar pixel may be subjected to dimension-raising mapping based on a set dimension-raising mapping relation to obtain corresponding mapping data, so as to improve resolution and reduce noise, for example by performing a convolution operation between the color channel values and a convolution matrix whose data dimension is greater than that of the color channel values.
The specific data dimension-reduction mapping relation may be selected according to the application scenario and is not limited in the present application. Mapping the color channel value corresponding to each color channel based on the data dimension-reduction mapping relation reduces the dimensionality of the color channel values and saves storage space. For example, a color channel value in the range 0-255 generally occupies 8 bits; after dimension reduction, e.g., by a product operation with a data dimension-reduction mapping matrix, the mapping data may occupy only 2 bits. In addition, the dimension-reduction mapping encrypts the color channel values, thereby helping to ensure data security.
Exemplarily, performing dimension-reduction mapping on the color channel value corresponding to each color channel of the similar or non-similar pixel based on the set data dimension-reduction mapping relation to obtain corresponding mapping data includes: performing a hash operation on the color channel value corresponding to each color channel of the similar or non-similar pixel based on a set hash operation relation to obtain corresponding mapping data.
The specific hash operation relation is not limited and may be determined according to the application environment.
Exemplarily, performing the hash operation on the color channel value corresponding to each color channel of the similar or non-similar pixel based on the set hash operation relation to obtain corresponding mapping data includes: performing the hash operation based on a hash operation relation used for data dimension reduction, wherein the data dimension of the mapping data is smaller than that of the color channel value.
Performing the hash operation on each color channel value of the similar or non-similar pixel thus achieves data dimension reduction while helping to ensure data security.
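The patent does not fix a particular hash operation relation; as one hedged illustration, a per-sub-block dictionary ("palette") mapping can serve as such a lossless dimension reduction: each distinct channel value observed in the sub-block is mapped to a small code, and the inverse table (used later during encoding and decoding) recovers the original value.
```python
# Illustrative assumption: a per-sub-block palette-style mapping standing in for
# the "set hash operation relation"; the concrete hash used in practice may differ.
def build_channel_mapping(channel_values):
    """channel_values: iterable of 8-bit values of one color channel in one sub-block.

    Returns (forward, inverse, bits_needed):
      forward maps an original value to a small code,
      inverse maps a code back to the original value (lossless),
      bits_needed is the mapping-data bit width for this channel.
    """
    distinct = sorted(set(channel_values))
    forward = {value: code for code, value in enumerate(distinct)}
    inverse = {code: value for value, code in forward.items()}
    bits_needed = max(1, (len(distinct) - 1).bit_length())
    return forward, inverse, bits_needed
```
With only a few distinct values per sub-block, an 8-bit channel value can be represented by, e.g., a 2-bit code, matching the 8-bit to 2-bit example above.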
Alternatively, in some other examples, the data dimension reduction mapping relationship may also be embodied by Principal Component Analysis (PCA).
Referring to the above embodiments, in this picture mapping scheme the mapping data of each picture sub-block is obtained on a per-sub-block basis, and the mapping data of all picture sub-blocks together form the mapping data of the whole target picture. Subsequent processing of the mapping data can therefore be performed in units of picture sub-blocks, for example by designating the mapping data of a particular picture sub-block for further processing, which provides a local-processing capability. When the mapping data of each picture sub-block is generated, the similar pixels in a sub-block are mapped by, for example, mapping the color channel values of only one pixel to obtain the corresponding mapping data, which is then multiplexed among all the similar pixels; for the non-similar pixels, the color channel values of each pixel are mapped based on the set mapping relation to obtain corresponding mapping data. In this way, lossless and fast mapping of the target picture is achieved, and multiplexing the mapping data among similar pixels improves the mapping rate, saves storage space, and reduces the IO and memory overhead of data processing.
Illustratively, on the basis of any of the above embodiments, before step S201 the method further includes:
acquiring a picture to be processed, wherein the picture to be processed is formed by splicing a plurality of sub-pictures;
and selecting any sub-picture from the picture to be processed as the target picture, so as to generate mapping data corresponding to each pixel of the picture to be processed.
The picture to be processed is formed by splicing a plurality of sub-pictures and is therefore equivalent to one large picture.
For example, the field of graphics rendering often uses multi-level gradually-distant texture pictures (also referred to as Mipmap texture maps), in which a series of texture pictures (texture maps) are generated from an original texture picture by successively halving the resolution, and the texture pictures of different resolutions are spliced together to form one large picture, i.e., the Mipmap texture map. Each texture picture, including the original texture picture, thus corresponds to a sub-picture of the Mipmap texture map.
For this reason, in some examples the scheme of the embodiments of the present application may be applied to Mipmaps: the picture to be processed is a multi-level gradually-distant texture picture, the sub-pictures are divided by resolution (that is, the picture characteristic used to divide the sub-pictures is resolution), and the sub-pictures are accordingly multiple texture maps of different resolutions. Fig. 2D provides an illustration of a Mipmap texture map, in which each level corresponds to a texture picture of a different resolution.
Each sub-picture in the Mipmap texture map is taken in turn as the target picture, and the blocking, similarity determination, mapping, and other processing described above are performed on it to obtain the mapping data of each sub-picture; the mapping data of all sub-pictures together constitute the mapping data of the Mipmap texture map.
Further, after the mapping data corresponding to all picture sub-blocks in the target picture is obtained, similar picture sub-blocks may be merged based on the similarity of their mapping data to further save storage space. To this end, the method further includes: performing similarity determination on the plurality of picture sub-blocks so as to merge the mapping data of similar picture sub-blocks satisfying the picture sub-block similarity condition. This step may be performed after the mapping data of all picture sub-blocks has been obtained.
For example, the mapping data of two picture sub-blocks is compared to obtain a mapping data difference; if the difference falls within the set mapping data difference range, the two picture sub-blocks are determined to be similar picture sub-blocks, and only one copy of the mapping data is stored and shared by the other similar picture sub-blocks.
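A hedged sketch of this sub-block merging step, assuming each sub-block's mapping data is represented as a flat list of codes and that the difference measure is the maximum element-wise difference:
```python
# Sketch only: the representation of a sub-block's mapping data and the
# difference measure are assumptions introduced for illustration.
def merge_similar_subblocks(subblock_mappings, max_difference=0):
    """subblock_mappings: one list of mapping codes per picture sub-block.

    Returns a list in which each entry is either the sub-block's own mapping
    data or the index of an earlier similar sub-block whose data it reuses.
    """
    stored = []
    for codes in subblock_mappings:
        reused = None
        for idx, kept in enumerate(stored):
            if isinstance(kept, list) and len(kept) == len(codes) and \
                    all(abs(a - b) <= max_difference for a, b in zip(kept, codes)):
                reused = idx
                break
        stored.append(codes if reused is None else reused)
    return stored
```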
Of course, in some other embodiments, the similarity determination of picture sub-blocks and the merging of similar picture sub-blocks may be performed after the blocking processing and before the mapping processing. The condition for this similarity determination may be set according to the application scenario.
The following describes the application of the picture mapping scheme provided in the above embodiments to a subsequent picture compression processing link.
Fig. 3A shows a flow chart of a picture compression method. As shown in fig. 3A, the picture compression method includes the following steps S301 to S303:
S301, acquiring mapping data generated for each pixel in each picture sub-block of the target picture.
Illustratively, the mapping data is generated by a picture mapping method according to any one of the embodiments of the present application.
In the above picture mapping scheme, the mapping data of each picture sub-block is obtained on a per-sub-block basis, and the mapping data of all picture sub-blocks together form the mapping data of the whole target picture, so that subsequent processing of the mapping data can be performed in units of picture sub-blocks, for example by designating the mapping data of a particular picture sub-block for further processing, which provides a local-processing capability. When the mapping data of each picture sub-block is generated, the similar pixels in a sub-block are mapped according to the color channel values of only one pixel to obtain corresponding mapping data that all similar pixels can multiplex; for the non-similar pixels, the color channel values of each pixel are mapped based on the set mapping relation to obtain corresponding mapping data. Lossless and fast mapping of the target picture is thus achieved, and multiplexing mapping data among similar pixels improves the compression rate, saves storage space, and reduces the IO and memory overhead of data processing.
S302, aiming at each picture subblock, coding the corresponding picture subblock according to the mapping data corresponding to each pixel to obtain coded data, and allocating a coded data storage address to store the coded data.
For example, in step S302, for each picture sub-block, encoding the corresponding picture sub-block according to the mapping data corresponding to each pixel in the picture sub-block to obtain encoded data, and allocating an encoded data storage address to store the encoded data, includes:
establishing, for each picture sub-block, an inverse mapping relation between each pixel and its corresponding mapping data, so as to encode the corresponding picture sub-block to obtain encoded data and allocate an encoded data storage address for storage.
As described above, the mapping data is generated by mapping (e.g., hashing) the color channel values of the pixels based on the set mapping relation. To save storage space, the mapping also reduces the dimensionality of the color channel values, so that the mapping data has a smaller dimension than the color channel values; the mapping data of all pixels of each picture sub-block, together with the correspondence between the mapping data, the pixels, and the picture sub-block, is stored as the encoded data of that picture sub-block. In order to recover the color channel values corresponding to the mapping data from the encoded data, a mapping relation from the mapping data of a pixel back to its color channel values, i.e., an inverse mapping relation between each pixel and its corresponding mapping data, is established, so that the color channel values can be obtained from the mapping data.
Illustratively, fig. 3B shows a schematic flow of the encoding process. As shown in fig. 3B, step S302 of establishing, for each picture sub-block, an inverse mapping relation between each pixel and its corresponding mapping data, so as to encode the corresponding picture sub-block to obtain encoded data and allocate an encoded data storage address for storage, includes the following steps S312-S322:
S312, for each picture sub-block, establishing a mapping data table based on the mapping data corresponding to each pixel;
Illustratively, the mapping data of all pixels of a picture sub-block is recorded in a mapping data table. It should be noted that the mapping data table is only an example and not a limitation; in other embodiments the mapping data may be organized as an array or in another form.
Illustratively, for each picture sub-block, the mapping data corresponding to each pixel is stored in the mapping data table in a manner of traversing the pixels row by row or in an array manner.
S322, based on the mapping data table and the mapping relation, establishing an inverse mapping relation between each pixel and its corresponding mapping data, so as to encode the corresponding picture sub-block to obtain encoded data and allocate an encoded data storage address for storage.
For example, as described above, in order to recover the color channel values from the mapping data of each picture sub-block during subsequent decompression, the mapping relation from color channel values to mapping data is inverted to obtain an inverse mapping relation from mapping data to color channel values; meanwhile, a storage address is allocated to the encoded data so that it can be stored. The mapping data table is stored at the storage location pointed to by the corresponding encoded data storage address.
For example, when allocating the encoded data storage addresses, the storage address of each picture sub-block may be determined according to the bit width of its encoded data.
For example, because the target picture contains a plurality of picture sub-blocks, the encoded data of the picture sub-blocks is stored sequentially in a set picture sub-block order. A starting storage address is established for the encoded data of the first picture sub-block; the bit width of that encoded data is then considered and an offset based on the starting address is determined, so the encoded data storage address of the first picture sub-block can be obtained indirectly. For the encoded data of the second picture sub-block, on the basis of the storage address of the first picture sub-block, the bit width of the second sub-block's encoded data is considered and the offset relative to the first sub-block's storage address is determined, so the encoded data storage address of the second picture sub-block can likewise be obtained indirectly.
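A minimal sketch of deriving per-sub-block storage addresses from a base address plus cumulative offsets, assuming each sub-block's encoded data is byte-aligned and that the offset is advanced by each sub-block's own encoded-data bit width:
```python
# Sketch only: byte alignment and the use of each sub-block's own bit width to
# advance the offset are assumptions for illustration.
def assign_storage_addresses(base_address, encoded_bit_widths):
    """encoded_bit_widths: encoded-data bit width of each picture sub-block, in order.

    Returns the storage address (byte offset) of each sub-block's encoded data.
    """
    addresses = []
    offset = base_address
    for bit_width in encoded_bit_widths:
        addresses.append(offset)
        offset += (bit_width + 7) // 8  # advance by the sub-block's byte length
    return addresses
```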
And S303, generating compressed data of the target picture according to the coded data storage address and the coded data bit width of each picture subblock and the mapping data bit width of a single pixel in each picture subblock.
FIG. 3C shows a schematic flow chart for generating compressed data. As shown in fig. 3C, for example, the generating compressed data of the target picture according to the encoded data storage address and the encoded data bit width of each picture sub-block and the mapping data bit width of a single pixel in each picture sub-block includes:
S313, for each picture sub-block, storing the encoded data storage address, the encoded data bit width, and the mapping data bit width corresponding to each pixel in a header constructed for the picture sub-block;
Illustratively, the header, which may also be referred to as a tile head, stores in sequence the encoded data storage address, the encoded data bit width, and the mapping data bit width corresponding to each pixel of its picture sub-block. There are therefore as many headers as there are picture sub-blocks.
The encoded data storage address is used to acquire the encoded data in subsequent processing, and the encoded data bit width is used to determine the encoded data storage address of the next picture sub-block on the basis of that of the previous picture sub-block. The mapping data bit width of each pixel makes it convenient to traverse the mapping data of all pixels of the same picture sub-block in subsequent applications.
As described above, since the encoded data storage address can be determined by an address offset, only the address offset from which the storage address can be derived needs to be stored in the header, which further saves storage space.
For example, in some other embodiments, the encoded data storage address, the encoded data bit width, and the mapping data bit width of each pixel may also be stored as a key-value pair, for example with the encoded data storage address as the key and the encoded data bit width together with the per-pixel mapping data bit width as the value.
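As a hedged illustration of such a tile header, the three fields can be packed into fixed-width binary fields; the field sizes and byte order below are assumptions, not values prescribed by the patent:
```python
# Assumed layout: 32-bit address offset, 16-bit encoded-data bit width,
# 8-bit per-pixel mapping-data bit width, little-endian.
import struct

TILE_HEADER_FORMAT = '<IHB'
TILE_HEADER_SIZE = struct.calcsize(TILE_HEADER_FORMAT)  # 7 bytes

def pack_tile_header(address_offset, encoded_bit_width, mapping_bit_width):
    return struct.pack(TILE_HEADER_FORMAT, address_offset, encoded_bit_width, mapping_bit_width)

def unpack_tile_header(blob):
    return struct.unpack(TILE_HEADER_FORMAT, blob)
```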
And S323, generating compressed data of the target picture according to the corresponding table headers of all the picture sub-blocks.
For example, in step S323, generating the compressed data of the target picture according to the headers corresponding to all picture sub-blocks includes:
splicing the headers of all picture sub-blocks in the specified picture sub-block order to obtain the compressed data of the target picture.
Illustratively, the specified picture sub-block order is, for example, in a row direction of the picture sub-blocks.
Since each header includes an encoded data storage address, different headers (i.e., header boundaries) can be distinguished by the encoded data storage addresses, which in turn distinguishes the picture sub-blocks.
In the above picture compression scheme, because the compressed data of the target picture is organized by the header of each picture sub-block, local compression of the target picture can be realized, which improves the flexibility of picture compression and enables fast, on-demand compression.
Similarly, if the target picture is a sub-picture of a picture to be processed that is formed by splicing a plurality of sub-pictures, the method further comprises:
traversing the sub-pictures, taking each one in turn as the target picture, and storing the bit width of the compressed data of the target picture and the storage address of the compressed data in a header constructed for the target picture;
and generating compressed data of the picture to be processed according to the corresponding headers of all the sub-pictures.
As described above, the picture to be processed is, for example, a Mipmap texture map, composed of an original texture picture and a series of texture pictures generated by successively halving the resolution, so each texture picture can be regarded as a sub-picture of the Mipmap texture map. The texture pictures of the Mipmap texture map can therefore be taken in turn as the target picture, so as to obtain the header corresponding to each texture picture and further the corresponding compressed data. In order to distinguish the compressed data of texture pictures of different resolutions, the compressed data may be stored in a header (also referred to as a stage header) created for each texture picture; for the Mipmap texture map as a whole, the header is referred to as, for example, a Map head.
Illustratively, the headers of all texture pictures are spliced together to generate the compressed data of the Mipmap texture map.
FIG. 3D shows a schematic flow chart for generating compressed data. As shown in fig. 3D, the picture to be processed is a Mipmap texture map: a large picture formed by texture pictures at multiple levels of resolution, each texture picture being a sub-picture. Each texture picture is traversed and used in turn as the target picture (denoted target picture 1, target picture 2, target picture 3, and so on), and the blocking step is performed, yielding for example 4 picture sub-blocks denoted picture sub-block 1 to picture sub-block 4. For each picture sub-block, local similarity determination is performed on the pixels within it to obtain groups of similar pixels and non-similar pixels; mapping data is then generated pixel by pixel for the groups of similar pixels and the non-similar pixels, giving the mapping data of all pixels in the sub-block. Next, encoded data is generated in units of picture sub-blocks, and compressed data is generated in units of target pictures, so that each texture picture obtains its corresponding compressed data. Because each texture picture corresponds to one piece of compressed data, there are multiple pieces of compressed data in total (denoted the compressed data of target picture 1, of target picture 2, of target picture 3, and so on), and these pieces are spliced together to form the compressed data of the Mipmap texture map.
Here, the above embodiments refer to the blocking processing step, the local similarity determination step for pixels in the same target picture, the mapping data generation step for each pixel, and the compressed data generation step for each picture sub-block.
Fig. 4 shows a flowchart of a picture decoding method. As shown in fig. 4, it includes the following steps S401-S404:
S401, parsing compressed data of a target picture to obtain an encoded data storage address, an encoded data bit width, and a mapping data bit width of a single pixel for each picture sub-block in the target picture;
for example, the compressed data of the target picture may be generated based on the compression method provided in the above embodiment of the present application.
Illustratively, if the compressed data of the target picture was generated from the headers of all picture sub-blocks, with the header of each picture sub-block containing the encoded data storage address, the encoded data bit width, and the mapping data bit width corresponding to each pixel of that sub-block, then the compressed data may be parsed to obtain the header corresponding to each picture sub-block, and the encoded data storage address, the encoded data bit width, and the mapping data bit width of a single pixel may in turn be obtained from the header.
For example, if the compressed data of the target picture was generated in a specified picture sub-block order, then when the compressed data is parsed, the corresponding headers may be obtained sequentially from the compressed data in that order, and the encoded data storage address, the encoded data bit width, and the mapping data bit width of a single pixel in each picture sub-block may be obtained from each header.
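Continuing the assumed fixed-width header layout sketched above, parsing the concatenated headers in the specified picture sub-block order might look like this (a sketch, not the patent's prescribed format):
```python
# Sketch only: reuses the assumed '<IHB' header layout from the packing example.
import struct

TILE_HEADER_FORMAT = '<IHB'
TILE_HEADER_SIZE = struct.calcsize(TILE_HEADER_FORMAT)

def parse_tile_headers(compressed: bytes, subblock_count: int):
    headers = []
    for i in range(subblock_count):
        start = i * TILE_HEADER_SIZE
        chunk = compressed[start:start + TILE_HEADER_SIZE]
        headers.append(struct.unpack(TILE_HEADER_FORMAT, chunk))
    # each entry: (address_offset, encoded_bit_width, mapping_bit_width)
    return headers
```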
S402, aiming at each picture subblock, acquiring encoded data from an encoded data storage address according to the bit width of the encoded data;
As described above, if the encoded data storage address is expressed as an address offset, the compressed data is parsed to obtain the address offset of the current picture sub-block's encoded data storage address relative to that of the previous picture sub-block; adding this offset to the storage address of the previous picture sub-block yields the encoded data storage address of the current picture sub-block. As described above, the address offset is determined according to the encoded data bit width; therefore, on the basis of the previous sub-block's storage address, taking the bit width of the current sub-block's encoded data into account gives the offset, and hence the encoded data storage address of the current picture sub-block.
Since the compressed data of the target picture is organized by the encoded data of the picture sub-blocks, partial encoded data can be acquired in step S402, enabling on-demand acquisition.
S403, for each picture sub-block, acquiring the mapping data of each pixel from the encoded data of the picture sub-block according to the mapping data bit width of a single pixel;
For example, the encoded data of each picture sub-block consists of the mapping data of the pixels it contains. Therefore, when step S403 is executed, the mapping data bit width of a single pixel is used as the step size for traversing the data, so that the mapping data corresponding to each pixel can be acquired one by one. For example, the storage location pointed to by the encoded data storage address is accessed to obtain the mapping data table, and based on the mapping data bit width of a single pixel, the mapping data corresponding to each pixel is obtained one by one in the order of the pixels of the picture sub-block, row by row (or column by column).
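A sketch of this traversal, assuming the encoded data is a byte string and that codes are packed most-significant-bit first (both assumptions for illustration):
```python
# Sketch only: bit order and byte packing are assumptions.
def extract_mapping_codes(encoded_bytes: bytes, mapping_bit_width: int, pixel_count: int):
    """Return one mapping code per pixel, stepping by the per-pixel bit width."""
    bits = int.from_bytes(encoded_bytes, 'big')
    total_bits = len(encoded_bytes) * 8
    mask = (1 << mapping_bit_width) - 1
    codes = []
    for i in range(pixel_count):
        shift = total_bits - (i + 1) * mapping_bit_width
        codes.append((bits >> shift) & mask)
    return codes
```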
Since the encoded data of a picture sub-block is organized by the mapping data corresponding to its pixels, partial mapping data can be acquired in step S403, enabling on-demand acquisition.
S404, aiming at each picture sub-block, obtaining the color channel value of each pixel according to the mapping data of each pixel so as to generate the decoding data of the target picture.
For example, in step S404, when the color channel values of each pixel are obtained from its mapping data, the mapping data of each pixel may be inverse-mapped based on the inverse mapping relation described above to obtain the color channel values of that pixel. All pixels of each picture sub-block are traversed and inverse-mapped, yielding the decoded data of the target picture, which contains the color channel values of all pixels of the target picture.
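A minimal sketch of this inverse-mapping step, assuming the palette-style per-channel inverse tables from the earlier mapping sketch (an illustration, not the patent's mandated structure):
```python
# Sketch only: assumes per-channel inverse tables mapping a code back to the value.
def decode_subblock(codes_per_channel, inverse_tables):
    """codes_per_channel: {'R': [...], 'G': [...], 'B': [...], 'A': [...]} mapping codes.
    inverse_tables: per-channel dicts mapping a code back to the 8-bit channel value.

    Returns the recovered color channel values, bit-exact with the source.
    """
    return {
        channel: [inverse_tables[channel][code] for code in codes]
        for channel, codes in codes_per_channel.items()
    }
```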
Since the encoded data is organized in units of picture sub-blocks, local decoding of the target picture may be performed in step S404, for example by designating the encoded data of a particular picture sub-block to be decoded. This enables on-demand acquisition and can improve data loading efficiency in subsequent applications such as rendering.
Exemplarily, when the above picture decoding method is applied to a Mipmap application scenario, it is executed for each level of texture map in the Mipmap to obtain the decoded data of the corresponding texture map; traversing all levels of texture maps yields the decoded data of the Mipmap, meeting the demand of a 3D renderer for on-demand rendering.
Corresponding to the method provided by the above embodiment, the embodiment of the present application further provides a corresponding apparatus, which is detailed as follows.
Illustratively, there is provided a picture mapping apparatus, comprising:
the picture acquisition unit is used for acquiring a target picture;
the picture partitioning unit is used for partitioning the target picture to obtain a plurality of picture sub-blocks;
the pixel comparison unit is used for comparing the similarity among different pixels aiming at each picture subblock and determining a plurality of groups of similar pixels and a plurality of non-similar pixels, wherein each group of similar pixels comprises at least two similar pixels;
the data mapping unit is used for mapping, for any group of similar pixels in each picture sub-block, the color channel values of the similar pixels based on a set mapping relation to obtain corresponding mapping data, so that the mapping data is multiplexed among the similar pixels of the same group; and for mapping the color channel value of each non-similar pixel in each picture sub-block based on the set mapping relation to obtain corresponding mapping data.
In the embodiment of the picture mapping apparatus, for an exemplary explanation of each unit, reference may be made to the above-mentioned embodiment.
Illustratively, a picture compression apparatus is provided, which includes:
the data acquisition unit is used for acquiring mapping data generated for each pixel in each picture sub-block of a target picture, the mapping data being generated according to the picture mapping method of any embodiment of the present application;
the data encoding unit is used for encoding, for each picture sub-block, the corresponding picture sub-block according to the mapping data corresponding to each pixel to obtain encoded data, and allocating an encoded data storage address to store the encoded data; and for generating compressed data of the target picture according to the encoded data storage address and encoded data bit width of each picture sub-block and the mapping data bit width of a single pixel in each picture sub-block.
In the embodiment of the picture compression apparatus, for exemplary explanation of each unit, reference may be made to the above-mentioned embodiment.
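As a non-authoritative illustration of the packing performed by the data acquisition and data encoding units, the sketch below serializes each sub-block's mapping data, records its storage address (here a byte offset) and encoded data bit width in a per-block header, and returns the headers together with the payload; the fixed 16-bit mapping data width and the big-endian packing are assumptions made only for the example.

import struct

def compress_picture(sub_blocks, bits_per_datum=16):
    # sub_blocks: list of picture sub-blocks, each a list of per-pixel mapping data.
    payload, headers = bytearray(), []
    for block in sub_blocks:
        address = len(payload)                      # encoded data storage address (offset)
        for datum in block:
            payload += struct.pack('>H', datum)     # pack one 16-bit mapping datum
        encoded_bits = len(block) * bits_per_datum  # encoded data bit width of this block
        headers.append((address, encoded_bits, bits_per_datum))
    return headers, bytes(payload)

headers, payload = compress_picture([[51345, 51345, 2115], [7777]])
print(headers)         # [(0, 48, 16), (6, 16, 16)]
print(payload.hex())   # the packed payload of the target picture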
Illustratively, a picture decoding apparatus is provided, which comprises:
the data analysis unit is used for analyzing compressed data of a target picture to obtain an encoded data storage address, an encoded data bit width and a mapping data bit width of a single pixel in each picture sub-block of the target picture, wherein the compressed data is generated according to the picture compression method in any embodiment of the present application;
the data decoding unit is used for acquiring, for each picture sub-block, the encoded data from the encoded data storage address according to the encoded data bit width of the picture sub-block; for acquiring, for each picture sub-block, the mapping data of each pixel from the encoded data of the picture sub-block according to the mapping data bit width of a single pixel; and for obtaining, for each picture sub-block, the color channel value of each pixel according to the mapping data of each pixel, so as to generate the decoded data of the target picture.
In this embodiment of the picture decoding apparatus, for an exemplary explanation of each unit, reference may be made to the above-mentioned embodiments.
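A skeletal sketch tying these two units together is shown below; it assumes the hypothetical header and payload layout used in the compression sketch above and a per-block inverse mapping table, and is intended only as an illustration of the decoding flow, not as the embodiment's definitive implementation.

class PictureDecoder:
    def __init__(self, headers, payload, inverse_map):
        # headers: per-sub-block (storage address, encoded data bit width, mapping data bit width).
        self.headers, self.payload, self.inverse_map = headers, payload, inverse_map

    def decode_block(self, index):
        # Data decoding unit: fetch one sub-block's encoded data, split it into
        # per-pixel mapping data, and inverse-map each datum to channel values.
        address, encoded_bits, bits_per_datum = self.headers[index]
        data = self.payload[address:address + encoded_bits // 8]
        step = bits_per_datum // 8
        mapping = [int.from_bytes(data[i:i + step], 'big')
                   for i in range(0, len(data), step)]
        return [self.inverse_map[m] for m in mapping]

    def decode_all(self):
        # Traverse all picture sub-blocks to obtain the decoded data of the target picture.
        return [self.decode_block(i) for i in range(len(self.headers))]

decoder = PictureDecoder([(0, 16, 16)], bytes([0x00, 0x1A]), {0x1A: (200, 18, 64, 255)})
print(decoder.decode_all())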
Referring to fig. 5, a schematic structural diagram of an electronic device in this embodiment is shown, and the specific embodiment of this application does not limit the specific implementation of the electronic device.
As shown in fig. 5, the electronic device may include: a processor (processor) 502, a communications interface (communications interface) 504, a memory 506, and a communications bus 508.
Wherein:
The processor 502, the communication interface 504, and the memory 506 communicate with one another via the communication bus 508.
The communication interface 504 is configured to communicate with other electronic devices or servers.
The processor 502 is configured to execute the program 510, and may specifically perform the relevant steps in the above-described picture mapping, picture compression, and picture decoding method embodiments.
In particular, program 510 may include program code comprising computer operating instructions.
The processor 502 may be a CPU, an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application. The electronic device may comprise one or more processors, which may be processors of the same type, such as one or more CPUs, or processors of different types, such as one or more CPUs and one or more ASICs.
The memory 506 is configured to store the program 510. The memory 506 may comprise a high-speed RAM memory, and may also include a non-volatile memory, such as at least one disk memory.
The program 510 may be specifically configured to enable the processor 502 to execute the operations corresponding to the picture mapping method, the picture compression method, or the picture decoding method described in any of the foregoing method embodiments.
For the specific implementation of each step in the program 510, reference may be made to the corresponding steps and unit descriptions in the foregoing method embodiments, with the corresponding beneficial effects, which are not repeated here. Those skilled in the art will clearly understand that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and modules may refer to the corresponding process descriptions in the foregoing method embodiments, and are likewise not repeated here.
The embodiment of the present application further provides a computer storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the computer program implements the operation corresponding to any one of the method embodiments.
The embodiment of the present application further provides a computer program product, which includes computer instructions for instructing a computing device to execute an operation corresponding to any one of the methods in the foregoing method embodiments.
It should be noted that, according to implementation needs, each component/step described in the embodiment of the present application may be divided into more components/steps, and two or more components/steps or partial operations of the components/steps may also be combined into a new component/step to achieve the purpose of the embodiment of the present application.
The above methods according to the embodiments of the present application may be implemented in hardware or firmware, or as software or computer code that can be stored in a recording medium such as a CD-ROM, RAM, floppy disk, hard disk, or magneto-optical disk, or as computer code originally stored in a remote recording medium or a non-transitory machine-readable medium and downloaded through a network to be stored in a local recording medium, so that the methods described herein can be carried out by such software stored on a recording medium using a general-purpose computer, a dedicated processor, or programmable or dedicated hardware such as an ASIC or FPGA. It is understood that a computer, processor, microprocessor controller, or programmable hardware includes memory components (e.g., RAM, ROM, flash memory, etc.) that can store or receive software or computer code which, when accessed and executed by the computer, processor, or hardware, implements the methods described herein. Furthermore, when a general-purpose computer accesses code for implementing the methods shown herein, execution of the code transforms the general-purpose computer into a special-purpose computer for performing those methods.
Those of ordinary skill in the art will appreciate that the various illustrative elements and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present application.
The above embodiments are only used for illustrating the embodiments of the present application, and not for limiting the embodiments of the present application, and those skilled in the art can make various changes and modifications without departing from the spirit and scope of the embodiments of the present application, so that all equivalent technical solutions also belong to the scope of the embodiments of the present application, and the scope of the patent protection of the embodiments of the present application should be defined by the claims.

Claims (13)

1. A picture mapping method, comprising:
acquiring a target picture;
carrying out blocking processing on the target picture to obtain a plurality of picture sub-blocks;
for each picture subblock, comparing the similarity between different pixels to determine a plurality of groups of similar pixels and a plurality of non-similar pixels, wherein each group of similar pixels comprises at least two similar pixels;
mapping processing is carried out on color channel values of any group of similar pixels in each picture sub-block based on a set mapping relation to obtain corresponding mapping data so as to multiplex the similar pixels in the same group of similar pixels;
and mapping the color channel value of each non-similar pixel in each picture sub-block based on the set mapping relation to obtain corresponding mapping data.
2. The method of claim 1, wherein the comparing the similarity between different pixels and determining similar pixels and non-similar pixels in each picture sub-block comprises: judging whether the difference value of the color channel values between different pixels is within a set difference threshold range; if so, judging the pixels as similar pixels; otherwise, judging the pixels as non-similar pixels.
3. The method according to claim 1, wherein, when mapping the color channel values of the similar pixels or the non-similar pixels based on the set mapping relationship, mapping processing is performed on the color channel value corresponding to each color channel of the similar pixels or the non-similar pixels to obtain corresponding mapping data.
4. The method according to claim 3, wherein the mapping the color channel value corresponding to each color channel of the similar pixel or the non-similar pixel to obtain corresponding mapping data comprises: and performing dimension reduction mapping processing on the color channel value corresponding to each color channel of the similar pixels or the non-similar pixels based on the set data dimension reduction mapping relation to obtain corresponding mapping data.
5. The method according to claim 4, wherein performing dimension reduction mapping processing on a color channel value corresponding to each color channel of the similar pixels or the non-similar pixels based on the set data dimension reduction mapping relationship to obtain corresponding mapping data comprises: and based on a set hash operation relation, carrying out hash operation processing on color channel values corresponding to each color channel of the similar pixels or the non-similar pixels to obtain corresponding mapping data.
6. The method of any one of claims 1-5, wherein the method further comprises:
acquiring a picture to be processed, wherein the picture to be processed is formed by splicing a plurality of sub-pictures;
and selecting any sub-picture from the pictures to be processed as the target picture to generate mapping data corresponding to each pixel on the pictures to be processed.
7. A picture compression method, comprising:
obtaining mapping data generated for each pixel in each picture sub-block of the target picture, the mapping data being generated according to the picture mapping method of any one of claims 1-6;
for each picture subblock, according to mapping data corresponding to each pixel, coding the corresponding picture subblock to obtain coded data and distributing a coded data storage address for storage;
and generating compressed data of the target picture according to the coded data storage address and the coded data bit width of each picture subblock and the mapping data bit width of a single pixel in each picture subblock.
8. The method of claim 7, wherein for each picture sub-block, according to mapping data corresponding to each pixel, encoding the corresponding picture sub-block to obtain encoded data and allocating an encoded data storage address to store the encoded data, comprising:
and establishing an inverse mapping relation between each pixel and corresponding mapping data of each pixel aiming at each picture subblock so as to encode the corresponding picture subblock to obtain encoded data and allocate an encoded data storage address to store the encoded data.
9. The method according to claim 7, wherein the generating compressed data of the target picture according to the encoded data storage address and the encoded data bit width of each picture sub-block and the mapping data bit width of a single pixel in each picture sub-block comprises:
aiming at each picture subblock, storing the coded data storage address, the coded data bit width and the mapping data bit width corresponding to each pixel in a header constructed for the picture subblock;
and generating compressed data of the target picture according to the corresponding table headers of all the picture sub-blocks.
10. The method according to any one of claims 7-9, wherein if the target picture is a sub-picture of a to-be-processed picture, the to-be-processed picture is formed by splicing a plurality of sub-pictures; the method further comprises:
traversing each sub-picture to be used as the target picture one by one, and storing the bit width of the compressed data of the target picture and the storage address of the compressed data into a header constructed for the target picture;
and generating compressed data of the picture to be processed according to the corresponding headers of all the sub-pictures.
11. A picture decoding method, comprising:
analyzing compressed data of a target picture to obtain an encoded data storage address, an encoded data bit width and a mapping data bit width of a single pixel in each picture sub-block in the target picture, wherein the compressed data is generated according to the picture compression method of any one of claims 7 to 10;
for each picture subblock, acquiring encoded data from an encoded data storage address according to the encoded data bit width;
for each picture subblock, acquiring mapping data of each pixel from encoding data of the picture subblock according to the bit width of the mapping data of a single pixel;
and aiming at each picture subblock, obtaining a color channel value of each pixel according to the mapping data of each pixel to generate decoding data of the target picture.
12. An electronic device, comprising: the system comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface are communicated with each other through the communication bus;
the memory is used for storing at least one executable instruction which causes the processor to execute the corresponding operation of the method according to any one of claims 1-11.
13. A computer storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-11.
CN202211497505.4A 2022-11-28 2022-11-28 Picture mapping method, compression method, decoding method and electronic equipment Active CN115546328B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211497505.4A CN115546328B (en) 2022-11-28 2022-11-28 Picture mapping method, compression method, decoding method and electronic equipment

Publications (2)

Publication Number Publication Date
CN115546328A CN115546328A (en) 2022-12-30
CN115546328B (en) 2023-03-14

Family

ID=84722150

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211497505.4A Active CN115546328B (en) 2022-11-28 2022-11-28 Picture mapping method, compression method, decoding method and electronic equipment

Country Status (1)

Country Link
CN (1) CN115546328B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6104837A (en) * 1996-06-21 2000-08-15 U.S. Philips Corporation Image data compression for interactive applications
CN102257556A (en) * 2008-12-18 2011-11-23 夏普株式会社 Adaptive image processing method and apparatus for reduced colour shift in lcds
CN109191460A (en) * 2018-10-15 2019-01-11 方玉明 A kind of quality evaluating method for tone mapping image
CN111709483A (en) * 2020-06-18 2020-09-25 山东财经大学 Multi-feature-based super-pixel clustering method and equipment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10091512B2 (en) * 2014-05-23 2018-10-02 Futurewei Technologies, Inc. Advanced screen content coding with improved palette table and index map coding methods
CN113052923B (en) * 2021-03-31 2023-02-28 维沃移动通信(深圳)有限公司 Tone mapping method, tone mapping apparatus, electronic device, and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on tone mapping of high dynamic range video based on inter-frame correlation; Li Ruchun et al.; High Technology Letters (高技术通讯); 2018-07-15 (No. 07); full text *

Also Published As

Publication number Publication date
CN115546328A (en) 2022-12-30

Similar Documents

Publication Publication Date Title
CN114424542B (en) Video-based point cloud compression with non-canonical smoothing
CN111433821B (en) Method and apparatus for reconstructing a point cloud representing a 3D object
US7873212B2 (en) Compression of images for computer graphics
JP2022542419A (en) Mesh compression via point cloud representation
JP5399416B2 (en) Video coding system with reference frame compression
WO2014166434A1 (en) Method for coding/decoding depth image and coding/decoding device
RU2767771C1 (en) Method and equipment for encoding/decoding point cloud representing three-dimensional object
CN111402380B (en) GPU compressed texture processing method
WO2020146571A1 (en) Method and apparatus for dynamic point cloud partition packing
JP2010524332A (en) Image processing using vectors
CN113170140A (en) Bit plane encoding of data arrays
JP3790728B2 (en) Image encoding apparatus, image decoding apparatus and methods thereof
JP2022528540A (en) Point cloud processing
JP2013511226A (en) Embedded graphics coding: bitstreams reordered for parallel decoding
US10997795B2 (en) Method and apparatus for processing three dimensional object image using point cloud data
KR20120049881A (en) Vector embedded graphics coding
US11263786B2 (en) Decoding data arrays
TWI505717B (en) Joint scalar embedded graphics coding for color images
CN115546328B (en) Picture mapping method, compression method, decoding method and electronic equipment
WO2023093377A1 (en) Encoding method, decoding method and electronic device
US10341682B2 (en) Methods and devices for panoramic video coding and decoding based on multi-mode boundary fill
JP2022502892A (en) Methods and devices for encoding / reconstructing 3D points
CN107172425B (en) Thumbnail generation method and device and terminal equipment
US11515961B2 (en) Encoding data arrays
CN118037870B (en) Zdepth-compatible parallelization depth image compression algorithm, zdepth-compatible parallelization depth image compression device and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant