CN114630125A - Vehicle image compression method and system based on artificial intelligence and big data - Google Patents

Vehicle image compression method and system based on artificial intelligence and big data

Info

Publication number
CN114630125A
CN114630125A (application CN202210291198.8A)
Authority
CN
China
Prior art keywords
bit
feature
encoder
separation
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210291198.8A
Other languages
Chinese (zh)
Other versions
CN114630125B (en)
Inventor
康云清
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xuzhou Baishili Electric Vehicle Co ltd
Original Assignee
Xuzhou Baishili Electric Vehicle Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xuzhou Baishili Electric Vehicle Co ltd filed Critical Xuzhou Baishili Electric Vehicle Co ltd
Priority to CN202210291198.8A priority Critical patent/CN114630125B/en
Publication of CN114630125A publication Critical patent/CN114630125A/en
Application granted granted Critical
Publication of CN114630125B publication Critical patent/CN114630125B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/184Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being bits, e.g. of the compressed video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/30Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention relates to the field of artificial intelligence and big data, and in particular to a vehicle image compression method and system based on artificial intelligence and big data. The method comprises the following steps: performing bit-plane layering on a vehicle image to obtain a plurality of groups of bit images; assigning each group of bit images to one feature separation network and using that network to obtain the bit features corresponding to the group; and fusing each group of bit images of the vehicle image with its corresponding bit features and then compressing to obtain a vehicle image compression result. The method and system allow the compression result to retain the most important feature information with the least amount of data while ensuring the accuracy of the decompression result.

Description

Vehicle image compression method and system based on artificial intelligence and big data
Technical Field
The invention relates to the field of artificial intelligence and big data, in particular to a vehicle image compression method and system based on artificial intelligence and big data.
Background
With the rapid development of artificial intelligence and big data, urban traffic systems have introduced AI and big-data systems for intelligent traffic detection, intelligent traffic management and scheduling, and the like. These systems collect large amounts of traffic image data for big-data analysis, but the image data are very large and occupy a great deal of storage. Although existing image compression algorithms can reduce the amount of storage required, the problems of excessive image data volume and storage consumption remain unsolved.
Disclosure of Invention
In order to solve the above technical problems, an object of the present invention is to provide a vehicle image compression method based on artificial intelligence and big data, which adopts the following technical solutions:
carrying out bit plane layering on the vehicle image to obtain a plurality of groups of bit images;
each group of bit images corresponds to one feature separation network, and the feature separation network is used to obtain the bit features corresponding to each group of bit images; wherein the feature separation network includes a feature separation layer, and the feature separation layer includes: a first encoder for performing feature extraction on the input of the feature separation layer; a second encoder for performing feature extraction on the output of the first encoder; a third encoder for performing feature extraction on the feature map obtained by combining the first feature map output by a shared layer with the output feature map of the second encoder, to obtain a second feature map; and a separation module for obtaining the difference feature map between the second feature map and the first feature map as a bit feature;
and fusing each group of bit images of the vehicle image with the corresponding bit features respectively and then compressing to obtain a vehicle image compression result.
Preferably, the feature separation network further comprises: a semantic encoder for performing feature extraction on a group of bit images to obtain the input of the feature separation layer; and a semantic decoder for processing the output of the feature separation layer to obtain the vehicle semantic region corresponding to that group of bit images.
Preferably, the feature separation network includes a plurality of feature separation layers, an input of the first feature separation layer is an output feature map of the semantic encoder, an output feature map of the first feature separation layer is an input of a subsequent feature separation layer, an output feature map of the last feature separation layer is an input of the semantic decoder, and each feature separation layer obtains one bit feature.
Preferably, the method further includes a compression network, the compression network including fusion layers and a compression encoder, and the step of fusing each group of bit images of the vehicle image with its corresponding bit features and then compressing to obtain a vehicle image compression result is specifically: each group of bit images corresponds to one fusion layer; each group of bit images of the vehicle image, together with its corresponding bit features, is input into its fusion layer, and the outputs of all the fusion layers are stacked and merged and then input into the compression encoder to obtain the vehicle image compression result.
Preferably, the fusion layer comprises a fourth encoder and several fusion encoders: the fourth encoder is used for performing feature extraction on a group of bit images of the vehicle image; each fusion encoder is used for performing feature extraction on the feature map obtained by fusing the feature map output by the fourth encoder with the corresponding bit feature, or on the feature map obtained by fusing the feature map output by the preceding fusion encoder with the corresponding bit feature.
The invention also provides a vehicle image compression system based on artificial intelligence and big data, which comprises:
the image data acquisition module is used for carrying out bit plane layering on the vehicle image to obtain a plurality of groups of bit images;
the network training control module is used for constructing feature separation networks, each group of bit images corresponding to one feature separation network, and for obtaining the bit features corresponding to each group of bit images by means of its feature separation network; wherein the feature separation network includes a feature separation layer, and the feature separation layer includes: a first encoder for performing feature extraction on the input of the feature separation layer; a second encoder for performing feature extraction on the output of the first encoder; a third encoder for performing feature extraction on the feature map obtained by combining the first feature map output by a shared layer with the output feature map of the second encoder, to obtain a second feature map; and a separation module for obtaining the difference feature map between the second feature map and the first feature map as a bit feature;
and the image compression module is used for fusing each group of bit images of the vehicle image with the corresponding bit features respectively and then compressing the fused bit images to obtain a vehicle image compression result.
The embodiment of the invention at least has the following beneficial effects:
according to the method, a plurality of groups of bit images are obtained by bit-plane layering of the vehicle image, the bit features corresponding to each group of bit images are obtained using feature separation networks, and the compression network and decompression network of the vehicle image are constructed according to these bit features, so that the compression result stores the most important feature information with as little data as possible and the accuracy of the decompression result is ensured.
Drawings
In order to more clearly illustrate the embodiments of the present invention and the technical solutions of the prior art, the drawings used in their description are briefly introduced below. It is obvious that the drawings described below show only some embodiments of the present invention, and that other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a process flow diagram;
FIG. 2 is a network architecture diagram of an nth feature separation network;
FIG. 3 is a network architecture diagram of a feature separation layer;
FIG. 4 is a diagram of a DNN network architecture of the present invention;
FIG. 5 is a structural view of the n-th fusion layer.
Detailed Description
To further illustrate the technical means adopted by the present invention to achieve its intended objects, and their effects, a vehicle image compression method and system based on artificial intelligence and big data according to the present invention is described in detail below with reference to the accompanying drawings and preferred embodiments. In the following description, different occurrences of "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics described may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following describes a specific scheme of a vehicle image compression method and system based on artificial intelligence and big data in detail with reference to the accompanying drawings.
Example 1:
Referring to fig. 1, a flowchart of the steps of a vehicle image compression method based on artificial intelligence and big data according to an embodiment of the present invention is shown. The method includes the following steps:
First, the vehicle image is layered into bit planes to obtain a plurality of groups of bit images.
A vehicle image dataset of a vehicle is acquired, the dataset comprising a plurality of images of the vehicle, each image corresponding to a semantic tag of the vehicle, the dataset being used to train a network for obtaining semantic regions of the vehicle.
Any vehicle RGB image in the vehicle image dataset is taken, and Gaussian blur with an N × N kernel is applied to remove noise, where N = 3 in this embodiment. The three RGB channels of the image are then obtained separately, and bit-plane layering is performed on each channel to obtain 8 bit planes, each of which is a binary image; a vehicle image can therefore be divided into 24 bit planes. The first bit planes of the three RGB channels are stacked to form a three-channel image, called the first group of bit images; in the same way, the nth bit planes of the three channels are stacked to obtain the nth group of bit images, where n is an integer with 1 ≤ n ≤ 8.
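For illustration only, a minimal sketch of this bit-plane layering step using NumPy and OpenCV follows. The function name, the use of OpenCV, and the bit ordering (least to most significant) are assumptions and not taken from the patent, which only specifies an N × N Gaussian blur with N = 3 followed by per-channel bit-plane layering.

```python
import cv2
import numpy as np

def bit_plane_groups(image: np.ndarray, kernel_size: int = 3) -> list:
    """Split a vehicle image into 8 groups of bit images.

    Group n stacks the nth bit plane of the three colour channels
    into one three-channel binary image.
    """
    # Gaussian blur with an N x N kernel (N = 3 in this embodiment) to remove noise.
    blurred = cv2.GaussianBlur(image, (kernel_size, kernel_size), 0)

    groups = []
    for n in range(8):  # bit plane 0 (least significant) .. 7 (most significant)
        planes = [(blurred[:, :, c] >> n) & 1 for c in range(3)]  # binary {0,1} planes
        groups.append(np.stack(planes, axis=-1).astype(np.uint8))
    return groups

# Usage (hypothetical file name): groups = bit_plane_groups(cv2.imread("vehicle.png"))
```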
Then, feature separation networks are constructed. Each group of bit images corresponds to one feature separation network, and the feature separation network is used to obtain the bit features corresponding to that group of bit images. The feature separation network includes a feature separation layer, and the feature separation layer includes: a first encoder for performing feature extraction on the input of the feature separation layer; a second encoder for performing feature extraction on the output of the first encoder; a third encoder for performing feature extraction on the feature map obtained by combining the first feature map output by a shared layer with the output feature map of the second encoder, to obtain a second feature map; and a separation module for obtaining the difference feature map between the second feature map and the first feature map as a bit feature.
The feature separation network further comprises: a semantic encoder for performing feature extraction on a group of bit images to obtain the input of the feature separation layers; and a semantic decoder for processing the output of the feature separation layers to obtain the vehicle semantic region corresponding to that group of bit images.
The feature separation network comprises a plurality of feature separation layers: the input of the first feature separation layer is the output feature map of the semantic encoder, the output feature map of each feature separation layer is the input of the subsequent feature separation layer, the output feature map of the last feature separation layer is the input of the semantic decoder, and each feature separation layer produces one bit feature.
Specifically, 8 feature separation networks are constructed. The 8 networks have identical structures, the layers of any two networks correspond to each other, and each network contains three feature separation layers. The structure of the nth feature separation network is shown in fig. 2: a semantic encoder, feature separation layer n1, feature separation layer n2, feature separation layer n3 and a semantic decoder, which outputs the vehicle semantic region.
The nth feature separation network is taken as an example: the nth group of bit images is input into the semantic encoder, which performs feature extraction and down-sampling on the input image; the resulting feature map serves as the input of the first feature separation layer.
There are three feature separation layers, and the network structure of each is shown in fig. 3. Specifically, the input of each feature separation layer is the feature map output by the preceding layer of the feature separation network. The feature map output by the semantic encoder is first input into the first encoder n(1), which performs feature extraction and down-sampling to obtain an output feature map; this output feature map is then input into the second encoder n(2) and into the shared layer.
The shared layer is also an encoder that performs feature extraction and down-sampling on its input feature map; the difference is that its network parameters are shared with the shared layers in the corresponding feature separation layers of the other feature separation networks, i.e. the parameters of the shared layers in the corresponding feature separation layers of any two feature separation networks are always kept identical. When the feature separation networks are later trained, the features extracted by the shared layer are therefore features common to all groups of bit images. The output of the shared layer is the first feature map t1, which represents the features common to all groups of bit images.
In the feature separation layer, the output feature map of the second encoder is stacked (Concat) with the first feature map t1 and then input into the third encoder n(3), which continues feature extraction and down-sampling and outputs the second feature map t2. The purpose of the stacking is to blend the features common to all groups of bit images into the feature separation network.
The second feature map t2 serves as the output of the feature separation layer. The separation module then takes the difference between the second feature map t2 and the first feature map t1 to obtain a third feature map t3, which is later used to supervise the training of the feature separation network and represents the specific features contained in the input nth group of bit images beyond the features common to all groups. The third feature map t3 is called the bit feature of the nth group of bit images. Since there are three feature separation layers, the nth group of bit images has three bit features, called the first, second and third bit features of the nth group of bit images.
The feature map output by feature separation layer n3 is input into the semantic decoder, which decodes and up-samples it to finally obtain a semantic region; the obtained semantic region represents the vehicle semantic region corresponding to the nth group of bit images.
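To make the data flow of a feature separation layer concrete, a minimal PyTorch sketch is given below. All layer widths, kernel sizes and the use of simple convolution blocks are assumptions (the patent does not specify the encoder architectures, and real encoders would also down-sample); the only structural points taken from the description are the encoders, the concatenation with the shared layer's output t1, and the bit feature t3 = t2 − t1.

```python
import torch
import torch.nn as nn

def enc_block(c_in: int, c_out: int) -> nn.Sequential:
    # Placeholder "encoder": one conv + ReLU; a real encoder would also down-sample.
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True))

class FeatureSeparationLayer(nn.Module):
    def __init__(self, c: int, shared_encoder: nn.Module):
        super().__init__()
        self.enc1 = enc_block(c, c)      # first encoder: features of the layer input
        self.enc2 = enc_block(c, c)      # second encoder: features of enc1's output
        self.shared = shared_encoder     # shared layer: same instance in all 8 networks
        self.enc3 = enc_block(2 * c, c)  # third encoder: takes the concatenated maps

    def forward(self, x: torch.Tensor):
        h1 = self.enc1(x)
        h2 = self.enc2(h1)
        t1 = self.shared(h1)                        # first feature map (common features)
        t2 = self.enc3(torch.cat([h2, t1], dim=1))  # second feature map
        bit_feature = t2 - t1                       # separation module: t3 = t2 - t1
        return t2, bit_feature                      # t2 feeds the next layer / decoder

# The nth feature separation network would chain a semantic encoder, three such layers
# (each reusing one shared encoder instance across all 8 networks), and a semantic
# decoder predicting the vehicle semantic region.
shared = [enc_block(16, 16) for _ in range(3)]      # one shared encoder per layer depth
layers_n = nn.ModuleList([FeatureSeparationLayer(16, shared[i]) for i in range(3)])
```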
One image corresponds to 8 groups of bit images and to one semantic label. Because the 8 feature separation networks share layers and the 8 input groups of bit images correspond to a single label, the 8 networks need to be trained together. It should be noted that although the combined networks to be trained are large, the input of each feature separation network is a single group of bit images composed of three binary bit planes, so the parameter count of each network is very small and the total parameter count of the 8 networks is modest; the feature separation networks can therefore be trained quickly.
In this embodiment, the 8 groups of bit images obtained from a vehicle image I are denoted {I1, I2, …, In, …, I8}, and {I1, I2, …, In, …, I8} are input into the 8 feature separation networks respectively. After the bit image In is input into the nth feature separation network, the bit feature output by the ith feature separation layer of that network is denoted Fni, the semantic region output by the nth feature separation network is An, and the semantic label corresponding to the vehicle image I is A0.
A loss function is constructed:
[Loss formula — given as images BDA0003560258880000041 to BDA0003560258880000043 in the original filing]
where ||Fni − Fmj||2 denotes the L2 norm of the difference between any two bit features; these terms represent the difference between any two bit features, and minimizing Loss therefore maximizes the difference between any two bit features. From the structure of the feature separation layer, the first feature map t1 represents the features common to all groups of bit images, while the second feature map t2 represents all the features contained in one group of bit images; since a bit feature is the difference between the two feature maps, it represents the features contained in that group of bit images other than the common features.
αmn is obtained as follows: the semantic regions An and Am output by the nth and the mth feature separation networks are obtained, and An and Am are thresholded: if a pixel value of An or Am is greater than an empirical threshold (0.8 in this embodiment), it is set to 1, otherwise to 0. The number x of connected components of the region where An and Am overlap is then obtained, and αmn = exp(−x): the larger this value is, the fewer features the semantic regions output by the two networks overlap in, and the more attention needs to be paid to the difference between the bit features of the two groups of bit images.
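As a sketch of how the weight αmn could be computed, the snippet below thresholds the two semantic maps, counts connected components of their overlap with SciPy, and returns exp(−x). Reading x as the number of connected components of the region where both thresholded maps respond is one plausible interpretation of the text, not a statement of the patent's exact definition.

```python
import numpy as np
from scipy import ndimage

def alpha_mn(sem_n: np.ndarray, sem_m: np.ndarray, thr: float = 0.8) -> float:
    """Weight for the bit-feature difference term between networks n and m."""
    # Threshold the semantic regions: pixels above the empirical threshold (0.8) become 1.
    bn = (sem_n > thr).astype(np.uint8)
    bm = (sem_m > thr).astype(np.uint8)
    # x: number of connected components of the overlapping region of An and Am.
    _, x = ndimage.label(bn & bm)
    return float(np.exp(-x))
```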
The 8 feature separation networks were trained using a stochastic gradient descent method based on the loss function and the vehicle image dataset.
And finally, fusing each group of bit images of the vehicle image with the corresponding bit features respectively and then compressing to obtain a vehicle image compression result.
The method further comprises a compression network, the compression network including fusion layers and a compression encoder. The step of fusing each group of bit images of the vehicle image with its corresponding bit features and then compressing to obtain a vehicle image compression result is specifically: each group of bit images corresponds to one fusion layer; each group of bit images of the vehicle image, together with its corresponding bit features, is input into its fusion layer, and the outputs of all the fusion layers are stacked and merged and then input into the compression encoder to obtain the vehicle image compression result.
The fusion layer comprises a fourth encoder and several fusion encoders: the fourth encoder is used for performing feature extraction on a group of bit images of the vehicle image; each fusion encoder is used for performing feature extraction on the feature map obtained by fusing the feature map output by the fourth encoder with the corresponding bit feature, or on the feature map obtained by fusing the feature map output by the preceding fusion encoder with the corresponding bit feature.
The DNN shown in fig. 4 is constructed. The input of the network is the 8 groups of bit images; each group is input into its fusion layer, the outputs of the fusion layers are stacked and merged together and then input into the compression encoder, and the compression encoder obtains the vehicle image compression result by feature extraction and down-sampling. The compression result is the compression result of the vehicle image corresponding to the 8 input groups of bit images and is a single-channel feature map. The part of the network that obtains the compression result from the 8 groups of bit images is the compression network. The compression result is input into a decoder, which constitutes the decompression network: the decoder is used to decompress the compressed image and produce the decompression result.
The structure of the nth fusion layer is shown in fig. 5. Specifically, the input is the nth group of bit images. The fourth encoder performs feature extraction and down-sampling on this group of bit images to obtain a feature map, which is fused with the first bit feature b1 of the nth group of bit images; the fusion result is the Hadamard product of the first bit feature and the feature map.
This fusion result is input into a fusion encoder, which performs feature extraction and down-sampling to obtain a feature map; this feature map is fused with the second bit feature b2 of the nth group of bit images, the fusion result being the Hadamard product of the second bit feature and the feature map.
That fusion result is input into the next fusion encoder, which again performs feature extraction and down-sampling to obtain a feature map; this feature map is fused with the third bit feature b3 of the nth group of bit images, and the Hadamard product of the third bit feature and the feature map is used as the output of the nth fusion layer.
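A hedged PyTorch sketch of this nth fusion layer follows. The encoder widths and kernel sizes are assumptions, and the bit features b1, b2, b3 are assumed to have been produced at resolutions and channel counts compatible with the Hadamard products; only the alternation of encoders and element-wise products is taken from the description.

```python
import torch
import torch.nn as nn

class FusionLayer(nn.Module):
    """Fourth encoder, then two fusion encoders, each stage gated by a bit feature."""
    def __init__(self, c: int = 16):
        super().__init__()
        self.enc4 = nn.Sequential(nn.Conv2d(3, c, 3, padding=1), nn.ReLU(inplace=True))
        self.fuse1 = nn.Sequential(nn.Conv2d(c, c, 3, padding=1), nn.ReLU(inplace=True))
        self.fuse2 = nn.Sequential(nn.Conv2d(c, c, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, bit_image, b1, b2, b3):
        # b1, b2, b3: bit features of this group, assumed shape-compatible with h.
        h = self.enc4(bit_image) * b1   # Hadamard product with the first bit feature
        h = self.fuse1(h) * b2          # fusion encoder, then the second bit feature
        h = self.fuse2(h) * b3          # fusion encoder, then the third bit feature
        return h                        # output of the nth fusion layer
```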
According to this DNN structure, the bit features are blended into the compression network. The bit features represent the features unique to each group of bit images; by blending these unique features into the compression process, the compression network can learn whether, and how much, important feature information each group of bit images contains. The compression network can therefore compress the groups of bit images that contain important and unique information to a smaller extent, avoiding the loss of too much information, and compress the groups without important or unique features to a greater extent, reducing the features to be stored as far as possible, so that more of the data in the compression result represents important feature information. In addition, in this embodiment several bit features are introduced at different network layers, so that the important information of each group of bit images is captured at different feature levels (different feature dimensions), improving the feature extraction capability of the compression network. In summary, by introducing the bit features into the compression network, the compression result stores a large amount of important information with a small amount of data: on the one hand the data size of the compression result is small, and on the other hand the decompression result is more accurate.
The DNN is composed of the compression network and the decompression network. Its input is the 8 groups of bit images {I1, I2, …, In, …, I8} corresponding to one vehicle image I, together with the first, second and third bit features of each group of bit images In.
The DNN is trained as follows: for a vehicle image I, the 8 groups of bit images {I1, I2, …, In, …, I8} corresponding to I, together with the bit features of each group of bit images In, are input into the DNN; the output of the DNN is I0, and the loss function is Loss = ||I − I0||2. The DNN is trained with a stochastic gradient descent algorithm on this loss function and the vehicle image dataset until it converges, yielding the compression network and decompression network used to compress and decompress vehicle images.
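Finally, a sketch of how the overall DNN (compression plus decompression) and the reconstruction objective ||I − I0||2 might be wired together is shown below. The compression encoder and decoder architectures are placeholders, and FusionLayer refers to the illustrative class sketched earlier; none of the layer sizes are taken from the patent.

```python
import torch
import torch.nn as nn

class CompressionDNN(nn.Module):
    """8 fusion layers -> stack -> compression encoder -> single-channel code -> decoder."""
    def __init__(self, c: int = 16):
        super().__init__()
        self.fusion_layers = nn.ModuleList([FusionLayer(c) for _ in range(8)])
        self.compress = nn.Sequential(                      # compression network tail
            nn.Conv2d(8 * c, c, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(c, 1, 3, padding=1),                  # single-channel compression result
        )
        self.decoder = nn.Sequential(                       # decompression network
            nn.ConvTranspose2d(1, c, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(c, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, bit_images, bit_feats):
        # bit_images: 8 tensors (one per group); bit_feats: per group, its (b1, b2, b3).
        fused = [layer(img, *fts)
                 for layer, img, fts in zip(self.fusion_layers, bit_images, bit_feats)]
        code = self.compress(torch.cat(fused, dim=1))
        return code, self.decoder(code)

# One SGD training step on the reconstruction loss Loss = ||I - I0||2:
# code, recon = model(bit_images, bit_feats)
# loss = torch.norm(vehicle_image - recon)
# optimizer.zero_grad(); loss.backward(); optimizer.step()
```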
Example 2:
the embodiment provides a vehicle image compression system based on artificial intelligence and big data, and the system comprises:
the image data acquisition module is used for carrying out bit plane layering on the vehicle image to obtain a plurality of groups of bit images;
the network training control module is used for constructing feature separation networks, each group of bit images corresponding to one feature separation network, and for obtaining the bit features corresponding to each group of bit images by means of its feature separation network; wherein the feature separation network includes a feature separation layer, and the feature separation layer includes: a first encoder for performing feature extraction on the input of the feature separation layer; a second encoder for performing feature extraction on the output of the first encoder; a third encoder for performing feature extraction on the feature map obtained by combining the first feature map output by a shared layer with the output feature map of the second encoder, to obtain a second feature map; and a separation module for obtaining the difference feature map between the second feature map and the first feature map as a bit feature;
and the image compression module is used for fusing each group of bit images of the vehicle image with the corresponding bit features respectively and then compressing the fused bit images to obtain a vehicle image compression result.
It should be noted that: the precedence order of the above embodiments of the present invention is only for description, and does not represent the merits of the embodiments. And specific embodiments thereof have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. A vehicle image compression method based on artificial intelligence and big data is characterized by comprising the following steps:
carrying out bit plane layering on the vehicle image to obtain a plurality of groups of bit images;
each group of bit images corresponds to one feature separation network, and the feature separation network is used to obtain the bit features corresponding to each group of bit images; wherein the feature separation network includes a feature separation layer, and the feature separation layer includes: a first encoder for performing feature extraction on the input of the feature separation layer; a second encoder for performing feature extraction on the output of the first encoder; a third encoder for performing feature extraction on the feature map obtained by combining the first feature map output by a shared layer with the output feature map of the second encoder, to obtain a second feature map; and a separation module for obtaining the difference feature map between the second feature map and the first feature map as a bit feature;
and fusing each group of bit images of the vehicle image with the corresponding bit features respectively and then compressing to obtain a vehicle image compression result.
2. The method of claim 1, wherein the feature separation network further comprises:
the semantic encoder is used for extracting the characteristics of a group of bit images to obtain the input of a characteristic separation layer;
and the semantic decoder is used for extracting the characteristics of the output of the characteristic separation layer to obtain the vehicle semantic region corresponding to the group of bit images.
3. The method according to claim 2, wherein the feature separation network comprises a plurality of feature separation layers, wherein an input of a first layer of feature separation layers is an output feature map of a semantic encoder, an output feature map of a first layer of feature separation layers is an input of a subsequent feature separation layer, an output feature map of a last layer of feature separation layers is an input of a semantic decoder, and each layer of feature separation layers obtains one bit feature.
4. The method according to claim 1, further comprising a compression network, wherein the compression network comprises a fusion layer and a compression encoder, and the compressing of each group of bit images of the vehicle image after fusion with the corresponding bit features to obtain a vehicle image compression result specifically comprises: and each group of bit images corresponds to one fusion layer, each group of bit images of the vehicle images and the bit characteristics corresponding to the bit images are input into the fusion layers, and the outputs of all the fusion layers are stacked, merged and input into a compression encoder to obtain a vehicle image compression result.
5. The method of claim 4, wherein the fusion layer comprises a fourth encoder and several fusion encoders: the fourth encoder is used for performing feature extraction on a group of bit images of the vehicle image; and each fusion encoder is used for performing feature extraction on the feature map obtained by fusing the feature map output by the fourth encoder with the corresponding bit feature, or on the feature map obtained by fusing the feature map output by the preceding fusion encoder with the corresponding bit feature.
6. An artificial intelligence and big data based image compression system, comprising:
the image data acquisition module is used for carrying out bit plane layering on the vehicle image to obtain a plurality of groups of bit images;
the network training control module is used for constructing feature separation networks, each group of bit images corresponding to one feature separation network, and for obtaining the bit features corresponding to each group of bit images by means of its feature separation network; wherein the feature separation network includes a feature separation layer, and the feature separation layer includes: a first encoder for performing feature extraction on the input of the feature separation layer; a second encoder for performing feature extraction on the output of the first encoder; a third encoder for performing feature extraction on the feature map obtained by combining the first feature map output by a shared layer with the output feature map of the second encoder, to obtain a second feature map; and a separation module for obtaining the difference feature map between the second feature map and the first feature map as a bit feature;
and the image compression module is used for fusing each group of bit images of the vehicle image with the corresponding bit features respectively and then compressing the fused bit images to obtain a vehicle image compression result.
7. The system of claim 6, wherein the network training control module further comprises:
the semantic encoder is used for extracting the characteristics of a group of bit images to obtain the input of a characteristic separation layer;
and the semantic decoder is used for extracting the characteristics of the output of the characteristic separation layer to obtain the vehicle semantic region corresponding to the group of bit images.
8. The system according to claim 7, wherein the feature separation network comprises a plurality of feature separation layers, wherein an input of a first layer of feature separation layers is an output feature map of a semantic encoder, an output feature map of a first layer of feature separation layers is an input of a subsequent feature separation layer, an output feature map of a last layer of feature separation layers is an input of a semantic decoder, and each layer of feature separation layers obtains one bit feature.
9. The system according to claim 6, wherein the image compression module further includes a compression network, the compression network includes a fusion layer and a compression encoder, and the compressing of each group of bit images of the vehicle image after being fused with the corresponding bit features to obtain the vehicle image compression result specifically is:
and each group of bit images corresponds to one fusion layer, each group of bit images of the vehicle images and the bit characteristics corresponding to the bit images are input into the fusion layers, and the outputs of all the fusion layers are stacked, merged and input into a compression encoder to obtain a vehicle image compression result.
10. The system of claim 9, wherein the fusion layer comprises a fourth encoder and several fusion encoders: the fourth encoder is used for performing feature extraction on a group of bit images of the vehicle image; and each fusion encoder is used for performing feature extraction on the feature map obtained by fusing the feature map output by the fourth encoder with the corresponding bit feature, or on the feature map obtained by fusing the feature map output by the preceding fusion encoder with the corresponding bit feature.
CN202210291198.8A 2022-03-23 2022-03-23 Vehicle image compression method and system based on artificial intelligence and big data Active CN114630125B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210291198.8A CN114630125B (en) 2022-03-23 2022-03-23 Vehicle image compression method and system based on artificial intelligence and big data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210291198.8A CN114630125B (en) 2022-03-23 2022-03-23 Vehicle image compression method and system based on artificial intelligence and big data

Publications (2)

Publication Number Publication Date
CN114630125A true CN114630125A (en) 2022-06-14
CN114630125B CN114630125B (en) 2023-10-27

Family

ID=81904630

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210291198.8A Active CN114630125B (en) 2022-03-23 2022-03-23 Vehicle image compression method and system based on artificial intelligence and big data

Country Status (1)

Country Link
CN (1) CN114630125B (en)

Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000022960A (en) * 1998-07-03 2000-01-21 Canon Inc Device and method for image processing and storage medium
JP2000083256A (en) * 1998-07-03 2000-03-21 Canon Inc Image processor and image processing method and recording medium
WO2001017230A1 (en) * 1999-08-27 2001-03-08 Sharp Kabushiki Kaisha Image encoding device and method therefor, image decoding method and method therefor, and computer-readable recorded medium on which image encoding program and image decoding program are recorded
JP2003116005A (en) * 2001-10-04 2003-04-18 Canon Inc Code amount control unit and method therefor
JP2003304405A (en) * 2002-04-10 2003-10-24 Ricoh Co Ltd Image processing method and image processor
KR20040080540A (en) * 2003-03-12 2004-09-20 유철 Layered DCT coding method using bit plain
JP2004312773A (en) * 2004-06-11 2004-11-04 Sharp Corp Image encoder and image decoder
US20050238245A1 (en) * 2004-04-23 2005-10-27 Shun-Yen Yao Image compression/decompression apparatus and method
CN1725861A (en) * 2004-07-21 2006-01-25 三星电子株式会社 The equipment of the method for compressing/decompressing image and this method of use
CN101938650A (en) * 2009-06-25 2011-01-05 夏普株式会社 Image compressing apparatus, image compressing method, image decompressing apparatus, image decompressing method, image forming apparatus
US20120189215A1 (en) * 2011-01-20 2012-07-26 Orions Digital Technologies, Inc. Image data processing for more efficient compression
CN106997607A (en) * 2017-03-16 2017-08-01 四川大学 Video bits face chaos encrypting method based on compressed sensing
CN108702514A (en) * 2016-03-09 2018-10-23 华为技术有限公司 A kind of high dynamic range images processing method and processing device
US20180373964A1 (en) * 2017-06-27 2018-12-27 Hitachi, Ltd. Information processing apparatus and processing method for image data
CN110517329A (en) * 2019-08-12 2019-11-29 北京邮电大学 A kind of deep learning method for compressing image based on semantic analysis
CN111050174A (en) * 2019-12-27 2020-04-21 清华大学 Image compression method, device and system
CN111049527A (en) * 2019-12-23 2020-04-21 云南大学 Image coding and decoding method
CN112785661A (en) * 2021-01-12 2021-05-11 山东师范大学 Depth semantic segmentation image compression method and system based on fusion perception loss
CN113012087A (en) * 2021-03-31 2021-06-22 中南大学 Image fusion method based on convolutional neural network
WO2021135715A1 (en) * 2019-12-31 2021-07-08 武汉Tcl集团工业研究院有限公司 Image compression method and apparatus
WO2021155832A1 (en) * 2020-02-07 2021-08-12 华为技术有限公司 Image processing method and related device
CN113673420A (en) * 2021-08-19 2021-11-19 清华大学 Target detection method and system based on global feature perception

Patent Citations (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000022960A (en) * 1998-07-03 2000-01-21 Canon Inc Device and method for image processing and storage medium
JP2000083256A (en) * 1998-07-03 2000-03-21 Canon Inc Image processor and image processing method and recording medium
WO2001017230A1 (en) * 1999-08-27 2001-03-08 Sharp Kabushiki Kaisha Image encoding device and method therefor, image decoding method and method therefor, and computer-readable recorded medium on which image encoding program and image decoding program are recorded
US7031531B1 (en) * 1999-08-27 2006-04-18 Sharp Kabushiki Kaisha Image encoding device and method therefor, image decoding apparatus and method therefor, and computer-readable recorded medium on which image encoding program and image decoding program are recorded
JP2003116005A (en) * 2001-10-04 2003-04-18 Canon Inc Code amount control unit and method therefor
JP2003304405A (en) * 2002-04-10 2003-10-24 Ricoh Co Ltd Image processing method and image processor
KR20040080540A (en) * 2003-03-12 2004-09-20 유철 Layered DCT coding method using bit plain
US20050238245A1 (en) * 2004-04-23 2005-10-27 Shun-Yen Yao Image compression/decompression apparatus and method
JP2004312773A (en) * 2004-06-11 2004-11-04 Sharp Corp Image encoder and image decoder
CN1725861A (en) * 2004-07-21 2006-01-25 三星电子株式会社 The equipment of the method for compressing/decompressing image and this method of use
CN101938650A (en) * 2009-06-25 2011-01-05 夏普株式会社 Image compressing apparatus, image compressing method, image decompressing apparatus, image decompressing method, image forming apparatus
US20120189215A1 (en) * 2011-01-20 2012-07-26 Orions Digital Technologies, Inc. Image data processing for more efficient compression
CN108702514A (en) * 2016-03-09 2018-10-23 华为技术有限公司 A kind of high dynamic range images processing method and processing device
CN106997607A (en) * 2017-03-16 2017-08-01 四川大学 Video bits face chaos encrypting method based on compressed sensing
US20180373964A1 (en) * 2017-06-27 2018-12-27 Hitachi, Ltd. Information processing apparatus and processing method for image data
CN110517329A (en) * 2019-08-12 2019-11-29 北京邮电大学 A kind of deep learning method for compressing image based on semantic analysis
CN111049527A (en) * 2019-12-23 2020-04-21 云南大学 Image coding and decoding method
CN111050174A (en) * 2019-12-27 2020-04-21 清华大学 Image compression method, device and system
WO2021135715A1 (en) * 2019-12-31 2021-07-08 武汉Tcl集团工业研究院有限公司 Image compression method and apparatus
WO2021155832A1 (en) * 2020-02-07 2021-08-12 华为技术有限公司 Image processing method and related device
CN112785661A (en) * 2021-01-12 2021-05-11 山东师范大学 Depth semantic segmentation image compression method and system based on fusion perception loss
CN113012087A (en) * 2021-03-31 2021-06-22 中南大学 Image fusion method based on convolutional neural network
CN113673420A (en) * 2021-08-19 2021-11-19 清华大学 Target detection method and system based on global feature perception

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
ZHIZHENG ZHANG et al.: "Learned Scalable Image Compression with Bidirectional Context Disentanglement Network", 2019 IEEE International Conference on Multimedia and Expo (ICME)
代毅; 肖国强; 李占闯: "A bit-plane-based face recognition algorithm in the compressed domain", Computer Engineering and Applications, no. 01
王卫, 蔡德钧, 万发贯: "Application of neural networks in image coding", Acta Electronica Sinica, no. 07
田宏; 杨树刚: "True-color image retrieval algorithm based on significant bit planes", Journal of Computer-Aided Design & Computer Graphics, no. 02
胡粲彬; 刘方; 周军红: "SAR image screening based on bit-plane features", Journal of Computer Applications, no. 11
花兴艳; 葛耀林: "Infrared image enhancement method based on bit-plane slicing and rainbow pseudo-color coding", Infrared, no. 05

Also Published As

Publication number Publication date
CN114630125B (en) 2023-10-27

Similar Documents

Publication Publication Date Title
Castro et al. End-to-end incremental learning
CN112541503B (en) Real-time semantic segmentation method based on context attention mechanism and information fusion
Liu et al. FDDWNet: a lightweight convolutional neural network for real-time semantic segmentation
CN113159051B (en) Remote sensing image lightweight semantic segmentation method based on edge decoupling
CN113409191B (en) Lightweight image super-resolution method and system based on attention feedback mechanism
CN111091130A (en) Real-time image semantic segmentation method and system based on lightweight convolutional neural network
CN110580704A (en) ET cell image automatic segmentation method and system based on convolutional neural network
CN111160350B (en) Portrait segmentation method, model training method, device, medium and electronic equipment
CN111144329A (en) Light-weight rapid crowd counting method based on multiple labels
CN111862127A (en) Image processing method, image processing device, storage medium and electronic equipment
CN115457498A (en) Urban road semantic segmentation method based on double attention and dense connection
CN114821058A (en) Image semantic segmentation method and device, electronic equipment and storage medium
CN113971735A (en) Depth image clustering method, system, device, medium and terminal
CN111160378A (en) Depth estimation system based on single image multitask enhancement
CN115496919A (en) Hybrid convolution-transformer framework based on window mask strategy and self-supervision method
CN115082306A (en) Image super-resolution method based on blueprint separable residual error network
CN114492581A (en) Method for classifying small sample pictures based on transfer learning and attention mechanism element learning application
CN117152435A (en) Remote sensing semantic segmentation method based on U-Net3+
CN114630125B (en) Vehicle image compression method and system based on artificial intelligence and big data
CN116402995A (en) Lightweight neural network-based ancient architecture point cloud semantic segmentation method and system
CN114119627B (en) High-temperature alloy microstructure image segmentation method and device based on deep learning
CN115861841A (en) SAR image target detection method combined with lightweight large convolution kernel
CN115953386A (en) MSTA-YOLOv 5-based lightweight gear surface defect detection method
CN115471901A (en) Multi-pose face frontization method and system based on generation of confrontation network
CN113313721B (en) Real-time semantic segmentation method based on multi-scale structure

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant