CN112348914B - Deep learning image compressed sensing method and system based on Internet of vehicles - Google Patents

Deep learning image compressed sensing method and system based on Internet of vehicles

Info

Publication number
CN112348914B
CN112348914B
Authority
CN
China
Prior art keywords
image
convolution layer
internet
vehicles
residual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011260561.7A
Other languages
Chinese (zh)
Other versions
CN112348914A (en)
Inventor
尹珠
吴仲城
张俊
李芳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Institutes of Physical Science of CAS
Original Assignee
Hefei Institutes of Physical Science of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei Institutes of Physical Science of CAS filed Critical Hefei Institutes of Physical Science of CAS
Priority to CN202011260561.7A priority Critical patent/CN112348914B/en
Publication of CN112348914A publication Critical patent/CN112348914A/en
Application granted granted Critical
Publication of CN112348914B publication Critical patent/CN112348914B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4046Scaling the whole image or part thereof using neural networks
    • G06T5/70
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention relates to a deep learning image compressed sensing method based on the Internet of vehicles, which comprises: performing block-wise compressed sampling, through a measurement matrix, on pictures acquired by an intelligent terminal of the Internet of vehicles to obtain a measurement data set Y; transmitting the measurement data set Y to a cloud server wirelessly and importing it into a trained deep neural network model, where a fully connected layer recovers an intermediate reconstruction to obtain a preliminary reconstructed image X′; reconstructing the image through a residual denoising network to obtain a final reconstructed image X″; and computing a loss function against the original image, which is back-propagated to update the network weights and thus optimize the model. The invention also discloses a deep learning image compressed sensing system based on the Internet of vehicles. The invention greatly reduces the amount of data transmitted, relieves bandwidth pressure, substantially saves traffic cost and storage space, supports real-time recovery and response, and achieves good results in both the accuracy of the reconstructed image and the suppression of image noise.

Description

Deep learning image compressed sensing method and system based on Internet of vehicles
Technical Field
The invention relates to the technical field of deep learning, in particular to a deep learning image compressed sensing method and system based on the Internet of vehicles.
Background
With the construction of the Internet of vehicles cloud data center and comprehensive service platform and their industrial applications, massive data are collected, including vehicle visual data, position data, vehicle state and fault data, environment data, driver behavior data such as speed and acceleration, and road network data, providing a basic data source for analyzing the behavior of moving objects. In building an integrated terminal-cloud architecture for Internet of vehicles big data, secure data transmission, traffic tariffs and efficiency are factors that urgently need to be considered.
Images and video are important carriers for reproducing and recording human visual information. They are generally represented as discrete values after high-density sampling, so the data volume is huge, and transmitting them directly without compression coding consumes a large amount of network bandwidth and storage space. Instability of the network environment and channel quality can also cause data errors or loss during transmission, and problems such as storage-medium aging and machine failure require additional redundancy on top of the original data to achieve reliable storage. Efficient coding, robust transmission and highly reliable storage of image and video therefore play a vital role in the transfer, communication and storage of information. When massive data must be transmitted in real time and the terminal-cloud platform is constrained by cloud computing and network transmission costs, the key research question is how to guarantee data security and reduce the amount of transmitted data while pushing the transmission speed to its limit.
Compressed sensing reconstruction is a challenging problem in the field of signal processing. Because a neural network can fully learn the prior information contained in images, reconstructing compressed sensing signals with neural networks has become a popular approach in recent years. Compressed sensing is a relatively new technique, but in big-data environments its inherent drawbacks, such as reliance on expert knowledge, poor transferability and limited accuracy, can no longer be ignored. Unstructured data, which make up the major part of big data, often follow unknown and changing patterns, so prior knowledge cannot be obtained and an explicit mathematical model is difficult to build; more intelligent data mining techniques therefore need to be developed.
Combining deep learning with compressed sensing allows visual data in the Internet of vehicles to be efficiently compressed, collected and reconstructed in the cloud. When a deep neural network is used for large-scale cloud visual analysis, the computing load, transmission load and generalization capability of the cloud server can be well balanced under a fixed network bandwidth, and cloud data can be monitored by intelligent terminals in real time over the network. However, because compressed sensing reconstruction is performed by a neural network, when a compressed signal of large dimension is fed into the network, the fully connected layer and a very deep convolutional neural network require a huge number of parameters, which easily leads to gradient explosion or vanishing gradients, so high-quality reconstruction cannot be achieved.
Disclosure of Invention
The invention aims to provide a deep learning image compressed sensing method based on the Internet of vehicles, in which the massive data of the Internet of vehicles vehicle-mounted terminal are compressed, transmitted to the cloud and then to the mobile terminal, greatly reducing the amount of data transmitted, relieving bandwidth pressure, and substantially saving traffic cost and storage space.
In order to achieve the above purpose, the present invention adopts the following technical scheme: a deep learning image compressed sensing method based on the Internet of vehicles comprises: performing block-wise compressed sampling, through a measurement matrix, on pictures acquired by an intelligent terminal of the Internet of vehicles to obtain a measurement data set Y; transmitting the measurement data set Y to a cloud server wirelessly and importing it into a trained deep neural network model, where a fully connected layer recovers an intermediate reconstruction to obtain a preliminary reconstructed image X′; reconstructing the image through a residual denoising network to obtain a final reconstructed image X″; and computing a loss function against the original image, which is back-propagated to update the network weights and thus optimize the model.
Performing block-wise compressed sampling on the pictures acquired by the intelligent terminal of the Internet of vehicles through the measurement matrix specifically comprises: dividing a picture acquired by the intelligent terminal of the Internet of vehicles into non-overlapping 33×33 blocks, zero-padding edge blocks smaller than 33×33 to obtain the set of image blocks, i.e. the original real image X, and applying compressed sampling to each divided block with a standard normally distributed measurement matrix to obtain the measurement data set Y.
Transmitting the measurement data set Y to a cloud server wirelessly and importing it into a trained deep neural network model, where a fully connected layer recovers an intermediate reconstruction, specifically comprises: the measurement data set Y is transmitted to the cloud server over a wireless link, and the deep neural network model then restores the low-dimensional measurement data set Y to the high-dimensional data before sampling through a fully connected layer and a Reshape operation, yielding the preliminary reconstructed image X′.
Reconstructing the image through the residual denoising network specifically means inputting the preliminary reconstructed image X′ into a residual denoising network built from convolution layers and further reconstructing and restoring the image; the output of the residual denoising network is the final reconstructed image X″.
Computing the loss function against the original image and back-propagating it to update the network weights for optimization specifically means: the loss function calculates the mean square errors of the preliminary reconstruction X′ and of the final reconstructed image X″ with respect to the original real image X, and these errors are back-propagated to update the parameters of the neural network.
The residual denoising network comprises 4 residual denoising modules, each of which consists of 4 convolution layers, 2 activation functions, 4 batch normalization functions and 1 soft threshold function; the 4 convolution layers are a first, a second, a third and a fourth convolution layer. The preliminary reconstructed image X′ is fed to the first convolution layer with a 3×3 kernel, whose input has 1 channel and whose output has 32 channels; the output of the first convolution layer is fed to the second convolution layer with a 3×3 kernel, whose input has 32 channels, and the output of the second convolution layer is denoised by soft thresholding with threshold θ = 0.01. The output of the second convolution layer is then fed to the third convolution layer with a 3×3 kernel, whose input has 32 channels, and the output of the third convolution layer is fed to the fourth convolution layer with a 3×3 kernel, whose input has 32 channels and whose output has 1 channel. The sum of the output of the fourth convolution layer and the preliminary reconstructed image X′ is the output of the residual denoising module, i.e. an intermediate reconstructed image, which serves as the input of the next residual denoising module for further reconstruction and restoration, and the output of the residual denoising network is finally obtained as the final reconstructed image X″.
Another object of the present invention is to provide a system for the deep learning image compressed sensing method based on the Internet of vehicles, comprising:
an intelligent terminal of the Internet of vehicles, which monitors and extracts the massive video image data of the Internet of vehicles in real time, performs compressed sampling through an encoder and transmits the data to a cloud server in real time over a wireless network;
the cloud server, which recovers and reconstructs the data through the deep network and then performs the corresponding task analysis of driving behavior; and
the intelligent terminal, which responds in real time to the task analysis of the cloud server, so that unsafe driving behaviors are avoided and the aim of safe traveling is achieved.
According to the technical scheme, the beneficial effects of the invention are as follows: the massive data of the Internet of vehicles vehicle-mounted terminal are compressed and transmitted to the cloud in real time and then to the mobile terminal, which greatly reduces the amount of data transmitted, relieves bandwidth pressure, substantially saves traffic cost and storage space, supports real-time recovery and response, and achieves good results in both the accuracy of the reconstructed image and the suppression of image noise.
Drawings
FIG. 1 is a schematic diagram of the process of the present invention;
FIG. 2 is a schematic diagram of a deep neural network model;
fig. 3 is a schematic diagram of a residual denoising module.
Detailed Description
As shown in fig. 1, in a deep learning image compressed sensing method based on the Internet of vehicles, the pictures acquired by an intelligent terminal of the Internet of vehicles are subjected to block-wise compressed sampling through a measurement matrix to obtain a measurement data set Y; the measurement data set Y is transmitted to a cloud server wirelessly and imported into a trained deep neural network model, where a fully connected layer recovers an intermediate reconstruction to obtain a preliminary reconstructed image X′; the image is reconstructed through a residual denoising network to obtain a final reconstructed image X″; and a loss function is computed against the original image and back-propagated to update the network weights and thus optimize the model.
Performing block-wise compressed sampling on the pictures acquired by the intelligent terminal of the Internet of vehicles through the measurement matrix specifically comprises: dividing a picture acquired by the intelligent terminal of the Internet of vehicles into non-overlapping 33×33 blocks, zero-padding edge blocks smaller than 33×33 to obtain the set of image blocks, i.e. the original real image X, and applying compressed sampling to each divided block with a standard normally distributed measurement matrix to obtain the measurement data set Y.
Transmitting the measurement data set Y to a cloud server wirelessly and importing it into a trained deep neural network model, where a fully connected layer recovers an intermediate reconstruction, specifically comprises: the measurement data set Y is transmitted to the cloud server over a wireless link, and the deep neural network model then restores the low-dimensional measurement data set Y to the high-dimensional data before sampling through a fully connected layer and a Reshape operation, yielding the preliminary reconstructed image X′.
Reconstructing the image through the residual denoising network specifically means inputting the preliminary reconstructed image X′ into a residual denoising network built from convolution layers and further reconstructing and restoring the image; the output of the residual denoising network is the final reconstructed image X″.
Computing the loss function against the original image and back-propagating it to update the network weights for optimization specifically means: the loss function calculates the mean square errors of the preliminary reconstruction X′ and of the final reconstructed image X″ with respect to the original real image X, and these errors are back-propagated to update the parameters of the neural network.
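A minimal sketch of how such a loss could be implemented is given below. PyTorch is assumed, and summing the two mean-square-error terms with equal weight is an assumption; the description only states that both errors are computed and back-propagated.

```python
import torch.nn.functional as F

def reconstruction_loss(x_prelim, x_final, x_true):
    """MSE of the preliminary reconstruction X' and the final reconstruction X''
    against the original real image X. The two terms are summed here (assumed
    weighting); calling backward() on the result propagates the error to all
    network parameters."""
    return F.mse_loss(x_prelim, x_true) + F.mse_loss(x_final, x_true)
```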
As shown in fig. 3, the residual denoising network comprises 4 residual denoising modules, each of which consists of 4 convolution layers, 2 activation functions, 4 batch normalization functions and 1 soft threshold function; the 4 convolution layers are a first, a second, a third and a fourth convolution layer. The preliminary reconstructed image X′ is fed to the first convolution layer with a 3×3 kernel, whose input has 1 channel and whose output has 32 channels; the output of the first convolution layer is fed to the second convolution layer with a 3×3 kernel, whose input has 32 channels, and the output of the second convolution layer is denoised by soft thresholding with threshold θ = 0.01. The output of the second convolution layer is then fed to the third convolution layer with a 3×3 kernel, whose input has 32 channels, and the output of the third convolution layer is fed to the fourth convolution layer with a 3×3 kernel, whose input has 32 channels and whose output has 1 channel. The sum of the output of the fourth convolution layer and the preliminary reconstructed image X′ is the output of the residual denoising module, i.e. an intermediate reconstructed image, which serves as the input of the next residual denoising module for further reconstruction and restoration, and the output of the residual denoising network is finally obtained as the final reconstructed image X″.
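For illustration, a sketch of one residual denoising module and the four-module network in PyTorch follows. The layer counts, channel widths, 3×3 kernels, soft threshold θ = 0.01 and the additive skip connection are taken from the description above; the use of ReLU as the two activation functions and the exact placement of the batch normalization layers are assumptions, since the text only gives their numbers.

```python
import torch
import torch.nn as nn

class ResidualDenoisingModule(nn.Module):
    """Four 3x3 convolutions (1->32->32->32->1 channels), four batch-norm layers,
    two ReLU activations, a fixed soft threshold after the second convolution,
    and a skip connection adding the module input to the fourth convolution's output."""
    def __init__(self, channels: int = 32, theta: float = 0.01):
        super().__init__()
        self.conv1 = nn.Conv2d(1, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv3 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv4 = nn.Conv2d(channels, 1, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.bn2 = nn.BatchNorm2d(channels)
        self.bn3 = nn.BatchNorm2d(channels)
        self.bn4 = nn.BatchNorm2d(1)
        self.relu = nn.ReLU(inplace=True)
        self.theta = theta

    def soft_threshold(self, x):
        # Soft thresholding: shrink every value toward zero by theta.
        return torch.sign(x) * torch.clamp(torch.abs(x) - self.theta, min=0.0)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))     # 1 -> 32 channels
        out = self.bn2(self.conv2(out))              # 32 -> 32 channels
        out = self.soft_threshold(out)               # denoising, theta = 0.01
        out = self.relu(self.bn3(self.conv3(out)))   # 32 -> 32 channels
        out = self.bn4(self.conv4(out))              # 32 -> 1 channel
        return out + x                               # residual (skip) connection


class ResidualDenoisingNetwork(nn.Module):
    """Four residual denoising modules chained together."""
    def __init__(self, num_modules: int = 4):
        super().__init__()
        self.blocks = nn.ModuleList([ResidualDenoisingModule() for _ in range(num_modules)])

    def forward(self, x):
        for block in self.blocks:
            x = block(x)
        return x
```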
The deep learning image compressed sensing system based on the Internet of vehicles comprises:
an intelligent terminal of the Internet of vehicles, which monitors and extracts the massive video image data of the Internet of vehicles in real time, performs compressed sampling through an encoder and transmits the data to a cloud server in real time over a wireless network;
the cloud server, which recovers and reconstructs the data through the deep network and then performs the corresponding task analysis of driving behavior; and
the intelligent terminal, which responds in real time to the task analysis of the cloud server, so that unsafe driving behaviors are avoided and the aim of safe traveling is achieved.
As shown in fig. 2, image information is collected by a vehicle-mounted terminal such as a dashboard camera. The original large-size image x ∈ R^(N×N) is divided into a series of relatively small square images of equal size, the elements of each square image are rearranged into an n×1 vector, and each of the divided small images is compressed and sampled with a standard normally distributed measurement matrix Φ ∈ R^(m×n), m < n, where n is the dimension of the signal data before compressed sampling and m is the dimension after compressed sampling. The original large-size image is divided into 33×33 image blocks, so the block dimension n is 1089; the dimension m of the compressed signal data is 272, giving a compression ratio of 25%. That is, y_i = Φx_i, i = 1, 2, 3, … (Φ ∈ R^(272×1089), x_i ∈ R^(1089×1)), which yields the compressed sampled data y_1, y_2, … used as training data; the training set is formed by the measurements y_1, y_2, … and the original image blocks x_1, x_2, … in one-to-one correspondence.
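A minimal NumPy sketch of this block-wise compressed sampling step is shown below; the function and variable names are illustrative, the random seed is arbitrary, and the example frame size is hypothetical. It only reproduces the zero-padding, 33×33 blocking and the per-block measurement y_i = Φx_i described above.

```python
import numpy as np

BLOCK = 33                  # block size, so n = 33 * 33 = 1089
N_DIM = BLOCK * BLOCK
M_DIM = 272                 # measurement dimension m, i.e. roughly a 25% sampling ratio

rng = np.random.default_rng(0)
Phi = rng.standard_normal((M_DIM, N_DIM))   # standard normally distributed measurement matrix

def block_compressed_sampling(image):
    """Zero-pad the image so its sides are multiples of 33, split it into
    non-overlapping 33x33 blocks x_i (flattened to 1089-vectors) and measure
    each block as y_i = Phi @ x_i.  Returns (Y, X) with one block per row."""
    h, w = image.shape
    padded = np.pad(image, ((0, (-h) % BLOCK), (0, (-w) % BLOCK)))   # pad edges with 0
    X = (padded.reshape(padded.shape[0] // BLOCK, BLOCK, -1, BLOCK)
               .swapaxes(1, 2)
               .reshape(-1, N_DIM))         # each row is one original block x_i
    Y = X @ Phi.T                           # each row is one measurement y_i (length 272)
    return Y, X

# Example: a hypothetical 100x120 grayscale frame from the vehicle terminal
frame = rng.random((100, 120))
Y, X = block_compressed_sampling(frame)
print(Y.shape, X.shape)                     # (16, 272) (16, 1089)
```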
The training data are input into the deep neural network. The compressed sampled data y_1, y_2, … of the original image blocks each pass through the first layer of the deep neural network, a linear fully connected layer whose input size equals the dimension m of the compressed signal data and whose output size equals the dimension n of the original signal data, with no activation function. The compressed sampled data y_1, y_2, … (272×1) of the original image blocks are thereby recovered from the low dimension back to the high pre-sampling dimension x′_1, x′_2, … (1089×1), and a Reshape operation restores each to a 33×33 block.
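A sketch of this initial recovery stage in PyTorch might look as follows; the class name is illustrative and the bias term is an assumption, since the text only specifies a linear fully connected layer without an activation function.

```python
import torch
import torch.nn as nn

class InitialReconstruction(nn.Module):
    """Fully connected layer with m = 272 inputs, n = 1089 outputs and no
    activation, followed by a Reshape to a 33x33 block (the preliminary X')."""
    def __init__(self, m: int = 272, n: int = 1089, block: int = 33):
        super().__init__()
        self.fc = nn.Linear(m, n)
        self.block = block

    def forward(self, y):
        # y: (batch, 272) block measurements -> (batch, 1, 33, 33) preliminary blocks
        return self.fc(y).view(-1, 1, self.block, self.block)
```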
Because the large-size image is divided into small blocks before being compressed and reconstructed, the number of parameters in the fully connected layer is reduced, gradient explosion or vanishing during training is avoided, and reconstruction of the compressed image is achieved.
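For illustration (the frame size below is hypothetical; the block dimensions are those of the embodiment): at the same 25% sampling ratio, a single fully connected layer acting on a whole 330×330 frame would need about (0.25 × 330^2) × 330^2 ≈ 2.96 × 10^9 weights, whereas the block-wise layer needs only m × n = 272 × 1089 ≈ 2.96 × 10^5 weights, shared across all 100 blocks of that frame, roughly four orders of magnitude fewer.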
Both the vehicle-mounted terminal, i.e. the dashboard camera, and the cloud are provided with the functional processing model. Computation-heavy data tasks acquired by the vehicle-mounted terminal are compressed by the image compression encoding model and uploaded to the cloud server, where they are decoded and recovered by the image compression decoding model; the cloud obtains the compressed data and performs post-recovery processing, which effectively reduces the transmission delay, improves the overall response speed and preserves the accuracy of the model. The cloud functional processing model then carries out analysis such as image recognition and target detection and feeds the result back to the vehicle-mounted terminal or the mobile intelligent terminal for the corresponding scene response. In this way, the massive data of the Internet of vehicles vehicle-mounted terminal are compressed, transmitted to the cloud in real time and then to the mobile terminal, which greatly saves traffic and cost, supports real-time recovery and response, and achieves good results in both the accuracy of the reconstructed image and the suppression of image noise.
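Putting the earlier sketches together, the cloud-side reconstruction and one training step for a single frame could be outlined as follows. All names come from the illustrative sketches above (block_compressed_sampling, InitialReconstruction, ResidualDenoisingNetwork, reconstruction_loss) and are assumptions, not the patent's own code; the snippet assumes those definitions are in scope.

```python
import torch

# Terminal side: block-wise compressed sampling (NumPy sketch above)
Y_np, X_np = block_compressed_sampling(frame)

# Cloud side: preliminary reconstruction + residual denoising (PyTorch sketches above)
initial = InitialReconstruction()
denoiser = ResidualDenoisingNetwork()

Y_t = torch.from_numpy(Y_np).float()
x_prelim = initial(Y_t)                # (num_blocks, 1, 33, 33) preliminary X'
x_final = denoiser(x_prelim)           # (num_blocks, 1, 33, 33) final X''

# Training step: MSE of both stages against the original blocks X
x_true = torch.from_numpy(X_np).float().view(-1, 1, 33, 33)
loss = reconstruction_loss(x_prelim, x_final, x_true)
loss.backward()                        # back-propagate; an optimizer step would then update the weights
```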
In summary, the method and the system compress the massive data of the Internet of vehicles vehicle-mounted terminal, transmit them to the cloud in real time and then to the mobile terminal, which greatly reduces the amount of data transmitted, relieves bandwidth pressure, substantially saves traffic cost and storage space, supports real-time recovery and response, and achieves good results in both the accuracy of the reconstructed image and the suppression of image noise.

Claims (4)

1. A deep learning image compressed sensing method based on the Internet of vehicles, characterized in that the method comprises: performing block-wise compressed sampling, through a measurement matrix, on pictures acquired by an intelligent terminal of the Internet of vehicles to obtain a measurement data set Y; transmitting the measurement data set Y to a cloud server wirelessly and importing it into a trained deep neural network model, where a fully connected layer recovers an intermediate reconstruction to obtain a preliminary reconstructed image X′; reconstructing the image through a residual denoising network to obtain a final reconstructed image X″; and computing a loss function against the original image, which is back-propagated to update the network weights and thus optimize the model;
the method for carrying out block compression sampling on the pictures acquired by the intelligent terminal of the Internet of vehicles through the measurement matrix specifically comprises the following steps: dividing a picture acquired by an intelligent terminal of the Internet of vehicles into 33 multiplied by 33 non-overlapping picture blocks, supplementing the blocks with edges less than 33 multiplied by 33 with 0 to obtain an image block set, namely an original real image X, and respectively performing compressed sampling processing on the divided images by using a standard normal distribution measurement matrix to obtain a measurement data set Y;
the image reconstruction through the residual denoising network specifically refers to inputting a preliminary reconstructed image X 'into a residual denoising network formed by a convolution layer, and further reconstructing and recovering the image to obtain an output of the residual denoising network, namely a final reconstructed image X';
the residual denoising network comprises 4 residual denoising modules, wherein each residual denoising module consists of 4 convolution layers, 2 activation functions, 4 batch processing functions and 1 soft threshold function, the 4 convolution layers are respectively a first convolution layer, a second convolution layer, a third convolution layer and a fourth convolution layer, a preliminary reconstructed image X' is connected with the first convolution layer with the convolution kernel size of 3 multiplied by 3, the input of the first convolution layer is 1 channel, the output of the first convolution layer is 32 channels, the output of the first convolution layer is connected with the second convolution layer with the convolution kernel size of 3 multiplied by 3, the input of the second convolution layer is 32 channels, the output of the second convolution layer is subjected to denoising processing through the soft threshold, and the soft threshold theta is 0.01; the output of the second convolution layer is connected with a third convolution layer with the convolution kernel size of 3 multiplied by 3, the input of the third convolution layer is 32 channels, the output of the third convolution layer is connected with a fourth convolution layer with the convolution kernel size of 3 multiplied by 3, the input of the fourth convolution layer is 32 channels, the output of the fourth convolution layer is 1 channel, the sum of the output of the fourth convolution layer and the primary reconstructed image X 'is used as the output of a residual denoising module, namely an intermediate reconstructed image, the intermediate reconstructed image is used as the input of a next residual denoising module, the images are further reconstructed and restored, and finally the output of a residual denoising network is obtained, namely a final reconstructed image X'.
2. The deep learning image compressed sensing method based on the Internet of vehicles according to claim 1, characterized in that: transmitting the measurement data set Y to a cloud server wirelessly and importing it into a trained deep neural network model, where a fully connected layer recovers an intermediate reconstruction, specifically comprises: the measurement data set Y is transmitted to the cloud server over a wireless link, and the deep neural network model then restores the low-dimensional measurement data set Y to the high-dimensional data before sampling through a fully connected layer and a Reshape operation, yielding the preliminary reconstructed image X′.
3. The deep learning image compressed sensing method based on the Internet of vehicles according to claim 1, characterized in that: computing the loss function against the original image and back-propagating it to update the network weights for optimization specifically means: the loss function calculates the mean square errors of the preliminary reconstruction X′ and of the final reconstructed image X″ with respect to the original real image X, and these errors are back-propagated to update the parameters of the neural network.
4. A system for implementing the deep learning image compressed sensing method based on the Internet of vehicles as claimed in any one of claims 1 to 3, characterized in that it comprises:
an intelligent terminal of the Internet of vehicles, which monitors and extracts the massive video image data of the Internet of vehicles in real time, performs compressed sampling through an encoder and transmits the data to a cloud server in real time over a wireless network;
the cloud server, which recovers and reconstructs the data through the deep network and then performs the corresponding task analysis of driving behavior; and
the intelligent terminal, which responds in real time to the task analysis of the cloud server, so that unsafe driving behaviors are avoided and the aim of safe traveling is achieved.
CN202011260561.7A 2020-11-12 2020-11-12 Deep learning image compressed sensing method and system based on Internet of vehicles Active CN112348914B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011260561.7A CN112348914B (en) 2020-11-12 2020-11-12 Deep learning image compressed sensing method and system based on Internet of vehicles

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011260561.7A CN112348914B (en) 2020-11-12 2020-11-12 Deep learning image compressed sensing method and system based on Internet of vehicles

Publications (2)

Publication Number Publication Date
CN112348914A CN112348914A (en) 2021-02-09
CN112348914B (en) 2023-08-18

Family

ID=74363611

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011260561.7A Active CN112348914B (en) 2020-11-12 2020-11-12 Deep learning image compressed sensing method and system based on Internet of vehicles

Country Status (1)

Country Link
CN (1) CN112348914B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112906870B (en) * 2021-03-17 2022-10-18 清华大学 Network model compression cloud service method and device based on small samples
CN113066033B (en) * 2021-04-19 2023-11-17 智领高新科技发展(北京)有限公司 Multi-stage denoising system and method for color image
CN113317798A (en) * 2021-05-20 2021-08-31 郑州大学 Electrocardiogram compressed sensing reconstruction system based on deep learning
CN113793340B (en) * 2021-08-31 2023-10-13 南京邮电大学 Image segmentation neural network and remote biological imaging method and system
CN114065193B (en) * 2021-11-23 2024-05-07 北京邮电大学 Deep learning security method applied to image task in edge cloud environment
CN114245140B (en) * 2021-11-30 2022-09-02 慧之安信息技术股份有限公司 Code stream prediction method and device based on deep learning
CN114374744B (en) * 2022-03-23 2022-11-01 北京鉴智科技有限公司 Data returning method and device, vehicle-mounted terminal and cloud server
CN117272688B (en) * 2023-11-20 2024-02-13 四川省交通勘察设计研究院有限公司 Compression and decompression method, device and system for structural mechanics simulation data

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110363204A (en) * 2019-06-24 2019-10-22 杭州电子科技大学 A kind of object expression method based on multitask feature learning
CN111798531A (en) * 2020-07-08 2020-10-20 南开大学 Image depth convolution compressed sensing reconstruction method applied to plant monitoring

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016165082A1 (en) * 2015-04-15 2016-10-20 中国科学院自动化研究所 Image stego-detection method based on deep learning

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110363204A (en) * 2019-06-24 2019-10-22 杭州电子科技大学 A kind of object expression method based on multitask feature learning
CN111798531A (en) * 2020-07-08 2020-10-20 南开大学 Image depth convolution compressed sensing reconstruction method applied to plant monitoring

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Global image compressed sensing reconstruction based on a multi-scale residual network; 涂云轩; 冯玉田; Industrial Control Computer (工业控制计算机), no. 07; full text *

Also Published As

Publication number Publication date
CN112348914A (en) 2021-02-09

Similar Documents

Publication Publication Date Title
CN112348914B (en) Deep learning image compressed sensing method and system based on Internet of vehicles
CN112634276B (en) Lightweight semantic segmentation method based on multi-scale visual feature extraction
CN110517329B (en) Deep learning image compression method based on semantic analysis
CN110166779B (en) Video compression method based on super-resolution reconstruction
CN111462013B (en) Single-image rain removing method based on structured residual learning
CN110099280B (en) Video service quality enhancement method under limitation of wireless self-organizing network bandwidth
CN109451308A (en) Video compression method and device, electronic equipment and storage medium
CN111460999A (en) Low-altitude aerial image target tracking method based on FPGA
CN109871778B (en) Lane keeping control method based on transfer learning
CN109672885B (en) Video image coding and decoding method for intelligent monitoring of mine
CN114373023A (en) Point cloud geometric lossy compression reconstruction device and method based on points
CN113192084A (en) Machine vision-based highway slope micro-displacement deformation monitoring method
CN117078539A (en) CNN-transducer-based local global interactive image restoration method
CN116600119A (en) Video encoding method, video decoding method, video encoding device, video decoding device, computer equipment and storage medium
CN116309225A (en) Construction method of lightweight multispectral image fusion model based on convolutional neural network
CN116155873A (en) Cloud-edge collaborative image processing method, system, equipment and medium
CN115131673A (en) Task-oriented remote sensing image compression method and system
CN113346966A (en) Channel feedback method for unmanned aerial vehicle inspection communication subsystem of smart power grid
CN113949880A (en) Extremely-low-bit-rate man-machine collaborative image coding training method and coding and decoding method
CN113962400A (en) Wireless federal learning method based on 1bit compressed sensing
CN113240589A (en) Image defogging method and system based on multi-scale feature fusion
CN112150566A (en) Dense residual error network image compressed sensing reconstruction method based on feature fusion
CN111736999A (en) Neural network end cloud collaborative training system capable of reducing communication cost
CN110381076B (en) Single-band matrix type DEM data progressive refinement type transmission method and system
CN110753241B (en) Image coding and decoding method and system based on multiple description networks

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant