CN115271088A - Capacity-saving transmission and storage of sensor data - Google Patents

Publication number: CN115271088A
Authority: CN (China)
Prior art keywords: samples, image data, station, source station, quality
Legal status: Pending
Application number: CN202210457664.5A
Other languages: Chinese (zh)
Inventors: G·布劳特, 高见昌渡
Current Assignee: Robert Bosch GmbH
Original Assignee: Robert Bosch GmbH
Application filed by Robert Bosch GmbH
Publication of CN115271088A

Classifications

    • G06N 20/00 — Machine learning
    • H03M 7/3059 — Digital compression and data reduction techniques where the original information is represented by a subset or similar information, e.g. lossy compression
    • G06N 3/04 — Neural network architecture, e.g. interconnection topology
    • G06N 3/0455 — Auto-encoder networks; encoder-decoder networks
    • G06N 3/0475 — Generative networks
    • G06N 3/08 — Learning methods
    • G06N 3/094 — Adversarial learning
    • G06N 3/096 — Transfer learning

Abstract

A method for transmitting sensor data from a source station to a sink station via a network, having the following steps: transmitting samples of the sensor data at a first, high quality from the source station to the sink station via the network; converting the samples, by the sink station, to a second, lower quality with reduced data volume, and then back-calculating them to the first, high quality using a trained machine learning model, wherein the converted samples recreate the reduced-capacity version of the sensor data that the source station could have transmitted in place of the high-quality version; and, in response to the back-calculated samples being consistent with the samples received from the source station, instructing the source station, by the sink station, to transmit the image data to the sink station in the reduced-capacity version.

Description

Capacity-saving transmission and storage of sensor data
Technical Field
The present invention relates to the transmission via a network, and to the storage, of sensor data, such as image data recorded by a camera on or in a vehicle.
Background
Driver assistance systems and systems for at least partially autonomous driving increasingly use machine learning models for evaluating images from the vehicle environment. In order to train such a system, it is necessary to collect training images recorded by the vehicle under real traffic conditions.
The continuous recording and storage of images of the vehicle environment can also be used to preserve evidence, so that questions of fault can be clarified after an accident.
Cameras are also used, likewise for safety reasons, to monitor the interior of taxis or self-driving rental vehicles. For example, an assault involving a taxi driver and a passenger can be identified from the camera images, or the person responsible for soiling or damaging a rental vehicle can be determined.
High-quality image data has a large volume. Particularly when transmitting via a mobile radio network, the monthly costs depend strongly on the capacity quota available for data transmission. If the quota is exhausted, the speed is typically throttled to the point of practical unusability and/or a surcharge is levied.
Disclosure of Invention
Within the scope of the invention, a method for transmitting sensor data, in particular image data, from a source station to a sink station via a network has been developed. The image data may relate to still images or moving images (image sequences) of any imaging modality. In addition to camera images, thermal images, ultrasound images, radar images or lidar images, for example, may also be used here.
The method can also be applied, in particular, to sensor data that can be converted into image data. For example, various other sensor data can be converted into a frequency spectrum or a similar representation and then processed further as an image.
Samples of the sensor data at a first, high quality are transmitted from the source station to the sink station via the network. Equivalently, the sink station receives such samples of the sensor data at the first, high quality from the source station via the network. Samples of image data may be, for example, still images, video frames, or sequences of such frames.
The samples are converted by the sink station to a second, lower quality with reduced data volume, and then back-calculated to the first, high quality using a trained machine learning model. Here, "by the sink station" should not be understood restrictively to mean that the conversion and/or back-calculation must be performed on the same device that received the samples from the source station. Rather, it may suffice for the sink station to cause these actions to be performed on other devices. For example, a container or other execution environment in which the conversion and/or back-calculation is performed can be spun up briefly in the cloud. Moreover, the sink station need not be a physical computer at all. It may, for instance, be a cloud entity that receives a sample and then instantiates the container or execution environment. An AWS Lambda function, for example, can be used as such a cloud entity.
The converted samples recreate a reduced-capacity version of the sensor data that the source station could have transmitted instead of the first, high-quality version. This may be, for example, a version obtained by setting the sensor used for recording to a lower quality level. However, it may also be a version that the source station derives from sensor data originally captured at high quality, for example by downscaling, downsampling, or lossy compression.
In response to the back-calculated samples being consistent with the samples received from the source station, the sink station instructs the source station to transmit the sensor data to the sink station in the reduced-capacity version in the future. Bandwidth is thereby saved in the subsequent transmission of further sensor data, in particular image data.
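The decision logic described above can be sketched in a few lines. The following is a minimal illustration, not part of the patent: `degrade`, `upscale`, and `is_consistent` are hypothetical stand-ins for the conversion step (120), the trained model (5), and the application-specific consistency metric (140), here replaced by deliberately trivial toy implementations.

```python
def degrade(sample):
    """Stand-in for step 120: convert to lower quality (toy: integer quantization)."""
    return [float(int(x)) for x in sample]

def upscale(sample):
    """Stand-in for step 130: back-calculate high quality with the trained model.
    A real system would use a GAN generator or autoencoder decoder; here: identity."""
    return sample

def is_consistent(restored, original, tol=0.2):
    """Stand-in for step 140: application-specific consistency check."""
    return all(abs(r - o) <= tol for r, o in zip(restored, original))

def decide_mode(high_quality_sample):
    """Return the transmission mode the sink station instructs (step 150)."""
    restored = upscale(degrade(high_quality_sample))
    return "reduced" if is_consistent(restored, high_quality_sample) else "full"
```

If the toy back-calculation recovers the sample within the tolerance, the sink station would switch the source station to the reduced-capacity version; otherwise the source station keeps transmitting at high quality.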
It has been recognized that, with continuous transmission of sensor data from a source station to a sink station, considerable data capacity can be saved in this way. For example, if the machine learning model is able to correctly upgrade image data received from the source station at the lower quality back to high quality, the images can still be used at high quality for further processing. Furthermore, image data received at low quality can also be archived in a space-saving manner. In video surveillance, for example, a retention period is typically prescribed during which the recorded images must be archived, even though these images are in fact rarely accessed, and then only in connection with a specific incident.
To achieve this effect, the machine learning model need not be able to improve the quality of all received image data samples. Even if this succeeds only for a portion of the samples, data capacity is already saved. Those samples that still have to be delivered by the source station at high quality can then be stored, for example, alongside samples that are delivered at low quality and can be back-calculated by the machine learning model.
In this context, what counts as "high quality" and "lower quality", and in what respect the two quality levels differ, depends on the context of the given application. This context determines, for example, which image information is absolutely necessary for the further processing of the image data and which image information can be discarded. For example, compared to image data of the first, high quality, image data of the second, lower quality may have
a lower pixel resolution, and/or
a lower frame rate, and/or
a lower color depth,
and/or the image data of the second, lower quality may be lossy-compressed to a greater degree than the image data of the first, high quality.
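As a concrete illustration of two of these degradations (freely invented, not taken from the patent), the following sketch reduces pixel resolution by block averaging and color depth by re-quantization; the function names and factors are arbitrary choices.

```python
import numpy as np

def reduce_resolution(img, factor=2):
    """Downscale by block averaging: each factor x factor block becomes one pixel.
    Assumes factor divides both image dimensions."""
    h, w, c = img.shape
    return img.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

def reduce_color_depth(img, bits=4):
    """Quantize 8-bit channels down to the given bit depth."""
    step = 2 ** (8 - bits)
    return (img // step) * step

img = np.arange(8 * 8 * 3, dtype=np.uint8).reshape(8, 8, 3)  # dummy 8x8 RGB image
small = reduce_resolution(img)       # 4x4x3: one quarter of the pixels
coarse = reduce_color_depth(img)     # same size, but only 16 levels per channel
```

Both transforms reduce the data volume: downscaling by the square of the factor, re-quantization by allowing a shorter code per channel value.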
For example, when back-calculating to a higher frame rate, an intermediate frame can be inserted between every two frames of the lower-quality image data, correctly interpolating how the scene shown in the image data evolves between those two frames.
For example, the high-quality image data may be in color, while the lower-quality image data exists only in grayscale. In the back-calculation, the grayscale images can be colorized.
For example, lower-quality image data may exhibit quantization artifacts of lossy compression (e.g., according to the H.264, H.265, VP8, or VP9 codecs). The back-calculation can remove these artifacts.
A machine learning model is generally understood to be a model that embodies, with adaptable parameters, a function with high generalization power. In particular, the machine learning model may comprise, or be, an artificial neural network (ANN). Particularly advantageously, the generator of a generative adversarial network (GAN), a recurrent neural network (RNN), or the decoder of an encoder-decoder arrangement trained as an autoencoder is selected as the machine learning model.
The generator of a GAN is typically trained to produce data, here sensor data and in particular image data, of a specific target domain. At the same time, a discriminator is trained to distinguish the sensor data of the target domain produced by the generator from "real" sensor data of the target domain. Here, the target domain is the domain of high-quality sensor data. The discriminator thus attempts to distinguish whether a high-quality sample fed to it was actually recorded at that high quality or was back-calculated from a lower-quality sample. In this context, the additional check of whether the back-calculated sample actually coincides with the high-quality sample supplied by the source station represents a "conditioning" of the GAN on the scene to which the sample relates: what is sought is not arbitrary high-quality sensor data, but data relating to the same scene, since it is precisely this scene from which conclusions are to be drawn when the sensor data are processed further.
An autoencoder comprises an encoder and a decoder. The encoder maps an input sample of sensor data to a representation in a "latent space" whose dimensionality is significantly lower than that of the raw sensor data. The decoder attempts to reconstruct the raw sensor data from this representation. Encoder and decoder are trained jointly towards the goal of optimal reconstruction. The encoder thereby learns to select, when compressing the sensor data into the representation, the information that matters most for the reconstruction. The decoder learns to make the best possible use of the sparse information in the representation.
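This principle can be illustrated with a deliberately tiny linear autoencoder trained by gradient descent. This is an illustrative sketch only, not the patent's model; all dimensions, the synthetic data, and the learning rate are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 16, 4                                   # samples, data dim, latent dim
X = rng.normal(size=(n, k)) @ rng.normal(size=(k, d))  # data living on a 4-dim subspace

W_enc = rng.normal(scale=0.1, size=(d, k))  # encoder: data -> latent space
W_dec = rng.normal(scale=0.1, size=(k, d))  # decoder: latent space -> reconstruction
lr = 1e-2

def recon_loss():
    Z = X @ W_enc                         # compress into the latent representation
    return ((X - Z @ W_dec) ** 2).mean()  # reconstruction error to be minimized

loss_before = recon_loss()
for _ in range(500):                      # joint gradient descent on encoder and decoder
    Z = X @ W_enc
    R = Z @ W_dec - X                     # residual of the reconstruction
    g_dec = (2.0 / (n * d)) * Z.T @ R
    g_enc = (2.0 / (n * d)) * X.T @ (R @ W_dec.T)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc
loss_after = recon_loss()
```

Because the synthetic data lie on a four-dimensional subspace, the four-dimensional latent representation can in principle capture them completely, and training drives the reconstruction error down accordingly.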
After the source station has been instructed to transmit the sensor data in the reduced-capacity version in the future, it can be checked, for example periodically or aperiodically, whether the back-calculation still works correctly. To this end, the sink station can, for example, once again instruct the source station to transmit a sample of the sensor data at high quality. It can then be checked, in the same way as before, whether an artificially degraded version of this sample can again be successfully back-calculated.
In an advantageous embodiment, in response to the back-calculated samples not being consistent with the samples received from the source station, the machine learning model is trained towards the goal of correctly reconstructing, in the future, the samples received from the source station from the converted samples. For this purpose, a plurality of training samples can also be collected, for example.
In this way, the machine learning model can learn the back-calculation to high quality starting from a completely untrained state. However, the machine learning model can also be trained further starting from an already pre-trained state. For example, the model can be supplied in a generically pre-trained state and then trained specifically for the particular application at hand.
As explained above, samples of sensor data, in particular of image data, that arrive at the sink station in the reduced-capacity version are advantageously back-calculated by the machine learning model to the first, high quality for further use and/or processing. Such further use and/or processing can draw on any mixture of back-calculated samples on the one hand and samples received at high quality from the outset on the other.
If the machine learning model was initially unable to raise a particular sample from the lower quality to high quality, it can learn precisely this through further training. From then on, the high quality remains accessible even if the sample itself is no longer stored at high quality. Advantageously, therefore, in response to the back-calculated samples being consistent with the samples received from the source station, the samples converted to the second, lower quality are stored in a memory and the samples received from the source station are discarded.
In an advantageous embodiment, the samples received from the source station are converted to the second, lower quality by a domain transfer using a further machine learning model. A CycleGAN network, for example, can be used for this purpose. The domain transfer can be learned from exemplary high-quality and low-quality sensor data without specific knowledge of the exact difference between the two quality levels. In this way, even more complex degradations can be learned, such as those caused in image data by a combination of shorter exposure times and lossy compression.
As explained earlier, the bandwidth saving during transmission is particularly pronounced when the source station and the sink station communicate via a mobile radio network, since with such networks the costs depend particularly strongly on the transmitted data volume.
In an advantageous embodiment, the image data are detected as sensor data by at least one camera observing the surroundings and/or the interior of the vehicle. The source station can be carried by a vehicle, for example.
For example, the observation of the environment can serve as an "enhanced tachograph" in which the vehicle user has no opportunity to manipulate the image data, or to suppress image data inconvenient to him because it shows, for example, traffic violations or driving errors. Read access by the vehicle user to the image data can, for example, also be restricted to situations in which evidence is needed in connection with a specific accident. The privacy of other road users captured in the images of the "enhanced tachograph" is thereby better protected.
The same applies to observing the vehicle interior. The observations can be used, for example, to identify robberies, assaults, or other dangerous situations. A potentially dangerous situation can be detected, for example, by machine learning or other methods, and staff of a security center may be permitted to view the image data only when such a situation has been detected. The image archive can also be accessed, for example, when specific damage to or soiling of the vehicle interior is to be clarified. Precisely in shared vehicles and other vehicles with frequently changing users, soiling and damage occur again and again for which no user wants to take responsibility.
The described back-calculation from lower quality to high quality enables the bandwidth savings during transmission in the method described above, but it can also be used to save hardware costs when collecting image data with a vehicle fleet.
The invention therefore also provides a method for processing image data recorded by a camera on a vehicle of a fleet of vehicles.
The method assumes:
a first part of the vehicles of the fleet are each equipped with a first type of camera capable of recording images with a first high quality and a second type of camera capable of recording images with a second lower quality, and
the second part of the vehicle fleet is equipped with only the second type of camera.
The machine learning model is trained on the first, high-quality image data and the second, lower-quality image data recorded by the first portion of the vehicles of the fleet, with the goal of reconstructing the first, high-quality image data from the second, lower-quality image data. To this end, lower-quality image data are fed to the machine learning model, and the extent to which the image data reconstructed from them match the respectively associated high-quality image data is checked. The parameters of the machine learning model are adapted so that these reconstructions improve continuously.
With the machine learning model trained in this way, the second, lower-quality image data recorded by the second portion of the vehicles of the fleet is processed into upscaled image data. The first, high-quality image data is combined with the upscaled image data for further processing.
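A drastically simplified stand-in for this fleet training can be sketched as follows. Everything here is freely invented for illustration: 2x2 block averaging plays the role of the cheaper second-type camera, and a linear least-squares map plays the role of the trained model.

```python
import numpy as np

rng = np.random.default_rng(1)

def cheap_camera(hi):
    """Simulate the second camera type: 2x2 block averaging of an 8x8 patch."""
    return hi.reshape(4, 2, 4, 2).mean(axis=(1, 3))

# Fleet part 8a carries both camera types, yielding paired training data
hi_patches = rng.normal(size=(500, 8, 8))
lo_patches = np.stack([cheap_camera(p) for p in hi_patches])

# "Train" the upscaler: fit a linear map from 16 low-res to 64 high-res pixels
A = lo_patches.reshape(500, 16)
B = hi_patches.reshape(500, 64)
W, *_ = np.linalg.lstsq(A, B, rcond=None)

def upscale(lo):
    """Apply the trained model to data from fleet part 8b (cheap camera only)."""
    return (lo.reshape(16) @ W).reshape(8, 8)

hi_estimate = upscale(cheap_camera(rng.normal(size=(8, 8))))
```

The paired data from fleet part 8a determine the upscaling map; vehicles of part 8b then only ever supply low-resolution patches, which are upscaled before being combined with the genuinely high-quality data.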
In this way, every vehicle ultimately contributes high-quality image data for further processing, even though only a portion of the vehicles are equipped with the higher-grade cameras. This is advantageous, for example, for "crowdsourcing" image data, where participants are lent cameras and are to deliver image material to a central location via a mobile radio network. It is advantageous here that only the cheaper, lower-quality cameras need to be provided to the participants, since experience shows there is always a certain "shrinkage" of cameras that are never returned. Moreover, the bandwidth savings described in connection with the first method also come into play in transmission via the mobile radio network, so that participants can use their usual capacity quota for data transmission without having to increase it themselves.
In addition to applications on vehicles operating in road traffic, the described methods are also advantageous, for example, for production machines, lawn-mowing robots, and other devices connected via a narrow-band network connection and/or a network connection billed by bandwidth consumption.
In particular, the methods may be fully or partially computer-implemented. The invention therefore also relates to a computer program with machine-readable instructions which, when executed on one or more computers, cause the one or more computers to perform one of the described methods. In this sense, control devices of vehicles and embedded systems of technical devices, which are likewise capable of executing machine-readable instructions, are also to be regarded as computers.
The invention likewise relates to a machine-readable data carrier and/or a download product with the computer program. A download product is a digital product transferable via a data network, i.e., downloadable by a user of the data network, which can be offered for sale in an online shop, for example, for immediate download.
Furthermore, a computer may be equipped with the computer program, the machine-readable data carrier, or the download product.
Further measures to improve the invention are described in more detail below together with the description of preferred embodiments of the invention based on the figures.
Drawings
Fig. 1 shows an embodiment of a method 100 for transmitting sensor data 4 from a source station 1 to a sink station 3 via a network 2;
fig. 2 shows an exemplary arrangement of a source station 1, a network 2 and a sink station 3 for performing the method 100;
fig. 3 illustrates an embodiment of a method 200 for processing image data 4 from cameras on the vehicles 81-88 of a fleet 8.
Detailed Description
Fig. 1 is a schematic flow chart diagram of an embodiment of a method 100 for transmitting sensor data 4 from a source station 1 to a sink station 3 via a network 2.
In step 105, a source station carried by the vehicle is selected. In step 106, image data detected by at least one camera observing the environment and/or the interior space of the vehicle is selected as sensor data 4.
In step 110, samples 4+ of the image data at a first, high quality are transmitted to the sink station 3 via the network 2.
In step 120, the samples 4+ are converted by the sink station 3 to a second, lower quality with reduced data volume, resulting in samples 4#. According to block 121, this can be done, for example, by a domain transfer using a separate machine learning model 7. The converted samples 4# recreate a reduced-capacity version 4- of the image data 4 that the source station 1 could have transmitted instead of the version 4+ of the first, high quality.
In step 130, the converted samples 4# are back-calculated to the first, high quality using the machine learning model 5, resulting in back-calculated samples 4*. According to block 131, the machine learning model 5 may in particular be the generator of a generative adversarial network (GAN), a recurrent neural network (RNN), or the decoder of an encoder-decoder arrangement trained as an autoencoder.
In step 140, it is checked whether the back-calculated samples 4* coincide with the samples 4+ received from the source station 1, using whatever metric is suitable for the specific application.
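One common choice for such a metric in image processing is the peak signal-to-noise ratio (PSNR); the sketch below is an illustration only, and the 40 dB threshold is an arbitrary example value, not taken from the patent.

```python
import numpy as np

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def consistent(original, back_calculated, threshold_db=40.0):
    """Step 140: treat the back-calculation as successful above a PSNR threshold."""
    return psnr(original, back_calculated) >= threshold_db
```

A perfect reconstruction yields infinite PSNR and passes; an image shifted by 50 gray levels comes out around 14 dB and fails the 40 dB threshold.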
If the samples 4* and 4+ do not coincide (truth value 0), the machine learning model 5 is trained in step 160 towards the goal of correctly reconstructing, in the future, the samples 4+ received from the source station 1 from the converted samples 4#. In the process, the parameters 5a of the model 5 are optimized.
If, conversely, the samples 4* and 4+ coincide (truth value 1), the sink station 3 instructs the source station 1 in step 150 to transmit the image data 4 to the sink station 3 in the reduced-capacity version 4- in the future. This reduced-capacity image data 4- is then back-calculated in step 170 to the first, high quality by the machine learning model 5 for further use and/or processing, resulting in samples 4*.
Alternatively or in combination therewith, the samples 4# converted to the second, lower quality can also be stored in the memory 6 in step 180. The samples 4+ received from the source station 1 can then be discarded in step 190.
Fig. 2 shows a schematic arrangement of a source station 1, a network 2 and a sink station 3. The source station 1 can provide samples 4+ of the image data at high quality and samples 4- at lower quality.
High-quality samples 4+ can be passed on directly by the sink station 3 for further processing. Lower-quality samples 4- are processed by the machine learning model 5 into back-calculated high-quality samples 4*, so that they can be used for further processing together with the high-quality samples 4+.
To train the machine learning model 5, converted samples 4# of lower quality can be created from the high-quality samples 4+, and it can be checked whether the samples 4* back-calculated from them coincide with the original high-quality samples 4+. The check can be repeated periodically or aperiodically. Depending on the result of this check, the sink station 3 controls whether the source station transmits high-quality samples 4+ or lower-quality samples 4- in the future.
Fig. 3 is a schematic flow chart of an embodiment of a method 200 for processing image data as sensor data 4. The image data is recorded by cameras on the vehicles 81-88 of the fleet 8.
The vehicles 81-84 of the first portion 8a of the fleet 8 are each equipped with
a camera 91a-94a of a first type, capable of recording image data 4+ of a first, high quality, and
a camera 91b-94b of a second type, capable of recording image data 4- of a second, lower quality.
The vehicles 85-88 of the second portion 8b of the fleet 8 are each equipped only with a camera 95b-98b of the second type.
In step 210, the machine learning model 5 is trained on the first, high-quality image data 4+ and the second, lower-quality image data 4- recorded by the vehicles 81-84 of the first portion 8a of the fleet 8, with the goal of reconstructing the first, high-quality image data 4+ from the second, lower-quality image data 4-. The fully trained state of the machine learning model 5 is denoted by the reference sign 5*.
In step 220, the second, lower-quality image data 4- recorded by the vehicles 85-88 of the second portion 8b of the fleet 8 is processed into upscaled image data 4* using the trained machine learning model 5*.
In step 230, the first, high-quality image data 4+ is combined with the upscaled image data 4* for further processing.

Claims (15)

1. Method (100) for transmitting sensor data (4), in particular image data, from a source station (1) to a sink station (3) via a network (2), having the following steps:
receiving, by the sink station (3), samples (4+) of the sensor data (4) at a first, high quality from the source station (1) via the network (2);
converting (120) the samples (4+) to a second, lower quality by the sink station (3) with reduced data volume, and then back-calculating (130) the samples to the first, high quality using a trained machine learning model (5), wherein the converted samples (4#) recreate a reduced-capacity version (4-) of the sensor data (4) that the source station (1) could have transmitted in place of the version (4+) of the first, high quality; and
in response to the back-calculated samples (4*) coinciding (140) with the samples (4+) received from the source station (1), the sink station (3) instructing (150) the source station (1) to transmit image data (4) to the sink station (3) in the reduced-capacity version (4-).
2. Method (100) for transmitting sensor data (4), in particular image data, from a source station (1) to a sink station (3) via a network (2), having the following steps:
transmitting (110) samples (4+) of the sensor data (4) at a first, high quality from the source station (1) to the sink station (3) via the network (2);
converting (120) the samples (4+) to a second, lower quality by the sink station (3) with reduced data volume, and then back-calculating (130) the samples to the first, high quality using a trained machine learning model (5), wherein the converted samples (4#) recreate a reduced-capacity version (4-) of the sensor data (4) that the source station (1) could have transmitted in place of the version (4+) of the first, high quality; and
in response to the back-calculated samples (4*) coinciding (140) with the samples (4+) received from the source station (1), the sink station (3) instructing (150) the source station (1) to transmit image data (4) to the sink station (3) in the reduced-capacity version (4-).
3. The method (100) according to either of claims 1 and 2, wherein, in response to the back-calculated samples (4*) not being consistent (140) with the samples (4+) received from the source station (1), the machine learning model (5) is trained (160) towards the goal of correctly reconstructing, in the future, the samples (4+) received from the source station (1) from the converted samples (4#).
4. The method (100) according to any one of claims 1 to 3, wherein samples (4-) of sensor data (4) arriving at the sink station (3) in the reduced-capacity version are back-calculated (170) by the machine learning model (5) to the first, high quality for further use and/or processing.
5. The method (100) according to any one of claims 1 to 4, wherein, in response to the back-calculated samples (4*) coinciding (140) with the samples (4+) received from the source station (1), the samples (4#) converted to the second, lower quality are stored (180) in a memory (6) and the samples (4+) received from the source station (1) are discarded (190).
6. The method (100) according to any one of claims 1 to 5, wherein the image data as sensor data (4) has, at the second, lower quality compared to the first, high quality,
a lower pixel resolution, and/or
a lower frame rate, and/or
a lower color depth,
and/or the image data as sensor data (4) is lossy-compressed to a greater extent at the second, lower quality than at the first, high quality.
7. The method (100) according to any one of claims 1 to 6, wherein the generator of a generative adversarial network (GAN), a recurrent neural network (RNN), or the decoder of an encoder-decoder arrangement trained as an autoencoder is selected (131) as the machine learning model (5).
8. The method (100) according to any one of claims 1 to 7, wherein the samples (4+) received from the source station (1) are converted (121) to the second, lower quality by domain transfer using a further machine learning model (7).
9. The method (100) according to any one of claims 1 to 8, wherein the source station (1) and the sink station (3) communicate via a mobile radio network.
10. The method (100) according to any one of claims 1 to 9, wherein the image data is detected (106) as sensor data (4) by at least one camera observing the environment and/or the interior space of the vehicle.
11. The method (100) according to claim 10, wherein the source station (1) is carried (105) by the vehicle.
12. A method (200) for processing image data (4) recorded by cameras on the vehicles (81-88) of a vehicle fleet (8), wherein
the vehicles (81-84) of a first part (8a) of the fleet (8) are each equipped with a camera (91a-94a) of a first type, which can record image data (4+) at a first, high quality, and with a camera (91b-94b) of a second type, which can record image data (4-) at a second, lower quality, and
the vehicles (85-88) of a second part (8b) of the fleet (8) are equipped only with cameras (95b-98b) of the second type,
and wherein the method (200) comprises the steps of:
training (210) a machine learning model (5) on first, high-quality image data (4+) and second, lower-quality image data (4-) recorded by the first part (8a) of the vehicles (81-84) of the fleet (8), with the objective of reconstructing the first, high-quality image data (4+) from the second, lower-quality image data (4-);
processing (220) second, lower-quality image data (4-) recorded by the second part (8b) of the vehicles (85-88) of the fleet (8) into up-scaled image data (4*) using the trained machine learning model (5); and
combining (230) the first, high-quality image data (4+) with the up-scaled image data (4*) for further processing.
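The three steps of claim 12 — train (210) on paired recordings from fleet part 8a, process (220) the recordings from part 8b, combine (230) — can be sketched end to end. Here the second-type camera is modelled as a known affine degradation and the "machine learning model" is an affine map fitted by least squares; both are deliberately trivial stand-ins for the network of claim 7, and every variable name is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade(img):
    """Second-type camera: darker, lower-contrast rendition of the scene."""
    return 0.6 * img + 0.1

# Fleet part 8a: paired recordings (first type 4+, second type 4-).
hq_8a = rng.random((50, 16))
lq_8a = degrade(hq_8a)

# Train (210): fit gain and offset by least squares so that the map
# reconstructs the high-quality data from the low-quality data.
A = np.vstack([lq_8a.ravel(), np.ones(lq_8a.size)]).T
gain, offset = np.linalg.lstsq(A, hq_8a.ravel(), rcond=None)[0]

# Fleet part 8b: only second-type recordings are available.
hq_8b_true = rng.random((50, 16))   # unobserved ground truth
lq_8b = degrade(hq_8b_true)

# Process (220): back-calculate the up-scaled image data (4*).
upscaled_8b = gain * lq_8b + offset

# Combine (230): one data set at first, high quality for further processing.
combined = np.concatenate([hq_8a, upscaled_8b])
```

Because the toy degradation is exactly affine, least squares recovers it perfectly; with real cameras the learned model only approximates the inverse, which is why the patent combines the genuinely high-quality data (4+) with the up-scaled data (4*) rather than replacing one with the other.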
13. A computer program having machine-readable instructions which, when executed on one or more computers, cause the one or more computers to perform the method of any one of claims 1 to 12.
14. A machine-readable data carrier and/or download product having the computer program according to claim 13.
15. One or more computers equipped with the computer program according to claim 13 and/or with the machine-readable data carrier and/or download product according to claim 14.
CN202210457664.5A 2021-04-29 2022-04-28 Capacity-saving transmission and storage of sensor data Pending CN115271088A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102021204289.9 2021-04-29
DE102021204289.9A DE102021204289A1 (en) 2021-04-29 2021-04-29 Volume-saving transmission and storage of sensor data

Publications (1)

Publication Number Publication Date
CN115271088A true CN115271088A (en) 2022-11-01

Family

ID=83600935

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210457664.5A Pending CN115271088A (en) 2021-04-29 2022-04-28 Capacity-saving transmission and storage of sensor data

Country Status (2)

Country Link
CN (1) CN115271088A (en)
DE (1) DE102021204289A1 (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3038370A1 (en) 2014-12-22 2016-06-29 Alcatel Lucent Devices and method for video compression and reconstruction
EP3777195A4 (en) 2018-04-09 2022-05-11 Nokia Technologies Oy An apparatus, a method and a computer program for running a neural network

Also Published As

Publication number Publication date
DE102021204289A1 (en) 2022-11-03


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination