WO2019047663A1 - Method and device for storing end-to-end automatic driving data based on a video format - Google Patents

Method and device for storing end-to-end automatic driving data based on a video format

Info

Publication number
WO2019047663A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
video
image data
automatic driving
reading
Prior art date
Application number
PCT/CN2018/099391
Other languages
English (en)
Chinese (zh)
Inventor
闫泳杉
郁浩
郑超
唐坤
张云飞
姜雨
Original Assignee
百度在线网络技术(北京)有限公司
Priority date
2017-09-05
Filing date
2018-08-08
Publication date
2019-03-14
Application filed by 百度在线网络技术(北京)有限公司
Publication of WO2019047663A1

Classifications

    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C5/00 Registering or indicating the working of vehicles
    • G07C5/08 Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time
    • G07C5/0841 Registering performance data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10 File systems; File servers
    • G06F16/17 Details of further file system functions
    • G06F16/172 Caching, prefetching or hoarding of files
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10 File systems; File servers
    • G06F16/17 Details of further file system functions
    • G06F16/174 Redundancy elimination performed by the file system
    • G06F16/1744 Redundancy elimination performed by the file system using compression, e.g. sparse files
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/71 Indexing; Data structures therefor; Storage structures

Definitions

  • the present invention relates to the field of computers, and in particular, to a method and apparatus for storing end-to-end automatic driving data based on a video format.
  • An automatic driving system generally performs deep learning with a model built from data acquired in real time: the images in front of the vehicle, the output steering angle, and the speed. The more data collected, the more favorable the generated model is for deep learning. However, these data usually need to be stored in specific files that occupy a large amount of storage space, which limits the development of deep learning in the field of automatic driving.
  • One of the technical problems solved by the present invention is that the data collected in front of the vehicle by the automatic driving system occupies a large amount of storage space.
  • According to one aspect of the present invention, a method for storing end-to-end automatic driving data based on a video format is provided, including: determining a video compression parameter and reading posture data; reading image data in the order of the time stamps of the posture data; and storing the image data as a video file using the video compression parameter.
  • According to another aspect of the present invention, a storage device for end-to-end automatic driving data based on a video format is provided, including: means for determining a video compression parameter and reading posture data; means for reading image data according to the time stamps of the posture data; and means for storing the image data as a video file using the video compression parameter.
  • Because the present embodiment stores the read posture data as image data in time-stamp order and saves it as a video file, the storage space occupied by the data and the volume of network I/O access can both be reduced, so that a better autonomous driving data model can be established, which in turn improves the efficiency of deep learning in the field of automatic driving.
  • FIG. 1 shows a flow chart of a method for storing end-to-end autopilot data based on a video format in accordance with an embodiment of the present invention.
  • FIG. 2 is a flow chart showing a method for storing end-to-end automatic driving data based on a video format according to Embodiment 1 of the present invention.
  • FIG. 3 is a flow chart showing a method for storing end-to-end automatic driving data based on a video format according to Embodiment 2 of the present invention.
  • FIG. 4 is a block diagram showing a storage device for end-to-end automatic driving data based on a video format in accordance with an embodiment of the present invention.
  • FIG. 5 is a block diagram showing a storage device for end-to-end automatic driving data based on a video format according to Embodiment 3 of the present invention.
  • Fig. 6 is a block diagram showing a storage device for end-to-end automatic driving data based on a video format proposed in Embodiment 4 of the present invention.
  • A computer device, also referred to as a "computer" in this context, is an intelligent electronic device that can perform predetermined processing, such as numerical and/or logical calculations, by running a predetermined program or instruction. It may include a processor and a memory, with the processor executing a program pre-stored in the memory to carry out the predetermined processing, or the processing may be carried out by hardware such as an ASIC, an FPGA, or a DSP, or by a combination of the two.
  • Computer devices include, but are not limited to, servers, personal computers, notebook computers, tablets, smart phones, and the like.
  • the computer device includes a user device and a network device.
  • the user equipment includes, but is not limited to, a computer, a smart phone, a PDA, etc.
  • The network device includes, but is not limited to, a single network server, a server group composed of multiple network servers, or a cloud composed of a large number of computers or network servers based on cloud computing, where cloud computing is a type of distributed computing: a super virtual computer composed of a group of loosely coupled computers.
  • the computer device can be operated separately to implement the present invention, and can also access the network and implement the present invention by interacting with other computer devices in the network.
  • the network in which the computer device is located includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a VPN network, and the like.
  • The user equipment, network equipment, and networks described above are merely examples; other existing or future computer devices or networks, where applicable to the present invention, also fall within its scope and are incorporated herein by reference.
  • FIG. 1 is a flow chart of a method of storing end-to-end autopilot data based on a video format, in accordance with one embodiment of the present invention.
  • the video format-based end-to-end automatic driving data storage method includes the following steps:
  • the video compression parameter is first determined.
  • Specifically, test data may be compressed with candidate compression parameters, and the video compression parameter may then be determined according to the compression ratios achieved in these tests.
  • the compression parameters therein include, but are not limited to, at least one of a codec, an inter-frame allocation code rate (crf), or a color space.
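  • As a rough illustration (not from the patent), a compression test of this kind could be scripted as in the sketch below, which assumes a directory of numbered PNG test frames and an ffmpeg binary on the path, and compares candidate CRF values by the compression ratio they achieve:

```python
import subprocess
from pathlib import Path

def compression_ratio(src_dir: str, crf: int, codec: str = "libx264") -> float:
    """Compress the test frames with one candidate parameter set and return raw/compressed size."""
    out = Path(f"test_crf{crf}.mp4")
    subprocess.run(
        ["ffmpeg", "-y", "-framerate", "24", "-i", f"{src_dir}/frame_%06d.png",
         "-c:v", codec, "-crf", str(crf), str(out)],
        check=True, capture_output=True)
    raw_bytes = sum(p.stat().st_size for p in Path(src_dir).glob("frame_*.png"))
    return raw_bytes / out.stat().st_size

# Pick the candidate CRF with the best compression ratio on the test data;
# in practice image clarity would be checked as well (see the urban-road discussion below).
best_crf = max((18, 23, 28), key=lambda crf: compression_ratio("test_frames", crf))
```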
  • Next, the posture data is read.
  • The posture data output by the predetermined automatic driving system can be read in real time and stored as a time-stamped data sequence.
  • In step S120, after the posture data has been read, the image data is read sequentially according to the time stamps of the posture data.
  • The image data may be read in the order of the time stamps of the posture data and stored as an image data sequence in that order, as sketched below.
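  • A minimal sketch of this ordering step, assuming the posture data has already been parsed into records with a time-stamp field and a path to the corresponding camera frame (the field names are illustrative, not from the patent):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PostureRecord:
    timestamp: float        # time stamp attached by the automatic driving system
    image_path: str         # camera frame captured at that time
    speed: float
    steering_angle: float

def image_sequence(records: List[PostureRecord]) -> List[str]:
    """Return the image paths ordered by the posture-data time stamps."""
    return [r.image_path for r in sorted(records, key=lambda r: r.timestamp)]
```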
  • the read image data can be stored as a video file using the video compression parameters described above.
  • The image data may be compressed and stored as a video file in a predetermined video format, with each frame of the video file corresponding to one image in the image data.
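  • One possible realization of "one video frame per image" is sketched below with OpenCV's VideoWriter; the codec, frame rate, and file names are assumptions for illustration rather than values fixed by the patent:

```python
import cv2

def store_as_video(image_paths, out_path="drive_segment.mp4", fps=24):
    """Write each image in time-stamp order as exactly one frame of the video file."""
    first = cv2.imread(image_paths[0])
    height, width = first.shape[:2]
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))
    for path in image_paths:
        writer.write(cv2.imread(path))   # one video frame per image in the sequence
    writer.release()
```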
  • the embodiment further stores the video file on the predetermined server according to the type of the posture data output by the automatic driving system.
  • The types of posture data include, but are not limited to, speed data, steering angle data, road network data, and the like.
  • In this way, the storage space occupied by the data can be reduced, and the volume of network I/O access can also be reduced, so that a better autonomous driving data model can be built, which in turn improves the efficiency of deep learning in the field of automatic driving.
  • In the traditional data acquisition method, the images acquired by the sensors are stored in an HDF5 file for use by machine learning and control software.
  • This method results in an HDF5 file of images that is too large and significantly increases the overhead of network I/O, so the traditional data acquisition method is not conducive to deep learning of the automatic driving system.
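  • For contrast, a traditional pipeline of the kind criticized here might append raw frames to an HDF5 dataset roughly as follows (an assumed sketch using h5py; the file name, dataset name, and frame shape are illustrative):

```python
import h5py
import numpy as np

# Each raw frame is appended uncompressed; over a long drive the file grows to
# the order of a gigabyte and must be moved over the network as-is.
with h5py.File("drive_log.h5", "a") as f:
    if "images" not in f:
        f.create_dataset("images", shape=(0, 336, 448, 3), dtype="uint8",
                         maxshape=(None, 336, 448, 3))
    dset = f["images"]
    frame = np.zeros((336, 448, 3), dtype="uint8")   # placeholder for one camera frame
    dset.resize(dset.shape[0] + 1, axis=0)
    dset[-1] = frame
```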
  • this embodiment proposes another storage method for the end-to-end automatic driving data based on the video format. As shown in FIG. 2, the method includes the following steps:
  • For example, when driving on a sparsely populated highway, the color space can be used as a compression parameter because the landscape on both sides of the road changes little.
  • For scenery with consistent color characteristics, such as snow, desert, or forest, the colors that are the same across multiple image data are compressed uniformly, and only the changes in the road surface are stored separately.
  • an inter-frame allocation code rate can be used as a compression parameter.
  • By allocating the code rate between frames, it is possible to determine which frames are important and which are secondary; important frames are given more bytes.
  • Frames in which nothing moves, or in which the moving objects are far away, are set as secondary frames; only when a moving object comes within the distance threshold is the frame treated as an important frame under this compression parameter. This can give a subjectively clearer result while significantly reducing the size of the video file, because the human eye usually pays attention only to moving objects and takes little notice of the background.
  • the automatic driving system outputs a set of posture data every predetermined time, and the posture data usually includes image data, speed data, steering angle data, and road network data.
  • This embodiment mainly reads image data therein.
  • the image data is read in the order of time stamps of the posture data.
  • The posture data output by the autopilot system carries a time stamp, which can be used to indicate the order in which the posture data was generated; storing the image data in chronological order therefore characterizes the images acquired by the autopilot system more accurately.
  • all the posture data are read in the order of time stamps to ensure that the posture data is consistent with the time stamp of the image data.
  • the image data is stored as a data sequence for subsequent steps to be called.
  • This embodiment generates the video file using FFmpeg.
  • FFmpeg can be used to record and convert digital audio and video, and to turn them into streams.
  • FFmpeg can not only compress multiple image data to generate video files, but also convert between multiple video formats.
  • The number of images used to generate each video file differs depending on the compression parameters.
  • When the color space is used as the compression parameter, 10,000 images can be compressed each time to generate a 24 frame/second video file. The video file is about 7 minutes long (10,000 frames at 24 frames/second is roughly 417 seconds) and typically occupies 20-50 MB, whereas the original images occupy about 1 GB of storage; the compressed video file therefore not only occupies far less storage space but also incurs lower network I/O overhead.
  • In this embodiment, the posture data output by the automatic driving system is compressed and stored as a video file according to a predetermined compression parameter and video format, which significantly reduces the storage space occupied by the posture data while preserving the clarity and integrity of the stored video file, and therefore improves the deep learning efficiency of the automatic driving system.
  • In the traditional data acquisition method, the images acquired by the sensors are stored in an HDF5 file for use by machine learning and control software.
  • This method makes the HDF5 file of images too large and obviously increases the overhead of network I/O. Storing individual images also results in too many files, which is not conducive to editing and management, so the traditional data acquisition method is not conducive to deep learning of the automatic driving system.
  • this embodiment proposes a storage method for end-to-end automatic driving data based on a video format. As shown in FIG. 3, the method includes the following steps:
  • Different compression parameters, such as the codec, the inter-frame allocation code rate, and the color space, can be applied to the test data in order to compare the compression ratios they achieve and the image clarity after compression.
  • an inter-frame allocation code rate can be used as a compression parameter.
  • By allocating the code rate between frames, it is possible to determine which frames are important and which are secondary.
  • Frames in which nothing moves, or in which the moving objects are far away, are set as secondary frames; only when a moving object comes within the distance threshold is the frame treated as an important frame under this compression parameter.
  • An image compressed in this way highlights moving objects, that is, the objects that matter for automatic driving, while stationary objects do not occupy additional storage space.
  • the compression effects of the other two compression parameters are significantly worse, so for the road conditions in the urban area, the embodiment preferably uses the inter-frame allocation code rate as the compression parameter.
  • The autopilot system outputs a set of posture data every predetermined time, and each set of posture data carries a time stamp that indicates the order in which it was generated; storing the image data in chronological order therefore characterizes the images acquired by the autopilot system more accurately.
  • all the posture data are read in the order of time stamps to ensure that the posture data is consistent with the time stamp of the image data.
  • the image data is stored as a data sequence for subsequent steps to be called.
  • The image data is encoded into a video file using the AVC encoding format; the video file has a code rate of 208 kbps, a frame rate of 14 fps, and a resolution of 448 × 336.
  • When the inter-frame allocation code rate is used as the compression parameter, the video file generated by compressing 10,000 images is about 12 minutes long (10,000 frames at 14 frames/second is roughly 714 seconds) and typically occupies 40-70 MB, whereas the original images occupy about 1 GB of storage. The compressed video files therefore not only take up less storage space but also incur lower network I/O overhead.
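  • The settings above could be reproduced with an ffmpeg invocation along these lines (a sketch; libx264 is assumed as the AVC/H.264 encoder, and the input pattern and output name are illustrative):

```python
import subprocess

# AVC (H.264) encoding at 208 kbps, 14 fps, 448x336, one frame per stored image.
subprocess.run(
    ["ffmpeg", "-y", "-framerate", "14", "-i", "frames/frame_%06d.png",
     "-c:v", "libx264", "-b:v", "208k", "-s", "448x336", "-r", "14",
     "segment_avc.mp4"],
    check=True)
```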
  • The types of the posture data generally include speed data, steering angle data, road network data, and the like. Therefore, in this embodiment, the posture data is divided into two categories for storage: the first is dynamic data, including speed data, steering angle data, motor vehicle data, and the like; the second is static data, including building data, real-time road condition data, traffic signal data, and the like. Video files stored by category are easy to edit and manage, which improves the efficiency of deep learning, as illustrated in the sketch below.
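  • As one possible realization of this classified storage (the directory layout and the exact category sets are assumptions; the patent only names the two broad classes), generated video files could be routed to per-category directories on the storage server:

```python
from pathlib import Path
import shutil

DYNAMIC_TYPES = {"speed", "steering_angle", "motor_vehicle"}
STATIC_TYPES = {"building", "real_time_road_condition", "traffic_signal"}

def store_by_category(video_path: str, data_type: str,
                      server_root: str = "/mnt/drive_data") -> Path:
    """Copy a generated video file into the dynamic/static directory for its data type."""
    category = "dynamic" if data_type in DYNAMIC_TYPES else "static"
    dest_dir = Path(server_root) / category / data_type
    dest_dir.mkdir(parents=True, exist_ok=True)
    return Path(shutil.copy(video_path, dest_dir / Path(video_path).name))
```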
  • In this embodiment, the posture data output by the automatic driving system is compressed and stored as a video file according to a predetermined compression parameter and video format, which significantly reduces the storage space occupied by the posture data, preserves the clarity and integrity of the stored video file, and keeps the files easy to edit and manage without requiring an additional decompression step, thereby improving the deep learning efficiency of the automatic driving system.
  • FIG. 4 is a block diagram of a storage device for end-to-end automatic driving data based on a video format, in accordance with one embodiment of the present invention.
  • The video format-based end-to-end automatic driving data storage device (hereinafter referred to as the "storage device") includes the following devices:
  • Means for determining video compression parameters and reading posture data (hereinafter referred to as the "compression reading device") 410;
  • Means for reading image data according to the time stamps of the posture data (hereinafter referred to as the "image reading device") 420;
  • Means for storing the image data as a video file using the video compression parameter (hereinafter referred to as the "video generating device") 430.
  • the video compression parameters are first determined by the compressed reading device 410.
  • Specifically, the compression reading device 410 may compress test data with candidate compression parameters and then determine the video compression parameter according to the compression ratios achieved in these tests.
  • the compression parameters therein include, but are not limited to, at least one of a codec, an inter-frame allocation code rate (crf), or a color space.
  • Next, the posture data is read.
  • The posture data output by the predetermined automatic driving system can be read in real time by the compression reading device 410 and stored as a time-stamped data sequence.
  • the image data is read by the image reading device 420 in accordance with the time stamp of the posture data.
  • the image data may be read by the image reading device 420 in the order of the time stamp of the posture data, and the image data may be stored as an image data sequence in the order of the time stamp.
  • the read image data can be stored as a video file by the video generating device 430.
  • The image data may be compressed and stored as a video file by the video generating device 430 in a predetermined video format, with each frame of the video file corresponding to one image in the image data.
  • the embodiment further stores the video file on the predetermined server according to the type of the posture data output by the automatic driving system by the video generating device 430.
  • The types of posture data include, but are not limited to, speed data, steering angle data, road network data, and the like.
  • In this way, the storage space occupied by the data can be reduced, and the volume of network I/O access can also be reduced, so that a better autonomous driving data model can be built, which in turn improves the efficiency of deep learning in the field of automatic driving.
  • In the traditional data acquisition method, the images acquired by the sensors are stored in an HDF5 file for use by machine learning and control software.
  • This method results in an HDF5 file of images that is too large and significantly increases the overhead of network I/O, so the traditional data acquisition method is not conducive to deep learning of the automatic driving system.
  • this embodiment proposes another storage device for end-to-end automatic driving data based on the video format, as shown in FIG. 5, including the following devices:
  • Means for determining a video compression parameter (hereinafter referred to as the "compression parameter determining device") 510;
  • Means for reading posture data (hereinafter referred to as the "posture data reading device") 520;
  • Means for reading image data in the order of the time stamps of the posture data (hereinafter referred to as the "first image data reading device") 530;
  • Means for generating a video file from the image data in a predetermined format (hereinafter referred to as the "first video file generating device") 540.
  • When choosing compression parameters, the compression parameter determining device 510 considers the environment in which the autonomous driving system operates. For example, when driving on a sparsely populated highway, the color space can be used as a compression parameter because the landscape on both sides of the road changes little. For scenery with consistent color characteristics such as snow, desert, or forest, the compression parameter determining device 510 uniformly compresses the colors that are the same across multiple image data and stores only the changes in the road surface separately.
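  • A toy illustration of such an environment-dependent choice (entirely an assumed heuristic; the patent does not prescribe concrete parameter values):

```python
def choose_compression_parameters(environment: str) -> dict:
    """Map the driving environment to an ffmpeg-style parameter set (illustrative only)."""
    uniform_color_scenes = {"highway", "snow", "desert", "forest"}
    if environment in uniform_color_scenes:
        # Uniform scenery: lean on color-space compression (e.g. chroma subsampling).
        return {"pix_fmt": "yuv420p", "crf": "28"}
    # Urban roads: rely on inter-frame code-rate allocation (more bits to important frames).
    return {"crf": "20"}
```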
  • an inter-frame allocation code rate can be used as a compression parameter.
  • By allocating the bit rate between frames, it is possible to determine which frames are important and which are secondary; important frames are given more bytes.
  • Frames in which nothing moves, or in which the moving objects are far away, are set as secondary frames; only when a moving object comes within the distance threshold is the frame treated as an important frame under this compression parameter. This can give a subjectively clearer result while significantly reducing the size of the video file, because the human eye usually pays attention only to moving objects and takes little notice of the background.
  • the automatic driving system outputs a set of posture data every predetermined time, and the posture data usually includes image data, speed data, steering angle data, and road network data.
  • the present embodiment mainly reads image data therein by the posture data reading device 520.
  • The posture data output by the autopilot system carries a time stamp, which can be used to indicate the order in which the posture data was generated; storing the image data in chronological order therefore characterizes the images acquired by the autopilot system more accurately.
  • all the posture data are read by the first image data reading device 530 in the order of time stamps to ensure that the posture data coincides with the time stamp of the image data.
  • the image data is stored as a data sequence for subsequent steps to be called.
  • This embodiment generates the video file using FFmpeg.
  • FFmpeg can be used to record and convert digital audio and video, and to turn them into streams.
  • FFmpeg can not only compress multiple image data to generate video files, but also convert between multiple video formats.
  • The number of images used to generate each video file differs depending on the compression parameters.
  • For example, the first video file generating device 540 may compress 10,000 images at a time to generate a 24 frame/second video file; the video file is about 7 minutes long and typically occupies 20-50 MB, whereas the original images occupy about 1 GB of storage space.
  • the compressed video file not only occupies less storage space, but also has lower network I/O overhead.
  • In this embodiment, the posture data output by the automatic driving system is compressed and stored as a video file according to a predetermined compression parameter and video format, which significantly reduces the storage space occupied by the posture data while preserving the clarity and integrity of the stored video file, and therefore improves the deep learning efficiency of the automatic driving system.
  • In the traditional data acquisition method, the images acquired by the sensors are stored in an HDF5 file for use by machine learning and control software.
  • This method makes the HDF5 file of images too large and obviously increases the overhead of network I/O. Storing individual images also results in too many files, which is not conducive to editing and management, so the traditional data acquisition method is not conducive to deep learning of the automatic driving system.
  • the present embodiment proposes a storage device for end-to-end automatic driving data based on a video format, as shown in FIG. 6, including the following devices:
  • Means for determining video compression parameters and reading posture data (hereinafter referred to as the "compression and reading device") 610;
  • Means for reading image data according to the time stamps of the posture data (hereinafter referred to as the "second image data reading device") 620;
  • Means for generating a video file from the image data in a predetermined format (hereinafter referred to as the "second video file generating device") 630;
  • Means for storing the video file on a predetermined server according to the type of the posture data output by the automatic driving system (hereinafter referred to as the "classification storage device") 640.
  • The compression and reading device 610 can apply different parameters to the test data, such as the codec, the inter-frame allocation code rate, and the color space, in order to compare the compression ratios achieved by these compression parameters and the image clarity after compression.
  • an inter-frame allocation code rate can be used as a compression parameter.
  • By dividing the code rate between frames it is possible to analyze which are important frames and which are secondary frames.
  • Frames in which nothing moves, or in which the moving objects are far away, are set as secondary frames; only when a moving object comes within the distance threshold is the frame treated as an important frame under this compression parameter.
  • An image compressed in this way highlights moving objects, that is, the objects that matter for automatic driving, while stationary objects do not occupy additional storage space.
  • the compression effects of the other two compression parameters are significantly worse, so for the road conditions in the urban area, the embodiment preferably uses the inter-frame allocation code rate as the compression parameter.
  • The autopilot system outputs a set of posture data every predetermined time, and each set of posture data carries a time stamp that indicates the order in which it was generated; storing the image data in chronological order therefore characterizes the images acquired by the autopilot system more accurately.
  • all the posture data are read by the second image data reading device 620 in the order of time stamps to ensure that the posture data coincides with the time stamp of the image data.
  • the image data is stored as a data sequence for subsequent steps to be called.
  • The second video file generating device 630 generates the video file using the AVC encoding format; the video file has a code rate of 208 kbps, a frame rate of 14 fps, and a resolution of 448 × 336.
  • When the inter-frame allocation code rate is used as the compression parameter, the video file generated by compressing 10,000 images is about 12 minutes long and typically occupies 40-70 MB, whereas the original images occupy about 1 GB of storage. The compressed video files therefore not only take up less storage space but also incur lower network I/O overhead.
  • the type of the attitude data generally includes speed data, steering angle data, road network data, and the like. Therefore, the embodiment stores the posture data into two categories by the classification storage device 640.
  • The first category is dynamic data, including speed data, steering angle data, motor vehicle data, and the like;
  • The second category is static data, including building data, real-time road condition data, traffic signal data, and the like.
  • Video files stored in this category are easy to edit and manage, improving the efficiency of deep learning.
  • In this embodiment, the posture data output by the automatic driving system is compressed and stored as a video file according to a predetermined compression parameter and video format, which significantly reduces the storage space occupied by the posture data, preserves the clarity and integrity of the stored video file, and keeps the files easy to edit and manage without requiring an additional decompression step, thereby improving the deep learning efficiency of the automatic driving system.
  • the present invention can be implemented in software and/or a combination of software and hardware.
  • the various devices of the present invention can be implemented using an application specific integrated circuit (ASIC) or any other similar hardware device.
  • the software program of the present invention may be executed by a processor to implement the steps or functions described above.
  • the software program (including related data structures) of the present invention can be stored in a computer readable recording medium such as a RAM memory, a magnetic or optical drive or a floppy disk and the like.
  • some of the steps or functions of the present invention may be implemented in hardware, for example, as a circuit that cooperates with a processor to perform various steps or functions.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention relates to a method and device for storing end-to-end automatic driving data based on a video format. The method comprises: determining a video compression parameter and reading posture data (S110); reading image data in the order of the time stamps of the posture data (S120); and using the video compression parameter to store the image data as a video file, the video file being stored on a predetermined server according to the type of posture data output by the automatic driving system (S130). By storing the read posture data as image data in time-stamp order and saving it as a video file, the method can reduce the storage space occupied by the data and the volume of network I/O access, so as to establish a better automatic driving data model and further improve the efficiency of deep learning in the field of automatic driving.
PCT/CN2018/099391 2017-09-05 2018-08-08 Method and device for storing end-to-end automatic driving data based on a video format WO2019047663A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710792055.4A CN107767486A (zh) 2017-09-05 2017-09-05 一种基于视频格式的端到端自动驾驶数据的存储方法及装置
CN201710792055.4 2017-09-05

Publications (1)

Publication Number Publication Date
WO2019047663A1 (fr) 2019-03-14

Family

ID=61265036

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/099391 WO2019047663A1 (fr) 2018-08-08 Method and device for storing end-to-end automatic driving data based on a video format

Country Status (2)

Country Link
CN (1) CN107767486A (fr)
WO (1) WO2019047663A1 (fr)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107767486A (zh) * 2017-09-05 2018-03-06 百度在线网络技术(北京)有限公司 一种基于视频格式的端到端自动驾驶数据的存储方法及装置
CN110033780B (zh) * 2019-04-07 2020-12-08 西安电子科技大学 基于FFmpeg和EMIF驱动的音视频数据传输方法
CN112204975B (zh) * 2019-04-29 2024-06-07 百度时代网络技术(北京)有限公司 自动驾驶车辆中视频压缩的时间戳和元数据处理
CN114241622B (zh) * 2020-09-09 2024-01-16 丰田自动车株式会社 信息管理系统以及在该信息管理系统中使用的便携终端、图像管理服务器

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8126276B2 (en) * 2001-02-21 2012-02-28 International Business Machines Corporation Business method for selectable semantic codec pairs for very low data-rate video transmission
CN104284233A (zh) * 2009-10-19 2015-01-14 鹰图公司 视频和遥测数据的数据搜索、解析和同步
US20160173883A1 (en) * 2014-12-16 2016-06-16 Sean J. Lawrence Multi-focus image data compression
CN106060468A (zh) * 2016-06-23 2016-10-26 乐视控股(北京)有限公司 视频采集装置、视频传输系统和视频传输方法
CN107767486A (zh) * 2017-09-05 2018-03-06 百度在线网络技术(北京)有限公司 一种基于视频格式的端到端自动驾驶数据的存储方法及装置


Also Published As

Publication number Publication date
CN107767486A (zh) 2018-03-06


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18852969

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC , EPO FORM 1205A DATED 05.08.2020.

122 Ep: pct application non-entry in european phase

Ref document number: 18852969

Country of ref document: EP

Kind code of ref document: A1