CN115278361B - Driving video data extraction method, system, medium and electronic equipment - Google Patents

Driving video data extraction method, system, medium and electronic equipment

Info

Publication number
CN115278361B
Authority
CN
China
Prior art keywords
video data
target
event
image
information
Prior art date
Legal status
Active
Application number
CN202210861696.1A
Other languages
Chinese (zh)
Other versions
CN115278361A (en)
Inventor
刘义顺
Current Assignee
Chongqing Changan Automobile Co Ltd
Original Assignee
Chongqing Changan Automobile Co Ltd
Priority date
Filing date
Publication date
Application filed by Chongqing Changan Automobile Co Ltd filed Critical Chongqing Changan Automobile Co Ltd
Priority to CN202210861696.1A priority Critical patent/CN115278361B/en
Publication of CN115278361A publication Critical patent/CN115278361A/en
Application granted granted Critical
Publication of CN115278361B publication Critical patent/CN115278361B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C5/00Registering or indicating the working of vehicles
    • G07C5/08Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time
    • G07C5/0841Registering performance data
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Human Computer Interaction (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a driving video data extraction method, system, medium and electronic device. Video data of a vehicle during driving, a target signal and the duration of the target signal are acquired, and feature recognition is performed on the target signal; when the target signal contains a target feature, the video data are intercepted according to the duration of the target signal to obtain target video data; information filtering is then performed on the target video data, and the information-filtered target video data are transmitted to a target location, completing the extraction of the video data. Taking the target signal generated by the automatic driving system when executing its algorithms as a reference, the video data recorded during driving are extracted, so that a large amount of video data related to the automatic driving system is acquired automatically, making the extraction time-saving and labour-saving.

Description

Driving video data extraction method, system, medium and electronic equipment
Technical Field
The application relates to the technical field of automobile data, in particular to a method, a system, a medium and electronic equipment for extracting driving video data.
Background
With the continuous development of automobile intelligence and connectivity, the training of automatic driving algorithms depends on a large amount of vehicle-end data, and the video or picture data generated by vehicle cameras can be used to train the automatic driving algorithms.
In the prior art, however, video and images are collected by the vehicle-mounted camera and then screened manually, so the process of obtaining data samples is very cumbersome and insufficiently intelligent.
Disclosure of Invention
In view of the above drawbacks of the prior art, the present invention provides a method, a system, a medium and an electronic device for extracting driving video data, so as to solve the above technical problems.
The invention provides a method for extracting driving video data, which comprises the following steps:
acquiring video data, a target signal and duration time of the target signal of a vehicle in the running process, wherein the video data are acquired through a camera preset in the vehicle, and the target signal is generated by an automatic driving system of the vehicle;
performing feature recognition on the target signal and the video data;
when the target signal comprises a target feature, intercepting the video data according to the duration time of the target signal to obtain target video data; extracting a time parameter of an image containing a first image feature when the image in the video data contains the first image feature; intercepting the video data according to the time parameter to obtain target video data;
and carrying out information filtering on the target video data, and transmitting the target video data subjected to information filtering to a target position to finish the extraction of the video data.
In an embodiment of the present invention, performing information filtering on the target video data includes:
extracting an image to be processed from the target video data;
performing feature recognition on the image to be processed;
when the image to be processed comprises second image features, erasing the second image features in the image to be processed to obtain a processed image;
storing the processed image into a preset cache in a preset format;
and merging the processed images in the cache to generate a target video file which does not contain the second image characteristics, and finishing information filtering of the target video data.
In an embodiment of the present invention, transmitting the target video data after information filtering to the target location includes:
acquiring attribute information of the target video file;
and taking the attribute information as entry parameters of a preset interface, and transmitting the target video data after information filtering to a background system through the interface.
In an embodiment of the present invention, after obtaining the target video data, the method further includes:
determining a first event number according to the target signal and a first event data table, and determining a second event number according to the first image characteristic and a second event data table; the first event data table comprises a mapping relation between a target signal and a first event number, and the second event data table comprises a mapping relation between a first image feature and a second event number;
constructing first event information according to the first event number and the duration of the target signal; constructing second event information according to the second event number and the time parameter;
uploading the first event information and the second event information to a background system, and displaying the first event information and the second event information through the background system.
In an embodiment of the present invention, after uploading the first event information and the second event information to a background system, the method further includes:
intercepting a first target parameter of the first event information, and establishing a first event notification according to the first target parameter and a pre-established notification template; intercepting a second target parameter of the second event information, and establishing a second event notification according to the second target parameter and a pre-established notification template;
when the uploading mode of the first event information and the second event information is active uploading, controlling the background system to automatically send the first event notification and the second event notification to a vehicle;
and when the uploading mode of the first event information and the second event information is passive uploading, controlling a background system to send the first event notification and the second event notification to the vehicle according to the request information sent by the vehicle.
In an embodiment of the present invention, after uploading the first event information and the second event information to a background system, the method further includes:
transmitting the first event information and target video data associated with the first event information to a database for storage;
and sending the second event information and the target video data associated with the second event information to a database for storage.
The present invention also provides a video acquisition system for a vehicle, the system comprising:
the system comprises an acquisition module, a control module and a control module, wherein the acquisition module is used for acquiring video data, a target signal and the duration time of the target signal of a vehicle in the running process, the video data are acquired through a camera preset in the vehicle, and the target signal is generated by an automatic driving system of the vehicle;
the identification module is used for carrying out feature identification on the target signal and the video data;
the video intercepting module is used for intercepting the video data according to the duration time of the target signal when the target signal comprises the target feature, so as to obtain target video data; extracting a time parameter of an image containing a first image feature when the image in the video data contains the first image feature; intercepting the video data according to the time parameter to obtain target video data;
and the filtering and storing module is used for carrying out information filtering on the target video data, transmitting the target video data subjected to information filtering to a target position and completing extraction of the video data.
The invention also provides an electronic device comprising:
one or more processors;
and the storage device is used for storing one or more programs, and when the one or more programs are executed by the one or more processors, the electronic equipment realizes the extraction method of the driving video data.
The present invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor of a computer, causes the computer to perform a method of extracting driving video data as described above.
The invention has the beneficial effects that: according to the driving video data extraction method, the video data of the vehicle during driving, the target signal and the duration of the target signal are acquired, and feature recognition is performed on the target signal; when the target signal contains the target feature, the video data are intercepted according to the duration of the target signal to obtain target video data; information filtering is performed on the target video data, and the information-filtered target video data are transmitted to a target location, completing the extraction of the video data. Taking the target signal generated by the automatic driving system when executing its algorithms as a reference, the video data recorded during driving are extracted, so that a large amount of video data related to the automatic driving system is acquired automatically, making the extraction time-saving and labour-saving.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application. It is apparent that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art. In the drawings:
FIG. 1 is an application scenario diagram of an extraction system of driving video data shown in an exemplary embodiment of the present application;
FIG. 2 is a flow chart illustrating a method of extracting driving video data according to an exemplary embodiment of the present application;
FIG. 3 is a diagram illustrating steps in an exemplary embodiment of a method for extracting video data of a vehicle;
FIG. 4 is a block diagram of an extraction system of driving video data according to an exemplary embodiment of the present application
Fig. 5 shows a schematic diagram of a computer system suitable for use in implementing the electronic device of the embodiments of the present application.
Detailed Description
Further advantages and effects of the present invention will become readily apparent to those skilled in the art from the disclosure herein, with reference to the accompanying drawings and the preferred embodiments. The invention may also be implemented or applied through other, different embodiments, and the details in this specification may be modified or changed in various ways without departing from the spirit and scope of the present invention. It should be understood that the preferred embodiments are presented by way of illustration only and not by way of limitation.
It should be noted that the illustrations provided in the following embodiments merely explain the basic concept of the present invention in a schematic way; the drawings therefore show only the components related to the present invention rather than the number, shape and size of the components in an actual implementation, where the form, quantity and proportion of the components may be changed arbitrarily and the component layout may be more complex.
In the following description, numerous details are set forth in order to provide a more thorough explanation of embodiments of the present invention. It will be apparent, however, to one skilled in the art that embodiments of the present invention may be practiced without these specific details; in other instances, well-known structures and devices are shown in block diagram form rather than in detail, in order to avoid obscuring the embodiments of the present invention.
First, it should be noted that the automatic driving system of an automobile relies on related algorithms, such as computer vision algorithms, during operation; these algorithms require a large amount of video data for training and verification, so a large amount of video data needs to be acquired to train and verify the computer vision algorithms.
Fig. 1 is an application scenario diagram of a driving video data extraction system according to an exemplary embodiment of the present application, in which an in-vehicle camera 140 collects video data of the automobile's surroundings, and the collected video data are used to train or verify the automatic driving algorithms. In this embodiment, the vehicle-mounted system 110 is used to obtain signals of the automatic driving system 120. The automatic driving system 120 generates various signals when executing its algorithms; for example, it calculates the distance to the vehicle ahead from the signals of various sensors (such as millimeter-wave radar or laser radar) and generates an acceleration/deceleration control signal for automatic following according to that distance. By identifying such target signals, the actions executed by the vehicle and the events it encounters can be obtained; the actions and events corresponding to the scenarios required for algorithm training/verification are then selected, so that the scenarios required by the algorithms can be associated with the target signals;
based on the above principle, in this embodiment, by acquiring and identifying a target signal in the automatic driving system and taking the target signal as a reference, video data acquired during driving of the vehicle is extracted, so as to obtain target video data (associated by time) associated with the target signal, and meanwhile, the target video data is transferred to the server 130 for storage.
As shown in fig. 2, in an exemplary embodiment, the method for extracting driving video data at least includes steps S210 to S250, which are described in detail as follows:
s210, acquiring video data, a target signal and duration time of the target signal of a vehicle in the running process, wherein the video data are acquired through a camera preset in the vehicle, and the target signal is generated by an automatic driving system of the vehicle;
in this embodiment, the video data is collected by a vehicle-mounted camera, such as a vehicle recorder, a digital camera, and the like, and the target signal is generated by a vehicle-mounted system and transmitted through an interface of the vehicle-mounted system;
s220, carrying out feature recognition on the target signal and the video data;
in step S220, the target signal may be an external signal (such as a sensor signal) or an internal SWC (Software Component) signal, and the target signal is identified by features such as its coding, key value or character string; the event corresponding to the generation of the target signal may be collision monitoring, system optimization, and the like. Taking collision monitoring as an example, the automatic driving system judges from an external signal (such as a millimeter-wave radar signal) that the current vehicle is about to collide with an obstacle ahead and generates a signal controlling the current vehicle to decelerate; this deceleration signal may serve as the target signal;
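As a non-limiting illustration of how step S220 may identify a target signal by its coding, key value or character string, a minimal Python sketch follows; the names SignalRecord, TARGET_FEATURES and the example signal codes are hypothetical and do not form part of the claimed system.

from dataclasses import dataclass

# Hypothetical table of target features: signal codings / key values that the
# recognition step treats as target features (e.g. a deceleration request).
TARGET_FEATURES = {"DECEL_REQ", "AEB_TRIGGER"}

@dataclass
class SignalRecord:
    code: str          # signal coding, e.g. an external sensor or SWC signal name
    value: str         # key value or character string carried by the signal
    start_time: float  # seconds since the start of the recording
    duration: float    # how long the signal persisted, in seconds

def has_target_feature(signal: SignalRecord) -> bool:
    # Step S220: recognise the target signal by matching its coding against
    # the configured target features.
    return signal.code in TARGET_FEATURES

# Example: a deceleration signal generated when a collision is anticipated.
decel = SignalRecord("DECEL_REQ", "level_2", start_time=12.4, duration=3.0)
assert has_target_feature(decel)  # target feature present, so proceed to step S230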
the video data consist of multiple frames of images, and performing feature recognition on the video data means recognizing the images in the video; the images may be selected by frame sampling, or the images may be recognized frame by frame;
s230, when the target signal comprises target characteristics, intercepting video data according to the duration time of the target signal to obtain target video data;
in step S230, taking collision monitoring as an example, the deceleration signal is taken as the target signal, and the intercepted target video data should contain the obstacle ahead, so that the target video data can be used to verify the automatic driving algorithm and can also serve as data samples for training the algorithm;
extracting a time parameter of an image containing a first image feature when the image in the video data contains the first image feature; intercepting the video data according to the time parameter to obtain target video data;
the first image feature is described below with an example. In some embodiments, the automatic driving system of the vehicle has a traffic sign recognition function: the automatic driving system obtains an image of a traffic sign through the camera and recognizes it to generate a recognition result; when the recognition result contains the first image feature, it can be confirmed that the corresponding image was acquired by the camera while the vehicle was in the target scene or event, and the scene or event the vehicle is in can thus be confirmed.
The images in the video data are first recognized, and the images containing the target scene or event (such as a traffic sign) are then extracted, so that the target video data can be obtained directly. The target video data are used to verify the algorithm, whether the parameters of the algorithm are correct is determined from the verification result, and the parameters of the algorithm are adjusted, thereby closing the loop of algorithm training.
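The interception in step S230 can be pictured with the following sketch, which keeps the frames whose timestamps fall within the duration of the target signal, or within a window around the time parameter of the image containing the first image feature; the margin and window values are assumed for illustration only.

from typing import List, Tuple

Frame = Tuple[float, bytes]  # (timestamp in seconds, encoded image data)

def clip_by_signal_duration(frames: List[Frame], start: float, duration: float,
                            margin: float = 2.0) -> List[Frame]:
    # Step S230, first branch: keep the frames recorded while the target
    # signal was active, plus an assumed margin before and after.
    lo, hi = start - margin, start + duration + margin
    return [frame for frame in frames if lo <= frame[0] <= hi]

def clip_by_time_parameter(frames: List[Frame], t_feature: float,
                           window: float = 5.0) -> List[Frame]:
    # Step S230, second branch: the time parameter of the image containing the
    # first image feature is used as the centre of the intercepted segment.
    return [frame for frame in frames if abs(frame[0] - t_feature) <= window]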
S240, carrying out information filtering on the target video data, and transmitting the target video data subjected to information filtering to a target position to finish extraction of the video data.
In step S240, the purpose of information filtering the target video data is to desensitize the video data and remove sensitive information, such as a human face; and storing the target video data subjected to desensitization to establish a database for automatic driving algorithm verification and training.
In an embodiment of the present invention, the process of filtering information of the target video data may include steps S310 to S350, which are described in detail below:
s310, extracting an image to be processed from target video data;
in the present embodiment, an extraction period is set, and an image to be processed is extracted from video data in accordance with the extraction period.
S320, carrying out feature recognition on the image to be processed;
in step S320, the image to be processed is identified by using a pre-established image identification model, and the image identification model is obtained through training of a training data set containing sensitive information;
s330, when the image to be processed comprises the second image features, erasing the second image features in the image to be processed to obtain a processed image;
in step S330, the second image features are typically features related to privacy, such as a face, a sign including a place name, and license plate information; the purpose of step S330 is to erase sensitive information in the image, thereby completing image desensitization.
S340, storing the processed image into a preset cache in a preset format;
a cache is a type of high-speed memory whose access speed is faster than that of ordinary random access memory (Random Access Memory, RAM). In this embodiment the processed images are stored in the cache in a map format, where the key is the time and the value is the data of each frame of image.
S350, combining the processed images in the cache to generate a target video file which does not contain the second image features, and finishing information filtering of the target video data.
In step S350, the processed images in the cache are assembled using H.264 encoding to generate the target video file; the target video file does not contain the second image feature, so information filtering, i.e. desensitization, is completed.
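A minimal sketch of steps S310 to S350 is given below; it assumes OpenCV and NumPy are available on the device, detect_sensitive_regions stands in for the pre-trained recognition model of step S320, and whether the 'avc1' (H.264) encoder is available depends on the local OpenCV build.

from typing import Dict, List, Tuple
import numpy as np
import cv2  # OpenCV, assumed available for this sketch

Region = Tuple[int, int, int, int]  # x, y, w, h of a second-image-feature region

def detect_sensitive_regions(img: np.ndarray) -> List[Region]:
    # Placeholder for the image recognition model of step S320; a real system
    # would return face / license-plate / place-name sign regions here.
    return []

def erase(img: np.ndarray, regions: List[Region]) -> np.ndarray:
    # Step S330: erase the second image features, here by blanking each region.
    out = img.copy()
    for x, y, w, h in regions:
        out[y:y + h, x:x + w] = 0
    return out

def desensitise(frames: Dict[float, np.ndarray]) -> Dict[float, np.ndarray]:
    # Step S340: the cache is a map keyed by time, valued by the processed frame.
    return {t: erase(img, detect_sensitive_regions(img)) for t, img in frames.items()}

def merge_to_file(cache: Dict[float, np.ndarray], path: str, fps: float = 25.0) -> None:
    # Step S350: merge the cached frames, in time order, into a target video file.
    times = sorted(cache)
    height, width = cache[times[0]].shape[:2]
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"avc1"), fps, (width, height))
    for t in times:
        writer.write(cache[t])
    writer.release()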
In an embodiment of the present invention, the process of transmitting the target video data after the information filtering to the target location may include steps S410 to S420, which are described in detail as follows:
s410, acquiring attribute information of a target video file, wherein the attribute information comprises file size, file name, file type and file path;
S420, taking the attribute information as entry parameters of a preset interface, and transmitting the target video data after information filtering to a background system through the interface.
In this embodiment, entry parameters are the parameters required to call the interface; the attribute information is passed as entry parameters, the interface is called over the HTTP protocol using the GET method, and the information-filtered target video data are transmitted to the background system. The background system in this embodiment may be a designated location in the vehicle's automatic driving controller.
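The transmission of steps S410 to S420 might look like the sketch below; the requests library, the endpoint URL and the exact parameter names are assumptions, and the GET method is mirrored here only because the description mentions it.

import os
import requests  # assumed HTTP client for this sketch

UPLOAD_URL = "https://backend.example.com/video/upload"  # hypothetical interface

def upload_target_video(path: str) -> int:
    # Step S410: gather the attribute information of the target video file.
    params = {
        "file_name": os.path.basename(path),
        "file_size": os.path.getsize(path),
        "file_type": os.path.splitext(path)[1].lstrip("."),
        "file_path": path,
    }
    # Step S420: the attribute information enters the preset interface as
    # parameters; the file content is streamed in the request body.
    with open(path, "rb") as fh:
        response = requests.get(UPLOAD_URL, params=params, data=fh, timeout=30)
    return response.status_code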
In an embodiment of the present invention, the process after obtaining the target video data further includes steps S510 to S530, which are described in detail below:
s510, determining a first event number according to the target signal and the first event data table; determining a second event number according to the first image feature and the second event data table; the first event data table comprises a mapping relation between a target signal and a first event number, and the second event data table comprises a mapping relation between a first image feature and a second event number;
in this embodiment, a first event data table and a second event data table are pre-established, and then corresponding first event numbers and second event numbers are obtained according to the target signal and the first image feature.
S520, constructing first event information according to the first event number and the duration of the target signal; constructing second event information according to the second event number and the time parameter;
in step S520, the first event number and the second event number are used to represent the event type, so as to facilitate classification storage and viewing.
S530, uploading the first event information and the second event information to a background system, and displaying the first event information and the second event information through the background system.
In step S530, an event information upload interface is deployed based on the MQTT protocol, and the first event information and the second event information are uploaded through this interface to the background system, which may be a cloud system, for visual display.
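Steps S510 to S530 can be sketched as follows; the event data tables, event numbers and topic name are illustrative assumptions, and the publish callable stands in for the MQTT-based event information upload interface.

import json
import time
from typing import Callable, Dict, List

# Hypothetical event data tables (step S510): target signal -> first event
# number, first image feature -> second event number.
FIRST_EVENT_TABLE: Dict[str, int] = {"DECEL_REQ": 101}
SECOND_EVENT_TABLE: Dict[str, int] = {"SPEED_LIMIT_SIGN": 201}

def build_first_event(signal_code: str, duration: float) -> dict:
    # Step S520: first event information = first event number + signal duration.
    return {"event_no": FIRST_EVENT_TABLE[signal_code],
            "duration_s": duration, "reported_at": time.time()}

def build_second_event(image_feature: str, time_parameter: float) -> dict:
    # Second event information = second event number + time parameter.
    return {"event_no": SECOND_EVENT_TABLE[image_feature],
            "time_parameter": time_parameter, "reported_at": time.time()}

def upload_events(events: List[dict], publish: Callable[[str, str], None]) -> None:
    # Step S530: hand each event to the MQTT publish function supplied by the
    # vehicle's connectivity stack; the topic name is assumed.
    for event in events:
        publish("vehicle/events", json.dumps(event))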
In an embodiment of the present invention, the process after uploading the first event information and the second event information to the background system further includes steps S610 to S630, which are described in detail below:
s610, intercepting a first target parameter of first event information, and establishing a first event notification according to the first target parameter and a pre-established notification template; intercepting a second target parameter of the second event information, and establishing a second event notification according to the second target parameter and a pre-established notification template;
in this embodiment, the name, kind, event, and other target parameters of the first event information or the second event information are extracted, and then the first event notification or the second event notification is established.
S620, when the uploading mode of the first event information and the second event information is active uploading, controlling the background system to automatically send the first event notification and the second event notification to the vehicle;
in step S620, when the first event information and the second event information are actively reported by a person in the vehicle, the first event notification and the second event notification are sent directly to the vehicle for the relevant personnel to view.
S630, when the uploading mode of the first event information and the second event information is passive uploading, the background system is controlled to send the first event notification and the second event notification to the vehicle according to the request information sent by the vehicle.
In step S630, when the first event information and the second event information are reported by the automatic driving system, this indicates that the relevant personnel did not actively trigger acquisition of the target video data, so the first event notification and the second event notification are obtained only after a request is made.
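One possible shape of steps S610 to S630 is sketched below; the notification template, the parameter names and the send_to_vehicle callable are assumptions made for illustration.

from typing import Callable

# Assumed pre-established notification template (step S610).
NOTIFICATION_TEMPLATE = "[{kind}] event {event_no} lasted {duration_s:.1f} s"

def build_notification(event_info: dict, kind: str) -> str:
    # Step S610: intercept the target parameters of the event information and
    # fill them into the notification template.
    return NOTIFICATION_TEMPLATE.format(
        kind=kind,
        event_no=event_info["event_no"],
        duration_s=event_info.get("duration_s", 0.0))

def dispatch(notification: str, upload_mode: str, vehicle_requested: bool,
             send_to_vehicle: Callable[[str], None]) -> None:
    # Step S620: actively uploaded events are pushed to the vehicle at once.
    # Step S630: passively uploaded events are sent only after the vehicle asks.
    if upload_mode == "active" or (upload_mode == "passive" and vehicle_requested):
        send_to_vehicle(notification)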
In an embodiment of the present invention, the process after uploading the first event information and the second event information to the background system further includes step S710, which is described in detail below:
s710, the first event information and target video data associated with the first event information are sent to a database to be stored; and sending the second event information and the target video data associated with the second event information to a database for storage.
In this embodiment, an identity tag field is added to the first event information and the second event information to associate them with the target video data, and the associated files are then transferred to the database for storage using the POST method of the HTTPS protocol.
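Step S710 might be realised along the lines of the following sketch; the endpoint URL, field names and use of the requests library are assumptions, while the identity tag field and the HTTPS POST transfer follow the description.

import json
import uuid
import requests  # assumed HTTP client for this sketch

DB_ENDPOINT = "https://backend.example.com/archive"  # hypothetical database gateway

def archive_event(event_info: dict, video_path: str) -> int:
    # Step S710: add an identity tag field so the event information and its
    # associated target video data can be retrieved together, then transfer
    # both to the database over HTTPS with the POST method.
    tag = str(uuid.uuid4())
    tagged_event = dict(event_info, identity_tag=tag)
    with open(video_path, "rb") as fh:
        response = requests.post(
            DB_ENDPOINT,
            data={"event": json.dumps(tagged_event), "identity_tag": tag},
            files={"video": fh},
            timeout=60)
    return response.status_code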
As shown in fig. 3, the implementation procedure in this embodiment is as follows:
the automatic driving system and the camera monitor events in the driving process;
when an event occurs, intercepting video data according to the duration time of a target signal and the time parameter of a video to obtain target video data;
desensitizing the video and packaging the video to generate a target video file;
generating event information according to the event-related target signals and the first image characteristic information;
generating an event notification according to the event information, and pushing the event notification to the vehicle as a prompt;
and associating the event information with the target video file and storing the event information and the target video file into a database.
According to the driving video data extraction method described above, the video data of the vehicle during driving, the target signal and the duration of the target signal are acquired, and feature recognition is performed on the target signal; when the target signal contains the target feature, the video data are intercepted according to the duration of the target signal to obtain target video data; information filtering is performed on the target video data, and the information-filtered target video data are transmitted to a target location, completing the extraction of the video data. Taking the target signal generated by the automatic driving system when executing its algorithms as a reference, the video data recorded during driving are extracted, so that a large amount of video data related to the automatic driving system is acquired automatically, making the extraction time-saving and labour-saving.
As shown in fig. 4, the present invention also provides a video acquisition system for a vehicle, the system comprising:
the system comprises an acquisition module, a control module and a control module, wherein the acquisition module is used for acquiring video data, a target signal and the duration time of the target signal of a vehicle in the running process, the video data are acquired through a camera preset in the vehicle, and the target signal is generated by an automatic driving system of the vehicle;
the identification module is used for carrying out feature identification on the target signal and the video data;
the video intercepting module is used for intercepting the video data according to the duration time of the target signal when the target signal comprises the target feature, so as to obtain target video data; extracting a time parameter of an image containing a first image feature when the image in the video data contains the first image feature; intercepting the video data according to the time parameter to obtain target video data;
and the filtering and storing module is used for carrying out information filtering on the target video data, transmitting the target video data subjected to information filtering to a target position and completing extraction of the video data.
According to the driving video data extraction system described above, the video data of the vehicle during driving, the target signal and the duration of the target signal are acquired, and feature recognition is performed on the target signal; when the target signal contains the target feature, the video data are intercepted according to the duration of the target signal to obtain target video data; information filtering is performed on the target video data, and the information-filtered target video data are transmitted to a target location, completing the extraction of the video data. Taking the target signal generated by the automatic driving system when executing its algorithms as a reference, the video data recorded during driving are extracted, so that a large amount of video data related to the automatic driving system is acquired automatically, making the extraction time-saving and labour-saving.
It should be noted that, the extraction system of the driving video data provided in the foregoing embodiment and the extraction method of the driving video data provided in the foregoing embodiment belong to the same concept, and the specific manner in which each module and unit perform the operation has been described in detail in the method embodiment, which is not described herein again. In practical application, the extracting system for driving video data provided in the above embodiment may distribute the functions to be completed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to complete all or part of the functions described above, which is not limited herein.
The embodiment of the application also provides electronic equipment, which comprises: one or more processors; and the storage device is used for storing one or more programs, and when the one or more programs are executed by the one or more processors, the electronic equipment realizes the extraction method of the driving video data provided in each embodiment.
Fig. 5 shows a schematic diagram of a computer system suitable for use in implementing the electronic device of the embodiments of the present application. It should be noted that, the computer system 500 of the electronic device shown in fig. 5 is only an example, and should not impose any limitation on the functions and the application scope of the embodiments of the present application.
As shown in fig. 5, the computer system 500 includes a central processing unit (Central Processing Unit, CPU) 501, which can perform various appropriate actions and processes, such as performing the methods described in the above embodiments, according to a program stored in a Read-Only Memory (ROM) 502 or a program loaded from a storage section 508 into a random access Memory (Random Access Memory, RAM) 503. In the RAM 503, various programs and data required for the system operation are also stored. The CPU 501, ROM 502, and RAM 503 are connected to each other through a bus 504. An Input/Output (I/O) interface 505 is also connected to bus 504.
The following components are connected to the I/O interface 505: an input section 506 including a keyboard, a mouse, and the like; an output section 507 including a cathode ray tube (CRT) or liquid crystal display (LCD), a speaker, and the like; a storage section 508 including a hard disk and the like; and a communication section 509 including a network interface card such as a LAN (Local Area Network) card or a modem. The communication section 509 performs communication processing via a network such as the Internet. The drive 510 is also connected to the I/O interface 505 as needed. A removable medium 511, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 510 as needed, so that a computer program read therefrom is installed into the storage section 508 as needed.
In particular, according to embodiments of the present application, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section 509, and/or installed from the removable medium 511. When the computer program is executed by the central processing unit (CPU) 501, the various functions defined in the system of the present application are performed.
It should be noted that, the computer readable medium shown in the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium may be, for example, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-Only Memory (ROM), an erasable programmable read-Only Memory (Erasable Programmable Read Only Memory, EPROM), flash Memory, an optical fiber, a portable compact disc read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with a computer-readable computer program embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. A computer program embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Where each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented by means of software, or may be implemented by means of hardware, and the described units may also be provided in a processor. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.
Another aspect of the present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor of a computer, causes the computer to perform the method of extracting driving video data as described above. The computer-readable storage medium may be included in the electronic device described in the above embodiment or may exist alone without being incorporated in the electronic device.
Another aspect of the present application also provides a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the extraction method of the driving video data provided in the above embodiments.
The above embodiments merely illustrate the principles of the present invention and its effectiveness, and are not intended to limit the invention. Modifications and variations may be made to the above-described embodiments by those skilled in the art without departing from the spirit and scope of the invention. It is therefore intended that all equivalent modifications and changes made by those skilled in the art without departing from the spirit and technical concept of the present invention shall be covered by the appended claims.

Claims (9)

1. The method for extracting the driving video data is characterized by comprising the following steps:
acquiring video data, a target signal and duration time of the target signal in the running process of a vehicle, wherein the video data are acquired through a camera preset in the vehicle, and the target signal is generated based on an automatic driving algorithm;
performing feature recognition on the target signal and the video data;
when the target signal comprises a target feature, intercepting the video data according to the duration time of the target signal to obtain target video data; extracting a time parameter of an image containing a first image feature when the image in the video data contains the first image feature; intercepting the video data according to the time parameter to obtain target video data, verifying the automatic driving algorithm based on the target video data, and adjusting the parameter of the automatic driving algorithm;
and carrying out information filtering on the target video data, and transmitting the target video data subjected to information filtering to a target position to finish the extraction of the video data.
2. The method for extracting driving video data according to claim 1, wherein the step of performing information filtering on the target video data comprises the steps of:
extracting an image to be processed from the target video data;
performing feature recognition on the image to be processed;
when the image to be processed comprises second image features, erasing the second image features in the image to be processed to obtain a processed image;
storing the processed image into a preset cache in a preset format;
and merging the processed images in the cache to generate a target video file which does not contain the second image characteristics, and finishing information filtering of the target video data.
3. The method for extracting driving video data according to claim 2, wherein transmitting the information-filtered target video data to the target location comprises:
acquiring attribute information of the target video file;
and taking the attribute information as entry parameters of a preset interface, and transmitting the target video data after information filtering to a background system through the interface.
4. The method for extracting driving video data according to claim 1, further comprising, after obtaining the target video data:
determining a first event number according to the target signal and a first event data table, and determining a second event number according to the first image characteristic and a second event data table; the first event data table comprises a mapping relation between a target signal and a first event number, and the second event data table comprises a mapping relation between a first image feature and a second event number;
constructing first event information with the first event number and the duration of the target signal; constructing second event information according to the second event number and the time parameter;
uploading the first event information and the second event information to a background system, and displaying the first event information and the second event information through the background system.
5. The method for extracting driving video data according to claim 4, wherein after uploading the first event information and the second event information to a background system, further comprising:
intercepting a first target parameter of the first event information, and establishing a first event notification according to the first target parameter and a pre-established notification template; intercepting a second target parameter of the second event information, and establishing a second event notification according to the second target parameter and a pre-established notification template;
when the uploading mode of the first event information and the second event information is active uploading, controlling the background system to automatically send the first event notification and the second event notification to a vehicle;
and when the uploading mode of the first event information and the second event information is passive uploading, controlling a background system to send the first event notification and the second event notification to the vehicle according to the request information sent by the vehicle.
6. The method for extracting driving video data according to claim 4, wherein after uploading the first event information and the second event information to a background system, further comprising:
transmitting the first event information and target video data associated with the first event information to a database for storage;
and sending the second event information and the target video data associated with the second event information to a database for storage.
7. A video extraction system for a vehicle, the system comprising:
the acquisition module is used for acquiring video data, target signals and duration time of the target signals of the vehicle in the running process, wherein the video data are acquired through a camera preset in the vehicle, and the target signals are generated based on an automatic driving algorithm;
the identification module is used for carrying out feature identification on the target signal and the video data;
the video intercepting module is used for intercepting the video data according to the duration time of the target signal when the target signal comprises the target feature, so as to obtain target video data; extracting a time parameter of an image containing a first image feature when the image in the video data contains the first image feature; intercepting the video data according to the time parameter to obtain target video data, verifying the automatic driving algorithm based on the target video data, and adjusting the parameter of the automatic driving algorithm;
and the filtering and storing module is used for carrying out information filtering on the target video data, transmitting the target video data subjected to information filtering to a target position and completing extraction of the video data.
8. An electronic device, the electronic device comprising:
one or more processors;
storage means for storing one or more programs which, when executed by the one or more processors, cause the electronic device to implement a method of extracting driving video data as claimed in any one of claims 1 to 6.
9. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor of a computer, causes the computer to perform a method of extracting driving video data according to any one of claims 1 to 6.
CN202210861696.1A 2022-07-20 2022-07-20 Driving video data extraction method, system, medium and electronic equipment Active CN115278361B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210861696.1A CN115278361B (en) 2022-07-20 2022-07-20 Driving video data extraction method, system, medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210861696.1A CN115278361B (en) 2022-07-20 2022-07-20 Driving video data extraction method, system, medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN115278361A (en) 2022-11-01
CN115278361B (en) 2023-08-01

Family

ID=83767608

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210861696.1A Active CN115278361B (en) 2022-07-20 2022-07-20 Driving video data extraction method, system, medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN115278361B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106408940A (en) * 2016-11-02 2017-02-15 南京慧尔视智能科技有限公司 Microwave and video data fusion-based traffic detection method and device
CN110333730A (en) * 2019-08-12 2019-10-15 安徽江淮汽车集团股份有限公司 Verification method, platform and the storage medium of automatic Pilot algorithm expectation function safety
CN114708535A (en) * 2022-03-31 2022-07-05 阿波罗智联(北京)科技有限公司 Method and device for testing event detection algorithm, electronic equipment and storage medium

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10455185B2 (en) * 2016-08-10 2019-10-22 International Business Machines Corporation Detecting anomalous events to trigger the uploading of video to a video storage server
CN108769594B (en) * 2018-06-05 2020-08-07 北京智行者科技有限公司 Data monitoring method
CN109743579A (en) * 2018-12-24 2019-05-10 秒针信息技术有限公司 A kind of method for processing video frequency and device, storage medium and processor
CN109714644B (en) * 2019-01-22 2022-02-25 广州虎牙信息科技有限公司 Video data processing method and device, computer equipment and storage medium
CN111582006A (en) * 2019-02-19 2020-08-25 杭州海康威视数字技术股份有限公司 Video analysis method and device
WO2021244591A1 (en) * 2020-06-03 2021-12-09 上海商汤临港智能科技有限公司 Driving auxiliary device and method, and vehicle and storage medium
CN111881734A (en) * 2020-06-17 2020-11-03 武汉光庭信息技术股份有限公司 Method and device for automatically intercepting target video
US12008812B2 (en) * 2020-09-30 2024-06-11 Alarm.Com Incorporated Simultaneous playback of continuous video recordings from multiple recording devices
CN113450474A (en) * 2021-06-28 2021-09-28 通视(天津)信息技术有限公司 Driving video data processing method and device and electronic equipment

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106408940A (en) * 2016-11-02 2017-02-15 南京慧尔视智能科技有限公司 Microwave and video data fusion-based traffic detection method and device
CN110333730A (en) * 2019-08-12 2019-10-15 安徽江淮汽车集团股份有限公司 Verification method, platform and the storage medium of automatic Pilot algorithm expectation function safety
CN114708535A (en) * 2022-03-31 2022-07-05 阿波罗智联(北京)科技有限公司 Method and device for testing event detection algorithm, electronic equipment and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
An Unmanned Vehicle Trajectory Tracking Method based on Improved Model-free Adaptive Control Algorithm;Dongdong Yuan;《2020 IEEE 9th Data Driven Control and Learning Systems Conference (DDCLS)》;全文 *
Research on Optimization of Automatic Driving Systems Based on Intelligent Control Algorithms; 霍桂利; 《现代电子技术》 (Modern Electronics Technique); full text *
A Review of Deep Learning Applications in the Field of Autonomous Driving; 段续庭; 《无人系统技术》 (Unmanned Systems Technology); full text *

Also Published As

Publication number Publication date
CN115278361A (en) 2022-11-01

Similar Documents

Publication Publication Date Title
CN108860162B (en) Electronic device, safety early warning method based on user driving behavior and storage medium
US9495601B2 (en) Detecting and reporting improper activity involving a vehicle
WO2022078077A1 (en) Driving risk early warning method and apparatus, and computing device and storage medium
CN113205088B (en) Obstacle image presentation method, electronic device, and computer-readable medium
US20240083443A1 (en) Driving state monitoring device, driving state monitoring method, and driving state monitoring system
US20220139090A1 (en) Systems and methods for object monitoring
CN108230669B (en) Road vehicle violation detection method and system based on big data and cloud analysis
CN115203078A (en) Vehicle data acquisition system, method, equipment and medium based on SOA architecture
JP2024019277A (en) Driving condition monitoring device, driving condition monitoring system, driving condition monitoring method, and drive recorder
CN115278361B (en) Driving video data extraction method, system, medium and electronic equipment
DE112018004773T5 (en) INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING PROCESS, PROGRAM AND VEHICLE
CN108960160B (en) Method and device for predicting structured state quantity based on unstructured prediction model
CN111985304A (en) Patrol alarm method, system, terminal equipment and storage medium
CN110853364A (en) Data monitoring method and device
CN112585957A (en) Station monitoring system and station monitoring method
CN112969053B (en) In-vehicle information transmission method and device, vehicle-mounted equipment and storage medium
CN112859109B (en) Unmanned aerial vehicle panoramic image processing method and device and electronic equipment
CN115019511A (en) Method and device for identifying illegal lane change of motor vehicle based on automatic driving vehicle
CN111400687B (en) Authentication method, authentication device and robot
CN111696368A (en) Overspeed illegal data generation method and illegal server
US20230274586A1 (en) On-vehicle device, management system, and upload method
CN112614347B (en) Fake plate detection method and device, computer equipment and storage medium
US11521331B2 (en) Method and apparatus for generating position information, device, and medium
WO2021235126A1 (en) Information processing device and information processing method
WO2023170768A1 (en) Control device, monitoring system, control method, and non-transitory computer-readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant