CN115720253A - Video processing method, device, vehicle and storage medium - Google Patents

Video processing method, device, vehicle and storage medium

Info

Publication number
CN115720253A
CN115720253A
Authority
CN
China
Prior art keywords
video
emergency state
videos
target
vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211395823.XA
Other languages
Chinese (zh)
Other versions
CN115720253B (en)
Inventor
侯旭光
姚昂
吴祥
李世龙
权伍明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Automobile Group Co Ltd
Original Assignee
Guangzhou Automobile Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Automobile Group Co Ltd filed Critical Guangzhou Automobile Group Co Ltd
Priority to CN202211395823.XA priority Critical patent/CN115720253B/en
Publication of CN115720253A publication Critical patent/CN115720253A/en
Application granted granted Critical
Publication of CN115720253B publication Critical patent/CN115720253B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application discloses a video processing method, a video processing device, a vehicle and a storage medium. The method comprises the following steps: continuously acquiring N videos through an image acquisition device and storing the N videos in a first designated folder; acquiring target videos from the N videos based on the time information of the vehicle entering an emergency state; and splicing the plurality of target videos to obtain an emergency state video, and storing the emergency state video in a second designated folder. With this technical solution, after the vehicle enters the emergency state, the videos captured by the image acquisition device in the loop recording mode can be multiplexed to obtain an emergency state video that records the environmental information of the vehicle before and after it enters the emergency state. This process consumes only a small amount of computing power and reduces the consumption of the electronic controller's hardware resources, so that the electronic controller has sufficient hardware resources to process other services in the vehicle and stuttering is reduced.

Description

Video processing method, device, vehicle and storage medium
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a video processing method, an apparatus, a vehicle, and a storage medium.
Background
The automobile data recorder records the environment around the vehicle in real time during driving and is widely used in the automotive field.
In the related art, the electronic controller needs to process other services in the vehicle in addition to controlling the automobile data recorder. When the vehicle enters an emergency state, the automobile data recorder consumes a large amount of the electronic controller's computing power while urgently capturing the vehicle's driving environment, causing the electronic controller to stutter when processing other services.
Disclosure of Invention
The application provides a video processing method, a video processing device, a vehicle and a storage medium.
In a first aspect, an embodiment of the present application provides a video processing method. The method includes: continuously acquiring N videos through an image acquisition device and storing the N videos in a first designated folder, where N is a positive integer greater than 2 and a video in the first designated folder is deleted when its storage duration is greater than or equal to a preset duration; after the vehicle is monitored to enter an emergency state, acquiring target videos from the N videos based on time information of the vehicle entering the emergency state; and, in the case that there are a plurality of target videos, splicing the plurality of target videos to obtain an emergency state video and storing the emergency state video in a second designated folder, where files in the second designated folder do not respond to a specified deletion instruction, the specified deletion instruction being any deletion instruction other than one triggered by a user, and the emergency state video is used to record environmental information of the vehicle before and after it enters the emergency state.
In a second aspect, an embodiment of the present application provides a video processing apparatus, including: a video capture module, configured to continuously acquire N videos through an image acquisition device, where N is a positive integer greater than 2; a first storage module, configured to store the N videos in a first designated folder, where a video in the first designated folder is deleted when its storage duration is greater than or equal to a preset duration; a video acquisition module, configured to, after the vehicle is monitored to enter an emergency state, acquire target videos from the N videos based on time information of the vehicle entering the emergency state; a video processing module, configured to, in the case that there are a plurality of target videos, splice the plurality of target videos to obtain an emergency state video, where the emergency state video is used to record environmental information of the vehicle before and after it enters the emergency state; and a second storage module, configured to store the emergency state video in a second designated folder, where files in the second designated folder do not respond to a specified deletion instruction, the specified deletion instruction being any deletion instruction other than one triggered by a user.
In a third aspect, an embodiment of the present application provides a vehicle, including: one or more processors; a memory; an image acquisition device; one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more application programs configured to perform the video processing method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, in which computer program instructions are stored, and the computer program instructions can be called by a processor to execute the video processing method according to the first aspect.
In a fifth aspect, the present application provides a computer program product, which when executed, is configured to implement the video processing method according to the first aspect.
Compared with the prior art, in the video processing method provided by the embodiments of the present application, after the vehicle enters the emergency state, three target videos are obtained from the videos captured by the image acquisition device in the loop recording mode and spliced to obtain an emergency state video that records the environmental information before and after the vehicle enters the emergency state.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present application, the drawings required in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings based on these drawings without creative effort.
FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application.
Fig. 2 is a flowchart of a video processing method according to an embodiment of the present application.
Fig. 3 is a schematic diagram of video processing provided by an embodiment of the present application.
Fig. 4 is a flowchart of a video processing method according to an embodiment of the present application.
Fig. 5 is a schematic interface diagram for playing an emergency video according to an embodiment of the present application.
Fig. 6 is a block diagram of a video processing apparatus according to an embodiment of the present application.
Fig. 7 is a block diagram of a vehicle according to an embodiment of the present application.
FIG. 8 is a block diagram of a computer-readable storage medium provided by one embodiment of the present application.
Detailed Description
Reference will now be made in detail to the embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the accompanying drawings are exemplary only for explaining the present application and are not to be construed as limiting the present application.
In order to make the technical solutions of the present application better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments obtained by a person skilled in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
Referring to fig. 1, a schematic diagram of an implementation environment provided by an embodiment of the present application is shown. The implementation environment includes a vehicle 100. The vehicle 100 is a powered or towed vehicle used to carry people or transport goods, including but not limited to a car, a Sport Utility Vehicle (SUV), a Multi-Purpose Vehicle (MPV), and the like. The vehicle 100 is provided with an image acquisition device and an electronic controller, which are communicatively connected to each other.
The image acquisition device is used to record environmental information while the vehicle 100 is driving. Optionally, the image acquisition device is an automobile data recorder, which can be installed on the front windshield of the vehicle close to the rear of the central rearview mirror. In some embodiments, the automobile data recorder continuously performs image acquisition while the vehicle 100 is driving to obtain a plurality of videos of the same duration and sends them to the electronic controller in sequence. The electronic controller stores these videos but deletes a video once its storage duration reaches a preset duration, so as to make room for videos newly captured by the automobile data recorder. This mode of operation is referred to as the loop recording mode.
The electronic controller is used to process various services in the vehicle 100, including but not limited to: control of the drive system, control of the brake system, control of the entertainment system, and control of environmental components in the vehicle such as the air conditioner and ambient lights. In the embodiments of the present application, when the electronic controller monitors that the vehicle enters an emergency state (for example, a collision accident or emergency braking occurs), it multiplexes the videos captured by the image acquisition device in the loop recording mode and splices them to obtain an emergency state video that records the environmental information of the vehicle 100 before and after it enters the emergency state. This process consumes only a small amount of the electronic controller's computing power and reduces the consumption of its hardware resources, so that the electronic controller has sufficient hardware resources to process other services in the vehicle 100 and stuttering is reduced.
Referring to fig. 2, a flowchart of a video processing method according to an embodiment of the present application is shown. The execution subject of each step in the method may be an electronic controller, and the method includes the following processes.
Step S201, continuously acquiring N videos through the image acquisition device, and storing the N videos in a first designated folder.
N is a positive integer greater than 2. In the embodiments of the present application, the image acquisition device continuously captures video while the vehicle is driving, obtaining N videos of the same duration.
A video in the first designated folder is deleted when its storage duration is greater than or equal to a preset duration, so as to make room for videos newly captured by the image acquisition device. In other possible embodiments, when the amount of video data in the first designated folder exceeds a preset amount, the electronic controller deletes the video or videos that were stored first to make room for videos newly captured by the image acquisition device.
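For illustration only, the following minimal Python sketch shows one way such a retention policy for the first designated folder could be realized; the folder path, the thresholds, and the function names are assumptions, not taken from the patent.

```python
# Illustrative sketch of the loop-recording retention policy described above.
# Folder path, thresholds, and names are assumptions for illustration only.
import os
import time

FIRST_FOLDER = "/data/recorder/loop"      # assumed "first designated folder"
PRESET_DURATION_S = 30 * 60               # assumed retention window: 30 minutes
PRESET_DATA_BYTES = 4 * 1024 ** 3         # assumed size cap: 4 GiB

def prune_loop_folder(folder: str = FIRST_FOLDER) -> None:
    """Delete loop-recorded clips whose storage time exceeds the preset
    duration, then the oldest clips once the folder exceeds the size cap."""
    clips = sorted(
        (os.path.join(folder, name) for name in os.listdir(folder)),
        key=os.path.getmtime,
    )
    now = time.time()
    kept = []
    for clip in clips:
        # Rule 1: storage duration >= preset duration -> delete.
        if now - os.path.getmtime(clip) >= PRESET_DURATION_S:
            os.remove(clip)
        else:
            kept.append(clip)
    # Rule 2 (alternative embodiment): data amount > preset amount -> delete oldest first.
    while kept and sum(os.path.getsize(c) for c in kept) > PRESET_DATA_BYTES:
        os.remove(kept.pop(0))
```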
Step S202, after the vehicle is monitored to enter the emergency state, obtaining target videos from the N videos based on the time information of the vehicle entering the emergency state.
The emergency state includes, but is not limited to, a collision, emergency braking, emergency steering, and the like. The emergency state indicates that the vehicle may be involved in an accident, and at that moment the vehicle's current environmental information needs to be accurately recorded to facilitate subsequent rescue, tracing of responsibility, and the like.
In some embodiments, the vehicle obtains the vehicle acceleration in real time through an acceleration sensor and determines that the vehicle enters the emergency state when the absolute value of the acceleration is greater than a first preset value. The first preset value is set based on experiment or experience and is not limited in the embodiments of the present application. An absolute acceleration greater than the first preset value indicates that the vehicle is braking hard or accelerating suddenly. It should be noted that a large acceleration also occurs when the vehicle has just started driving, which is not an accident situation. Therefore, when determining whether the vehicle enters the emergency state from the acceleration, the driving speed of the vehicle before the acceleration exceeded the first preset value also needs to be obtained. If that driving speed is less than a preset driving speed, the vehicle is most likely in the start-up acceleration phase rather than in an emergency state; if it is greater than the preset driving speed, the vehicle is determined to have entered the emergency state.
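As a minimal sketch of this acceleration-based check (the threshold values and signal names below are assumptions for illustration):

```python
# Illustrative sketch of the acceleration-based trigger described above.
# The thresholds and signal names are assumptions for illustration only.
FIRST_PRESET_ACCEL = 6.0        # m/s^2, assumed first preset value
PRESET_DRIVING_SPEED = 15.0     # km/h, assumed preset driving speed

def entered_emergency_state(accel: float, speed_before_accel: float) -> bool:
    """Return True when |acceleration| exceeds the first preset value and the
    vehicle was already moving faster than the preset driving speed, so that a
    normal start-up acceleration is not mistaken for an emergency."""
    if abs(accel) <= FIRST_PRESET_ACCEL:
        return False
    # Large acceleration at low prior speed is treated as a start-up phase.
    return speed_before_accel > PRESET_DRIVING_SPEED
```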
In other embodiments, the vehicle monitors changes in the steering wheel angle and determines that it has entered the emergency state when the rate of change of the steering wheel angle is greater than a second preset value. The second preset value is set based on experiment or experience and is not limited in the embodiments of the present application. A rate of change greater than the second preset value indicates that the steering wheel angle has changed greatly in a short time, i.e., that the vehicle has made an emergency turn.
In other embodiments, the vehicle determines that it has entered the emergency state when a collision event is detected. Optionally, the vehicle may analyze the video frames captured by the image acquisition device to determine whether a collision event has occurred.
In some embodiments, the time information of the vehicle entering the emergency state includes the target moment at which the vehicle enters the emergency state, and the target videos include a first target video, a second target video, and a third target video. In this embodiment, step S202 may be implemented as the following sub-steps: after the vehicle is monitored to enter the emergency state, determining, among the N videos, the video whose acquisition time includes the target moment as the first target video; determining, among the N videos, a video whose acquisition time is before that of the first target video and whose time interval from the acquisition time of the first target video is less than or equal to a first time interval as the second target video; and determining, among the N videos, a video whose acquisition time is after that of the first target video and whose time interval from the acquisition time of the first target video is less than or equal to a second time interval as the third target video.
The first time interval and the second time interval can be determined from the duration of the videos captured by the image acquisition device, and they may be the same or different. Specifically, the first time interval and the second time interval may each be a positive integer multiple of the duration of a captured video, so that when the emergency state video is generated later, the captured videos can be used directly without being split.
Further, when the first time interval and the second time interval are both equal to the duration of one captured video, determining the second target video includes: determining, among the N videos, the video whose acquisition time is before that of the first target video and whose time interval from the acquisition time of the first target video is the smallest as the second target video. Determining the third target video includes: determining, among the N videos, the video whose acquisition time is after that of the first target video and whose time interval from the acquisition time of the first target video is the smallest as the third target video.
Referring to fig. 3, a schematic diagram of video processing provided by an embodiment of the present application is shown. The image acquisition device captures N videos in the loop recording mode. At moment t0, the vehicle monitors that it has entered the emergency state; it then searches the N videos for the video whose acquisition time includes moment t0 as the first target video 31, determines the video whose acquisition time is before and closest to that of the first target video 31 as the second target video 32, and determines the video whose acquisition time is after and closest to that of the first target video 31 as the third target video 33.
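A minimal sketch of this selection step is given below, assuming each clip is described by its start time and a fixed duration; the clip metadata structure and names are illustrative and not taken from the patent.

```python
# Illustrative selection of the first/second/third target videos.
# The clip metadata structure is an assumption for illustration only.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Clip:
    path: str
    start: float      # acquisition start time, in seconds
    duration: float   # clip duration, in seconds

def select_target_videos(
    clips: List[Clip], target_moment: float
) -> Tuple[Optional[Clip], Optional[Clip], Optional[Clip]]:
    """Return (second, first, third) target videos around the target moment."""
    clips = sorted(clips, key=lambda c: c.start)
    first = next(
        (c for c in clips if c.start <= target_moment < c.start + c.duration),
        None,
    )
    if first is None:
        return None, None, None
    before = [c for c in clips if c.start < first.start]
    after = [c for c in clips if c.start > first.start]
    second = before[-1] if before else None   # closest clip before the first
    third = after[0] if after else None       # closest clip after the first
    return second, first, third
```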
It should be noted that the moments at which the first target video, the second target video, and the third target video are obtained may be the same or different. In some embodiments, the vehicle obtains the second target video when it enters the emergency state, obtains the first target video when the capture of the first target video is completed, and obtains the third target video when the capture of the third target video is completed. In other embodiments, the vehicle obtains the first target video, the second target video, and the third target video together after the third target video has been captured.
In other possible embodiments, if the image acquisition device was not capturing video before the emergency state (for example, it has just been installed), the electronic controller may obtain only the first target video, whose acquisition time includes the target moment, and the third target video, whose acquisition time is after the target moment. If the image acquisition device stops working after the vehicle enters the emergency state, the electronic controller may obtain the first target video, whose acquisition time includes the target moment, and the second target video, whose acquisition time is before the target moment. If both situations occur, the electronic controller may obtain only the first target video, whose acquisition time includes the target moment.
Step S203, in the case that there are a plurality of target videos, splicing the plurality of target videos to obtain an emergency state video, and storing the emergency state video in a second designated folder.
The emergency state video is used to record environmental information of the vehicle before and after it enters the emergency state. Videos in the second designated folder do not respond to a specified deletion instruction, the specified deletion instruction being any deletion instruction other than one triggered by a user; that is, a file under this storage path can only be deleted manually by the user and cannot be deleted automatically, which prevents the emergency state video from being deleted by mistake. The first designated folder and the second designated folder are different folders.
In the case that the target videos include the first target video, the second target video, and the third target video, step S203 is implemented as: splicing the second target video, the first target video, and the third target video end to end in order of acquisition time to obtain the emergency state video.
In the case that the target videos include the first target video and the third target video, step S203 is implemented as splicing the first target video and the third target video end to end in order of acquisition time to obtain the emergency state video. In the case that the target videos include the first target video and the second target video, step S203 is implemented as splicing the second target video and the first target video end to end in order of acquisition time to obtain the emergency state video. In the case that the target video includes only the first target video, the electronic controller determines the first target video as the emergency state video. Referring again to fig. 3, the electronic controller splices the second target video 32, the first target video 31, and the third target video 33 in sequence to obtain the emergency state video 34.
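As an illustration of end-to-end splicing in acquisition-time order, the sketch below uses the ffmpeg concat demuxer with stream copy; the use of ffmpeg and the example paths are assumptions, chosen because copying streams without re-encoding fits the low-compute-cost goal described above.

```python
# Illustrative sketch of end-to-end splicing in acquisition-time order.
# Using the ffmpeg concat demuxer with stream copy is an assumption here;
# it avoids re-encoding, which fits the low-compute-cost goal of the method.
import subprocess
import tempfile

def splice_emergency_video(clip_paths, output_path):
    """Concatenate clips (already sorted by acquisition time) into one file."""
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        for path in clip_paths:
            f.write(f"file '{path}'\n")
        list_file = f.name
    subprocess.run(
        ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
         "-i", list_file, "-c", "copy", output_path],
        check=True,
    )

# Example (paths are illustrative):
# splice_emergency_video([second, first, third], "/data/recorder/emergency/2210191523.mp4")
```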
To sum up, in the technical solution provided by the embodiments of the present application, after the vehicle enters the emergency state, three target videos are obtained from the videos captured by the image acquisition device in the loop recording mode and spliced into an emergency state video that records the environmental information before and after the vehicle enters the emergency state. Because the videos captured in the loop recording mode are multiplexed, the splicing process consumes only a small amount of the electronic controller's computing power and reduces the consumption of its hardware resources, so that the electronic controller has sufficient hardware resources to process other services in the vehicle and stuttering is reduced.
In some embodiments, after the vehicle obtains the emergency state video, it may also set a name for the emergency state video to facilitate later retrieval. Optionally, the vehicle sets the name of the emergency state video based on the time information of the vehicle entering the emergency state; that is, the name contains this time information. For example, if the target moment at which the vehicle entered the emergency state is 15:23 on October 19, 2022, the name of the emergency state video may be "2210191523". In this way, the user can quickly find the emergency state video by the time the vehicle entered the emergency state, which improves search efficiency.
Further, the vehicle may also set the name of the emergency state video based on both the time information of the vehicle entering the emergency state and the position information of the vehicle when it entered the emergency state. The vehicle can obtain this position information through a positioning module, which may be a GPS module in the vehicle. In this embodiment, the name of the emergency state video contains, in addition to the above time information, the position of the vehicle when it entered the emergency state. Continuing the example above, if the vehicle is rear-ended at the xx high-speed toll station at 15:23 on October 19, 2022, the name of the emergency state video may be set to "xx high-speed toll station 2210191523". In this way, the user can quickly find the emergency state video by the time and the position at which the vehicle entered the emergency state, which improves search efficiency.
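A minimal sketch of such a naming rule, following the "YYMMDDHHMM" format of the example above; the function and argument names are assumptions.

```python
# Illustrative naming of the emergency state video from time (and optionally
# location) information; the "YYMMDDHHMM" format follows the example in the
# text, while the function and argument names are assumptions.
from datetime import datetime
from typing import Optional

def emergency_video_name(entered_at: datetime, location: Optional[str] = None) -> str:
    """Build a name such as '2210191523' or 'xx high-speed toll station 2210191523'."""
    stamp = entered_at.strftime("%y%m%d%H%M")
    return f"{location} {stamp}" if location else stamp

# Example: emergency_video_name(datetime(2022, 10, 19, 15, 23)) -> "2210191523"
```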
In some embodiments, the vehicle may also obtain a target video frame from the emergency state video and set it as the video cover of the emergency state video. The target video frame records the environmental information at the moment the vehicle entered the emergency state; that is, it is the video frame captured by the image acquisition device at the target moment. In this way, the user can quickly understand the content of the emergency state video when viewing its video interface.
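The cover frame could, for example, be grabbed at the target moment's offset within the emergency state video; the sketch below uses OpenCV, which is an assumption rather than part of the patent.

```python
# Illustrative extraction of the cover frame at the target moment using OpenCV;
# the choice of OpenCV and the file paths are assumptions for illustration only.
import cv2

def save_cover_frame(video_path: str, offset_s: float, cover_path: str) -> bool:
    """Grab the frame at `offset_s` seconds into the emergency state video and
    write it out as the video cover image."""
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_MSEC, offset_s * 1000.0)
    ok, frame = cap.read()
    cap.release()
    if ok:
        cv2.imwrite(cover_path, frame)
    return ok
```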
Referring to fig. 4, a flowchart of a video processing method according to an embodiment of the present application is shown. The method comprises the following processes.
Step S401, continuously acquiring N videos through the image acquisition device, and storing the N videos in a first designated folder.
A video in the first designated folder is deleted when its storage duration is greater than or equal to the preset duration.
Step S402, after the vehicle is monitored to enter the emergency state, obtaining target videos from the N videos based on the time information of the vehicle entering the emergency state.
Step S403, in the case that there are a plurality of target videos, splicing the plurality of target videos to obtain an emergency state video, and storing the emergency state video in a second designated folder.
The emergency state video is used to record environmental information of the vehicle before and after it enters the emergency state. Files in the second designated folder do not respond to a specified deletion instruction, the specified deletion instruction being any deletion instruction other than one triggered by a user.
Step S404, after receiving a playing instruction for the emergency state video, playing the emergency state video.
In some embodiments, the vehicle is provided with a touch-controlled central control screen, the name of the emergency state video is displayed on the central control screen, and the electronic controller receives the playing instruction when a trigger signal for the name of the emergency state video is detected. The trigger signal may be a single-tap, double-tap, or long-press trigger signal. In other embodiments, the vehicle is provided with a non-touch central control screen and an operable control, and the user can trigger the playing instruction for the emergency state video by operating the control. In still other embodiments, the playing instruction is a voice signal: the electronic controller collects the voice signal through a voice collection device and receives the playing instruction when the voice signal is detected to contain a specified keyword.
Step S405, during playback of the emergency state video, displaying a playing progress bar of the emergency state video.
The playing progress bar indicates the playback progress of the emergency state video and includes a target mark. The target mark marks the video frame captured by the image acquisition device at the moment the vehicle entered the emergency state.
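One simple way to place the target mark is to compute the target moment's relative offset within the spliced emergency state video, as sketched below; the names and units are assumptions.

```python
# Illustrative placement of the target mark on the playing progress bar:
# the mark's relative offset is the target moment's position within the
# spliced emergency state video. Names and units are assumptions.
def target_mark_offset(target_moment: float,
                       video_start: float,
                       video_duration: float) -> float:
    """Return the target mark position as a fraction in [0, 1] of the bar."""
    offset = (target_moment - video_start) / video_duration
    return min(max(offset, 0.0), 1.0)

# Example: a 3-minute emergency state video whose first clip starts 60 s before
# the target moment puts the mark at target_mark_offset(60.0, 0.0, 180.0) == 1/3.
```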
Referring to fig. 5 in combination, which shows a schematic view of playing an emergency video provided in an embodiment of the present application, in the process of playing the emergency video, the central control screen displays a play progress bar 51, and the play progress bar 51 includes a target mark 52.
Step S406, after receiving a trigger signal for the target mark, displaying the video frame captured by the image acquisition device when the vehicle entered the emergency state.
The trigger signal may be a single-tap, double-tap, or long-press trigger signal from the user on the target mark. The trigger signal may also be a voice signal.
In the embodiments of the present application, after the user triggers the target mark, the central control screen jumps to display the video frame captured by the image acquisition device at the moment the vehicle entered the emergency state. In this way, the picture at the moment the vehicle entered the emergency state can be located quickly, which improves search efficiency. Referring again to fig. 5, after the user triggers the target mark 52, the central control screen displays the video frame 53 captured by the image acquisition device when the vehicle entered the emergency state.
To sum up, in the technical solution provided by the embodiments of the present application, the target mark is displayed on the playing progress bar when the emergency state video is played, and after the user triggers the target mark, the central control screen jumps to display the video frame captured by the image acquisition device at the moment the vehicle entered the emergency state. This enables rapid location of the picture at the moment the vehicle entered the emergency state and improves search efficiency.
In some embodiments, the vehicle performs the subsequent video processing steps to obtain the emergency state video when it monitors that the ratio of the electronic controller's available hardware resources to its total hardware resources is less than a preset ratio. The preset ratio is set based on experiment or experience. A ratio below the preset ratio means that the electronic controller's available hardware resources are insufficient; in this case, multiplexing the videos captured by the image acquisition device in the loop recording mode reduces the required computing power and the consumption of the electronic controller's hardware resources, so that the electronic controller has sufficient hardware resources to process other services and stuttering is reduced.
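A minimal sketch of such a resource check is shown below; the use of psutil, the choice of memory as the monitored resource, and the preset ratio are assumptions for illustration only.

```python
# Illustrative check of available vs. total hardware resources before running
# the splicing path; psutil, the preset ratio, and the use of memory as the
# monitored resource are assumptions for illustration only.
import psutil

PRESET_RATIO = 0.2  # assumed threshold for "available / total"

def should_use_low_cost_path(preset_ratio: float = PRESET_RATIO) -> bool:
    """Return True when available hardware resources are scarce, so the
    loop-recorded clips should be multiplexed instead of re-encoded."""
    mem = psutil.virtual_memory()
    return (mem.available / mem.total) < preset_ratio
```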
Referring to fig. 6, a block diagram of a video processing apparatus according to an embodiment of the present application is shown. The apparatus includes: a video capture module 610, a first storage module 620, a video acquisition module 630, a video processing module 640, and a second storage module 650.
The video capture module 610 is configured to continuously acquire N videos through the image acquisition device, where N is a positive integer greater than 2.
The first storage module 620 is configured to store the N videos in a first designated folder, where a video in the first designated folder is deleted when its storage duration is greater than or equal to the preset duration.
The video acquisition module 630 is configured to, after the vehicle is monitored to enter the emergency state, acquire target videos from the plurality of videos based on the time information of the vehicle entering the emergency state.
The video processing module 640 is configured to, in the case that there are a plurality of target videos, splice the plurality of target videos to obtain an emergency state video, where the emergency state video is used to record environmental information of the vehicle before and after it enters the emergency state.
The second storage module 650 is configured to store the emergency state video in a second designated folder, where files in the second designated folder do not respond to a specified deletion instruction, the specified deletion instruction being any deletion instruction other than one triggered by the user.
To sum up, in the technical solution provided by the embodiments of the present application, after the vehicle enters the emergency state, three target videos are obtained from the videos captured by the image acquisition device in the loop recording mode and spliced into an emergency state video that records the environmental information before and after the vehicle enters the emergency state. Because the videos captured in the loop recording mode are multiplexed, the splicing process consumes only a small amount of the electronic controller's computing power and reduces the consumption of its hardware resources, so that the electronic controller has sufficient hardware resources to process other services in the vehicle and stuttering is reduced.
In some embodiments, the time information of the vehicle entering the emergency state includes a target moment, and the target videos include a first target video, a second target video, and a third target video. The video acquisition module 630 is configured to: after the vehicle is monitored to enter the emergency state, determine, among the N videos, the video whose acquisition time includes the target moment as the first target video; determine, among the N videos, a video whose acquisition time is before that of the first target video and whose time interval from the acquisition time of the first target video is less than a first time interval as the second target video; and determine, among the N videos, a video whose acquisition time is after that of the first target video and whose time interval from the acquisition time of the first target video is less than a second time interval as the third target video.
In some embodiments, the video acquisition module 630 is configured to determine, among the N videos, the video whose acquisition time is before that of the first target video and whose time interval from the acquisition time of the first target video is the smallest as the second target video, and to determine, among the N videos, the video whose acquisition time is after that of the first target video and whose time interval from the acquisition time of the first target video is the smallest as the third target video.
In some embodiments, the apparatus further includes a playing module (not shown in the figure). The playing module is configured to: play the emergency state video after receiving a playing instruction for the emergency state video; during playback of the emergency state video, display a playing progress bar of the emergency state video, where the playing progress bar includes a target mark that indicates the position, within the emergency state video, of the video frame captured by the image acquisition device when the vehicle entered the emergency state; and, after receiving a trigger signal for the target mark, play the video frame captured by the image acquisition device when the vehicle entered the emergency state.
In some embodiments, the video processing module 640 is configured to splice the second target video, the first target video, and the third target video end to end in order of acquisition time to obtain the emergency state video.
In some embodiments, the apparatus further includes a naming module (not shown in the figure). The naming module is configured to set the name of the emergency state video based on the time information of the vehicle entering the emergency state.
In some embodiments, the naming module is configured to: obtain, through a positioning module, the position information of the vehicle when it entered the emergency state; and set the name of the emergency state video based on the time information of the vehicle entering the emergency state and the position information of the vehicle when it entered the emergency state.
In some embodiments, the apparatus further includes a cover setting module (not shown in the figure). The cover setting module is configured to: obtain a target video frame from the emergency state video, where the target video frame records the environmental information at the moment the vehicle entered the emergency state; and set the target video frame as the cover of the emergency state video.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, the coupling between the modules may be electrical, mechanical or other type of coupling.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
Referring to fig. 7, an embodiment of the present application further provides a vehicle 700. The vehicle 700 includes: one or more processors 710, a memory 720, and one or more application programs, where the one or more application programs are stored in the memory and configured to be executed by the one or more processors, and the one or more application programs are configured to perform the methods described in the above embodiments.
The processor 710 may include one or more processing cores. The processor 710 connects the various parts of the vehicle 700 using various interfaces and circuits, and performs various functions and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 720 and invoking the data stored in the memory 720. Optionally, the processor 710 may be implemented in hardware using at least one of a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), and a Programmable Logic Array (PLA). The processor 710 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, the application programs, and the like; the GPU renders and draws display content; and the modem handles wireless communication. It is understood that the modem may also not be integrated into the processor 710 and may instead be implemented by a separate communication chip.
The memory 720 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 720 may be used to store instructions, programs, code sets, or instruction sets. The memory 720 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the above method embodiments, and the like. The data storage area may also store data created during use (such as a phonebook, audio and video data, and chat log data), and the like.
Referring to fig. 8, a computer-readable storage medium 800 is provided according to an embodiment of the present application, in which a computer program instruction 810 is stored in the computer-readable storage medium 800, and the computer program instruction 810 can be called by a processor to execute the method described in the above embodiment.
The computer-readable storage medium 800 may be, for example, a flash memory, an Electrically Erasable Programmable Read-Only Memory (EEPROM), an Erasable Programmable Read-Only Memory (EPROM), a hard disk, or a Read-Only Memory (ROM). Optionally, the computer-readable storage medium includes a non-transitory computer-readable storage medium. The computer-readable storage medium 800 has storage space for the computer program instructions 810 that perform any of the method steps described above. The computer program instructions 810 may be read from or written into one or more computer program products.
Although the present application has been described with reference to the preferred embodiments, it is to be understood that the present application is not limited to the disclosed embodiments, but rather, the present application is intended to cover various modifications, equivalents and alternatives falling within the spirit and scope of the present application.

Claims (10)

1. A method of video processing, the method comprising:
continuously acquiring N videos through an image acquisition device, storing the N videos in a first appointed folder, wherein N is a positive integer greater than 2, and deleting the videos in the first appointed folder when the storage time length is greater than or equal to a preset time length;
after a vehicle is monitored to enter an emergency state, acquiring a target video from N videos based on time information of the vehicle entering the emergency state;
and under the condition that a plurality of target videos exist, splicing the plurality of target videos to obtain an emergency state video, and storing the emergency state video in a second specified folder, wherein files in the second specified folder do not respond to specified deletion instructions, the specified deletion instructions are other deletion instructions except the deletion instructions triggered by a user, and the emergency state video is used for recording environmental information of the vehicle before and after entering the emergency state.
2. The method of claim 1, wherein the time information that the vehicle enters the emergency state includes a target time; the target videos comprise a first target video, a second target video and a third target video; the acquiring a target video from a plurality of videos based on the time information when the vehicle enters the emergency state includes:
after the vehicle is monitored to be in the emergency state, determining videos, the acquisition time of which comprises the target moment, in the N videos as the first target video;
determining videos of the N videos, wherein the acquisition time of the videos is before the acquisition time of the first target video, and the time interval between the videos and the acquisition time of the first target video is less than a first time interval, as the second target video;
and determining the videos, of the N videos, of which the acquisition time is after the acquisition time of the first target video and the time interval with the acquisition time of the first target video is less than a second time interval as the third target video.
3. The method according to claim 2, wherein the determining, as the second target video, a video of the N videos whose capturing time is before the capturing time of the first target video and whose time interval with the capturing time of the first target video is smaller than a first time interval comprises:
determining, among the N videos, the video whose acquisition time is before the acquisition time of the first target video and whose time interval from the acquisition time of the first target video is the smallest as the second target video;
the determining, as the third target video, a video of the N videos whose acquisition time is after the acquisition time of the first target video and whose time interval with the acquisition time of the first target video is less than a second time interval includes:
and determining, among the N videos, the video whose acquisition time is after the acquisition time of the first target video and whose time interval from the acquisition time of the first target video is the smallest as the third target video.
4. The method according to any one of claims 1 to 3, wherein after the splicing processing is performed on the plurality of target videos to obtain the emergency state video, the method further comprises:
after receiving a playing instruction for the emergency video, playing the emergency video;
in the process of playing the emergency state video, displaying a playing progress bar of the emergency state video, wherein the playing progress bar comprises a target mark, and the target mark is used for indicating the position of a video frame acquired by the image acquisition device in the emergency state video when a vehicle enters the emergency state;
and after receiving a trigger signal aiming at the target mark, playing a video frame acquired by the image acquisition device when the vehicle enters the emergency state.
5. The method according to any one of claims 1 to 3, wherein after the splicing processing is performed on the plurality of target videos to obtain the emergency state video, the method further comprises:
setting a name of the emergency state video based on the time information that the vehicle enters the emergency state.
6. The method of claim 5, further comprising:
acquiring the position information of the vehicle when the vehicle enters the emergency state through a positioning module;
the setting the name of the emergency state video based on the time information that the vehicle enters the emergency state includes:
and setting the name of the emergency state video based on the time information when the vehicle enters the emergency state and the position information when the vehicle enters the emergency state.
7. The method according to any one of claims 1 to 3, wherein after the splicing processing is performed on the plurality of target videos to obtain the emergency state video, the method further comprises:
acquiring a target video frame from the emergency state video, wherein the target video frame is used for recording environmental information when the vehicle enters the emergency state;
and setting the target video frame as a cover of the emergency state video.
8. A video processing apparatus, characterized in that the apparatus comprises:
a video capture module, used for continuously acquiring N videos through the image acquisition device, wherein N is a positive integer greater than 2;
the first storage module is used for storing the N videos in a first appointed folder, and the videos in the first appointed folder are deleted under the condition that the storage time length is greater than or equal to the preset time length;
the video acquisition module is used for acquiring a target video from a plurality of videos based on time information of the vehicle entering an emergency state after the vehicle is monitored to be in the emergency state;
the video processing module is used for splicing a plurality of target videos under the condition that the target videos are multiple to obtain an emergency state video, and the emergency state video is used for recording environmental information of the vehicle before and after entering the emergency state;
and the second storage module is used for storing the emergency state video in a second specified folder, wherein files in the second specified folder do not respond to a specified deletion instruction, and the specified deletion instruction is other deletion instructions except the deletion instruction triggered by the user.
9. A vehicle, characterized by comprising:
one or more processors;
a memory;
an image acquisition device;
one or more application programs, wherein one or more of the application programs are stored in the memory and configured to be executed by one or more of the processors, the one or more application programs configured to perform the video processing method of any of claims 1-7.
10. A computer-readable storage medium having computer program instructions stored therein, the computer program instructions being invokable by a processor to perform a video processing method according to any of claims 1 to 8.
CN202211395823.XA 2022-11-08 2022-11-08 Video processing method, device, vehicle and storage medium Active CN115720253B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211395823.XA CN115720253B (en) 2022-11-08 2022-11-08 Video processing method, device, vehicle and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211395823.XA CN115720253B (en) 2022-11-08 2022-11-08 Video processing method, device, vehicle and storage medium

Publications (2)

Publication Number Publication Date
CN115720253A true CN115720253A (en) 2023-02-28
CN115720253B CN115720253B (en) 2024-05-03

Family

ID=85255065

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211395823.XA Active CN115720253B (en) 2022-11-08 2022-11-08 Video processing method, device, vehicle and storage medium

Country Status (1)

Country Link
CN (1) CN115720253B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116028435A (en) * 2023-03-30 2023-04-28 深圳市深航华创汽车科技有限公司 Data processing method, device and equipment of automobile data recorder and storage medium
CN116740837A (en) * 2023-06-25 2023-09-12 广东省安全生产技术中心有限公司 Black box for whole process tracing of limited space operation
CN116798144A (en) * 2023-04-18 2023-09-22 润芯微科技(江苏)有限公司 Collision video storage method, system, device and computer readable storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012003607A (en) * 2010-06-18 2012-01-05 Yazaki Corp Drive recorder for vehicle and recorded information management method
CN102722574A (en) * 2012-06-05 2012-10-10 深圳市中兴移动通信有限公司 Device and method for naming photo/video file on basis of shooting position and time
CN106027934A (en) * 2016-07-13 2016-10-12 深圳市爱培科技术股份有限公司 Vehicle driving video storing method and system based on rearview mirror
CN107564130A (en) * 2016-07-02 2018-01-09 上海卓易科技股份有限公司 Driving recording method and drive recorder, mobile terminal
CN110381357A (en) * 2019-08-15 2019-10-25 杭州鸿晶自动化科技有限公司 A kind of processing method of driving recording video
CN110570542A (en) * 2019-08-08 2019-12-13 北京汽车股份有限公司 Video recording method, device, vehicle and machine readable storage medium
CN114640823A (en) * 2022-02-22 2022-06-17 东风汽车集团股份有限公司 Emergency video recording method based on cockpit domain controller
CN114783180A (en) * 2022-04-07 2022-07-22 合众新能源汽车有限公司 Vehicle collision accident recording method and system and vehicle

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012003607A (en) * 2010-06-18 2012-01-05 Yazaki Corp Drive recorder for vehicle and recorded information management method
CN102722574A (en) * 2012-06-05 2012-10-10 深圳市中兴移动通信有限公司 Device and method for naming photo/video file on basis of shooting position and time
CN107564130A (en) * 2016-07-02 2018-01-09 上海卓易科技股份有限公司 Driving recording method and drive recorder, mobile terminal
CN106027934A (en) * 2016-07-13 2016-10-12 深圳市爱培科技术股份有限公司 Vehicle driving video storing method and system based on rearview mirror
CN110570542A (en) * 2019-08-08 2019-12-13 北京汽车股份有限公司 Video recording method, device, vehicle and machine readable storage medium
CN110381357A (en) * 2019-08-15 2019-10-25 杭州鸿晶自动化科技有限公司 A kind of processing method of driving recording video
CN114640823A (en) * 2022-02-22 2022-06-17 东风汽车集团股份有限公司 Emergency video recording method based on cockpit domain controller
CN114783180A (en) * 2022-04-07 2022-07-22 合众新能源汽车有限公司 Vehicle collision accident recording method and system and vehicle

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116028435A (en) * 2023-03-30 2023-04-28 深圳市深航华创汽车科技有限公司 Data processing method, device and equipment of automobile data recorder and storage medium
CN116028435B (en) * 2023-03-30 2023-07-21 深圳市深航华创汽车科技有限公司 Data processing method, device and equipment of automobile data recorder and storage medium
CN116798144A (en) * 2023-04-18 2023-09-22 润芯微科技(江苏)有限公司 Collision video storage method, system, device and computer readable storage medium
CN116740837A (en) * 2023-06-25 2023-09-12 广东省安全生产技术中心有限公司 Black box for whole process tracing of limited space operation

Also Published As

Publication number Publication date
CN115720253B (en) 2024-05-03

Similar Documents

Publication Publication Date Title
CN115720253B (en) Video processing method, device, vehicle and storage medium
US10068390B2 (en) Method for obtaining product feedback from drivers in a non-distracting manner
US11361555B2 (en) Road environment monitoring device, road environment monitoring system, and road environment monitoring program
CN111913769A (en) Application display method, device and equipment
CN111445599B (en) Automatic short video generation method and device for automobile data recorder
CN113071511A (en) Method and device for displaying reverse image, electronic equipment and storage medium
JP2007141212A (en) Driving assisting method and driving assisting device
CN114489509A (en) Video storage method and device of automobile data recorder, electronic equipment and storage medium
US20210354713A1 (en) Agent control device, agent control method, and storage medium storing agent control program
JP5085693B2 (en) Driving support device and driving support method
CN113791840A (en) Management system, management method, management device, management equipment and storage medium
CN116028435B (en) Data processing method, device and equipment of automobile data recorder and storage medium
CN114636568B (en) Test method and device for automatic emergency braking system, vehicle and storage medium
CN109543639B (en) Information display method, system, server and storage medium
KR101526673B1 (en) Method for search image data of black box
CN115690944B (en) Vehicle information acquisition method and device, vehicle and storage medium
CN118130110A (en) Vehicle testing method, device, electronic equipment and computer readable storage medium
US20240134785A1 (en) Apparatus for testing multimedia device and method thereof
CN114743290A (en) Driving record control method and device and automobile
US20240069899A1 (en) Server, non-transitory storage medium, and software update method
JP5008100B2 (en) Driving support device, driving support system, driving support software, and driving support method
CN115147810A (en) Vehicle attribute tracking method, system, equipment and medium in panoramic environment
CN115147952A (en) Recording method and device for vehicle emergency state and electronic equipment
CN115883782A (en) Sensor starting method and device, electronic equipment, storage medium and vehicle
CN115601381A (en) Vehicle door clamped object detection method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant