CN115713816A - Driving record management method, device, equipment and medium - Google Patents

Driving record management method, device, equipment and medium

Info

Publication number
CN115713816A
CN115713816A (application number CN202211438175.1A)
Authority
CN
China
Prior art keywords
video
driving
moment
vehicle
file
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211438175.1A
Other languages
Chinese (zh)
Inventor
谈荣民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hozon New Energy Automobile Co Ltd
Original Assignee
Hozon New Energy Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hozon New Energy Automobile Co Ltd filed Critical Hozon New Energy Automobile Co Ltd
Priority to CN202211438175.1A priority Critical patent/CN115713816A/en
Publication of CN115713816A publication Critical patent/CN115713816A/en
Pending legal-status Critical Current

Landscapes

  • Traffic Control Systems (AREA)

Abstract

While the driving recorder is working, the method judges whether a video generation condition is met according to the road segment the vehicle is on and/or the congestion rate of that road segment. Once the video generation condition is met, the video collected by the driving recorder from a first moment to the current moment is packaged into a driving video file. In the embodiments of the application, if the video generation condition is met for the first time, the first moment is the moment the driving recorder started working; otherwise, the first moment is the video end moment of the previous driving video file. In this way, video files are generated according to the road segment the vehicle travels on and its congestion state, which better matches how driving videos are queried and makes the files easier for relevant personnel to view.

Description

Driving record management method, device, equipment and medium
Technical Field
The present application relates to the field of information management, and in particular, to a method, an apparatus, a device, and a medium for managing driving records.
Background
A driving recorder (dash cam) is a digital electronic recording device that records and stores the vehicle's speed, time and other running-state information, and can transfer data through an interface. The driving video it records is widely used to adjudicate traffic accidents and protect drivers' rights and interests.
Currently, a common driving recorder packages everything recorded during one power-on period into a single driving video after each start. This usually produces a very large video file, while the portion that actually needs to be viewed is only a small part of it, which makes subsequent video queries extremely inconvenient.
Some driving recorders do allow the user to set a video generation interval, for example packaging the footage collected every ten minutes into one driving video. However, if the specific time of the driving record to be queried is not known, the generated videos have to be checked one by one, so the query capability is still very limited.
Disclosure of Invention
The embodiments of the application provide a driving record management method, device, equipment and medium, which generate corresponding video files according to the road segments the vehicle travels on and the congestion state of those segments, so that relevant personnel can view them conveniently.
In order to achieve the above purpose, the technical solution of the embodiment of the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a driving record management method, where the method includes:
enabling the automobile data recorder in response to the automobile data recording indication;
judging whether video generation conditions are met or not based on the road section where the vehicle is located and/or the congestion rate of the road section where the vehicle is located in the working process of the automobile data recorder;
after the video generation condition is determined to be met, packaging videos collected by the automobile data recorder from the first moment to the current moment into an automobile video file; if the video generation condition is met for the first time, the first moment is the moment when the automobile data recorder starts to work; if the video generation condition is not met for the first time, the first moment is the video ending moment of the previous driving video file;
and responding to the video uploading instruction, and uploading the generated driving video file to the cloud.
In some possible embodiments, the video generation condition comprises any one or a combination of the following conditions:
determining the change of a road section where a vehicle is located under a first condition;
and secondly, determining that the congestion rate of the road section where the vehicle is located is greater than a congestion threshold value, and the interval between the current moment and the video ending moment of the previous vehicle video file reaches a specified duration.
In some possible embodiments, the method further comprises:
after videos collected by the automobile data recorder from the first moment to the current moment are packaged into an automobile driving video file, adding identification information to the automobile driving video file;
the identification information is used for a storage file index of the driving video file at a cloud end, and comprises a road section identification and a time identification; the road section identification is used for indicating a road section where a vehicle is located in the driving video file, and the time identification is used for indicating video starting time and video ending time.
In some possible embodiments, the uploading the generated driving video file to the cloud end in response to the video upload instruction includes:
receiving video information of a video to be uploaded; the video information is used for indicating a road section where a vehicle is located and/or any time node, and the video information is a search item for searching driving video files through a cloud end;
and selecting target identifications meeting all indications in the video information from the identification information of the video files of the vehicles, and uploading the video files of the vehicles carrying the target identifications to a cloud.
In some possible embodiments, the method further comprises:
receiving warning information of a warning event in response to a vehicle warning indication; the alarm information comprises the event type of the alarm event and the event occurrence time;
and determining a video autonomous uploading strategy corresponding to the event type, selecting an alarm file from the driving video files according to the event occurrence moment by executing the video autonomous uploading strategy, and uploading the alarm file to a cloud.
In some possible embodiments, the event types include a first type characterizing a vehicle-triggered traffic accident, and/or a second type characterizing an abnormality of a vehicle component during travel;
the selecting an alarm file from the driving video files according to the event occurrence time by executing the video autonomous uploading strategy comprises the following steps:
if the event type is the first type, the alarm file is a driving video file containing the event occurrence moment in video time;
and if the event type is the second type, the alarm files are all driving video files encapsulated between the event occurrence moment and the event end moment.
In a second aspect, an embodiment of the present application provides a driving record management apparatus, where the apparatus includes:
the driving recording module is configured to execute enabling of a driving recorder in response to the driving recording indication;
the condition judging module is configured to judge whether video generation conditions are met or not based on the road section where the vehicle is located and/or the congestion rate of the road section where the vehicle is located in the working process of the driving recorder;
the video generation module is configured to package videos collected by the automobile data recorder from a first moment to a current moment into an automobile video file after the video generation condition is determined to be met; if the video generation condition is met for the first time, the first moment is the moment when the automobile data recorder starts to work; if the video generation condition is not met for the first time, the first moment is the video ending moment of the previous driving video file;
and the video uploading module is configured to respond to the video uploading instruction and upload the generated driving video file to the cloud.
In some possible embodiments, the video generation conditions include any one or a combination of the following conditions:
determining the change of a road section where a vehicle is located under a first condition;
and secondly, determining that the congestion rate of the road section where the vehicle is located is greater than a congestion threshold value, and the interval between the current moment and the video ending moment of the previous vehicle video file reaches a specified duration.
In some possible embodiments, the video generation module is further configured to:
after videos collected by the automobile data recorder from a first moment to a current moment are packaged into an automobile driving video file, adding identification information to the automobile driving video file;
the identification information is used for a storage file index of the driving video file at a cloud end, and comprises a road section identification and a time identification; the road section identification is used for indicating a road section where a vehicle is located in the driving video file, and the time identification is used for indicating the video starting time and the video ending time.
In some possible embodiments, the uploading the generated driving video file to the cloud in response to the video upload instruction is performed, and the video upload module is configured to:
receiving video information of a video to be uploaded; the video information is used for indicating a road section where a vehicle is located and/or any time node, and the video information is a search item for searching driving video files through a cloud end;
and selecting target identifications meeting all indications in the video information from the identification information of the video files of the vehicles, and uploading the video files of the vehicles carrying the target identifications to a cloud.
In some possible embodiments, the apparatus further comprises:
an alert module configured to perform receiving alert information for an alert event in response to a vehicle alert indication; the alarm information comprises the event type of the alarm event and the event occurrence time;
and determining a video autonomous uploading strategy corresponding to the event type, selecting an alarm file from the driving video files according to the event occurrence moment by executing the video autonomous uploading strategy, and uploading the alarm file to a cloud.
In some possible embodiments, the event types include a first type characterizing a vehicle-triggered traffic accident, and/or a second type characterizing an abnormality of a vehicle component during travel;
executing the video autonomous uploading strategy to select an alarm file from the driving video files according to the event occurrence time, wherein the alarm module is configured to:
if the event type is the first type, the alarm file is a driving video file containing the event occurrence moment in video time;
and if the event type is the second type, the alarm files are all driving video files encapsulated between the event occurrence moment and the event end moment.
In a third aspect, an embodiment of the present application further provides an electronic device, including a memory and a processor, where the memory stores a computer program executable on the processor, and when the computer program is executed by the processor, the processor is caused to implement any one of the methods in the first aspect.
In a fourth aspect, the present application further provides a computer-readable storage medium, in which a computer program is stored, and the computer program, when executed by a processor, implements any one of the methods of the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer program product, which includes computer instructions stored in a computer-readable storage medium; when a processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, the computer device performs the method of any one of the first aspects described above.
According to the embodiments of the application, while the driving recorder is working, whether a video generation condition is met is judged according to the road segment the vehicle is on and/or the congestion rate of that segment. Once the video generation condition is met, the video collected by the driving recorder from a first moment to the current moment is packaged into a driving video file. If the video generation condition is met for the first time, the first moment is the moment the driving recorder started working; otherwise, it is the video end moment of the previous driving video file. In this way, video files are generated according to the road segment the vehicle travels on and its congestion state, which better matches how driving videos are queried and makes the files easier for relevant personnel to view.
Additional features and advantages of the present application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the present disclosure. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
Fig. 1 is a schematic view of a video generation method of a current automobile data recorder according to an embodiment of the present application;
fig. 2 is an overall flowchart of a driving record management method according to an embodiment of the present application;
fig. 3 is a schematic diagram of generating a driving video file according to an embodiment of the present application;
fig. 4 is a schematic diagram of an identifier adding process provided in an embodiment of the present application;
fig. 5 is a schematic view of congestion identification provided in an embodiment of the present application;
fig. 6 is a schematic view of a video uploading process provided in an embodiment of the present application;
fig. 7 is a structural diagram of a driving record management device 700 according to an embodiment of the present application;
fig. 8 is a structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions in the embodiments of the present application will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application. In the present application, the embodiments and features of the embodiments may be arbitrarily combined with each other without conflict. Also, while a logical order is shown in the flow diagrams, in some cases, the steps shown or described may be performed in an order different than here.
The terms "first" and "second" in the description and claims of the present application and the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the term "comprises" and any variations thereof are intended to cover non-exclusive protection. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements but may alternatively include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. The "plurality" in the present application may mean at least two, for example, two, three or more, and the embodiments of the present application are not limited.
Driving recorders mainly store video in two ways: local storage and cloud storage. With local storage, physical media such as a hard disk or USB drive must be removed from the in-vehicle unit before the video can be viewed, which is rather inconvenient. With cloud storage, the video files generated by the driving recorder are transmitted to the cloud, and the user then retrieves the desired driving video through the file index at the cloud. Although cloud storage is more convenient, most current driving recorders upload the complete video captured during each power-on period to the cloud, which not only consumes a large amount of mobile data on the in-vehicle unit but also occupies a large amount of cloud storage space.
In practice, some driving recorders allow the user to set a video generation interval. Specifically, as shown in fig. 1, the user may set a specific interval in the driving recorder, for example packaging the driving record collected every ten minutes into one driving video. The user can selectively upload some of these videos to the cloud, where the storage index of each file is the acquisition time of the corresponding driving record. However, when relevant personnel look for a video and the specific time of the desired driving record is not known, the videos can only be screened one by one.
Therefore, the way current driving recorders generate video imposes heavy query limitations and is not user-friendly for viewing. To solve these problems, the inventive concept of the embodiments of the application is as follows: while the driving recorder is working, judge whether a video generation condition is met based on the road segment the vehicle is on and/or the congestion rate of that segment; once the video generation condition is met, package the video collected by the driving recorder from a first moment to the current moment into a driving video file. If the video generation condition is met for the first time, the first moment is the moment the driving recorder started working; otherwise, it is the video end moment of the previous driving video file. In this way, video files are generated according to the road segment the vehicle travels on and its congestion state, which better matches how driving videos are queried and makes them easier for relevant personnel to view.
Next, as shown in fig. 2, fig. 2 shows an overall flowchart of a driving record management method provided in an embodiment of the present application, where the method includes:
step 201: enabling the automobile data recorder in response to the automobile data recording indication;
step 202: judging whether video generation conditions are met or not based on the road section where the vehicle is located and/or the congestion rate of the road section where the vehicle is located in the working process of the automobile data recorder;
according to the embodiment of the application, the specific position and the road section information of a vehicle in a traffic network are obtained in real time by calling the high-precision navigation map during the starting period of the driving recorder, and the road section information specifically comprises the road section where the vehicle is located and the congestion rate of the road section.
In implementation, the video generation condition can be set for the automobile data recorder in advance. The video generation condition is the minimum unit for video packaging of the driving record. In consideration of the fact that video query in practical application mostly needs video backtracking of certain events occurring during the driving process of a vehicle, the location of the event occurrence is usually clear by a driver. Based on this, the video generation conditions in the embodiment of the present application include any one or a combination of the following conditions:
Condition one: it is determined that the road segment the vehicle is on has changed.
That is, video is generated per road segment: when the road segment the vehicle is on changes, the driving record collected so far but not yet packaged into a video is packaged into a driving video file. Each driving video file obtained this way records all of the vehicle's driving on one road segment.
Condition two: it is determined that the congestion rate of the road segment the vehicle is on exceeds a congestion threshold, and the interval between the current moment and the video end moment of the previous driving video file has reached a specified duration.
When a road segment is congested, the vehicle moves slowly and may spend a long time on that segment. If video were still generated per road segment, the driving video file would become large and hard for the user to view. Therefore, in the embodiments of the application, if the congestion rate of the road segment the vehicle is on exceeds the congestion threshold, video is generated per specified duration instead: since a heavily congested segment implies slow driving, the driving record collected but not yet packaged is packaged into a driving video file every specified duration, so that no single file grows too large.
To suit more scenarios, conditions one and two can be offered for the user to choose from, and a combined mode of the two is also supported: video is generated per condition one when the congestion rate of the road segment the vehicle is on does not exceed the congestion threshold, and per condition two when it does. A minimal sketch of this combined check is given below.
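The following sketch expresses the combined behaviour of the two conditions. It is illustrative only: the congestion threshold, the specified duration and the RoadInfo structure are placeholder assumptions, not values fixed by the application.

```python
from dataclasses import dataclass

@dataclass
class RoadInfo:
    segment_id: str         # road segment the vehicle is currently on
    congestion_rate: float  # 0.0 (free flow) .. 1.0 (fully congested)

def should_generate_video(current: RoadInfo,
                          previous_segment_id: str,
                          now: float,
                          last_video_end: float,
                          congestion_threshold: float = 0.7,
                          specified_duration_s: float = 600.0) -> bool:
    """Return True when a new driving video file should be packaged.

    Condition one: the road segment the vehicle is on has changed.
    Condition two: the segment is congested and the interval since the
    video end moment of the previous file has reached the duration.
    """
    if current.congestion_rate <= congestion_threshold:
        # Not congested: package per road segment (condition one).
        return current.segment_id != previous_segment_id
    # Congested: package per specified duration (condition two).
    return (now - last_video_end) >= specified_duration_s
```

In the combined mode such a check would be evaluated periodically while the recorder is running; in a pure condition-one or condition-two mode only the corresponding branch would be kept.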
Step 203: after the video generation condition is determined to be met, packaging videos collected by the automobile data recorder from the first moment to the current moment into an automobile video file; if the video generation condition is met for the first time, the first moment is the moment when the automobile data recorder starts to work; if the video generation condition is not met for the first time, the first moment is the video ending moment of the last driving video file;
to facilitate understanding of the application of the video generation conditions, the following description will be given by taking the form of a combination of the video generation conditions, as shown in fig. 3. Assuming that the vehicle starts a driving recorder on the way of the road segment A (point a is shown in the figure) and the congestion rate of the road segment A is not larger than the congestion threshold value, when the vehicle drives to the road segment B (point B is shown in the figure), the representation meets the first video generation condition due to the change of the lane where the vehicle is located. And packaging the videos collected by the automobile data recorder from the first moment to the current moment into an automobile video file 1. Since the automobile data recorder firstly meets the video generation condition after being started, the first time here is the time when the automobile data recorder starts to work, that is, the time when the vehicle is at the point a shown in fig. 3. And the current time is the time when the vehicle reaches the point b, and the driving video file 1 obtained by the method records the driving record of the vehicle during the whole driving period from the point a to the point b.
As shown in fig. 3, it is assumed that the congestion rate of the road B is not greater than the congestion threshold when the vehicle enters the road B from the road a. However, when the vehicle travels to the road B (point c is shown in the figure), the congestion rate of the road B suddenly increases to exceed the congestion threshold, and at this time, it is determined whether the interval between the current time and the video end time of the previous driving video file (i.e., the driving video file 1) reaches a specified time length. And if the specified duration is reached, the representation meets a second video generation condition. And packaging the videos collected by the automobile data recorder from the first moment to the current moment into an automobile video file 2. At this time, the video generation condition is not satisfied for the first time after the automobile data recorder is started, so the first time here is the video end time of the previous driving video file 1. The driving video file 2 generated at this time records the driving record during the vehicle driving from the point a to the point c.
Correspondingly, if the specified time length is not reached, the driving video file 2 that the vehicle drives from the point a to the point d is generated when the vehicle drives to the point d reaching the specified time length. Suppose that the congestion rate of the road B is reduced below the congestion threshold again after the vehicle moves to the point d. At this time, when the vehicle enters the C road segment (point e is shown in the figure), the video generation condition one is triggered again, and the driving record of the vehicle driving from the point d to the point e is packaged as the driving video file 3. Through the process, a user can set video generation conditions according to the requirement, specifically, video generation can be carried out by taking the road section as a unit, and video generation can also be carried out by taking the specified duration as a unit when the road section congestion rate is higher, so that the problem that the video content is too long to inquire the required video content from the video content is solved.
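The "first moment" described in steps 201-203 amounts to a single state variable that is reset each time a file is packaged. The sketch below illustrates that idea; the recorder interface (read_range) and the in-memory file representation are hypothetical stand-ins, not part of the application.

```python
class DrivingVideoSegmenter:
    def __init__(self, recorder_start_time: float):
        # Before any file exists, the first moment is the moment the
        # driving recorder started working (point a in fig. 3).
        self.first_moment = recorder_start_time
        self.files = []

    def on_condition_met(self, current_time: float, raw_video_source):
        """Package the video collected from the first moment to now."""
        video_file = {
            "start": self.first_moment,
            "end": current_time,
            "frames": raw_video_source.read_range(self.first_moment, current_time),
        }
        self.files.append(video_file)
        # After packaging, the first moment becomes the video end moment of
        # the file just generated (point b after file 1, point c or d after
        # file 2, and so on).
        self.first_moment = current_time
        return video_file
```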
Step 204: and responding to the video uploading instruction, and uploading the generated driving video file to the cloud.
At present, cloud storage of driving records is mainly triggered by the user issuing an upload instruction: for example, the user selects some of the video files cached locally in the driving recorder and uploads them to the cloud, or the user configures the recorder to upload all cached driving video files to the cloud.
This approach has the following problem: if uploading relies entirely on the user triggering it, the video file the user later wants to view may simply not exist in the cloud, and the user then has to go back to the vehicle and pick the desired file out of the driving recorder's local cache before uploading it. If, instead, all driving video files are uploaded, a large amount of data traffic is consumed and a large amount of cloud storage space is occupied.
To solve this problem, in the embodiments of the application, after the video collected by the driving recorder from the first moment to the current moment is packaged into a driving video file as described above, identification information is added to the driving video file. In a specific implementation, the identification information can serve as the file name of the driving video file and also as the storage index of that file in the cloud, so that after upload the user can find the corresponding driving video file with a fuzzy search on the video names.
The identification information in the embodiments of the application includes a road segment identifier and a time identifier: the road segment identifier indicates the road segment the vehicle is on in the driving video file, and the time identifier indicates the video start moment and the video end moment. The identifier-adding process is illustrated with the fig. 3 example, as shown in fig. 4. Suppose the vehicle reaches point a at 10 o'clock on a given date; driving video files 1-3 in fig. 3 are each generated when a video generation condition is met. Taking driving video file 1 as an example, its identification information is {segment A, date, video start moment-video end moment}. After driving video file 1 has been uploaded to the cloud, the user can retrieve it by searching for the keyword "segment A", or by searching the date together with any time point within the file's start-end range.
As shown in fig. 5, to fit more application scenarios, the embodiments of the application additionally add a label representing the congestion state to a driving video file whenever the congestion rate of its road segment exceeds the congestion threshold. In the fig. 3 example, driving video file 2 was generated when video generation condition two was met, i.e. the congestion rate of its road segment exceeded the congestion threshold, so a congestion flag can be appended to its identification information, giving {segment B, date, video start moment-video end moment, congested}. One possible way to build such an identifier is sketched below.
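One way to realise the identification information is to fold the segment identifier, the date, the start/end times and the optional congestion flag into the file name, which then doubles as the cloud index. The exact formatting below is an assumption for illustration; only the fields themselves come from the application, and the example values are arbitrary.

```python
from datetime import datetime

def build_identification(segment_id: str, video_start: datetime,
                         video_end: datetime, congested: bool) -> str:
    """Fold segment id, date, start/end times and an optional congestion
    flag into one string used as both file name and cloud storage index."""
    name = f"{segment_id}_{video_start:%Y-%m-%d}_{video_start:%H%M}-{video_end:%H%M}"
    if congested:
        name += "_congested"
    return name

# Example (arbitrary values): the resulting name can later be found by a
# fuzzy search on the segment id, the date, or any time point in the range.
print(build_identification("segmentA",
                           datetime(2022, 11, 16, 10, 0),
                           datetime(2022, 11, 16, 10, 25),
                           congested=False))
# -> segmentA_2022-11-16_1000-1025
```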
Through this process the user can upload driving video files to the cloud selectively, and when a target video file the user wants to view is not in the cloud, the cloud can issue a video upload instruction to the driving recorder, triggering it to upload the locally cached target video file to the cloud automatically for the user to view.
This is again illustrated with the fig. 3 example, as shown in fig. 6. Assume the user never uploaded driving video files 1-3 to the cloud. In one application scenario the user wants to retrieve driving video file 1 for segment A from the cloud through a client such as a computer or smartphone, and searches the cloud-stored driving video files by entering "segment A" plus the date. Because driving video file 1 was never uploaded, the retrieval fails, and the cloud then issues to the driving recorder a video upload instruction carrying the video information of the video to be uploaded. The video information indicates the road segment the vehicle was on and/or any time node; it is exactly the search terms used for the cloud search, i.e. the "segment A, date" entered by the user in fig. 6.
After receiving the video information, the driving recorder selects from the identification information of its locally cached driving video files (files 1-3) the target identifier that satisfies every indication in the video information, and uploads the driving video file carrying that target identifier (driving video file 1) to the cloud. A sketch of this matching flow follows.
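The retrieval fallback of fig. 6 can be pictured as two small routines, one on the cloud side and one on the recorder side, that share the same matching rule over the identification information. All names and message shapes below are illustrative assumptions; the application only specifies the matching rule, not a concrete protocol.

```python
def matches(identification, video_info):
    """An identification satisfies the video info if it agrees with every
    indication present: road segment and/or a time node inside its range."""
    ok_segment = ("segment" not in video_info
                  or identification["segment"] == video_info["segment"])
    ok_time = ("time_node" not in video_info
               or identification["start"] <= video_info["time_node"] <= identification["end"])
    return ok_segment and ok_time

def cloud_search(cloud_index, search_terms, send_upload_instruction):
    """Cloud side: return a stored file on a hit; otherwise forward the
    search terms to the driving recorder as a video upload instruction."""
    for identification, file_ref in cloud_index:
        if matches(identification, search_terms):
            return file_ref
    send_upload_instruction({"video_info": search_terms})
    return None

def handle_upload_instruction(local_files, video_info, upload_to_cloud):
    """Recorder side: upload every cached file whose identification
    satisfies all indications in the received video information."""
    for identification, path in local_files:
        if matches(identification, video_info):
            upload_to_cloud(identification, path)
```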
In addition, the embodiments of the application configure a corresponding autonomous video upload policy for vehicle alarm indications: if a vehicle alarm indication is received while the driving recorder is running, the policy is executed to select an alarm file from the driving video files according to the event occurrence moment and upload it to the cloud.
In implementation, after a vehicle alarm indication is received, the alarm information of the alarm event is determined. The alarm information includes the event type of the alarm event and the event occurrence moment. The event types in the embodiments of the application include a first type characterizing a vehicle-involved traffic accident and/or a second type characterizing an abnormality of a vehicle component while driving. For example, alarm events triggered by vehicle sensors such as a scratch, a body collision or airbag deployment belong to the first type, while alarm events such as excessive body pressure or insufficient tire pressure belong to the second type.
In implementation, if the event type of the alarm event is the first type, the alarm file is the driving video file whose video time range contains the event occurrence moment. For example, if a vehicle collision alarm is received at 10 o'clock, the driving video file whose time range includes 10 o'clock is automatically uploaded to the cloud as the alarm file; an alarm identifier describing the alarm event can additionally be added to that file's identification information.
If the event type is the second type, for example an alarm that the tire pressure is too low is received at 10 o'clock and the pressure returns to the safe range at 11 o'clock, then all driving video files packaged between the event occurrence moment (10 o'clock) and the event end moment (11 o'clock) are automatically uploaded to the cloud. Furthermore, if after receiving the low-tire-pressure alarm at 10 o'clock the driver pulls into a service centre at 10:20 and switches off the driving recorder, the driving record from 10 o'clock to 10:20 that has not yet been packaged can be cached in the recorder and uploaded to the cloud automatically the next time the recorder starts. This prevents the driving video of the alarm period from being cleaned up and lost because the driver forgot to upload it.
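The two selection rules for alarm files can be written as a single policy function. The event-type constants, file representation and upload callback below are assumed names used only for illustration; the application defines only the selection rules themselves.

```python
FIRST_TYPE = "traffic_accident"      # e.g. scratch, body collision, airbag deployment
SECOND_TYPE = "component_abnormal"   # e.g. tire pressure too low

def select_alarm_files(files, event_type, event_start, event_end=None):
    """files: list of {'start': ..., 'end': ...} driving video files."""
    if event_type == FIRST_TYPE:
        # The file whose video time range contains the occurrence moment.
        return [f for f in files if f["start"] <= event_start <= f["end"]]
    if event_type == SECOND_TYPE:
        # Every file packaged between the occurrence moment and the end
        # moment (open-ended while the abnormality persists).
        return [f for f in files
                if f["start"] >= event_start
                and (event_end is None or f["end"] <= event_end)]
    return []

def on_vehicle_alert(files, event_type, event_start, event_end, upload_to_cloud):
    for alarm_file in select_alarm_files(files, event_type, event_start, event_end):
        upload_to_cloud(alarm_file)
```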
Based on the same inventive concept, an embodiment of the present application provides a driving record management device 700, specifically as shown in fig. 7, including:
a driving recording module 701 configured to execute enabling a driving recorder in response to a driving recording instruction;
a condition determination module 702 configured to determine whether a video generation condition is satisfied based on a road segment where the vehicle is located and/or a congestion rate of the road segment where the vehicle is located during operation of the driving recorder;
the video generation module 703 is configured to perform, after it is determined that the video generation condition is met, packaging videos collected by the automobile data recorder from a first moment to a current moment into an automobile video file; if the video generation condition is met for the first time, the first moment is the moment when the automobile data recorder starts to work; if the video generation condition is not met for the first time, the first moment is the video ending moment of the previous driving video file;
and the video uploading module 704 is configured to upload the generated driving video file to the cloud end in response to the video uploading instruction.
In some possible embodiments, the video generation conditions include any one or a combination of the following conditions:
determining the change of a road section where a vehicle is located under a first condition;
and secondly, determining that the congestion rate of the road section where the vehicle is located is greater than a congestion threshold value, and the interval between the current moment and the video ending moment of the previous vehicle video file reaches a specified duration.
In some possible embodiments, the video generation module is further configured to:
after videos collected by the automobile data recorder from a first moment to a current moment are packaged into an automobile driving video file, adding identification information to the automobile driving video file;
the identification information is used for a storage file index of the driving video file at a cloud end, and comprises a road section identification and a time identification; the road section identification is used for indicating a road section where a vehicle is located in the driving video file, and the time identification is used for indicating video starting time and video ending time.
In some possible embodiments, the uploading the generated driving video file to the cloud in response to the video uploading instruction is performed, and the video uploading module is configured to:
receiving video information of a video to be uploaded; the video information is used for indicating a road section where a vehicle is located and/or any time node, and the video information is a search item for searching driving video files through a cloud end;
and selecting target identifications meeting all indications in the video information from the identification information of the video files of the vehicles, and uploading the video files of the vehicles carrying the target identifications to a cloud.
In some possible embodiments, the apparatus further comprises:
an alert module configured to perform receiving alert information for an alert event in response to a vehicle alert indication; the alarm information comprises the event type of the alarm event and the event occurrence time;
and determining a video autonomous uploading strategy corresponding to the event type, selecting an alarm file from the driving video files according to the event occurrence moment by executing the video autonomous uploading strategy, and uploading the alarm file to a cloud.
In some possible embodiments, the event types include a first type characterizing a vehicle-triggered traffic accident, and/or a second type characterizing an abnormality of a vehicle component during travel;
executing the video autonomous uploading strategy to select an alarm file from the driving video files according to the event occurrence time, wherein the alarm module is configured to:
if the event type is the first type, the alarm file is a driving video file containing the event occurrence moment in video time;
and if the event type is the second type, the alarm files are all driving video files encapsulated between the event occurrence moment and the event end moment.
An electronic device 130 according to this embodiment of the present application is described below with reference to fig. 8. The control device 130 shown in fig. 8 is only an example, and should not bring any limitation to the function and the range of use of the embodiment of the present application.
As shown in fig. 8, the control device 130 is in the form of a general control device. The components of the control device 130 may include, but are not limited to: the at least one processor 131, the at least one memory 132, and a bus 133 that couples various system components including the memory 132 and the processor 131.
Bus 133 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a processor, or a local bus using any of a variety of bus architectures.
The memory 132 may include readable media in the form of volatile memory, such as Random Access Memory (RAM) 1321 and/or cache memory 1322, and may further include Read Only Memory (ROM) 1323.
Memory 132 may also include a program/utility 1325 having a set (at least one) of program modules 1324, such program modules 1324 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which or some combination thereof may comprise an implementation of a network environment.
The control device 130 may also communicate with one or more external devices 134 (e.g., keyboard, pointing device, etc.), with one or more devices that enable a user to interact with the control device 130, and/or with any devices (e.g., router, modem, etc.) that enable the control device 130 to communicate with one or more other control devices. Such communication may occur via input/output (I/O) interfaces 135. Also, the control device 130 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet) via the network adapter 136. As shown, network adapter 136 communicates with other modules for controlling device 130 over bus 133. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the control device 130, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
In an exemplary embodiment, a computer-readable storage medium comprising instructions, such as the memory 132 comprising instructions, executable by the processor 131 of the apparatus to perform the method described above is also provided. Alternatively, the computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, there is also provided a computer program product comprising computer programs/instructions which, when executed by the processor 131, implement any one of the driving record management methods as provided herein.
In exemplary embodiments, various aspects of a driving record management method provided by the present application may also be implemented in the form of a program product, which includes program code for causing a computer device to perform the steps in a driving record management method according to various exemplary embodiments of the present application described above in this specification when the program product runs on the computer device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product for driving record management of the embodiments of the present application may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a control device. However, the program product of the present application is not so limited, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the PowerPC programming language or similar programming languages. The program code may execute entirely on the user control device, partly on the user device, as a stand-alone software package, partly on the user control device and partly on the remote control device, or entirely on the remote control device or server. In the case of a remote control device, the remote control device may be connected to the user control device over any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external control device (e.g., over the internet using an internet service provider).
It should be noted that although in the above detailed description several units or sub-units of the apparatus are mentioned, such a division is merely exemplary and not mandatory. Indeed, the features and functions of two or more units described above may be embodied in one unit, according to embodiments of the application. Conversely, the features and functions of one unit described above may be further divided into embodiments by a plurality of units.
Further, while the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable image scaling apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable image scaling apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable image scaling apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable image scaling device to cause a series of operational steps to be performed on the computer or other programmable device to produce a computer implemented process such that the instructions which execute on the computer or other programmable device provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A method for managing driving record, the method comprising:
enabling the automobile data recorder in response to the automobile data recording indication;
judging whether video generation conditions are met or not based on the road section where the vehicle is located and/or the congestion rate of the road section where the vehicle is located in the working process of the automobile data recorder;
after the video generation condition is determined to be met, packaging videos collected by the automobile data recorder from the first moment to the current moment into an automobile video file; if the video generation condition is met for the first time, the first moment is the moment when the automobile data recorder starts to work; if the video generation condition is not met for the first time, the first moment is the video ending moment of the previous driving video file;
and responding to the video uploading instruction, and uploading the generated driving video file to the cloud.
2. The method according to claim 1, wherein the video generation condition comprises any one or a combination of the following conditions:
determining the change of a road section where a vehicle is located under a first condition;
and secondly, determining that the congestion rate of the road section where the vehicle is located is greater than a congestion threshold value, and the interval between the current moment and the video ending moment of the previous vehicle video file reaches a specified duration.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
after videos collected by the automobile data recorder from a first moment to a current moment are packaged into an automobile driving video file, adding identification information to the automobile driving video file;
the identification information is used for a storage file index of the driving video file in a cloud end, and comprises road section identification and time identification; the road section identification is used for indicating a road section where a vehicle is located in the driving video file, and the time identification is used for indicating video starting time and video ending time.
4. The method according to claim 3, wherein the uploading the generated driving video file to a cloud end in response to a video uploading instruction comprises:
receiving video information of a video to be uploaded; the video information is used for indicating a road section where a vehicle is located and/or any time node, and the video information is a search item for searching driving video files through a cloud end;
and selecting target identifications meeting all indications in the video information from identification information of the video files of all the traveling vehicles, and uploading the traveling video files carrying the target identifications to a cloud.
5. The method of claim 3, further comprising:
receiving warning information of a warning event in response to a vehicle warning indication; the alarm information comprises the event type of the alarm event and the event occurrence time;
and determining a video autonomous uploading strategy corresponding to the event type, selecting an alarm file from the driving video files according to the event occurrence moment by executing the video autonomous uploading strategy, and uploading the alarm file to a cloud.
6. The method of claim 5, wherein the event types include a first type characterizing a vehicle-triggered traffic accident, and/or a second type characterizing a vehicle component generating an anomaly during travel;
the selecting an alarm file from the driving video files according to the event occurrence time by executing the video autonomous uploading strategy comprises the following steps:
if the event type is the first type, the alarm file is a driving video file containing the event occurrence moment in video time;
and if the event type is the second type, the alarm files are all driving video files encapsulated in the event occurrence time and the event ending time.
7. A driving record management apparatus, characterized in that the apparatus comprises:
the driving recording module is configured to execute enabling of a driving recorder in response to the driving recording indication;
the condition judging module is configured to judge whether video generation conditions are met or not based on the road section where the vehicle is located and/or the congestion rate of the road section where the vehicle is located in the working process of the driving recorder;
the video generation module is configured to package videos collected by the automobile data recorder from a first moment to a current moment into an automobile video file after the video generation condition is determined to be met; if the video generation condition is met for the first time, the first moment is the moment when the automobile data recorder starts to work; if the video generation condition is not met for the first time, the first moment is the video ending moment of the previous driving video file;
and the video uploading module is configured to respond to the video uploading instruction and upload the generated driving video file to the cloud.
8. An electronic device, comprising:
a memory for storing program instructions;
a processor for calling program instructions stored in said memory and for executing the steps comprised by the method of any one of claims 1 to 6 in accordance with the obtained program instructions.
9. A computer-readable storage medium, characterized in that it stores a computer program comprising program instructions which, when executed by a computer, cause the computer to carry out the method according to any one of claims 1-6.
10. A computer program product, the computer program product comprising: computer program code which, when run on a computer, causes the computer to perform the method according to any of the preceding claims 1-6.
CN202211438175.1A 2022-11-16 2022-11-16 Driving record management method, device, equipment and medium Pending CN115713816A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211438175.1A CN115713816A (en) 2022-11-16 2022-11-16 Driving record management method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211438175.1A CN115713816A (en) 2022-11-16 2022-11-16 Driving record management method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN115713816A true CN115713816A (en) 2023-02-24

Family

ID=85233577

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211438175.1A Pending CN115713816A (en) 2022-11-16 2022-11-16 Driving record management method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN115713816A (en)

Similar Documents

Publication Publication Date Title
US11170589B2 (en) Emergency event based vehicle data logging
CN105743986A (en) Internet of Vehicles based smart city dynamic video monitoring system
CN109804367A (en) Use the distributed video storage and search of edge calculations
US20200077292A1 (en) Data collection apparatus, data collection system, data collection method, and on-vehicle device
CN106056697B (en) A kind of event-monitoring methods, devices and systems
EP3618011B1 (en) Data collection apparatus, on-vehicle device, data collection system, and data collection method
US11546395B2 (en) Extrema-retentive data buffering and simplification
US8031084B2 (en) Method and system for infraction detection based on vehicle traffic flow data
CN104318327A (en) Predictive parsing method for track of vehicle
US11838364B2 (en) Extrema-retentive data buffering and simplification
US11202030B2 (en) System and method for providing complete event data from cross-referenced data memories
US20200137142A1 (en) Vehicle data offloading systems and methods
CN104284162A (en) Video retrieval method and system
CN103559274A (en) Vehicle condition information query method and device
CN109636946A (en) A kind of calculation method for log of driving a vehicle
JP7290501B2 (en) INFORMATION COLLECTION APPARATUS AND METHOD AND INFORMATION COLLECTION SYSTEM
CN112150807B (en) Vehicle early warning method and device, storage medium and electronic equipment
CN115713816A (en) Driving record management method, device, equipment and medium
WO2021116899A1 (en) Automotive data processing system with efficient generation and exporting of metadata
CN114858175B (en) Vehicle travel method, device, equipment and storage medium
CN104978323A (en) Method and system thereof for searching and sharing driving record film
US20230290194A1 (en) Technologies for determining driver efficiency
CN114064276A (en) Vehicle-using scene perception method and vehicle-mounted system
CN109272602B (en) Unmanned vehicle data recording method, device, equipment and storage medium
CN114359857A (en) Processing method, device and system for reported information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Address after: 314500 988 Tong Tong Road, Wu Tong Street, Tongxiang, Jiaxing, Zhejiang
Applicant after: United New Energy Automobile Co.,Ltd.
Address before: 314500 988 Tong Tong Road, Wu Tong Street, Tongxiang, Jiaxing, Zhejiang
Applicant before: Hozon New Energy Automobile Co., Ltd.