WO2021213181A1 - Video processing method and apparatus - Google Patents

Video processing method and apparatus

Info

Publication number
WO2021213181A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
stream
packet
video stream
vehicle
Prior art date
Application number
PCT/CN2021/085707
Other languages
English (en)
French (fr)
Inventor
侯朋飞
谭利文
张磊
Original Assignee
Huawei Technologies Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to EP21791660.0A (EP4131979A4)
Priority to JP2022564352A (JP2023522429A)
Publication of WO2021213181A1
Priority to US17/970,930 (US11856321B1)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/76 Television signal recording
    • H04N 5/91 Television signal processing therefor
    • H04N 5/92 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F 16/71 Indexing; Data structures therefor; Storage structures
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/76 Television signal recording
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B 27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B 27/34 Indicating arrangements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/236 Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/41 Structure of client; Structure of client peripherals
    • H04N 21/414 Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
    • H04N 21/41422 Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance located in transportation means, e.g. personal vehicle
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/433 Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N 21/4334 Recording operations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/85 Assembly of content; Generation of multimedia applications
    • H04N 21/854 Content authoring
    • H04N 21/85406 Content authoring involving a specific file format, e.g. MP4 format
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/76 Television signal recording
    • H04N 5/91 Television signal processing therefor
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 5/00 Registering or indicating the working of vehicles
    • G07C 5/08 Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time
    • G07C 5/0841 Registering performance data
    • G07C 5/085 Registering performance data using electronic data carriers
    • G07C 5/0866 Registering performance data using electronic data carriers the electronic data carrier being a digital video recorder in combination with video camera

Definitions

  • the embodiments of the present application relate to the field of information technology, and in particular, to a video processing method and apparatus.
  • a camera can be installed at the rear of the vehicle to assist in reversing, and a driving recorder can be installed on the vehicle to record the driving process.
  • a camera component such as a driving recorder installed on a vehicle usually completes the preview, recording, saving, and playback of the video independently.
  • In the video recording process, in order to provide video failure recovery capability, the camera component usually needs to write index data while writing video data.
  • There are currently two ways to implement index data. The first is to calculate the index space according to the resolution and frame rate, pre-write the index, and then continuously write the video data into the memory. Since the pre-written index usually deviates from the actual index size and the actual index position, index space is wasted.
  • the other is to continuously write the index to the memory during the recording of the video.
  • the continuous writing of the index during the recording of the video will take up more storage space, which results in a larger storage space occupied by the recorded video.
  • the embodiments of the present application provide a video processing method and device to solve the problem of large storage space occupied by recorded video in the prior art.
  • the embodiments of the present application provide a video processing method.
  • the method can be applied to a video processing device or a chip in a video processing device.
  • the video processing device can be, for example, a vehicle-mounted terminal.
  • the method will be described below by taking the application to a vehicle-mounted terminal as an example.
  • the vehicle-mounted terminal receives the video stream collected by the camera component, and converts the video stream into at least one first data packet, wherein the first data packet is independently encoded or decoded.
  • the vehicle-mounted terminal synthesizes the first data packet acquired in the first reporting period into a first video segment, and saves the first video segment.
  • the first reporting period is less than or equal to the maximum allowable lost video duration.
  • the maximum allowable lost video duration is the maximum duration of video captured by the camera component that the user can tolerate losing when a fault occurs.
  • Since the collected video stream is converted into at least one first data packet that can be independently encoded or decoded, when a failure occurs during the video recording process, the first video segment composed of the first data packets saved by the vehicle-mounted terminal can still be decoded on its own, so there is no need to write an index when recording the video, which saves the storage space occupied by the recorded video, so that the video can be saved for a longer time.
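As an illustrative sketch of this saving scheme (the class name `SegmentWriter` and its interface are invented here for illustration and are not the disclosed implementation), independently decodable packets are buffered and flushed as one video segment per reporting period:

```python
class SegmentWriter:
    """Buffers independently decodable packets and flushes them as one
    video segment per reporting period (illustrative sketch)."""

    def __init__(self, reporting_period_s, max_allowed_loss_s):
        # The reporting period must not exceed the maximum video
        # duration the user can tolerate losing on a fault.
        assert reporting_period_s <= max_allowed_loss_s
        self.reporting_period_s = reporting_period_s
        self.segments = []        # stands in for persistent storage
        self._buffer = []
        self._period_start = 0.0

    def on_packet(self, timestamp_s, packet):
        # Once a full reporting period has elapsed, flush the buffered
        # packets as one self-contained video segment.
        if timestamp_s - self._period_start >= self.reporting_period_s:
            self._flush()
            self._period_start = timestamp_s
        self._buffer.append(packet)

    def _flush(self):
        if self._buffer:
            # Packets are concatenated end to end in arrival order;
            # no index needs to be written alongside them.
            self.segments.append(b"".join(self._buffer))
            self._buffer = []
```

On a power failure, only the packets buffered since the last flush are lost, i.e. at most one reporting period of video.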
  • the first data packet is a transport stream TS packet.
  • the vehicle-mounted terminal may remove the protocol header from the video stream and obtain at least one TS packet from the video stream.
  • the vehicle-mounted terminal directly removes the protocol header to obtain at least one TS packet from the video stream, and synthesizes the at least one TS packet into the first video segment to save. Since the first video segment synthesized in the TS packet can be independently encoded or decoded, the storage space occupied by the recorded video is saved, so that a longer video can be stored in the storage space.
  • the vehicle-mounted terminal obtains at least one ES packet from the video stream, and then encapsulates the at least one ES packet into at least one TS packet.
  • the vehicle-mounted terminal obtains at least one ES packet from the video stream by removing the protocol header, encapsulates the ES packet into a TS packet, and finally synthesizes at least one TS packet into the first video segment and saves it.
  • the vehicle-mounted terminal obtains the bare bit stream corresponding to the video stream from the video stream, and encodes the bare bit stream to generate at least one ES packet. Finally, the vehicle-mounted terminal encapsulates the at least one ES packet into at least one TS packet.
  • the vehicle-mounted terminal obtains the bare bit stream corresponding to the video stream by removing the protocol header, encodes the bare bit stream to generate ES packets, then encapsulates the ES packets into TS packets, and finally synthesizes at least one TS packet into the first video segment and saves it. Since the first video segment synthesized from TS packets can be independently encoded or decoded, the storage space occupied by the recorded video is saved, so that a longer video can be stored in the storage space.
  • the protocol header is a real-time streaming protocol RTSP header.
  • the data amount of each first data packet is the same.
  • the vehicle-mounted terminal may also merge the at least one first video segment into at least one video file according to the standard data volume of the video file, and save the at least one video file according to the first video format.
  • With the video processing method provided by this implementation, the video is not saved as a large number of separate first video clips, which makes it more convenient for the user to view the video.
  • the video file can be stored according to the playback format of the video playback component, which is beneficial to quickly play on the video playback component.
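A minimal sketch of merging small segments into files of a standard data volume (the function name and the greedy grouping policy are assumptions for illustration, not the disclosed method):

```python
def merge_segments(segments, standard_size):
    """Merge small video segments into files of roughly a standard data
    volume, so users browse a few files instead of many tiny clips."""
    files, current, current_size = [], [], 0
    for seg in segments:
        current.append(seg)
        current_size += len(seg)
        # Close the current file once it reaches the standard volume.
        if current_size >= standard_size:
            files.append(b"".join(current))
            current, current_size = [], 0
    if current:  # trailing partial file keeps the leftover segments
        files.append(b"".join(current))
    return files
```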
  • the vehicle-mounted terminal can also obtain the speed of the vehicle where the camera component is located. If the speed of the vehicle decreases by more than a first threshold in the first time period, the first video clips in the second time period are saved to an independent storage area.
  • the starting time point of the first time period is the time point when the driving pedal of the vehicle is depressed, and the duration of the first time period is a first preset duration; the middle time point of the second time period is the time point when the driving pedal of the vehicle is depressed, and the duration of the second time period is a second preset duration.
  • the video can be recorded urgently when the vehicle brakes or crashes, and the vehicle-mounted terminal can quickly extract the key event video from the independent storage area.
  • the vehicle-mounted terminal can also locate the first video segment in the second time period according to the time stamp of the first video segment, where the time stamp of the first video segment is used to identify the recording start time of the first video segment.
  • an embodiment of the present application provides a video processing device.
  • the video processing device includes: a receiving module for receiving a video stream collected by a camera component; and a processing module for converting the video stream into at least one first data packet, where the first data packet is independently encoded or decoded, synthesizing the first data packets obtained in the first reporting period into a first video segment, and saving the first video segment.
  • the first reporting period is less than or equal to the maximum allowable lost video duration, and the maximum allowable lost video duration is the maximum duration of video captured by the camera component that the user can tolerate losing when a malfunction occurs.
  • the processing module is specifically configured to remove the protocol header in the video stream and obtain at least one TS packet from the video stream.
  • the processing module is specifically configured to remove the protocol header in the video stream, obtain at least one ES packet from the video stream, and encapsulate the at least one ES packet into at least one TS packet.
  • the processing module is specifically used to remove the protocol header in the video stream and obtain the bare bit stream corresponding to the video stream from the video stream; encode the bare bit stream to generate at least one ES packet; and encapsulate the at least one ES packet into at least one TS packet.
  • the protocol header is a real-time streaming protocol RTSP header.
  • the data amount of each first data packet is the same.
  • the processing module is further configured to merge the at least one first video segment into at least one video file according to the standard data volume of the video file; and save the at least one video file according to the first video format.
  • the receiving module is also used to obtain the speed of the vehicle where the camera component is located;
  • the processing module is also used to save the first video clip in the second time period to an independent storage area if the speed of the vehicle decreases in the first time period by more than the first threshold.
  • the starting time point of the first time period is the time point when the vehicle pedal is depressed.
  • the duration of the first time period is the first preset duration
  • the middle time point of the second time period is the time when the vehicle pedal is depressed.
  • the duration of the second time period is the second preset duration.
  • the processing module is further configured to locate the first video segment in the second time period according to the time stamp of the first video segment, where the time stamp of the first video segment is used to identify the recording start time of the first video segment.
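The windowing described above can be sketched as follows (a simplified illustration; the parameter names are invented, and clips are assumed to be keyed by their recording-start timestamps):

```python
def select_event_clips(clips, pedal_time, clip_len_s, second_period_s):
    """Pick the clips whose recording window overlaps the second time
    period, which is centered on the moment the pedal is depressed.

    `clips` maps recording-start timestamp -> clip payload.
    """
    # The pedal-press moment is the middle point of the second period.
    start = pedal_time - second_period_s / 2
    end = pedal_time + second_period_s / 2
    # A clip covering [t, t + clip_len_s) overlaps [start, end) if it
    # begins before the window ends and ends after the window begins.
    return {t: c for t, c in clips.items()
            if t < end and t + clip_len_s > start}
```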
  • an embodiment of the present application provides a vehicle-mounted terminal.
  • the vehicle-mounted terminal includes: a processor, a memory, a transmitter, and a receiver; the transmitter and the receiver are coupled to the processor, the processor controls the sending action of the transmitter, and the processor controls the receiving action of the receiver.
  • the memory is used to store computer executable program code, and the program code includes instructions; when the processor executes the instructions, the instructions cause the vehicle-mounted terminal to execute the video processing method provided by each possible implementation of the first aspect.
  • an embodiment of the present application provides a chip, including a processor, configured to call and run a computer program from a memory, so that the device with the chip installed executes the video processing method provided in the implementation manner of the first aspect.
  • an embodiment of the present application provides a computer-readable storage medium for storing a computer program, and the computer program enables a computer to execute the video processing method provided in the implementation manner of the first aspect.
  • an embodiment of the present application provides a computer program product, including computer program instructions, and the computer program instructions enable a computer to execute the video processing method provided in the implementations of the first aspect.
  • an embodiment of the present application provides a computer program that enables a computer to execute the video processing method provided in the implementation manner of the first aspect.
  • an embodiment of the present application provides a storage medium on which a computer program is stored; when the program is executed by a processor, the video processing method of the first aspect or of the various implementations of the first aspect is performed.
  • the video processing method and apparatus receive the video stream collected by the camera component and convert the video stream into at least one first data packet.
  • the first data packet can be independently encoded or decoded, and the first data packets acquired in the first reporting period are then saved as the first video segment.
  • when a failure occurs, the first video segment composed of the saved first data packets can still be decoded on its own, so there is no need to write an index when recording the video, which saves the storage space occupied by the recorded video, so that the video can be saved for a longer time.
  • FIG. 1 is a schematic diagram of a video processing method in the prior art.
  • FIG. 2 is a system architecture diagram of a video processing method provided by an embodiment of this application.
  • FIG. 3 is a schematic flowchart of a video processing method provided by an embodiment of the application.
  • FIG. 4 is a schematic diagram of synthesizing a first video segment provided by an embodiment of the application.
  • FIG. 5 is a schematic diagram of interaction of a video processing method provided by an embodiment of this application.
  • FIG. 6 is a schematic flowchart of another video processing method provided by an embodiment of the application.
  • FIG. 7 is a schematic diagram of interaction of another video processing method provided by an embodiment of the application.
  • FIG. 8 is a schematic flowchart of still another video processing method provided by an embodiment of this application.
  • FIG. 9 is a schematic flowchart of another video processing method provided by an embodiment of this application.
  • FIG. 10 is a schematic diagram of interaction of yet another video processing method provided by an embodiment of this application.
  • FIG. 11 is a schematic flowchart of another video processing method provided by an embodiment of this application.
  • FIG. 12 is a schematic flowchart of another video processing method provided by an embodiment of this application.
  • FIG. 13 is a schematic diagram of interaction of yet another video processing method provided by an embodiment of this application.
  • FIG. 14 is a schematic flowchart of another video processing method provided by an embodiment of this application.
  • FIG. 15 is a schematic flowchart of another video processing method provided by an embodiment of this application.
  • FIG. 16 is a schematic diagram of video segment synthesis according to an embodiment of the application.
  • FIG. 17 is a schematic structural diagram of a vehicle-mounted terminal provided by an embodiment of the application.
  • FIG. 18 is a schematic diagram of a first time period and a second time period provided by an embodiment of this application.
  • FIG. 19 is a schematic diagram of video storage in a vehicle emergency incident provided by an embodiment of this application.
  • FIG. 20 is a schematic structural diagram of a video processing device provided by an embodiment of this application.
  • FIG. 21 is a schematic structural diagram of a vehicle-mounted terminal provided by an embodiment of the application.
  • a camera can be installed at the rear of the vehicle to assist in reversing, and a driving recorder can be installed on the vehicle to record the driving process.
  • Fig. 1 is a schematic diagram of a video processing method in the prior art.
  • As shown in FIG. 1, after shooting a video stream, the camera of the driving recorder sends the video stream to the driving recorder, which completes the video preview and playback by itself and saves the video to the Secure Digital Memory Card (SD card) that comes with the driving recorder.
  • In the video recording process, in order to provide video failure recovery capability, the camera component usually needs to write index data while writing video data.
  • the index data contains information such as the frame rate and resolution of the video, and is used to decode the video data when the video is played.
  • the other is to continuously write the index to the memory during the recording of the video.
  • the continuous writing of the index during the recording of the video will take up more storage space, which results in a larger storage space occupied by the recorded video.
  • an embodiment of the present application provides a video processing method to reduce the storage space occupied by the recorded video.
  • the received video stream is saved in a directly decodable form, so there is no need to write an index when recording the video, which saves the storage space occupied by the recorded video, and thus can save the video for a longer period of time.
  • the video processing method provided by the embodiments of this application is not only suitable for processing video streams shot by the camera components of vehicles, such as driving recorders, but can also be applied to video streams shot by other non-vehicle video components, such as surveillance cameras, video cameras, and the like.
  • FIG. 2 is a system architecture diagram of a video processing method provided by an embodiment of the application. As shown in FIG. 2, it includes a camera component 11, a processor 12, a memory 13 and a video playback component 14.
  • the vehicle-mounted camera component 11 shoots a video
  • the processor 12 converts the video stream into at least one independently encoded or decoded first data packet.
  • the processor 12 synthesizes the multiple first data packets into the first video segment according to the reporting period and saves it in the memory 13.
  • the video playback component 14 may extract the first video segment from the memory 13 for playback.
  • the embodiment of the present application does not limit the type of the camera component 11, and for example, it may be a driving recorder, a video camera, and the like.
  • the embodiment of the present application also does not limit the type of the memory 13, which may be a hard disk, an SD card, etc., as an example.
  • the processor 12 may be a processor of a vehicle-mounted terminal.
  • the video playback component 14 may be a video playback module on a vehicle-mounted terminal, a mobile phone on a vehicle, or the like.
  • the execution body of the video processing method provided in the embodiments of the present application is a video processing device.
  • the video processing device can be implemented by any software and/or hardware, and can be part or all of a vehicle-mounted terminal, for example, the processor in a vehicle-mounted terminal.
  • FIG. 3 is a schematic flowchart of a video processing method provided by an embodiment of the application. This embodiment relates to a specific process of how a vehicle-mounted terminal saves a received video stream. As shown in Figure 3, the video processing method includes:
  • the vehicle-mounted terminal receives the video stream collected by the camera component.
  • the camera component collects the video, it can transmit the video stream to the vehicle-mounted terminal.
  • the embodiment of the present application does not limit the type of the camera component, for example, it may be a driving recorder, a video camera, and the like.
  • the embodiment of the application does not limit the number of camera components, and it can be one or more. Accordingly, the vehicle-mounted terminal can receive video streams sent by one camera component, or can receive video streams sent by multiple camera components.
  • the embodiments of the present application also do not limit the type of video stream, which can be specifically set according to the transmission protocol.
  • the video stream may be a real-time streaming protocol (RTSP) stream.
  • RTSP is an application layer protocol in the transmission control protocol/internet protocol (TCP/IP) system.
  • there can be multiple packaging formats in the RTSP stream; for example, it can be a transport stream (TS) format, an elementary stream (ES) format, or a bare stream format.
  • the bare bit stream can be encoded as an ES stream, and the ES stream can be packaged as a TS stream.
  • a bare bit stream is a data stream that has not been encoded.
  • the bare bit stream contains both audio data and video data.
  • the ES stream is a data stream containing only one kind of content, and is composed of several ES packets, such as an ES stream containing only video data or an ES stream containing only audio data.
  • the ES packets in the ES stream can be further encapsulated into TS packets to form a TS stream, and the TS packets can be independently encoded or decoded.
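For reference, MPEG transport stream packets have a fixed size of 188 bytes and begin with the sync byte 0x47, which is what allows each packet to be located and handled independently. A minimal splitter sketch (illustrative only):

```python
TS_PACKET_SIZE = 188   # fixed packet size in the MPEG-2 TS format
TS_SYNC_BYTE = 0x47    # every TS packet begins with this sync byte

def split_ts_packets(data):
    """Split a TS byte stream into its fixed-size packets, validating
    the sync byte so each packet can be handled independently."""
    if len(data) % TS_PACKET_SIZE != 0:
        raise ValueError("truncated TS stream")
    packets = []
    for off in range(0, len(data), TS_PACKET_SIZE):
        pkt = data[off:off + TS_PACKET_SIZE]
        if pkt[0] != TS_SYNC_BYTE:
            raise ValueError(f"lost sync at offset {off}")
        packets.append(pkt)
    return packets
```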
  • the above-mentioned video stream further includes a protocol header, and the terminal device needs to remove the protocol header in the video stream when processing the video stream.
  • the video stream is an RTSP stream, correspondingly, the video stream includes an RTSP header.
  • the vehicle-mounted terminal can extend the RTSP instructions, where the extended RTSP instructions add the capability to query and set the video stream type. After receiving the video stream, the vehicle-mounted terminal can determine the packaging format of the video stream based on the RTSP instructions.
  • the vehicle-mounted terminal converts the video stream into at least one first data packet, and the first data packet is independently encoded or decoded.
  • the vehicle-mounted terminal after receiving the video stream, the vehicle-mounted terminal needs to first remove the protocol header of the video stream, and then convert the video stream into at least one first data packet according to the packaging type of the video stream.
  • the vehicle-mounted terminal can use the above-mentioned extended RTSP instruction to query the packaging format of the video stream.
  • the first data packet may be a TS packet.
  • FIG. 4 is a schematic diagram of a video stream storage provided by an embodiment of the application.
  • After the camera component obtains the collected video data, it sends the video stream to the mobile data center (MDC) of the vehicle-mounted terminal, and the MDC sends the video stream to the parser of the vehicle-mounted terminal, which removes the RTSP header of the video stream.
  • The video stream can be a TS stream, an ES stream or a bare bit stream, and the specific video stream format can be determined by the packaging format specified in RTSP.
  • If the video stream is a TS stream, the parser can obtain at least one TS packet after removing the RTSP header, and then the parser sends the at least one TS packet to the outputter of the vehicle-mounted terminal. If the video stream is an ES stream, the parser can obtain at least one ES packet after removing the RTSP header, and then the parser sends the at least one ES packet to the packer of the vehicle-mounted terminal so that the packer encapsulates the at least one ES packet into at least one TS packet and sends it to the outputter.
  • If the video stream is a bare bit stream, the parser can send the bare bit stream to the encoder of the vehicle-mounted terminal for encoding to obtain at least one ES packet, and then the encoder sends the at least one ES packet to the packer of the vehicle-mounted terminal, so that the packer encapsulates the at least one ES packet into at least one TS packet and sends it to the outputter.
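The three conversion paths above (TS passthrough, ES repacketization, and bare-stream encoding) can be summarized as a dispatch on the packaging format. In this sketch, `encode` and `packetize` are placeholders standing in for a real video encoder and TS packetizer:

```python
def to_ts_packets(payload, packaging, packetize, encode):
    """Normalize an RTSP payload (protocol header already removed) into
    TS packets along the three paths described in the text (sketch)."""
    if packaging == "ts":
        return payload             # already independently decodable
    if packaging == "es":
        return packetize(payload)  # wrap ES packets in TS packets
    if packaging == "bare":
        return packetize(encode(payload))  # encode first, then wrap
    raise ValueError(f"unknown packaging format: {packaging}")
```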
  • the vehicle-mounted terminal synthesizes at least one first data packet acquired in the first reporting period into a first video segment, and saves the first video segment.
  • the vehicle-mounted terminal may synthesize the at least one first data packet acquired in the first reporting period into a first video segment, and save the first video segment.
  • the first video segment may be a TS video segment.
  • the above reporting period can be 1 second, 0.5 seconds, etc.
  • the embodiment of this application does not limit the duration of the first reporting period.
  • the above reporting period can be less than or equal to the maximum allowable lost video duration, where the maximum allowable lost video duration is the maximum duration of captured video that the user can tolerate losing when a failure occurs. Exemplarily, if the maximum allowable lost video duration is 1 second, the reporting period may correspondingly be 1 second or 0.5 second.
  • Taking a reporting period of 0.5 seconds as an example, the vehicle-mounted terminal synthesizes the first data packets buffered in the reporting period into the first video segment every 0.5 seconds, and outputs the first video segment to the memory. Since the memory stores a video clip every 0.5 seconds, even if a power failure occurs, at most 0.5 seconds of video data will be lost.
  • the vehicle-mounted terminal may determine the number of first data packets in the first reporting period according to the resolution of the video and the duration of the first reporting period. Exemplarily, if the first reporting period is 0.5 seconds and the resolution of the video is 1080p, about 1 MB of first data packets is buffered in 0.5 seconds. When the first data packet is a TS packet, since each TS packet always contains 188 bytes, 5577 TS packets can be generated in the first reporting period. Subsequently, the outputter of the vehicle-mounted terminal can output the first video segment composed of the 5577 TS packets to the memory.
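The packet count in this example follows directly from the fixed TS packet size; a worked check of the arithmetic:

```python
TS_PACKET_SIZE = 188              # every TS packet contains 188 bytes
buffered_bytes = 1 * 1024 * 1024  # ~1 MB buffered in one 0.5 s period at 1080p
packets_per_period = buffered_bytes // TS_PACKET_SIZE
print(packets_per_period)         # → 5577
```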
  • each data packet can be simply connected end to end in chronological order to synthesize the first video segment.
  • The at least one TS packet can be sent to the outputter, where the TS packets queue to be written out within one reporting period and are synthesized into the first video segment. For example, if there are 6 TS packets in one reporting period, the 6 TS packets are connected end to end to form the first video segment. Subsequently, the outputter outputs the first video segment to the memory.
  • FIG. 5 is a schematic diagram of interaction of a video processing method provided by an embodiment of this application. As shown in FIG. 5, taking the video stream as the TS stream as an example, the processing method of the collected video will be described.
  • after the vehicle camera collects the video stream, the vehicle camera sends the video stream to the MDC of the vehicle-mounted terminal, and the MDC then sends the video stream to the parser of the vehicle-mounted terminal, so that the parser removes the RTSP header and obtains at least one TS packet.
  • the parser sends the at least one TS packet to the outputter of the vehicle-mounted terminal, so that the outputter composes the at least one TS packet in the first reporting period into a first video segment, and then sends the first video segment to the SD card for storage.
  • the video playback component can extract the first video clip from the SD card for playback.
  • the parser can convert the at least one TS packet into a bare bitstream that can be directly previewed and played, and send the bare bitstream to the driving recorder or video playback component for video preview.
  • the video processing method provided by the embodiments of the present application receives a video stream collected by a camera component and converts the video stream into at least one first data packet that can be independently encoded or decoded. Subsequently, the first data packets obtained in the first reporting period are saved as the first video segment.
  • because the collected video stream is converted into at least one first data packet that can be independently encoded or decoded, when a failure occurs during video recording, the saved first video segment composed of the first data packets can still complete video decoding independently. Therefore, no index needs to be written when recording the video, which saves the storage space occupied by the recorded video, so that the video can be saved for a longer time.
  • the video stream sent by the camera component received by the vehicle-mounted terminal can be in a variety of different forms.
  • the video stream includes TS packets, and in some embodiments, the video stream includes ES packets.
  • the video stream can be a bare stream.
  • the video stream can be converted into at least one first data packet in different ways, and then the at least one first data packet is combined into at least one first video segment.
  • FIG. 6 is a schematic flowchart of another video processing method provided by an embodiment of this application
  • FIG. 7 is an interactive schematic diagram of another video processing method provided by an embodiment of this application
  • FIG. 7 corresponds to the video processing method in FIG. 6.
  • the video processing method includes:
  • the vehicle-mounted terminal receives the video stream collected by the camera component.
  • the specific implementation process and implementation principle of step S301 are similar to those of step S201 in FIG. 3, and will not be repeated here. Specifically, after the vehicle camera collects the video stream, the vehicle camera sends the video stream to the MDC of the vehicle-mounted terminal, and the MDC then sends the video stream to the parser of the vehicle-mounted terminal.
  • the vehicle-mounted terminal removes the protocol header in the video stream, and obtains at least one TS packet from the video stream.
  • the vehicle-mounted terminal can directly obtain at least one TS packet from the video stream after removing the protocol header in the video stream.
  • after the parser obtains the video stream containing TS packets, it can remove the RTSP header of the video stream to obtain at least one TS packet. Subsequently, the parser sends the at least one TS packet to the outputter of the vehicle-mounted terminal.
  • the vehicle-mounted terminal synthesizes at least one TS packet acquired in the first reporting period into a first video segment, and saves the first video segment.
  • the specific implementation process and implementation principle of step S303 are similar to those of step S203 in the first embodiment, and will not be repeated here. Specifically, as shown in FIG. 7, after the outputter synthesizes the at least one TS packet in the first reporting period into a TS video segment, the TS video segment is stored in the memory.
  • after receiving the video stream containing the TS packets collected by the camera component, the vehicle-mounted terminal can convert the video stream into at least one first video segment and save it in the memory. In addition, after receiving the video stream containing TS packets collected by the camera component, the vehicle-mounted terminal can also process the video stream and send it to the video playback component for direct preview playback. Referring to FIG. 8, the method further includes:
  • the vehicle-mounted terminal receives the video stream collected by the camera component.
  • the vehicle-mounted terminal removes the protocol header in the video stream, and obtains at least one TS packet from the video stream.
  • steps S401-S402 are similar to those of steps S301-S302 in FIG. 6, and will not be repeated here.
  • the parser may send the at least one TS packet to the unpacker of the vehicle-mounted terminal.
  • the vehicle-mounted terminal unpacks at least one TS packet, and obtains at least one elementary stream ES packet.
  • the vehicle-mounted terminal may convert the at least one TS packet into at least one ES packet.
  • the unpacker may unpack at least one TS packet, thereby obtaining at least one ES packet.
  • each ES packet may correspond to multiple TS packets.
  • the unpacker can send the unpacked ES packet to the decoder.
  • the embodiment of the present application does not limit how the unpacker unpacks the TS packets; the unpacking can be performed according to an existing TS packet unpacking method, so as to obtain at least one ES packet.
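For illustration only, one conventional TS unpacking pass walks each 188-byte packet, skips the 4-byte header and any adaptation field, and keeps the payload bytes for PES/ES reassembly. This simplified sketch ignores PES-header parsing and continuity counters:

```python
def extract_ts_payload(ts_packet: bytes) -> tuple[int, bytes]:
    """Return (PID, payload bytes) of a single 188-byte TS packet."""
    assert len(ts_packet) == 188 and ts_packet[0] == 0x47  # 0x47 sync byte
    pid = ((ts_packet[1] & 0x1F) << 8) | ts_packet[2]
    afc = (ts_packet[3] >> 4) & 0x03        # adaptation_field_control bits
    offset = 4
    if afc in (2, 3):                       # adaptation field present
        offset += 1 + ts_packet[4]          # its length byte + body
    payload = ts_packet[offset:] if afc in (1, 3) else b""
    return pid, payload

# a payload-only packet on PID 0x100 yields 184 payload bytes
pkt = bytes([0x47, 0x41, 0x00, 0x11]) + bytes(184)
pid, payload = extract_ts_payload(pkt)
print(hex(pid), len(payload))  # → 0x100 184
```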
  • the vehicle-mounted terminal decodes at least one ES packet, and obtains a naked code stream to be previewed corresponding to the video stream.
  • after the decoder obtains the at least one ES packet, it can decode the at least one ES packet to obtain the bare bitstream to be previewed corresponding to the video stream.
  • the embodiment of the present application does not restrict how the decoder decodes the ES packets; the decoding can be performed according to an existing ES packet decoding method, so as to obtain the bare bitstream to be previewed corresponding to the video stream.
  • the vehicle-mounted terminal sends the bare bitstream to be previewed to the video playback component.
  • the decoder may send the bare bitstream to be previewed to the video playback component.
  • the renderer of the video playback component can render the video based on the bare bitstream for previewing.
  • the video processing method provided by the embodiment of the present application receives the video stream collected by the camera component, removes the protocol header in the video stream, obtains at least one TS packet from the video stream, and saves the first data packets obtained in the first reporting period as the first video segment.
  • because the collected video stream is converted into at least one first data packet that can be independently encoded or decoded, when a failure occurs during video recording, the saved first video segment composed of the first data packets can still complete video decoding independently. Therefore, no index needs to be written when recording the video, which saves the storage space occupied by the recorded video, so that the video can be saved for a longer time.
  • FIG. 9 is a schematic flowchart of another video processing method provided by an embodiment of this application
  • FIG. 10 is an interactive schematic diagram of still another video processing method provided by an embodiment of this application
  • FIG. 10 corresponds to the video processing method in FIG. 9.
  • the video processing method includes:
  • S501 The vehicle-mounted terminal receives the video stream collected by the camera component.
  • the specific implementation process and implementation principle of step S501 are similar to those of step S201 in the first embodiment, and will not be repeated here. Specifically, after the vehicle camera collects the video stream, the vehicle camera sends the video stream to the MDC of the vehicle-mounted terminal, and the MDC then sends the video stream to the parser of the vehicle-mounted terminal.
  • the vehicle-mounted terminal removes the protocol header in the video stream, and obtains at least one ES packet from the video stream.
  • the vehicle-mounted terminal can obtain at least one ES packet from the video stream after removing the protocol header in the video stream.
  • after the parser of the vehicle-mounted terminal obtains the video stream containing the ES packets, it can remove the RTSP header of the video stream to obtain at least one ES packet. Subsequently, the parser sends the at least one ES packet to the packetizer of the vehicle-mounted terminal.
  • the vehicle-mounted terminal encapsulates at least one ES packet into at least one TS packet.
  • the vehicle-mounted terminal may encapsulate the at least one ES packet into at least one TS packet.
  • the packetizer may encapsulate the at least one ES packet into at least one TS packet, and then send the at least one TS packet to the outputter of the vehicle-mounted terminal.
  • the vehicle-mounted terminal synthesizes at least one TS packet acquired in the first reporting period into a first video segment, and saves the first video segment.
  • in step S504, the outputter synthesizes the at least one TS packet in the first reporting period into a TS video segment, and then stores the TS video segment in the memory.
  • after receiving the video stream containing the ES packets collected by the camera component, the vehicle-mounted terminal can convert the video stream into at least one first video segment and save it in the memory. In addition, after receiving the video stream collected by the camera component, the vehicle-mounted terminal can also process the video stream and send it to the video playback component for direct preview and playback. Referring to FIG. 11, the method further includes:
  • the vehicle-mounted terminal receives the video stream collected by the camera component.
  • the vehicle-mounted terminal removes the protocol header in the video stream, and obtains at least one ES packet from the video stream.
  • the specific implementation processes of steps S601-S602 are similar to those of steps S501-S502 in FIG. 9, and will not be repeated here.
  • the parser may send the at least one ES packet to the decoder of the vehicle-mounted terminal.
  • the vehicle-mounted terminal decodes the at least one ES packet, and obtains the bare bitstream to be previewed corresponding to the video stream.
  • after the decoder receives the at least one ES packet sent by the parser, it can decode the at least one ES packet to obtain the bare bitstream to be previewed corresponding to the video stream.
  • the vehicle-mounted terminal transmits the bare bitstream to be previewed to the video playback component.
  • the specific implementation processes of steps S603-S604 are similar to those of steps S404-S405 in FIG. 8, and will not be repeated here.
  • the decoder may send the bare bitstream to be previewed to the video playback component.
  • the renderer of the video playback component can render the video based on the bare bitstream for previewing.
  • the video processing method provided by the embodiment of the present application receives the video stream collected by the camera component, removes the protocol header in the video stream, obtains at least one ES packet from the video stream, and encapsulates the at least one ES packet into at least one TS packet.
  • because the collected video stream is converted into at least one first data packet that can be independently encoded or decoded, when a failure occurs during video recording, the saved first video segment composed of the first data packets can still complete video decoding independently. Therefore, no index needs to be written when recording the video, which saves the storage space occupied by the recorded video, so that the video can be saved for a longer time.
  • FIG. 12 is a schematic flowchart of another video processing method provided by an embodiment of this application
  • FIG. 13 is an interactive schematic diagram of another video processing method provided by an embodiment of this application
  • FIG. 13 corresponds to the video processing method in FIG. 12.
  • the video processing method includes:
  • the vehicle-mounted terminal receives the video stream collected by the camera component.
  • the specific implementation process and implementation principle of step S701 are similar to those of step S201 in FIG. 3, and will not be repeated here. Specifically, after the vehicle camera collects the video stream, the vehicle camera sends the video stream to the MDC of the vehicle-mounted terminal, and the MDC then sends the video stream to the parser of the vehicle-mounted terminal.
  • the vehicle-mounted terminal removes the protocol header in the video stream, and obtains the bare bit stream corresponding to the video stream from the video stream.
  • after the parser obtains the video stream, the RTSP header of the video stream can be removed to obtain the bare bitstream. Subsequently, the parser sends the bare bitstream to the encoder of the vehicle-mounted terminal.
  • the vehicle-mounted terminal encodes the bare bitstream to generate at least one ES packet.
  • the embodiment of the present application does not restrict how to encode the bare bitstream to generate the ES packets; an existing bare bitstream encoding method can be used. Specifically, as shown in FIG. 13, the encoder encodes the bare bitstream, generates at least one ES packet, and sends the at least one ES packet to the packetizer of the vehicle-mounted terminal.
  • the vehicle-mounted terminal encapsulates at least one ES packet into at least one TS packet.
  • the packetizer encapsulates the at least one ES packet into at least one TS packet, and then sends the at least one TS packet to the outputter of the vehicle-mounted terminal.
  • the vehicle-mounted terminal synthesizes at least one TS packet obtained in the first reporting period into a first video segment, and saves the first video segment.
  • the TS video segment is stored in the memory.
  • after receiving the bare bitstream collected by the camera component, the vehicle-mounted terminal can convert the video stream into at least one first video segment and save it in the memory. In addition, after receiving the video stream collected by the camera component, the vehicle-mounted terminal can also process the video stream and send it to the video playback component for direct preview and playback. Referring to FIG. 14, the method further includes:
  • the vehicle-mounted terminal receives the video stream collected by the camera component.
  • the vehicle-mounted terminal removes the protocol header in the video stream, and obtains the bare bit stream corresponding to the video stream from the video stream.
  • the vehicle-mounted terminal transmits the bare bit stream corresponding to the video stream to the video playback component.
  • the specific implementation process and implementation principle of step S803 are similar to those of step S405 in FIG. 8, and will not be repeated here.
  • the parser in the vehicle-mounted terminal can remove the protocol header in the video stream, obtain the bare bitstream corresponding to the video stream from the video stream, and send the bare bitstream to be previewed to the video playback component. Subsequently, the renderer of the video playback component can render the video based on the bare bitstream for previewing.
  • the video processing method receives the video stream collected by the camera component, removes the protocol header in the video stream, obtains the bare bitstream corresponding to the video stream from the video stream, then encodes the bare bitstream to generate at least one ES packet, and encapsulates the at least one ES packet into at least one TS packet.
  • because the collected video stream is converted into at least one first data packet that can be independently encoded or decoded, when a failure occurs during video recording, the saved first video segment composed of the first data packets can still complete video decoding independently. Therefore, no index needs to be written when recording the video, which saves the storage space occupied by the recorded video, so that the video can be saved for a longer time.
  • multiple first video segments may also be merged into a video file before being stored.
  • the following describes how the vehicle-mounted terminal combines multiple first video clips into one video file.
  • FIG. 15 is a schematic flowchart of another video processing method provided by an embodiment of this application. Based on the foregoing embodiment, the video processing method includes:
  • the vehicle-mounted terminal receives the video stream collected by the camera component.
  • the vehicle-mounted terminal converts the video stream into at least one first data packet, and the first data packet is independently encoded or decoded.
  • the vehicle-mounted terminal synthesizes at least one first data packet acquired in the first reporting period into a first video segment, and saves the first video segment.
  • steps S901-S903 are similar to those of steps S201-S203 in FIG. 3, and will not be repeated here.
  • the vehicle-mounted terminal merges the at least one video segment into at least one video file according to the standard data volume of the video file.
  • the vehicle-mounted terminal converts the video stream into at least one first data packet, it can merge the at least one video segment into at least one video file according to the standard data volume of the video file.
  • the first data packet can be a TS packet. Since each TS packet carries its own metadata, no additional metadata information is required, and the data packets do not need to be parsed when merging.
  • with the file-append method, multiple video clips can simply be connected end to end to quickly merge them into one video file.
  • the embodiment of the application does not limit the standard data volume of the video file, and can be specifically set according to actual conditions.
  • Exemplarily, if each video file is 10 MB and each video segment is 1 MB, 10 video segments can be synthesized into one video file.
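Because TS segments need no parsing, merging up to the standard data volume is plain byte appending; a sketch assuming a 10 MB standard file size (the constant and function name are illustrative):

```python
STANDARD_SIZE = 10 * 1024 * 1024  # assumed 10 MB standard data volume

def merge_segments(segments: list[bytes]) -> list[bytes]:
    """Append segments end to end, closing a file once it reaches STANDARD_SIZE."""
    files, current = [], b""
    for seg in segments:
        current += seg
        if len(current) >= STANDARD_SIZE:
            files.append(current)
            current = b""
    if current:                      # flush a final, possibly short, file
        files.append(current)
    return files

# ten 1 MB segments are synthesized into one 10 MB video file
files = merge_segments([bytes(1024 * 1024)] * 10)
print(len(files), len(files[0]))  # → 1 10485760
```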
  • FIG. 16 is a schematic diagram of video segment synthesis according to an embodiment of the application. Specifically, referring to FIG. 16, the synthesizer of the vehicle-mounted terminal receives TS video segments Video1, Video2, and Video3 from the outputter of the vehicle-mounted terminal. If the total data volume of TS video segments Video1, Video2, and Video3 meets the preset standard data volume, the synthesizer merges the TS video segments Video1, Video2, and Video3 into one video file, and sends the video file to the converter of the vehicle-mounted terminal.
  • the vehicle-mounted terminal saves at least one video file according to the first video format.
  • the vehicle-mounted terminal may be preset with the video format of the video file.
  • the TS video file itself can be played, and the TS video file can also be converted into a more general MP4 file through a converter.
  • if the first video format is TS, the TS video file output by the vehicle-mounted terminal is saved without conversion; if the first video format is MP4, the TS video file output by the vehicle-mounted terminal is converted to the MP4 format and then saved.
  • the converter of the vehicle-mounted terminal can convert the format of the video file sent by the synthesizer according to the video format set by the user, and send the converted video file to the memory for storage.
  • Exemplarily, if the preset video format is the MP4 format, the format of the video file is converted to the MP4 format.
  • the MP4 format may be better suited to playback on the vehicle-mounted terminal.
  • At least one video segment is merged into at least one video file according to the standard data volume of the video file, and the at least one video file is saved according to the first video format.
  • TS video clips can be merged by file appending to achieve fast video recovery.
  • the video stream can be encapsulated into multiple video formats according to user needs.
  • FIG. 17 is a schematic flowchart of another video processing method provided by an embodiment of this application.
  • the video processing method includes:
  • the vehicle-mounted terminal obtains the speed of the vehicle where the camera component is located.
  • the vehicle-mounted terminal can detect the speed of the vehicle where the camera component is located in real time.
  • the embodiments of the present application do not limit how to obtain the speed of the vehicle.
  • the vehicle where the camera component is located may be provided with a speed sensor, and the real-time speed of the vehicle can be determined by the speed sensor.
  • the vehicle-mounted terminal can obtain the real-time satellite positioning of the vehicle to calculate the speed of the vehicle.
  • if the decrease in the speed of the vehicle within the first time period exceeds the first threshold, the vehicle-mounted terminal saves the first video segment in the second time period to an independent storage area.
  • the starting time point of the first time period is the time point when the vehicle pedal is depressed
  • the duration of the first time period is the first preset duration
  • the middle time point of the second time period is the time when the vehicle pedal is depressed.
  • the duration of the second time period is the second preset duration.
  • FIG. 18 is a schematic diagram of a first time period and a second time period provided by an embodiment of this application.
  • the first preset duration may be 150ms
  • the second preset duration may be 300ms.
  • the time point t1 when the vehicle pedal is depressed can be used as the starting time point of the first time period, and the time point t2, 150 ms after t1, can be used as the end time point of the first time period; the time period between t1 and t2 is the first time period.
  • the time t1 when the driver steps on the driving pedal of the vehicle can be regarded as the middle time point of the second time period; the time point t0, 150 ms before t1, is taken as the starting time point of the second time period, and the time point t2, 150 ms after t1, is taken as the end time point of the second time period; the time period between t0 and t2 is the second time period.
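With the illustrative durations above (first preset duration 150 ms, second preset duration 300 ms), the two time periods around the pedal moment t1 can be computed as follows (a sketch; the function name is an assumption):

```python
def event_windows(t1_ms: int, first_ms: int = 150, second_ms: int = 300):
    """Return the (start, end) of the first and second time periods, in ms."""
    first = (t1_ms, t1_ms + first_ms)      # t1 .. t2
    half = second_ms // 2                  # second period is centred on t1
    second = (t1_ms - half, t1_ms + half)  # t0 .. t2
    return first, second

first, second = event_windows(10_000)
print(first, second)  # → (10000, 10150) (9850, 10150)
```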
  • the vehicle-mounted terminal can locate the first video segment within the second time period according to the timestamp of the first video segment.
  • each first video clip has a corresponding timestamp. Therefore, the vehicle-mounted terminal can use the timestamp of the first video segment to locate the first video segment.
  • the time stamp of the first video segment can be used to identify the time at which the recording of the first video segment starts, and is recorded and generated by the vehicle-mounted terminal.
  • the first video segment is composed of multiple TS packets, and the time at which the recording of the first video segment starts may be the time when the recording of the first TS packet starts.
  • when the vehicle-mounted terminal saves the first video clip, it may save the first video clip together with the timestamp of the first video clip.
  • the time stamp of the first video clip can be used to quickly locate the key video within 15 seconds before and after the brake.
  • in this step, if the decrease in the speed of the vehicle within the first time period exceeds the first threshold, the vehicle-mounted terminal saves the first video clip in the second time period to an independent storage area, and may also save the timestamp of the first video clip to the independent storage area.
  • FIG. 19 is a schematic diagram of video storage during a vehicle emergency incident provided by an embodiment of the application.
  • the vehicle control unit (VCU) may send instruction information to the scanner of the vehicle-mounted terminal, where the instruction information is used to instruct the scanner to filter out, from the memory, the first video segments stored in the second time period.
  • the scanner can scan the timestamps of the first video clips and filter out Video1 and Video2. Subsequently, the scanner sends Video1 and Video2 to the synthesizer of the vehicle-mounted terminal, the synthesizer synthesizes Video1 and Video2 into a video file and sends it to the converter of the vehicle-mounted terminal, and finally the converter converts the video file into the MP4 format and sends it to the memory for independent storage.
  • the VCU can identify and report a sudden braking event by judging the braking amplitude. If the VCU detects that the speed drop is greater than 25 km/h within the 150 ms first time period, it can be judged that an accident has occurred. At this time, the VCU defines and reports the emergency event, and sends instruction information to the scanner to instruct the scanner to quickly locate, through the timestamps, the key video within 15 seconds before and after the braking, and move the key video to a dedicated storage area.
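The VCU behaviour just described can be sketched as two small checks. The 25 km/h drop threshold and the ±15 s key-video window come from the example above; the data shapes and function names are assumptions for illustration:

```python
def is_emergency_brake(speed_start_kmh: float, speed_end_kmh: float,
                       threshold_kmh: float = 25.0) -> bool:
    """True if the speed dropped by more than the threshold in the first period."""
    return (speed_start_kmh - speed_end_kmh) > threshold_kmh

def key_segments(segments: list[tuple[float, str]], brake_s: float,
                 window_s: float = 15.0) -> list[str]:
    """Select segments whose timestamp lies within ±window_s of the brake time."""
    return [name for ts, name in segments
            if brake_s - window_s <= ts <= brake_s + window_s]

print(is_emergency_brake(60.0, 30.0))  # → True
segs = [(70.0, "Video0"), (90.0, "Video1"), (100.0, "Video2"), (130.0, "Video3")]
print(key_segments(segs, brake_s=100.0))  # → ['Video1', 'Video2']
```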
  • the vehicle-mounted terminal obtains the speed of the vehicle where the camera component is located, and locates the first video segment in the second time period according to the timestamp of the first video segment; if the decrease in the speed of the vehicle within the first time period exceeds the first threshold, the first video segment in the second time period is saved to an independent storage area.
  • the key event video can be quickly moved to an independent storage area through a timestamp, and the key event video can be quickly extracted in the independent storage area after the accident, so that more valuable videos can be obtained faster.
  • the aforementioned program can be stored in a computer-readable storage medium; when the program is executed, the steps of the foregoing method embodiments are performed. The foregoing storage medium includes media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
  • FIG. 20 is a schematic structural diagram of a video processing device provided by an embodiment of this application.
  • the video processing device can be implemented by software, hardware or a combination of the two to execute the above video processing method.
  • the video processing device 200 includes: a receiving module 201 and a processing module 202.
  • the receiving module 201 is used to receive the video stream collected by the camera component
  • the processing module 202 is configured to convert the video stream into at least one first data packet, where the first data packet can be independently encoded or decoded; synthesize the first data packets obtained in the first reporting period into a first video segment, and save the first video segment, where the first reporting period is less than or equal to the maximum allowable lost video duration, and the maximum allowable lost video duration is the maximum duration of video captured by the camera component that the user can tolerate losing when a failure occurs.
  • the first data packet is a transport stream TS packet.
  • the processing module 202 is specifically configured to remove the protocol header in the video stream and obtain at least one TS packet from the video stream.
  • the processing module 202 is specifically configured to remove the protocol header in the video stream, obtain at least one ES packet from the video stream, and encapsulate the at least one ES packet into at least one TS packet.
  • the processing module 202 is specifically configured to remove the protocol header in the video stream, obtain the bare bitstream corresponding to the video stream from the video stream, encode the bare bitstream to generate at least one ES packet, and encapsulate the at least one ES packet into at least one TS packet.
  • the protocol header is a real-time streaming protocol RTSP header.
  • the data amount of each first data packet is the same.
  • the processing module 202 is further configured to merge at least one first video segment into at least one video file according to the standard data amount of the video file; and save at least one video file according to the first video format.
  • the receiving module 201 is also used to obtain the speed of the vehicle where the camera component is located;
  • the processing module 202 is further configured to save the first video clip in the second time period to an independent storage area if the rate of decrease in the speed of the vehicle in the first time period exceeds the first threshold.
  • the starting time point of the first time period is the time point when the vehicle pedal is depressed
  • the duration of the first time period is the first preset duration
  • the middle time point of the second time period is the time when the vehicle pedal is depressed.
  • the duration of the second time period is the second preset duration.
  • the processing module 202 is further configured to locate the first video segment in the second time period according to the timestamp of the first video segment, and the timestamp of the first video segment is used to identify the first video segment The time when the recording started.
  • the video processing device provided by the embodiment of the present application can execute the actions of the video processing method in the foregoing method embodiment, and its implementation principles and technical effects are similar, and will not be repeated here.
  • FIG. 21 is a schematic structural diagram of a vehicle-mounted terminal provided by an embodiment of the application.
  • the vehicle-mounted terminal may include: a processor 211 (such as a CPU), a memory 212, a receiver 213, and a transmitter 214; the receiver 213 and the transmitter 214 are coupled to the processor 211, the processor 211 controls the receiving action of the receiver 213, and the processor 211 controls the sending action of the transmitter 214.
  • the memory 212 may include a high-speed RAM memory, or may also include a non-volatile memory NVM, such as at least one disk memory.
  • the memory 212 may store various information to complete various processing functions and implement the method steps of the embodiments of the present application.
  • the vehicle-mounted terminal involved in the embodiment of the present application may further include: a power supply 215, a communication bus 216, and a communication port 219.
  • the receiver 213 and the transmitter 214 may be integrated in the transceiver of the vehicle-mounted terminal, or may be independent transceiver antennas on the vehicle-mounted terminal.
  • the communication bus 216 is used to implement communication connections between components.
  • the aforementioned communication port 219 is used to implement connection and communication between the vehicle-mounted terminal and other peripherals.
  • the above-mentioned memory 212 is used to store computer-executable program code, and the program code includes instructions; when the processor 211 executes the instructions, the instructions cause the processor 211 to perform the processing actions of the vehicle-mounted terminal in the above-mentioned method embodiments, cause the transmitter 214 to perform the sending actions of the vehicle-mounted terminal in the foregoing method embodiments, and cause the receiver 213 to perform the receiving actions of the vehicle-mounted terminal in the foregoing method embodiments.
  • the implementation principles and technical effects are similar and will not be repeated here.
  • the embodiment of the present application also provides a chip including a processor and an interface.
  • the interface is used to input and output data or instructions processed by the processor.
  • the processor is used to execute the method provided in the above method embodiment.
  • This chip can be applied to in-vehicle terminals.
  • the embodiment of the present application also provides a program, which is used to execute the method provided in the above method embodiment when the program is executed by the processor.
  • the embodiment of the present application also provides a program product, such as a computer-readable storage medium, in which instructions are stored, and when the program product runs on a computer, the computer executes the method provided in the foregoing method embodiment.
  • The foregoing embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
  • When software is used for implementation, the embodiments may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions.
  • When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present invention are generated in whole or in part.
  • the computer can be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • Computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium.
  • For example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or a wireless manner (such as infrared, radio, or microwave).
  • The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center that integrates one or more available media.
  • The available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid-state drive (SSD)).

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Television Signal Processing For Recording (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

A video processing method and apparatus. The method includes: receiving a video stream captured by a camera component (S201); converting the video stream into at least one first data packet, where the first data packet can be independently encoded or decoded (S202); and saving, as a first video segment, the first data packets obtained within a first reporting period (S203). In this method, because the captured video stream is converted into at least one first data packet that can be independently encoded or decoded, when a fault occurs during video recording, the saved first video segment composed of first data packets can still be decoded on its own. Therefore, no index needs to be written during recording, which saves the storage space occupied by the recorded video and allows a longer video to be retained.

Description

视频处理方法及装置
本申请要求于2020年04月24日提交中国专利局、申请号为202010331493.2、申请名称为“视频处理方法及装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请实施例涉及信息技术领域，尤其涉及一种视频处理方法及装置。
背景技术
随着经济水平的不断发展,私家车变得越来越普及。车主通常会给私家车加装各种摄像组件,例如,可以在车辆尾部加装摄像头来辅助倒车,在车辆上可以加装行车记录仪来记录行车过程。
现有技术中,车辆上加装的摄像组件(如行车记录仪)通常独自完成视频的预览、录制、保存和播放。在视频录制过程中,为了提供视频故障恢复能力,摄像组件在写入视频数据的同时,通常需要写入索引数据。当前有两种索引数据的实现方式,第一种是根据分辨率和帧率计算索引空间并预写入索引,再将视频数据不断写入存储器。由于预先写入索引通常和实际索引空间的大小和实际索引位置存在误差,造成索引空间浪费。
另一种是在录制视频的过程中不断的写出索引到存储器,然而,录制视频的过程中不断的写出索引会占用较多的存储空间,由此造成录制视频占用的存储空间较大。
发明内容
本申请实施例提供一种视频处理方法及装置,以解决现有技术中录制视频占用的存储空间较大的问题。
第一方面,本申请实施例提供一种视频处理方法,该方法可以应用于视频处理装置、也可以应用于视频处理装置中的芯片,视频处理装置可例如车载终端。下面以应用于车载终端为例对该方法进行描述。该方法中,车载终端接收摄像组件采集的视频流,并将视频流转换为至少一第一数据包,其中的第一数据包独立编码或解码。随后,车载终端将第一上报周期内获取的第一数据包合成为第一视频片段,并将第一视频片段保存。其中,第一上报周期小于或等于最大允许丢失视频时长,最大允许丢失视频时长为发生故障时用户允许丢失的摄像组件拍摄的视频的最长时间。
通过第一方面提供的视频处理方法,由于将采集到的视频流转化为可以独立编码或解码的至少一第一数据包,当视频录制过程中出现故障时,车载终端保存的第一数据包组成的第一视频片段也可以独自完成视频的解码,从而无需在录制视频时写入索引,节省了录制视频占用的存储空间,从而可以保存更长时间的视频。
在一种可实施的方式中,第一数据包为传输流TS包。
在一种可实施的方式中,若视频流包含有TS包,车载终端可以去除视频流中的 协议头后,从视频流中获取至少一TS包。
通过该可实施方式提供的视频处理方法,当视频流包含有TS包时,车载终端直接去除协议头从视频流中获取至少一TS包,并将至少一TS包合成第一视频片段保存。由于以TS包合成的第一视频片段可独立编码或解码,进而节省了录制视频占用的存储空间,使得存储空间中可以保存更长时间的视频。
在一种可实施的方式中,若视频流包含有ES包,车载终端去除视频流中的协议头后,从视频流中获取至少一ES包,再将至少一ES包封装为至少一TS包。
通过该可实施方式提供的视频处理方法,当视频流包含有ES包时,车载终端通过去除协议头从视频流中获取至少一ES包,并将ES包封装为TS包,最后将至少一TS包合成第一视频片段保存。由于以TS包合成的第一视频片段可独立编码或解码,进而节省了录制视频占用的存储空间,使得存储空间中可以保存更长时间的视频。
在一种可实施的方式中,若视频流为裸码流,车载终端去除视频流中的协议头后,从视频流中获取视频流对应的裸码流,并将裸码流进行编码,生成至少一ES包。最后,车载终端将至少一ES包封装为至少一TS包。
通过该可实施方式提供的视频处理方法,当视频流为裸码流时,车载终端通过去除协议头从视频流中获取视频流对应的裸码流,将裸码流进行编码生成ES包,再将ES包包封装为至TS包,最后将至少一TS包合成第一视频片段保存。由于以TS包合成的第一视频片段可独立编码或解码,进而节省了录制视频占用的存储空间,使得存储空间中可以保存更长时间的视频。
在一种可实施的方式中,协议头为实时流传输协议RTSP头。
在一种可实施的方式中,每个第一数据包的数据量相同。
在一种可实施的方式中,车载终端还可以根据视频文件的标准数据量,将至少一第一视频片段合并成至少一视频文件,并根据第一视频格式,保存至少一视频文件。
通过该可实施方式提供的视频处理方法,避免了视频以大量第一视频片段的形式进行保存,使得用户查阅视频时更加方便。同时,可以根据视频播放组件的播放格式存储视频文件,有利于快速在视频播放组件上进行播放。
在一种可实施的方式中,车载终端还可以获取摄像组件所在车辆的速度,若车辆的速度在第一时间段内的降幅度超过第一阈值,则将第二时间段内的第一视频片段保存至独立存储区域。
其中,所述第一时间段的起始时间点为所述车辆的行车踏板被踩下的时间点,所述第一时间段的时长为第一预设时长;所述第二时间段的中间时间点为所述车辆的行车踏板被踩下的时间点,所述第二时间段的时长为第二预设时长。
通过该可实施方式提供的视频处理方法,可以在车辆刹车或撞击时紧急记录视频,车载终端可以在独立存储区域快速提取关键事件视频。
在一种可实施的方式中,车载终端还可以根据第一视频片段的时间戳定位第二时间段内的第一视频片段,第一视频片段的时间戳用于标识第一视频片段录制开始时刻的时间。
第二方面,本申请实施例提供一种视频处理装置,视频处理装置包括:接收模块,用于接收摄像组件采集的视频流;处理模块,用于将视频流转换为至少一第一数据包, 第一数据包独立编码或解码;将第一上报周期内获取的第一数据包合成为第一视频片段,并将第一视频片段保存,第一上报周期小于或等于最大允许丢失视频时长,最大允许丢失视频时长为发生故障时用户允许丢失的摄像组件拍摄的视频的最长时间。
在一种可实施的方式中,若视频流包含有TS包,处理模块具体用于去除视频流中的协议头,从视频流中获取至少一TS包。
在一种可实施的方式中,若视频流包含有ES包,处理模块具体用于去除视频流中的协议头,从视频流中获取至少一ES包;将至少一ES包封装为至少一TS包。
在一种可实施的方式中,若视频流为裸码流,处理模块,具体用于去除视频流中的协议头,从视频流中获取视频流对应的裸码流;将裸码流进行编码,生成至少一ES包;将至少一ES包封装为至少一TS包。
在一种可实施的方式中,协议头为实时流传输协议RTSP头。
在一种可实施的方式中,每个第一数据包的数据量相同。
在一种可实施的方式中,处理模块,还用于根据视频文件的标准数据量,将至少一第一视频片段合并成至少一视频文件;根据第一视频格式,保存至少一视频文件。
在一种可实施的方式中,接收模块,还用于获取摄像组件所在车辆的速度;
处理模块，还用于若车辆的速度在第一时间段内的降幅度超过第一阈值，则将第二时间段内的第一视频片段保存至独立存储区域；
其中,第一时间段的起始时间点为车辆的行车踏板被踩下的时间点,第一时间段的时长为第一预设时长;第二时间段的中间时间点为车辆的行车踏板被踩下的时间点,第二时间段的时长为第二预设时长。
在一种可实施的方式中,处理模块,还用于根据第一视频片段的时间戳定位第二时间段内的第一视频片段,第一视频片段的时间戳用于标识第一视频片段录制开始时刻的时间。
第三方面,本申请实施例提供一种车载终端,车载终端包括:处理器、存储器、发送器和接收器;发送器和接收器耦合至处理器,处理器控制发送器的发送动作,处理器控制接收器的接收动作;
其中,存储器用于存储计算机可执行程序代码,程序代码包括信息;当处理器执行信息时,信息使网络设备执行如第一方面的各可能的实施方式所提供的视频处理方法。
第四方面,本申请实施例提供一种芯片,包括:处理器,用于从存储器中调用并运行计算机程序,使得安装有芯片的设备执行如第一方面的实施方式所提供的视频处理方法。
第五方面,本申请实施例提供一种计算机可读存储介质,用于存储计算机程序,计算机程序使得计算机执行如第一方面的实施方式所提供的视频处理方法。
第六方面,本申请实施例提供一种计算机程序产品,包括计算机程序信息,该计算机程序信息使得计算机执行如第一方面的实施方式所提供的视频处理方法。
第七方面,本申请实施例提供一种计算机程序,计算机程序使得计算机执行如第一方面的实施方式所提供的视频处理方法。
第八方面,本申请实施例提供一种存储介质,其上存储有计算机程序,包括:该 程序被处理器执行时上述第一方面或第一方面的各种实施方式的视频处理方法。
本申请实施例提供的视频处理方法及装置,通过接收摄像组件采集的视频流,并将视频流转换为至少一第一数据包,第一数据包独立编码或解码,随后,再将第一上报周期内获取的第一数据包作为第一视频片段保存。通过该方式,由于将采集到的视频流转化为可以独立编码或解码的至少一第一数据包,当视频录制过程中出现故障时,保存的第一数据包组成的第一视频片段也可以独自完成视频的解码,从而无需在录制视频时写入索引,节省了录制视频占用的存储空间,从而可以保存更长时间的视频。
附图说明
图1为现有技术中的一种视频处理方法的示意图;
图2为本申请实施例提供的一种视频处理方法的系统架构图;
图3为本申请实施例提供的一种视频处理方法的流程示意图;
图4为本申请实施例提供的一种第一视频片段的合成示意图;
图5为本申请实施例提供的一种视频处理方法的交互示意图;
图6为本申请实施例提供的另一种视频处理方法的流程示意图;
图7为本申请实施例提供的另一种视频处理方法的交互示意图;
图8为本申请实施例提供的再一种视频处理方法的流程示意图;
图9为本申请实施例提供的又一种视频处理方法的流程示意图;
图10为本申请实施例提供的又一种视频处理方法的交互示意图;
图11为本申请实施例提供的又一种视频处理方法的流程示意图;
图12为本申请实施例提供的又一种视频处理方法的流程示意图;
图13为本申请实施例提供的又一种视频处理方法的交互示意图;
图14为本申请实施例提供的又一种视频处理方法的流程示意图;
图15为本申请实施例提供的又一种视频处理方法的流程示意图;
图16为本申请实施例提供的一种视频片段合成示意图;
图17为本申请实施例提供的一种车载终端的结构示意图;
图18为本申请实施例提供的一种第一时间段和第二时间段的示意图;
图19为本申请实施例提供的一种车辆突发事件时视频存储的示意图;
图20为本申请实施例提供的一种视频处理装置的结构示意图;
图21为本申请实施例提供的一种车载终端的结构示意图。
具体实施方式
为使本发明的目的、技术方案和优点更加清楚,下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
随着经济水平的不断发展,私家车变得越来越普及。车主通常会给私家车加装各种摄像组件,例如,可以在车辆尾部加装摄像头来辅助倒车,在车辆上可以加装行车记录仪来记录行车过程。
现有技术中,车辆上加装的摄像组件通常独自完成视频的预览、录制、保存和播放。图1为现有技术中的一种视频处理方法的示意图,如图1所示,示例性的,行车记录仪自带的摄像头在拍摄视频流后,将视频流发送给行车记录仪自行完成视频预览和播放,并将视频存入行车记录仪自带的安全数码卡(Secure Digital Memory Card,SD card)中。
在视频录制过程中,为了提供视频故障恢复能力,摄像组件在写入视频数据的同时,通常需要写入索引数据。该索引数据包含有视频的帧率和分辨率等信息,用于播放视频时对视频数据进行解码。当前有两种索引数据的实现方式,第一种是根据分辨率和帧率计算索引空间并预写入索引,再将视频数据不断写入存储器。由于预先写入索引通常也和实际索引空间的大小和实际索引位置存在误差,造成索引空间浪费,通常不采用第一种方式。
另一种是在录制视频的过程中不断的写出索引到存储器,然而,录制视频的过程中不断的写出索引会占用较多的存储空间,由此造成录制视频占用的存储空间较大。
考虑到上述问题,本申请实施例提供了一种视频处理方法,以减少录制视频占用的存储空间。本申请实施例中,接收到的视频流以可直接解码的形式保存,从而无需在录制视频时写入索引,节省了录制视频占用的存储空间,从而可以保存更长时间的视频。
本申请实施例提供的视频处理方法,不但可以适用于处理车辆的摄像组件拍摄的视频流,例如行车记录仪等,还可以运用于其他非车辆上的视频组件拍摄的视频流,例如,监控摄像头、摄像机等。
图2为本申请实施例提供的一种视频处理方法的系统架构图。如图2所示,包括有摄像组件11、处理器12、存储器13和视频播放组件14。当车载的摄像组件11拍摄视频时,可以将拍摄到的视频流传输给处理器12,处理器12将视频流转换成至少一独立编码或解码的第一数据包。随后,处理器12根据上报周期将多个第一数据包合成第一视频片段保存在存储器13中。当需要进行播放时,视频播放组件14可以从存储器13中提取第一视频片段进行播放。
其中,本申请实施例对于摄像组件11的类型不做限制,示例性的,可以为行车记录仪、摄像机等。本申请实施例对于存储器13的类型也不做限制,示例性的,可以为硬盘、SD卡等。
示例性的,处理器12可以为车载终端的处理器。视频播放组件14,可以为车载终端上的视频播放模块、车辆上的手机等。
可以理解,本申请实施例提供的视频处理方法的执行主体为视频处理装置,该视频处理装置可以由任意的软件和/或硬件实现,可以是车载终端的部分或全部,例如可以是车载终端中的处理器。
下面以集成或安装有相关执行代码的车载终端为例,以具体地实施例对本申请实施例的技术方案进行详细说明。下面这几个具体的实施例可以相互结合,对于相同或相似的概念或过程可能在某些实施例不再赘述。
图3为本申请实施例提供的一种视频处理方法的流程示意图,本实施例涉及的是车载终端如何将接收到的视频流进行保存的具体过程。如图3所示,该视频处理方法,包括:
S201、车载终端接收摄像组件采集的视频流。
本步骤中,摄像组件在采集到视频后,可以将视频流传输给车载终端。
其中，本申请实施例对于摄像组件的类型不做限制，例如可以为行车记录仪、摄像机等。本申请实施例对于摄像组件的数量不做限制，可以为一个，也可以为多个，相应的，车载终端可以接收一个摄像组件发送的视频流，也可以接收多个摄像组件发送的视频流。
本申请实施例对于视频流的类型也不做限制,可以根据传输协议具体设置,示例性的,视频流可以为实时流传输协议(real time streaming protocol,RTSP)流。RTSP是传输控制协议/网际协议(transmission control protocol/internet protocol,TCP/IP)体系中的一个应用层协议,RTSP流中的打包格式可以为多种,示例性的,可以为传输流(Transport stream,TS)格式,可以为基本码流(Elementary Stream,ES)格式,还可以为裸码流格式。
其中,裸码流可以编码为ES流,ES流可以打包为TS流。裸码流是未经过编码的数据流,裸码流中同时包含有音频数据和视频数据。ES流为只包含一种内容的数据流,由若干个ES包组成,例如只包含视频数据的ES流或只包含音频数据的ES流。在对裸码流进行编码时,可以首先将视频数据和音频数据进行划分,将裸码流编码为只包含视频数据的ES流和只包含音频数据的ES流。ES流中的ES包可以进一步封装为TS包,从而组成TS流,TS包可独立编码或解码。
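The raw-stream → ES → TS layering described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: a real TS header carries a PID, continuity counter, and adaptation field, which are reduced here to the sync byte plus placeholder bytes.

```python
# Each TS packet is a fixed 188 bytes and can be decoded independently.
TS_PACKET_SIZE = 188
TS_HEADER_SIZE = 4
SYNC_BYTE = 0x47  # every TS packet starts with the sync byte 0x47

def es_to_ts_packets(es_payload: bytes) -> list:
    """Split one ES payload into fixed-size TS packets, padding the last one."""
    payload_per_packet = TS_PACKET_SIZE - TS_HEADER_SIZE
    packets = []
    for i in range(0, len(es_payload), payload_per_packet):
        chunk = es_payload[i:i + payload_per_packet]
        header = bytes([SYNC_BYTE, 0x00, 0x00, 0x00])  # simplified 4-byte header
        stuffing = bytes(payload_per_packet - len(chunk))  # zero padding
        packets.append(header + chunk + stuffing)
    return packets
```

Because every packet is a self-contained 188-byte unit, any saved run of packets remains decodable even if recording stops abruptly, which is the property the method relies on.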
在一些实施例中,上述视频流还包括有协议头,终端设备在处理视频流时需要先去除视频流中的协议头。示例性的,若视频流为RTSP流,则相应的,视频流包含有RTSP头。
在一些实施例中,若视频流为RTSP流,则相应的,车载终端可以扩展RTSP指令,扩展的RTSP指令用于增加视频流类型查询和设置能力。车载终端在接收到视频流后,可以基于RTSP指令确定视频流的打包格式。
S202、车载终端将视频流转换为至少一第一数据包,第一数据包独立编码或解码。
在本步骤中,车载终端在接收到视频流后,需要首先去除视频流的协议头,随后,再根据视频流的打包类型将视频流转化为至少一第一数据包。其中,车载终端可以采用上述扩展的RTSP指令查询视频流的打包格式。
其中,第一数据包可以为TS包。
示例性的,图4为本申请实施例提供的一种视频流存储示意图。如图4所示,摄像组件获取视频采集数据后,向车载终端的批量数据收集器(Manufacturing Data Collection,MDC)发送视频流,MDC将视频流发送给车载终端的解析器,以使解析器去除视频流的RTSP头。其中,摄像头采集的视频流可以为TS流、ES流或裸码流,具体视频流格式可由RTSP中指定的打包格式确定。
示例性的,若视频流为TS流,则解析器去除RTSP头后可以获取至少一TS包,随后,解析器将至少一TS包发送给车载终端的输出器。若视频流为ES流,则解析器去除RTSP头后可以获取至少一ES包,随后,解析器将至少一ES包发送给车载终端的打包器以使打包器将至少一ES包封装为至少一TS包后发送给输出器。若视频流为裸码流,则解析器去除RTSP头后可以将裸码流发送给车载终端的编码器进行编码,获取至少一ES包,随后,编码器将至少一ES包发送给车载终端的打包器,以使打包器将至少一ES包封装为至少一TS包后发送给输出器。
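The three parser paths in the example above (TS pass-through, ES repacking, raw-stream encoding) amount to a dispatch on the packing format. The sketch below is illustrative only; `wrap_es_in_ts` and `encode_raw_to_es` are hypothetical stand-ins for the real packer and encoder.

```python
def wrap_es_in_ts(es_packet: bytes) -> bytes:
    # Stand-in for real TS encapsulation: prepend a simplified 4-byte header.
    return bytes([0x47, 0x00, 0x00, 0x00]) + es_packet

def encode_raw_to_es(raw_stream: bytes) -> list:
    # Stand-in for the encoder: one ES packet per fixed-size chunk.
    return [raw_stream[i:i + 184] for i in range(0, len(raw_stream), 184)]

def to_ts_packets(packing_format: str, payload):
    """Route a de-headered RTSP payload to TS packets by its packing format."""
    if packing_format == "TS":
        return payload                      # already TS packets: pass through
    if packing_format == "ES":
        return [wrap_es_in_ts(p) for p in payload]
    if packing_format == "RAW":
        return [wrap_es_in_ts(p) for p in encode_raw_to_es(payload)]
    raise ValueError("unknown packing format: " + packing_format)
```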
S203、车载终端将第一上报周期内获取的至少一第一数据包合成为第一视频片段,并保存第一视频片段。
在本步骤中，车载终端将视频流转换为至少一第一数据包之后，可以将第一上报周期内获取的至少一第一数据包合成为第一视频片段，并保存第一视频片段。其中，第一视频片段可以为TS视频片段。
上述上报周期可以为1秒,可以为0.5秒等,本申请实施例对于第一上报周期的时长不做限制,上述上报周期可以小于或等于最大允许丢失视频时长,其中,最大允许丢失视频时长为发生故障时用户允许丢失的摄像组件拍摄视频的最长时间。示例性的,若最大允许丢失视频时长为1秒,则相应的,上报周期可以为1秒,也可以为0.5秒。
在一些实施例中，若上报周期为0.5秒，则相应的，车载终端每隔0.5秒将该上报周期内缓存的至少一第一数据包合成为第一视频片段，并将第一视频片段输出到存储器。由于存储器每隔0.5秒存储一次视频片段，即使发生断电故障，最多只丢失0.5秒的视频数据。
本申请中,车载终端可以根据视频的分辨率和第一上报周期的时长,确定第一上报周期内第一数据包的数量。示例性的,若第一上报周期是0.5秒,视频的分辨率为1080P,按照1080P计算0.5秒约产生1MB的第一数据包缓存。当第一数据包为TS包时,由于每个TS包固定包含有188字节,则第一上报周期可产生5577个TS包。随后,车载终端的输出器可以将5577个TS包合成的第一视频片段输出给存储器。
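The packet-count arithmetic in this paragraph can be reproduced directly: a 0.5-second report period at 1080p buffers roughly 1 MiB, and each TS packet holds a fixed 188 bytes.

```python
TS_PACKET_SIZE = 188  # fixed TS packet size in bytes

def ts_packets_per_period(buffered_bytes: int) -> int:
    """Whole TS packets produced from one report period's buffered data."""
    return buffered_bytes // TS_PACKET_SIZE

# The document's figure of 5577 packets comes from flooring 1 MiB / 188.
print(ts_packets_per_period(1024 * 1024))  # prints 5577
```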
需要说明的是,本申请实施例对于如何将至少一第一数据包合成第一视频片段不做限制,可以简单地将每个数据包按照时间顺序进行首尾相接,合成第一视频片段。
下面参见图4,以一个上报周期内有6个TS包为例,对如何存储视频流进行说明。在将视频流转换为至少一TS包后,可以将至少一TS包发送给输出器。在输出器中,一个上报周期内等待写出TS包队列,并将TS包合成为第一视频片段。若一个上报周期内有6个TS包,则将6个TS包首尾相接组成第一视频片段。随后,输出器将第一视频片段输出至存储器。
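The outputter behaviour described above — queue TS packets for one report period, then write them out head-to-tail as one segment — can be sketched as follows (names are illustrative; `segments` stands in for the storage device).

```python
class SegmentWriter:
    """Buffer TS packets for one report period, then flush them as one segment."""

    def __init__(self):
        self._buffer = []
        self.segments = []  # stand-in for the storage device

    def on_packet(self, ts_packet: bytes):
        self._buffer.append(ts_packet)

    def flush_period(self):
        # Called once per report period, so at most one period of video
        # can be lost on a sudden power failure.
        if self._buffer:
            self.segments.append(b"".join(self._buffer))
            self._buffer.clear()
```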
图5为本申请实施例提供的一种视频处理方法的交互示意图。如图5所示，以视频流为TS流为例，对采集到的视频的处理方式进行说明。在车机摄像头采集到视频流后，车机摄像头将视频流发送给车载终端的MDC，随后，MDC将视频流发送给车载终端的解析器，以使解析器去除RTSP头，获取至少一TS包。解析器将至少一TS包发送给车载终端的输出器，以使输出器将第一上报周期内的至少一TS包组成第一视频片段后，将第一视频片段发送给SD卡保存。当用户需要播放第一视频片段时，可以由视频播放组件从SD卡中提取第一视频片段进行播放。此外，在用户需要通过行车记录仪或视频播放组件进行视频预览时，解析器可以将至少一TS包转化为可以直接进行预览播放的裸码流，并将裸码流发送给行车记录仪或视频播放组件进行视频预览。
本申请实施例提供的视频处理方法,通过接收摄像组件采集的视频流,并将视频流转换为至少一第一数据包,第一数据包可以独立编码或解码。随后,再将第一上报周期内获取的第一数据包作为第一视频片段保存。通过该方式,由于将采集到的视频流转化为可以独立编码或解码的至少一第一数据包,当视频录制过程中出现故障时,保存的第一数据包组成的第一视频片段也可以独自完成视频的解码,从而无需在录制视频时写入索引,节省了录制视频占用的存储空间,从而可以保存更长时间的视频。
基于此,本申请中存储视频时无索引空间浪费,能保存更长时间、更高帧率和更高分辨率的视频,能提供更灵活的保存策略。并且,通过视频流格式自适应转换,对于不同封装格式的数据包均可以转换成可自动编解码的视频数据进行保存。
在上述实施例的基础上,车载终端接收的摄像组件发送的视频流可以为多种不同形式, 在一些实施例中,视频流包含有TS包,在一些实施例中,视频流包含有ES包,在另一些实施例中,视频流可以为裸码流。针对不同打包格式的视频流,可以采用不同方式将视频流转换为至少一第一数据包,再将至少一第一数据包组合成至少一第一视频片段。
下面对视频流包含有TS包时如何将视频流转换为至少一第一视频片段进行说明。图6为本申请实施例提供的另一种视频处理方法的流程示意图,图7为本申请实施例提供的另一种视频处理方法的交互示意图,图7与图6中视频处理方法相对应。在上述实施例的基础上,该视频处理方法,包括:
S301、车载终端接收摄像组件采集的视频流。
本实施例中,步骤S301的具体实现过程和实现原理与图3中步骤S201的类似,此处不再赘述。具体的,车机摄像头采集到视频流后,车机摄像头将视频流发送给车载终端的MDC,随后,MDC将视频流发送给车载终端的解析器。
S302、车载终端去除视频流中的协议头,从视频流中获取至少一TS包。
在本步骤中,车载终端接收摄像组件采集的视频流之后,由于视频流包含有TS包,车载终端在去除视频流中的协议头之后,可以直接从视频流中获取至少一TS包。
具体的,如图7所示,以视频流为TS流为例,解析器获取到包含有TS包的视频流后,可以去除视频流的RTSP头,获取至少一TS包。随后,解析器将至少一TS包发送给车载终端的输出器。
S303、车载终端将第一上报周期内获取的至少一TS包合成为第一视频片段,并将第一视频片段保存。
本实施例中，步骤S303的具体实现过程和实现原理与图3中步骤S203的类似，此处不再赘述。具体的，如图7所示，输出器将第一上报周期内的至少一TS包合成为TS视频片段后，将TS视频片段保存在存储器中。
在上述实施例中,车载终端接收到摄像组件采集到的包含有TS包的视频流后,可以将视频流转换为至少一第一视频片段保存到存储器中。此外,车载终端在接收到摄像组件采集的包含有TS包的视频流后,还可以将视频流处理后发送给视频播放组件直接进行预览播放。参见图8,该方法还包括:
S401、车载终端接收摄像组件采集的视频流。
S402、车载终端去除视频流中的协议头,从视频流中获取至少一TS包。
本实施例中,步骤S401-S402的具体实现过程和实现原理与图6中步骤S301-S302的类似,此处不再赘述。
具体的,参见图7,在解析器从视频流中获取至少一TS包后,可以将至少一TS包发送给车载终端的解包器。
S403、车载终端对至少一个TS包进行解包,获取至少一基本码流ES包。
在本步骤中，车载终端在去除视频流中的协议头从视频流中获取至少一TS包之后，可以将至少一TS包转换成至少一ES包。具体的，参见图7，在解包器获得解析器发送的至少一TS包后，可以对至少一个TS包进行解包，从而获取至少一ES包。其中，每个TS包对应有多个ES包。然后解包器可将解包得到的ES包发送给解码器。本申请实施例对于解包器如何对TS包进行解包不做限制，可以根据现有的TS包解包方式进行，从而获取至少一ES包。
S404、车载终端对至少一个ES包进行解码,获取视频流对应的待预览的裸码流。
具体的,参见图7,在解码器获取至少一ES包后,可以对至少一个ES包进行解码,获取视频流对应的待预览的裸码流。
其中，本申请实施例对于解码器如何对ES包进行解码不做限制，可以根据现有的ES包解码方式进行，从而获取视频流对应的待预览的裸码流。
S405、车载终端将待预览的裸码流发送给视频播放组件。
具体的,参见图7,在解码器获取视频流对应的待预览的裸码流之后,解码器可以将待预览的裸码流发送给视频播放组件。随后,视频播放组件的渲染器可以基于裸码流渲染视频进行预览。
本申请实施例提供的视频处理方法,通过接收摄像组件采集的视频流,去除视频流中的协议头,从视频流中获取至少一TS包,并将第一上报周期内获取的第一数据包作为第一视频片段保存。通过该方式,由于将采集到的视频流转化为可以独立编码或解码的至少一第一数据包,当视频录制过程中出现故障时,保存的第一数据包组成的第一视频片段也可以独自完成视频的解码,从而无需在录制视频时写入索引,节省了录制视频占用的存储空间,从而可以保存更长时间的视频。
下面对视频流包含有ES包时如何将视频流转换为至少一第一视频片段进行说明。图9为本申请实施例提供的又一种视频处理方法的流程示意图,图10为本申请实施例提供的再一种视频处理方法的交互示意图,图10与图9中视频处理方法对应,该视频处理方法,包括:
S501、车载终端接收摄像组件采集的视频流。
本实施例中，步骤S501的具体实现过程和实现原理与图3中步骤S201的类似，此处不再赘述。具体的，车机摄像头采集到视频流后，车机摄像头将视频流发送给车载终端的MDC，随后，MDC将视频流发送给车载终端的解析器。
S502、车载终端去除视频流中的协议头,从视频流中获取至少一ES包。
在本步骤中,车载终端接收摄像组件采集的视频流之后,由于视频流包含有ES包,车载终端在去除视频流中的协议头之后,可以从视频流中获取至少一ES包。
具体的,如图10所示,以视频流为ES流为例,车载终端的解析器获取到包含有ES包的视频流后,可以去除视频流的RTSP头,获取至少一ES包。随后,解析器将至少一ES包发送给车载终端的打包器。
S503、车载终端将至少一ES包封装为至少一TS包。
在本步骤中,在去除视频流中的协议头从视频流中获取至少一ES包之后,车载终端可以将至少一ES包封装为至少一TS包。
具体的,如图10所示,打包器在接收到至少一ES包之后,可以将至少一ES包封装为至少一TS包,再将至少一TS包发送给车载终端的输出器。
S504、车载终端将第一上报周期内获取的至少一TS包合成为第一视频片段,并将第一视频片段保存。
本实施例中,步骤S504的具体实现过程和实现原理与图3中步骤S203的类似,此处不再赘述。具体的,如图10所示,输出器将第一上报周期内的至少一TS包合成为TS视频片段后,将TS视频片段保存在存储器中。
在上述实施例的基础上,车载终端接收到摄像组件采集到的包含有ES包的视频流后,可以将视频流转换为至少一第一视频片段保存到存储器中。此外,车载终端在接收到摄像组件采集的视频流后,还可以将视频流处理后发送给视频播放组件直接进行预览播放。参见图11,该方法还包括:
S601、车载终端接收摄像组件采集的视频流。
S602、车载终端去除视频流中的协议头,从视频流中获取至少一ES包。
本实施例中，步骤S601-S602的具体实现过程和实现原理与图9中步骤S501-S502的类似，此处不再赘述。
具体的,参见图10,在解析器从视频流中获取至少一ES包后,可以将至少一ES包发送给车载终端的解码器。
S603、车载终端对至少一个ES包进行解码,获取视频流对应的待预览的裸码流。
具体的,参见图10,在解码器接收到解析器发送的至少一ES包后,可以对至少一ES包进行解码,从而获取视频流对应的待预览的裸码流。
S604、车载终端将待预览的裸码流传输给视频播放组件。
本实施例中,步骤S603-S604的具体实现过程和实现原理与图7中步骤S404-S405的类似,此处不再赘述。
具体的,参见图10,在解码器获取视频流对应的待预览的裸码流之后,解码器可以将待预览的裸码流发送给视频播放组件。随后,视频播放组件的渲染器可以基于裸码流渲染视频进行预览。
本申请实施例提供的视频处理方法,通过接收摄像组件采集的视频流,去除视频流中的协议头,从视频流中获取至少一ES包,将至少一ES包封装为至少一TS包。通过该方式,由于将采集到的视频流转化为可以独立编码或解码的至少一第一数据包,当视频录制过程中出现故障时,保存的第一数据包组成的第一视频片段也可以独自完成视频的解码,从而无需在录制视频时写入索引,节省了录制视频占用的存储空间,从而可以保存更长时间的视频。
下面对视频流为裸码流时如何将视频流转换为至少一第一视频片段进行说明。图12为本申请实施例提供的又一种视频处理方法的流程示意图,图13为本申请实施例提供的另一种视频处理方法的交互示意图,图13与图12中视频处理方法对应。该视频处理方法,包括:
S701、车载终端接收摄像组件采集的视频流。
本实施例中,步骤S701的具体实现过程和实现原理与图3中步骤S201的类似,此处不再赘述。具体的,车机摄像头采集到视频流后,车机摄像头将视频流发送给车载终端的MDC,随后,MDC将视频流发送给车载终端的解析器。
S702、车载终端去除视频流中的协议头,从视频流中获取视频流对应的裸码流。
具体的,如图13所示,以视频流为裸码流为例,解析器获取到视频流后,可以去除视频流的RTSP头,获取裸码流。随后,解析器将裸码流发送给车载终端的编码器。
S703、车载终端将裸码流进行编码,生成至少一ES包。
本申请实施例对于如何将裸码流进行编码生成ES包不做限制，可以采用现有的裸码流编码方式进行。具体的，如图13所示，编码器对裸码流进行编码，生成至少一ES包，并将至少一ES包发送给车载终端的打包器。
S704、车载终端将至少一ES包封装为至少一TS包。
具体的，如图13所示，打包器在将至少一ES包封装为至少一TS包后，再将至少一TS包发送给车载终端的输出器。
S705、车载终端将第一上报周期内获取的至少一TS包合成为第一视频片段,并将第一视频片段保存。
具体的，如图13所示，输出器将第一上报周期内的至少一TS包合成为TS视频片段后，将TS视频片段保存在存储器中。
在上述实施例的基础上,车载终端接收到摄像组件采集到的裸码流后,可以将视频流转换为至少一第一视频片段保存到存储器中。此外,车载终端在接收到摄像组件采集的视频流后,还可以将视频流处理后发送给视频播放组件直接进行预览播放。参见图14,该方法还包括:
S801、车载终端接收摄像组件采集的视频流。
S802、车载终端去除视频流中的协议头,从视频流中获取视频流对应的裸码流。
S803、车载终端将视频流对应的裸码流传输给视频播放组件。
本实施例中,步骤S803的具体实现过程和实现原理与图7中步骤S405的类似,此处不再赘述。
具体的,参见图13,车载终端中的解析器可以去除视频流中的协议头,从视频流中获取视频流对应的裸码流,并将待预览的裸码流发送给视频播放组件。随后,视频播放组件的渲染器可以基于裸码流渲染视频进行预览。
本申请实施例提供的视频处理方法,通过接收摄像组件采集的视频流,去除视频流中的协议头,从视频流中获取视频流对应的裸码流,随后,将裸码流进行编码,生成至少一ES包,并将至少一ES包封装为至少一TS包。通过该方式,由于将采集到的视频流转化为可以独立编码或解码的至少一第一数据包,当视频录制过程中出现故障时,保存的第一数据包组成的第一视频片段也可以独自完成视频的解码,从而无需在录制视频时写入索引,节省了录制视频占用的存储空间,从而可以保存更长时间的视频。
在上述实施例的基础上,第一视频片段在进行存储时还可以合并成视频文件后再进行存储。下面对车载终端如何将多个第一视频片段组合成一个视频文件进行说明。图15为本申请实施例提供的又一种视频处理方法的流程示意图,在上述实施例的基础上,该视频处理方法,包括:
S901、车载终端接收摄像组件采集的视频流。
S902、车载终端将视频流转换为至少一第一数据包,第一数据包独立编码或解码。
S903、车载终端将第一上报周期内获取的至少一第一数据包合成为第一视频片段,并保存第一视频片段。
本实施例中,步骤S901-S903的具体实现过程和实现原理与图3中步骤S201-S203的类似,此处不再赘述。
S904、车载终端根据视频文件的标准数据量,将至少一个视频片段合并成至少一个视频文件。
在本步骤中，车载终端将视频流转换为至少一第一数据包后，可以根据视频文件的标准数据量，将至少一个视频片段合并成至少一个视频文件。
其中,第一数据包可以为TS包,由于每个TS包都自带元数据,TS包无附加元数据信息,合并时不需要解析数据包,通过文件追加方式可以简单的首尾相接快速将多个视频片段合并为一个视频文件。
本申请实施例对于视频文件的标准数据量不做限制,可以根据实际情况具体设置。
示例性的,若每个视频文件的标准数据量为10M,一个视频片段为1M,则相应的可以将10个视频片段合成为一个视频文件。
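Because TS packets are self-describing, the merge step described above needs no parsing: segments are simply appended head-to-tail until the target file size is reached. A minimal sketch, operating on in-memory segments for brevity:

```python
def merge_segments(segments: list, target_size: int) -> bytes:
    """Append TS segments in order until the merged file reaches target_size."""
    merged = bytearray()
    for segment in segments:
        merged += segment          # plain byte append, no packet parsing
        if len(merged) >= target_size:
            break
    return bytes(merged)
```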
图16为本申请实施例提供的一种视频片段合成示意图,具体的,参见图16,车载终端的合成器从车载终端的输出器接收TS视频片段Video1、Video2和Video3。若TS视频片段Video1、Video2和Video3总的视频数据量符合预设的标准数据量,则合成器将TS视频片段Video1、Video2和Video3合并为一个视频文件,并将视频文件发送给车载终端的转换器。
S905、车载终端根据第一视频格式,保存至少一个视频文件。
在本步骤中,车载终端可以预设有视频文件的视频格式。示例性的,TS视频文件本身可以播放,也可以通过转换器将TS视频文件转换为更通用的MP4文件。若第一视频格式为TS,则相应的,车载终端输出的TS视频文件不进行转换直接进行保存,若第一视频格式为MP4,则相应的,车载终端输出的TS视频文件需要转换为MP4格式保存。
具体的,如图16所示,车载终端的转换器可以根据用户设置的视频格式对合成器发送的视频文件的格式进行转换,并将转换后的视频文件发送给存储器进行保存。若预设的视频格式为MP4格式,则将视频文件的格式转换为MP4格式。其中,MP4格式可以更适用于车载终端的播放格式。
本申请实施例提供的视频处理方法,根据视频文件的标准数据量,将至少一个视频片段合并成至少一个视频文件,并根据第一视频格式,保存至少一个视频文件。通过该方式,利用TS的自带元数据的特性,可通过文件追加方式合并TS视频片段,实现快速视频恢复,此外,还可以根据用户的需求,将视频流封装为多种视频格式。
下面对紧急情况下如何保存视频进行说明。图17为本申请实施例提供的又一种视频处理方法的流程示意图,在上述实施例的基础上,该视频处理方法,包括:
S1101、车载终端获取摄像组件所在车辆的速度。
在本步骤中,车载终端可以实时检测摄像组件所在车辆的速度。
本申请实施例对于如何获取车辆的速度不做限制,在一些实施例中,摄像组件所在的车辆可以设置有速度传感器,通过速度传感器可以确定车辆的实时速度。在另一些实施例中,车载终端可以获取车辆的实时卫星定位,从而计算出车辆的速度。
S1102、若车辆的速度在第一时间段内的降幅超过第一阈值,车载终端则将第二时间段内的第一视频片段保存至独立存储区域。
其中,第一时间段的起始时间点为车辆的行车踏板被踩下的时间点,第一时间段的时长为第一预设时长;第二时间段的中间时间点为车辆的行车踏板被踩下的时间点,第二时间段的时长为第二预设时长。
在本步骤中,车辆的速度在第一时间段内的降幅度超过第一阈值,则可以确定车辆发生事故,此时可以将第二时间段内的第一视频片段保存至独立存储区域。需要说明的是, 本申请实施例对于第一预设时长和第二预设时长不做限制,可以根据实际情况具体设置。图18为本申请实施例提供的一种第一时间段和第二时间段的示意图。其中的第一预设时长可以为150ms,第二预设时长可以为300ms。如图18所示,对于第一时间段,可以将车辆的行车踏板被踩下的时间点t1作为第一时间段的起始时间点,将t1之后与t1相差150ms的时间点t2作为第一时间段的结束时间点,则t1至t2之间的时间段则为第一时间段。对于第二时间段,可以将驾驶员踩下车辆的行车踏板的时间点t1作为第二时间段的中间时间点,将t1之前与t1相差150ms的时间点t0作为第二时间段的起始时间点,将t1之后与t1相差150ms的时间点t2作为第一时间段的结束时间点,则t0至t2之间的时间段则为第二时间段。
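The two windows in the example above, both anchored on the pedal-press time t1, can be computed as follows (times in milliseconds; 150 ms and 300 ms are the document's example preset durations).

```python
def first_window(t1_ms: int, first_preset_ms: int = 150) -> tuple:
    """First period: starts at the pedal-press time, lasts first_preset_ms."""
    return (t1_ms, t1_ms + first_preset_ms)

def second_window(t1_ms: int, second_preset_ms: int = 300) -> tuple:
    """Second period: centred on the pedal-press time, lasts second_preset_ms."""
    half = second_preset_ms // 2
    return (t1_ms - half, t1_ms + half)
```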
本申请实施例对于车载终端如何确定第二时间段内的第一视频片段也不做限制,在一种可选的实施方式中,车载终端可以根据第一视频片段的时间戳定位第二时间段内的第一视频片段。具体的,由于第二时间段可能会对应多个第一视频片段,每个第一视频片段都有相应时间戳与其相对应。因此,车载终端可以使用第一视频片段的时间戳来定位第一视频片段。
其中,第一视频片段的时间戳可以用于标识第一视频片段录制开始时刻的时间,并由车载终端记录和生成。示例性的,如图4所示,第一视频段是由多个TS包组成的,第一视频片段录制开始时刻的时间可以是第一个TS包录制开始的时间。后续,车载终端在保存该第一视频片段时,可以将第一视频片段与该第一视频片段的时间戳一同保存。当车载终端发生事故时,可通过第一视频片段的时间戳,快速定位刹车前后15秒时间段内的关键视频。
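Since every saved segment carries the timestamp of its recording start, locating the second period's segments reduces to a range filter. In this sketch, segments are modelled as (start_ms, data) pairs for illustration only.

```python
def locate_segments(segments, window):
    """Return the segments whose recording-start timestamp falls in window."""
    start_ms, end_ms = window
    return [seg for seg in segments if start_ms <= seg[0] <= end_ms]
```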
在一种可能的实现方式中,在步骤1102中,若车辆的速度在第一时间段内的降幅超过第一阈值,车载终端除了将第二时间段内的第一视频片段保存至独立存储区域,还可以将第一视频片段的时间戳也一同保存至独立存储区域。
图19为本申请实施例提供的一种车辆突发事件时视频存储的示意图,如图19所示,车辆上的传感器检测到车辆的速度后,可以将车辆的速度传输给车载终端中的整车控制器(VCU),VCU检测到车辆的速度在第一时间段内的降幅超过第一阈值,则判断车辆发生紧急事件。若发送紧急事件,VCU可以向车载终端的扫描器发送指示信息,该指示信息用于指示扫描器从存储器中过滤出第二时间段内保存的第一视频片段。若扫描器接收到指示信息时存储器中保存有第一视频片段Video1、Video2和Video3,其中的Video1和Video2是在第二时间段内保存的,则扫描器可以通过扫描第一视频片的时间戳,过滤出Video1和Video2。随后,扫描器将Video1和Video2发送给车载终端的合成器,由合成器将Video1和Video2合成为视频文件,并发送给车载终端的转换器。最后,转化器将视频文件转换为MP4格式并发送至存储器进行独立存储。
示例性的,VCU可以通过判断刹车幅度识别并上报急刹车事件,若VCU检测到第一时间段150ms内速度下降幅度大于25km/h,则可以判断发生事故,此时VCU定义和上报紧急事件,向扫描器发送指示信息,来指示扫描器通过时间戳快速定位到刹车前后15秒时间段内的关键视频,并将关键视频移动到专用存储区域。
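The VCU's brake check in this example reduces to comparing the speed drop over the first window against a threshold (25 km/h over 150 ms in the document's example). A sketch under those assumptions, not the VCU's actual logic:

```python
def is_emergency_brake(speed_at_press_kmh: float,
                       speed_after_window_kmh: float,
                       threshold_kmh: float = 25.0) -> bool:
    """True if speed fell by more than threshold_kmh within the first window."""
    return (speed_at_press_kmh - speed_after_window_kmh) > threshold_kmh
```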
本申请实施例提供的视频处理方法,车载终端获取摄像组件所在车辆的速度,并根据第一视频片段的时间戳定位第二时间段内的第一视频片段,若车辆的速度在第一时间段内 的降幅度超过第一阈值,则将第二时间段内的第一视频片段保存至独立存储区域。通过该方式,在事故发生时可以通时间戳快速将关键事件视频移动至独立存储区域,在事故后可以在独立存储区域进行关键事件视频的快速提取,从而更快获取更有价值的视频。
本领域普通技术人员可以理解:实现上述方法实施例的全部或部分步骤可以通过程序信息相关的硬件来完成,前述的程序可以存储于一计算机可读取存储介质中,该程序在执行时,执行包括上述方法实施例的步骤;而前述的存储介质包括:ROM、RAM、磁碟或者光盘等各种可以存储程序代码的介质。
图20为本申请实施例提供的一种视频处理装置的结构示意图。该视频处理装置可以通过软件、硬件或者两者的结合实现,以执行上述视频处理方法。如图20所示,该视频处理装置200包括:接收模块201和处理模块202。
接收模块201,用于接收摄像组件采集的视频流;
处理模块202,用于将视频流转换为至少一第一数据包,第一数据包独立编码或解码;将第一上报周期内获取的第一数据包合成为第一视频片段,并将第一视频片段保存,第一上报周期小于或等于最大允许丢失视频时长,最大允许丢失视频时长为发生故障时用户允许丢失的摄像组件拍摄的视频的最长时间。
一种可选的实施方式中,第一数据包为传输流TS包。
一种可选的实施方式中,若视频流包含有TS包,处理模块202具体用于去除视频流中的协议头,从视频流中获取至少一TS包。
一种可选的实施方式中,若视频流包含有ES包,处理模块202具体用于去除视频流中的协议头,从视频流中获取至少一ES包;将至少一ES包封装为至少一TS包。
一种可选的实施方式中,若视频流为裸码流,处理模块202,具体用于去除视频流中的协议头,从视频流中获取视频流对应的裸码流;将裸码流进行编码,生成至少一ES包;将至少一ES包封装为至少一TS包。
一种可选的实施方式中,协议头为实时流传输协议RTSP头。
一种可选的实施方式中,每个第一数据包的数据量相同。
一种可选的实施方式中,处理模块202,还用于根据视频文件的标准数据量,将至少一第一视频片段合并成至少一视频文件;根据第一视频格式,保存至少一视频文件。
一种可选的实施方式中,接收模块201,还用于获取摄像组件所在车辆的速度;
处理模块202，还用于若车辆的速度在第一时间段内的降幅度超过第一阈值，则将第二时间段内的第一视频片段保存至独立存储区域；
其中,第一时间段的起始时间点为车辆的行车踏板被踩下的时间点,第一时间段的时长为第一预设时长;第二时间段的中间时间点为车辆的行车踏板被踩下的时间点,第二时间段的时长为第二预设时长。
一种可选的实施方式中,处理模块202,还用于根据第一视频片段的时间戳定位第二时间段内的第一视频片段,第一视频片段的时间戳用于标识第一视频片段录制开始时刻的时间。
本申请实施例提供的视频处理装置,可以执行上述方法实施例中视频处理方法的动作,其实现原理和技术效果类似,在此不再赘述。
图21为本申请实施例提供的一种车载终端的结构示意图。如图21所示,该车载终端 可以包括:处理器211(例如CPU)、存储器212、接收器213和发送器214;接收器213和发送器214耦合至处理器211,处理器211控制接收器213的接收动作、处理器211控制发送器214的发送动作。存储器212可能包含高速RAM存储器,也可能还包括非易失性存储器NVM,例如至少一个磁盘存储器,存储器212中可以存储各种信息,以用于完成各种处理功能以及实现本申请实施例的方法步骤。可选的,本申请实施例涉及的车载终端还可以包括:电源215、通信总线216以及通信端口219。接收器213和发送器214可以集成在车载终端的收发信机中,也可以为车载终端上独立的收发天线。通信总线216用于实现元件之间的通信连接。上述通信端口219用于实现车载终端与其他外设之间进行连接通信。
在本申请实施例中,上述存储器212用于存储计算机可执行程序代码,程序代码包括信息;当处理器211执行信息时,信息使处理器211执行上述方法实施例中车载终端的处理动作,使发送器214执行上述方法实施例中车载终端的发送动作,使接收器213执行上述方法实施例中车载终端的接收动作,其实现原理和技术效果类似,在此不再赘述。
本申请实施例还提供了一种芯片,包括处理器和接口。其中接口用于输入输出处理器所处理的数据或指令。处理器用于执行以上方法实施例中提供的方法。该芯片可以应用于车载终端中。
本申请实施例还提供一种程序,该程序在被处理器执行时用于执行以上方法实施例提供的方法。
本申请实施例还提供一种程序产品,例如计算机可读存储介质,该程序产品中存储有指令,当其在计算机上运行时,使得计算机执行上述方法实施例提供的方法。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行计算机程序指令时,全部或部分地产生按照本发明实施例的流程或功能。计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如固态硬盘Solid State Disk(SSD))等。

Claims (21)

  1. 一种视频处理方法,其特征在于,包括:
    接收摄像组件采集的视频流;
    将所述视频流转换为至少一第一数据包,所述第一数据包独立编码或解码;
    将第一上报周期内获取的第一数据包合成为第一视频片段,并将所述第一视频片段保存,所述第一上报周期小于或等于最大允许丢失视频时长,所述最大允许丢失视频时长为发生故障时用户允许丢失的所述摄像组件拍摄的视频的最长时间。
  2. 根据权利要求1所述的方法,其特征在于,所述第一数据包为传输流TS包。
  3. 根据权利要求2所述的方法,其特征在于,若所述视频流包含有TS包,所述将所述视频流转换为至少一第一数据包,包括:
    去除所述视频流中的协议头,从所述视频流中获取至少一TS包。
  4. 根据权利要求2所述的方法,其特征在于,若所述视频流包含有ES包,所述将所述视频流转换为至少一数据包,包括:
    去除所述视频流中的协议头,从所述视频流中获取至少一ES包;
    将所述至少一ES包封装为至少一TS包。
  5. 根据权利要求2所述的方法,其特征在于,若所述视频流为裸码流,所述将所述视频流转换为至少一第一数据包,包括:
    去除所述视频流中的协议头,从所述视频流中获取所述视频流对应的裸码流;
    将所述裸码流进行编码,生成至少一ES包;
    将所述至少一ES包封装为至少一TS包。
  6. 根据权利要求3-5任一项所述的方法,其特征在于,所述协议头为实时流传输协议RTSP头。
  7. 根据权利要求3-5任一项所述的方法,其特征在于,每个第一数据包的数据量相同。
  8. 根据权利要求2所述的方法,其特征在于,所述方法还包括:
    根据视频文件的标准数据量,将至少一第一视频片段合并成至少一视频文件;
    根据第一视频格式,保存所述至少一视频文件。
  9. 根据权利要求2所述的方法,其特征在于,所述方法还包括:
    获取所述摄像组件所在车辆的速度;
    若所述车辆的速度在第一时间段内的降幅度超过第一阈值,则将第二时间段内的第一视频片段保存至独立存储区域;其中,所述第一时间段的起始时间点为所述车辆的行车踏板被踩下的时间点,所述第一时间段的时长为第一预设时长;所述第二时间段的中间时间点为所述车辆的行车踏板被踩下的时间点,所述第二时间段的时长为第二预设时长。
  10. 根据权利要求9所述的方法,其特征在于,所述将第二时间段内的第一视频片段保存至独立存储区域之前,所述方法还包括:
    根据所述第一视频片段的时间戳定位所述第二时间段内的第一视频片段,所述第一视频片段的时间戳用于标识所述第一视频片段录制开始时刻的时间。
  11. 一种视频处理装置,其特征在于,包括:
    接收模块,用于接收摄像组件采集的视频流;
    处理模块,用于将所述视频流转换为至少一第一数据包,所述第一数据包独立编码或 解码;将第一上报周期内获取的第一数据包合成为第一视频片段,并将所述第一视频片段保存,所述第一上报周期小于或等于最大允许丢失视频时长,所述最大允许丢失视频时长为发生故障时用户允许丢失的所述摄像组件拍摄的视频的最长时间。
  12. 根据权利要求11所述的装置,其特征在于,所述第一数据包为传输流TS包。
  13. 根据权利要求12所述的装置,其特征在于,若所述视频流包含有TS包,所述处理模块具体用于去除所述视频流中的协议头,从所述视频流中获取至少一TS包。
  14. 根据权利要求12所述的装置,其特征在于,若所述视频流包含有ES包,所述处理模块具体用于去除所述视频流中的协议头,从所述视频流中获取至少一ES包;将所述至少一ES包封装为至少一TS包。
  15. 根据权利要求12所述的装置,其特征在于,若所述视频流为裸码流,所述处理模块,具体用于去除所述视频流中的协议头,从所述视频流中获取所述视频流对应的裸码流;将所述裸码流进行编码,生成至少一ES包;将所述至少一ES包封装为至少一TS包。
  16. 根据权利要求13-15任一项所述的装置,其特征在于,所述协议头为实时流传输协议RTSP头。
  17. 根据权利要求11-15任一项所述的装置,其特征在于,每个第一数据包的数据量相同。
  18. 根据权利要求12所述的装置,其特征在于,所述处理模块,还用于根据视频文件的标准数据量,将至少一第一视频片段合并成至少一视频文件;根据第一视频格式,保存所述至少一视频文件。
  19. 根据权利要求12所述的装置,其特征在于,接收模块,还用于获取所述摄像组件所在车辆的速度;
    处理模块,还用于若所述车辆的速度在第一时间段内的降幅度超过第一阈值,则将第二时间段内的第一视频片段保存至独立存储区域;其中,所述第一时间段的起始时间点为所述车辆的行车踏板被踩下的时间点,所述第一时间段的时长为第一预设时长;所述第二时间段的中间时间点为所述车辆的行车踏板被踩下的时间点,所述第二时间段的时长为第二预设时长。
  20. 根据权利要求19所述的装置,其特征在于,所述处理模块,还用于根据所述第一视频片段的时间戳定位所述第二时间段内的第一视频片段,所述第一视频片段的时间戳用于标识所述第一视频片段录制开始时刻的时间。
  21. 一种程序产品,其特征在于,所述程序产品包括计算机程序,所述计算机程序存储在可读存储介质中,通信装置的至少一个处理器可以从所述可读存储介质读取所述计算机程序,所述至少一个处理器执行所述计算机程序使得通信装置实施如权利要求1-10任意一项所述的方法。
PCT/CN2021/085707 2020-04-24 2021-04-06 视频处理方法及装置 WO2021213181A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP21791660.0A EP4131979A4 (en) 2020-04-24 2021-04-06 VIDEO PROCESSING METHOD AND DEVICE
JP2022564352A JP2023522429A (ja) 2020-04-24 2021-04-06 映像処理方法および装置
US17/970,930 US11856321B1 (en) 2020-04-24 2022-10-21 Video processing method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010331493.2A CN113553467A (zh) 2020-04-24 2020-04-24 视频处理方法及装置
CN202010331493.2 2020-04-24

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/970,930 Continuation US11856321B1 (en) 2020-04-24 2022-10-21 Video processing method and apparatus

Publications (1)

Publication Number Publication Date
WO2021213181A1 true WO2021213181A1 (zh) 2021-10-28

Family

ID=78129581

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/085707 WO2021213181A1 (zh) 2020-04-24 2021-04-06 视频处理方法及装置

Country Status (5)

Country Link
US (1) US11856321B1 (zh)
EP (1) EP4131979A4 (zh)
JP (1) JP2023522429A (zh)
CN (2) CN113553467A (zh)
WO (1) WO2021213181A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103002353A (zh) * 2011-09-16 2013-03-27 杭州海康威视数字技术股份有限公司 对多媒体文件进行封装的方法及装置
CN105049920A (zh) * 2015-07-27 2015-11-11 青岛海信移动通信技术股份有限公司 一种多媒体文件的录制方法和装置
CN105141868A (zh) * 2015-07-28 2015-12-09 北京奇虎科技有限公司 视频数据保护系统及其所涉各端的安全保护、传输方法
CN106060571A (zh) * 2016-05-30 2016-10-26 湖南纽思曼导航定位科技有限公司 一种行车记录仪及视频直播方法

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7295752B1 (en) * 1997-08-14 2007-11-13 Virage, Inc. Video cataloger system with audio track extraction
AU4265101A (en) * 2000-04-05 2001-10-15 Sony United Kingdom Limited Identifying, recording and reproducing information
JP5082209B2 (ja) * 2005-06-27 2012-11-28 株式会社日立製作所 送信装置、受信装置、及び映像信号送受信システム
EP4040784A1 (en) * 2013-11-14 2022-08-10 KSI Data Sciences, Inc. A system and method for managing and analyzing multimedia information
US10067813B2 (en) * 2014-11-21 2018-09-04 Samsung Electronics Co., Ltd. Method of analyzing a fault of an electronic system
CN106899880B (zh) * 2015-12-19 2020-02-18 联芯科技有限公司 将多媒体数据分段保存的方法及系统
US9871994B1 (en) * 2016-01-19 2018-01-16 Gopro, Inc. Apparatus and methods for providing content context using session metadata
CN105872428A (zh) * 2016-04-12 2016-08-17 北京奇虎科技有限公司 视频数据保护方法及装置
CN107566768A (zh) * 2017-07-25 2018-01-09 深圳市沃特沃德股份有限公司 视频录制方法和装置
CN110324549B (zh) * 2018-03-28 2022-05-13 沈阳美行科技股份有限公司 一种录像方法、装置和设备
KR20230017817A (ko) * 2020-05-25 2023-02-06 엘지전자 주식회사 멀티 레이어 기반 영상 코딩 장치 및 방법
WO2021246840A1 (ko) * 2020-06-06 2021-12-09 엘지전자 주식회사 스케일러빌리티를 위한 서브-비트스트림 추출 기반 영상 코딩 장치 및 방법

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103002353A (zh) * 2011-09-16 2013-03-27 杭州海康威视数字技术股份有限公司 对多媒体文件进行封装的方法及装置
CN105049920A (zh) * 2015-07-27 2015-11-11 青岛海信移动通信技术股份有限公司 一种多媒体文件的录制方法和装置
CN105141868A (zh) * 2015-07-28 2015-12-09 北京奇虎科技有限公司 视频数据保护系统及其所涉各端的安全保护、传输方法
CN106060571A (zh) * 2016-05-30 2016-10-26 湖南纽思曼导航定位科技有限公司 一种行车记录仪及视频直播方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4131979A4

Also Published As

Publication number Publication date
EP4131979A1 (en) 2023-02-08
CN113553467A (zh) 2021-10-26
EP4131979A4 (en) 2023-07-19
CN113766160A (zh) 2021-12-07
US11856321B1 (en) 2023-12-26
JP2023522429A (ja) 2023-05-30

Similar Documents

Publication Publication Date Title
JP6920578B2 (ja) 映像ストリーミング装置、映像編集装置および映像配信システム
US10978109B2 (en) Synchronously playing method and device of media file, and storage medium
CN110545491B (zh) 一种媒体文件的网络播放方法、装置及存储介质
CN110545483B (zh) 网页中切换分辨率播放媒体文件的方法、装置及存储介质
CN107093436B (zh) 预录的音视频数据的存储方法及装置、移动终端
JP2004534484A5 (zh)
US7555009B2 (en) Data processing method and apparatus, and data distribution method and information processing apparatus
CN114640886B (zh) 自适应带宽的音视频传输方法、装置、计算机设备及介质
CN1893383A (zh) 提供根据剩余存储器容量的可记录时间的方法及其终端
WO2020093931A1 (zh) 字幕数据处理方法、装置、设备和计算机存储介质
CN113382278B (zh) 视频推送方法、装置、电子设备和可读存储介质
WO2021213181A1 (zh) 视频处理方法及装置
CN112887679A (zh) 无损视频远程采集方法和系统
US20160142461A1 (en) Method and device for transmission of multimedia data
WO2023082880A1 (zh) 一种车辆娱乐信息域控制器与行车记录生成方法
CN109600651B (zh) 文档类直播交互数据和音视频数据同步方法和系统
US20190149793A1 (en) Apparatus and method for recording and storing video
CN109302574B (zh) 一种处理视频流的方法和装置
CN210839882U (zh) 一种低延时音频传输装置
CN114786036B (zh) 自动驾驶车辆的监控方法及装置、存储介质、计算机设备
CN104869357B (zh) 一种传输车载视频数据的方法及系统
US20230308497A1 (en) Streaming media processing method, transmitting device and receiving device
CN114760375B (zh) 多屏车载系统用数据发送方法、装置、传输方法及车辆
CN117294690B (zh) 一种评估QoE的方法及电子设备
KR101964649B1 (ko) 미디어 전송 방법 및 그 장치

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21791660

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022564352

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2021791660

Country of ref document: EP

Effective date: 20221026

NENP Non-entry into the national phase

Ref country code: DE