CN107948571B - Video recording method, device and computer readable storage medium - Google Patents


Info

Publication number: CN107948571B (application published as CN107948571A)
Application number: CN201711447298.0A
Authority: CN (China)
Other languages: Chinese (zh)
Inventor: 刘林汶 (Liu Linwen)
Original and current assignee: Nubia Technology Co Ltd
Application filed by Nubia Technology Co Ltd; priority to CN201711447298.0A
Legal status: Active, granted (the listed status is an assumption, not a legal conclusion)
Prior art keywords: data, image data, reverse order, reverse, video

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/76: Television signal recording
    • H04N 5/91: Television signal processing therefor
    • H04N 5/92: Transformation of the television signal for recording, e.g. modulation, frequency changing; inverse transformation for playback
    • H04N 5/9201: Transformation involving the multiplexing of an additional signal and the video signal
    • H04N 5/9205: Transformation in which the additional signal is at least another television signal

Abstract

The invention discloses a video recording method, a video recording apparatus, and a computer-readable storage medium, belonging to the technical field of video recording. The method comprises the following steps: receiving raw data of a video being recorded; performing real-time forward-order encoding and real-time reverse-order encoding on the raw data synchronously; and, according to an extended-video rule, storing the original video file produced by the forward-order encoding together with extension information that includes the reverse-order encoded data, to obtain the final video file. The method, apparatus and computer-readable storage medium can achieve real-time forward-order recording and real-time reverse-order recording of a video at the same time, making video recording and playback more interesting and playable and enriching the user experience.

Description

Video recording method, device and computer readable storage medium
Technical Field
The present invention relates to the field of video recording technologies, and in particular, to a method and an apparatus for video recording, and a computer-readable storage medium.
Background
With the continuous development of mobile terminal technology, the functions of mobile terminals have become increasingly rich, and recording video on a mobile terminal device is now very common. However, a conventional mobile terminal can only record video in forward order, and without complex post-processing the recorded video can only be played back in forward order. Both the recording form and the playback form are therefore very limited and cannot meet users' experience requirements.
Disclosure of Invention
The main object of the present invention is to provide a video recording method, a video recording apparatus, and a computer-readable storage medium that achieve real-time forward-order recording and real-time reverse-order recording of a video at the same time, thereby making video recording and playback more interesting and playable and enriching the user experience.
To achieve the above object, the present invention provides a video recording method comprising the following steps: receiving raw data of a video being recorded; performing real-time forward-order encoding and real-time reverse-order encoding on the raw data synchronously; and, according to an extended-video rule, storing the original video file produced by the forward-order encoding together with extension information that includes the reverse-order encoded data, to obtain the final video file.
Optionally, the raw data includes image data, image preview data, and screen data.
Optionally, the step of synchronously performing real-time forward-order encoding and real-time reverse-order encoding on the raw data specifically includes: performing real-time forward-order encoding on the raw data by calling a system control.
Optionally, the step of synchronously performing real-time forward-order encoding and real-time reverse-order encoding on the raw data further includes: performing data difference encoding on the N pieces of image data between every two key frames, one by one in reverse order, to obtain reverse-order encoded data for each piece of image data, thereby completing the real-time reverse-order encoding of the raw data.
Optionally, the step of performing data difference encoding on all image data between every two key frames one by one in reverse order to obtain reverse-order encoded data for each piece of image data specifically includes: receiving N pieces of image data between two key frames; saving the Nth piece of image data as the start image data; and, beginning from the Nth piece of image data, performing data difference encoding one by one in reverse order to obtain the reverse-order encoded data of each piece of image data, wherein the data difference between the Nth piece and the (N-1)th piece is used as the reverse-order encoded data of the (N-1)th piece of image data.
Optionally, the step of synchronously performing real-time forward-order encoding and real-time reverse-order encoding on the raw data further includes: while the raw data is being reverse-order encoded in real time, synchronously performing a reverse PTS (Presentation Time Stamp) calculation to obtain the PTS time of each piece of image data in the reverse-order case.
Optionally, the extension information is stored at the end of the original video file, and the extension information further includes reverse PTS data information and reverse flag information.
Optionally, the reverse-order flag information includes: the start position of the reverse-order encoded data within the overall file, the length of the reverse-order encoded data, the start position of the reverse PTS data within the overall file, the length of the reverse PTS data, and the final data information of the file.
In addition, to achieve the above object, the present invention further provides a video recording apparatus, which includes a memory, a processor, and a program stored in the memory and executable on the processor, wherein the program implements the steps of the above method when executed by the processor.
Furthermore, to achieve the above object, the present invention also proposes a computer-readable storage medium storing one or more programs, which are executable by one or more processors to implement the steps of the above-mentioned method.
After receiving the raw data of a video being recorded, the video recording method and apparatus of the invention perform real-time forward-order encoding and real-time reverse-order encoding on the raw data synchronously, and then, according to the extended-video rule, store the original video file produced by the forward-order encoding together with extension information that includes the reverse-order encoded data, obtaining the final video file. Reverse-order recording is thus completed synchronously while the video is recorded in forward order: recording happens in real time with no buffering or post-processing, and because the reverse-order effect is achieved by extending the original video file with additional information, the original video structure is not damaged (the file can still be played normally in forward order). Therefore, the method, apparatus and computer-readable storage medium can achieve real-time forward-order and reverse-order recording of a video at the same time, making video recording and playback more interesting and playable and enriching the user experience.
Drawings
Fig. 1 is a schematic diagram of a hardware structure of a mobile terminal implementing various embodiments of the present invention.
Fig. 2 is a diagram of a communication network system architecture on which the mobile terminal shown in fig. 1 is based.
Fig. 3 is a flowchart of a video recording method according to an embodiment of the present invention.
Fig. 4 is a detailed flowchart of step S120 of the video recording method shown in fig. 3.
Fig. 5 is a detailed flowchart of step S122 of the video recording method shown in fig. 4.
Fig. 6 is a schematic diagram of forward encoding and reverse encoding according to an embodiment of the present invention.
Fig. 7 is a block diagram of the final video file according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the following description, suffixes such as "module", "component", or "unit" are used to denote elements only to facilitate the description of the present invention and have no specific meaning by themselves. Thus, "module", "component" and "unit" may be used interchangeably.
The terminal may be implemented in various forms. For example, the terminal described in the present invention may include a mobile terminal such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a Personal Digital Assistant (PDA), a Portable Media Player (PMP), a navigation device, a wearable device, a smart band, a pedometer, and the like, and a fixed terminal such as a Digital TV, a desktop computer, and the like.
The following description will be given by way of example of a mobile terminal, and it will be understood by those skilled in the art that the construction according to the embodiment of the present invention can be applied to a fixed type terminal, in addition to elements particularly used for mobile purposes.
Referring to fig. 1, which is a schematic diagram of a hardware structure of a mobile terminal for implementing various embodiments of the present invention, the mobile terminal 100 may include: RF (Radio Frequency) unit 101, WiFi module 102, audio output unit 103, a/V (audio/video) input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 1 is not intended to be limiting of mobile terminals, which may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile terminal in detail with reference to fig. 1:
the radio frequency unit 101 may be configured to receive and transmit signals during messaging or a call; specifically, it receives downlink information from a base station and forwards it to the processor 110 for processing, and transmits uplink data to the base station. Typically, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplex Long Term Evolution), and TDD-LTE (Time Division Duplex Long Term Evolution).
WiFi is a short-range wireless transmission technology; through the WiFi module 102, the mobile terminal can help the user send and receive e-mail, browse web pages, access streaming media, and so on, providing the user with wireless broadband internet access. Although fig. 1 shows the WiFi module 102, it is understood that the module is not an essential part of the mobile terminal and may be omitted as needed without changing the essence of the invention.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102 or stored in the memory 109 into an audio signal and output as sound when the mobile terminal 100 is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 may include a speaker, a buzzer, and the like.
The A/V input unit 104 is used to receive audio or video signals. It may include a graphics processing unit (GPU) 1041 and a microphone 1042; the graphics processor 1041 processes image data of still pictures or video captured by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or another storage medium), or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 may receive sounds (audio data) in a phone call mode, a recording mode, a voice recognition mode, or the like, and can process such sounds into audio data. In a phone call mode, the processed audio (voice) data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101 for output. The microphone 1042 may implement various noise cancellation (or suppression) algorithms to remove noise or interference generated while receiving and transmitting audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or a backlight when the mobile terminal 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The display unit 106 is used to display information input by the user or information provided to the user. The display unit 106 may include a display panel 1061, which may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, can collect touch operations performed by the user on or near it (for example, operations performed with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends them to the processor 110, and can also receive and execute commands sent by the processor 110. The touch panel 1071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 1071, the user input unit 107 may include other input devices 1072, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, and the like.
Further, the touch panel 1071 may cover the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although the touch panel 1071 and the display panel 1061 are shown in fig. 1 as two separate components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the mobile terminal, and is not limited herein.
The interface unit 108 serves as an interface through which at least one external device is connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data messages, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal 100 and external devices.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a program storage area and a data storage area: the program storage area may store an operating system, application programs required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data and a phonebook). Further, the memory 109 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The processor 110 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the mobile terminal. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The mobile terminal 100 may further include a power supply 111 (e.g., a battery) for supplying power to various components, and preferably, the power supply 111 may be logically connected to the processor 110 via a power management system, so as to manage charging, discharging, and power consumption management functions via the power management system.
Although not shown in fig. 1, the mobile terminal 100 may further include a bluetooth module or the like, which is not described in detail herein.
In order to facilitate understanding of the embodiments of the present invention, a communication network system on which the mobile terminal of the present invention is based is described below.
Referring to fig. 2, fig. 2 is an architecture diagram of a communication Network system according to an embodiment of the present invention, where the communication Network system is an LTE system of a universal mobile telecommunications technology, and the LTE system includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an IP service 204 of an operator, which are in communication connection in sequence.
Specifically, the UE201 may be the terminal 100 described above, and is not described herein again.
The E-UTRAN202 includes eNodeB2021 and other eNodeBs 2022, among others. Among them, the eNodeB2021 may be connected with other eNodeB2022 through backhaul (e.g., X2 interface), the eNodeB2021 is connected to the EPC203, and the eNodeB2021 may provide the UE201 access to the EPC 203.
The EPC203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036, and the like. The MME2031 is a control node that handles signaling between the UE201 and the EPC203 and provides bearer and connection management. The HSS2032 provides registers for management functions such as the home location register (not shown) and holds subscriber-specific information regarding service characteristics, data rates, and so on. All user data may be sent through the SGW2034; the PGW2035 may provide IP address assignment for the UE201, among other functions; and the PCRF2036 is the policy and charging control decision point for service data flows and IP bearer resources, selecting and providing available policy and charging control decisions for the policy and charging enforcement function (not shown).
The IP services 204 may include the internet, intranets, IMS (IP Multimedia Subsystem), or other IP services, among others.
Although the LTE system is described as an example, it should be understood by those skilled in the art that the present invention is not limited to the LTE system, but may also be applied to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems.
Based on the above mobile terminal hardware structure and communication network system, the present invention provides various embodiments of the method.
Example one
As shown in fig. 3, an embodiment of the present invention provides a method for recording a video, including the following steps:
step S110: raw data of a recorded video is received.
Specifically, in the process of recording a video, the mobile terminal device generates original data of the recorded video, where the original data specifically includes data in multiple data formats, such as image data, image preview data, and screen data.
Step S120: and synchronously carrying out real-time forward coding and real-time reverse coding on the original data.
Specifically, after receiving the raw data of the video being recorded, the mobile terminal encodes the data and stores it in the corresponding video format, thereby saving the recorded video file; video recording is thus a process of encoding this raw data in real time. The mainstream video coding formats currently include H.264, H.265, MPEG-4, MPEG-2, WMV-HD, VC-1, and so on. In existing recording pipelines, whichever coding format is used, the raw data is only forward-order encoded in real time and never reverse-order encoded, so the video file stored after recording can only be played forward without further post-processing and cannot be played in reverse. To solve this problem, the video recording method of the present invention performs real-time forward-order encoding and real-time reverse-order encoding on the raw data synchronously after receiving it. As shown in fig. 4, this specifically includes the following operations:
step S121: and carrying out real-time positive sequence coding on the original data by calling a system control.
Specifically, in existing video recording, whatever encoding format is adopted, the raw data of the recorded video is encoded in forward order in real time; that is, forward-order data encoding is already supported in the prior art, and real-time forward-order encoding of the raw data can generally be completed by calling a system control.
Step S122: and sequentially carrying out data difference coding on the N image data between every two key frames in a reverse order one by one to obtain reverse order coded data of each image data, and further finishing real-time reverse order coding on the original data.
Specifically, while performing real-time forward-order encoding on the raw data, the video recording method of this embodiment also performs real-time reverse-order encoding: data difference encoding is applied to the N pieces of image data between every two key frames, one by one in reverse order, to obtain the reverse-order encoded data of each piece, thereby completing the real-time reverse-order encoding of the raw data. In other words, the reverse-order encoding is processed in batches, with the N pieces of image data between two key frames forming one batch. Taking one key frame per second as an example, a 30 FPS (frames per second) video input yields 30 pieces of image data per batch, and a 10 FPS input yields 10 pieces per batch. As shown in fig. 5, each batch of reverse-order encoding proceeds as follows:
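The batch size implied above is simply the frame rate multiplied by the key-frame interval; a one-line sketch (the function name and the default one-second interval are illustrative assumptions, not from the patent):

```python
def frames_per_batch(fps, keyframe_interval_s=1.0):
    """Number of frames handled per reverse-encoding batch: all frames
    between two key frames. With one key frame per second, a 30 FPS
    input gives 30 frames per batch and a 10 FPS input gives 10."""
    return int(fps * keyframe_interval_s)
```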
step S1221: n pieces of image data between two key frames are received.
Step S1222: the nth image data is saved as the start image data.
Step S1223: and carrying out data difference coding on the Nth image data and the N-1 th image data in a reverse order one by one from the Nth image data to obtain reverse order coded data of each image data, wherein the data difference coding is carried out on the Nth image data and the N-1 th image data to be used as the reverse order coded data of the N-1 th image data.
The above steps are described in detail with a concrete example. With one key frame per second and a 5 FPS video input, each batch processes 5 pieces of image data. That is, 5 pieces of image data between two key frames are received first. As shown in fig. 6, the first row (video input) represents the 5 pieces of received original image data: the 1st contains the letter A; the 2nd the letters A, B; the 3rd the letters A, B, C; the 4th the letters A, B, C, D; and the 5th the letters A, B, C, D, E. The second row (forward order) shows the forward-order encoded data of each piece: the 1st piece is the start image data, stored as-is without forward-order encoding and used directly as its own forward-order encoded data; the forward-order encoded data of the 2nd piece stores the difference between the 2nd and 1st pieces; that of the 3rd piece stores the difference between the 3rd and 2nd pieces; that of the 4th piece the difference between the 4th and 3rd pieces; and that of the 5th piece the difference between the 5th and 4th pieces. Played back, the real-time forward-order encoding shows an animation of the letters appearing in the order A, B, C, D, E.
The third row (reverse order) shows the reverse-order encoded data of each piece, obtained by applying data difference encoding to the 5 original images one by one in reverse order. The 5th piece of image data (the last one) is stored first as the start image data and used directly as its own reverse-order encoded data, without reverse-order encoding. Then, starting from the 4th piece, data difference encoding is performed one by one in reverse order, yielding in turn the reverse-order encoded data of the 4th, 3rd, 2nd and 1st pieces: the 4th stores the difference between the 5th and 4th pieces, the 3rd the difference between the 4th and 3rd pieces, the 2nd the difference between the 3rd and 2nd pieces, and the 1st the difference between the 2nd and 1st pieces. Played back, the reverse-order encoded data shows an animation of the letters disappearing in the order E, D, C, B, A.
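The per-batch difference scheme described above can be sketched with a toy coder in which each frame is modelled as a set of visible letters, exactly as in the ABCDE example. The symmetric-difference operator stands in for a real codec's residual between adjacent frames, and reverse-order encoding is just the same coder run over the frames in reversed order; `encode_gop` and `decode_gop` are illustrative names, not from the patent:

```python
def encode_gop(frames):
    """Toy difference coder over one group of pictures (GOP).

    The first stored frame is kept whole as the key frame; every later
    entry stores only the changed elements relative to its neighbour
    (symmetric difference stands in for a real inter-frame residual).
    """
    key = frames[0]
    deltas = [frames[i] ^ frames[i - 1] for i in range(1, len(frames))]
    return key, deltas


def decode_gop(key, deltas):
    """Reconstruct the frame sequence by re-applying each delta."""
    out = [key]
    for d in deltas:
        out.append(out[-1] ^ d)
    return out
```

Running `encode_gop(frames)` gives the second row of fig. 6 (key frame A, then deltas B, C, D, E), while `encode_gop(frames[::-1])` gives the third row: the full ABCDE frame as the key, then deltas E, D, C, B, so playback removes letters in EDCBA order.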
Step S123: while performing real-time reverse-order encoding of the original data, synchronously perform the reverse PTS calculation to obtain the PTS time of each piece of image data in reverse order.
Specifically, PTS (Presentation Time Stamp) data records the presentation time of the encoded data of each piece of image data. As shown in fig. 6, the forward-order PTS data records the presentation time of the forward-order encoded data of each piece of image data; that is, the presentation times of the forward-order encoded data of the 1st through 5th image data are the 1st, 2nd, 3rd, 4th, and 5th fifths of a second, respectively. Because the method also performs real-time reverse-order encoding on the original data, a reverse PTS calculation must be performed synchronously to obtain the PTS time of each piece of image data in reverse order. That is, as shown in fig. 6, the reverse PTS data records the presentation time of the reverse-order encoded data of each piece of image data: in reverse order, the presentation times of the reverse-order encoded data of the 1st through 5th image data are the 5th, 4th, 3rd, 2nd, and 1st fifths of a second, respectively.
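The forward and reverse PTS assignment in fig. 6 can be sketched as follows (a Python illustration; the helper names and the assumption that PTS is expressed in seconds within a one-second key-frame batch are mine, not the patent's):

```python
def forward_pts(n_frames, fps):
    # Forward PTS: frame i is presented at the (i+1)-th 1/fps of a second.
    return [(i + 1) / fps for i in range(n_frames)]

def reverse_pts(n_frames, fps):
    # Reverse PTS, computed synchronously with reverse-order encoding:
    # frame i's reverse-order data is presented at the (n-i)-th 1/fps.
    return [(n_frames - i) / fps for i in range(n_frames)]

# 5 FPS with one key frame per second -> 5 frames per batch (fig. 6)
print(forward_pts(5, 5))  # [0.2, 0.4, 0.6, 0.8, 1.0]
print(reverse_pts(5, 5))  # [1.0, 0.8, 0.6, 0.4, 0.2]
```

The reverse PTS list is simply the forward list read backwards, which is why it can be produced in the same pass with no buffering.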
Step S130: according to the extended-video rule, save the original video file obtained after forward-order encoding together with the extension information, which includes the reverse-order encoded data, to obtain the final video file.
Specifically, in the prior art, a recorded video file is obtained by forward-order encoding the original data; that is, an existing recorded video file contains only forward-order encoded data and can only be played in sequence. The video recording method of the present invention completes reverse-order encoding of the original data synchronously while performing forward-order encoding. Finally, the original video file obtained after forward-order encoding and the extension information, including the reverse-order encoded data, are saved according to the extended-video rule to obtain the final video file. That is, as shown in fig. 7, the final video file includes not only the original video file but also the extension information. The extension information is stored at the tail of the original video file and further includes reverse-order PTS data information and reverse-order flag information. Storing the extension information at the tail does not disturb the structure of the original video, so sequential playback of the final video file is unaffected. Within the extension information, the reverse-order encoded data field holds the reverse-order encoded data of each piece of image data, and the reverse-order PTS data information holds the PTS time of each piece of image data in reverse order, i.e., the presentation time of its reverse-order encoded data.
The reverse-order flag information contains, in sequence, 5 items of data: the start position of the reverse-order encoded data in the total file (denoted encoder_order_index), the length of the reverse-order encoded data (denoted encoder_order_length), the start position of the reverse-order PTS data in the total file (denoted PTS_order_index), the length of the reverse-order PTS data (denoted PTS_order_length), and the last data item of the file (denoted order_flag, marking the file as an extended reverse-order video file). With this extension information stored, the final video file can be played in reverse order. For example, when a waterfall is recorded, default playback shows the water flowing downward, while reverse playback shows the water flowing upward.
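The tail layout above can be sketched as follows (Python; the 32-bit big-endian field widths and the `order_flag` magic value are assumptions for illustration — the patent names the five fields but does not specify their binary encoding):

```python
import struct

ORDER_FLAG = 0x52455631  # hypothetical magic marking an extended reverse-order file

def append_extension(original, rev_encoded, rev_pts):
    # Final file layout: [original video][reverse-order data][reverse PTS]
    # [5-field reverse-order flag record], so the original bytes are untouched.
    out = bytearray(original)
    encoder_order_index = len(out)
    out += rev_encoded
    encoder_order_length = len(rev_encoded)
    pts_order_index = len(out)
    out += rev_pts
    pts_order_length = len(rev_pts)
    out += struct.pack(">5I", encoder_order_index, encoder_order_length,
                       pts_order_index, pts_order_length, ORDER_FLAG)
    return bytes(out)

def read_extension(data):
    # A reverse-order-aware player checks the tail record first; an ordinary
    # player never reads past the original container, so sequential playback
    # is unaffected.
    ei, el, pi, pl, flag = struct.unpack(">5I", data[-20:])
    if flag != ORDER_FLAG:
        return None
    return data[ei:ei + el], data[pi:pi + pl]

final = append_extension(b"ORIGINALVIDEO", b"REVDATA", b"REVPTS")
assert final.startswith(b"ORIGINALVIDEO")           # original structure intact
assert read_extension(final) == (b"REVDATA", b"REVPTS")
```

Because the five-field record sits at the very end of the file, locating the reverse-order data requires only one fixed-size read from the tail.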
Embodiment Two
A second embodiment of the present invention provides a video recording device comprising a memory, a processor, and a program stored in the memory and runnable on the processor; when the program is executed by the processor, the following steps, shown in fig. 3, are implemented:
Step S110: raw data of a recorded video is received.
Specifically, in the process of recording a video, the mobile terminal device generates original data of the recorded video, where the original data specifically includes data in multiple data formats, such as image data, image preview data, and screen data.
Step S120: synchronously perform real-time forward-order encoding and real-time reverse-order encoding on the original data.
Specifically, after receiving the original data of the recorded video, the mobile terminal encodes the data and stores it in the corresponding video format, thereby saving the recorded video file. Video recording is thus a process of encoding the raw data in real time. The current mainstream video coding formats include H.264, H.265, MPEG-4, MPEG-2, WMV-HD, VC-1, and so on. In the current video recording process, whatever encoding format is used to encode the original data of the recorded video, the original data is only forward-order encoded in real time and never reverse-order encoded; as a result, the video file stored after recording can only be played in sequence and cannot be played in reverse order without further post-processing. To solve this problem, the video recording method of the present invention, after receiving the original data of the recorded video, performs real-time forward-order encoding and real-time reverse-order encoding on the original data synchronously. As shown in fig. 4, this specifically includes the following operations:
Step S121: perform real-time forward-order encoding on the original data by calling a system control.
Specifically, in the current video recording process, whatever encoding format is used to encode the original data of the recorded video, the original data is forward-order encoded in real time; that is, forward-order data encoding is already supported in the prior art, and real-time forward-order encoding of the original data can generally be completed by calling a system control.
Step S122: perform data difference encoding on the N pieces of image data between every two key frames, one by one in reverse order, to obtain the reverse-order encoded data of each piece of image data, thereby completing real-time reverse-order encoding of the original data.
Specifically, the video recording method of this embodiment performs real-time reverse-order encoding at the same time as real-time forward-order encoding of the original data: it performs data difference encoding on the N pieces of image data between every two key frames, one by one in reverse order, to obtain the reverse-order encoded data of each piece of image data, thereby completing real-time reverse-order encoding of the original data. That is, the reverse-order encoding is processed in batches, with the N pieces of image data between two key frames as one batch. Taking one key frame per second as an example, with a video input of 30 FPS (frames per second) the data processed per batch is 30 pieces of image data, and with a video input of 10 FPS it is 10 pieces. As shown in fig. 5, each batch of reverse-order encoding proceeds as follows:
Step S1221: N pieces of image data between two key frames are received.
Step S1222: the Nth image data is saved as the initial image data.
Step S1223: starting from the Nth image data, perform data difference encoding one by one in reverse order to obtain the reverse-order encoded data of each piece of image data; for example, the difference between the Nth and (N-1)th image data is stored as the reverse-order encoded data of the (N-1)th image data.
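Steps S1221–S1223 can be sketched as a per-batch driver (Python; the `batches` helper and the neighbour-pair toy diff are hypothetical illustrations of the grouping, not the patent's actual encoder):

```python
def batches(frames, fps, keyframe_interval_s=1):
    # Step S1221: group incoming frames into batches of N = fps * interval
    # frames, i.e. the frames between two key frames.
    n = fps * keyframe_interval_s
    for i in range(0, len(frames), n):
        yield frames[i:i + n]

def reverse_encode_batch(batch):
    # Step S1222: save the Nth (last) frame intact as the initial image data.
    encoded = [batch[-1]]
    # Step S1223: difference-encode one by one in reverse order; each entry
    # here is a toy "diff" of a frame and its predecessor.
    for i in range(len(batch) - 1, 0, -1):
        encoded.append((batch[i], batch[i - 1]))
    return encoded

stream = list(range(10))          # 10 incoming frames
for b in batches(stream, fps=5):  # 5 FPS, one key frame per second -> N = 5
    print(reverse_encode_batch(b))
```

Processing per batch means only one key-frame interval of frames is ever held in memory, which is what keeps the reverse-order encoding real-time.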
The above steps are described in detail with reference to a specific example. Taking one key frame per second as an example, with a video input of 5 FPS the data processed per batch is 5 pieces of image data. That is, the 5 pieces of image data between two key frames are received first. As shown in fig. 6, the first row (video input) represents the 5 pieces of received original image data: the 1st image data is the letter A, the 2nd is the letters A, B, the 3rd is the letters A, B, C, the 4th is the letters A, B, C, D, and the 5th is the letters A, B, C, D, E. The second row (forward order) represents the forward-order encoded data of each piece of image data: the 1st image data is the initial image data, so the original image data is stored without forward-order encoding and serves directly as the forward-order encoded data of the 1st image data; the forward-order encoded data of the 2nd image data stores the difference between the 2nd and 1st image data, the 3rd stores the difference between the 3rd and 2nd, the 4th stores the difference between the 4th and 3rd, and the 5th stores the difference between the 5th and 4th. Real-time forward-order encoding thus yields an animation effect in which A, B, C, D, E appear in sequence.
The third row (reverse order) represents the reverse-order encoded data of each piece of image data, obtained by performing data difference encoding on the 5 original image data of the first row one by one in reverse order. That is, the 5th image data (i.e., the last image data) is stored first as the initial image data; it serves directly as the reverse-order encoded data of the 5th image data without difference encoding. Then, starting from the 4th image data, data difference encoding is performed one by one in reverse order to obtain, in sequence, the reverse-order encoded data of the 4th, 3rd, 2nd, and 1st image data: the reverse-order encoded data of the 4th image data stores the difference between the 5th and 4th image data, the 3rd stores the difference between the 4th and 3rd, the 2nd stores the difference between the 3rd and 2nd, and the 1st stores the difference between the 2nd and 1st. Real-time reverse-order encoding thus yields an animation effect in which E, D, C, B, A appear in reverse sequence.
Step S123: while performing real-time reverse-order encoding of the original data, synchronously perform the reverse PTS calculation to obtain the PTS time of each piece of image data in reverse order.
Specifically, PTS (Presentation Time Stamp) data records the presentation time of the encoded data of each piece of image data. As shown in fig. 6, the forward-order PTS data records the presentation time of the forward-order encoded data of each piece of image data; that is, the presentation times of the forward-order encoded data of the 1st through 5th image data are the 1st, 2nd, 3rd, 4th, and 5th fifths of a second, respectively. Because the method also performs real-time reverse-order encoding on the original data, a reverse PTS calculation must be performed synchronously to obtain the PTS time of each piece of image data in reverse order. That is, as shown in fig. 6, the reverse PTS data records the presentation time of the reverse-order encoded data of each piece of image data: in reverse order, the presentation times of the reverse-order encoded data of the 1st through 5th image data are the 5th, 4th, 3rd, 2nd, and 1st fifths of a second, respectively.
Step S130: according to the extended-video rule, save the original video file obtained after forward-order encoding together with the extension information, which includes the reverse-order encoded data, to obtain the final video file.
Specifically, in the prior art, a recorded video file is obtained by forward-order encoding the original data; that is, an existing recorded video file contains only forward-order encoded data and can only be played in sequence. The video recording method of the present invention completes reverse-order encoding of the original data synchronously while performing forward-order encoding. Finally, the original video file obtained after forward-order encoding and the extension information, including the reverse-order encoded data, are saved according to the extended-video rule to obtain the final video file. That is, as shown in fig. 7, the final video file includes not only the original video file but also the extension information. The extension information is stored at the tail of the original video file and further includes reverse-order PTS data information and reverse-order flag information. Storing the extension information at the tail does not disturb the structure of the original video, so sequential playback of the final video file is unaffected. Within the extension information, the reverse-order encoded data field holds the reverse-order encoded data of each piece of image data, and the reverse-order PTS data information holds the PTS time of each piece of image data in reverse order, i.e., the presentation time of its reverse-order encoded data.
The reverse-order flag information contains, in sequence, 5 items of data: the start position of the reverse-order encoded data in the total file (denoted encoder_order_index), the length of the reverse-order encoded data (denoted encoder_order_length), the start position of the reverse-order PTS data in the total file (denoted PTS_order_index), the length of the reverse-order PTS data (denoted PTS_order_length), and the last data item of the file (denoted order_flag, marking the file as an extended reverse-order video file). With this extension information stored, the final video file can be played in reverse order. For example, when a waterfall is recorded, default playback shows the water flowing downward, while reverse playback shows the water flowing upward.
Embodiment Three
A third embodiment of the present invention provides a computer-readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the following steps, shown in fig. 3:
Step S110: raw data of a recorded video is received.
Specifically, in the process of recording a video, the mobile terminal device generates original data of the recorded video, where the original data specifically includes data in multiple data formats, such as image data, image preview data, and screen data.
Step S120: synchronously perform real-time forward-order encoding and real-time reverse-order encoding on the original data.
Specifically, after receiving the original data of the recorded video, the mobile terminal encodes the data and stores it in the corresponding video format, thereby saving the recorded video file. Video recording is thus a process of encoding the raw data in real time. The current mainstream video coding formats include H.264, H.265, MPEG-4, MPEG-2, WMV-HD, VC-1, and so on. In the current video recording process, whatever encoding format is used to encode the original data of the recorded video, the original data is only forward-order encoded in real time and never reverse-order encoded; as a result, the video file stored after recording can only be played in sequence and cannot be played in reverse order without further post-processing. To solve this problem, the video recording method of the present invention, after receiving the original data of the recorded video, performs real-time forward-order encoding and real-time reverse-order encoding on the original data synchronously. As shown in fig. 4, this specifically includes the following operations:
Step S121: perform real-time forward-order encoding on the original data by calling a system control.
Specifically, in the current video recording process, whatever encoding format is used to encode the original data of the recorded video, the original data is forward-order encoded in real time; that is, forward-order data encoding is already supported in the prior art, and real-time forward-order encoding of the original data can generally be completed by calling a system control.
Step S122: perform data difference encoding on the N pieces of image data between every two key frames, one by one in reverse order, to obtain the reverse-order encoded data of each piece of image data, thereby completing real-time reverse-order encoding of the original data.
Specifically, the video recording method of this embodiment performs real-time reverse-order encoding at the same time as real-time forward-order encoding of the original data: it performs data difference encoding on the N pieces of image data between every two key frames, one by one in reverse order, to obtain the reverse-order encoded data of each piece of image data, thereby completing real-time reverse-order encoding of the original data. That is, the reverse-order encoding is processed in batches, with the N pieces of image data between two key frames as one batch. Taking one key frame per second as an example, with a video input of 30 FPS (frames per second) the data processed per batch is 30 pieces of image data, and with a video input of 10 FPS it is 10 pieces. As shown in fig. 5, each batch of reverse-order encoding proceeds as follows:
Step S1221: N pieces of image data between two key frames are received.
Step S1222: the Nth image data is saved as the initial image data.
Step S1223: starting from the Nth image data, perform data difference encoding one by one in reverse order to obtain the reverse-order encoded data of each piece of image data; for example, the difference between the Nth and (N-1)th image data is stored as the reverse-order encoded data of the (N-1)th image data.
The above steps are described in detail with reference to a specific example. Taking one key frame per second as an example, with a video input of 5 FPS the data processed per batch is 5 pieces of image data. That is, the 5 pieces of image data between two key frames are received first. As shown in fig. 6, the first row (video input) represents the 5 pieces of received original image data: the 1st image data is the letter A, the 2nd is the letters A, B, the 3rd is the letters A, B, C, the 4th is the letters A, B, C, D, and the 5th is the letters A, B, C, D, E. The second row (forward order) represents the forward-order encoded data of each piece of image data: the 1st image data is the initial image data, so the original image data is stored without forward-order encoding and serves directly as the forward-order encoded data of the 1st image data; the forward-order encoded data of the 2nd image data stores the difference between the 2nd and 1st image data, the 3rd stores the difference between the 3rd and 2nd, the 4th stores the difference between the 4th and 3rd, and the 5th stores the difference between the 5th and 4th. Real-time forward-order encoding thus yields an animation effect in which A, B, C, D, E appear in sequence.
The third row (reverse order) represents the reverse-order encoded data of each piece of image data, obtained by performing data difference encoding on the 5 original image data of the first row one by one in reverse order. That is, the 5th image data (i.e., the last image data) is stored first as the initial image data; it serves directly as the reverse-order encoded data of the 5th image data without difference encoding. Then, starting from the 4th image data, data difference encoding is performed one by one in reverse order to obtain, in sequence, the reverse-order encoded data of the 4th, 3rd, 2nd, and 1st image data: the reverse-order encoded data of the 4th image data stores the difference between the 5th and 4th image data, the 3rd stores the difference between the 4th and 3rd, the 2nd stores the difference between the 3rd and 2nd, and the 1st stores the difference between the 2nd and 1st. Real-time reverse-order encoding thus yields an animation effect in which E, D, C, B, A appear in reverse sequence.
Step S123: while performing real-time reverse-order encoding of the original data, synchronously perform the reverse PTS calculation to obtain the PTS time of each piece of image data in reverse order.
Specifically, PTS (Presentation Time Stamp) data records the presentation time of the encoded data of each piece of image data. As shown in fig. 6, the forward-order PTS data records the presentation time of the forward-order encoded data of each piece of image data; that is, the presentation times of the forward-order encoded data of the 1st through 5th image data are the 1st, 2nd, 3rd, 4th, and 5th fifths of a second, respectively. Because the method also performs real-time reverse-order encoding on the original data, a reverse PTS calculation must be performed synchronously to obtain the PTS time of each piece of image data in reverse order. That is, as shown in fig. 6, the reverse PTS data records the presentation time of the reverse-order encoded data of each piece of image data: in reverse order, the presentation times of the reverse-order encoded data of the 1st through 5th image data are the 5th, 4th, 3rd, 2nd, and 1st fifths of a second, respectively.
Step S130: according to the extended-video rule, save the original video file obtained after forward-order encoding together with the extension information, which includes the reverse-order encoded data, to obtain the final video file.
Specifically, in the prior art, a recorded video file is obtained by forward-order encoding the original data; that is, an existing recorded video file contains only forward-order encoded data and can only be played in sequence. The video recording method of the present invention completes reverse-order encoding of the original data synchronously while performing forward-order encoding. Finally, the original video file obtained after forward-order encoding and the extension information, including the reverse-order encoded data, are saved according to the extended-video rule to obtain the final video file. That is, as shown in fig. 7, the final video file includes not only the original video file but also the extension information. The extension information is stored at the tail of the original video file and further includes reverse-order PTS data information and reverse-order flag information. Storing the extension information at the tail does not disturb the structure of the original video, so sequential playback of the final video file is unaffected. Within the extension information, the reverse-order encoded data field holds the reverse-order encoded data of each piece of image data, and the reverse-order PTS data information holds the PTS time of each piece of image data in reverse order, i.e., the presentation time of its reverse-order encoded data.
The reverse-order flag information contains, in sequence, 5 items of data: the start position of the reverse-order encoded data in the total file (denoted encoder_order_index), the length of the reverse-order encoded data (denoted encoder_order_length), the start position of the reverse-order PTS data in the total file (denoted PTS_order_index), the length of the reverse-order PTS data (denoted PTS_order_length), and the last data item of the file (denoted order_flag, marking the file as an extended reverse-order video file). With this extension information stored, the final video file can be played in reverse order. For example, when a waterfall is recorded, default playback shows the water flowing downward, while reverse playback shows the water flowing upward.
After receiving the original data of a recorded video, the video recording method, device, and computer-readable storage medium of the present invention synchronously perform real-time forward-order encoding and real-time reverse-order encoding on the original data, and then save the original video file obtained after forward-order encoding together with the extension information, including the reverse-order encoded data, according to the extended-video rule to obtain the final video file. In this way, reverse-order recording is completed synchronously with forward-order recording: the video is recorded in real time without buffering or post-processing, and the reverse-order effect is achieved by extending the original video file without damaging the original video structure (i.e., the video can still be played in sequence). The method, device, and computer-readable storage medium can therefore realize real-time forward-order and reverse-order recording of video simultaneously, increasing the interest and playability of video recording and playback and enriching the user experience.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (9)

1. A method of video recording, the method comprising the steps of:
receiving original data of a recorded video;
synchronously performing real-time forward-order encoding and real-time reverse-order encoding on the original data; and
storing, in accordance with the rules of the extended video, the original video file obtained from the forward-order encoding together with extension information including the reverse-order encoded data, to obtain a final video file;
wherein the step of synchronously performing real-time forward-order encoding and real-time reverse-order encoding on the original data specifically comprises:
performing real-time forward-order encoding on the original data by invoking a system control.
2. The method of claim 1, wherein the original data comprises image data, image preview data, and screen data.
3. The method according to claim 1, wherein the step of synchronously performing real-time forward-order encoding and real-time reverse-order encoding on the original data further comprises:
performing data difference encoding on the N image data items between every two key frames, one by one in reverse order, to obtain reverse-order encoded data for each image data item, thereby completing the real-time reverse-order encoding of the original data.
4. The method according to claim 3, wherein the step of performing data difference encoding on all image data between every two key frames one by one in reverse order to obtain reverse-order encoded data for each image data item specifically comprises:
receiving the N image data items between two key frames;
saving the Nth image data item as the initial image data; and
starting from the Nth image data item, performing data difference encoding one by one in reverse order to obtain the reverse-order encoded data of each image data item, wherein the data difference encoding of the Nth image data item against the (N-1)th image data item serves as the reverse-order encoded data of the (N-1)th image data item.
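The reverse-order difference encoding of claims 3 and 4 can be illustrated with a minimal sketch. The frame representation (byte arrays) and the difference operator (per-byte subtraction mod 256) are assumptions for illustration only; the patent does not specify the concrete difference encoding:

```python
# Sketch of the reverse-order difference encoding of claims 3-4.
# Frames are modeled as equal-length byte arrays; the difference operator
# (per-byte subtraction mod 256) is an assumption, not the patent's method.

def reverse_order_encode(frames):
    """Given the N frames between two key frames, keep the Nth frame as the
    initial image and emit one difference record per earlier frame, walking
    from frame N back toward frame 1 (claim 4)."""
    if not frames:
        return None, []
    initial = frames[-1]                     # Nth frame saved intact
    diffs = []
    for i in range(len(frames) - 1, 0, -1):
        cur, prev = frames[i], frames[i - 1]
        # Difference of frame i against frame i-1 becomes the
        # reverse-order encoded data of frame i-1.
        diffs.append(bytes((a - b) % 256 for a, b in zip(cur, prev)))
    return initial, diffs

def reverse_order_decode(initial, diffs):
    """Invert the encoding: reconstruct frames N-1 .. 1 from the diffs,
    then restore forward order 1 .. N."""
    frames = [initial]
    for d in diffs:
        prev = bytes((a - b) % 256 for a, b in zip(frames[-1], d))
        frames.append(prev)
    return list(reversed(frames))
```

Because decoding starts from the saved Nth frame, a player can materialize the clip back-to-front without first demuxing and re-sorting the forward stream, which is the point of storing the diffs in reverse order.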
5. The method according to claim 3, wherein the step of synchronously performing real-time forward-order encoding and real-time reverse-order encoding on the original data further comprises:
synchronously performing a reverse PTS calculation while the original data undergoes real-time reverse-order encoding, to obtain the PTS time of each image data item in the reverse-order case.
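The reverse PTS calculation of claim 5 amounts to mirroring each frame's presentation timestamp about the clip's end time, so that the last frame plays first while inter-frame gaps are preserved. A sketch under that interpretation (the millisecond unit is an assumption):

```python
def reverse_pts(forward_pts):
    """Map forward presentation timestamps (sorted ascending, e.g. in ms)
    to their reverse-order values: each timestamp is mirrored about the
    final timestamp, so frame spacing is preserved when playing backward."""
    if not forward_pts:
        return []
    end = forward_pts[-1]
    return [end - t for t in reversed(forward_pts)]
```

For example, forward timestamps `[0, 33, 66, 100]` map to `[0, 34, 67, 100]`: the former 100 ms frame is now shown at t=0, and each original gap reappears in mirrored order.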
6. The method according to claim 5, wherein the extension information is stored at the end of the original video file and further comprises reverse-order PTS data information and reverse-order flag information.
7. The method according to claim 6, wherein the reverse-order flag information includes start-position information of the reverse-order encoded data in the overall file, length information of the reverse-order encoded data, start-position information of the reverse-order PTS data in the overall file, length information of the reverse-order PTS data, and last-data information of the file.
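Claims 6 and 7 describe a trailer appended after the ordinary video data that tells a reverse-capable player where the reverse-order streams live. One possible byte layout is sketched below; the field order, the fixed 8-byte widths, and the `REVEXT01` magic tag are assumptions, since the patent only enumerates which quantities the flag information carries:

```python
import struct

# Hypothetical 40-byte trailer at the end of the final video file:
# four little-endian uint64 offset/length fields plus an 8-byte magic tag
# serving as the "last data information of the file" from claim 7.
TRAILER = struct.Struct("<QQQQ8s")
MAGIC = b"REVEXT01"

def pack_trailer(rev_data_off, rev_data_len, rev_pts_off, rev_pts_len):
    """Serialize the reverse-order flag information: start/length of the
    reverse-order encoded data and start/length of the reverse-order PTS
    data, both as byte positions within the overall file."""
    return TRAILER.pack(rev_data_off, rev_data_len,
                        rev_pts_off, rev_pts_len, MAGIC)

def unpack_trailer(file_bytes):
    """Read the trailer from the last TRAILER.size bytes of the file; an
    ordinary player that ignores the tail still plays the forward video."""
    off1, len1, off2, len2, magic = TRAILER.unpack(file_bytes[-TRAILER.size:])
    if magic != MAGIC:
        raise ValueError("no reverse-order extension present")
    return {"rev_data": (off1, len1), "rev_pts": (off2, len2)}
```

Placing the flag information at the very end means a player can seek to the tail, validate the tag, and jump directly to the reverse-order data, while legacy players simply treat the trailer as ignorable trailing bytes.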
8. An apparatus for video recording, comprising a memory, a processor, and a program stored in the memory and executable on the processor, the program, when executed by the processor, implementing the steps of the method of any one of claims 1 to 7.
9. A computer-readable storage medium having one or more programs stored thereon, the one or more programs being executable by one or more processors to perform the steps of the method of any one of claims 1 to 7.
CN201711447298.0A 2017-12-27 2017-12-27 Video recording method, device and computer readable storage medium Active CN107948571B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711447298.0A CN107948571B (en) 2017-12-27 2017-12-27 Video recording method, device and computer readable storage medium


Publications (2)

Publication Number Publication Date
CN107948571A CN107948571A (en) 2018-04-20
CN107948571B true CN107948571B (en) 2021-11-02

Family

ID=61939456

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711447298.0A Active CN107948571B (en) 2017-12-27 2017-12-27 Video recording method, device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN107948571B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113973225A (en) * 2020-07-22 2022-01-25 阿里巴巴集团控股有限公司 Video reverse playing method and device, computer storage medium and electronic equipment
CN112733826A (en) * 2020-12-28 2021-04-30 南京披云信息科技有限公司 Image processing method and device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1964458A (en) * 2005-11-08 2007-05-16 乐金电子(中国)研究开发中心有限公司 Digital content playing apparatus and inverse content storage and playing method
CN101917595A (en) * 2010-07-01 2010-12-15 李志恒 Video monitoring system for automatically marking road section information
CN101983505A (en) * 2008-04-07 2011-03-02 索尼爱立信移动通讯有限公司 Method and device for creating a media signal
CN103327318A (en) * 2012-03-22 2013-09-25 美国博通公司 Transcoding a video stream to facilitate accurate display
CN104683882A (en) * 2015-02-13 2015-06-03 北京数码视讯科技股份有限公司 Generation and play method and device for multiple speed file of stream medium
CN104754268A (en) * 2015-03-26 2015-07-01 广东欧珀移动通信有限公司 Method and device for recording reversed-sequence video
JP2015164266A (en) * 2014-02-28 2015-09-10 キヤノン株式会社 Moving picture recording system, imaging device and recording device
CN106303379A (en) * 2015-05-20 2017-01-04 杭州海康威视数字技术股份有限公司 A kind of video file backward player method and system
CN106657850A (en) * 2016-12-02 2017-05-10 深圳市创易联合科技有限公司 Lesson content recording method and system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102055717B (en) * 2009-11-09 2014-08-13 华为技术有限公司 Quick playing method, terminal and server
US8437619B2 (en) * 2010-12-20 2013-05-07 General Instrument Corporation Method of processing a sequence of coded video frames


Also Published As

Publication number Publication date
CN107948571A (en) 2018-04-20

Similar Documents

Publication Publication Date Title
CN108572764B (en) Character input control method and device and computer readable storage medium
CN112799577B (en) Method, terminal and storage medium for projecting small window
CN109040808B (en) Video interaction regulation and control method, device and computer readable storage medium
CN110187808B (en) Dynamic wallpaper setting method and device and computer-readable storage medium
CN109584897B (en) Video noise reduction method, mobile terminal and computer readable storage medium
CN108536383B (en) Game control method, game control equipment and computer readable storage medium
CN111324407A (en) Animation display method, terminal and computer readable storage medium
CN109117069B (en) Interface operation method, terminal and computer readable storage medium
CN112437472B (en) Network switching method, equipment and computer readable storage medium
CN107948571B (en) Video recording method, device and computer readable storage medium
CN112102780B (en) Display frame rate regulation and control method, device and computer readable storage medium
CN112423211A (en) Multi-audio transmission control method, equipment and computer readable storage medium
CN112712815A (en) Software-based pop sound suppression method, terminal and computer readable medium
CN112135045A (en) Video processing method, mobile terminal and computer storage medium
CN109683796B (en) Interaction control method, equipment and computer readable storage medium
CN111970738A (en) Network switching control method, equipment and computer readable storage medium
CN109462829B (en) Call transfer method, device and computer readable storage medium
CN108184161B (en) Video playing method, mobile terminal and computer readable storage medium
CN109710168B (en) Screen touch method and device and computer readable storage medium
CN109495643B (en) Object multi-chat frame setting method and terminal
CN108897451B (en) Main and auxiliary display screen identification method, mobile terminal and computer readable storage medium
CN112887776B (en) Method, equipment and computer readable storage medium for reducing audio delay
CN114581504A (en) Depth image confidence calculation method and device and computer readable storage medium
CN109379719B (en) Application program broadcast processing method and device and computer readable storage medium
CN109495683B (en) Interval shooting method and device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant