CN111586476B - Video data processing method applied to tyrtc platform and related equipment - Google Patents


Info

Publication number
CN111586476B
Authority
CN
China
Prior art keywords
video data
compressed video
compressed
video
definition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN202010507918.0A
Other languages
Chinese (zh)
Other versions
CN111586476A (en)
Inventor
吴文辉
吕攀
宋琪
Current Assignee
Shenzhen Telyes Intelligent Technology Co ltd
Original Assignee
Shenzhen Telyes Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Telyes Intelligent Technology Co ltd
Priority to CN202010507918.0A
Publication of CN111586476A
Application granted
Publication of CN111586476B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The application provides a video data processing method and apparatus applied to a tyrtc platform, a computer device, and a readable storage medium. A terminal acquires original video data through its camera and stores it in the system layer. The original video data is then compressed at the system layer to obtain compressed video data. Finally, the terminal transfers the compressed video data from the system layer to the application layer. Because the original video data initially obtained by the terminal is stored and compressed at the system layer, the compressed video data is far smaller than the original, so the subsequent transfer from the system layer to the application layer involves a much smaller data volume, which greatly reduces the video data transmission load and effectively relieves the data processing pressure on the terminal.

Description

Video data processing method applied to tyrtc platform and related equipment
Technical Field
The present application relates to the field of video processing technologies, and in particular, to a video data processing method and related device applied to a tyrtc platform.
Background
With the development of mobile terminals (such as mobile phones), people increasingly record their daily lives by shooting videos. Because video data is generally large, the video data collected by a terminal's camera is first stored in the system layer and then transmitted from the system layer to the application layer for editing and other processing. Moving video data around inside the mobile terminal occupies a large amount of working memory, which greatly reduces the terminal's data processing capability.
Disclosure of Invention
The main objective of the present application is to provide a video data processing method and related equipment applied to a tyrtc platform, so as to overcome the drawback that the large transmission volume of existing video data reduces the data processing capability of a mobile terminal.
In order to achieve the above object, the present application provides a video data processing method applied to a tyrtc platform, including:
acquiring original video data;
storing the original video data to a system layer;
compressing the original video data at the system layer to obtain compressed video data;
transmitting the compressed video data to an application layer.
Preferably, the step of compressing the original video data at the system layer to obtain compressed video data includes:
reading the original video data at the system layer to generate code stream data;
and encoding the code stream data to obtain the compressed video data.
Further, after the step of transmitting the compressed video data to the application layer, the method includes:
acquiring an editing instruction input by a user;
and executing corresponding editing operation on the compressed video data according to the editing instruction.
Further, the editing instruction carries a plurality of first tags, and the step of executing corresponding editing operation on the compressed video data according to the editing instruction includes:
marking video frames corresponding to the first marks in the compressed video data;
and taking the marked video frame as a segmentation frame, and carrying out clipping processing on the compressed video data to obtain a plurality of video segments.
Further, the step of performing a corresponding editing operation on the compressed video data according to the editing instruction includes:
screening out blurred video frames from the compressed video data;
and marking and displaying the blurred video frame.
Preferably, the step of screening out blurred video frames from the compressed video data includes:
judging whether the definition of a shooting subject in a video frame of the compressed video data reaches a threshold value;
and if the definition of the shooting subject does not reach the threshold value, marking the video frame as a blurred video frame.
Further, after the step of marking and displaying the blurred video frame, the method includes:
receiving a deleting instruction input by a user, wherein the deleting instruction carries a second mark;
and deleting the blurred video frame corresponding to the second mark according to the deleting instruction.
The present application further provides a video data processing apparatus applied to a tyrtc platform, including:
the first acquisition module is used for acquiring original video data;
the storage module is used for storing the original video data to a system layer;
the compression module is used for compressing the original video data at the system layer to obtain compressed video data;
and the transmission module is used for transmitting the compressed video data to an application layer.
Preferably, the compression module is specifically configured to:
read the original video data at the system layer to generate code stream data;
and encode the code stream data to obtain the compressed video data.
Further, the processing apparatus further includes:
the second acquisition module is used for acquiring an editing instruction input by a user;
and the editing module is used for executing corresponding editing operation on the compressed video data according to the editing instruction.
Further, the editing instruction carries a plurality of first marks, and the editing module includes:
a marking unit, configured to mark video frames corresponding to each of the first marks in the compressed video data;
and the clipping unit is used for clipping the compressed video data by taking the marked video frame as a segmentation frame to obtain a plurality of video segments.
Further, the editing module further includes:
the screening unit is used for screening out blurred video frames from the compressed video data;
and the marking unit is used for marking and displaying the blurred video frame.
Preferably, the screening unit includes:
a judging subunit, configured to judge whether the definition of a shooting subject in a video frame of the compressed video data reaches a threshold;
and a marking subunit, configured to mark the video frame as a blurred video frame if the definition of the shooting subject does not reach the threshold.
Further, the editing module further includes:
the receiving unit is used for receiving a deleting instruction input by a user, and the deleting instruction carries a second mark;
and the deleting unit is used for deleting the blurred video frame corresponding to the second mark according to the deleting instruction.
The present application further provides a computer device comprising a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of any one of the above methods when executing the computer program.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the method of any of the above.
According to the video data processing method and related equipment applied to the tyrtc platform, the terminal obtains original video data through the camera and stores it in the system layer. The original video data is then compressed at the system layer to obtain compressed video data. Finally, the terminal transfers the compressed video data from the system layer to the application layer. Because the original video data initially obtained by the terminal is stored and compressed at the system layer, the compressed video data is far smaller than the original, so the subsequent transfer from the system layer to the application layer involves a much smaller data volume, which effectively reduces the data processing pressure on the terminal.
Drawings
FIG. 1 is a flow chart illustrating the steps of a video data processing method applied to a tyrtc platform according to an embodiment of the present application;
fig. 2 is a block diagram illustrating an overall structure of a video data processing apparatus applied to a tyrtc platform according to an embodiment of the present application;
fig. 3 is a block diagram schematically illustrating a structure of a computer device according to an embodiment of the present application.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clearly understood, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Referring to fig. 1, in an embodiment of the present application, a video data processing method applied to a tyrtc platform is provided, including:
s1, acquiring original video data;
s2, storing the original video data to a system layer;
s3, compressing the original video data at the system layer to obtain compressed video data;
and S4, transmitting the compressed video data to an application layer.
The tyrtc platform refers to the Wing RTC platform, a communication capability platform that runs on the public Internet and provides OTT real-time communication services in a pure Internet mode, and that can interconnect with network communication systems such as China Telecom's IMS, softswitch, and PSTN. It provides client applications with real-time voice, video, and data communication within and between applications. Its application interfaces include: SDKs for PC, Pad, and mobile terminals and for Web application front ends (including Android, iOS, and JS SDKs), supporting various programming languages and terminal platforms; and REST APIs invoked by third-party front ends and server backends. The SDK of the Wing RTC platform is packaged by the tyrtc module, with which point-to-point audio and video calls, multi-party voice and video calls, and IM messaging can easily be implemented.
In this embodiment, the terminal captures images through its camera to obtain original video data. During shooting, the collected original video data is stored in the system layer of the terminal. At the system layer, the terminal reads the original video data with a video compressor to generate code stream data, then encodes the code stream data with the video compressor to complete the compression processing and obtain compressed video data. Finally, the terminal transmits the compressed video data to the application layer, where it can subsequently be edited and further processed. Because the terminal compresses the original video data at the system layer, the compressed video data is far smaller than the original, so the subsequent transfer from the system layer to the application layer involves a much smaller data volume, which effectively reduces the data processing pressure on the terminal.
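The four-step flow of this embodiment (S1 acquire, S2 store at the system layer, S3 compress, S4 transmit to the application layer) can be sketched as a minimal pipeline. This is an illustrative Python sketch, not the patent's implementation: the `SystemLayer` class, the byte-string stand-in for camera frames, and the use of `zlib` in place of a real video compressor are all assumptions made purely to show the data flow.

```python
import zlib

class SystemLayer:
    """Hypothetical stand-in for the terminal's system-layer storage."""
    def __init__(self):
        self.raw = None          # original video data (stored at S2)
        self.compressed = None   # compressed video data (produced at S3)

def acquire_original_video():
    # S1: stand-in for camera capture; real data would be raw frames.
    return b"frame0" * 1000

def compress_at_system_layer(layer):
    # S3: zlib merely illustrates "compression at the system layer";
    # the patent uses a video compressor (e.g. H.264/HEVC).
    layer.compressed = zlib.compress(layer.raw)

def transmit_to_application_layer(layer):
    # S4: only the (much smaller) compressed data crosses the boundary.
    return layer.compressed

layer = SystemLayer()
layer.raw = acquire_original_video()                   # S1 + S2
compress_at_system_layer(layer)                        # S3
app_layer_data = transmit_to_application_layer(layer)  # S4
```

The point of the sketch is the size relation: the data handed to the application layer is far smaller than what the system layer stored, which is exactly the load reduction the embodiment claims.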
Preferably, the step of compressing the original video data at the system layer to obtain compressed video data includes:
s301, reading the original video data at a system layer to generate code stream data;
s302, coding the code stream data to obtain the compressed video data.
In this embodiment, at the system layer, the terminal uses the video compressor to read the original video data within a sliding window according to a preset format, generating code stream data. The preset format corresponds to a video compression standard, preferably the H.264 standard or the HEVC standard. The terminal then uses the video compressor to encode the code stream data according to the preset format, generating a code stream and thereby obtaining the compressed video data. When encoding the code stream data, the video compressor uses the same preset format that it used when reading (for example, if the original video data was read according to the format of the H.264 standard, the code stream data is also encoded according to the format of the H.264 standard). This ensures a unified video compression standard throughout the compression process and avoids problems such as loss or corruption of video data.
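The "same preset format for reading and encoding" constraint can be made concrete with a small guard. This is a hedged Python sketch; the `VideoCompressor` class, its dictionary code-stream representation, and the `ENC:` prefix are invented for illustration and do not model real H.264/HEVC bitstreams.

```python
class VideoCompressor:
    """Illustrative sketch: the same preset format (e.g. "H.264" or
    "HEVC") must be used for both reading (S301) and encoding (S302)."""
    def __init__(self, preset_format="H.264"):
        self.preset_format = preset_format

    def read(self, original_video_data):
        # S301: read raw data in the preset format -> code stream data,
        # tagged with the standard it was read under.
        return {"format": self.preset_format, "payload": original_video_data}

    def encode(self, code_stream):
        # S302: refuse to encode under a different standard than was used
        # to read, avoiding the data loss/confusion the embodiment warns of.
        if code_stream["format"] != self.preset_format:
            raise ValueError("compression standard mismatch")
        return b"ENC:" + code_stream["payload"]

vc = VideoCompressor("H.264")
stream = vc.read(b"raw-frames")   # S301
compressed = vc.encode(stream)    # S302, same H.264 preset format
```

Encoding the same stream with a compressor configured for a different standard raises an error, which is the sketch's stand-in for "a unified video compression standard".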
Further, after the step of transmitting the compressed video data to the application layer, the method includes:
s5, acquiring an editing instruction input by a user;
and S6, executing corresponding editing operation on the compressed video data according to the editing instruction.
In this embodiment, after the compressed video data is transmitted from the system layer to the application layer, the user can edit it as needed. Specifically, after receiving an editing instruction input by the user, the terminal performs the editing operation corresponding to that instruction on the compressed video data at the application layer. For example, if the editing instruction is a brightness adjustment instruction carrying a brightness adjustment value, the terminal adjusts the brightness of the whole video according to that value to complete the editing operation.
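The brightness-adjustment example above can be sketched in a few lines. This is an illustrative Python sketch only: representing frames as lists of 0–255 pixel intensities and the `adjust_brightness` helper are assumptions, not the patent's implementation.

```python
def adjust_brightness(frames, adjustment):
    """Apply a brightness adjustment value to every pixel of every frame,
    clamping results to the 0-255 intensity range."""
    return [[max(0, min(255, p + adjustment)) for p in frame]
            for frame in frames]

video = [[100, 200, 250], [0, 30, 60]]  # two toy frames of pixel intensities
brighter = adjust_brightness(video, 20)
# 250 + 20 clamps to 255; the dark pixel 0 becomes 20.
```

A real implementation would operate on decoded image buffers, but the editing-instruction pattern — an instruction carrying a parameter, applied uniformly across the video — is the same.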
Further, the editing instruction carries a plurality of first tags, and the step of performing corresponding editing operation on the compressed video data according to the editing instruction includes:
s601, marking video frames corresponding to the first marks in the compressed video data;
and S602, taking the marked video frame as a segmentation frame, and performing clipping processing on the compressed video data to obtain a plurality of video segments.
In this embodiment, the editing instruction is specifically a clipping instruction and carries at least one first mark; the user attaches one or more first marks when inputting the clipping instruction. According to the first marks input by the user, the terminal marks, in order, each video frame in the compressed video data that corresponds to a first mark. After the video frames are marked, the terminal performs clipping at the marked video frames in marking order, dividing the compressed video data into a plurality of video segments whose division points are the marked video frames. After the clipping of the compressed video data is finished, the user can combine or delete the video segments as needed.
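The clipping behavior above — marked frames acting as segmentation points — can be illustrated as follows. A hedged Python sketch: `clip_at_marks`, the string frame placeholders, and the convention that a marked frame starts a new segment are all invented for illustration.

```python
def clip_at_marks(frames, marked_indices):
    """Split the frame sequence into segments, using each marked frame
    as a segmentation point (the marked frame begins a new segment)."""
    cuts = sorted(set(marked_indices))
    segments, start = [], 0
    for cut in cuts:
        if 0 < cut < len(frames):   # ignore marks outside the video
            segments.append(frames[start:cut])
            start = cut
    segments.append(frames[start:])
    return segments

frames = ["f0", "f1", "f2", "f3", "f4", "f5"]
segments = clip_at_marks(frames, [2, 4])
# two first marks -> three video segments, split at frames f2 and f4
```

With no marks the whole video remains a single segment, matching the claim that each division point is a marked video frame.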
Further, the step of performing a corresponding editing operation on the compressed video data according to the editing instruction includes:
s603, screening out fuzzy video frames from the compressed video data;
and S604, marking and displaying the blurred video frame.
In this embodiment, the editing instruction is specifically a blurred-frame screening instruction. After receiving this instruction from the user, the terminal performs definition recognition on each video frame in the compressed video data. The terminal judges the definition of a video frame based on the shooting subject in that frame, such as a person image or a building image. Specifically, the terminal determines whether the definition of the shooting subject in a video frame reaches a threshold; the threshold may be an initial value built into the terminal, or may be written into the current screening instruction by the user. If the definition of the shooting subject does not reach the threshold, the terminal judges that frame to be a blurred video frame, thereby screening the blurred video frames out of the compressed video data. The terminal then marks and displays the screened blurred video frames (for example, by marking their playback times within the compressed video data) so that the user can review them conveniently.
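The screening step can be sketched as a threshold filter that annotates each blurred frame with its playback time. This is an illustrative Python sketch: the per-frame numeric definition scores, the `fps` assumption, and `screen_blurred_frames` are hypothetical; a real system would measure sharpness from the decoded image of the shooting subject.

```python
def screen_blurred_frames(definitions, threshold, fps=30):
    """Return (frame_index, playback_time) annotations for frames whose
    shooting-subject definition score falls below the threshold."""
    blurred = []
    for i, definition in enumerate(definitions):
        if definition < threshold:
            # annotate the playback time of the blurred frame (seconds)
            blurred.append((i, round(i / fps, 3)))
    return blurred

scores = [0.9, 0.4, 0.8, 0.2]     # hypothetical per-frame definition scores
annotations = screen_blurred_frames(scores, threshold=0.5)
# frames 1 and 3 fall below the 0.5 threshold
```

The returned annotations correspond to the "marked and displayed" list the user reviews; the threshold could equally come from the screening instruction rather than a default.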
Preferably, the step of screening out blurred video frames from the compressed video data includes:
s6031, judging whether the definition of a shooting subject in a video frame of the compressed video data reaches a threshold value;
and S6032, if the definition of the shooting subject does not reach the threshold value, marking the video frame as a fuzzy video frame.
In this embodiment, while screening blurred video frames, the terminal needs to determine whether the definition of the shooting subject in each video frame of the compressed video data reaches the threshold. The shooting subject can be a person image, a building image, or another object. It can be chosen by the user (for example, when the compressed video data is a self-shot video, the user may select the person image as the shooting subject when inputting the blurred-frame screening instruction), or chosen automatically by the terminal. In the automatic case, when identifying the definition of the shooting subject in a video frame A, the terminal first searches the compressed video data for the video frame B nearest to frame A in sequence whose definition reaches the threshold, and then takes an object with obvious features in frame B as the shooting subject of frame A; if no such frame B is found, the terminal sends prompt information asking the user to select the shooting subject manually. The terminal may judge whether the definition of the shooting subject reaches the threshold according to the clarity of the outer contour of the subject's image. If the definition of the shooting subject does not reach the threshold, the terminal marks the video frame as a blurred video frame.
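The automatic fallback — find the nearest frame B whose definition reaches the threshold, else prompt the user — can be sketched as a nearest-neighbor search. A hedged Python sketch: `pick_reference_frame`, the numeric definition scores, and the `None`-means-prompt-the-user convention are assumptions for illustration.

```python
def pick_reference_frame(definitions, a, threshold):
    """For frame A (index a), find the nearest frame B (by index distance)
    whose definition reaches the threshold; B's prominent object would then
    serve as frame A's shooting subject. Returns None when no such frame
    exists, i.e. the case where the user must choose the subject manually."""
    candidates = [i for i, d in enumerate(definitions)
                  if i != a and d >= threshold]
    if not candidates:
        return None
    # nearest in sequence; ties broken toward the earlier frame
    return min(candidates, key=lambda i: (abs(i - a), i))

defs = [0.3, 0.9, 0.2, 0.95]          # hypothetical definition scores
ref = pick_reference_frame(defs, 0, 0.5)
# frame 1 is the nearest frame whose definition reaches the threshold
```

When every frame is below the threshold the function returns `None`, which in the embodiment corresponds to prompting the user to select the shooting subject manually.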
Further, after the step of marking and displaying the blurred video frame, the method includes:
s6033, receiving a deleting instruction input by a user, wherein the deleting instruction carries a second mark;
and S6034, deleting the fuzzy video frame corresponding to the second mark according to the deleting instruction.
In this embodiment, after the terminal marks and displays the blurred video frames, the user may choose to delete all of them, or particular ones, to improve the overall display effect of the video. Specifically, the user inputs a deleting instruction carrying a second mark, and the second mark corresponds to the blurred video frame to be deleted. After receiving the deleting instruction, the terminal deletes the blurred video frame corresponding to the second mark, improving the overall quality of the video and the user's viewing experience.
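The deletion step can be sketched as removing the frames selected by the user's second marks. This is an illustrative Python sketch; the mark-to-frame-index mapping (`annotations`) and `delete_marked_blurred_frames` are hypothetical bookkeeping invented for this example.

```python
def delete_marked_blurred_frames(frames, annotations, second_marks):
    """Delete the blurred frames selected by the user's second marks.
    `annotations` maps a mark id to the index of a blurred frame."""
    to_delete = {annotations[m] for m in second_marks if m in annotations}
    return [f for i, f in enumerate(frames) if i not in to_delete]

frames = ["f0", "f1", "f2", "f3"]
annotations = {"mark-a": 1, "mark-b": 3}   # blurred frames at indices 1 and 3
kept = delete_marked_blurred_frames(frames, annotations, ["mark-b"])
# only the frame carrying mark-b (index 3) is deleted
```

Passing every mark in `annotations` would delete all blurred frames at once, matching the "delete all or particular ones" choice described above.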
In the video data processing method applied to the tyrtc platform, the terminal acquires original video data through the camera and stores it in the system layer. The original video data is then compressed at the system layer to obtain compressed video data. Finally, the terminal transfers the compressed video data from the system layer to the application layer. Because the original video data is stored and compressed at the system layer, the compressed video data is far smaller than the original, so the subsequent transfer from the system layer to the application layer involves a much smaller data volume, which effectively reduces the data processing pressure on the terminal.
Referring to fig. 2, the present embodiment further provides a video data processing apparatus applied to a tyrtc platform, including:
a first obtaining module 1, configured to obtain original video data;
the storage module 2 is used for storing the original video data to a system layer;
a compression module 3, configured to perform compression processing on the original video data at the system layer to obtain compressed video data;
and the transmission module 4 is used for transmitting the compressed video data to an application layer.
In this embodiment, the terminal captures images through its camera to obtain original video data. During shooting, the collected original video data is stored in the system layer of the terminal. At the system layer, the terminal reads the original video data with a video compressor to generate code stream data, then encodes the code stream data with the video compressor to complete the compression processing and obtain compressed video data. Finally, the terminal transmits the compressed video data to the application layer, where it can subsequently be edited and further processed. Because the terminal compresses the original video data at the system layer, the compressed video data is far smaller than the original, so the subsequent transfer from the system layer to the application layer involves a much smaller data volume, which effectively reduces the data processing pressure on the terminal.
Preferably, the compression module 3 is specifically configured to:
read the original video data at the system layer to generate code stream data;
and encode the code stream data to obtain the compressed video data.
In this embodiment, at the system layer, the terminal uses the video compressor to read the original video data within a sliding window according to a preset format, generating code stream data. The preset format corresponds to a video compression standard, preferably the H.264 standard or the HEVC standard. The terminal then uses the video compressor to encode the code stream data according to the preset format, generating a code stream and thereby obtaining the compressed video data. When encoding the code stream data, the video compressor uses the same preset format that it used when reading (for example, if the original video data was read according to the format of the H.264 standard, the code stream data is also encoded according to the format of the H.264 standard). This ensures a unified video compression standard throughout the compression process and avoids problems such as loss or corruption of video data.
Further, the processing apparatus further includes:
the second obtaining module 5 is used for obtaining an editing instruction input by a user;
and the editing module 6 is used for executing corresponding editing operation on the compressed video data according to the editing instruction.
In this embodiment, after the compressed video data is transmitted from the system layer to the application layer, the user can edit it as needed. Specifically, after receiving an editing instruction input by the user, the terminal performs the editing operation corresponding to that instruction on the compressed video data at the application layer. For example, if the editing instruction is a brightness adjustment instruction carrying a brightness adjustment value, the terminal adjusts the brightness of the whole video according to that value to complete the editing operation.
Further, the editing instruction carries a plurality of first marks, and the editing module 6 includes:
a marking unit, configured to mark video frames corresponding to each of the first marks in the compressed video data;
and the clipping unit is used for clipping the compressed video data by taking the marked video frame as a segmentation frame to obtain a plurality of video segments.
In this embodiment, the editing instruction is specifically a clipping instruction and carries at least one first mark; the user attaches one or more first marks when inputting the clipping instruction. According to the first marks input by the user, the terminal marks, in order, each video frame in the compressed video data that corresponds to a first mark. After the video frames are marked, the terminal performs clipping at the marked video frames in marking order, dividing the compressed video data into a plurality of video segments whose division points are the marked video frames. After the clipping of the compressed video data is finished, the user can combine or delete the video segments as needed.
Further, the editing module 6 further includes:
the screening unit is used for screening out blurred video frames from the compressed video data;
and the marking unit is used for marking and displaying the blurred video frame.
In this embodiment, the editing instruction is specifically a blurred-frame screening instruction. After receiving this instruction from the user, the terminal performs definition recognition on each video frame in the compressed video data. The terminal judges the definition of a video frame based on the shooting subject in that frame, such as a person image or a building image. Specifically, the terminal determines whether the definition of the shooting subject in a video frame reaches a threshold; the threshold may be an initial value built into the terminal, or may be written into the current screening instruction by the user. If the definition of the shooting subject does not reach the threshold, the terminal judges that frame to be a blurred video frame, thereby screening the blurred video frames out of the compressed video data. The terminal then marks and displays the screened blurred video frames (for example, by marking their playback times within the compressed video data) so that the user can review them conveniently.
Preferably, the screening unit includes:
a judging subunit, configured to judge whether the definition of a shooting subject in a video frame of the compressed video data reaches a threshold;
and the marking subunit is used for marking the video frame as a blurred video frame if the definition of the shooting subject does not reach the threshold value.
In this embodiment, while screening for blurred video frames, the terminal needs to determine whether the definition of the shooting subject in each video frame of the compressed video data reaches the threshold. The shooting subject may be a person image, a building image, or another object. The shooting subject may be selected by the user (for example, when the current compressed video data is a self-shot video, the user may select the person image as the shooting subject while entering the blurred-frame screening instruction), or selected automatically by the terminal: when identifying the definition of the shooting subject in a video frame A, the terminal first searches the compressed video data for the video frame B nearest in sequence to video frame A whose definition reaches the threshold, and then identifies an object with obvious features in video frame B as the shooting subject of video frame A; if no such video frame B is found, the terminal issues prompt information asking the user to select the shooting subject manually. The terminal may judge whether the definition of the shooting subject reaches the threshold according to the definition of the outer contour of the shooting subject's image. If the definition of the shooting subject does not reach the threshold, the terminal marks the video frame as a blurred video frame.
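The automatic subject-selection fallback (find the frame B nearest in sequence whose definition already reaches the threshold, otherwise prompt the user) can be sketched as follows; the per-frame definition scores are hypothetical inputs:

```python
def pick_reference_frame(scores, idx, threshold):
    """Find the frame B nearest in sequence to frame A (index idx)
    whose definition score reaches the threshold. Returns B's index,
    or None to signal 'prompt the user to select a subject manually'."""
    candidates = [i for i, s in enumerate(scores) if i != idx and s >= threshold]
    if not candidates:
        return None
    return min(candidates, key=lambda i: abs(i - idx))

scores = [0.9, 0.2, 0.3, 0.8]  # hypothetical per-frame definition scores
print(pick_reference_frame(scores, 1, 0.5))       # 0 (nearest sharp frame)
print(pick_reference_frame([0.1, 0.2], 0, 0.5))   # None -> prompt the user
```

Once frame B is chosen, an object with obvious features in B would serve as the shooting subject for scoring frame A, per the embodiment above.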
Further, the editing module 6 further includes:
the receiving unit is used for receiving a deleting instruction input by a user, and the deleting instruction carries a second mark;
and the deleting unit is used for deleting the blurred video frame corresponding to the second mark according to the deleting instruction.
In this embodiment, after the terminal labels and displays the blurred video frames, the user may choose to delete all or some of them to improve the overall display effect of the video. Specifically, the user enters a deleting instruction that carries a second mark, the second mark corresponding to the blurred video frame to be deleted. After receiving the deleting instruction, the terminal deletes the blurred video frame corresponding to the second mark, thereby improving the overall quality of the video and the user's viewing experience.
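A minimal sketch of the delete-instruction handling follows, where the second marks are modeled as indices into the list of labeled blurred frames (the embodiment does not specify how a second mark addresses a frame):

```python
def delete_marked(blurred, second_marks):
    """Remove the blurred frames that the user's deleting instruction
    marks; second_marks are indices into the labeled blurred-frame list."""
    marked = set(second_marks)
    return [f for i, f in enumerate(blurred) if i not in marked]

blurred = ["f4", "f9", "f12"]          # hypothetical labeled blurred frames
print(delete_marked(blurred, [0, 2]))  # ['f9'] -- only f9 is kept
```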
According to the video data processing device applied to the tyrtc platform, the device acquires original video data through the camera and stores it to the system layer. The original video data is then compressed at the system layer to obtain compressed video data, and finally the terminal transmits the compressed video data from the system layer to the application layer. Because the original video data initially obtained by the terminal is kept at the system layer and compressed there, the data volume of the compressed video data is greatly reduced relative to the original video data; transmitting only the compressed video data from the system layer to the application layer therefore greatly reduces the amount of video data transferred and effectively relieves the data processing pressure on the terminal.
Referring to fig. 3, an embodiment of the present application further provides a computer device, which may be a server and whose internal structure may be as shown in fig. 3. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. The processor of the computer device is configured to provide computation and control capabilities. The memory of the computer device includes a nonvolatile storage medium and an internal memory. The nonvolatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the nonvolatile storage medium. The database of the computer device is used for storing data such as the original video data. The network interface of the computer device is used for communicating with an external terminal through a network connection. When executed by the processor, the computer program implements the video data processing method applied to the tyrtc platform.
When executing the computer program, the processor implements the following steps of the video data processing method applied to the tyrtc platform:
s1, acquiring original video data;
s2, storing the original video data to a system layer;
s3, compressing the original video data at the system layer to obtain compressed video data;
and S4, transmitting the compressed video data to an application layer.
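Steps S1 through S4 can be sketched end to end as follows. The toy run-length encoder stands in for the system-layer video compressor, which the method does not name; the point illustrated is that only the compressed result crosses into the application layer:

```python
def rle_compress(data):
    """Toy run-length encoder standing in for the system-layer video
    compressor (the patent does not name a codec)."""
    out = []
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i] and j - i < 255:
            j += 1
        out.extend([j - i, data[i]])  # (run length, byte value) pair
        i = j
    return bytes(out)

def pipeline(raw):
    """S1-S4: raw data stays at the system layer; only the compressed
    form is transmitted to the application layer."""
    system_layer_raw = raw          # S2: store original data at system layer
    compressed = rle_compress(raw)  # S3: compress at the system layer
    application_layer = compressed  # S4: transmit compressed data upward
    return application_layer

raw = bytes(64)  # S1: hypothetical captured frame bytes (all zero)
print(len(pipeline(raw)))  # 2 -- a 64-byte run collapses to one pair
```

However the real compressor is implemented, the size reduction at S3 is what shrinks the system-to-application transfer that the embodiment emphasizes.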
Preferably, the step of compressing the original video data at the system layer to obtain compressed video data includes:
s301, reading the original video data through a video compressor at a system layer to generate code stream data;
s302, the code stream data is coded by using the video compressor to obtain the compressed video data.
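Steps S301 and S302 describe a two-stage flow: the video compressor first reads the original data into code stream data, then encodes that stream into the compressed video data. A minimal sketch follows, with zlib standing in for the unspecified video compressor; a real system would run a hardware or software video codec at the system layer:

```python
import zlib

def read_to_stream(raw_frames):
    """S301: read the original video data into code stream data
    (modeled as a flat byte stream; a real compressor would emit
    codec-specific units rather than a raw concatenation)."""
    return b"".join(raw_frames)

def encode_stream(stream):
    """S302: encode the code stream data to obtain the compressed
    video data (zlib is a stand-in for the actual video encoder)."""
    return zlib.compress(stream)

raw = [bytes([i % 4] * 32) for i in range(16)]  # hypothetical raw frames
stream = read_to_stream(raw)
compressed = encode_stream(stream)
print(len(stream), len(compressed) < len(stream))  # 512 True
```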
Further, after the step of transmitting the compressed video data to the application layer, the method includes:
s5, acquiring an editing instruction input by a user;
and S6, executing corresponding editing operation on the compressed video data according to the editing instruction.
Further, the editing instruction carries a first mark, and the step of executing a corresponding editing operation on the compressed video data according to the editing instruction includes:
s601, according to the first mark, sequentially marking the video frames of the compressed video data;
and S602, clipping the marked video frames according to the marking sequence.
Further, the step of performing a corresponding editing operation on the compressed video data according to the editing instruction includes:
s603, screening out blurred video frames from the compressed video data;
and S604, marking and displaying the blurred video frame.
Preferably, the step of filtering out blurred video frames from the compressed video data includes:
s6031, judging whether the definition of a shooting subject in a video frame of the compressed video data reaches a threshold value;
and S6032, if the definition of the shooting subject does not reach the threshold, marking the video frame as a blurred video frame.
Further, after the step of marking and displaying the blurred video frame, the method includes:
s6033, receiving a deleting instruction input by a user, wherein the deleting instruction carries a second mark;
and S6034, deleting the blurred video frame corresponding to the second mark according to the deleting instruction.
An embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements a video data processing method applied to the tyrtc platform, specifically:
s1, acquiring original video data;
s2, storing the original video data to a system layer;
s3, compressing the original video data at the system layer to obtain compressed video data;
and S4, transmitting the compressed video data to an application layer.
Preferably, the step of compressing the original video data at the system layer to obtain compressed video data includes:
s301, reading the original video data through a video compressor at a system layer to generate code stream data;
s302, the code stream data is coded by using the video compressor to obtain the compressed video data.
Further, after the step of transmitting the compressed video data to the application layer, the method includes:
s5, acquiring an editing instruction input by a user;
and S6, executing corresponding editing operation on the compressed video data according to the editing instruction.
Further, the editing instruction carries a first mark, and the step of executing a corresponding editing operation on the compressed video data according to the editing instruction includes:
s601, according to the first mark, sequentially marking the video frames of the compressed video data;
and S602, clipping the marked video frames according to the marking sequence.
Further, the step of performing a corresponding editing operation on the compressed video data according to the editing instruction includes:
s603, screening out blurred video frames from the compressed video data;
and S604, marking and displaying the blurred video frame.
Preferably, the step of filtering out blurred video frames from the compressed video data includes:
s6031, judging whether the definition of a shooting subject in a video frame of the compressed video data reaches a threshold value;
and S6032, if the definition of the shooting subject does not reach the threshold, marking the video frame as a blurred video frame.
Further, after the step of marking and displaying the blurred video frame, the method includes:
s6033, receiving a deleting instruction input by a user, wherein the deleting instruction carries a second mark;
and S6034, deleting the blurred video frame corresponding to the second mark according to the deleting instruction.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by a computer program instructing associated hardware; the computer program may be stored on a non-volatile computer-readable storage medium and, when executed, may include the processes of the above method embodiments. Any reference to memory, storage, a database, or another medium provided herein and used in the embodiments may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link (Synchlink) DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements does not include only those elements, but may include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising a/an ..." does not exclude the presence of another identical element in the process, apparatus, article, or method that comprises the element.
The above description is only for the preferred embodiment of the present application and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are intended to be included within the scope of the present application.

Claims (6)

1. A video data processing method applied to a tyrtc platform is characterized by comprising the following steps:
acquiring original video data;
storing the original video data to a system layer;
compressing the original video data at the system layer to obtain compressed video data;
transmitting the compressed video data to an application layer;
acquiring an editing instruction input by a user;
according to the editing instruction, executing corresponding editing operation on the compressed video data;
the editing instruction carries a plurality of first marks, and the step of executing corresponding editing operation on the compressed video data according to the editing instruction comprises the following steps:
marking video frames corresponding to the first marks in the compressed video data;
taking the marked video frame as a segmentation frame, and clipping the compressed video data to obtain a plurality of video segments;
the step of executing corresponding editing operation on the compressed video data according to the editing instruction further comprises:
screening out blurred video frames from the compressed video data;
marking and displaying the blurred video frame;
the step of screening out blurred video frames from the compressed video data comprises the following steps:
judging whether the definition of a shooting subject in a video frame of the compressed video data reaches a threshold value;
if the definition of the shooting subject does not reach the threshold, marking the video frame as a blurred video frame;
the step of judging whether the definition of the shooting subject in the video frame of the compressed video data reaches a threshold value includes:
judging whether the editing instruction contains a shooting subject selected by a user;
if the editing instruction does not contain a shooting subject selected by the user, screening out, from the compressed video data, a historical video frame that is nearest in sequence to the current video frame and whose definition reaches the threshold, and selecting an object with obvious features in the historical video frame as the shooting subject;
judging whether the definition of the outer contour of the image of the shooting subject of the current video frame reaches a threshold value;
and if the definition of the outer contour of the image of the shooting subject of the current video frame reaches a threshold value, judging that the definition of the shooting subject of the current video frame reaches the threshold value.
2. The method as claimed in claim 1, wherein the step of compressing the original video data at the system layer to obtain compressed video data includes:
reading the original video data at a system layer to generate code stream data;
and coding the code stream data to obtain the compressed video data.
3. The video data processing method according to claim 1, wherein after the step of marking and displaying the blurred video frame, the method further comprises:
receiving a deleting instruction input by a user, wherein the deleting instruction carries a second mark;
and deleting the blurred video frame corresponding to the second mark according to the deleting instruction.
4. A video data processing apparatus applied to a tyrtc platform, comprising:
the first acquisition module is used for acquiring original video data;
the storage module is used for storing the original video data to a system layer;
the compression module is used for compressing the original video data at the system layer to obtain compressed video data;
a transmission module for transmitting the compressed video data to an application layer;
the second acquisition module is used for acquiring an editing instruction input by a user;
the editing module is used for executing corresponding editing operation on the compressed video data according to the editing instruction;
the editing instruction carries a plurality of first marks, and the step of executing corresponding editing operation on the compressed video data according to the editing instruction comprises the following steps:
marking video frames corresponding to the first marks in the compressed video data;
taking the marked video frame as a segmentation frame, and clipping the compressed video data to obtain a plurality of video segments;
the editing module further comprises:
the screening unit is used for screening out blurred video frames from the compressed video data;
the labeling unit is used for labeling and displaying the blurred video frame;
the screening unit includes:
a judging subunit, configured to judge whether the definition of a shooting subject in a video frame of the compressed video data reaches a threshold;
the marking subunit is used for marking the video frame as a blurred video frame if the definition of the shooting subject does not reach a threshold;
the judgment subunit is specifically configured to:
judging whether the editing instruction contains a shooting subject selected by a user;
if the editing instruction does not contain a shooting subject selected by the user, screening out, from the compressed video data, a historical video frame that is nearest in sequence to the current video frame and whose definition reaches the threshold, and selecting an object with obvious features in the historical video frame as the shooting subject;
judging whether the definition of the image outer contour of the shooting subject of the current video frame reaches a threshold value;
and if the definition of the outer contour of the image of the shooting subject of the current video frame reaches a threshold value, judging that the definition of the shooting subject of the current video frame reaches the threshold value.
5. A computer device comprising a memory and a processor, the memory having a computer program stored therein, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 3.
6. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 3.
CN202010507918.0A 2020-06-05 2020-06-05 Video data processing method applied to tyrtc platform and related equipment Expired - Fee Related CN111586476B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010507918.0A CN111586476B (en) 2020-06-05 2020-06-05 Video data processing method applied to tyrtc platform and related equipment


Publications (2)

Publication Number Publication Date
CN111586476A CN111586476A (en) 2020-08-25
CN111586476B true CN111586476B (en) 2022-11-01

Family

ID=72127246

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010507918.0A Expired - Fee Related CN111586476B (en) 2020-06-05 2020-06-05 Video data processing method applied to tyrtc platform and related equipment

Country Status (1)

Country Link
CN (1) CN111586476B (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7769819B2 (en) * 2005-04-20 2010-08-03 Videoegg, Inc. Video editing with timeline representations
CN105493512B (en) * 2014-12-14 2018-07-06 深圳市大疆创新科技有限公司 A kind of method for processing video frequency, video process apparatus and display device
CN108924502A (en) * 2018-07-26 2018-11-30 成都派视科技有限公司 A kind of portable image transmission system and its figure transmission method


Similar Documents

Publication Publication Date Title
US8798168B2 (en) Video telecommunication system for synthesizing a separated object with a new background picture
JP6247324B2 (en) Method for dynamically adapting video image parameters to facilitate subsequent applications
US20060075054A1 (en) Method and system for implementing instant communication of images through instant messaging tool
US20240179272A1 (en) Virtual image video call method, terminal device, and storage medium
CN105979189A (en) Video signal processing and storing method and video signal processing and storing system
CN110958399A (en) High dynamic range image HDR realization method and related product
CN112487396A (en) Picture processing method and device, computer equipment and storage medium
CN101296419A (en) Calling business card customization method, service implementing method, system and server thereof
CN111586476B (en) Video data processing method applied to tyrtc platform and related equipment
CN113141352B (en) Multimedia data transmission method and device, computer equipment and storage medium
CN111953980B (en) Video processing method and device
CN104049833A (en) Terminal screen image displaying method based on individual biological characteristics and terminal screen image displaying device based on individual biological characteristics
CN110765869B (en) Lip language living body detection method, system and computer equipment for collecting data by channels
CN114339118B (en) Video transmission method and system based on full duplex network
CN110581974B (en) Face picture improving method, user terminal and computer readable storage medium
CN113411503B (en) Cloud mobile phone camera preview method and device, computer equipment and storage medium
CN107612881B (en) Method, device, terminal and storage medium for transmitting picture during file transmission
JP2000261774A (en) Method for segmenting and transmitting portrait
KR100460221B1 (en) Video communication system
CN112911003B (en) Electronic data extraction method, computer device, and storage medium
CN112004065B (en) Video display method, display device and storage medium
CN112866604B (en) Video file generation method and device, computer equipment and storage medium
CN110457264B (en) Conference file processing method, device, equipment and computer readable storage medium
CN111131852B (en) Video live broadcast method, system and computer readable storage medium
CN114300007A (en) WebRTC-based audio and video recording method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20221101