CN109120911B - Video conversion method, apparatus, device, and medium - Google Patents

Video conversion method, apparatus, device, and medium Download PDF

Info

Publication number
CN109120911B
CN109120911B (application CN201810743266.3A)
Authority
CN
China
Prior art keywords
format
video
target video
frame
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810743266.3A
Other languages
Chinese (zh)
Other versions
CN109120911A (en)
Inventor
郑伟
陈少杰
张文明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Sipai Technology Co ltd
Original Assignee
Wuhan Douyu Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Douyu Network Technology Co Ltd filed Critical Wuhan Douyu Network Technology Co Ltd
Priority to CN201810743266.3A priority Critical patent/CN109120911B/en
Publication of CN109120911A publication Critical patent/CN109120911A/en
Application granted granted Critical
Publication of CN109120911B publication Critical patent/CN109120911B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/64Circuits for processing colour signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The present invention discloses a video conversion method, apparatus, device, and medium. The method includes: downloading a target video and decoding the target video; converting the format of frames in the target video from YUV to RGB according to the formulas R=Y, G=Y, and B=Y, where R is the R channel value of the RGB format, G is the G channel value of the RGB format, and B is the B channel value of the RGB format, and Y, U, and V are the Y, U, and V channel values of the YUV format; and performing video analysis on the target video after the format conversion. The method and apparatus provided by the present application address the prior-art technical problem that video analysis is computationally intensive and slow, and achieve the technical effect of improving the efficiency and speed of video analysis.

Description

Video conversion method, apparatus, device, and medium
Technical Field
The present invention relates to the field of computer technology, and in particular to a video conversion method, apparatus, device, and medium.
Background Art
At present, with the progress of network communication technology and the increase of broadband speeds, network live streaming has seen ever wider development and application. To guarantee the healthy development of live streaming and to obtain data about the streaming process, it is generally necessary to monitor live video for objectionable images, or to analyze the live video for key data, so that the streaming platform can take corrective measures. This requires performing video analysis either during the live stream or after the video has been obtained.
However, whether video analysis is performed synchronously during the live stream or on stored video afterwards, considerable resources are spent on decoding, conversion, analysis, and other computation; the computational load is heavy and the processing speed is slow.
It can be seen that video analysis in the prior art suffers from the technical problems of heavy computation and slow processing speed.
Summary of the invention
The present invention provides a video conversion method, apparatus, device, and medium to solve the technical problems of heavy computation and slow processing speed in prior-art video analysis.
In a first aspect, the present invention provides a video conversion method, comprising:
downloading a target video and decoding the target video;
after determining that a key information area of the target video is a single-color scene, converting the format of frames in the target video from YUV to RGB according to the formulas R=Y, G=Y, and B=Y, where R is the R channel value of the RGB format, G is the G channel value of the RGB format, and B is the B channel value of the RGB format; Y is the Y channel value of the YUV format, U is the U channel value of the YUV format, and V is the V channel value of the YUV format;
performing video analysis on the target video after the format conversion, where the video analysis comprises: obtaining key information from the single-color key information area and analyzing the key information.
Optionally, converting the format of frames in the target video from YUV to RGB according to the formulas R=Y, G=Y, and B=Y comprises: if it is determined, according to a color identifier preset in attribute information of the target video, that the target video is a single-color scene, converting the format of frames in the target video from YUV to RGB according to the formulas R=Y, G=Y, and B=Y; or, if the decoded target video is compared with preset scene features and the comparison result shows that the target video is a single-color scene, converting the format of frames in the target video from YUV to RGB according to the formulas R=Y, G=Y, and B=Y.
Optionally, converting the format of frames in the target video from YUV to RGB according to the formulas R=Y, G=Y, and B=Y comprises: obtaining a first time interval; during downloading and decoding of the target video, extracting target frames from the decoded frames of the target video according to the first time interval; converting the format of the target frames from YUV to RGB according to the formulas R=Y, G=Y, and B=Y; and performing no YUV-to-RGB conversion on frames other than the target frames.
Optionally, obtaining the first time interval comprises: determining a target scene type of the target video; and determining the first time interval corresponding to the target scene type according to a preset correspondence between scene types and time intervals.
In a second aspect, a video conversion apparatus is provided, comprising:
a download module, configured to download a target video and decode the target video;
a processing module, configured to, after determining that a key information area of the target video is a single-color scene, convert the format of frames in the target video from YUV to RGB according to the formulas R=Y, G=Y, and B=Y, where R is the R channel value of the RGB format, G is the G channel value of the RGB format, and B is the B channel value of the RGB format; Y is the Y channel value of the YUV format, U is the U channel value of the YUV format, and V is the V channel value of the YUV format;
an analysis module, configured to perform video analysis on the target video after the format conversion, where the video analysis comprises: obtaining key information from the single-color key information area and analyzing the key information.
Optionally, the processing module is further configured to: if it is determined, according to a color identifier preset in attribute information of the target video, that the target video is a single-color scene, convert the format of frames in the target video from YUV to RGB according to the formulas R=Y, G=Y, and B=Y; or, if the decoded target video is compared with preset scene features and the comparison result shows that the target video is a single-color scene, convert the format of frames in the target video from YUV to RGB according to the formulas R=Y, G=Y, and B=Y.
Optionally, the processing module is further configured to: obtain a first time interval; during downloading and decoding of the target video, extract target frames from the decoded frames of the target video according to the first time interval; convert the format of the target frames from YUV to RGB according to the formulas R=Y, G=Y, and B=Y; and perform no YUV-to-RGB conversion on frames other than the target frames.
Optionally, the processing module is further configured to: determine a target scene type of the target video; and determine the first time interval corresponding to the target scene type according to a preset correspondence between scene types and time intervals.
In a third aspect, an electronic device is provided, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the method of the first aspect when executing the program.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, where the program implements the method of the first aspect when executed by a processor.
The one or more technical solutions provided in the embodiments of the present invention have at least the following technical effects or advantages:
With the method, apparatus, device, and medium provided by the embodiments of the present application, after the target video is decoded, the format of frames in the target video is converted from YUV to RGB using the simplified formulas R=Y, G=Y, and B=Y, and video analysis is then performed, where R is the R channel value of the RGB format, G is the G channel value of the RGB format, and B is the B channel value of the RGB format; Y is the Y channel value of the YUV format, U is the U channel value of the YUV format, and V is the V channel value of the YUV format. The simplified format-conversion formula greatly reduces the amount of computation, thereby greatly reducing the computation and computing time involved in video analysis and effectively improving its efficiency.
The above is only an overview of the technical solution of the present invention. In order that the technical means of the present invention may be understood more clearly and implemented in accordance with the contents of the specification, and in order that the above and other objects, features, and advantages of the present invention may be more readily apparent, specific embodiments of the present invention are set forth below.
Brief Description of the Drawings
In order to explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flowchart of a video conversion method in an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of a video conversion apparatus in an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a device in an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a medium in an embodiment of the present invention.
Detailed Description of the Embodiments
By providing a video conversion method, apparatus, device, and medium, the embodiments of the present application solve the technical problems of heavy computation and slow processing speed in prior-art video analysis, and achieve the technical effect of improving the efficiency and speed of video analysis.
The general idea of the technical solutions in the embodiments of the present application is as follows:
With the method, apparatus, device, and medium provided by the embodiments of the present application, after the target video is decoded, the format of frames in the target video is converted from YUV to RGB using the simplified formulas R=Y, G=Y, and B=Y, and video analysis is then performed, where R is the R channel value of the RGB format, G is the G channel value of the RGB format, and B is the B channel value of the RGB format; Y is the Y channel value of the YUV format, U is the U channel value of the YUV format, and V is the V channel value of the YUV format. The simplified format-conversion formula greatly reduces the amount of computation, thereby greatly reducing the computation and computing time involved in video analysis and effectively improving its efficiency.
In order to make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the scope of protection of the present invention.
Embodiment one
This embodiment provides a video conversion method which, as shown in Fig. 1, comprises:
Step S101: downloading a target video and decoding the target video;
Step S102: after determining that a key information area of the target video is a single-color scene, converting the format of frames in the target video from YUV to RGB according to the formulas R=Y, G=Y, and B=Y, where R is the R channel value of the RGB format, G is the G channel value of the RGB format, and B is the B channel value of the RGB format; Y is the Y channel value of the YUV format, U is the U channel value of the YUV format, and V is the V channel value of the YUV format;
Step S103: performing video analysis on the target video after the format conversion, where the video analysis comprises: obtaining key information from the single-color key information area and analyzing the key information.
It should be noted that the method provided in this embodiment can be used both for real-time video analysis during a live stream and for analysis of stored historical video; no restriction is imposed here.
The method can be applied to a single server, a server group, or a cloud server, which is likewise not restricted here.
The specific implementation steps of the method provided by the present application are described in detail below with reference to Fig. 1.
First, step S101 is executed: downloading the target video and decoding the target video.
Before or after executing step S101, it may first be judged whether the key information area of the target video belongs to a single-color scene; if it does, the video conversion method provided by the embodiments of the present application is used, and if it does not, that method is not used.
There are many possible ways of making this judgment; two are set out below.
First: preset attribute information.
A member of staff identifies in advance whether the key information area of the target video, or the entire target video, is a single-color scene, and modifies the color identifier in the attribute information of the target video accordingly. If the server determines from the color identifier preset in the attribute information of the target video that the target video is a single-color scene, it converts the format of frames in the target video from YUV to RGB according to the formulas R=Y, G=Y, and B=Y.
For example, the staff sets the color identifier in the target video's attribute information to 0 to indicate a single-color scene video, and to 1 to indicate a video whose scene is not single-color. The server can then decide from the value of the color identifier in the attribute information whether to use the video conversion method provided in this embodiment.
Second: feature comparison.
Multiple scene features matching single-color videos are preset; the decoded target video is compared with the preset scene features, and if the comparison result shows that the key information area of the target video, or the entire target video, is a single-color scene, the format of frames in the target video is converted from YUV to RGB according to the formulas R=Y, G=Y, and B=Y.
For example, the preset scene features matching single-color videos include a checkerboard feature image, a dice feature image, and a panda feature image. The server compares the decoded image with the preset scene features; if a matching feature is found, it determines that the target video is a single-color scene and decides to use the video conversion method provided in this embodiment, as sketched below.
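As an illustration of this second judging method, the comparison against preset scene features could be implemented with template matching. The sketch below uses Python and OpenCV purely as an assumption of this description (the patent names no implementation); the template file names and the match threshold are placeholders.

```python
import cv2
import numpy as np

# Hypothetical preset feature templates for single-color scenes;
# the file names are placeholders, not taken from the patent.
TEMPLATE_PATHS = ["checkerboard.png", "dice.png", "panda.png"]
MATCH_THRESHOLD = 0.8  # illustrative threshold

def is_single_color_scene(decoded_frame_gray: np.ndarray) -> bool:
    """Compare a decoded frame against the preset scene features;
    any sufficiently strong match marks the video as a single-color scene."""
    for path in TEMPLATE_PATHS:
        template = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        if template is None:
            continue
        scores = cv2.matchTemplate(decoded_frame_gray, template, cv2.TM_CCOEFF_NORMED)
        if scores.max() >= MATCH_THRESHOLD:
            return True
    return False
```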
Of course, in a specific implementation the judgment is not limited to the two methods above; no restriction is imposed here, and the alternatives are not enumerated one by one.
Next, step S102 is executed: after determining that the key information area of the target video is a single-color scene, converting the format of frames in the target video from YUV to RGB according to the formulas R=Y, G=Y, and B=Y, where R is the R channel value of the RGB format, G is the G channel value of the RGB format, and B is the B channel value of the RGB format; Y is the Y channel value of the YUV format, U is the U channel value of the YUV format, and V is the V channel value of the YUV format.
Specifically, to facilitate analysis, the format of the decoded frames can be converted from YUV (a pixel format in which the luma component and the chroma components are represented separately) to RGB (a color model), after which video analysis is performed on the converted frames and the analysis results are output. This embodiment simplifies the conversion formula in order to improve analysis efficiency.
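Purely for illustration, the sketch below (Python with NumPy, an assumption of this description rather than anything specified by the patent) places the simplified conversion next to a standard BT.601 full-range conversion. The simplified path copies the Y plane into all three RGB channels and never reads U or V, which is where the computational saving comes from; the result is effectively a grayscale image in an RGB buffer.

```python
import numpy as np

def yuv_to_rgb_simplified(y_plane: np.ndarray) -> np.ndarray:
    """Simplified conversion per R=Y, G=Y, B=Y: the U and V planes are
    ignored entirely, which suffices when the key information area is a
    single-color scene."""
    return np.stack([y_plane, y_plane, y_plane], axis=-1)

def yuv_to_rgb_bt601(y, u, v):
    """For comparison: a standard BT.601 full-range conversion needs the
    chroma planes and several multiplications per pixel."""
    y = y.astype(np.float32)
    u = u.astype(np.float32) - 128.0
    v = v.astype(np.float32) - 128.0
    r = y + 1.402 * v
    g = y - 0.344136 * u - 0.714136 * v
    b = y + 1.772 * u
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255).astype(np.uint8)
```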
In the embodiments of the present application, the conversion process differs according to the category of the video, as illustrated below.
First: live video.
When the target video is a live video, the server downloads the live video stream from the streamer's end while decoding it, and at the same time performs real-time format conversion and video analysis using the simplified formula above.
Second: stored historical video.
When the target video is a video already stored on the server, every frame of the video can be decoded directly by traversal, and the decoded video frames are then format-converted and analyzed using the simplified formula above, as in the sketch below.
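A minimal sketch of this stored-video path, under the assumption that the PyAV library is used for decoding (the patent names no decoder); the `analyze_frame` argument is a hypothetical analysis hook supplied by the caller.

```python
import av          # PyAV, used here as an illustrative decoder choice
import numpy as np

def analyze_stored_video(path, analyze_frame):
    """Traverse every frame of a stored video, apply the simplified
    R=Y, G=Y, B=Y conversion, and pass the result to `analyze_frame`."""
    container = av.open(path)
    for frame in container.decode(video=0):
        yuv = frame.to_ndarray(format="yuv420p")              # planar; Y plane first
        y_plane = yuv[:frame.height, :frame.width]
        rgb = np.stack([y_plane, y_plane, y_plane], axis=-1)  # R=Y, G=Y, B=Y
        analyze_frame(rgb)
```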
Next, step S103 is executed: performing video analysis on the target video after the format conversion, where the video analysis comprises: obtaining key information from the single-color key information area and analyzing the key information.
The video analysis may consist of extracting key information from the converted frames and carrying out further analysis based on that key information. For example, the key information may be the kill-count information in a battle game; the object of the video analysis is then the kill count, and analyses such as player strength or game difficulty can be carried out from the kill count of each frame.
To make the solution of this embodiment easier to grasp as a whole, a specific example is given. The server first obtains the stream address of the target video and then downloads the target video. During downloading, the downloaded frames are decoded. During decoding, the decoded video frames are format-converted using the formulas R=Y, G=Y, and B=Y, after which video analysis is performed on the converted target frames and the analysis results are output.
In one embodiment, a first time interval may also be obtained before executing step S102. While the target video is being downloaded and decoded, or after it has been decoded, target frames are extracted from the decoded frames of the target video according to the first time interval, and only those target frames are converted from YUV to RGB using the simplified formula of step S102, further reducing the amount of computation; a sketch of this sampling follows.
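A sketch of the interval-based extraction; it assumes the decoder yields (timestamp-in-seconds, frame) pairs, which is an illustrative assumption rather than anything required by the patent.

```python
def sample_target_frames(decoded_frames, first_interval_s):
    """Yield only target frames spaced at least `first_interval_s` seconds
    apart by presentation timestamp; all other frames skip the
    YUV-to-RGB conversion entirely."""
    last_pts = None
    for pts, frame in decoded_frames:      # (pts_seconds, frame) pairs
        if last_pts is None or pts - last_pts >= first_interval_s:
            last_pts = pts
            yield frame                    # only these are converted and analyzed
```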
In the embodiments of the present application, the target scene type of the target video may first be determined, and the first time interval corresponding to that target scene type is then determined according to a preset correspondence between scene types and time intervals.
Specifically, the key information of different scenes changes at different average frequencies, so the corresponding first time intervals also differ. For example, the interval between kills in a live-streamed battle game is often long, so a longer first time interval is set, while the interval between plays in a live-streamed chess or card game is often short, so a shorter first time interval is set. In practice, staff can pre-establish a table mapping scene types to time intervals, and the first time interval is obtained by looking it up, as in the sketch below. The target scene type itself can be determined from a scene identifier preset in the attribute information of the target video; alternatively, while the target video is being downloaded and decoded, the current frame of the most recently decoded portion is compared with preset scene features and the scene type of the target video is determined from the comparison result. No restriction is imposed here.
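The lookup can be as simple as the following sketch; the scene names and interval values are illustrative assumptions rather than figures from the patent.

```python
# Hypothetical scene-type-to-interval table.
SCENE_INTERVALS_S = {
    "battle_game": 5.0,   # kills are sparse, so a longer interval suffices
    "card_game":   1.0,   # plays are frequent, so sample more often
}
DEFAULT_INTERVAL_S = 2.0

def first_interval_for(scene_type):
    """Look up the first time interval for a scene type, falling back to
    a default when the type is not in the table."""
    return SCENE_INTERVALS_S.get(scene_type, DEFAULT_INTERVAL_S)
```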
In the embodiments of the present application, after the target frames are extracted, only the target frames are format-converted and analyzed; the remaining frames are not processed further, which greatly reduces the amount of computation and improves the speed of video analysis.
In another embodiment, before executing step S102, while the target video is being downloaded or after it has been downloaded, the I-frames of the target video may be identified; the I-frames are decoded, format-converted using the simplified formula, and analyzed, while the remaining frames of the target video other than the I-frames are neither decoded nor format-converted.
It should be noted that an I-frame (intra frame) is also called an intra picture. An I-frame is usually the first frame of each GOP (the frames from one I-frame up to the next form a GOP; typically there is one I-frame every 90 to 600 frames). It is only moderately compressed and serves as a reference point for random access, so it can be treated as a complete image. Video coding protocols define three kinds of frames: a fully encoded frame is an I-frame, a frame that encodes only the differences relative to a preceding I-frame is a P-frame, and a frame encoded with reference to both preceding and following frames is a B-frame. The core algorithms of a coding protocol are intra-frame compression, which produces I-frames, and inter-frame compression, which produces B-frames and P-frames. Only I-frames can be decoded independently; otherwise every frame must be decoded.
In a specific implementation, the I-frames of the target video can be identified by following the rules of the video coding protocol. Specifically, each I-frame carries an identifier by which it can be recognized; for example, an I-frame can be identified from the NALU type, i.e. the parameter nal_unit_type. Of course, other existing I-frame identification methods may also be used and are not elaborated here.
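As one hedged illustration for an H.264 Annex B bytestream: nal_unit_type is the low five bits of the first byte of each NAL unit, and treating type 5 (an IDR slice) as an I-frame covers the usual random-access case, though not every possible I slice.

```python
import re

def iter_nal_units(annexb: bytes):
    """Split an H.264 Annex B bytestream on 0x000001 start codes
    (the 4-byte form 0x00000001 is matched by its last three bytes)."""
    starts = [m.end() for m in re.finditer(b"\x00\x00\x01", annexb)]
    for begin, end in zip(starts, starts[1:] + [len(annexb)]):
        yield annexb[begin:end]

def is_idr_nal(nal_unit: bytes) -> bool:
    """nal_unit_type is the low five bits of the first NAL byte;
    type 5 is a coded slice of an IDR picture, i.e. an I-frame that can
    be decoded independently and used as a random-access point."""
    return bool(nal_unit) and (nal_unit[0] & 0x1F) == 5
```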
In the embodiments of the present application, the process of identifying I-frames also differs according to the category of the video, as illustrated below.
First: live video.
When the target video is a live video, the server downloads the live video stream from the streamer's end while simultaneously performing I-frame identification and decoding analysis on the video just downloaded, so that I-frames are decoded efficiently during the live stream for real-time video analysis.
Second: stored historical video.
When the target video is a video already stored on the server, every frame of the video can be traversed directly to identify all the I-frames, which are then decoded for video analysis.
Then, step S103 is executed: the I-frames are decoded and video analysis is performed on the decoded I-frames; the remaining frames of the target video other than the I-frames are not decoded.
That is, after the I-frames are identified, only the I-frames are decoded, and only the decoded I-frames are format-converted using the aforementioned simplified formula and analyzed; the remaining frames are not processed further, which greatly reduces the amount of computation and further increases the speed of video analysis.
Further, in the embodiments of the present application, if only the I-frames are decoded, video analysis may also be performed on the format-converted I-frames to generate analysis feedback information. Based on the analysis feedback information, it is monitored whether any identification point has been missed. If an identification point has been missed, the decoding mode is switched: every frame of the currently downloaded target video is decoded, and every decoded frame is format-converted using the aforementioned simplified formula and analyzed.
The analysis feedback information may be generated as follows: preset identification-point image information is extracted from the decoded I-frames, and it is judged whether the identification-point image information is continuous; if it is continuous, analysis feedback information indicating that no identification point has been missed is generated; if it is not continuous, analysis feedback information indicating that an identification point has been missed is generated.
For example, in a battle game the current kill count of one's own side is usually displayed continuously on the game screen; the image at the position where the kill count is displayed is the identification-point image, and the identification-point image information, i.e. the specific kill count, can be extracted by analyzing that part of the image. If no identification point is missed, the identification-point image information (the kill count) should form a continuous sequence of numbers such as 1, 2, 3, 4, 5, .... If the analysis finds that the identification-point image information jumps directly from 3 to 5, it is confirmed that the kill count around 4 was missed, i.e. an identification point has been missed, and analysis feedback information indicating that an identification point has been missed is generated; a sketch of this check follows.
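A minimal sketch of the continuity check; it assumes each analyzed frame yields one integer kill count, and the "Y"/"N" feedback characters follow the convention described in the next paragraphs.

```python
def check_continuity(kill_counts):
    """`kill_counts` are the identification-point values read from
    successive analyzed frames. A jump larger than 1 means a value was
    skipped, i.e. an identification point was missed."""
    for prev, cur in zip(kill_counts, kill_counts[1:]):
        if cur - prev > 1:
            return False, "N"   # missed -> feedback triggers per-frame decoding
    return True, "Y"

# Example: check_continuity([1, 2, 3, 5]) returns (False, "N"),
# because the value 4 was never observed.
```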
Of course, in a specific implementation the continuity of the identification-point image information need not be numerical only; it may also be alphabetical continuity, or continuity defined by a preset continuity table. No restriction is imposed here.
It can be preset that when an identification point has been missed, the analysis feedback information is set to a character such as 0 or N, and when no identification point has been missed, the analysis feedback information is set to a character such as 1 or Y. Whether any identification point has been missed can therefore be monitored by examining the specific content of the feedback information.
In another embodiment, before executing step S102 and while the target video is being downloaded, the downloaded target video may be continuously stored into a cache, so that the cache always holds the target video downloaded within a preset duration before the current moment. The I-frames of the target video are then identified and decoded, and only the decoded I-frames are format-converted using the aforementioned simplified formula and analyzed, generating analysis feedback information; the remaining frames of the target video other than the I-frames are not decoded. Based on the analysis feedback information, it is monitored whether any identification point has been missed. If an identification point has been missed, the cached video segment stored in the cache is extracted, every frame of the cached video segment is decoded, every decoded frame is format-converted using the aforementioned simplified formula and analyzed, and the cache is then emptied.
In the embodiments of the present application, a buffer register (buffer) is used to cache the video during downloading, and the cache is managed as a FIFO so that it continuously holds the target video downloaded within the preset duration before the current moment. The preset duration may be 1 minute, 40 seconds, and so on; no restriction is imposed here.
For example, assuming the preset duration is 1 minute and the video frames of the 25th minute of the target video are currently being downloaded, the video frames of minutes 24-25 are stored in the cache. As downloading continues, the earliest part of the video is removed from the cache, so that the cache is always guaranteed to hold the most recent 1 minute of video.
Once monitoring finds that an identification point has been missed, in order to recover the missing identification-point information, the video frames currently in the cache are extracted, and every one of them is decoded and analyzed. Since every frame in the cache has then been decoded and analyzed, the current cache can be emptied, saving cache space and avoiding repeated decoding. After the cache is emptied, the video that continues to be downloaded is again stored into the cache in the manner described above, so that the cache once more holds the target video downloaded within the preset duration before the current moment; a sketch of such a cache follows.
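A sketch of such a FIFO cache, assuming frames are cached as (timestamp, bytes) pairs; `drain` corresponds to extracting the cached segment for full per-frame decoding and then emptying the cache.

```python
from collections import deque

class FrameCache:
    """FIFO cache holding roughly the last `window_s` seconds of downloaded
    frames, keyed by presentation timestamp."""
    def __init__(self, window_s=60.0):
        self.window_s = window_s
        self._frames = deque()

    def push(self, pts, frame_bytes):
        """Store a newly downloaded frame and evict everything older than
        the window, so the cache always covers the preset duration."""
        self._frames.append((pts, frame_bytes))
        while self._frames and pts - self._frames[0][0] > self.window_s:
            self._frames.popleft()

    def drain(self):
        """Hand back all cached frames for full per-frame decoding, then
        empty the cache to save space and avoid repeated decoding."""
        cached, self._frames = list(self._frames), deque()
        return cached
```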
It may be arranged that, while the cached video segment extracted from the cache is being decoded frame by frame, the following is executed in parallel: identifying the I-frames of the target video currently being downloaded and decoding those I-frames. Identifying and decoding the I-frames of the video currently being downloaded can thus proceed in parallel with decoding the cached video, keeping the analysis in step with the live stream.
In summary, after the target video is decoded, the format of frames in the target video is converted from YUV to RGB using the simplified formulas R=Y, G=Y, and B=Y, and video analysis is then performed, where R is the R channel value of the RGB format, G is the G channel value of the RGB format, and B is the B channel value of the RGB format; Y is the Y channel value of the YUV format, U is the U channel value of the YUV format, and V is the V channel value of the YUV format. The simplified format-conversion formula greatly reduces the amount of computation, thereby greatly reducing the computation and computing time involved in video analysis and effectively improving its efficiency.
Based on the same inventive concept, the present invention also provides an apparatus corresponding to the video conversion method of Embodiment One; see Embodiment Two for details.
Embodiment two
This embodiment provides a video conversion apparatus which, as shown in Fig. 2, comprises:
a download module 201, configured to download a target video and decode the target video;
a processing module 202, configured to, after determining that a key information area of the target video is a single-color scene, convert the format of frames in the target video from YUV to RGB according to the formulas R=Y, G=Y, and B=Y, where R is the R channel value of the RGB format, G is the G channel value of the RGB format, and B is the B channel value of the RGB format; Y is the Y channel value of the YUV format, U is the U channel value of the YUV format, and V is the V channel value of the YUV format;
an analysis module 203, configured to perform video analysis on the target video after the format conversion, where the video analysis comprises: obtaining key information from the single-color key information area and analyzing the key information.
Optionally, the processing module 202 is further configured to: if it is determined, according to a color identifier preset in attribute information of the target video, that the target video is a single-color scene, convert the format of frames in the target video from YUV to RGB according to the formulas R=Y, G=Y, and B=Y; or, if the decoded target video is compared with preset scene features and the comparison result shows that the target video is a single-color scene, convert the format of frames in the target video from YUV to RGB according to the formulas R=Y, G=Y, and B=Y.
Optionally, the processing module 202 is further configured to: obtain a first time interval; during downloading and decoding of the target video, extract target frames from the decoded frames of the target video according to the first time interval; convert the format of the target frames from YUV to RGB according to the formulas R=Y, G=Y, and B=Y; and perform no YUV-to-RGB conversion on frames other than the target frames.
Optionally, the processing module 202 is further configured to: determine a target scene type of the target video; and determine the first time interval corresponding to the target scene type according to a preset correspondence between scene types and time intervals.
Since the apparatus introduced in Embodiment Two of the present invention is the apparatus used to implement the method of Embodiment One, those skilled in the art can, on the basis of the method introduced in Embodiment One, understand the specific structure and variations of the apparatus, so details are not repeated here. Any apparatus used to implement the method of Embodiment One of the present invention falls within the intended scope of protection of the present invention.
Based on the same inventive concept, an embodiment of the present invention also provides an electronic device embodiment corresponding to the method of Embodiment One; see Embodiment Three.
Embodiment three
This embodiment provides an electronic device which, as shown in Fig. 3, comprises a memory 310, a processor 320, and a computer program 311 stored in the memory 310 and executable on the processor 320; when the processor 320 executes the computer program 311, any implementation of Embodiment One can be realized.
Since the electronic device introduced in this embodiment is the device used to implement the method of Embodiment One of the present application, those skilled in the art can, on the basis of the method introduced in Embodiment One, understand the specific implementation of the electronic device of this embodiment and its various variations, so how the electronic device implements the method of the embodiments of the present application is not described in detail here. Any device used by those skilled in the art to implement the method of the embodiments of the present application falls within the intended scope of protection of the present application.
Based on the same inventive concept, the present application also provides a storage medium corresponding to Embodiment One; see Embodiment Four for details.
Embodiment Four
This embodiment provides a computer-readable storage medium 400 which, as shown in Fig. 4, has a computer program 411 stored thereon; when the program 411 is executed by a processor, any implementation of Embodiment One can be realized.
The technical solutions provided in the embodiments of the present application have at least the following technical effects or advantages:
With the method and apparatus provided by the embodiments of the present application, after the target video is decoded, the format of frames in the target video is converted from YUV to RGB using the simplified formulas R=Y, G=Y, and B=Y, and video analysis is then performed, where R is the R channel value of the RGB format, G is the G channel value of the RGB format, and B is the B channel value of the RGB format; Y is the Y channel value of the YUV format, U is the U channel value of the YUV format, and V is the V channel value of the YUV format. The simplified format-conversion formula greatly reduces the amount of computation, thereby greatly reducing the computation and computing time involved in video analysis and effectively improving its efficiency.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data-processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data-processing device produce an apparatus for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data-processing device to operate in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data-processing device, such that a series of operational steps are executed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, those skilled in the art can make additional changes and modifications to these embodiments once they learn of the basic inventive concept. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all changes and modifications falling within the scope of the present invention.
Obviously, those skilled in the art can make various modifications and variations to the embodiments of the present invention without departing from the spirit and scope of the embodiments of the present invention. Thus, if these modifications and variations of the embodiments of the present invention fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to include them.

Claims (10)

1. A video conversion method, characterized by comprising:
downloading a target video and decoding the target video;
after determining that a key information area of the target video is a single-color scene, converting the format of frames in the target video from YUV to RGB according to the formulas R=Y, G=Y, and B=Y, wherein R is the R channel value of the RGB format, G is the G channel value of the RGB format, and B is the B channel value of the RGB format; Y is the Y channel value of the YUV format, U is the U channel value of the YUV format, and V is the V channel value of the YUV format;
performing video analysis on the target video after the format conversion, wherein the video analysis comprises: obtaining key information from the single-color key information area and analyzing the key information.
2. The method according to claim 1, characterized in that converting the format of frames in the target video from YUV to RGB according to the formulas R=Y, G=Y, and B=Y comprises:
if it is determined, according to a color identifier preset in attribute information of the target video, that the target video is a single-color scene, converting the format of frames in the target video from YUV to RGB according to the formulas R=Y, G=Y, and B=Y; or
if the decoded target video is compared with preset scene features and the comparison result shows that the target video is a single-color scene, converting the format of frames in the target video from YUV to RGB according to the formulas R=Y, G=Y, and B=Y.
3. The method according to claim 2, characterized in that:
before converting the format of frames in the target video from YUV to RGB, the method comprises: obtaining a first time interval;
downloading the target video and decoding the target video comprises: during downloading and decoding of the target video, extracting target frames from the decoded frames of the target video according to the first time interval;
converting the format of frames in the target video from YUV to RGB according to the formulas R=Y, G=Y, and B=Y comprises:
converting the format of the target frames from YUV to RGB according to the formulas R=Y, G=Y, and B=Y; and performing no YUV-to-RGB conversion on frames other than the target frames.
4. The method according to claim 3, characterized in that obtaining the first time interval comprises:
determining a target scene type of the target video;
determining the first time interval corresponding to the target scene type according to a preset correspondence between scene types and time intervals.
5. A video conversion apparatus, characterized by comprising:
a download module, configured to download a target video and decode the target video;
a processing module, configured to, after determining that a key information area of the target video is a single-color scene, convert the format of frames in the target video from YUV to RGB according to the formulas R=Y, G=Y, and B=Y, wherein R is the R channel value of the RGB format, G is the G channel value of the RGB format, and B is the B channel value of the RGB format; Y is the Y channel value of the YUV format, U is the U channel value of the YUV format, and V is the V channel value of the YUV format;
an analysis module, configured to perform video analysis on the target video after the format conversion, wherein the video analysis comprises: obtaining key information from the single-color key information area and analyzing the key information.
6. The apparatus according to claim 5, characterized in that the processing module is further configured to:
if it is determined, according to a color identifier preset in attribute information of the target video, that the target video is a single-color scene, convert the format of frames in the target video from YUV to RGB according to the formulas R=Y, G=Y, and B=Y; or
if the decoded target video is compared with preset scene features and the comparison result shows that the target video is a single-color scene, convert the format of frames in the target video from YUV to RGB according to the formulas R=Y, G=Y, and B=Y.
7. The apparatus according to claim 6, characterized in that the processing module is further configured to:
obtain a first time interval;
during downloading and decoding of the target video, extract target frames from the decoded frames of the target video according to the first time interval;
convert the format of the target frames from YUV to RGB according to the formulas R=Y, G=Y, and B=Y; and perform no YUV-to-RGB conversion on frames other than the target frames.
8. The apparatus according to claim 7, characterized in that the processing module is further configured to:
determine a target scene type of the target video;
determine the first time interval corresponding to the target scene type according to a preset correspondence between scene types and time intervals.
9. A live-streaming server, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any one of claims 1-4 when executing the program.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the program implements the method according to any one of claims 1-4 when executed by a processor.
CN201810743266.3A 2018-07-09 2018-07-09 Video conversion method, apparatus, device, and medium Active CN109120911B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810743266.3A CN109120911B (en) 2018-07-09 2018-07-09 Video conversion method, apparatus, device, and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810743266.3A CN109120911B (en) 2018-07-09 2018-07-09 Video conversion method, apparatus, device, and medium

Publications (2)

Publication Number Publication Date
CN109120911A CN109120911A (en) 2019-01-01
CN109120911B true CN109120911B (en) 2019-11-15

Family

ID=64862024

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810743266.3A Active CN109120911B (en) Video conversion method, apparatus, device, and medium

Country Status (1)

Country Link
CN (1) CN109120911B (en)

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5798767A (en) * 1996-03-15 1998-08-25 Rendition, Inc. Method and apparatus for performing color space conversion using blend logic
JP3684740B2 (en) * 1997-01-24 2005-08-17 セイコーエプソン株式会社 YUV-RGB digital conversion circuit, and image display apparatus and electronic apparatus using the same
US8120623B2 (en) * 2006-03-15 2012-02-21 Kt Tech, Inc. Apparatuses for overlaying images, portable devices having the same and methods of overlaying images
US7372469B2 (en) * 2006-04-12 2008-05-13 Arcadyan Technology Corporation Image transforming method
CN102750116B (en) * 2011-11-21 2017-03-01 新奥特(北京)视频技术有限公司 A kind of method of YUV color space over range signal processing
CN103489427B (en) * 2012-06-14 2015-12-02 深圳深讯和科技有限公司 YUV converts the method and system that RGB and RGB converts YUV to
CN105578063B (en) * 2015-07-14 2018-04-10 宇龙计算机通信科技(深圳)有限公司 A kind of image processing method and terminal

Also Published As

Publication number Publication date
CN109120911A (en) 2019-01-01

Similar Documents

Publication Publication Date Title
CN109089126A (en) A kind of video analysis method, apparatus, equipment and medium
CN105791977B (en) Virtual reality data processing method, equipment and system based on cloud service
CN110381322B (en) Video stream decoding method and device, terminal equipment and storage medium
CN109120995B (en) Video cache analysis method, device, equipment and medium
US8988501B2 (en) Moving image distribution server, moving image playback apparatus, control method, and recording medium
US9008187B2 (en) Moving image distribution server, moving image reproduction apparatus, control method, program, and recording medium
US10004983B2 (en) Moving image distribution server, moving image reproduction apparatus, control method, and recording medium
CN110493630A (en) The treating method and apparatus of virtual present special efficacy, live broadcast system
CN107801093B (en) Video rendering method and device, computer equipment and readable storage medium
KR101805550B1 (en) Image data encoding method for presentation virtualization and server therefor
US20180213232A1 (en) Graphical instruction data processing method and apparatus, and system
CN107333133B (en) A kind of method and device of the code stream coding of code stream receiving device
CN105025294B (en) A kind of transmission of video control device and method
CN103049238A (en) Method and device for transmitting image data
CN111031032A (en) Cloud video transcoding method and device, decoding method and device, and electronic device
CN110139128A (en) A kind of information processing method, blocker, electronic equipment and storage medium
CN107613365A (en) A kind of video broadcasting method and device
CN109120911B (en) Video conversion method, apparatus, device, and medium
TWI699116B (en) Method and device for post processing of a video stream
JP5817052B2 (en) Information processing apparatus, control method, program, and recording medium
CN108924575B (en) Video decoding analysis method, device, equipment and medium
CN107197295B (en) A kind of coded system and method
CN116112691A (en) Picture stream intelligent analysis and inspection method and device, storage medium and terminal
US11812118B2 (en) Client forensic watermarking device, system, and method
CN116567247A (en) Video encoding method, real-time communication method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240521

Address after: Room 1003, Building 1, Taijiale Science and Technology Industrial Park, Tongguan Road, Tianliao Community, Yutang Street, Guangming District, Shenzhen, Guangdong 518000

Patentee after: Shenzhen Borui Hi Tech Co.,Ltd.

Country or region after: China

Address before: 430000 East Lake Development Zone, Wuhan City, Hubei Province, No. 1 Software Park East Road 4.1 Phase B1 Building 11 Building

Patentee before: WUHAN DOUYU NETWORK TECHNOLOGY Co.,Ltd.

Country or region before: China

TR01 Transfer of patent right

Effective date of registration: 20240618

Address after: 518000 1002, Building 1, Taijiale Science and Technology Industrial Park, Tongguan Road, Tianliao Community, Yutang Street, Guangming District, Shenzhen, Guangdong

Patentee after: Shenzhen Sipai Technology Co.,Ltd.

Country or region after: China

Address before: Room 1003, Building 1, Taijiale Science and Technology Industrial Park, Tongguan Road, Tianliao Community, Yutang Street, Guangming District, Shenzhen, Guangdong 518000

Patentee before: Shenzhen Borui Hi Tech Co.,Ltd.

Country or region before: China

TR01 Transfer of patent right