CN106204552B - Video source detection method and device - Google Patents

Video source detection method and device

Info

Publication number
CN106204552B
CN106204552B (granted publication of application CN201610510000.5A)
Authority
CN
China
Prior art keywords
image
video source
target marking
marking area
obtains
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610510000.5A
Other languages
Chinese (zh)
Other versions
CN106204552A (en)
Inventor
邱学忠
陈爱云
姚婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd filed Critical Beijing QIYI Century Science and Technology Co Ltd
Priority to CN201610510000.5A priority Critical patent/CN106204552B/en
Publication of CN106204552A publication Critical patent/CN106204552A/en
Application granted
Publication of CN106204552B publication Critical patent/CN106204552B/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Abstract

An embodiment of the invention provides a video source detection method and device, comprising: determining one or more target mark regions from a video source image to be detected; calculating a target feature value of the target mark region; calculating the similarity between the target feature value of the target mark region and a reference feature value of a preset corresponding reference mark region; when the similarity is greater than a preset threshold, calculating the target aspect ratio of the target mark region; judging whether the target aspect ratio of the target mark region is identical to the reference aspect ratio of the reference mark region; if so, determining that the video source data has not changed; otherwise, determining that the video source data has changed. By comparing the aspect ratios of the target mark region and the reference mark region, the embodiment accurately detects whether the size of the video source image has changed, and hence whether the origin of the video source has changed, improving the fluency and picture quality of video playback and facilitating the auditing of video sources.

Description

Video source detection method and device
Technical field
The present invention relates to the technical field of video data processing, and in particular to a video source detection method and a video source detection device.
Background technique
With the development of science and technology, people's entertainment options have become increasingly rich, and television programs from every station have sprung up like mushrooms after rain. An ordinary user can connect a video capture card to a television set to record programs, or record them through a device such as a hard-disk video recorder connected to the television, and then, after simple editing and format conversion, upload them to the network.
Because watching video online is not constrained by broadcast times, people usually watch their favorite television programs in a fixed online player. The sizes (aspect ratios) of current video sources are mainly 4:3, 16:9 and 21:9, and television stations usually provide video sources at 16:9 or 4:3, transmitted by satellite or cable television signal. Owing to problems at the video source, however, the two sizes may become mixed: a 16:9 source may be mixed into a 4:3 source, and a 4:3 source into a 16:9 source. A fixed online player then stretches or shrinks such mixed video sources.
Stretching or shrinking, however, affects the fluency of playback and degrades the picture quality, directly harming the viewing experience.
Summary of the invention
In view of the above problems, embodiments of the present invention are proposed in order to provide a video source detection method and a corresponding video source detection device that overcome the above problems or at least partially solve them.
To solve the above problems, an embodiment of the invention discloses a video source detection method, the method comprising:
determining one or more target mark regions from a video source image to be detected;
calculating a target feature value of the target mark region;
calculating the similarity between the target feature value of the target mark region and a reference feature value of a preset corresponding reference mark region;
when the similarity is greater than a preset threshold, calculating a target aspect ratio of the target mark region;
judging whether the target aspect ratio of the target mark region is identical to a reference aspect ratio of the reference mark region;
if so, determining that the video source data has not changed; otherwise, determining that the video source data has changed.
Optionally, the step of determining one or more target mark regions from the video source image to be detected comprises:
selecting an image region of a preset range in the video source image to be detected;
performing image recognition on the image region to obtain the one or more target mark regions.
Optionally, the sub-step of performing image recognition on the image region to obtain the one or more target mark regions further comprises:
converting the image region into an accumulated grayscale image;
binarizing the accumulated grayscale image to obtain a binary image;
processing the binary image with a connected-component extraction method to obtain the one or more target mark regions.
Optionally, the video source image comprises multiple frames, and the sub-step of converting the image region into an accumulated grayscale image further comprises:
converting the image regions in the multiple frames of the video source image into multiple frames of grayscale images;
calculating the pixel difference between successive frames and keeping the pixel-wise maximum, to obtain an inter-frame accumulated image;
processing the inter-frame accumulated image by inversion followed by normalization, to obtain the accumulated grayscale image.
Optionally, the step of binarizing the accumulated grayscale image to obtain a binary image is: binarizing the accumulated grayscale image with a thresholding method to obtain the binary image.
Optionally, the connected-component extraction method includes a direct-scan labelling method and a fast binary-image connected-component labelling method, and the sub-step of processing the binary image with a connected-component extraction method to obtain the one or more target mark regions further comprises:
extracting the binary image with the direct-scan labelling method to obtain the one or more target mark regions;
or, extracting the binary image with the fast binary-image connected-component labelling method to obtain the one or more target mark regions.
Optionally, the target feature value includes a histogram-of-oriented-gradients feature, and the sub-step of calculating the target feature value of the target mark region further comprises:
segmenting the target mark region to obtain multiple image sections, each image section carrying a section identifier;
calculating the histogram of oriented gradients of each image section;
concatenating, according to the section identifiers, the histograms of oriented gradients of the image sections into the histogram-of-oriented-gradients feature.
Optionally, the reference feature value of the preset corresponding reference mark region is generated as follows:
selecting an image region of a preset range in the video source image to be detected;
performing image recognition on the image region to obtain the one or more reference mark regions;
calculating the reference feature value of the reference mark region.
Optionally, the preset range is the quarter of the video source image at the upper-left or upper-right corner.
An embodiment of the invention also discloses a video source detection device, the device comprising:
a target mark region determining module, configured to determine one or more target mark regions from a video source image to be detected;
a target feature value calculating module, configured to calculate a target feature value of the target mark region;
a similarity calculating module, configured to calculate the similarity between the target feature value of the target mark region and a reference feature value of a preset corresponding reference mark region;
a target aspect ratio calculating module, configured to calculate the target aspect ratio of the target mark region when the similarity is greater than a preset threshold;
an aspect ratio judging module, configured to judge whether the target aspect ratio of the target mark region is identical to the reference aspect ratio of the reference mark region;
a first determining module, configured to determine that the video source data has not changed;
a second determining module, configured to determine that the video source data has changed.
Optionally, the target mark region determining module comprises:
a first image region selecting submodule, configured to select an image region of a preset range in the video source image to be detected;
a target mark region obtaining submodule, configured to perform image recognition on the image region to obtain the one or more target mark regions.
Optionally, the target mark region obtaining submodule further comprises:
an accumulated grayscale image converting unit, configured to convert the image region into an accumulated grayscale image;
a binary image obtaining unit, configured to binarize the accumulated grayscale image to obtain a binary image;
a target mark region obtaining unit, configured to process the binary image with a connected-component extraction method to obtain the one or more target mark regions.
Optionally, the video source image comprises multiple frames, and the accumulated grayscale image converting unit further comprises:
a grayscale image converting subunit, configured to convert the image regions in the multiple frames of the video source image into multiple frames of grayscale images;
an inter-frame accumulated image obtaining subunit, configured to calculate the pixel difference between successive frames and keep the pixel-wise maximum, obtaining an inter-frame accumulated image;
an accumulated grayscale image obtaining subunit, configured to process the inter-frame accumulated image by inversion followed by normalization, obtaining the accumulated grayscale image.
Optionally, the binary image obtaining unit comprises: a binary image obtaining subunit, configured to binarize the accumulated grayscale image with a thresholding method to obtain the binary image.
Optionally, the connected-component extraction method includes a direct-scan labelling method and a fast binary-image connected-component labelling method, and the target mark region obtaining unit further comprises:
a first target mark region obtaining subunit, configured to extract the binary image with the direct-scan labelling method to obtain the one or more target mark regions;
or, a second target mark region obtaining subunit, configured to extract the binary image with the fast binary-image connected-component labelling method to obtain the one or more target mark regions.
Optionally, the target feature value includes a histogram-of-oriented-gradients feature, and the target feature value calculating module further comprises:
an image section obtaining submodule, configured to segment the target mark region to obtain multiple image sections, each image section carrying a section identifier;
a histogram-of-oriented-gradients calculating submodule, configured to calculate the histogram of oriented gradients of each image section;
a histogram-of-oriented-gradients feature concatenating submodule, configured to concatenate, according to the section identifiers, the histograms of oriented gradients of the image sections into the histogram-of-oriented-gradients feature.
Optionally, the similarity calculating module comprises the following submodules:
a second image region selecting submodule, configured to select an image region of a preset range in the video source image to be detected;
a reference mark region obtaining submodule, configured to perform image recognition on the image region to obtain the one or more reference mark regions;
a reference feature value calculating submodule, configured to calculate the reference feature value of the reference mark region.
Optionally, the preset range is the quarter of the video source image at the upper-left or upper-right corner.
Embodiments of the present invention have the following advantages:
In the embodiment of the invention, the contents of the target mark region and the reference mark region are compared by calculating histogram-of-oriented-gradients features. Because the histogram of oriented gradients represents the structural feature of gradients (edges), it can describe local shape information; moreover, because the HOG feature is processed block by block in cell units, the relationships between local pixels of the image are well characterized. The comparison of the two contents is therefore more accurate, and a standard reference mark region and reference mark region library are generated automatically. Using image recognition, the target mark region is accurately identified within the preset image region, the target feature value and the reference feature value are calculated, and the similarity between them is repeatedly compared against the preset threshold, making the comparison result robust.
The embodiment compares the aspect ratios of the target mark region and the reference mark region, accurately detecting whether the size of the video source image has changed and hence whether the origin of the video source has changed, improving the fluency and picture quality of video playback and facilitating the auditing of video sources.
Brief description of the drawings
Fig. 1 shows a flow chart of the steps of embodiment one of a video source detection method according to an embodiment of the invention;
Fig. 2 shows a schematic diagram of a reference mark region library according to an embodiment of the invention;
Fig. 3 shows a schematic diagram of a target mark region at normal size;
Fig. 4 shows a schematic diagram of a target mark region at deformed size;
Fig. 5 shows a schematic diagram of a reference mark region;
Fig. 6 shows a flow chart of the steps of embodiment two of a video source detection method according to an embodiment of the invention;
Fig. 7 shows a schematic diagram of a process according to an embodiment of the invention in which a color image is converted into a grayscale image, the grayscale image into an accumulated grayscale image, and the accumulated grayscale image into a binary image;
Fig. 8 shows a structural block diagram of embodiment one of a video source detection device according to an embodiment of the invention;
Fig. 9 shows a schematic structural diagram of a terminal device provided by an embodiment of the invention.
Specific embodiment
To make the above objects, features and advantages of the present invention clearer and easier to understand, the present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
One of the core ideas of the embodiments of the invention is: select a particular range within the video source image, perform image recognition on that range, extract a mark region template (which may include a television station's logo), and build a template library of mark regions. For subsequent video source images from the same signal source, the same range selection and image recognition steps are performed to extract a mark region, whose content and image size are compared against the mark region templates in the library for consistency, thereby detecting whether the video source size is normal.
Referring to Fig. 1, a flow chart of the steps of embodiment one of a video source detection method of the invention is shown, which may specifically include the following steps:
Step 101: determining one or more target mark regions from a video source image to be detected;
In a specific implementation, video source data usually comprises multiple frames of video source images. Through image recognition, one or more target mark regions are determined from a video source image; a target mark region may include a television station's logo region.
Step 102: calculating a target feature value of the target mark region;
In a preferred embodiment of the invention, the target feature value may include a histogram-of-oriented-gradients feature, and step 102 includes the following sub-steps:
Sub-step S1021: segmenting the target mark region to obtain multiple image sections;
Sub-step S1022: each image section carrying a section identifier;
Sub-step S1023: calculating the histogram of oriented gradients of each image section;
Sub-step S1024: concatenating, according to the section identifiers, the histograms of oriented gradients of the image sections into the histogram-of-oriented-gradients feature.
The Histogram of Oriented Gradients (HOG) feature is a feature commonly used in computer vision and pattern recognition to describe the local texture of an image. An image is divided into multiple sections, each section contains multiple cells, and each cell is composed of multiple pixels (the basic unit of an image).
For example, an image is divided into 16 sections, each section contains 4 cells, and each cell has 8x8 pixels. The histogram of oriented gradients of each cell is calculated, the histogram of oriented gradients of each section is then obtained, and according to the section identifiers of the image sections, the histograms of the image sections are concatenated into the histogram-of-oriented-gradients feature of the target mark region.
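The per-cell histogram computation described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the patent's implementation: the 9-bin unsigned-orientation layout and the central-difference gradients are borrowed from common HOG practice.

```python
import math

def cell_histogram(gray, x0, y0, size=8, bins=9):
    """Unsigned-gradient orientation histogram for one size x size cell,
    the basic unit of the HOG feature computed per image section.
    Gradients use central differences; image-border pixels are skipped."""
    hist = [0.0] * bins
    for y in range(max(y0, 1), min(y0 + size, len(gray) - 1)):
        for x in range(max(x0, 1), min(x0 + size, len(gray[0]) - 1)):
            gx = gray[y][x + 1] - gray[y][x - 1]
            gy = gray[y + 1][x] - gray[y - 1][x]
            magnitude = math.hypot(gx, gy)
            angle = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned, 0-180
            hist[min(int(angle / (180.0 / bins)), bins - 1)] += magnitude
    return hist
```

The 16-section feature the patent describes would then be the concatenation, in section-identifier order, of the four cell histograms of each section.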
It should be noted that the target feature value includes but is not limited to the histogram-of-oriented-gradients feature; when implementing embodiments of the invention, other feature values may be calculated as appropriate, as long as they can describe the local image texture of the target mark region, and embodiments of the invention are not limited in this respect.
Step 103: calculating the similarity between the target feature value of the target mark region and the reference feature value of a preset corresponding reference mark region;
In another preferred embodiment of the invention, the reference feature value of the preset corresponding reference mark region in step 103 is generated by the following sub-steps:
Sub-step S1031: selecting an image region of a preset range in the video source image to be detected;
Sub-step S1032: performing image recognition on the image region to obtain the one or more reference mark regions;
Sub-step S1033: calculating the reference feature value of the reference mark region.
To help those skilled in the art better understand embodiments of the invention, the generation of the reference feature value of the preset corresponding reference mark region and the calculation of the similarity are illustrated below by an example. The reference mark region may include the region containing a television station's logo; it is equivalent to a preset logo template, and a reference mark region library is built — Fig. 2 shows such a library. The target mark region and the reference mark region belong to the same signal source and differ only in time position, i.e., in the order in which they were obtained. The reference feature value includes a histogram-of-oriented-gradients feature, so calculating the reference feature value of the reference mark region is equivalent to calculating the HOG feature of the logo template. Comparing the HOG features of each cell in each section of the target mark region and the reference mark region amounts to comparing the HOG features of cells at the same positions in the target mark region and the logo template, i.e., calculating the similarity of the HOG features of cells at the same positions in the two regions.
For example, the target mark region has 16 sections, each section contains 4 cells, and each cell has 8x8 pixels. Because the target mark region and the reference mark region belong to the same signal source and differ only in time position, the reference mark region likewise has 16 sections of 4 cells of 8x8 pixels each. Suppose the decision rule is that a section counts as identical when all 4 cells at the same position are identical; if 14 of the 16 sections of the target mark region and the reference mark region are identical, the similarity is taken to be 87.5%. It should be noted that embodiments of the invention place no limitation on how the decision rule is set.
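Under the all-cells-identical decision rule of the example above, the similarity computation can be sketched as follows; the nested-list layout (a region is a list of sections, a section a list of per-cell histograms) is a hypothetical representation for illustration only.

```python
def section_similarity(target_sections, reference_sections):
    """Fraction of image sections whose per-cell histograms all match.
    Each argument is a list of sections; each section is a list of
    per-cell HOG histograms (here, plain lists of numbers)."""
    matches = sum(
        1 for tgt, ref in zip(target_sections, reference_sections)
        if all(tc == rc for tc, rc in zip(tgt, ref))
    )
    return matches / len(reference_sections)
```

With 14 of 16 sections matching, this returns 0.875, i.e. the 87.5% of the example.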
Step 104: when the similarity is greater than a preset threshold, calculating the target aspect ratio of the target mark region;
In a specific implementation, the preset threshold may be set to 80% or 90%, with a possible range of 0-100%; embodiments of the invention place no limitation on this. When the similarity is greater than the preset threshold, the target aspect ratio of the target mark region is calculated; specifically, the sizes of the target mark region in the length and width directions are calculated to obtain its aspect ratio.
For example, the preset threshold may be set to 85%. The similarity between the target feature value of the target mark region calculated in step 103 and the reference feature value of the preset corresponding reference mark region is 87.5%, which is greater than the preset threshold. Fig. 3 shows the target mark region at normal size: its length is calculated to be 1.2 inches and its width 0.8 inches, so the aspect ratio of the target mark region is 1.5. Fig. 4 shows the target mark region at deformed size: if the length is calculated to be 1.2 inches and the width 1.0 inches, the aspect ratio of the target mark region is 1.2.
Step 105: judging whether the target aspect ratio of the target mark region is identical to the reference aspect ratio of the reference mark region;
It should be noted that the reference aspect ratio of the reference mark region is calculated in the same way as the target aspect ratio of the target mark region: the aspect ratios of both are calculated and then compared for equality.
Fig. 5 shows a reference mark region: its length is calculated to be 1.2 inches and its width 0.8 inches, so the reference aspect ratio of the reference mark region is 1.5; the target aspect ratio of the target mark region is then compared with it for equality.
Step 106: if so, determining that the video source data has not changed; otherwise, determining that the video source data has changed.
If the reference aspect ratio of the reference mark region is 1.5 and the aspect ratio of the target mark region is 1.5, the two aspect ratios are considered identical; when it is determined that the video source data has not changed, a message that the video source is normal can be output. If the reference aspect ratio is 1.5 and the aspect ratio of the target mark region is 1.2, the two are considered different; when it is determined that the video source data has changed, a message that the video source is abnormal can be output.
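Steps 104-106 reduce to a simple ratio comparison. The sketch below uses the 1.2 x 0.8 inch figures from the example; the small tolerance is an added assumption (not in the patent) to avoid exact floating-point equality.

```python
def source_changed(target_size, reference_size, tol=1e-6):
    """Compare the aspect ratio of the detected target mark region against
    the stored reference mark region; True means the video source changed.
    Sizes are (length, width) pairs, e.g. the 1.2 x 0.8 inch logo gives 1.5."""
    target_ratio = target_size[0] / target_size[1]
    reference_ratio = reference_size[0] / reference_size[1]
    return abs(target_ratio - reference_ratio) > tol
```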
In the embodiment of the invention, the contents of the target mark region and the reference mark region are compared by calculating histogram-of-oriented-gradients features. Because the histogram of oriented gradients represents the structural feature of gradients (edges), it can describe local shape information; moreover, because the HOG feature is processed block by block in cell units, the relationships between local pixels of the image are well characterized. The comparison of the two contents is therefore more accurate, a standard reference mark region and reference mark region library are generated automatically, and whether the size of the video source image has changed is accurately detected.
Referring to Fig. 6, a flow chart of the steps of embodiment two of a video source detection method of the invention is shown, which may specifically include the following steps:
Step 201: selecting an image region of a preset range in the video source image to be detected;
In a specific implementation, the preset range is the quarter of the video source image at the upper-left or upper-right corner. For video source images from the same signal source, the mark region is typically located at the upper-left or upper-right corner of the screen. In the embodiment of the invention, the image region containing the mark region is therefore selected roughly.
Step 202: converting the image region into an accumulated grayscale image;
In a preferred embodiment of the invention, the video source image comprises multiple frames, and the sub-steps of step 202 further comprise:
Sub-step S2021: converting the image regions in the multiple frames of the video source image into multiple frames of grayscale images;
Sub-step S2022: calculating the pixel difference between successive frames and keeping the pixel-wise maximum, obtaining an inter-frame accumulated image;
Sub-step S2023: processing the inter-frame accumulated image by inversion followed by normalization, obtaining the accumulated grayscale image.
Referring to Fig. 7, the process of converting a color image into a grayscale image, the grayscale image into an accumulated grayscale image, and the accumulated grayscale image into a binary image is shown. Specifically, converting the image region in the video source image into a grayscale image actually means converting a color image into a grayscale image. This conversion removes the color information of the color image; the gray value represents the brightness information of the image. In the prior art, a color image generally represents chroma and luminance in the RGB color model, as a superposition of the three colors R (red), G (green) and B (blue); the R, G and B values of a pixel each range from 0 to 255. When R, G and B are all 255, the pixel is white; when they are all 0, the pixel is black. For a pixel of a three-component RGB color image, the luminance is generally calculated as I = 0.3R + 0.59G + 0.11B; setting the R, G and B components of the pixel to this same value yields the grayscale image. Embodiments of the invention place no limitation on the method used to convert a color image to grayscale.
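The per-pixel luma conversion can be sketched as a pure-Python function over nested lists of (R, G, B) tuples; this data layout is chosen only for illustration and is not the patent's representation. The 0.3/0.59/0.11 weights are the common approximation of the BT.601 luma coefficients.

```python
def rgb_to_gray(image):
    """Convert an RGB image, stored as nested lists of (r, g, b) tuples in
    0-255, to a grayscale image via the common luma weighting."""
    return [[round(0.3 * r + 0.59 * g + 0.11 * b) for (r, g, b) in row]
            for row in image]
```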
The multiple frames of grayscale images are accumulated with the formula I_sum = max(I_sum, abs(I_pre - I_cur)), where I_sum denotes the accumulated image, I_pre the previous frame's grayscale image, I_cur the current frame's grayscale image, and abs(I_pre - I_cur) the absolute pixel difference between the two frames. The accumulated image I_sum is then inverted and normalized, reducing the influence of uneven lighting on the image, to obtain the accumulated grayscale image I_new.
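The accumulation, inversion and normalization just described can be sketched as follows for frames stored as nested lists of gray values. Normalizing to the 0-255 range is an assumption; the patent does not fix the output scale.

```python
def accumulate_frames(gray_frames):
    """Pixel-wise accumulation of maximum inter-frame differences,
    I_sum = max(I_sum, |I_pre - I_cur|), followed by inversion and
    normalization to 0-255. Static logo pixels barely change between
    frames, so after inversion they come out bright."""
    h, w = len(gray_frames[0]), len(gray_frames[0][0])
    i_sum = [[0] * w for _ in range(h)]
    for prev, cur in zip(gray_frames, gray_frames[1:]):
        for y in range(h):
            for x in range(w):
                i_sum[y][x] = max(i_sum[y][x], abs(prev[y][x] - cur[y][x]))
    peak = max(max(row) for row in i_sum) or 1  # avoid dividing by zero
    return [[255 - (v * 255) // peak for v in row] for row in i_sum]
```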
Step 203: binarizing the accumulated grayscale image to obtain a binary image;
In a specific implementation, the accumulated grayscale image is binarized as follows: I_bina(x, y) = 255 if the pixel value exceeds th, and 0 otherwise, where I_bina denotes the binary image and th the segmentation threshold, set from empirical values as th = {50, 60, 70, ..., 240}. When the R, G and B values of a pixel of the accumulated grayscale image are greater than the set segmentation threshold th, the R, G and B values of that pixel are all set to 255 and the pixel becomes white; when they are less than the set threshold, they are all set to 0 and the pixel becomes black. With all R, G and B values of the accumulated grayscale image set to 0 or 255, the binary image is obtained. An erosion operation is then applied to the binary image to remove noise, followed by a dilation operation to make the binary image more complete.
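The thresholding step can be sketched as below, together with the empirical candidate thresholds th = 50, 60, ..., 240 mentioned above. Treating the accumulated grayscale image as a single channel, and trying each candidate threshold in turn, are assumptions about how the empirical values would be used.

```python
def binarize(gray_image, th):
    """Per-pixel thresholding: values above th become 255 (white), else 0."""
    return [[255 if v > th else 0 for v in row] for row in gray_image]

# Empirical candidate thresholds; one option is to binarize at each of them
# and keep whichever binary image yields the cleanest connected components.
CANDIDATE_THRESHOLDS = range(50, 250, 10)  # 50, 60, ..., 240
```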
Step 204, processing the binary image using a connected-domain extraction method to obtain the one or more target marking areas.
In another preferred embodiment of the present invention, the connected-domain extraction method includes a direct scan labeling method and a fast binary-image connected-component labeling method; the sub-steps of step 204 then further comprise:
Sub-step S2041, extracting the binary image using the direct scan labeling method to obtain the one or more target marking areas;
Alternatively, sub-step S2042, extracting the binary image using the fast binary-image connected-component labeling method to obtain the one or more target marking areas.
In a specific implementation, the most important operation of binary image analysis is connected-domain extraction, which is the basis of all binary image analysis. By extracting and labeling the white pixels (the target marking area) in the binary image, each individual connected domain forms an identified block, from which geometric parameters such as the contour, bounding rectangle, centroid, and invariant moments of these blocks can be further obtained.
In the embodiment of the present invention, the maximum bounding rectangle of a connected region of the binary image is obtained and determined as a target marking area. For the accuracy of the result, multiple target marking areas can be determined, i.e., multiple target marking areas are determined from different video source images of the same signal source. In the embodiment of the present invention, the connected domains are extracted using the fast binary-image connected-component labeling method or the direct scan labeling method, each individual connected domain forms an identified block, and the maximum bounding rectangle of the block is obtained and determined as a target marking area. It should be noted that the embodiment of the present invention does not impose any limitation on the connected-domain extraction method.
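A minimal 4-connected labeling pass with bounding rectangles (a breadth-first flood-fill sketch, not either of the two labeling methods the patent names; OpenCV's connectedComponentsWithStats is the usual shortcut):

```python
import numpy as np
from collections import deque

def bounding_rects(binary):
    # Scan for unvisited white pixels; flood-fill each 4-connected region
    # and record its bounding rectangle as (x, y, width, height).
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    rects = []
    for y in range(h):
        for x in range(w):
            if binary[y, x] == 255 and not seen[y, x]:
                seen[y, x] = True
                q = deque([(y, x)])
                ys, xs = [y], [x]
                while q:
                    cy, cx = q.popleft()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] == 255 and not seen[ny, nx]:
                            seen[ny, nx] = True
                            ys.append(ny)
                            xs.append(nx)
                            q.append((ny, nx))
                rects.append((min(xs), min(ys), max(xs) - min(xs) + 1, max(ys) - min(ys) + 1))
    return rects

img = np.zeros((4, 6), dtype=np.uint8)
img[0:2, 0:3] = 255   # one 3x2 blob (a logo candidate)
img[3, 5] = 255       # one isolated pixel
print(bounding_rects(img))  # two regions
```

Each rectangle is a candidate target marking area; the largest one would typically be the logo candidate passed to the feature-matching stage.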
Step 205, calculating the target feature value of the target marking area;
In a specific implementation, the histogram of oriented gradients of each unit is calculated, then the histogram of oriented gradients of each section is obtained, and, according to the section identifiers of the image sections, the histograms of oriented gradients of the image sections are concatenated into the histogram-of-oriented-gradients feature of the target marking area.
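A bare-bones histogram-of-oriented-gradients computation, one histogram per cell, concatenated in scan order (our simplification: no block normalization, unsigned 0-180 degree bins; the cell size and bin count are assumed parameters, not values from the patent):

```python
import numpy as np

def hog_feature(gray, cell=8, bins=9):
    gray = gray.astype(np.float64)
    gy, gx = np.gradient(gray)                    # per-pixel gradients
    mag = np.hypot(gx, gy)                        # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    h, w = gray.shape
    feats = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            hist = np.zeros(bins)
            m = mag[y:y + cell, x:x + cell].ravel()
            a = ang[y:y + cell, x:x + cell].ravel()
            idx = (a // (180.0 / bins)).astype(int) % bins
            np.add.at(hist, idx, m)               # magnitude-weighted voting per bin
            feats.append(hist)
    return np.concatenate(feats)                  # concatenated per-cell histograms

f = hog_feature(np.random.default_rng(0).integers(0, 256, (16, 16)))
print(f.shape)  # 4 cells x 9 bins
```

The concatenation order plays the role of the "section identifiers" in the text: it fixes which part of the feature vector corresponds to which part of the marking area.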
Step 206, calculating the similarity between the target feature value of the target marking area and the reference feature value of the preset corresponding reference marking area;
In practical applications, the reference marking area is equivalent to a preset logo template; the embodiment of the present invention collects reference marking areas and establishes a reference marking area library. The target marking area and the reference marking area belong to the same signal source and differ only in time position, i.e., in the order in which they are obtained. Calculating the reference feature value of the reference marking area is equivalent to calculating the histogram-of-oriented-gradients feature of the logo template. The histogram-of-oriented-gradients features of each unit in each section of the target marking area and the reference marking area are compared, i.e., the features of units at the same position in the target marking area and the logo template are compared, to calculate the similarity between the target marking area and the reference marking area.
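The patent does not fix a particular similarity measure between the two feature vectors; cosine similarity is a common choice for HOG-style features and is what this sketch assumes:

```python
import numpy as np

def cosine_similarity(f1, f2):
    # Close to 1.0 for matching logo features, near 0.0 for unrelated ones.
    denom = np.linalg.norm(f1) * np.linalg.norm(f2)
    return float(np.dot(f1, f2) / denom) if denom else 0.0

target = np.array([1.0, 2.0, 3.0])
print(cosine_similarity(target, target))                       # close to 1.0
print(cosine_similarity(target, np.array([3.0, -1.5, 0.0])))   # close to 0.0
```

The resulting score is what gets compared against the preset threshold in step 207; histogram intersection or chi-squared distance would be equally reasonable substitutes.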
Step 207, when the similarity is greater than the preset threshold, calculating the target aspect ratio of the target marking area;
In the embodiment of the present invention, the number of comparisons between the target marking area and the reference marking area is set, and the multiple acquired target marking areas are compared with the reference marking area (the template). Only when the similarity continuously exceeds the preset threshold for a certain number of comparisons is the result considered robust. It should be noted that the present invention does not impose any limitation on the number of comparisons. When the similarity is greater than the preset threshold, the target aspect ratio of the target marking area is calculated; specifically, the length and the width of the target marking area are calculated to obtain its aspect ratio.
Step 208, judging whether the target aspect ratio of the target marking area is identical to the reference aspect ratio of the reference marking area;
Step 209, if so, determining that the video source data has not changed; otherwise, determining that the video source data has changed.
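Steps 207 through 209 taken together amount to a small decision routine, sketched below; the similarity threshold and the 5% aspect-ratio tolerance are our assumptions, since the patent only says the ratios must be identical:

```python
def video_source_changed(target_rect, ref_ratio, sim, sim_threshold=0.9, tol=0.05):
    """Return True if the video source is judged to have changed.

    target_rect: (x, y, w, h) bounding rectangle of the target marking area.
    ref_ratio:   aspect ratio (w / h) of the reference marking area.
    sim:         similarity from the feature comparison; it must exceed
                 sim_threshold before the aspect-ratio check even applies.
    """
    if sim <= sim_threshold:
        return None  # logo not matched confidently; no verdict on the source
    _, _, w, h = target_rect
    return abs(w / h - ref_ratio) > tol * ref_ratio

print(video_source_changed((0, 0, 100, 50), 2.0, sim=0.95))  # unchanged: ratio matches
print(video_source_changed((0, 0, 100, 40), 2.0, sim=0.95))  # changed: 2.5 vs 2.0
```

Returning a third "no verdict" state when the logo is not matched mirrors the patent's two-stage structure, where the aspect-ratio check only runs after the similarity test passes.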
In practical applications, when it is determined that the video source data has not changed, information that the video source is normal can be output; when it is determined that the video source data has changed, information that the video source is abnormal can be output.
In the embodiment of the present invention, image recognition technology is used to accurately recognize the target marking area within a preset range of the image region, a reference marking area library is established, the target feature value and the reference feature value are calculated, and the similarity between the target feature value of the target marking area and the reference feature value of the reference marking area is repeatedly compared with the preset threshold, so that the comparison result is robust. In addition, by comparing the aspect ratios of the target marking area and the reference marking area, whether the source of the video source has changed is accurately detected, which provides convenience for the auditing of video sources.
It should be noted that for simple description, therefore, it is stated as a series of action groups for embodiment of the method It closes, but those skilled in the art should understand that, embodiment of that present invention are not limited by the describe sequence of actions, because according to According to the embodiment of the present invention, some steps may be performed in other sequences or simultaneously.Secondly, those skilled in the art also should Know, the embodiments described in the specification are all preferred embodiments, and the related movement not necessarily present invention is implemented Necessary to example.
Referring to Fig. 8, a structural block diagram of embodiment one of a detection device of a video source of the present invention is shown, which may specifically include the following modules:
Target marking area determining module 301, for determining one or more target marking areas from a video source image to be detected;
Target feature value computing module 302, for calculating the target feature value of the target marking area;
In a preferred embodiment of the present invention, the target feature value computing module 302 includes the following submodules:
Image section obtaining submodule 3021, for performing segmentation processing on the target marking area to obtain multiple image sections, the image sections including section identifiers;
Histogram-of-oriented-gradients computing submodule 3022, for calculating the histograms of oriented gradients of the image sections;
Histogram-of-oriented-gradients feature concatenating submodule 3023, for concatenating the histograms of oriented gradients of the image sections into the histogram-of-oriented-gradients feature according to the section identifiers.
Similarity computing module 303, for calculating the similarity between the target feature value of the target marking area and the reference feature value of the preset corresponding reference marking area;
In another preferred embodiment of the present invention, the similarity computing module 303 includes the following submodules:
Second image region selecting submodule 3031, for selecting an image region of a preset range in the video source image to be detected;
Reference marking area obtaining submodule 3032, for performing image recognition on the image region to obtain the one or more reference marking areas;
Reference feature value computing submodule 3033, for calculating the reference feature value of the reference marking area.
Target aspect ratio computing module 304, for calculating the target aspect ratio of the target marking area when the similarity is greater than the preset threshold;
Aspect ratio judgment module 305, for judging whether the target aspect ratio of the target marking area is identical to the reference aspect ratio of the reference marking area;
First determination module 306, for determining that the video source data has not changed;
Second determination module 307, for determining that the video source data has changed.
As for the device embodiment, since it is basically similar to the method embodiment, the description is relatively simple; for relevant parts, refer to the description of the method embodiment.
Referring to Fig. 9, it illustrates a structural schematic diagram of a terminal device provided by one embodiment of the present invention. The electronic equipment is used for implementing the detection method of the video source provided in the above embodiments. Specifically:
The electronic equipment 800 may include an RF (Radio Frequency) circuit 810, a memory 820 including one or more computer-readable storage media, an input unit 830, a display unit 840, a sensor 850, an audio circuit 860, a short-range wireless transmission module 870, a processor 880 including one or more processing cores, a power supply 890, and other components. Those skilled in the art will understand that the electronic equipment structure shown in Fig. 9 does not constitute a limitation on the electronic equipment, which may include more or fewer components than shown, combine certain components, or have a different component arrangement. Wherein:
The RF circuit 810 can be used for receiving and sending signals during information transmission and reception or during a call; in particular, after receiving downlink information from a base station, it passes the information to one or more processors 880 for processing, and it sends uplink data to the base station. In general, the RF circuit 810 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a transceiver, a coupler, an LNA (Low Noise Amplifier), a duplexer, and the like. In addition, the RF circuit 810 can also communicate with networks and other devices by wireless communication. The wireless communication can use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), WCDMA (Wideband Code Division Multiple Access), LTE (Long Term Evolution), e-mail, SMS (Short Messaging Service), and the like. The memory 820 can be used for storing software programs and modules; for example, the memory 820 can be used to store a software program for acquiring voice signals, a software program for implementing keyword recognition, a software program for implementing continuous speech recognition, a software program for setting reminder items, and the like. By running the software programs and modules stored in the memory 820, the processor 880 executes various function applications and data processing, such as the functions of the embodiments of the present invention: "determining one or more target marking areas from a video source image to be detected", "calculating the target feature value of the target marking area", "calculating the similarity between the target feature value of the target marking area and the reference feature value of the preset corresponding reference marking area", "when the similarity is greater than the preset threshold, calculating the target aspect ratio of the target marking area", "judging whether the target aspect ratio of the target marking area is identical to the reference aspect ratio of the reference marking area", and "if so, determining that the video source data has not changed; otherwise, determining that the video source data has changed". The memory 820 may mainly include a program storage area and a data storage area, wherein the program storage area can store the operating system, application programs required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area can store data created according to the use of the electronic equipment 800 (such as audio data, a phone book, etc.). In addition, the memory 820 may include high-speed random access memory and may also include non-volatile memory, for example at least one disk memory, flash memory device, or other volatile solid-state memory. Correspondingly, the memory 820 may also include a memory controller to provide the processor 880 and the input unit 830 with access to the memory 820.
The input unit 830 can be used for receiving input numeric or character information and generating keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. Specifically, the input unit 830 may include a touch-sensitive surface 831 and other input devices 832. The touch-sensitive surface 831, also referred to as a touch display screen or a trackpad, collects the user's touch operations on or near it (such as operations by the user on or near the touch-sensitive surface 831 using a finger, a stylus, or any other suitable object or accessory) and drives the corresponding connecting device according to a preset formula. Optionally, the touch-sensitive surface 831 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch orientation, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and sends them to the processor 880, and it can also receive and execute commands sent by the processor 880. Furthermore, the touch-sensitive surface 831 can be implemented in multiple types such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch-sensitive surface 831, the input unit 830 can also include other input devices 832. Specifically, the other input devices 832 can include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, a switch key, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 840 can be used for displaying information input by the user or information provided to the user and the various graphical user interfaces of the electronic equipment 800; these graphical user interfaces can be composed of graphics, text, icons, video, and any combination thereof. The display unit 840 may include a display panel 841, which can optionally be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like. Further, the touch-sensitive surface 831 can cover the display panel 841; when the touch-sensitive surface 831 detects a touch operation on or near it, it passes the operation to the processor 880 to determine the type of the touch event, and the processor 880 then provides a corresponding visual output on the display panel 841 according to the type of the touch event. Although in Fig. 9 the touch-sensitive surface 831 and the display panel 841 implement the input and output functions as two independent components, in some embodiments the touch-sensitive surface 831 and the display panel 841 can be integrated to implement the input and output functions.
The electronic equipment 800 may also include at least one sensor 850, for example an optical sensor, a motion sensor, and other sensors. Specifically, the optical sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor can adjust the brightness of the display panel 841 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 841 and/or the backlight when the electronic equipment 800 is moved to the ear. As a kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in all directions (generally three axes), and can detect the magnitude and direction of gravity when static; it can be used in applications that identify the phone's posture (such as landscape/portrait switching, related games, and magnetometer pose calibration) and in vibration-recognition functions (such as a pedometer or tapping). Other sensors such as a gyroscope, barometer, hygrometer, thermometer, or infrared sensor can also be configured for the electronic equipment 800, and details are not described here.
The audio circuit 860, a loudspeaker 861, and a microphone 862 can provide an audio interface between the user and the electronic equipment 800. The audio circuit 860 can transfer the electrical signal converted from received audio data to the loudspeaker 861, which converts it into a sound signal for output; on the other hand, the microphone 862 converts a collected sound signal into an electrical signal, which the audio circuit 860 receives and converts into audio data; after the audio data is output to the processor 880 for processing, it is sent through the RF circuit 810 to another terminal, or the audio data is output to the memory 820 for further processing. The audio circuit 860 may also include an earphone jack to provide communication between a peripheral earphone and the electronic equipment 800.
The short-range wireless transmission module 870 can be a WIFI (Wireless Fidelity) module, a Bluetooth module, or the like. Through the short-range wireless transmission module 870, the electronic equipment 800 can help the user send and receive e-mail, browse web pages, access streaming media, and so on; it provides wireless broadband internet access for the user. Although Fig. 9 shows the short-range wireless transmission module 870, it can be understood that it is not a necessary component of the electronic equipment 800 and can be omitted as needed without changing the essence of the invention.
The processor 880 is the control center of the electronic equipment 800, connecting the various parts of the entire electronic equipment using various interfaces and lines. By running or executing the software programs and/or modules stored in the memory 820, and calling the data stored in the memory 820, it executes the various functions of the electronic equipment 800 and processes data, thereby monitoring the electronic equipment as a whole. Optionally, the processor 880 may include one or more processing cores; preferably, the processor 880 can integrate an application processor and a modem processor, wherein the application processor mainly handles the operating system, user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the above modem processor may also not be integrated into the processor 880.
The electronic equipment 800 further includes a power supply 890 (such as a battery) that powers all the components. Preferably, the power supply can be logically connected to the processor 880 through a power management system, so that functions such as managing charging, discharging, and power consumption are implemented through the power management system. The power supply 890 may also include one or more direct-current or alternating-current power sources, a recharging system, a power failure detection circuit, a power adapter or inverter, a power status indicator, and other arbitrary components.
Although not shown, the electronic equipment 800 may also include a camera, a Bluetooth module, and the like, and details are not described here. Specifically, in this embodiment, the display unit of the electronic equipment 800 is a touch-screen display.
All the embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts between the embodiments can be referred to each other.
Those skilled in the art should understand that the embodiments of the present invention can be provided as a method, a device, or a computer program product. Therefore, the embodiments of the present invention can take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware aspects. Moreover, the embodiments of the present invention can take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk memory, CD-ROM, optical memory, etc.) containing computer-usable program code.
The embodiments of the present invention are described with reference to flowcharts and/or block diagrams of the method, terminal device (system), and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data-processing terminal device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data-processing terminal device produce a device for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be stored in a computer-readable memory that can guide a computer or other programmable data-processing terminal device to operate in a specific manner, so that the instructions stored in the computer-readable memory produce a manufactured article including an instruction device, and the instruction device implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions can also be loaded onto a computer or other programmable data-processing terminal device, so that a series of operation steps are executed on the computer or other programmable terminal device to produce computer-implemented processing, and the instructions executed on the computer or other programmable terminal device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
Although the preferred embodiments of the embodiments of the present invention have been described, those skilled in the art, once they learn of the basic inventive concept, can make additional changes and modifications to these embodiments. Therefore, the appended claims are intended to be interpreted as including the preferred embodiments and all changes and modifications falling within the scope of the embodiments of the present invention.
Finally, it should be noted that, herein, relational terms such as first and second are used merely to distinguish one entity or operation from another entity or operation, without necessarily requiring or implying any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or terminal device that includes a series of elements includes not only those elements, but also other elements not explicitly listed, or elements inherent to the process, method, article, or terminal device. In the absence of further restrictions, an element defined by the sentence "including a ..." does not exclude the existence of other identical elements in the process, method, article, or terminal device that includes the element.
A method and a device provided by the present invention have been described in detail above. Specific examples are used herein to explain the principle and implementation of the present invention, and the above descriptions of the embodiments are only used to help understand the method of the present invention and its core concept. At the same time, for those of ordinary skill in the art, according to the idea of the present invention, there will be changes in the specific implementation and application scope. In summary, the content of this specification should not be construed as a limitation of the present invention.

Claims (18)

1. A detection method of a video source, characterized by comprising:
determining one or more target marking areas from a video source image to be detected;
calculating a target feature value of the target marking area;
calculating a similarity between the target feature value of the target marking area and a reference feature value of a preset corresponding reference marking area;
when the similarity is greater than a preset threshold, calculating a target aspect ratio of the target marking area;
judging whether the target aspect ratio of the target marking area is identical to a reference aspect ratio of the reference marking area;
if so, determining that the video source data has not changed; otherwise, determining that the video source data has changed.
2. The method according to claim 1, characterized in that the step of determining one or more target marking areas from the video source image to be detected comprises:
selecting an image region of a preset range in the video source image to be detected;
performing image recognition on the image region to obtain the one or more target marking areas.
3. The method according to claim 2, characterized in that the sub-steps of performing image recognition on the image region to obtain the one or more target marking areas further comprise:
converting the image region into an accumulated grayscale image;
binarizing the accumulated grayscale image to obtain a binary image;
processing the binary image using a connected-domain extraction method to obtain the one or more target marking areas.
4. The method according to claim 3, characterized in that the quantity of the video source images is multiple frames, and the sub-steps of converting the image region into an accumulated grayscale image further comprise:
converting the image regions in the video source images of the multiple frames into grayscale images of multiple frames;
performing cumulative calculation on the grayscale images of the multiple frames using the following formula to obtain an inter-frame accumulated image:
I_sum = max(I_sum, abs(I_pre - I_cur)); where I_sum denotes the accumulated image, I_pre is the previous frame's grayscale image, I_cur is the current frame's grayscale image, and abs(I_pre - I_cur) denotes the absolute pixel difference between every two frames;
processing the inter-frame accumulated image by inversion and then normalization to obtain the accumulated grayscale image.
5. The method according to claim 3, characterized in that the step of binarizing the accumulated grayscale image to obtain the binary image is:
binarizing the accumulated grayscale image using a thresholding method to obtain the binary image.
6. The method according to claim 3, characterized in that the connected-domain extraction method includes a direct scan labeling method and a fast binary-image connected-component labeling method, and the sub-steps of processing the binary image using the connected-domain extraction method to obtain the one or more target marking areas further comprise:
extracting the binary image using the direct scan labeling method to obtain the one or more target marking areas;
alternatively, extracting the binary image using the fast binary-image connected-component labeling method to obtain the one or more target marking areas.
7. The method according to claim 1, characterized in that the target feature value includes a histogram-of-oriented-gradients feature, and the sub-steps of calculating the target feature value of the target marking area further comprise:
performing segmentation processing on the target marking area to obtain multiple image sections;
the image sections including section identifiers;
calculating the histograms of oriented gradients of the image sections;
concatenating the histograms of oriented gradients of the image sections into the histogram-of-oriented-gradients feature according to the section identifiers.
8. The method according to claim 1, characterized in that the reference feature value of the preset corresponding reference marking area is generated as follows:
selecting an image region of a preset range in the video source image to be detected;
performing image recognition on the image region to obtain the one or more reference marking areas;
calculating the reference feature value of the reference marking area.
9. The method according to claim 8, characterized in that the preset range is the 1/4 region at the upper left corner or the upper right corner of the video source image.
10. A detection device of a video source, characterized by comprising:
a target marking area determining module, for determining one or more target marking areas from a video source image to be detected;
a target feature value computing module, for calculating a target feature value of the target marking area;
a similarity computing module, for calculating a similarity between the target feature value of the target marking area and a reference feature value of a preset corresponding reference marking area;
a target aspect ratio computing module, for calculating a target aspect ratio of the target marking area when the similarity is greater than a preset threshold;
an aspect ratio judgment module, for judging whether the target aspect ratio of the target marking area is identical to a reference aspect ratio of the reference marking area;
a first determination module, for determining that the video source data has not changed;
a second determination module, for determining that the video source data has changed.
11. The device according to claim 10, characterized in that the target marking area determining module includes:
a first image region selecting submodule, for selecting an image region of a preset range in the video source image to be detected;
a target marking area obtaining submodule, for performing image recognition on the image region to obtain the one or more target marking areas.
12. The device according to claim 11, characterized in that the target marking area obtaining submodule further includes:
an accumulated grayscale image converting unit, for converting the image region into an accumulated grayscale image;
a binary image obtaining unit, for binarizing the accumulated grayscale image to obtain a binary image;
a target marking area obtaining unit, for processing the binary image using a connected-domain extraction method to obtain the one or more target marking areas.
13. The device according to claim 12, characterized in that the video source image comprises multiple frames, and the accumulated grayscale image conversion unit further comprises:
a grayscale image conversion subunit, configured to convert the image regions in the multiple frames of the video source image into multiple frames of grayscale images, and to accumulate the multiple frames of grayscale images to obtain an inter-frame accumulated image according to the formula:
I_sum = max(I_sum, abs(I_pre - I_cur)); where I_sum denotes the accumulated image, I_pre the previous frame's grayscale image, I_cur the current frame's grayscale image, and abs(I_pre - I_cur) the absolute value of the pixel difference between each pair of adjacent frames;
an accumulated grayscale image obtaining subunit, configured to process the inter-frame accumulated image by inversion followed by normalization, to obtain the accumulated grayscale image.
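The accumulation, inversion, and normalization steps of claim 13 can be sketched as follows. This is a hypothetical helper, assuming the frames arrive as equally sized grayscale NumPy arrays; after inversion and normalization, static pixels (such as an overlaid station logo) come out bright.

```python
import numpy as np

def accumulate_frames(gray_frames):
    """Apply I_sum = max(I_sum, abs(I_pre - I_cur)) over consecutive
    frames, then invert and normalize to [0, 1] so that static regions
    (small accumulated difference) receive high values."""
    frames = [f.astype(np.float64) for f in gray_frames]
    i_sum = np.zeros_like(frames[0])
    for pre, cur in zip(frames, frames[1:]):
        i_sum = np.maximum(i_sum, np.abs(pre - cur))
    # Invert: moving background accumulates large differences and is
    # suppressed; a static marking area keeps a near-zero difference.
    inverted = i_sum.max() - i_sum
    rng = inverted.max() - inverted.min()
    if rng == 0:
        return np.zeros_like(inverted)
    return (inverted - inverted.min()) / rng
```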
14. The device according to claim 12, characterized in that the binary image obtaining unit comprises:
a binary image obtaining subunit, configured to binarize the accumulated grayscale image using a thresholding method to obtain the binary image.
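The claim does not specify which thresholding method is used; a common choice for this kind of bimodal image is Otsu's method, sketched here in pure NumPy as an assumption:

```python
import numpy as np

def otsu_threshold(gray):
    """Binarize a uint8 image with Otsu's method: choose the threshold
    that maximizes the between-class variance of the two pixel classes."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    total = gray.size
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, 0.0
    w0 = 0.0   # weight (pixel count) of the background class
    sum0 = 0.0  # intensity sum of the background class
    for t in range(256):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return (gray > best_t).astype(np.uint8)
```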
15. The device according to claim 12, characterized in that the connected-component extraction method comprises a direct scanning and marking method and a fast binary-image connected-component labeling method, and the target marking area obtaining unit further comprises:
a first target marking area obtaining subunit, configured to extract the binary image using the direct scanning and marking method to obtain the one or more target marking areas;
or, a second target marking area obtaining subunit, configured to extract the binary image using the fast binary-image connected-component labeling method to obtain the one or more target marking areas.
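A direct scanning and marking method can be illustrated as follows. This hypothetical sketch scans the binary image and flood-fills each unlabeled foreground region, returning a label map and bounding boxes from which aspect ratios can later be computed:

```python
import numpy as np
from collections import deque

def connected_components(binary):
    """Label 4-connected foreground regions of a boolean image.

    Returns (labels, boxes): labels is an int array with 0 for
    background, and boxes is a list of (y0, x0, y1, x1) inclusive
    bounding boxes, one per region, in scan order."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    boxes = []
    next_label = 0
    for y in range(h):
        for x in range(w):
            if binary[y, x] and labels[y, x] == 0:
                next_label += 1
                labels[y, x] = next_label
                q = deque([(y, x)])
                y0 = y1 = y
                x0 = x1 = x
                while q:  # breadth-first flood fill of one region
                    cy, cx = q.popleft()
                    y0, y1 = min(y0, cy), max(y1, cy)
                    x0, x1 = min(x0, cx), max(x1, cx)
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = next_label
                            q.append((ny, nx))
                boxes.append((y0, x0, y1, x1))
    return labels, boxes
```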
16. The device according to claim 10, characterized in that the target feature value comprises a histogram of oriented gradients (HOG) feature, and the target feature value computing module further comprises:
an image section obtaining submodule, configured to segment the target marking area to obtain multiple image sections, each image section carrying a section identifier;
a histogram of oriented gradients computing submodule, configured to calculate the histogram of oriented gradients of each image section;
a histogram of oriented gradients feature concatenation submodule, configured to concatenate the histograms of oriented gradients of the image sections into the histogram of oriented gradients feature according to the section identifiers.
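A simplified version of the HOG computation described in claim 16 might look like this. The cell size, bin count, and per-cell L2 normalization are illustrative assumptions; the row-major loop order plays the role of the section identifier that fixes the concatenation order:

```python
import numpy as np

def hog_feature(gray, cell=8, bins=9):
    """Simplified HOG: per-pixel gradient magnitude and unsigned
    orientation, one orientation histogram per cell-sized section,
    concatenated in row-major (section-identifier) order."""
    g = gray.astype(np.float64)
    gy, gx = np.gradient(g)
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    h, w = g.shape
    feats = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            a = ang[y:y + cell, x:x + cell].ravel()
            m = mag[y:y + cell, x:x + cell].ravel()
            # Magnitude-weighted orientation histogram for this section
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            n = np.linalg.norm(hist)
            feats.append(hist / n if n > 0 else hist)
    return np.concatenate(feats)
```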
17. The device according to claim 10, characterized in that the similarity calculation module comprises the following submodules:
a second image region selection submodule, configured to select an image region of a preset range in the video source image to be detected;
a reference marking area obtaining submodule, configured to perform image recognition on the image region to obtain one or more reference marking areas;
a reference feature value computing submodule, configured to calculate the reference feature value of the reference marking area.
18. The device according to claim 17, characterized in that the preset range is the quarter at the upper-left corner or the upper-right corner of the video source image.
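Selecting the preset range can be sketched as a simple crop. A hypothetical helper, assuming the "quarter" of claims 9 and 18 means the half-height by half-width region at the chosen corner of the frame, where station logos are typically overlaid:

```python
import numpy as np

def corner_quarter(frame, corner="top_left"):
    """Crop the quarter-area region at a corner of the frame
    (half the height by half the width)."""
    h, w = frame.shape[:2]
    if corner == "top_left":
        return frame[: h // 2, : w // 2]
    if corner == "top_right":
        return frame[: h // 2, w // 2:]
    raise ValueError("unsupported corner: %s" % corner)
```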
CN201610510000.5A 2016-06-30 2016-06-30 A kind of detection method and device of video source Active CN106204552B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610510000.5A CN106204552B (en) 2016-06-30 2016-06-30 A kind of detection method and device of video source

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610510000.5A CN106204552B (en) 2016-06-30 2016-06-30 A kind of detection method and device of video source

Publications (2)

Publication Number Publication Date
CN106204552A CN106204552A (en) 2016-12-07
CN106204552B true CN106204552B (en) 2019-07-12

Family

ID=57462819

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610510000.5A Active CN106204552B (en) 2016-06-30 2016-06-30 A kind of detection method and device of video source

Country Status (1)

Country Link
CN (1) CN106204552B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10541766B2 (en) 2017-05-15 2020-01-21 The Nielsen Company (Us), Llc Resolving media source detection and simulcast monitoring ambiguities with motion sensor data
CN107844803B (en) * 2017-10-30 2021-12-28 中国银联股份有限公司 Picture comparison method and device
CN108154080B (en) * 2017-11-27 2020-09-01 北京交通大学 Method for quickly tracing to source of video equipment
CN111062309B (en) * 2019-12-13 2022-12-30 吉林大学 Method, storage medium and system for detecting traffic signs in rainy days
CN111583251A (en) * 2020-05-15 2020-08-25 国网浙江省电力有限公司信息通信分公司 Video image analysis method and device and electronic equipment
CN111988664B (en) * 2020-09-01 2022-09-20 广州酷狗计算机科技有限公司 Video processing method, video processing device, computer equipment and computer-readable storage medium
CN115619785B (en) * 2022-12-16 2023-03-10 深圳市乐讯科技有限公司 Game picture intelligent analysis method and system based on computer vision

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101848382A (en) * 2010-05-31 2010-09-29 深圳市景阳科技股份有限公司 Method and system for adjusting video streaming image resolution ratio and code stream
CN102215375A (en) * 2011-06-24 2011-10-12 中兴通讯股份有限公司 Selection method and device for video source of sub-picture of multi-picture in multimedia conference
CN103458305A (en) * 2013-08-28 2013-12-18 小米科技有限责任公司 Video playing method and device, terminal device and server
CN104079924A (en) * 2014-03-05 2014-10-01 北京捷成世纪科技股份有限公司 Mistakenly-played video detection method and device
CN104243874A (en) * 2013-06-20 2014-12-24 冠捷投资有限公司 Sub-picture displaying method for displayer
CN105635786A (en) * 2014-11-05 2016-06-01 深圳Tcl数字技术有限公司 Advertisement delivery method and display method apparatus

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100920547B1 (en) * 2001-04-24 2009-10-08 소니 가부시끼 가이샤 Video signal processing apparatus
CN101682704A (en) * 2007-06-21 2010-03-24 汤姆森特许公司 Method and apparatus for transitioning from a first display format to a second display format


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
A fast video recombination method based on content change; Yin Xiongshi; Popular Science & Technology (大众科技); 2008-03-10 (No. 103); 44-46
Analysis of HD/SD multi-format signal mixed-switching schemes for production switchers; Wang Ke; Video Engineering (电视技术); 2008-03-17; Vol. 32 (No. 03); 85-87
A macroblock importance model for stereoscopic video based on image saliency; Chen Chao et al.; Computer Engineering (计算机工程); 2016-01-31; Vol. 42 (No. 1); 260-264
Design and implementation of a multimedia industrial monitoring system; Ge Guiping, Bao Kejin; Microcomputer Information (微计算机信息); 2001-09-30; Vol. 17 (No. 9); 6-7
Video processing technology of an airborne analog raster video system; Wang Bin; Avionics Technology (航空电子技术); 2003-12-31; Vol. 34 (No. 4); 28-31

Also Published As

Publication number Publication date
CN106204552A (en) 2016-12-07

Similar Documents

Publication Publication Date Title
CN106204552B (en) A kind of detection method and device of video source
CN106296617B (en) The processing method and processing device of facial image
CN108287744A (en) Character displaying method, device and storage medium
CN106446841B (en) A kind of fingerprint template matching order update method and terminal
CN106204423B (en) A kind of picture-adjusting method based on augmented reality, device and terminal
CN104036536B (en) The generation method and device of a kind of stop-motion animation
CN106713840B (en) Virtual information display methods and device
CN108269220B (en) Method and device for positioning digital watermark
CN105957544B (en) Lyric display method and device
CN108205398A (en) The method and apparatus that web animation is adapted to screen
CN110209245A (en) Face identification method and Related product
CN108122528A (en) Display control method and related product
KR20090092035A (en) Method for generating mosaic image and apparatus for the same
CN110298304A (en) A kind of skin detecting method and terminal
CN106296634B (en) A kind of method and apparatus detecting similar image
CN109246474B (en) Video file editing method and mobile terminal
CN110363785A (en) A kind of super frame detection method and device of text
CN110162254A (en) A kind of display methods and terminal device
CN105898561B (en) A kind of method of video image processing and device
CN110070034A (en) Model training method, section recognition methods, device, equipment and medium
CN111325220B (en) Image generation method, device, equipment and storage medium
CN108683845A (en) Image processing method, device, storage medium and mobile terminal
CN112950525A (en) Image detection method and device and electronic equipment
CN105513098B (en) Image processing method and device
CN109587552A (en) Video personage sound effect treatment method, device, mobile terminal and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant