CN103729657A - Method and device for constructing station caption sample library and method and device for identifying station caption


Info

Publication number
CN103729657A
Authority
CN
China
Prior art keywords
station caption, station, caption, image, area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410038040.5A
Other languages
Chinese (zh)
Other versions
CN103729657B (en)
Inventor
林雪辉 (Lin Xuehui)
Current Assignee
Shenzhen Skyworth RGB Electronics Co Ltd
Original Assignee
Shenzhen Skyworth RGB Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Skyworth RGB Electronics Co Ltd
Priority to CN201410038040.5A
Publication of CN103729657A
Application granted
Publication of CN103729657B
Legal status: Active

Landscapes

  • Image Analysis (AREA)

Abstract

The invention is applicable to the field of television station logo image matching, and provides a method and a device for constructing a station caption sample library, and a method and a device for identifying a station caption. The method for constructing the station caption sample library comprises the following steps: obtaining a station caption area in a television picture, and capturing the station caption area to obtain a station caption area map; extracting the station caption part from the station caption area map to obtain a station caption identification map in which the station caption part is white and the non-station-caption part is black; performing an AND operation on the station caption identification map and the station caption area map to obtain a station caption image containing the station caption content; and storing the station caption identification map and the station caption image together as one station caption sample in the station caption sample library. By constructing the station caption sample library, station caption samples of consistent specification are stored, providing the station caption identification maps and station caption images needed during station caption identification, so that the efficiency of station caption identification is improved.

Description

Station caption sample library construction method and device and station caption identification method and device
Technical Field
The invention belongs to the field of television station logo image matching, and particularly relates to a method and a device for constructing a station logo sample library and a method and a device for identifying a station logo.
Background
The station logo is the symbolic identification of a television station. It is usually superimposed on the television picture or printed on reporters' microphones, and symbolizes the broadcasting nature of the television station. As an important mark of a television station, the station logo is of great significance to television program monitoring and media asset management departments.
However, current station caption identification methods usually capture an image at the position where the station caption appears in the television picture, extract spatial-domain or frequency-domain feature points from that image, and then perform matching identification. Because these methods rely on feature points extracted from the captured image, they are easily affected by the television-picture background contained in it: if the background content in the captured image is very similar in color to the station caption, the extracted feature points deviate, causing misjudgment and reducing the accuracy of television station caption identification.
Disclosure of Invention
The invention aims to provide a station caption sample library construction method and device and a station caption identification method and device, so as to solve the problem that conventional station caption identification methods are easily influenced by background content other than the station caption, and to improve the accuracy of station caption identification.
The invention is realized in such a way that a station caption sample library construction method comprises the following steps:
acquiring a station caption area in a television picture, and intercepting the station caption area to obtain a station caption area map;
extracting a station caption part from the station caption area image to obtain a station caption identification image with the station caption part being white and the non-station caption part being black;
performing AND operation on the station caption identification image and the station caption area image to acquire a station caption image containing station caption content;
and storing the station caption identification graph and the station caption image into a station caption sample library together as a station caption sample.
In a second aspect of the present invention, a station caption identifying method is provided, where the method includes:
intercepting a station logo area image to be identified in a current television picture;
matching the station caption area graph to be identified with the station caption samples in the station caption sample library one by one, and calculating a similarity metric value;
after matching of the station caption area graph to be identified and all the station caption samples in the station caption sample library is completed, selecting the station caption sample with the minimum similarity metric value as a station caption identification result;
the station caption sample is a station caption marking picture of a television station and a corresponding station caption image, and is pre-stored in a station caption sample library.
In a third aspect of the present invention, there is provided a station caption sample library constructing apparatus, including:
the station caption area image acquisition module is used for acquiring the area of the station caption in the television picture, intercepting the station caption area and acquiring a station caption area image;
the station caption marking image acquisition module is used for extracting a station caption part from the station caption area image and acquiring a station caption marking image with a white station caption part and a black non-station caption part;
the station caption image acquisition module is used for performing AND operation on the station caption marking image and the station caption area image to acquire a station caption image containing station caption contents;
and the storage module is used for storing the station caption identification chart and the station caption image into a station caption sample library together as a station caption sample.
In a fourth aspect of the present invention, a station caption identifying apparatus is provided, where the apparatus includes:
the intercepting module is used for intercepting a station logo area image to be identified in the current television picture;
the matching module is used for matching the station caption area graph to be identified with the station caption samples in the station caption sample library one by one and calculating a similarity metric value;
the identification module is used for selecting the station caption sample with the minimum similarity metric value as the station caption identification result after matching the station caption area graph to be identified with all the station caption samples in the station caption sample library;
the station caption sample is a station caption marking picture of a television station and a corresponding station caption image, and is pre-stored in a station caption sample library.
In the invention, when a station caption sample library is constructed, an obtained station caption area image is processed to obtain a station caption identification image and a station caption image, and the station caption identification image and the station caption image are stored together as a station caption sample; when station caption identification is carried out, the acquired station caption area image to be identified is matched with each station caption sample in the station caption sample library, the similarity metric value of the station caption area image to be identified and each station caption sample is calculated, and the station caption sample with the minimum similarity metric value is selected as the station caption identification result of the station caption area image to be identified. The station caption sample library is constructed to store the station caption samples required during station caption identification, so that a station caption identification algorithm is simplified, and the efficiency of station caption identification is improved; furthermore, the station caption identification method based on the station caption sample library solves the problem that the original station caption identification method is easily interfered by the background of the image to be identified, and improves the accuracy of station caption identification.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is a flowchart illustrating an implementation of a station caption sample library construction method according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating a specific implementation of step S102 of a station caption sample library construction method according to an embodiment of the present invention;
fig. 3 is a station caption identification map and a station caption image of Hunan Satellite TV according to an embodiment of the present invention;
fig. 4 is a flowchart of a first implementation of a station caption identification method according to a second embodiment of the present invention;
fig. 5 is a flowchart of a second implementation of the station caption identifying method according to the second embodiment of the present invention;
fig. 6 is a flowchart of a specific implementation of step S504 of the station caption identifying method according to the second embodiment of the present invention;
fig. 7 is an application scenario diagram of station caption identification for Hunan Satellite TV according to the second embodiment of the present invention;
fig. 8 is a structural diagram of a station caption sample library construction device provided in the third embodiment of the present invention;
fig. 9 is a configuration diagram of a station caption identifying apparatus according to a fourth embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the invention, when a station caption sample library is constructed, an obtained station caption area image is processed to obtain a station caption identification image and a station caption image, and the station caption identification image and the station caption image are stored together as a station caption sample; when station caption identification is carried out, the acquired station caption area image to be identified is matched with each station caption sample in the station caption sample library, the similarity metric value of the station caption area image to be identified and each station caption sample is calculated, and the station caption sample with the minimum similarity metric value is selected as the station caption identification result of the station caption area image to be identified. The station caption sample library is constructed, and the station caption samples with consistent specifications required in station caption identification are stored, so that the original station caption identification algorithm is simplified, and the station caption identification efficiency is improved; furthermore, the station caption identification method based on the station caption sample library solves the problem that the original station caption identification method is easily interfered by the background of the image to be identified, and improves the accuracy of station caption identification.
Example one
Fig. 1 shows an implementation flow of a station caption sample library construction method provided in an embodiment of the present invention.
As shown in fig. 1, the station caption sample library construction method includes:
in step S101, a station caption area in the television screen is obtained, and the station caption area is captured to obtain a station caption area map.
In this embodiment, the television station logo is generally located at the upper left corner of the television picture and remains unchanged while the television picture changes. When the station caption sample library is constructed, the area of the station caption in the television picture is obtained; the size of this area is M × N, where M is the number of pixels in the height direction of the area and N is the number of pixels in the width direction. A station caption area map is captured at the upper left corner of the television picture at a preset time interval, and 2n frames of station caption area maps are captured in total, where n is an integer greater than zero.
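As an illustration of this capture step, the sketch below (assuming grayscale NumPy arrays as frames; `capture_logo_regions` is a hypothetical helper, not part of the patent) crops the M × N upper-left region out of each of the 2n sampled frames:

```python
import numpy as np

def capture_logo_regions(frames, M, N):
    """Crop the M x N top-left station-logo region out of each full
    frame, giving one station caption area map per frame."""
    return [np.asarray(f)[:M, :N].copy() for f in frames]

# Four synthetic 720x1280 grayscale "frames" stand in for frames
# sampled from the TV feed at a fixed interval (2n = 4, n = 2).
frames = [np.full((720, 1280), v, dtype=np.uint8) for v in (10, 20, 30, 40)]
regions = capture_logo_regions(frames, M=60, N=120)
```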
In step S102, a station logo portion is extracted from the station logo area map, and a station logo mark map in which the station logo portion is white and the non-station logo portion is black is obtained.
In this embodiment, the captured 2n frames of station logo area maps are divided into n groups, each group containing two frames. The pixel values of corresponding pixels in the two frames of each group are subtracted to obtain the pixel value difference of each pixel. Because the position and size of the station caption part in the station caption area map remain unchanged, and the pixel values of the two frames are almost equal in the station caption part, the pixel value difference of the station caption part is close to 0 after the subtraction, appearing uniformly black; the subtraction image thus looks as if the station logo part had been cut out. To eliminate the randomness of the other parts, a threshold for calibrating the station logo part is preset. After the n groups are subtracted, n pixel value differences are obtained for each pixel, and the variance of these n differences is calculated; this variance indicates the likelihood that the pixel belongs to the station logo. The variance is compared with the preset threshold: if it is less than or equal to the threshold, the pixel is considered to belong to the station caption part and its pixel value is set to white; otherwise, the pixel is considered to belong to the non-station-caption part and its pixel value is set to black. The result is a station logo identification map with only two pixel values, black and white, i.e., a binary image.
In step S103, an and operation is performed on the station caption identifying map and the station caption area map, and a station caption image including the station caption content is acquired.
In this embodiment, one frame is selected from the captured 2n frames of station caption area maps, and an AND operation is performed on the selected station caption area map and the station caption identification map. In the identification map the station logo part is white and the non-logo part is black, and the two maps have a uniform size of M × N. The AND operation therefore removes the background content of the selected station caption area map and keeps the content of the station logo part, yielding a station caption image containing the station caption content.
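The AND operation of step S103 can be sketched as follows — a minimal NumPy illustration under the assumption that the identification map encodes white as 255 and black as 0, so a bitwise AND keeps the logo content and zeroes the background:

```python
import numpy as np

def extract_logo_image(identification_map, region_map):
    """AND the binary identification map (255 = logo part, 0 = non-logo
    part) with an M x N station caption area map of the same size:
    background pixels become 0, logo pixels keep the region's content."""
    assert identification_map.shape == region_map.shape
    return np.bitwise_and(identification_map, region_map)

identification_map = np.zeros((4, 4), dtype=np.uint8)
identification_map[1:3, 1:3] = 255            # white station logo part
region_map = np.full((4, 4), 200, dtype=np.uint8)
logo_image = extract_logo_image(identification_map, region_map)
```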
In step S104, the station caption identifying map and the station caption image are stored in the station caption sample library together as a station caption sample.
In this embodiment, after the station caption identification map and the station caption image are obtained, the station caption identification map and the station caption image are associated to obtain a station caption sample, and the station caption sample is stored in a station caption sample library. Thereby completing the establishment of one sample in the station caption sample library.
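A minimal sketch of this storage step, assuming an in-memory dict as the sample library (the patent does not prescribe any storage format; the key names are hypothetical):

```python
import numpy as np

def store_sample(library, station_name, identification_map, logo_image):
    """Associate an identification map and a station caption image and
    store them together as one sample. A dict keyed by station name
    stands in for the sample library; a real system might use files
    or a database instead."""
    library[station_name] = {"mask": identification_map, "image": logo_image}

library = {}
store_sample(library, "demo_station",
             np.zeros((2, 2), dtype=np.uint8),
             np.zeros((2, 2), dtype=np.uint8))
```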
In the invention, when the station caption sample library is constructed, the obtained station caption area image is processed to obtain the station caption identification image and the station caption image, and the station caption identification image and the station caption image are stored together as the station caption sample, thereby completing the construction of the station caption sample in the station caption sample library, obtaining the station caption identification image and the station caption image with uniform specification required by the station caption identification, simplifying the station caption identification algorithm and improving the station caption identification efficiency.
Fig. 2 shows a specific implementation flow of step S102 in the station caption sample library construction method according to an embodiment of the present invention.
As shown in fig. 2, step S102 specifically includes:
in step S201, the 2n frames of logo region maps are divided into n groups, each group including two frames of logo region maps.
In this embodiment, the 2n frames of station logo area maps are divided into n groups according to a certain rule, each group containing two frames. This division is performed in preparation for the subsequent subtraction of the images.
As an implementation example of the present invention, the division rule may be: the 1st frame and the (n+1)th frame form a group, the 2nd frame and the (n+2)th frame form a group, and so on, until the nth frame and the 2nth frame form a group, realizing an interleaved division of the 2n frames.
In step S202, pixel value subtraction is performed on corresponding pixels on the two frames of logo region maps in each group, so as to obtain n pixel value differences of each pixel.
In this embodiment, since the position and size of the logo portion in the logo area maps are kept unchanged, the pixel value difference of the logo portion is close to 0 after the subtraction operation between two frames. In the subtraction image the station caption part is uniformly black, with an effect much like the station caption part having been cut out, so the station caption part can be extracted from the station caption area map to determine its position and size. Let the 2n frames of station logo area maps be denoted I_i (i = 1, …, 2n), and let a pixel in an area map be denoted (x, y), where x = 1, …, M and y = 1, …, N. The value of pixel (x, y) in the i-th frame is denoted I_i(x, y).
As an implementation example of the invention, the 2n frames of station logo area maps are divided into n groups according to a certain rule, each group containing two frames. Suppose the two area maps in a group are I_i and I_{i+n} (where i = 1, …, n); subtracting the pixel values of the two frames gives the pixel value difference of each pixel, D_i(x, y) = |I_{i+n}(x, y) − I_i(x, y)| (i = 1, …, n). Performing this step for every group yields the n pixel value differences of each pixel.
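The grouping and subtraction described above can be sketched as follows, assuming the interleaved grouping rule of step S201 and 8-bit grayscale maps (the cast to a wider signed type avoids uint8 wraparound):

```python
import numpy as np

def group_differences(region_maps):
    """Pair frame i with frame i+n (the interleaved grouping of the 2n
    maps) and return the n absolute-difference images
    D_i = |I_{i+n} - I_i|."""
    n = len(region_maps) // 2
    return [np.abs(region_maps[i + n].astype(np.int16)
                   - region_maps[i].astype(np.int16)).astype(np.uint8)
            for i in range(n)]

# 2n = 4 constant maps; the groups are (frame 1, frame 3) and (frame 2, frame 4).
maps = [np.full((2, 2), v, dtype=np.uint8) for v in (10, 20, 35, 50)]
diffs = group_differences(maps)
```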
In step S203, the variance of the n pixel value differences of each pixel is calculated.
In this embodiment, each station logo area map contains M × N pixels, and performing the subtraction operation on each group of station logo area maps yields n pixel value differences for each pixel. From these, the variance D(x, y) of the pixel is calculated as:

$$D(x, y) = \frac{\sum_{i=1}^{n} \bigl( D_i(x, y) - \mathrm{mean}_1 \bigr)^2}{n}$$

wherein

$$\mathrm{mean}_1 = \frac{\sum_{i=1}^{n} D_i(x, y)}{n}.$$
D(x, y) indicates the likelihood that pixel (x, y) belongs to the station logo portion. Since the position and size of the station caption part are uniform across the station caption area maps, the pixel value difference of the station caption part is close to 0, and so is the variance of that difference; therefore, when the variance D(x, y) is close to 0, the pixel (x, y) belongs to the station caption part.
In step S204, it is determined whether the calculated variance is less than or equal to a preset threshold.
In this embodiment, a threshold for calibrating the station caption portion is preset in order to eliminate the influence of the randomness of other portions. The calculated variance D(x, y) of pixel (x, y) is compared with the preset threshold to judge whether D(x, y) is less than or equal to it. If yes, step S205 is executed; otherwise, step S206 is executed.
In step S205, the pixel value of the pixel corresponding to the variance is set to white.
In this embodiment, when the variance D (x, y) is less than or equal to the preset threshold, the pixel value of the pixel (x, y) corresponding to the variance D (x, y) is set to be white.
In step S206, the pixel value of the pixel corresponding to the variance is set to black.
In this embodiment, when the variance D (x, y) is greater than the preset threshold, the pixel value of the pixel (x, y) corresponding to the variance D (x, y) is set to be black.
Steps S203 to S206 are repeated for the next pixel (i.e., a pixel whose pixel-difference variance has not yet been calculated) until every pixel has been set to black or white, thereby obtaining a binary image that identifies the logo portion, i.e., the station logo identification map. In this map the station logo portion is displayed in white and the non-logo portion in black; the map serves as a template during station caption identification, extracting from the station caption area map to be identified the image content that overlaps the logo portion.
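Steps S203 to S206 taken together can be sketched as follows; the variance is the population variance of the n differences, matching the formula for D(x, y), and the threshold value here is an assumed free parameter:

```python
import numpy as np

def logo_identification_map(diffs, threshold):
    """Turn the n per-pixel value differences into the binary station
    logo identification map: a pixel whose variance over the n
    differences is <= threshold is set white (255), otherwise black (0)."""
    stack = np.stack([d.astype(np.float64) for d in diffs])  # n x M x N
    variance = stack.var(axis=0)   # population variance, matching D(x, y)
    return np.where(variance <= threshold, 255, 0).astype(np.uint8)

# Logo pixels barely change between frames; background pixels vary widely.
diffs = [np.array([[0, 50], [1, 200]], dtype=np.uint8),
         np.array([[1, 180], [0, 10]], dtype=np.uint8)]
mask = logo_identification_map(diffs, threshold=4.0)
```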
An application scenario of the station caption sample library construction method is given below.
As shown in fig. 3, the captured station logo area maps are images I, 2n frames in total. After processing, the station logo identification map is image F, a binary image in which the station logo portion is white and the non-logo portion is black. The station caption image obtained by performing an AND operation on the identification map F and any station caption area map I is image S. The identification map F and the station caption image S are taken together as one station caption sample and stored in the station caption sample library.
Example two
Fig. 4 shows a first implementation flow of the station caption identifying method according to the second embodiment of the present invention.
The station caption identification method is based on the station caption sample library described in the first embodiment. The library stores station caption samples of a plurality of television stations; each sample includes a station caption identification map, which is a binary image, and a corresponding station caption image. The identification map and the station caption image have the same M × N size as the station caption area map to be identified, and the position and orientation of the station logo are consistent between them, so translation, scaling and rotation of the image need not be considered during identification; this simplifies the station caption identification algorithm and improves its efficiency. During identification, an AND operation is performed on the station caption area map to be identified and the identification map of a sample, removing the content of the non-logo part, and the area map with its non-logo part removed is matched against the sample's station caption image. This avoids extracting feature points from the area map to be identified, as traditional methods must, and thus overcomes their susceptibility to interference from the background of the area map to be identified.
As shown in fig. 4, the station caption identification method specifically includes:
in step S401, a logo area map to be identified in the current television screen is captured.
In this embodiment, after the television picture is switched and played, the station caption area map at the upper left corner of the current television picture is captured, and the station caption area map to be identified is obtained.
In step S402, the station caption area map to be identified and the station caption samples in the station caption sample library are matched one by one, and a similarity metric is calculated.
In this embodiment, the station caption sample library stores therein station caption samples of a plurality of television stations, and a station caption sample of a television station is a station caption identifying map of a television station and a station caption image corresponding thereto, and is stored in the station caption sample library in advance.
And when station caption identification is carried out on the station caption area image to be identified, matching each station caption sample in the station caption sample library with the station caption area image to be identified, and calculating the similarity metric value of the station caption area image to be identified and each station caption sample. The steps include:
and performing AND operation on the station caption area image to be identified and the station caption identification image, thereby removing a non-station caption part in the station caption area image to be identified. And carrying out similarity measurement on the station logo area image to be identified without the non-station logo part and the station logo image, calculating a similarity measurement value of the station logo area image to be identified and the station logo image, and storing the similarity measurement value.
In step S403, after the matching between the station caption area map to be identified and all the station caption samples in the station caption sample library is completed, the station caption sample with the minimum similarity metric is selected as the station caption identification result.
In this embodiment, the similarity metric value represents the degree of similarity between the station caption area map to be identified and each station caption sample, and the smaller the metric value, the closer the station caption area map to be identified is to the station caption sample. And after matching of the station caption area graph to be identified and all the station caption samples in the station caption sample library is completed, obtaining the similarity metric value of the station caption area graph to be identified and each station caption sample in the station caption sample library, and selecting the station caption sample with the minimum metric value as the identification result of the station caption area graph to be identified.
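The matching loop of steps S402 and S403 can be sketched as follows. The patent defines the metric only later (step S504), so mean absolute difference is used here purely as a hypothetical stand-in; smaller values mean a closer match, and the sample with the minimum metric is returned:

```python
import numpy as np

def identify_logo(region_to_identify, samples):
    """Match the area map against every (identification map, logo image)
    sample and return the index of the sample with the smallest metric.
    Mean absolute difference over the masked map is an assumed metric,
    not the one prescribed by the patent."""
    best_index, best_score = -1, float("inf")
    for i, (identification_map, logo_image) in enumerate(samples):
        masked = np.bitwise_and(identification_map, region_to_identify)
        score = float(np.mean(np.abs(masked.astype(np.int16)
                                     - logo_image.astype(np.int16))))
        if score < best_score:
            best_index, best_score = i, score
    return best_index, best_score

mask = np.full((2, 2), 255, dtype=np.uint8)
region = np.array([[10, 20], [30, 40]], dtype=np.uint8)
samples = [(mask, region),                            # matching station
           (mask, np.zeros((2, 2), dtype=np.uint8))]  # non-matching station
best_index, best_score = identify_logo(region, samples)
```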
As another implementation example of the present invention, fig. 5 shows a second implementation flow of the station caption identifying method provided in the second embodiment of the present invention. As shown in fig. 5, the station caption identification method specifically includes:
in step S501, a station caption area map to be identified in the current television picture is captured.
In this embodiment, after the television picture has finished switching, the station caption area map at the upper left corner of the current television picture is captured, obtaining the station caption area map to be identified, denoted I.
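A minimal sketch of this capture step, assuming a fixed-size upper-left window; the 80x160 region size and the grayscale placeholder frame are illustrative assumptions, as the patent does not fix the region's dimensions:

```python
import numpy as np

def crop_caption_region(frame, height=80, width=160):
    """Crop the fixed upper-left region of a frame where the station
    caption is expected; the 80x160 size is an illustrative assumption,
    not a value given in the patent."""
    return frame[:height, :width].copy()

frame = np.zeros((720, 1280), dtype=np.uint8)  # placeholder grayscale frame
region = crop_caption_region(frame)            # station caption area map I
```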
In step S502, the statistical value of the counter is initialized.
In this embodiment, a counter is provided, and its statistical value records the number of matches performed between the station caption area map to be identified and the station caption samples in the station caption sample library. Initializing the counter means setting its statistical value to 1 when station caption identification begins. Thereafter, each time a station caption sample is matched, 1 is added to the statistical value of the counter. In step S503, a station caption sample to be matched is selected from the station caption sample library.
In this embodiment, the station caption sample to be matched is a station caption sample that has not yet been matched against the station caption area map to be identified. A station caption sample comprises a station caption identification map and a station caption image. As an implementation example of the present invention, the station caption sample library stores l station caption samples; the i-th station caption sample is selected, comprising a station caption identification map F_i(x, y) and a station caption image S_i(x, y), where i = 1, …, l; x = 1, …, M; y = 1, …, N.
In step S504, the station caption area map to be identified is matched with the station caption sample to be matched, and a similarity metric value is calculated.
In this embodiment, a station caption sample of a station caption includes a station caption identification map and a station caption image. The step of matching the station caption area graph to be identified with the station caption sample to be matched is as follows:
An AND operation is performed on the station caption area map to be identified and the station caption identification map, thereby removing the non-station-caption part of the station caption area map to be identified. Similarity measurement is then carried out between the station caption area map with the non-station-caption part removed and the station caption image, and the resulting similarity metric value is calculated and stored.
The specific matching and similarity metric calculation will be described in detail in the following embodiments, and will not be described herein.
In step S505, 1 is added to the statistical value of the counter, and it is judged whether the statistical value is greater than the number of station caption samples.
In this embodiment, after each match between a station caption sample and the station caption area map to be identified, 1 is added to the statistical value of the counter, and it is judged whether the statistical value is greater than the number of station caption samples. If yes, the station caption area map to be identified has been matched against every station caption sample in the station caption sample library, and step S506 is executed; otherwise, step S503 is executed. Counting the matches through the statistical value ensures that the station caption area map to be identified is matched against each station caption sample.
In step S506, the station caption sample with the minimum similarity metric is selected as the station caption identification result.
In this embodiment, the similarity metric value represents the degree of similarity between the station caption area map to be identified and each station caption sample, and the smaller the metric value, the closer the station caption area map to be identified is to the station caption sample. And selecting the station caption sample with the minimum measurement value as the identification result of the station caption area graph to be identified.
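The counter-driven flow of steps S502 through S506 amounts to visiting each station caption sample once and keeping the minimum metric value. A minimal sketch, using a hypothetical metric and toy samples (the names, values, and sum-of-absolute-differences metric are illustrative only):

```python
def identify_caption(region, samples, metric):
    """Match the region map to be identified against every sample and
    return the name of the sample with the smallest similarity metric
    value.  The patent's counter merely guarantees each of the l
    samples is visited once; a plain loop expresses the same control
    flow."""
    best_name, best_value = None, float("inf")
    for name, mask, caption in samples:       # one match per sample
        value = metric(region, mask, caption)
        if value < best_value:                # keep the minimum metric
            best_name, best_value = name, value
    return best_name

# toy metric and samples for illustration (hypothetical values)
toy_metric = lambda region, mask, caption: sum(
    abs(r - c) for r, c in zip(region, caption))
samples = [("A", None, [1, 2, 3]), ("B", None, [9, 9, 9])]
result = identify_caption([1, 2, 4], samples, toy_metric)  # "A" is closest
```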
In the invention, when station caption identification is carried out, the captured station caption area map to be identified is matched against each station caption sample in the station caption sample library, the similarity metric value between the station caption area map and each station caption sample is calculated, and the station caption sample with the minimum similarity metric value is selected as the station caption identification result. Because the non-station-caption part of the station caption area map is removed by the station caption identification map, the problem that conventional station caption identification methods are easily interfered with by the background of the image to be identified is solved, and the accuracy of station caption identification is improved. Furthermore, the station caption samples in the station caption sample library are the same size as the station caption area map to be identified, and the station captions keep consistent positions and orientations in the images, so translation, scaling, and rotation of the image need not be considered during station caption identification, which simplifies the station caption identification algorithm and improves identification efficiency.
Fig. 6 shows a specific implementation flow of step S504 in the station caption identifying method according to the second embodiment of the present invention.
As can be seen from fig. 6, step S504 specifically includes:
in step S601, an AND operation is performed on the station caption area map to be identified and the station caption identification map in the station caption sample, removing the non-station-caption part of the station caption area map to be identified.
As an implementation example of the present invention, the station caption sample library stores l station caption samples; the i-th station caption sample is selected, comprising a station caption identification map F_i(x, y) and a station caption image S_i(x, y), where i = 1, …, l; x = 1, …, M; y = 1, …, N.
The station caption area graph to be identified is I (x, y), and the formula for performing the AND operation on the station caption area graph to be identified and the station caption identification graph is as follows:
$$G_i(x, y) = F_i(x, y) \cdot I(x, y)$$
The obtained G_i(x, y) is the station caption area map to be identified with the non-station-caption part removed; specifically, G_i(x, y) is the station caption area map to be identified after the non-station-caption part has been removed according to the station caption identification map of the i-th station caption sample.
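The AND operation of step S601 can be sketched in NumPy. This is an illustrative sketch assuming a grayscale region map and an identification map stored as a 0/1 binary array (the patent's white/black map, normalized here to 1/0 so the masking is a plain multiplication):

```python
import numpy as np

def remove_non_caption(region, mask):
    """G_i(x, y) = F_i(x, y) * I(x, y): multiply the region map to be
    identified by the binary identification map (1 on the caption part,
    0 elsewhere), zeroing every non-caption pixel."""
    return mask.astype(region.dtype) * region

I = np.array([[10, 20], [30, 40]], dtype=np.uint8)   # region map to identify
F = np.array([[1, 0], [0, 1]], dtype=np.uint8)       # identification map
G = remove_non_caption(I, F)                          # [[10, 0], [0, 40]]
```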
In step S602, the station caption area map to be identified with the non-station-caption part removed is subtracted from the station caption image, so as to obtain a similar-region image of the two.
In this embodiment, after the station caption area map G_i(x, y) with the non-station-caption part removed is obtained, G_i(x, y) and the station caption image S_i(x, y) in the station caption sample are subtracted to obtain the similar-region image R_i(x, y) of the two. R_i(x, y) is calculated as:

$$R_i(x, y) = \left| G_i(x, y) - S_i(x, y) \right|$$
If the similarity between the selected i-th station caption sample and the station caption area map to be identified is high, that is, the station caption area map G_i(x, y) with the non-station-caption part removed is close to the station caption image S_i(x, y), then the pixel-value distribution of the similar-region image R_i(x, y) is close to uniform.
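The subtraction of step S602 can be sketched as follows; widening to a signed type before subtracting is an implementation detail added here so that unsigned pixel values do not wrap around:

```python
import numpy as np

def similar_region(g, s):
    """R_i(x, y) = |G_i(x, y) - S_i(x, y)|.  Cast to a signed type
    first so the subtraction does not wrap for uint8 inputs."""
    return np.abs(g.astype(np.int16) - s.astype(np.int16)).astype(np.uint8)

g = np.array([[10, 0], [0, 40]], dtype=np.uint8)  # G_i, background zeroed
s = np.array([[12, 0], [0, 35]], dtype=np.uint8)  # S_i, stored caption image
r = similar_region(g, s)                           # [[2, 0], [0, 5]]
```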
In step S603, the similarity metric value between the station caption area map to be identified and the station caption image is calculated according to the similar-region image.
In this embodiment, the higher the similarity between the selected i-th station caption sample and the station caption area map to be identified, the closer the pixel-value distribution of the similar-region image R_i(x, y) is to a uniform distribution. The variance of the similar-region image R_i(x, y) is therefore calculated and used as the similarity metric value between the station caption area map to be identified and the station caption image. The variance of R_i(x, y) is calculated as:
$$\sigma^2 = \frac{\sum_{x=1}^{M} \sum_{y=1}^{N} \left( R_i(x, y) - \mathrm{mean}_2 \right)^2}{T}$$
where mean_2 is the average pixel value of the similar-region image R_i(x, y):
$$\mathrm{mean}_2 = \frac{\sum_{x=1}^{M} \sum_{y=1}^{N} R_i(x, y)}{T}$$
and T is the total number of pixels in the station caption part:
$$T = \sum_{x=1}^{M} \sum_{y=1}^{N} F_i(x, y)$$
This yields the variance of the similar-region image R_i(x, y). The smaller this variance, that is, the smaller the similarity metric value between the station caption area map to be identified and the station caption image, the more similar the two are.
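The whole metric of step S603 can be sketched as follows, following the formulas above literally: the sums run over all M x N pixels, while the divisor T is the number of caption pixels (where the identification map is 1). Since both G_i and S_i are zero outside the caption part, R_i is also zero there.

```python
import numpy as np

def similarity_metric(r, mask):
    """Variance of the similar-region image R_i, per the patent's
    formulas: sum over all M x N pixels, divide by T = number of
    caption pixels (mask == 1)."""
    t = float(mask.sum())                  # T: caption pixel count
    mean2 = r.sum() / t                    # mean_2
    return ((r.astype(np.float64) - mean2) ** 2).sum() / t

mask = np.array([[1, 0], [0, 1]], dtype=np.uint8)
perfect = np.zeros((2, 2), dtype=np.uint8)  # identical G_i and S_i
assert similarity_metric(perfect, mask) == 0.0
```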
In this embodiment, the station caption identification problem is converted into calculating the similarity between the station caption area map to be identified and the station caption image. Because the station caption in the area map to be identified and the station caption in the station caption image are consistent in size and position, rotation, scaling, and translation of the image need not be considered when calculating the similarity metric, which simplifies the station caption identification algorithm and flow and improves the efficiency of station caption identification.
An application scenario of the station caption identification method is given below.
The station caption to be identified is the station caption of Hunan Satellite TV, and the obtained station caption area map to be identified is shown in fig. 7 (a).
Assume that the station caption sample library stores station caption samples of Hubei Satellite TV, Shandong Satellite TV, and Hunan Satellite TV, each station caption sample comprising a station caption identification map and a station caption image, as shown in fig. 7 (b).
After an AND operation is performed between the station caption area map to be identified and the station caption identification map of each station caption sample, a station caption area map with the non-station-caption part removed is obtained. Fig. 7 (c) shows the station caption area maps to be identified after the non-station-caption parts have been removed by the station caption identification maps of Hubei Satellite TV, Shandong Satellite TV, and Hunan Satellite TV, respectively.
Each station caption area map with the non-station-caption part removed is then subtracted from the corresponding station caption image to obtain a similar-region image. Fig. 7 (d) shows the similar-region images of the station caption area map to be identified against the station caption images of Hubei Satellite TV, Shandong Satellite TV, and Hunan Satellite TV.
The similarity metric value between the station caption area map to be identified and each station caption image is calculated from the corresponding similar-region image. As can be seen from fig. 7 (d), the similar-region image for the Hunan Satellite TV station caption image is uniformly black, so the similarity metric value calculated from it is necessarily the minimum. Therefore, the station caption sample with the minimum similarity metric value is selected as the station caption identification result; that is, the station caption sample of Hunan Satellite TV is identified as the station caption of the station caption area map to be identified.
Example three
Fig. 8 shows a component structure of a station caption sample library construction device provided by the third embodiment of the present invention. For convenience of explanation, only portions relevant to the present invention are shown.
As can be seen from fig. 8, the station caption sample library constructing apparatus includes:
and the station caption area image acquisition module 81 is configured to acquire a station caption area in the television picture, and intercept the station caption area to obtain a station caption area image.
Further, the station logo area map obtaining module 81 is specifically configured to:
when the station caption sample library is constructed, acquiring the area where the station caption appears in the television picture, intercepting the station caption area, and acquiring 2n frames of station caption area maps, wherein n is an integer greater than zero.
A station caption identification map acquisition module 82, configured to extract the station caption part from the station caption area maps and acquire a station caption identification map in which the station caption part is white and the non-station-caption part is black.
Further, the station caption identification map acquisition module specifically includes:
a dividing unit 821, configured to divide the 2n frames of station caption area maps into n groups, each group containing two frames of station caption area maps;
a subtraction operation unit 822, configured to subtract the pixel values of corresponding pixels in the two frames of station caption area maps in each group, obtaining n pixel-value differences for each pixel;
a judging unit 823, configured to calculate the variance of the n pixel-value differences of each pixel and judge whether the calculated variance is less than or equal to a preset threshold;
a station caption identification map acquisition unit 824, configured to set the pixel value of the pixel corresponding to the variance to white when the judgment result of the judging unit is yes, and to black otherwise, thereby obtaining the station caption identification map.
And the station caption image acquisition module 83 is configured to perform an and operation on the station caption identification map and the station caption area map to acquire a station caption image including the station caption content.
Further, the station caption image acquisition module specifically includes:
a selecting unit 831, configured to select a frame of logo area map from the intercepted logo area maps;
An AND operation unit 832, configured to perform an AND operation on the selected station caption area map and the station caption identification map to obtain a station caption image containing the station caption content.
The storage module 84 is configured to store the station caption identification map and the station caption image into the station caption sample library together as a station caption sample.
In the invention, when a station caption sample library is constructed, an obtained station caption area image is processed to obtain a station caption identification image and a station caption image, and the station caption identification image and the station caption image are stored together as a station caption sample, thereby completing the construction of the station caption sample in the station caption sample library, obtaining the station caption identification image and the station caption image with uniform specification required by station caption identification, and storing the station caption identification image and the station caption image corresponding to the station caption identification image.
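The construction pipeline implemented by modules 81 through 84 can be sketched as follows, for grayscale frames: pair up the 2n intercepted region maps, difference each pair per pixel, take the variance of the n differences at each pixel, and mark low-variance pixels (the stationary station caption) white/1 and the rest black/0. The variance threshold of 4.0 is an illustrative assumption, not a value given in the patent:

```python
import numpy as np

def build_caption_sample(region_maps, threshold=4.0):
    """From 2n station caption area maps, build one station caption
    sample: a binary identification map plus a caption image obtained
    by ANDing the map with one frame."""
    maps = np.asarray(region_maps, dtype=np.float64)   # shape (2n, H, W)
    diffs = maps[0::2] - maps[1::2]                    # n difference images
    mask = (diffs.var(axis=0) <= threshold).astype(np.uint8)
    caption = mask * np.asarray(region_maps[0], dtype=np.uint8)  # AND op
    return mask, caption                                # one sample

rng = np.random.default_rng(0)
frames = rng.integers(0, 256, size=(6, 8, 8)).astype(np.uint8)  # 2n = 6
frames[:, 2:4, 2:4] = 200          # a stationary "caption" block
mask, caption = build_caption_sample(frames)
```

On the stationary block the pairwise differences are all zero, so its variance is below any positive threshold and the pixels are marked as caption; the changing background yields large variances and is marked black.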
Example four
Fig. 9 shows a component structure of a station caption identifying apparatus provided in the fourth embodiment of the present invention. For convenience of explanation, only portions relevant to the present invention are shown.
The station caption identifying device corresponds to the station caption identifying method described in the second embodiment and is used for realizing the station caption identifying method described in the second embodiment. In this embodiment, the station caption sample is a station caption identifying chart of a television station and a corresponding station caption image, and is pre-stored in a station caption sample library.
As shown in fig. 9, the station logo recognition apparatus includes:
and the intercepting module 91 is used for intercepting the station logo area map to be identified in the current television picture.
In this embodiment, the capturing module is configured to capture the station caption area map at the upper left corner of the current television picture after the television picture has finished switching, obtaining the station caption area map to be identified.
And the matching module 92 is configured to match the station caption area map to be identified with the station caption samples in the station caption sample library one by one, and calculate a similarity metric value.
Further, the matching module specifically includes:
the selecting unit 921 is configured to select a station caption sample to be matched from a station caption sample library, where the station caption sample includes a station caption identification diagram and a station caption image corresponding to the station caption identification diagram.
The matching unit 922 is configured to match the station caption area map to be identified with the station caption sample to be matched, and calculate a similarity metric between the station caption area map to be identified and the station caption sample;
the judging unit 923 is configured to judge whether matching of all the station caption samples in the station caption sample library is completed, and if not, continue matching of the station caption area map to be identified and the next station caption sample to be matched.
Further, the matching unit specifically includes:
an AND operation subunit 9221, configured to perform an AND operation on the station caption area map to be identified and the station caption identification map, removing the non-station-caption part of the station caption area map to be identified;
a similar-region image acquisition subunit 9222, configured to subtract the station caption image from the station caption area map with the non-station-caption part removed, obtaining a similar-region image of the two;
a similarity metric calculation subunit 9223, configured to calculate the similarity metric value between the station caption area map to be identified and the station caption image according to the similar-region image.
And the identifying module 93 is configured to select the station caption sample with the smallest similarity metric value as the station caption identifying result after completing matching between the station caption area map to be identified and all the station caption samples in the station caption sample library.
In this embodiment, when the determination result of the determining unit 923 is yes, the matching between the logo area map to be identified and all logo samples in the logo sample library is completed. And after the station caption area image to be identified is matched with the station caption sample, calculating to obtain a similarity metric value of the station caption area image to be identified and the station caption sample. And after matching of the station caption area image to be identified and all the station caption samples in the station caption sample library is completed, obtaining the similarity metric value of the station caption area image to be identified and each station caption sample, selecting the minimum similarity metric value, and taking the station caption image in the station caption sample corresponding to the minimum similarity metric value as the station caption identification result.
In the invention, a station caption sample library stores station caption samples of a plurality of television stations, wherein the station caption samples comprise station caption identification images and corresponding station caption images, and the station caption identification images are binary images. The station caption identification graph, the station caption image and the station caption area graph to be identified are uniform in size, and the station caption in the image keeps consistent in position and direction, so that the factors of translation, scaling and rotation of the image do not need to be considered during station caption identification, a station caption identification algorithm is simplified, and the station caption identification efficiency is improved. When station caption is identified, the station caption area image to be identified and the station caption identification image in a station caption sample are subjected to AND operation, the content of a non-station caption part in the station caption area image to be identified is removed, and the station caption area image to be identified without the non-station caption part is matched with the station caption image, so that the problem that a traditional station caption identification method needs to extract characteristic points from the station caption area image to be identified is avoided, and the defect that the traditional station caption identification method is easily interfered by the background of the station caption area image to be identified is overcome.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention. For example, each module is only divided according to functional logic, but is not limited to the above division as long as the corresponding function can be realized; the names of the functional modules are only for convenience of distinction and are not intended to limit the present invention. In addition, each component in each embodiment of the present invention may be integrated into one module, or each component may exist alone physically, or two or more components may be integrated into one component. The integrated components can be realized in a form of hardware or a form of software functional units. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (14)

1. A station caption sample library construction method is characterized by comprising the following steps:
acquiring a station caption area in a television picture, and intercepting the station caption area to obtain a station caption area map;
extracting a station caption part from the station caption area image to obtain a station caption identification image with the station caption part being white and the non-station caption part being black;
performing AND operation on the station caption identification image and the station caption area image to acquire a station caption image containing station caption content;
and storing the station caption identification graph and the station caption image into a station caption sample library together as a station caption sample.
2. The station caption sample library construction method according to claim 1, wherein the step of intercepting the station caption area to obtain a station caption area map specifically comprises:
and intercepting the station caption area to obtain 2n frames of station caption area maps, wherein n is an integer greater than zero.
3. The station caption sample library construction method according to claim 2, wherein the step of extracting the station caption part from the station caption area map to obtain the station caption identification map with the station caption part being white and the non-station caption part being black specifically comprises the steps of:
dividing the 2n frames of station logo area images into n groups, wherein each group comprises two frames of station logo area images;
carrying out pixel value subtraction on corresponding pixels on the two frames of logo area images in each group to obtain n pixel value differences of each pixel;
according to the n pixel value differences of each pixel, performing variance calculation on the n pixel value differences of the pixel, and judging whether the variance obtained by calculation is smaller than or equal to a preset threshold value or not;
and when a certain variance is smaller than or equal to a preset threshold value, setting the pixel value of the pixel corresponding to the variance to be white, otherwise, setting the pixel value of the pixel corresponding to the variance to be black, so as to obtain the station mark identification graph.
4. The station caption sample library construction method according to claim 1, wherein the step of performing and operation on the station caption identification map and the station caption area map to obtain the station caption image including the station caption content specifically comprises:
selecting a frame of station logo area image from the intercepted station logo area image;
and performing AND operation on the selected station caption area image and the station caption identification image to obtain a station caption image containing station caption contents.
5. A station caption identification method, characterized in that the method comprises:
intercepting a station logo area image to be identified in a current television picture;
matching the station caption area graph to be identified with the station caption samples in the station caption sample library one by one, and calculating a similarity metric value;
after matching of the station caption area graph to be identified and all the station caption samples in the station caption sample library is completed, selecting the station caption sample with the minimum similarity metric value as a station caption identification result;
the station caption sample is a station caption marking picture of a television station and a corresponding station caption image, and is pre-stored in a station caption sample library.
6. The station caption identifying method according to claim 5, wherein the step of matching the station caption area map to be identified with the station caption samples in the station caption sample library one by one and calculating the similarity metric specifically comprises:
selecting a station caption sample to be matched from a station caption sample library, wherein the station caption sample comprises a station caption identification picture and a station caption image corresponding to the station caption identification picture;
matching the station caption area graph to be identified with the station caption sample to be matched, and calculating a similarity metric value of the station caption area graph to be identified and the station caption sample;
judging whether the matching of all station caption samples in the station caption sample library is finished or not;
and when the judgment result is negative, continuously matching the station caption area image to be identified with the next station caption sample to be matched.
7. The station caption identification method of claim 6, wherein the step of matching the station caption area map to be identified with the station caption sample to be matched and calculating the similarity metric between the station caption area map to be identified and the station caption sample specifically comprises:
performing AND operation on the station caption area image to be identified and the station caption identification image, and removing a non-station caption part in the station caption area image to be identified;
subtracting the station caption image from the station caption area image to be identified without the non-station caption part to obtain a similar area image of the station caption image and the station caption area image to be identified without the non-station caption part;
and calculating the similarity metric value of the logo area image to be identified and the logo image according to the similar area image.
8. A station caption sample library construction device, characterized in that the device comprises:
the station caption area image acquisition module is used for acquiring the area of the station caption in the television picture, intercepting the station caption area and acquiring a station caption area image;
the station caption marking image acquisition module is used for extracting a station caption part from the station caption area image and acquiring a station caption marking image with a white station caption part and a black non-station caption part;
the station caption image acquisition module is used for performing AND operation on the station caption marking image and the station caption area image to acquire a station caption image containing station caption contents;
and the storage module is used for storing the station caption identification chart and the station caption image into a station caption sample library together as a station caption sample.
9. The station caption sample library construction apparatus of claim 8, wherein the station caption area map acquisition module is specifically configured to:
and intercepting the station caption area to obtain 2n frames of station caption area maps, wherein n is an integer greater than zero.
10. The station caption sample library construction apparatus according to claim 8, wherein the station caption identification image acquisition module specifically comprises:
a dividing unit, used for dividing the 2n frames of station caption area images into n groups, each group comprising 2 frames of station caption area images;
a subtraction unit, used for subtracting the pixel values of corresponding pixels in the two frames of station caption area images in each group to obtain n pixel value differences for each pixel;
a judgment unit, used for calculating, for each pixel, the variance of its n pixel value differences and judging whether the calculated variance is smaller than or equal to a preset threshold value;
and a station caption identification image acquisition unit, used for setting the pixel value of the pixel corresponding to a variance to white when that variance is smaller than or equal to the preset threshold value, and otherwise setting it to black, so as to obtain the station caption identification image.
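The pipeline of claims 9–10 — capture 2n frames, difference each pair, threshold the per-pixel variance — can be sketched as follows. This is an illustrative reconstruction, not the patented implementation; the grayscale representation, array shapes, and the threshold value are assumptions.

```python
import numpy as np

def extract_logo_mask(frames, threshold=25.0):
    """Build a binary station caption identification image from 2n frames.

    frames: array of shape (2n, H, W), grayscale. The frames are split
    into n pairs; for each pixel, the variance of the n pairwise
    differences is computed. Static pixels (the station caption) have
    low variance and are set to white (255); changing pixels (programme
    content) are set to black (0).
    """
    frames = np.asarray(frames, dtype=np.float64)
    assert frames.shape[0] % 2 == 0, "an even number of frames is required"
    pairs = frames.reshape(-1, 2, *frames.shape[1:])   # (n, 2, H, W)
    diffs = pairs[:, 0] - pairs[:, 1]                  # n difference maps
    variance = diffs.var(axis=0)                       # per-pixel variance
    return np.where(variance <= threshold, 255, 0).astype(np.uint8)
```

Note that any pixel that stays constant across all 2n frames (not only logo pixels) gets variance zero and is marked white, which is why the claims first narrow the computation to the intercepted station caption area.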
11. The station caption sample library construction apparatus according to claim 7, wherein the station caption image acquisition module specifically comprises:
a selecting module, used for selecting one frame of station caption area image from the intercepted station caption area images;
and an operation module, used for performing an AND operation on the selected station caption area image and the station caption identification image to obtain a station caption image containing the station caption content.
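When the identification image uses the values 255 (white) and 0 (black), the AND operation of claims 8 and 11 reduces to a per-pixel bitwise AND: pixels under the white mask keep their value and all others become black. A minimal sketch, assuming 8-bit grayscale images:

```python
import numpy as np

def build_logo_image(region, mask):
    """AND one station caption area image with the identification image.

    x & 255 == x and x & 0 == 0 for uint8 values, so this keeps the
    station caption pixels and blanks the non-caption part.
    """
    return np.bitwise_and(region.astype(np.uint8), mask.astype(np.uint8))
```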
12. An apparatus for station caption recognition, the apparatus comprising:
an intercepting module, used for intercepting a station caption area image to be identified from the current television picture;
a matching module, used for matching the station caption area image to be identified against the station caption samples in a station caption sample library one by one and calculating a similarity metric value;
an identification module, used for, after the station caption area image to be identified has been matched against all the station caption samples in the station caption sample library, selecting the station caption sample with the smallest similarity metric value as the station caption recognition result;
wherein each station caption sample comprises a station caption identification image of a television station and a corresponding station caption image, and is pre-stored in the station caption sample library.
13. The station caption recognition apparatus according to claim 12, wherein the matching module specifically comprises:
a selecting unit, used for selecting a station caption sample to be matched from the station caption sample library, the station caption sample comprising a station caption identification image and a corresponding station caption image;
a matching unit, used for matching the station caption area image to be identified with the station caption sample to be matched and calculating the similarity metric value of the two;
and a judging unit, used for judging whether all the station caption samples in the station caption sample library have been matched, and, when the judgment result is negative, continuing to match the station caption area image to be identified with the next station caption sample to be matched.
14. The station caption recognition apparatus according to claim 13, wherein the matching unit specifically comprises:
an AND operation subunit, used for performing an AND operation on the station caption area image to be identified and the station caption identification image to remove the non-station-caption part from the station caption area image to be identified;
a similar area image acquisition subunit, used for subtracting the station caption image from the station caption area image to be identified from which the non-station-caption part has been removed, to obtain a similar area image of the station caption image and that station caption area image;
and a similarity metric calculation subunit, used for calculating the similarity metric value of the station caption area image to be identified and the station caption image according to the similar area image.
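The matching flow of claims 12–14 can be sketched as follows. The reduction of the similar area image to a single value is an assumption (mean absolute difference is used here); the patent only requires that a smaller metric indicate a better match.

```python
import numpy as np

def similarity_metric(region, sample_mask, sample_logo):
    """Match one area image against one stored sample (claim 14 sketch).

    1. AND the region with the sample's identification image to blank
       out the non-caption part.
    2. Subtract the sample's station caption image to get the similar
       area image.
    3. Reduce that image to one number (assumed: mean absolute
       difference); 0 means a pixel-perfect match.
    """
    masked = np.bitwise_and(region.astype(np.uint8), sample_mask.astype(np.uint8))
    diff = masked.astype(np.int16) - sample_logo.astype(np.int16)
    return float(np.abs(diff).mean())

def identify(region, samples):
    """Claim 12 sketch: samples is a list of (name, mask, logo) triples;
    return the name of the sample with the smallest metric."""
    return min(samples, key=lambda s: similarity_metric(region, s[1], s[2]))[0]
```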
CN201410038040.5A 2014-01-26 2014-01-26 Method and device for constructing station caption sample library and method and device for identifying station caption Active CN103729657B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410038040.5A CN103729657B (en) 2014-01-26 2014-01-26 Method and device for constructing station caption sample library and method and device for identifying station caption

Publications (2)

Publication Number Publication Date
CN103729657A true CN103729657A (en) 2014-04-16
CN103729657B CN103729657B (en) 2017-05-03

Family

ID=50453721

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101005623A (en) * 2006-01-21 2007-07-25 宇龙计算机通信科技(深圳)有限公司 Method for determining video frequency frame block in-frame or interframe coding
US20080130997A1 (en) * 2006-12-01 2008-06-05 Huang Chen-Hsiu Method and display system capable of detecting a scoreboard in a program
CN101739561A (en) * 2008-11-11 2010-06-16 中国科学院计算技术研究所 TV station logo training method and identification method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YU JING ET AL.: "Research on Technologies Related to Station Logo Recognition Based on Statistical Classification", Microcomputer & Its Applications *
HUANG CHAOYUE: "Principle and Application of a Station Logo Detection System", Cable TV Technology, Equipment & Devices *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104537376A (en) * 2014-11-25 2015-04-22 深圳创维数字技术有限公司 A method, a relevant device, and a system for identifying a station caption
CN104537376B (en) * 2014-11-25 2018-04-27 深圳创维数字技术有限公司 Method, relevant device, and system for identifying a station caption
CN106446889A (en) * 2015-08-10 2017-02-22 Tcl集团股份有限公司 Local identification method and local identification device for station logo
CN106446889B (en) * 2015-08-10 2019-09-17 Tcl集团股份有限公司 Local station logo identification method and device
CN106845442A (en) * 2017-02-15 2017-06-13 杭州当虹科技有限公司 Station caption detection method based on deep learning
CN108009637A (en) * 2017-11-20 2018-05-08 天津大学 Station caption segmentation method of a pixel-level TV station caption recognition network based on cross-layer feature extraction
CN108009637B (en) * 2017-11-20 2021-06-25 天津大学 Station caption segmentation method of pixel-level station caption identification network based on cross-layer feature extraction

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant