JP4544166B2 - Broadcast image generation method, broadcast image generation program, and broadcast image generation apparatus - Google Patents


Info

Publication number: JP4544166B2
Application number: JP2006021065A
Other versions: JP2007202065A (en)
Authority: JP (Japan)
Prior art keywords: image data, character, data, pixel, broadcast
Inventors: 治 井坂, 晴雄 東風, 充 高橋
Original assignee: ダイキン工業株式会社 (Daikin Industries, Ltd.)
Legal status: Expired - Fee Related
History: application filed by ダイキン工業株式会社 as JP2006021065A; published as JP2007202065A; granted as JP4544166B2

Description

  The present invention relates to a broadcast image generation method, a broadcast image generation program, and a broadcast image generation apparatus for displaying caption information even on a terminal with limited processing capability, such as a portable terminal.

  The data of a program broadcast as a television broadcast includes video data and audio data. The broadcast program data may also include subtitle data whose display the user can turn on or off. Such selectable caption data is generally called closed captioning and was developed mainly for hearing-impaired viewers. These subtitles include not only the performers' dialogue but also descriptions of background music (BGM) and sound effects.

  In closed captioning, a character code obtained by encoding the audio is inserted into the 21st horizontal scanning line of the television signal. A dedicated decoder can separate the closed caption data from the television signal.

  A technique has also been disclosed for generating metadata from such closed caption data, using text whose display the user can select (see, for example, Patent Document 1). The information processing apparatus described in that document acquires a transmission signal and extracts from the broadcast signal identification information that uniquely distinguishes the program. Time information and this identification information are then added to the text data. This makes it possible to search using metadata that carries both the time information corresponding to the text data and the identification information distinguishing the program.

In addition, techniques have been studied for displaying captions that remain easy to view when an image is reproduced on a small display (see, for example, Patent Document 2). In the technique described in that document, pre-conversion video data read from a recording medium is expanded into a frame memory. A character recognition unit analyzes captions rendered into the video as graphics, performs character recognition, and obtains caption text data. A data format conversion unit converts the data format of the video data as necessary, and a writing unit records the converted video data together with the caption text data. A small data display device such as a mobile phone reads the video data from a recording medium or over a network, displays the video in a first display area of its screen, and displays the caption text in a second display area, sized to fit each area.
Patent Document 1: JP 2005-198206 A (first page)
Patent Document 2: JP 2005-123726 A (first page)

  As described above, portable terminals such as cellular phone terminals have recently become widespread. Some of these terminals offer various functions, such as network connectivity and image playback. Models equipped with a program execution environment are also available and can download and execute simple application programs. However, a portable terminal is housed in a small casing for portability, which limits the size of its display. Its memory capacity, power consumption, and CPU (central processing unit) are also constrained, so advanced information processing can be difficult. For these reasons, there are limits to running a complex application, such as a closed caption decoder, on a small portable terminal.

  The present invention has been made to solve the above problems, and its object is to provide a broadcast image generation method, a broadcast image generation program, and a broadcast image generation apparatus capable of displaying subtitle information even on a terminal with limited processing capability, such as a portable terminal.

In order to solve the above problem, the invention of claim 1 is a method for generating image data relating to broadcasting using a control computer that acquires a broadcast signal, in which the control computer performs: an image data acquisition step of acquiring first image data from video data in the broadcast signal; a character data acquisition step of acquiring character data corresponding to the video data; a character image generation step of generating character image data by applying gradation expression processing using subpixels to the character data; and a combined image generation step of generating, by a JPEG conversion algorithm, second image data in which the character image data and the first image data are combined. The gist is that the gradation expression processing using subpixels is an anti-aliasing process that controls the gradation of each of the red, green, and blue primary colors constituting one pixel.

  The invention of claim 2 is the broadcast image generation method of claim 1, in which the control computer further performs a step of extracting a caption signal from the received broadcast signal, and in the character data acquisition step generates the character data based on the extracted caption signal.

  The invention of claim 3 is the broadcast image generation method of claim 1, in which the control computer further performs a step of specifying a broadcast identifier of the received broadcast signal, and in the character data acquisition step acquires, for each video, the character data associated with that broadcast identifier.

The invention of claim 4 is a method for generating image data relating to broadcasting using a control computer that acquires a broadcast signal, in which the control computer performs: an image data acquisition step of acquiring first image data from video data in the broadcast signal; a character data acquisition step of acquiring character data corresponding to the video data; a character image generation step of generating character image data by applying gradation expression processing using subpixels to the character data; and a combined image generation step of generating, by a discrete cosine transform compression algorithm, second image data in which the character image data and the first image data are combined. The gist is that the gradation expression processing using subpixels is an anti-aliasing process that controls the gradation of each of the red, green, and blue primary colors constituting one pixel.

The invention of claim 5 is an apparatus for generating image data relating to broadcasting using a control computer that acquires a broadcast signal, in which the control computer functions as: image data acquisition means for acquiring first image data from video data in the broadcast signal; character data acquisition means for acquiring character data corresponding to the video data; character image generation means for generating character image data by applying gradation expression processing using subpixels to the character data; and combined image generation means for generating, by a JPEG conversion algorithm, second image data in which the character image data and the first image data are combined. The gist is that the gradation expression processing using subpixels is an anti-aliasing process that controls the gradation of each of the red, green, and blue primary colors constituting one pixel.

The invention of claim 6 is an apparatus for generating image data relating to broadcasting using a control computer that acquires a broadcast signal, in which the control computer functions as: image data acquisition means for acquiring first image data from video data in the broadcast signal; character data acquisition means for acquiring character data corresponding to the video data; character image generation means for generating character image data by applying gradation expression processing using subpixels to the character data; and combined image generation means for generating, by a discrete cosine transform compression algorithm, second image data in which the character image data and the first image data are combined. The gist is that the gradation expression processing using subpixels is an anti-aliasing process that controls the gradation of each of the red, green, and blue primary colors constituting one pixel.

The invention of claim 7 is a program for generating image data relating to broadcasting using a control computer that acquires a broadcast signal, the program causing the control computer to function as: image data acquisition means for acquiring first image data from video data in the broadcast signal; character data acquisition means for acquiring character data corresponding to the video data; character image generation means for generating character image data by applying gradation expression processing using subpixels to the character data; and combined image generation means for generating, by a JPEG conversion algorithm, second image data in which the character image data and the first image data are combined. The gist is that the gradation expression processing using subpixels is an anti-aliasing process that controls the gradation of each of the red, green, and blue primary colors constituting one pixel.

The invention of claim 8 is a program for generating image data relating to broadcasting using a control computer that acquires a broadcast signal, the program causing the control computer to function as: image data acquisition means for acquiring first image data from video data in the broadcast signal; character data acquisition means for acquiring character data corresponding to the video data; character image generation means for generating character image data by applying gradation expression processing using subpixels to the character data; and combined image generation means for generating, by a discrete cosine transform compression algorithm, second image data in which the character image data and the first image data are combined. The gist is that the gradation expression processing using subpixels is an anti-aliasing process that controls the gradation of each of the red, green, and blue primary colors constituting one pixel.

(Function)
According to the inventions of claims 1, 5, and 7, character image data is generated by applying to the character data gradation expression processing using subpixels (an anti-aliasing process that controls the gradation of each of the red, green, and blue primary colors constituting one pixel), and second image data combining the character image data with the first image data is generated by a JPEG conversion algorithm. Video and characters can therefore be viewed on any terminal equipped with a viewer that can display JPEG images. Moreover, because the character image data is generated with subpixel gradation expression, easy-to-read characters can be reproduced.

  According to the invention of claim 2, the caption signal is extracted from the received broadcast signal and character data is generated from it, so the characters of a caption broadcast can be viewed even on a terminal without a dedicated decoder.

  According to the invention of claim 3, the broadcast identifier of the received broadcast signal is specified, and the character data associated with that identifier is acquired for each video. Information provided through the identifier can therefore be viewed even on a simple terminal.

  According to the fourth, sixth, and eighth aspects of the invention, the second image data obtained by combining the character image data and the first image data is generated by a compression algorithm of discrete cosine transform. When the discrete cosine transform is used, it is possible to reproduce characters that are easy to read even with a simple terminal while performing data compression.

  According to the present invention, it is possible to display caption information even in a terminal having a limited processing capability such as a portable terminal.

  Hereinafter, an embodiment of the present invention will be described with reference to the drawings. FIG. 1 is an explanatory diagram showing the configuration of a video information processing apparatus to which the present invention is applied. In this embodiment, as shown in FIG. 1, a broadcast signal from a broadcast station is received by a television receiver 30, to which a display selection subtitle decoder 40 and a home server 50 are connected. Based on the user's operation input, the display selection subtitle decoder 40 decodes the display selection subtitles and displays them over the video, or generates recording data and supplies it to the home server 50 for recording.

The broadcasting station 10 is a facility that broadcasts programs using terrestrial waves or satellite waves. The broadcast signal of the broadcast program includes video data and audio data. The video data is moving image data, and the audio data is data relating to audio that is reproduced in synchronization with the video data.

  The broadcast signal contains both subtitles that are always displayed as part of the video and subtitles that are displayed only on selection. The former include program titles, cast introductions, Japanese subtitles on overseas works, and the like. The latter, selectable subtitle data (so-called closed captions), may include text corresponding to the performers' dialogue as well as descriptions of broadcast content such as BGM and sound effects. In this description, caption data whose display can be toggled is referred to as "display selection subtitle data".

  Next, the display selection subtitle data will be described. In NTSC analog terrestrial broadcasting, for example, the video signal uses 525 scanning lines. The first 21 lines of each field (one frame consists of two fields) form the VBI (Vertical Blanking Interval), assigned as the interval for returning the scan to the top of the screen. Closed captions are transmitted by multiplexing 7-bit character codes onto line 21 of the VBI of each field; this channel can carry two character sets at about 60 characters per second. The display selection subtitle data is decoded from the video data at playback time and can be displayed simultaneously with the video.
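The line-21 scheme described above can be sketched in code. The following is a minimal illustration, not the patent's implementation: it assumes CEA-608-style transmission, in which each of the two caption bytes per field carries a 7-bit character code plus an odd-parity bit, and the byte values used are invented for the example.

```python
# Sketch (hedged): decoding the two parity-protected 7-bit characters
# carried in line 21 of a field, in the style of NTSC closed captions.

def strip_parity(byte):
    """Drop the odd-parity bit and return the 7-bit character code,
    or None if the parity check fails."""
    ones = bin(byte).count("1")
    if ones % 2 != 1:          # each byte must have odd parity
        return None
    return byte & 0x7F

def decode_line21_pair(b1, b2):
    """Decode one field's two caption bytes into printable characters."""
    chars = []
    for b in (b1, b2):
        code = strip_parity(b)
        # Printable codes map (roughly) onto ASCII in the basic charset.
        if code is not None and 0x20 <= code <= 0x7F:
            chars.append(chr(code))
    return "".join(chars)

# 'H' (0x48) and 'i' (0x69) with their odd-parity bits set:
print(decode_line21_pair(0xC8, 0xE9))   # → Hi
```

A real decoder would also handle the control-code pairs that position and style captions; this sketch covers only plain characters.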

  The user receives, with the television receiver 30, a broadcast signal composed of video data and audio data that includes display selection subtitle data. The television receiver 30 includes a tuner 31, a signal processing unit 32, and an output unit 33 comprising a display and a speaker. When the display selection subtitle data is not displayed, the broadcast signal selected by the tuner 31 is demodulated by the signal processing unit 32, and the output unit 33 sends the video signal to the display and the audio signal to the speaker, allowing the user to watch the program.

  The television receiver 30 acquires a broadcast signal selected by the tuner 31. The display selection subtitle decoder 40 acquires the broadcast signal selected by the tuner 31 of the television receiver 30 and decodes the video signal and the audio signal. Then, the display selection subtitle data is extracted and decoded based on the user's operation input, and output to the output unit 33.

The home server 50 is connected to a video information providing server 70 as a video information processing apparatus via the Internet I as a network.
On the other hand, the home server 50 receives the time-stamped recording data that the display selection subtitle decoder 40 generates based on the user's operation input, and records it in its internal recording data storage unit. The home server 50 also receives, from the video information providing server 70 via the Internet I, encrypted metadata containing text data corresponding to the display selection subtitle data and the time codes corresponding to that text data.

  Furthermore, the home server 50 can decrypt the encrypted metadata using key data and, using the metadata, execute a matching process between text entered by the user and the metadata. If the matching process finds a time code corresponding to the entered text, the recorded data is searched based on that time code and supplied to the display selection subtitle decoder 40.

The video information providing server 70 acquires broadcast signals carrying the display selection subtitle data created by the broadcast station 10, receiving them over various networks and radio waves. It creates metadata from these broadcast signals, encrypts the metadata, and distributes the encrypted metadata to users via the Internet I. The video information providing server 70 includes a CPU, ROM (Read Only Memory), RAM (Random Access Memory), and data storage means such as an HDD (Hard Disk Drive); the CPU functions as the control computer. The block configuration of the video information providing server 70 is shown in the accompanying block diagram. By executing an image information processing program, the CPU of the video information providing server 70 carries out the image data acquisition stage, character data acquisition stage, character image generation stage, combined image generation stage, and so on; as a result, the server functions as image data acquisition means, character data acquisition means, character image generation means, combined image generation means, and the like. The video information providing server 70 executes both the normal metadata generation process and the video information generation process for portable terminals, realizing the functions shown in its block diagram as described below.

(Metadata generation process)
First, the normal metadata generation process will be described. The CPU of the video information providing server 70 functions as a broadcast signal acquisition unit 701 serving as signal acquisition means. The broadcast signal acquisition unit 701 acquires a broadcast signal via a network or broadcast radio wave and supplies it to the decoder 702, which decodes it. Here, the decoder 702 decodes only the portion of the broadcast signal needed to create metadata: the video signal containing the program management information (including the program ID information) and the display selection subtitle data.

  The program ID information extraction unit 703 extracts program ID information that can specify a broadcast program from the program management data included in the video data decoded by the decoder 702 and supplies the extracted program ID information to the metadata generation unit 708. Further, the program ID information extraction unit 703 supplies the video data to the display selection subtitle data decoder 704 as a separating unit.

  The display selection subtitle data decoder 704 decodes the display selection subtitle data contained in the acquired video data and supplies the corresponding text data to the analysis processing unit 705. The analysis processing unit 705 divides this text data into text groups of appropriate length as necessary (for example, the text displayed at one time is treated as one text group) and supplies the divided text groups to the time code addition processing unit 707.

  The time code addition processing unit 707 then receives a subtitle data registration instruction from the analysis processing unit 705 and, using the timer 706, adds the time at which the instruction was received as a time code. For text corresponding to a display selection subtitle, for example, a time code corresponding to the start time of that subtitle is added. When the broadcast signal acquisition unit 701 acquires the broadcast signal in real time, the time code addition processing unit 707 adds the time code to the text data based on the current time indicated by the timer 706. If the time at which the time code is added lags the program's broadcast time, the time code addition processing unit 707 calculates a time code corresponding to the broadcast time from the delay and the current time indicated by the timer 706, and adds it to the text data.
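The delay compensation described above amounts to simple time arithmetic. A minimal sketch, with invented function names and values:

```python
# Sketch (hedged): backdating a caption's time code by a known
# processing delay, as the time code addition unit does above.

def broadcast_time_code(current_time_s, delay_s=0.0):
    """Return the time code (in seconds) matching the broadcast time."""
    return current_time_s - delay_s

# A caption registered at t = 125.0 s with a 1.5 s decoding delay
# is stamped with the broadcast-time code 123.5 s.
print(broadcast_time_code(125.0, 1.5))   # → 123.5
```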

  The metadata generation unit 708 generates metadata by adding the program ID information supplied from the program ID information extraction unit 703 to the time-coded text data supplied from the time code addition processing unit 707. The metadata thus consists of the text data, a time code describing its start time, and the program ID information. This metadata is supplied to the encryption processing unit 709.

  The encryption processing unit 709 encrypts the metadata supplied from the metadata generation unit 708 with an encryption key stored in advance in the key data storage unit 72, where an encryption key is recorded for each program ID. The encryption processing unit 709 extracts the encryption key from the key data storage unit 72 based on the program ID contained in the metadata, encrypts the metadata with that key, and records the result in the encrypted metadata storage unit 73. The encrypted metadata storage unit 73 functions as video information storage means for recording video information files that can be searched by characters. In response to a user's request, the encrypted metadata is provided to the home server 50 for each program via the Internet I from the communication unit 710, which serves as transmission means.
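The per-program keying described above can be sketched as follows. This is an illustration only: the key table and program ID are invented, and a toy XOR keystream derived with SHA-256 stands in for whatever real cipher the server would use.

```python
# Sketch (hedged): look up the encryption key by program ID, then
# encrypt/decrypt the metadata with it. NOT a production cipher.
import hashlib

KEY_DATA_STORAGE = {"PROG-001": b"key-for-prog-001"}  # one key per program ID

def _keystream(key, length):
    """Derive a deterministic keystream of the given length from the key."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(out[:length])

def encrypt_metadata(program_id, metadata):
    key = KEY_DATA_STORAGE[program_id]        # key lookup by program ID
    data = metadata.encode("utf-8")
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

def decrypt_metadata(program_id, blob):
    key = KEY_DATA_STORAGE[program_id]
    plain = bytes(a ^ b for a, b in zip(blob, _keystream(key, len(blob))))
    return plain.decode("utf-8")
```

The home server's decryption side mirrors this: it looks up the per-program decryption key and reverses the transformation.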

  The home server 50 stores the encrypted metadata acquired from the video information providing server 70 in its built-in encrypted metadata storage unit and decrypts it using a decryption key stored in advance in its key data storage unit; a decryption key is provided and recorded for each program ID. The decrypted metadata is recorded in the metadata storage unit in the home server 50. When a search target program ID and text serving as a search key are input to the operation input unit of the home server 50, the home server 50 executes a matching process against the metadata recorded in the metadata storage unit. When text containing the search key is identified, the recorded data storage unit is searched using the time code associated with that text.

(Video information generation processing for mobile devices)
Next, the process of generating the video information provided to the cellular phone terminal 100 will be described. This process uses character data generated in the analysis processing unit 705 from the text data corresponding to the display selection subtitle data.

  First, the video information providing server 70 separates the caption data and the video data (step S1-1). In this process, as in the metadata generation process, the broadcast signal acquired by the broadcast signal acquisition unit 701 is decoded by the decoder 702, which serves as image data acquisition means. Here too, the decoder 702 decodes only the portion of the broadcast signal needed: the video signal containing the program management information (including the program ID information) and the caption signal (display selection subtitle data).

  Further, the program ID information extraction unit 703 extracts program ID information that can specify a broadcast program from program management data included in the video data decoded by the decoder 702. In the video information generation process for the portable terminal, the program ID information is supplied to the superimposed image generation unit 712. Further, the program ID information extraction unit 703 supplies this video data to the display selection subtitle data decoder 704.

  Next, the video information providing server 70 converts the caption data into text (step S1-2). Here again, the display selection subtitle data decoder 704 decodes the display selection subtitle data included in the acquired video data, and supplies the corresponding text data to the analysis processing unit 705 as the character data acquisition means. The analysis processing unit 705 generates character data using text data corresponding to the display selection subtitle data.

Then, the video information providing server 70 generates character image data in which the generated character data (text data) is rendered with gradation using RGB subpixels (step S1-3). In the present embodiment, the character image generation unit 711, serving as character image generation means, generates character image data from the text supplied by the analysis processing unit 705 using subpixel anti-aliasing. This is the general anti-aliasing technique of placing intermediate colors between character and background along their boundary to smooth the gradation, extended to the subpixel level: within the three primary colors of red, green, and blue that make up each pixel of a color display, the gradation of each color composing one pixel is controlled individually (subpixel rendering), expressing finer changes in gradation. Because this process uses a font with interpolation data, a bitmap font is not used.
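The subpixel step above can be sketched as follows. This is a simplified illustration, not the patent's renderer: it assumes the glyph has already been rasterized to coverage values at 3x horizontal resolution (a real implementation would rasterize an outline font), and each group of three high-resolution columns drives the R, G, and B gradations of one output pixel.

```python
# Sketch (hedged): map 3x-width coverage values (0.0-1.0) onto the
# R, G, B gradations of each output pixel — dark glyph on white.

def subpixel_render(hi_res_rows):
    """hi_res_rows: rows of coverage values at 3x the output width.
    Returns rows of (r, g, b) tuples, one tuple per output pixel."""
    out = []
    for row in hi_res_rows:
        pixels = []
        for x in range(0, len(row) - len(row) % 3, 3):
            # Each subpixel column controls the gradation of one primary.
            r, g, b = (int(255 * (1.0 - c)) for c in row[x:x + 3])
            pixels.append((r, g, b))
        out.append(pixels)
    return out

# One row: a vertical glyph edge covering only the first two subpixels.
row = [1.0, 1.0, 0.0, 0.0, 0.0, 0.0]
print(subpixel_render([row]))   # → [[(0, 0, 255), (255, 255, 255)]]
```

Note that the edge pixel is neither pure black nor pure white: its blue subpixel stays lit, which is exactly the subpixel-granularity smoothing the embodiment relies on for readable small text.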

  Next, the video information providing server 70 generates JPEG image data (second image data) in which the character image data generated by the character image generation unit 711 is superimposed on the video data (step S1-4). First, the superimposed image generation unit 712, serving as combined image generation means, acquires the video data from the decoder 702 as the first image data. Image data is then generated in "Exif (Exchangeable Image File Format)", which supports formats such as "JPEG (Joint Photographic Experts Group)". Exif extends "JFIF (JPEG File Interchange Format)", the standard format for storing JPEG-compressed image data together with image information in a file, and is the image file standard for digital cameras standardized by the Japan Electronic Industry Development Association (JEIDA). Under this standard, information about the image and additional data such as the shooting date and time can be recorded, along with a reduced image (thumbnail).

  Furthermore, second image data in which the character image data acquired from the character image generation unit 711 is superimposed on the corresponding video data is generated by a JPEG conversion algorithm. In the present embodiment the character image data is superimposed on the image data, but it suffices that the two are combined into a single file; for example, the character image data may be displayed in an area separate from the first image data, with the two integrated into one second image.
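The compositing step before JPEG encoding can be sketched as below. This is an illustration under simplifying assumptions: frames are plain nested lists of (r, g, b) tuples, None marks transparent caption pixels, and the JPEG encoding itself is left to an image library rather than shown here.

```python
# Sketch (hedged): overlay the character image onto the video frame
# at a given position before the combined frame is JPEG-encoded.

def superimpose(frame, caption, top, left):
    """Overlay 'caption' (rows of (r,g,b) or None) onto 'frame' in place."""
    for dy, row in enumerate(caption):
        for dx, px in enumerate(row):
            if px is not None:              # skip transparent pixels
                frame[top + dy][left + dx] = px
    return frame

# 2x4 gray frame; 1x2 caption (one white pixel, one transparent)
# placed at row 1, column 1:
frame = [[(128, 128, 128)] * 4 for _ in range(2)]
caption = [[(255, 255, 255), None]]
superimpose(frame, caption, 1, 1)
```

Only the opaque caption pixel replaces the underlying video pixel; everything else in the frame is untouched, matching the "caption embedded in the image" behavior described above.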

  Next, the video information providing server 70 records the JPEG image data in the portable terminal image data storage unit 75 (step S1-5). In this case, the superimposed image generation unit 712 acquires program ID information that can specify a broadcast program from the program ID information extraction unit 703. Then, the program ID information is added to the composite image data and recorded in the mobile terminal image data storage unit 75 in the order of generation.

  The process then waits until the display selection subtitle data changes (step S1-6). When new display selection subtitle data is supplied, steps S1-1 to S1-5 are repeated. Processing ends when the program ends (step S1-7).

The image data recorded in this way in the portable terminal image data storage unit 75 is provided to the cellular phone terminal 100 via the communication unit 710 in response to a user request.
The mobile phone terminal 100, in turn, can browse ordinary JPEG-format image data with any viewer capable of displaying it; the user thus sees the video image with the subtitles embedded in it.

As described above, according to the present embodiment, the following effects can be obtained.
In the above embodiment, caption data and video data are separated (step S1-1), and the caption data is converted into text (step S1-2). Then, character image data in which gradation is expressed by RGB subpixels is generated for the text data (step S1-3). JPEG image data is generated by superimposing the generated character image data on the video data (step S1-4). Since the subtitled image is provided as JPEG image data, it can be browsed even with a relatively simple model such as a portable terminal.

In the above embodiment, character image data in which gradation is expressed for the text data by RGB subpixels is generated (step S1-3). Consequently, even when the superimposed image undergoes DCT (discrete cosine transform) processing and the superimposed text shifts in color, the character data remains compatible with the image data and easy-to-read characters can be reproduced.

  In the above embodiment, the video information providing server 70 executes both the metadata generation process and the video information generation process for portable terminals, so the broadcast signal acquisition unit 701 through the analysis processing unit 705 can be shared between the two processes.

The above embodiment may also be modified as follows.
In the embodiment described above, not only the discrete cosine transform but also the discrete wavelet transform (DWT) can be used to generate the JPEG image data. In this case, the DWT uses spatially localized solitary waves instead of the standing cosine waves of the DCT. Because relatively many short waves are assigned to the higher-frequency components, abrupt changes can be represented, and the mosquito noise that otherwise appears around sharp transitions such as edges can be suppressed.
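The contrast between the DCT's standing cosine waves and the DWT's localized waves can be seen in a one-dimensional sketch using the Haar wavelet, the simplest DWT. The naive DCT-II below is for illustration only, not the actual JPEG or JPEG 2000 transform.

```python
import math

def dct(x):
    """Naive, unnormalized DCT-II of a 1-D signal."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n in range(N))
            for k in range(N)]

def haar(x):
    """One level of the Haar DWT: pairwise averages and differences."""
    avg = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    dif = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return avg, dif

# A sharp edge, like a character outline over a flat background.
edge = [0, 0, 0, 255, 255, 255, 255, 255]
coeffs = dct(edge)
avg, dif = haar(edge)

# The Haar difference band is nonzero only at the edge position (the
# disturbance stays spatially local), while the DCT spreads the edge
# energy over many cosine coefficients; quantizing those away is what
# produces ringing ("mosquito noise") around edges.
nonzero_dct = sum(1 for c in coeffs[1:] if abs(c) > 1e-9)
nonzero_haar = sum(1 for d in dif if abs(d) > 1e-9)
```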

  In the above embodiment, the video information providing server 70 executes video information generation processing for mobile terminals. Instead, the home server 50 may execute video information generation processing for a portable terminal. In this case, the home server 50 is provided with a character image generation unit 711 and a superimposed image generation unit 712.

  In the above embodiment, when generating character image data, the character image generation unit 711 applies anti-aliasing processing using sub-pixels to the text acquired from the analysis processing unit 705. However, the text used for generating the character image data is not limited to data acquired from the analysis processing unit 705. For example, it is also possible to identify the broadcast identifier of a received broadcast signal and, for each video, acquire character data associated with that broadcast identifier from a separate data storage unit. In this way, not only caption data but also various information supplied by other servers can be used.
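The broadcast-identifier lookup described in this modification can be sketched as a keyed store; the store layout and identifier format below are hypothetical.

```python
# Hypothetical external store keyed by a broadcast identifier; the
# identifier format and contents are illustrative only.
CHARACTER_DATA_STORE = {
    ("channel-1", "2006-01-30T20:00"): {1: "Opening scene",
                                        2: "Weather report"},
}

def acquire_character_data(broadcast_id, video_no):
    """Return the character data associated with one video of the
    identified broadcast, or an empty string if none is stored."""
    return CHARACTER_DATA_STORE.get(broadcast_id, {}).get(video_no, "")

text = acquire_character_data(("channel-1", "2006-01-30T20:00"), 2)
```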

  In the above embodiment, the video information providing server 70 executes the metadata generation processing and the video information generation processing for portable terminals, but the server is not limited to executing these two processes at the same time.

  In the above embodiment, the video information providing server 70 performs the processing from the separation of subtitle data and video data (step S1-1) through the recording of the character image data (step S1-5). The present invention is not limited to executing these processes on a single computer. For example, the separation of subtitle data and video data (step S1-1), the conversion of subtitle data into text (step S1-2), and the generation of character image data (step S1-3) through its recording (step S1-5) may each be executed on separate computers. As a result, in addition to generating images simultaneously with the broadcast, JPEG image data can also be created from a previously recorded and saved program together with its subtitle metadata.

A system schematic diagram of one embodiment of the present invention. An explanatory diagram of the block configuration of the video information providing server of one embodiment of the present invention. An explanatory diagram of the processing sequence of the embodiment of the present invention.

Explanation of symbols

DESCRIPTION OF SYMBOLS: 10 ... broadcasting station, 30 ... television receiver, 40 ... display selection subtitle decoder, 50 ... home server, 70 ... video information providing server as a video information processing apparatus, 701 ... broadcast signal acquisition unit, 702 ... decoder, 704 ... display selection subtitle data decoder, 705 ... analysis processing unit, 711 ... character image generation unit, 712 ... superimposed image generation unit, 75 ... portable terminal image data storage unit.

Claims (8)

  1. A method of generating image data related to broadcasting using a control computer that acquires a broadcast signal,
    The control computer is
    An image data acquisition stage for acquiring first image data from video data in a broadcast signal;
    A character data acquisition step of acquiring character data corresponding to the video data;
    A character image generation step of generating character image data obtained by performing gradation expression processing using subpixels on the character data;
    performing a combined-image generation step of generating second image data by combining the character image data and the first image data using a JPEG conversion algorithm;
    wherein the gradation expression processing using sub-pixels is anti-aliasing processing that controls the gradation of each of the red, green, and blue primary-color sub-pixels constituting one pixel. A broadcast image generation method characterized by the above.
  2. The control computer extracting a caption signal from the received broadcast signal;
    The broadcast image generation method according to claim 1, wherein in the character data acquisition step, character data is generated based on the caption signal.
  3. The control computer identifying a broadcast identifier of a received broadcast signal;
    The broadcast image generation method according to claim 1, wherein in the character data acquisition step, character data associated with the broadcast identifier is acquired for each video.
  4. A method of generating image data related to broadcasting using a control computer that acquires a broadcast signal,
    The control computer is
    An image data acquisition stage for acquiring first image data from video data in a broadcast signal;
    A character data acquisition step of acquiring character data corresponding to the video data;
    A character image generation step of generating character image data obtained by performing gradation expression processing using subpixels on the character data;
    performing a combined-image generation step of generating second image data by combining the character image data and the first image data using a compression algorithm of the discrete cosine transform;
    wherein the gradation expression processing using sub-pixels is anti-aliasing processing that controls the gradation of each of the red, green, and blue primary-color sub-pixels constituting one pixel. A broadcast image generation method characterized by the above.
  5. A device that generates image data related to broadcasting using a control computer that acquires broadcast signals,
    The control computer is
    Image data acquisition means for acquiring first image data from video data in a broadcast signal;
    Character data acquisition means for acquiring character data corresponding to the video data;
    Character image generation means for generating character image data obtained by performing gradation expression processing using subpixels on the character data;
    combined-image generation means for generating second image data by combining the character image data and the first image data using a JPEG conversion algorithm,
    wherein the gradation expression processing using sub-pixels is anti-aliasing processing that controls the gradation of each of the red, green, and blue primary-color sub-pixels constituting one pixel. A broadcast image generating apparatus characterized by the above.
  6. A device that generates image data related to broadcasting using a control computer that acquires broadcast signals,
    The control computer is
    Image data acquisition means for acquiring first image data from video data in a broadcast signal;
    Character data acquisition means for acquiring character data corresponding to the video data;
    Character image generation means for generating character image data obtained by performing gradation expression processing using subpixels on the character data;
    combined-image generation means for generating second image data by combining the character image data and the first image data using a compression algorithm of the discrete cosine transform,
    wherein the gradation expression processing using sub-pixels is anti-aliasing processing that controls the gradation of each of the red, green, and blue primary-color sub-pixels constituting one pixel. A broadcast image generating apparatus characterized by the above.
  7. A program for generating image data related to broadcasting using a control computer that acquires a broadcast signal,
    The control computer;
    Image data acquisition means for acquiring first image data from video data in a broadcast signal;
    Character data acquisition means for acquiring character data corresponding to the video data;
    Character image generation means for generating character image data obtained by performing gradation expression processing using subpixels on the character data;
    combined-image generation means for generating second image data by combining the character image data and the first image data using a JPEG conversion algorithm,
    wherein the gradation expression processing using sub-pixels is anti-aliasing processing that controls the gradation of each of the red, green, and blue primary-color sub-pixels constituting one pixel. A broadcast image generation program characterized by the above.
  8. A program for generating image data related to broadcasting using a control computer that acquires a broadcast signal,
    The control computer;
    Image data acquisition means for acquiring first image data from video data in a broadcast signal;
    Character data acquisition means for acquiring character data corresponding to the video data;
    Character image generation means for generating character image data obtained by performing gradation expression processing using subpixels on the character data;
    combined-image generation means for generating second image data by combining the character image data and the first image data using a compression algorithm of the discrete cosine transform,
    wherein the gradation expression processing using sub-pixels is anti-aliasing processing that controls the gradation of each of the red, green, and blue primary-color sub-pixels constituting one pixel. A broadcast image generation program characterized by the above.
JP2006021065A 2006-01-30 2006-01-30 Broadcast image generation method, broadcast image generation program, and broadcast image generation apparatus Expired - Fee Related JP4544166B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2006021065A JP4544166B2 (en) 2006-01-30 2006-01-30 Broadcast image generation method, broadcast image generation program, and broadcast image generation apparatus


Publications (2)

Publication Number Publication Date
JP2007202065A JP2007202065A (en) 2007-08-09
JP4544166B2 true JP4544166B2 (en) 2010-09-15

Family

ID=38456152

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2006021065A Expired - Fee Related JP4544166B2 (en) 2006-01-30 2006-01-30 Broadcast image generation method, broadcast image generation program, and broadcast image generation apparatus

Country Status (1)

Country Link
JP (1) JP4544166B2 (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000092460A (en) * 1998-09-08 2000-03-31 Nec Corp Device and method for subtitle-voice data translation
JP2001117529A (en) * 1999-08-05 2001-04-27 Matsushita Electric Ind Co Ltd Method and device for improving sharpness of white-and- black text and graphics on color matrix digital display device
JP2002041022A (en) * 2000-07-19 2002-02-08 Matsushita Electric Ind Co Ltd Display device of character string, display method of character string and recording medium that record program
JP2002171235A (en) * 2000-11-30 2002-06-14 Nippon Hoso Kyokai <Nhk> Broadcast character information distributing system and its server and its method
JP2002232861A (en) * 2001-01-30 2002-08-16 Hitachi Ltd Video information distributing device and operation device
JP2002344871A (en) * 2001-05-14 2002-11-29 Hitachi Ltd Device and method for recording caption broadcast
JP2003288096A (en) * 2002-03-27 2003-10-10 Nippon Telegr & Teleph Corp <Ntt> Method, device and program for distributing contents information
JP2005303743A (en) * 2004-04-13 2005-10-27 Daikin Ind Ltd Information processing apparatus and information processing method, program, and information processing system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1809028A4 (en) * 2004-11-02 2009-10-28 Tv Asahi Data Vision Corp Captioned still image content creating device, captioned still image content creating program and captioned still image content creating system



Legal Events

Date Code Title Description
A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20090707

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20091014

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20100309

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20100428

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20100608

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20100621

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20130709

Year of fee payment: 3


LAPS Cancellation because of no payment of annual fees