WO2006068269A1 - Video Structuring Apparatus and Method (映像構造化装置及び方法) - Google Patents
Video Structuring Apparatus and Method
- Publication number
- WO2006068269A1 (PCT/JP2005/023748)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- character string
- video
- frame image
- information
- image
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/79—Processing of colour television signals in connection with recording
- H04N9/80—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
- H04N9/804—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components
- H04N9/8042—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components involving data reduction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/783—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/7844—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using original textual content or text extracted from visual content or transcript of audio data
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/19—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
- G11B27/28—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
- G11B27/32—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording on separate auxiliary tracks of the same or an auxiliary record carrier
- G11B27/327—Table of contents
- G11B27/329—Table of contents on a disc [VTOC]
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/34—Indicating arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/765—Interface circuits between an apparatus for recording and another apparatus
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/765—Interface circuits between an apparatus for recording and another apparatus
- H04N5/77—Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/765—Interface circuits between an apparatus for recording and another apparatus
- H04N5/775—Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television receiver
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/78—Television signal recording using magnetic recording
- H04N5/781—Television signal recording using magnetic recording on disks or drums
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/84—Television signal recording using optical recording
- H04N5/85—Television signal recording using optical recording on discs or drums
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/907—Television signal recording using static stores, e.g. storage tubes or semiconductor memories
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/79—Processing of colour television signals in connection with recording
- H04N9/80—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
- H04N9/82—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only
- H04N9/8205—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only involving the multiplexing of an additional signal and the colour video signal
Definitions
- The present invention relates to video archiving and monitoring and to a method for presenting structure information related to video content, and more particularly to a video structuring apparatus and method for efficiently accessing a predetermined location in a video.
- As an example of a method for presenting structure information related to video content, the television signal recording/reproducing apparatus disclosed in Japanese Patent Application Laid-Open No. 2004-80587 is known.
- This apparatus comprises a recording/reproducing unit that writes a digital video signal (a digital-format television signal) for each program and reads the written digital video signal for each program; a control unit that controls the writing and reading processing; a thumbnail generation unit that generates a reduced-size thumbnail image from a screen of at least one frame at an arbitrary point in each program among the digital video signals read from the recording/reproducing unit; and a thumbnail synthesis unit that synthesizes the thumbnail images generated for each program into a thumbnail list screen and outputs it.
- A thumbnail list area for storing the thumbnail list screen is provided in the recording/reproducing unit.
- The control unit causes the thumbnail generation unit to generate thumbnail images and the thumbnail synthesis unit to synthesize a thumbnail list screen from them; the synthesized thumbnail list screen is then stored in the thumbnail list area.
- The first frame of each program, or a screen of one or more frames at an arbitrary point in time (for example, the screen five minutes after the start of the program), is used as a thumbnail image.
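The thumbnail generation and list-screen synthesis described above can be sketched in a few lines. This is a minimal illustration only: the reduction factor, grid layout, and use of plain NumPy arrays as frames are assumptions, not details from the patent.

```python
import numpy as np

def make_thumbnail(frame, factor=4):
    """Reduce a frame (H x W x 3 array) by simple subsampling --
    a minimal stand-in for the thumbnail generation unit."""
    return frame[::factor, ::factor]

def thumbnail_list_screen(thumbs, cols=4):
    """Composite per-program thumbnails into one list screen,
    row by row (a sketch; the real unit would also render titles)."""
    h, w, c = thumbs[0].shape
    rows = (len(thumbs) + cols - 1) // cols
    screen = np.zeros((rows * h, cols * w, c), dtype=thumbs[0].dtype)
    for i, t in enumerate(thumbs):
        r, col = divmod(i, cols)
        screen[r * h:(r + 1) * h, col * w:(col + 1) * w] = t
    return screen
```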
- However, the television signal recording/reproducing apparatus disclosed in Japanese Patent Application Laid-Open No. 2004-80587 uses, as thumbnails, a plurality of frame images taken at regular time intervals or at scene-change timings.
- Such thumbnails are not necessarily an index that appropriately represents the content of the video, structured in association with the video source. There is therefore a high possibility that the scene a user is looking for will not appear in the index of the video file, so this apparatus has the problem that it cannot provide efficient access to the images the user requires.
- Japanese Patent Application Laid-Open No. 11-167583 discloses a method in which a video is first input to a video storage medium and to a telop character recognition/search terminal.
- The video is stored in the video storage unit together with time information indicating when it was stored, and the telop character recognition/search terminal performs telop character display frame detection, telop character area extraction, and telop character recognition processing.
- The telop character recognition result and the time information indicating when the telop characters were displayed are stored as an index file in the index file storage unit.
- For example, time information is accumulated, and a character code is output as the telop character recognition result.
- When a user inputs a query for the desired video from the video search information input unit of the video search terminal, through an interface such as a WWW (World Wide Web) browser, the input character code is searched for in the index file stored in the index file storage unit of the telop character recognition/search terminal, and the video having the corresponding time information is extracted from the video storage unit.
- The retrieved video is then displayed on the video display unit of the video search terminal, for example a computer display.
- However, the telop characters included in the index file are text information obtained by character recognition and may therefore contain misrecognitions. When meaningless text caused by such misrecognition appears in the index, the search efficiency when the user selects a desired scene does not improve.
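The index file mechanism described above pairs recognized telop text with time information and is searched by character code. A minimal in-memory sketch follows; the entry fields and substring matching are illustrative assumptions, not the patent's actual file format.

```python
def add_index_entry(index, time_info, recognized_text):
    """Append one telop recognition result with its time
    information to an in-memory index (sketch of the index file)."""
    index.append({"time": time_info, "text": recognized_text})

def search_index(index, query):
    """Return the time information of entries whose recognized
    character codes contain the query string, so the matching
    video can be fetched from the video storage unit."""
    return [e["time"] for e in index if query in e["text"]]
```

Note that a misrecognized entry ("w3ather" instead of "weather") would silently fail to match here, which is exactly the search-efficiency problem the text raises.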
- Japanese Unexamined Patent Application Publication No. 2003-345809 discloses a database construction system that includes a speech transcription device that transcribes the news speech corresponding to a news video into a character string; a character recognition device that recognizes character strings appearing in the news video; and a registration device that computes the similarity between the character recognition result and the words in the speech transcription corresponding to the character appearance section detected by the character recognition device, and uses this similarity to associate the news video with the transcription and register them in the database.
- In this system, a passage search is performed on the transcription of the news speech using all the words in the telop and CG flip character strings recognized by the character recognition device.
- Because whole passages rather than single words are matched, the system reduces the risk of extracting unrelated sentences due to the influence of an individual word, and thus reduces the risk of registering unrelated news videos in the database.
- Since the search results are in units of passages, news videos can be registered in the database in a form whose context can be understood and easily divided.
- Japanese Patent Application Laid-Open No. 2003-333265 discloses an information management apparatus that includes an attribute extraction unit that receives image data from the outside and extracts attribute information of the image data from a predetermined portion of it; a notification destination storage unit that stores, in advance, notification information indicating that image data has been received in association with attribute information; a notification destination determination unit that extracts a notification destination from the notification destination storage unit using the attribute information extracted by the attribute extraction unit; and an output unit that sends the notification information to the notification destination extracted by the notification destination determination unit.
- According to this information management apparatus, when external information is received, information indicating that it has been received can be output to the notification destination to which it should be reported.
- The output unit extracts the internal information from the internal information storage unit based on the internal information ID, and stores the internal information, together with related information and image data, in the browsing information database based on the notification destination.
- The output unit can also transmit to the user terminal, based on the notification destination received from the notification destination determination unit, notification information indicating that the data has been received, and can transmit the internal information ID received from the internal information search unit to the user terminal together with the notification information.
- Japanese Patent Laid-Open No. 3-141484 discloses a character cutout method that, when the number of characters contained in a character string is known, optically reads the character string and cuts out from the character string image the partial image corresponding to each character.
- In this character segmentation method, a one-dimensional sequence feature is extracted from the character string image, and a model function that defines the character cutout positions corresponding to the number of characters is defined for the one-dimensional sequence feature.
- The one-dimensional sequence feature and its model function are matched nonlinearly; the character cutout positions of the character string image corresponding to the cutout positions of the model function are obtained from the nonlinear correspondence function produced by the matching, and a partial image corresponding to one character is cut out at each obtained position.
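The nonlinear matching of a one-dimensional feature against a model function can be sketched with dynamic time warping over a column-ink profile. This is a simplified illustration only: the model shape (uniform character widths with narrow gaps), the absolute-difference cost, and the width parameters are all assumptions, not the patent's actual model function.

```python
import numpy as np

def segment_characters(profile, n_chars, char_w=10, gap_w=2):
    """Cut an n_chars string image into characters by nonlinearly
    matching its 1-D column-ink profile against a model profile."""
    # Model: high ink inside each character, zero at the gaps.
    model, gap_idx = [], []
    for i in range(n_chars):
        model += [1.0] * char_w
        if i < n_chars - 1:
            gap_idx.append(len(model) + gap_w // 2)  # model cut points
            model += [0.0] * gap_w
    model = np.array(model)

    p = profile / (profile.max() + 1e-9)  # normalize observed profile
    n, m = len(p), len(model)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(p[i - 1] - model[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])

    # Backtrack to obtain the nonlinear correspondence (warping path).
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1

    # Map each model cut position through the path to an image column.
    cuts = []
    for g in gap_idx:
        cols = [ii for ii, jj in path if jj == g]
        cuts.append(int(np.mean(cols)) if cols else 0)
    return sorted(cuts)
```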
- However, the character segmentation method disclosed in Japanese Patent Laid-Open No. 3-141484 does not structure an index that appropriately represents the content of the video in association with the video source, so it has the problem that the user cannot efficiently access a specific part of the video.
- Japanese Patent Laid-Open No. 2001-34709 discloses a fast recognition search system in which a feature vector generated from an input character pattern is classified according to the conditions stored in each node of a previously generated decision tree, child nodes are selected sequentially according to the classification results, and this classification is repeated until a terminal node is reached.
- This fast recognition search system includes: a generating means that generates templates of multidimensional feature vectors, to be stored in a recognition dictionary, from a set of patterns to which correct-answer categories have been assigned in advance; a template dictionary storage means that stores each template created by the generating means in association with the patterns that contributed to its generation; a subset generation means that classifies the currently focused templates, the pattern sets corresponding to them, and the appearance frequencies of the correct categories into subsets, and outputs the templates belonging to each subset together with the threshold value used for the separation; a hierarchical dictionary storage means that stores the subsets of templates sequentially generated by the subset generation means in association with the corresponding separations; and a decision tree classification means that classifies the input pattern in order along the stored hierarchical structure and outputs the child node of each classification result, determining the template from the leaf nodes of the hierarchy.
- The subset generation means generates the decision tree such that categories that lie across a determined threshold are included in the subsets on both sides of the threshold.
- According to this fast recognition search system, when the decision tree is generated by optimizing the classification method for identifying subsequent categories according to the distribution of templates belonging to the leaf nodes, registering a template that lies across the boundary of two nodes in both nodes allows the search to be executed at high speed, in stable time, and without backtracking.
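The key idea above, registering templates that straddle a threshold in both child nodes so the search never backtracks, can be sketched for one-dimensional features. The median split, fixed margin, and nearest-template leaf match are illustrative assumptions; the actual system uses multidimensional feature vectors and optimized splits.

```python
import numpy as np

def build_tree(templates, labels, margin=0.5, leaf_size=2):
    """Build a decision tree over 1-D template features. Templates
    within `margin` of a node's threshold go into BOTH children,
    so a later search never needs to backtrack."""
    if len(templates) <= leaf_size:
        return {"leaf": list(zip(templates, labels))}
    thr = float(np.median(templates))
    left = [(t, l) for t, l in zip(templates, labels) if t <= thr + margin]
    right = [(t, l) for t, l in zip(templates, labels) if t > thr - margin]
    if len(left) == len(templates) or len(right) == len(templates):
        return {"leaf": list(zip(templates, labels))}  # no useful split
    return {"thr": thr,
            "left": build_tree([t for t, _ in left], [l for _, l in left],
                               margin, leaf_size),
            "right": build_tree([t for t, _ in right], [l for _, l in right],
                                margin, leaf_size)}

def search(tree, x):
    """Descend without backtracking, then match at the leaf."""
    while "leaf" not in tree:
        tree = tree["left"] if x <= tree["thr"] else tree["right"]
    return min(tree["leaf"], key=lambda tl: abs(tl[0] - x))[1]
```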
- However, the high-speed recognition search system disclosed in Japanese Patent Application Laid-Open No. 2001-34709 does not structure an index that appropriately expresses the content of the video in association with the video source, so it has the problem that the user cannot efficiently access a specific part of the video.
- Patent Document 1: Japanese Patent Application Laid-Open No. 2004-80587
- Patent Document 2: Japanese Patent Application Laid-Open No. 11-167583
- Patent Document 3: Japanese Patent Application Laid-Open No. 2003-345809
- Patent Document 4: Japanese Patent Application Laid-Open No. 2003-333265
- Patent Document 5: Japanese Patent Application Laid-Open No. 3-141484
- Patent Document 6: Japanese Patent Application Laid-Open No. 2001-34709
- An object of the present invention is to provide a video structuring apparatus and method that structure a character string display appropriately representing the content of video content in association with the video source, and thereby improve the efficiency of access to the specific part of the video required by the user.
- Another object of the present invention is to provide an apparatus and method that analyze the content of a video and present the obtained structure information as an index list of character string displays, thereby enabling efficient access to the target video.
- Another object of the present invention is to provide a video structuring apparatus and method capable of presenting an index in which the influence of recognition errors contained in the character recognition result is reduced when character strings appearing in the video are recognized.
- Another object of the present invention is to provide a video structuring apparatus and method capable of displaying to the user, as an index for cueing the video, a character string display or a recognized character string expressing the content of the video.
- Another object of the present invention is to provide a video structuring apparatus and method that display to the user, as an index for cueing the video, a character string display or recognized character string expressing the content of the video, and that, when the user selects a character string display or recognized character string, cue and reproduce the video from the frame image identified by the selection.
- Another object of the present invention is to provide a video structuring apparatus and method that, when a character string in the video is recognized, give display priority to the recognized character string according to high recognition reliability, so that the user can use a character string display that better represents the content of the video as an index for cueing the video.
- Another object of the present invention is to provide a video structuring apparatus and method that, when a character string in the video is recognized, give display priority to the character string display by image according to low recognition reliability, so that the user can use a display of the character string that more appropriately represents the content of the video as an index for cueing the video.
- Another object of the present invention is to provide a video structuring apparatus and method that allow the user to know, when video is sequentially input, that a character string has appeared in the video.
- Another object of the present invention is to provide a video structuring apparatus and method that allow the user to know, when video is sequentially input, that a preset character string has appeared in the video.
- The video structuring apparatus of the present invention includes: a video input means that receives a video signal and outputs frame images of the video together with frame identification information for identifying each frame image; a character string extraction means that receives the frame images and frame identification information from the video input means, determines whether a character string exists in each frame image, and, when it determines that a character string exists, treats the frame image as a character-string-existing frame image, generates character string position information for the character string in that image, and outputs the character string position information, the frame identification information identifying the character-string-existing frame image, and the character-string-existing frame image; a video information storage means that acquires the frame identification information, character-string-existing frame image, and character string position information from the character string extraction means and stores them in association with one another in an index file; and a structure information presenting means that reads the index file from the video information storage means, cuts out, based on the character string position information, the range in which the character string exists from the character-string-existing frame image, and displays the cut-out image as a character string display.
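The cut-out step used by the structure information presenting means reduces to slicing the character string region out of the frame using the stored position information. A minimal sketch, assuming the position information is an (x, y, width, height) tuple and the frame is a row-major array of pixels; both representations are assumptions, not the patent's format.

```python
def cut_out_character_string(frame, position):
    """Cut out the range where a character string exists from a
    character-string-existing frame image, given position info
    as (x, y, width, height)."""
    x, y, w, h = position
    return [row[x:x + w] for row in frame[y:y + h]]
```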
- The video structuring apparatus of the present invention may also include: a video input means that receives a video signal and outputs frame images of the video, frame identification information for identifying each frame image, and the video data of the video signal; a character string extraction means that receives the frame images and frame identification information from the video input means, determines whether a character string exists in each frame image, and, when it determines that one does, treats the frame image as a character-string-existing frame image, generates character string position information for the character string in that image, and outputs the character string position information, the frame identification information identifying the character-string-existing frame image, and the character-string-existing frame image; a structure information presenting means; a video information storage means that acquires the character-string-existing frame image, frame identification information, and character string position information from the character string extraction means and stores them in association in an index file, acquires the video data and frame identification information from the video input means and stores them in association, and, when frame identification information is acquired from the structure information presenting means, reads the video data recorded in association with that frame identification information and outputs the video data from the corresponding frame image onward; and a video playback means that acquires the video data output from the video information storage means and outputs it to the display means for display.
- In this configuration, the structure information presenting means reads the index file from the video information storage means, cuts out the range in which the character string exists from the character-string-existing frame image based on the character string position information, and displays the cut-out image as a character string display on the display means. When the user inputs information indicating that a character string display has been selected, the structure information presenting means outputs the frame identification information associated with the selected character string display to the video information storage means.
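The cueing step, where the video information storage means returns the video data from the selected frame onward, can be sketched as follows. The parallel-list representation of frames and identifiers is an illustrative assumption.

```python
def cue_playback(video_frames, frame_ids, selected_id):
    """Return the video data from the frame identified by the
    selected character string display onward, i.e. the cueing
    behaviour of the video information storage means (a sketch)."""
    start = frame_ids.index(selected_id)
    return video_frames[start:]
```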
- The video structuring apparatus of the present invention may further include: a video input means that receives the video signal and outputs frame images of the video and frame identification information for identifying each frame image; a character string extraction means that receives the frame images and frame identification information from the video input means, determines whether a character string exists in each frame image, and, when it determines that one does, treats the frame image as a character-string-existing frame image, generates character string position information for the character string in that image, and outputs the character string position information, the frame identification information identifying the character-string-existing frame image, and the character-string-existing frame image; a character string recognition means that acquires the frame identification information, character-string-existing frame image, and character string position information from the character string extraction means, cuts out the range in which the character string exists based on the character string position information, performs character string recognition on the cut-out image, and outputs the recognized character string as character codes together with the frame identification information and character string position information; a video information storage means that acquires the frame identification information, character-string-existing frame image, and character string position information from the character string extraction means, acquires the recognized character string, frame identification information, and character string position information from the character string recognition means, and stores the acquired images and information in association with one another in an index file; and a structure information presenting means that reads the index file from the video information storage means, cuts out the range in which the character string exists from the character-string-existing frame image based on the character string position information, and displays the character string display by the cut-out image and the recognized character string on the display means in association with the frame identification information identifying the character-string-existing frame image.
- the video structuring apparatus receives a video signal and outputs a frame image of the video, frame identification information for identifying the frame image, and video data of the video signal.
- a frame image and frame identification information are received from the video input means and whether a character string exists in the frame image, and when it is determined that a character string exists in the frame image
- Character string extraction means for outputting information and character string existence frame image, frame identification information, character string existence frame image and character string position from character string extraction means Information, and based on the character string position information, cut out a range where the character string exists from the character string existing frame image, perform character string recognition processing on the cut image, and recognize the character string by the character code.
- a character string recognizing means that outputs the recognized character string, frame identification information, and character string position information, a structure information presenting means, and a character string extracting means from the character string extracting means.
- the character string position information is acquired, the recognized character string, the frame identification information, and the character string position information are acquired from the character string recognition means.
- the acquired image and information are stored in the index file in association with each other, the video data acquired from the video input means and the frame identification information are stored in association with each other, and the structure information is obtained when the frame identification information is acquired from the structure information display means.
- Video information storage means for reading the video data recorded in association with the frame identification information acquired from the presentation means and outputting video data after the frame image corresponding to the frame identification information acquired from the structure information presentation means;
- video playback means that acquires the video data output from the video information storage means and outputs it to the display means for display.
- the structure information presenting means reads the index file from the video information storage means, cuts out the range where the character string exists from the character-string-existing frame image based on the character string position information, and outputs the cut-out image as a character string display, together with the recognized character string, for display on the display means; when the user inputs information indicating that a character string display or a recognized character string has been selected, the frame identification information associated with the selected character string display or recognized character string is output to the video information storage means.
- the character string recognition means may calculate the recognition reliability of the character string and output it to the video information storage means.
- as the recognition reliability, for example, the likelihood values obtained in character recognition for the individual characters in the character string image, the reciprocal of the average of their distance values, or the like can be used.
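As a minimal sketch of the second option above (the reciprocal of the average distance value), assuming the character recognizer returns one distance value per character, with smaller distances meaning closer matches:

```python
def recognition_reliability(distance_values):
    """Recognition reliability of a character string, computed as the
    reciprocal of the average of the per-character distance values
    returned by character recognition (smaller distance = closer match,
    so the reciprocal grows as recognition quality improves)."""
    if not distance_values:
        raise ValueError("at least one per-character distance is required")
    average = sum(distance_values) / len(distance_values)
    return 1.0 / average


# Four characters whose distance values average 2.0 -> reliability 0.5.
print(recognition_reliability([1.0, 2.0, 3.0, 2.0]))  # 0.5
```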
- the video information storage unit stores the recognition reliability acquired from the character string recognition unit in the index file in association with the character string position information, and the structure information presenting unit compares the recognition reliability with a predetermined threshold value.
- when it is determined that the recognition reliability of the character string recognition is greater than the predetermined threshold value, the structure information presenting means may output the recognized character string to the display means without displaying the image-based character string display.
- conversely, the structure information presenting means compares the recognition reliability with the predetermined threshold value and, when it determines that the reliability of the character string recognition is smaller than the predetermined threshold value, may output the image-based character string display to the display means instead of the recognized character string. In this way, by selecting whether to prioritize the display of the recognized character string according to the degree of recognition reliability, the character string display or recognized character string that more appropriately represents the video content can be used as an index for cueing the video. [0031] Further, in the present invention, when it is determined that there is new character string position information, the structure information presenting means may cause the display means to display information indicating that a character string exists in the video, and/or may output audio from the audio output means. With this configuration, when video is sequentially input, the user can know that a character string has appeared in the video, and the character string display or recognized character string that appropriately represents the video content can be used as an index for cueing the video.
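The reliability-based switching described above can be sketched as follows; the entry field names and the threshold value are illustrative assumptions, not taken from the patent:

```python
def choose_index_representation(entry, threshold=0.8):
    """Return the representation the structure information presenting means
    would display for one index entry: the recognized character string when
    its recognition reliability exceeds the threshold, otherwise the
    image-based character string display cut out from the frame."""
    if entry["reliability"] > threshold:
        return ("text", entry["recognized_string"])
    return ("image", entry["string_image"])


entries = [
    {"recognized_string": "Breaking News", "string_image": "frame101_crop.jpg",
     "reliability": 0.95},
    {"recognized_string": "Brealing Nevs", "string_image": "frame102_crop.jpg",
     "reliability": 0.40},
]
for entry in entries:
    print(choose_index_representation(entry))
# -> ('text', 'Breaking News')
# -> ('image', 'frame102_crop.jpg')
```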
- in the present invention, the video structuring apparatus includes: video input means that receives a video signal and outputs a frame image of the video; character string extraction means that receives the frame image from the video input means, determines whether a character string exists in the frame image, and, when it determines that a character string exists, outputs information indicating that the character string exists; and structure information presenting means that, when it acquires the information indicating that a character string exists from the character string extraction means, causes the display means to display information indicating that the character string exists in the video and/or causes the audio output means to output audio.
- in the present invention, the video structuring apparatus includes: video input means that receives a video signal and outputs a frame image of the video; character string extraction means that receives the frame image from the video input means, determines whether a character string exists in the frame image, and, when it determines that a character string exists, generates and outputs character string position information about the character string existing in the character-string-existing frame image; and structure information presenting means that, when it acquires the character string position information from the character string extraction means, causes the display means to display information indicating that the character string exists in the video and/or causes the audio output means to output sound.
- in the present invention, the video structuring apparatus includes video input means that receives a video signal and outputs a frame image of the video and frame identification information for identifying the frame image;
- character string extraction means that receives the frame image from the video input means, determines whether a character string exists in the frame image, and, when it determines that a character string exists, outputs the character-string-existing frame image and character string position information about the character string existing in that frame image;
- character string recognition means that acquires the character-string-existing frame image and the character string position information from the character string extraction means, cuts out the range where the character string exists based on the character string position information, performs character string recognition processing on the cut-out image to obtain the recognized character string as character codes, and outputs the recognized character string and the character string position information;
- and structure information presenting means that acquires the recognized character string from the character string recognition means, determines whether the acquired recognized character string is a character string included in a preset keyword group, and, when it determines that the acquired recognized character string is included in the keywords, causes the display means to display information indicating that the character string exists in the video and/or causes the audio output means to output sound.
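The keyword check performed by the structure information presenting means can be sketched as follows; the function name and keyword set are illustrative assumptions:

```python
def should_notify(recognized_string, keyword_group):
    """True when the recognized character string is included in the preset
    keyword group, i.e. when the structure information presenting means
    should display the notification and/or output sound."""
    return recognized_string in keyword_group


keywords = {"Breaking News", "Weather", "Sports"}
print(should_notify("Breaking News", keywords))  # True
print(should_notify("Commercial", keywords))     # False
```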
- according to the present invention, an index such as a character string display or a recognized character string that appropriately expresses the content of video content is presented in association with the video data (video source), so that the user can efficiently access the specific part of the video that he or she requires. In much video content, the text information appearing in the video is expected to accurately reflect the content of the video, and by associating an index generated at the appearance timing of the text information with the video data, users can efficiently access the necessary parts of the video. Even if the video contains text information that is not directly related to the content of the video, such as “Breaking News”, the user can see the character string display and immediately determine whether or not to view the “Breaking News” video.
- furthermore, by switching between the image-based character string display and the display of the recognized character string based on the recognition reliability of the recognized character string, access to a specific part of the video can be made more reliable, video search can be performed efficiently, and the burden of the selection operation on the user can be reduced.
- in addition, the user can know that a character string has appeared in the video. A user who has been notified that a new character string has appeared can, by inputting information selecting the character string display or the recognized character string, cue, play back, and view the video from the frame image corresponding to the selected character string display or recognized character string onward.
- thus, the user can use a character string display or recognized character string that appropriately represents the content of the video as an index for cueing the video, and can further cue a desired image by selecting the character string display or recognized character string that appropriately represents it.
- FIG. 1 is a block diagram showing a configuration example of a video structuring system including a video structuring apparatus according to the present invention.
- FIG. 2 is a block diagram showing a video structuring apparatus of the first embodiment of the present invention.
- FIG. 3 is a diagram showing time-series frame images obtained by decoding a video file with video identification information “ABC.MPG”.
- FIG. 4 is a diagram showing an example of index information output by a character string extraction unit based on the video file shown in FIG. 3.
- FIG. 5 is a diagram showing an example of the content of a first index file including the index information shown in FIG.
- FIG. 6 is a diagram showing an example of index list display.
- FIG. 7 is a block diagram showing a signal processing system in the video structuring apparatus according to the second embodiment of the present invention.
- FIG. 8 is a flowchart for explaining video structuring processing in the video structuring apparatus shown in FIG. 7.
- FIG. 9 is a flowchart showing an example of a character string extraction process.
- FIG. 10 is a block diagram showing a video structure display device according to a third embodiment of the present invention.
- FIG. 11 is a block diagram showing an image structure display device according to a fourth embodiment of the present invention.
- FIG. 12 is a diagram showing an example of the contents of a second index file.
- FIG. 13 is a diagram showing an example of an index list display.
- FIG. 14 is a block diagram showing a video structure display device according to a fifth embodiment of the present invention.
- FIG. 15 is a block diagram showing a video structure display device according to a sixth embodiment of the present invention.
- FIG. 16 is a block diagram showing a video structure display device according to a seventh embodiment of the present invention.
- FIG. 17 is a block diagram showing a video structure display device according to an eighth embodiment of the present invention.
- FIG. 18 is a block diagram showing a video structure display device according to a ninth embodiment of the present invention.
- FIG. 19 is a diagram showing another example of index list display.
- FIG. 20 is a diagram showing another example of index list display.
- 100, 200, 300, 400, 500, 600, 700, 800, 900: video structuring apparatus
- 320, 520, 920: video playback unit
- FIG. 1 shows an example of the configuration of a video structuring system including a video structuring apparatus according to the present invention.
- This video structuring system includes an imaging device 12 that forms a subject image on a light receiving surface, photoelectrically converts it, and outputs a video signal, a video output device 20 that converts the captured video signal into video data for transmission, and
- a video structuring apparatus 100 according to the present invention.
- As the video structuring apparatus, the video structuring apparatuses 200, 300, 400, 500, 600, 700, 800, and 900 of the embodiments described later may also be used.
- The video output device 20 is configured to convert the captured video signal into video data for wireless transmission and to transmit this video data via the antenna 18 to the base station 24 and the video structuring apparatus 100.
- the video output device 20 is also configured to convert the captured video signal into video data for recording and record it in the video database 16. Further, the video output device 20 is configured to read out video data recorded in the video database 16, convert it into video data for transmission, and output it to the communication network 30.
- the video data may be a composite video signal.
- a cable television network may be used.
- The video output device 20 also has a function of reading the video data recorded in the video database 16, converting it into video data for wireless transmission, and transmitting it via the antennas 18 and 22 to the base station 24 and to the video structuring apparatus 100.
- The video output device 20 also has a function of receiving video data transmitted from the base station 24 or the video structuring apparatus 100 by wireless or wired communication means using the antenna 18 or the like, and recording it in the video database 16.
- The base station 24 has a function of receiving, with the antenna 22, the video data output from the antenna 18 of the video output device 20, converting it into video data for wired transmission, and outputting it to the video structuring apparatus 100 via the communication network 30.
- The base station 24 further has a function of receiving various types of information, such as video data and video index information, transmitted by the video structuring apparatus 100 and transmitting it via the antenna 22 to the video output device 20 and to other communication devices such as mobile phones and mobile terminals (not shown).
- The video structuring apparatus 100 receives the video signal output from the imaging device 14 or the video output device 20 via a video input unit or video signal input unit described later, extracts time-series frame images from the video signal, and has a function of generating index information that associates frame identification information identifying a frame image containing a character string portion such as a telop with character string position information specifying the position of that character string portion in the frame image.
- the frame identification information here includes, for example, time information, counter information, and page information.
- The video structuring apparatus 100 outputs the generated index information to other communication devices via the communication network 30 or wireless communication means.
- the imaging device 14 may include a microphone and the like that can output an audio signal.
- the video structuring apparatus 100 also has a function of recording the generated index information on a recording unit or a recording medium provided in the video structuring apparatus 100. Furthermore, the video structuring apparatus 100 extracts an image of the character string portion included in the frame image based on the frame identification information included in the generated index information and the character string position information that specifies the position of the character string. It also has a function of generating display data for index list display.
- Here, the image of the character string portion is used as a character string display or character string image. This display data is output from the video structuring apparatus 100 to the display device 172, whereby the index list can be displayed for the user.
- When the user browses the index list display including the character string displays or character string images and selects a desired character string display via an input device 170 such as a keyboard or mouse, the video file including the corresponding frame image is read based on the frame identification information associated with that character string display, and playback can be started from the position of that frame.
- FIG. 2 shows a video structuring apparatus according to the first embodiment of the present invention having the above-described configuration.
- The video structuring apparatus 200 shown in FIG. 2 includes: a video input unit 210 that receives digitized video data or a video signal as input and outputs a frame image or time-series frame images, frame identification information for identifying the individual frame images, and video identification information;
- a character string extraction unit 212 that receives the frame image or time-series frame images from the video input unit 210, determines whether a character string exists in the frame image, and, when it determines that a character string exists, outputs the frame identification information of the character-string-existing frame image and character string position information such as the coordinate values of the character string in the frame image;
- a video information storage unit 216 that stores, as a first index file, index information in which the character-string-existing frame image, the character string position information, and the frame identification information are associated with each other;
- and a structure information presenting unit 218 that reads the stored first index file and outputs to the display device 172 the frame image in which a character string exists or a character string image corresponding to the character string position information.
- the video signal includes an RGB signal, a composite video signal, and the like.
- When the video input unit 210 receives digitized video data or a video signal such as an RGB signal or a composite video signal, it has a function of outputting, to the video information storage unit 216, video identification information for identifying the entire video, the digitized video data, and frame identification information for identifying each frame image when the frame images in the video data are reproduced. Further, when receiving the video data or video signal, the video input unit 210 has a function of generating a frame image or time-series frame images from the input signal and outputting, to the character string extraction unit 212, the individual frame images or time-series frame images together with frame identification information for individually identifying each frame image and video identification information for identifying the entire video.
- The character string extraction unit 212 receives, from the video input unit 210, video identification information such as the name of the file or the title of the program in which the video is recorded, a frame image, and the frame identification information, and determines whether a character string exists in the input frame image. When it determines that a character string exists in the input frame image, it outputs, as index information, the video identification information, the character-string-existing frame image, the frame identification information for identifying the specific frame image in which the character string exists, and the character string position information of the character string existing in that frame image.
- A character-string-existing frame image is a frame image detected as containing a character string, but here it may also be a thumbnail image or the like obtained by reducing such a frame image as necessary.
- The character string position information is constituted by, for example, coordinate values indicating where the detected character string is located in the frame image.
- the structure information presenting unit 218 presents a character string display using an image to the user based on the index information acquired in this way.
- The frame identification information is for identifying individual frame images; information such as shooting time information, a frame image number, or counter information may be used.
- As the time information, time information for synchronized playback such as PTS (Presentation Time Stamp) or DTS (Decoding Time Stamp), or reference time information such as SCR (System Clock Reference), may be used.
- First, the character string extraction unit 212 receives the video identification information, the first frame image, and the frame identification information for identifying each frame image from the video input unit 210, and determines whether a character string exists in the frame image. Next, when it determines that a character string exists in the frame image, it outputs to the video information storage unit 216, as the first index information, the video identification information, the character-string-existing frame image, the frame identification information for identifying the specific frame image in which the character string exists, and the character string position information such as the coordinate values of the character string existing in the frame image.
- When it determines that no character string exists in the frame image, the character string extraction unit 212 does not output frame identification information or character string position information.
- Subsequently, the character string extraction unit 212 determines whether a character string exists in the second frame image, and when it determines that a character string exists, outputs the frame identification information specifying the character-string-existing frame image and the character string position information such as the coordinate values of the character string existing in the frame image. The character string extraction unit 212 sequentially repeats this process for each subsequent frame image.
- To generate the character string position information, the character string extraction unit 212 differentiates the input frame image to generate a differential image, binarizes each pixel value of the differential image with a predetermined threshold value, and obtains projection patterns by projecting the obtained binary image in the horizontal and vertical directions and generating histograms of the pixels.
- The character string extraction unit 212 then determines continuous regions where the projection pattern has a value equal to or greater than a predetermined value as character string candidate regions. At this time, continuous regions whose size is less than a predetermined value may be excluded from the character region candidates as noise. Final character string position information can then be generated by applying layout analysis processing to each character string candidate region determined based on the projection patterns.
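The differentiation, binarization, and projection steps described above can be sketched in NumPy as follows; the simple horizontal-gradient operator, the threshold values, and the minimum-size noise filter are illustrative assumptions, and only the horizontal projection (candidate text rows) is shown, without the subsequent layout analysis:

```python
import numpy as np

def text_row_candidates(gray, bin_thresh=40, proj_thresh=3, min_height=2):
    """Find candidate text rows in a grayscale frame image.

    1. Differentiate the image (horizontal gradient) to emphasize character edges.
    2. Binarize each gradient value with a fixed threshold.
    3. Project the binary image horizontally (histogram of edge pixels per row).
    4. Keep runs of rows whose projection is at or above proj_thresh,
       discarding runs shorter than min_height as noise.
    Returns a list of (top_row, bottom_row) tuples (inclusive).
    """
    grad = np.abs(np.diff(gray.astype(np.int32), axis=1))
    binary = grad > bin_thresh
    projection = binary.sum(axis=1)          # one edge count per row
    rows = projection >= proj_thresh
    candidates, start = [], None
    for y, on in enumerate(rows):
        if on and start is None:
            start = y
        elif not on and start is not None:
            if y - start >= min_height:
                candidates.append((start, y - 1))
            start = None
    if start is not None and len(rows) - start >= min_height:
        candidates.append((start, len(rows) - 1))
    return candidates


# Synthetic 12x20 frame: a band of bright pixels on rows 4-7 over a dark
# background produces strong edges, so those rows become one candidate region.
frame = np.zeros((12, 20), dtype=np.uint8)
frame[4:8, 2:18:2] = 255
print(text_row_candidates(frame))  # [(4, 7)]
```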
- the character string position information may be information representing a rectangle that minimally surrounds one character string, or may be information representing a shape obtained by combining a plurality of rectangles.
- FIG. 3 shows, as an example, time-series frame images obtained by decoding a video file whose video identification information is "ABC.MPG", and the character strings included in the frame images.
- When the video input unit 210 decodes the video file "ABC.MPG", one or a plurality of frame images are obtained as shown in the figure.
- Also, when a video signal such as an RGB signal or a YC signal (composite signal) is input to the video input unit 210, one or a plurality of time-series frame images as shown in FIG. 3 can be obtained by digitizing the signal.
- The character string extraction unit 212 receives from the video input unit 210 the video identification information of the file "ABC.MPG", individual frame images, and frame identification information for identifying the individual frame images, and determines whether a character string exists in these frame images.
- In this example, the video file name is used as the video identification information, but a program title from an electronic program guide (EPG) can also be used.
- As the frame identification information, shooting time information is used in the illustrated example.
- The processing in the video structuring apparatus 200 shown in FIG. 2 will be described by taking as an example a case where a series of frame images as shown in FIG. 3 is input.
- The character string extraction unit 212 outputs to the video information storage unit 216, as index information, the video identification information "ABC.MPG" for identifying the entire video, the image data of the frame image 101 reduced as necessary, the frame identification information for identifying the character-string-existing frame image 101 in which the character string exists, and the coordinates Pa101 (120, 400) and Pb101 (600, 450) of the character string existing in that frame image. As the frame identification information for identifying the character-string-existing frame image 101, for example, the file name "ABC-01231433.JPG" can be used.
- As the coordinate system of the character string in the example shown in FIG. 3, a coordinate system with the upper-left pixel of the frame image as the origin is used; the coordinate value of the upper-left vertex of the rectangle circumscribing the character string is defined as Pa, and the coordinate value of the lower-right vertex of that rectangle as Pb.
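Using the Pa/Pb convention above (origin at the upper-left pixel, Pa the upper-left and Pb the lower-right vertex of the circumscribing rectangle), cutting out a character string region can be sketched as follows; the list-of-rows frame is a stand-in for real image data:

```python
def cut_out_character_string(frame, pa, pb):
    """Cut out the character string region from a frame image.

    The origin is the top-left pixel; pa = (x, y) of the rectangle's
    upper-left vertex, pb = (x, y) of its lower-right vertex, both
    inclusive. The frame is a row-major list of pixel rows.
    """
    (x0, y0), (x1, y1) = pa, pb
    return [row[x0:x1 + 1] for row in frame[y0:y1 + 1]]


# Tiny 5x6 stand-in frame; crop the rectangle Pa=(1, 1) to Pb=(3, 2).
frame = [[10 * y + x for x in range(6)] for y in range(5)]
print(cut_out_character_string(frame, (1, 1), (3, 2)))
# -> [[11, 12, 13], [21, 22, 23]]
```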
- Similarly, the character string extraction unit 212 outputs to the video information storage unit 216, as index information, the video identification information for identifying the entire video together with the character string position information consisting of the coordinates Pa102 (20, 100) and Pb102 (120, 150).
- As the frame identification information here, for example, the file name "ABC-02540467.JPG" is used.
- FIG. 4 shows an example of the index information output from the character string extraction unit 212 based on the video file shown in FIG.
- The index information output by the character string extraction unit 212 includes the video identification information "ABC.MPG" for identifying the video file, the frame identification information for identifying the frame image in which the character string exists (for example, the file name "ABC-01231433.JPG"), and the character string position information (for example, the coordinates Pa101 (120, 400) and Pb101 (600, 450)).
- The video information storage unit 216 stores, as a first index file, the first index information in which the video identification information output from the character string extraction unit 212, the character-string-existing frame image in which the character string exists, the frame identification information for identifying the character-string-existing frame image, and the character string position information are associated with each other.
- the video information storage unit 216 stores the video identification information, video data, and frame identification information output from the video input unit 210 as video data.
- FIG. 5 is a diagram showing an example of a first index file including the index information shown in FIG.
- The first index file (INDEX01.XML) lists one or more pieces of index information for the video file "ABC.MPG" shown in FIG. 4, as well as index information for other video files (for example, "DEF.MPG").
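The patent does not specify the schema of INDEX01.XML, so the element and attribute names below are purely illustrative; the sketch only shows how index entries like those of FIG. 4 could be serialized to an XML index file with the Python standard library:

```python
import xml.etree.ElementTree as ET

def build_index_file(entries):
    """Serialize index entries (video identification information, frame
    identification information, character string position information)
    into one XML index file. Element/attribute names are hypothetical."""
    root = ET.Element("index")
    for entry in entries:
        item = ET.SubElement(root, "entry", video=entry["video"])
        ET.SubElement(item, "frame").text = entry["frame"]
        ET.SubElement(item, "position").text = entry["position"]
    return ET.tostring(root, encoding="unicode")


entries = [
    {"video": "ABC.MPG", "frame": "ABC-01231433.JPG",
     "position": "Pa101(120,400) Pb101(600,450)"},
    {"video": "ABC.MPG", "frame": "ABC-02540467.JPG",
     "position": "Pa102(20,100) Pb102(120,150)"},
]
print(build_index_file(entries))
```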
- The index file is not limited to one having a database structure such as XML (extensible markup language); other file formats, such as a format for display in HTML (hypertext markup language), can also be used.
- the structure information presentation unit 218 reads the index file stored by the video information storage unit 216, generates index list display information, and outputs it to the display device 172.
- the display device 172 displays the index list as shown in FIG. 6 and notifies the user.
- FIG. 6 shows an example of the index list display.
- The index list display includes the title 120 of the index list display, a video identification information display field 122 for identifying the video file, frame identification information 124 such as the shooting time for identifying the character-string-existing frame image, and a character string display by an image obtained by cutting out the range where the character string exists from the frame image using the frame identification information, the video data of the frame image, and the character string position information.
- The character string displays 126 may be displayed in the order desired by the user or at positions desired by the user. It is also possible to display the index list at time intervals desired by the user.
- The user can select a desired character string display 126, playback point information such as the shooting time, or the like by operating the input device 170 such as a mouse or keyboard.
- The playback point information is information indicating from where the video is to be played back, and is represented by frame identification information. When the user selects a desired character string display 126 or the like and designates a video playback point, the video file of the selected video identification information is read, and the images from the frame image specified by the corresponding frame identification information 124 onward are displayed on the display device 172. In the example shown here, the shooting time is used as the playback point information.
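Cueing from a selected index entry can be sketched as follows; the field names are illustrative stand-ins for the index information described above, with the shooting time serving as the playback point:

```python
def playback_point_for_selection(index_entries, selected_display):
    """Given the index entries shown in the index list and the character
    string display the user selected, return (video identification
    information, playback point) so playback can start from the
    corresponding frame image onward; None if nothing matches."""
    for entry in index_entries:
        if entry["display"] == selected_display:
            return (entry["video"], entry["time"])
    return None


index_entries = [
    {"display": "Breaking News", "video": "ABC.MPG", "time": "01:23:14.33"},
    {"display": "Weather",       "video": "ABC.MPG", "time": "02:54:04.67"},
]
print(playback_point_for_selection(index_entries, "Weather"))
# -> ('ABC.MPG', '02:54:04.67')
```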
- FIG. 7 shows a configuration of a signal processing system in the video structuring apparatus according to the second embodiment of the present invention.
- the video structuring apparatus shown in FIG. 7 is realized by a program installed in a computer system controlling hardware resources of the computer system.
- The video structuring apparatus can output, as index information, the video identification information, a character-string-existing frame image reduced as necessary (such as a thumbnail), frame identification information for identifying the specific character-string-existing frame image in which the character string exists, and character string position information such as the coordinate values of the character string existing in the frame image.
- the image structuring device 950 receives a video signal from the imaging device 14 that forms a subject image on a light receiving surface, performs photoelectric conversion, and outputs the video signal.
- The video structuring apparatus 950 includes an image processing unit 951 that converts an input video signal into video data for recording, and an audio processing unit that receives an audio signal collected by the imaging device 14 and converts it into audio data or video data for recording. It also includes an antenna 20 for transmitting and receiving various information and a transmission/reception unit 968.
- The video structuring apparatus 950 further includes a compression/decompression unit 953, a recording medium mounting unit 978, a recording medium interface 979, an input interface 971, a display interface 973, an information processing unit 980, a memory 981, a recording unit 984, and a calendar clock 990.
- The compression/decompression unit 953 compresses video data or audio data by a technique typified by MPEG (Moving Picture Experts Group) and controls decompression of the compressed video. The compression/decompression unit 953 also compresses image data by a technique typified by JPEG (Joint Photographic Experts Group) and controls decompression of compressed images.
- The recording medium mounting unit 978 is for detachably mounting the recording medium 977, and the recording medium interface 979 is for recording and reading various information on and from the recording medium 977.
- The recording medium 977 is a detachable recording medium such as a semiconductor medium like a memory card, an optical recording medium typified by a DVD or CD, or a magnetic recording medium.
- The input interface 971 exchanges information with the input device 170, such as a keyboard and mouse, used for inputting various instructions such as starting or ending the index list display, selecting a video file, or selecting a character string display or character string image.
- the display interface 973 outputs an image signal for display to the display device 172 that displays information such as images and characters.
- the information processing unit 980 is configured by a CPU, for example, and performs video signal input processing, processing for generating a frame image and frame identification information from the video signal, determination of whether a character string exists in the frame image, generation of character string position information, association of various information, and processing for cutting out the range where the character string exists from the frame image.
- the memory 981 is used as a work area during program execution.
- the recording unit 984 is a hard disk that records various information such as the processing program executed by the video structuring apparatus 950 and its various constants, addresses used for communication connection with communication devices on a network, dial-up telephone numbers, attribute information, URLs (Uniform Resource Locator), gateway information, and DNS (Domain Name System) information.
- the calendar clock 990 keeps time.
- the information processing unit 980 and its peripheral circuits are connected to each other via the bus 999 so that information can be transmitted between them at high speed.
- the information processing unit 980 can control these peripheral circuits based on an instruction of a processing program that operates on the information processing unit 980.
- the above-described video structuring apparatus 950 may be a dedicated apparatus having processing capability for structuring video information.
- it may also be a video recorder, a video camera, a digital still camera, a camera-equipped mobile phone, a PHS (Personal Handyphone System) terminal, a PDA (Personal Digital Assistant), or a general-purpose processing device such as a personal computer.
- the image processing unit 951, the transmission / reception units 965 and 968, the recording medium interface 979, the recording unit 984, and the like can each function as a video signal input unit, and can receive video signals such as digitized video data, RGB signals, and composite video signals.
- a video signal can also be input to the video structuring apparatus 950 from an external device by giving the transmission / reception unit 968 the function of a television tuner.
- the display device 172, such as a liquid crystal display or CRT (cathode ray tube), displays various information such as character string images, recognized character strings, images, characters, and the index list display, and is used to notify the user of this information.
- an audio output device 956 such as a speaker is used to convey to the user, based on the audio signal output from the sound generation processing unit 957, that a character string is present in the video.
- the information processing unit 980 has a function of generating, from the input video signal, frame images of the video and frame identification information for identifying each frame image; a function of determining whether a character string exists in a generated frame image and, if it determines that one exists, obtaining character string position information such as the coordinate values of the character string in the character string existence frame image; and a function of generating a character string image by cutting out the range where the character string exists from the character string existence frame image based on the character string position information.
- when a start instruction for the video structuring process is input from the user, when a video signal is output from the video output device 20, when the structuring start time set in the calendar clock 990 of the video structuring apparatus 950 has passed, or when the start of the video structuring process is otherwise instructed, the information processing unit 980 of the video structuring apparatus 950 starts the "video structuring process" (box S1200).
- the information processing unit 980 performs a process of waiting for a video signal to be transmitted from the video output device 20 or the imaging device 14.
- when the video output device 20, the imaging device 14, or the like outputs a video signal in RGB, YC, MPEG, or another format, the image processing unit 951, the transmission / reception unit 965, or the transmission / reception unit 968 of the video structuring apparatus 950 receives the video signal in the "video input processing" (box S1210) and transmits the digitized time-series video data to the information processing unit 980 via the bus 999.
- when an RGB or YC video signal is output from the video output device 20 or the imaging device 14, the RGB video signal, YC composite video signal, or the like is input to the image processing unit 951.
- the image processing unit 951 attaches frame identification information for identifying each frame image when the video data is reproduced, and outputs the digitized time-series video data to the information processing unit 980, the compression / decompression unit 953, the memory 981, and the like via the bus 999.
- the audio signal is input to the audio processing unit 955, and the audio processing unit 955 transmits the digitized audio data via the bus 999.
- the information processing unit 980 attaches video identification information for identifying the entire video to the time-series image data output from the image processing unit 951, and causes the compression / decompression unit 953 to perform compression processing (encoding) based on the MPEG standard. In this state, the information processing unit 980 manages the video identification information for identifying the entire video, the digitized time-series video data, and the frame identification information for identifying each frame image of the video data on reproduction in association with one another.
- as the video identification information for identifying the entire video, for example, the file name or the program title under which the video is recorded is used.
- the image processing unit 951 outputs the input video data to the information processing unit 980, the compression / decompression unit 953, the memory 981, and the like via the bus 999.
- the transmission / reception unit 965 or the transmission / reception unit 968 outputs the input video data to the information processing unit 980, the compression / decompression unit 953, the memory 981, and the like via the bus 999.
- the information processing unit 980 transfers the acquired video data, such as MPEG data, to the compression / decompression unit 953, which performs decompression processing (decoding) to obtain time-series image data.
- the information processing unit 980 manages video identification information, time-series video data, and frame identification information for identifying a frame image when reproducing each frame image of the video data in association with each other.
- information regarding the photographing time, the frame image number, counter information, or the like may be used as the frame identification information for identifying individual frame images.
- time information such as the PTS (Presentation Time Stamp) or DTS (Decoding Time Stamp), or the SCR (System Clock Reference) time information used for synchronized reproduction, may also be used.
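As a concrete illustration of the frame identification schemes above, the sketch below pairs a frame number with a PTS-style timestamp derived from the frame rate. This is an illustrative record type, not the patent's data format; the names `FrameRecord` and `make_records` and the 30 fps assumption are ours.

```python
from dataclasses import dataclass

@dataclass
class FrameRecord:
    video_id: str        # video identification information, e.g. a file name
    frame_no: int        # frame image number used as frame identification info
    pts_ms: int          # presentation-time-style timestamp in milliseconds

def make_records(video_id: str, n_frames: int, fps: float = 30.0) -> list:
    """Build identification info for each frame: frame n is presented at n/fps seconds."""
    return [FrameRecord(video_id, n, int(n * 1000 / fps)) for n in range(n_frames)]

records = make_records("ABC.MPG", 3)
print(records[2].frame_no, records[2].pts_ms)  # 2 66
```

Either the frame number or the timestamp can then serve as the frame identification information when cueing playback.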
- the information processing unit 980 receives the video identification information, the first frame image, and the frame identification information identifying that frame image from the memory 981 or the compression / decompression unit 953 via the bus 999, and determines whether a character string exists in the frame image. If it determines that a character string exists, the information processing unit 980 records the video identification information, the character string existence frame image, the frame identification information identifying the specific frame image in which the character string exists, and the character string position information, such as the coordinate values of the character string in the frame image, in the memory 981 or the recording unit 984 as the first index information.
- the character string existing frame image may be a thumbnail image reduced as necessary.
- when the same character string appears in a plurality of frame images, the first of those frame images is preferably used as the specific frame image in which the character string exists. If it is determined that no character string exists in the frame image, the frame identification information and the character string position information are not recorded.
- the information processing unit 980 then sequentially determines, for each of the second and subsequent frame images, whether a character string exists in the frame image and, if it determines that one exists, records the frame identification information identifying the character string existence frame image and the character string position information such as the coordinate values of the character string in the frame image.
- FIG. 9 shows an example of specific processing in the character string extraction processing (box S1212).
- the character string extraction process starts in step S1260. In step S1262, the information processing unit 980 receives the video identification information, the n-th frame image (Fn), and the frame identification information identifying the frame image (Fn), and temporarily stores them in the memory 981 or the recording unit 984. In step S1264, the information processing unit 980 determines whether there is a frame image from which a character string should be extracted.
- if character string extraction has already been completed for all the image data and there is no new frame image, the character string extraction process ends in step S1266, and the information processing unit 980 returns to the processing routine shown in FIG. 8 and executes the next process.
- in step S1268, Fn / Fc is calculated to thin out the frames so that character string extraction is performed once every Fc frames, and it is determined whether the result is an integer. Fc is a natural-number constant. If the value of Fn / Fc is determined not to be an integer, the information processing unit 980 returns to step S1262 and receives the frame image of the next number, Fn + 1.
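The Fn / Fc integer test above simply selects every Fc-th frame for character string extraction; a minimal sketch (the function name is ours, the integer test follows the description):

```python
def should_extract(fn: int, fc: int) -> bool:
    """Process frame Fn only when Fn / Fc is an integer, i.e. every Fc-th frame."""
    return fn % fc == 0

# With Fc = 5, character string extraction runs on frames 5, 10, 15, ...
selected = [n for n in range(1, 16) if should_extract(n, 5)]
print(selected)  # [5, 10, 15]
```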
- when it is determined in step S1268 that the value of Fn / Fc is an integer, the information processing unit 980 executes the differential image generation process in step S1270.
- the information processing unit 980 differentiates the frame image input in step S1262 to generate a differential image, and temporarily stores the differential image in the memory 981 or the recording unit 984.
- the information processing unit 980 executes the differential image binarization process in step S1272.
- the information processing unit 980 reads the differential image generated in step S1270 and the threshold value for binarization from the memory 981 or the recording unit 984, binarizes each pixel value of the differential image using the threshold value, and temporarily stores the binarized image data in the memory 981 or the recording unit 984.
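Steps S1270 and S1272 (differential image generation and binarization) can be sketched with NumPy as below. Treating the "differential image" as a spatial-gradient magnitude, and the threshold value of 32, are illustrative assumptions rather than values taken from the patent:

```python
import numpy as np

def differential_image(gray: np.ndarray) -> np.ndarray:
    # Absolute horizontal and vertical pixel differences approximate edge
    # strength, which is high around character strokes.
    g = gray.astype(np.int16)
    dx = np.abs(np.diff(g, axis=1, prepend=g[:, :1]))
    dy = np.abs(np.diff(g, axis=0, prepend=g[:1, :]))
    return np.clip(dx + dy, 0, 255).astype(np.uint8)

def binarize(diff: np.ndarray, threshold: int = 32) -> np.ndarray:
    # Pixels whose differential value exceeds the threshold become 1, others 0.
    return (diff > threshold).astype(np.uint8)

frame = np.zeros((4, 6), dtype=np.uint8)
frame[1:3, 2:4] = 200           # a bright "stroke" on a dark background
binary = binarize(differential_image(frame))
print(int(binary.sum()))        # number of edge pixels around the stroke
```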
- the information processing unit 980 performs the projection pattern generation process in step S1274.
- the information processing unit 980 reads the binarized image data from the memory 981 or the recording unit 984, projects the binarized image in the horizontal and vertical directions, and obtains a projection pattern by generating a histogram of the pixels.
- the information processing unit 980 determines continuous regions having values greater than or equal to a predetermined value in the projection pattern to be character string candidate regions. At this time, a candidate region whose size is less than a predetermined value may be excluded as noise.
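The projection pattern of step S1274 and the search for continuous regions above the threshold can be sketched as follows; the threshold (`min_count`) and the minimum run length used to discard noise are illustrative assumptions:

```python
import numpy as np

def projection_runs(binary: np.ndarray, axis: int, min_count: int = 1, min_len: int = 2):
    """Project the binarized image along one axis and return the [start, end)
    index ranges whose histogram value is at least min_count, discarding runs
    shorter than min_len as noise."""
    hist = binary.sum(axis=axis)          # per-row (axis=1) or per-column (axis=0) counts
    runs, start = [], None
    for i, v in enumerate(hist):
        if v >= min_count and start is None:
            start = i
        elif v < min_count and start is not None:
            if i - start >= min_len:
                runs.append((start, i))
            start = None
    if start is not None and len(hist) - start >= min_len:
        runs.append((start, len(hist)))
    return runs

binary = np.zeros((6, 10), dtype=np.uint8)
binary[2:4, 1:8] = 1                       # one horizontal text-like band
print(projection_runs(binary, axis=1))     # row extent of the band: [(2, 4)]
```

Applying the function along both axes yields the vertical and horizontal extent of each character string candidate region.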
- the information processing unit 980 generates final character string position information by applying layout analysis processing to each character string candidate region.
- for the layout analysis processing, techniques such as "Document layout analysis by extended split detection method" (Preliminary Proceedings of the IAPR Workshop on Document Analysis Systems, p. 406) can be used.
- in this layout analysis processing, image regions other than characters are extracted, and the image is divided into partial regions using their positions as boundaries.
- the character string position information consists of, for example, coordinate values such as Pa101 and Pb101 shown in FIG.
- in step S1276, the information processing unit 980 performs character recognition processing on the character string candidate regions acquired in step S1274. Then, in step S1278, the information processing unit 980 determines from the result of the character recognition whether a character string exists in the candidate region. If it determines that no character string exists, the information processing unit 980 returns to step S1262 and receives the frame image of the next number, Fn + 1. If it determines that a character string exists, the information processing unit 980 determines in step S1280 whether the character string recognized from the candidate region is the same as the character string that existed when the character recognition process was last performed.
- if it is determined in step S1280 that the character string does not differ from the previous one, that is, that it is the same character string, the information processing unit 980 returns to step S1262 and receives the frame image of the next number, Fn + 1. If it determines that the previously recognized character string differs from the one recognized this time, the information processing unit 980 executes the index information recording process in step S1284, temporarily recording in the memory 981 or the recording unit 984, as associated index information, the video identification information input in step S1262, the frame image in which the character string exists (the character string existence frame image), the frame identification information identifying that frame image, and the character string position information acquired in step S1274.
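The first index information associated in step S1284 can be pictured as a record like the following. The field names, the shooting-time format, and the in-memory list standing in for the memory 981 / recording unit 984 are our illustrative assumptions; the actual recording format is the one shown in FIG. 4.

```python
from dataclasses import dataclass, field

@dataclass
class FirstIndexEntry:
    video_id: str                    # video identification information, e.g. "ABC.MPG"
    frame_id: str                    # frame identification information (here: shooting time)
    char_positions: list = field(default_factory=list)
    # each element is an (x0, y0, x1, y1) coordinate box of one character string

index_file = []                      # stands in for memory 981 / recording unit 984

def record_entry(video_id, frame_id, boxes):
    """Append one associated index record, as in the index information
    recording process of step S1284."""
    index_file.append(FirstIndexEntry(video_id, frame_id, list(boxes)))

record_entry("ABC.MPG", "00:01:23", [(40, 10, 200, 42)])
print(index_file[0].video_id, index_file[0].frame_id)
```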
- FIG. 3 shows an example of the time-series frame images obtained by decoding the video with video identification information "ABC.MPG", the character strings included in the frame images, the frame identification information identifying the frame images, and the character string position information.
- the index information of the video file shown in FIG. 3 is information in the format shown in FIG. 4, for example.
- the character string existence frame image may be recorded after being reduced as necessary, to reduce the recording capacity and to make it easy to display in the index list.
- the information processing section 980 executes “video information storage processing” (box S1216).
- the information processing unit 980 reads the first index information, in which the video identification information temporarily stored in the memory 981 or the recording unit 984, the frame image in which a character string exists, the frame identification information identifying that frame image, and the character string position information are associated, and accumulates it as the first index file.
- An example of the first index file is shown in FIG.
- the information processing unit 980 causes the compression / decompression unit 953 to encode these video signals into a moving image file such as an MPEG file, and records it in the recording unit 984 or on the recording medium 977.
- from this, the information processing unit 980 generates a video file for recording and records it in the recording unit 984 or on the recording medium 977.
- these moving image files carry unique video identification information for identification, and frame identification information for identifying the individual frame images when they are decoded is recorded in them.
- the information processing unit 980 executes “structure information presentation processing” (box S1218).
- the information processing unit 980 reads the first index file recorded in the recording unit 984 or on the recording medium 977 and generates a display file for the index list display as shown in FIG. The frame images in which the character strings described in the first index file exist are read out from the recording unit 984 or the recording medium 977 and expanded in the memory 981. Based on the character string position information, the information processing unit 980 then attaches to the index list display the character string images generated by cutting out from the frame images the candidate regions where the character strings exist. The information processing unit 980 outputs the display signal of the index list display generated in this way to the display device 172 via the display interface 973. A display example of the index list display is shown in FIG. When the structure information presentation process ends, the information processing unit 980 executes the process of determining whether an end instruction has been input, shown in step S1232.
- in step S1232, the information processing unit 980 determines whether the user has input an instruction to end the video structuring process via the input device 170. If, for example, the user selects the end button of the index list display and an end instruction is input as shown in box S1230, the information processing unit 980 determines that an end instruction has been input and ends the video structuring process in step S1240. On the other hand, if it determines that no end instruction has been input by the user, the information processing unit 980 returns to the video input process (box S1210), whereby the video structuring process continues to be executed.
- the user browses the index list display shown in FIG. 6 and operates the input device 170, such as a mouse or keyboard, to select a desired character string display 126 or character string image.
- the information processing unit 980 reads the video file of the selected video identification information from the recording unit 984 or the like, decodes it, and outputs the video from the frame image specified by the corresponding frame identification information 124 onward to the display device 172 for display.
- the frame identification information is represented by the shooting time.
- the character string extraction unit 312 receives from the video input unit 310 the video identification information, such as the file name or program title under which the video is recorded, the frame images, and the frame identification information identifying the individual frame images.
- when the character string extraction unit 312 determines that a character string exists in an input frame image, it outputs to the video information storage unit 316, as index information, the video identification information, the character string existence frame image, the frame identification information identifying the specific frame image in which the character string exists, and the character string position information such as the coordinate values of the character string in the frame image.
- the character string existence frame image may be a thumbnail image reduced as necessary.
- the structure information presentation unit 318 presents a character string image to the user.
- the video playback unit 320 plays back the video after the playback point specified by the user.
- the processing executed by the video input unit 310 and the character string extraction unit 312 in the video structuring apparatus 300 of the third embodiment is the same as that executed by the video input unit 210 and the character string extraction unit 212 in the video structuring apparatus 200 shown in FIG.; a detailed description is therefore omitted here.
- the video information storage unit 316 stores, as the first index file, the first index information in which the video identification information output by the character string extraction unit 312, the character string existence frame image in which the character string exists, the frame identification information identifying that frame image, and the character string position information are associated with one another.
- the video information storage unit 316 stores the video identification information, the video data, and the frame identification information output from the video input unit 310 as video data.
- the structure information presentation unit 318 reads the index file stored by the video information storage unit 316, generates index list display information, and outputs the index list display to the display device 172.
- the display device 172 displays an index list as shown in FIG. 6 and notifies the user.
- when the user sets the playback start point, the structure information presenting unit 318 selects the corresponding video identification information and frame identification information and outputs them to the video information storage unit 316.
- when the video information storage unit 316 acquires the video identification information and the frame identification information from the structure information presenting unit 318, it reads the video data corresponding to the acquired video identification information and outputs it to the video playback unit 320 together with the frame identification information.
- the video information storage unit 316 outputs the video file and the frame identification information to the video playback unit 320.
- the video playback unit 320 decodes the acquired video file, displays the frame images from the one specified by the frame identification information onward, and presents the video after the playback point to the user.
- alternatively, the video information storage unit 316 may output the time-series frame images from the frame identification information onward to the video playback unit 320. In this case, the video playback unit 320 displays those frame images and presents the video after the playback point to the user.
- FIG. 11 shows a video structuring apparatus according to the fourth embodiment of the present invention.
- the character string extraction unit 412 receives from the video input unit 410 the video identification information, such as the file name or program title under which the video is recorded, the frame images, and the frame identification information identifying each frame image.
- when the character string extraction unit 412 determines that a character string exists in an input frame image, it outputs to the video information storage unit 416, as index information, the video identification information, the character string existence frame image, the frame identification information identifying the specific frame image in which the character string exists, and the character string position information such as the coordinate values of the character string in the frame image, and also outputs the character string existence frame image, the frame identification information, and the character string position information to the character string recognition unit 414.
- the character string existence frame image may be a thumbnail image reduced as necessary.
- the character string recognition unit 414 cuts out the range specified by the character string position information from the character string existence frame image as image data, recognizes the character string included in the cut-out image data, extracts it as a recognized character string, that is, a character code, and outputs it to the video information storage unit 416.
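Cutting out the range specified by the character string position information amounts to array slicing; a sketch with NumPy, assuming the position information is an (x0, y0, x1, y1) coordinate box (the coordinate convention is our assumption):

```python
import numpy as np

def cut_out_string_region(frame: np.ndarray, box) -> np.ndarray:
    """Return the sub-image of the character string existence frame image given
    character string position information as (x0, y0, x1, y1) coordinates."""
    x0, y0, x1, y1 = box
    return frame[y0:y1, x0:x1]

frame = np.arange(100, dtype=np.uint8).reshape(10, 10)   # stand-in frame image
region = cut_out_string_region(frame, (2, 3, 7, 6))
print(region.shape)  # (3, 5): 3 rows (y extent), 5 columns (x extent)
```

The resulting region is what would be passed to character recognition as the character string image.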
- the structure information presentation unit 418 presents a character string image or a recognized character string to the user.
- the processing by the video input unit 410 in the video structuring apparatus 400 of the fourth embodiment, up to the point where the character string extraction unit 412 outputs the index information to the video information storage unit 416, is as shown in FIG.
- when the character string extraction unit 412 determines that a character string exists in the frame image, it outputs the first index information to the video information storage unit 416 and outputs the character string existence frame image, the frame identification information, and the character string position information to the character string recognition unit 414. If it determines that no character string exists in the frame image, the character string extraction unit 412 does not output the character string existence frame image, the frame identification information, or the character string position information to the character string recognition unit 414.
- the character string recognition unit 414 extracts the character string as a recognized character string (character code) using the image data of the character string existing within the range specified by the character string position information in the character string existence frame image, together with the dictionary data for character string recognition.
- for the character string recognition processing here, for example, the character segmentation method and apparatus described in JP-A-3-141484 or the high-speed recognition search system described in JP-A-2001-34709 can be used.
- the recognition reliability of the result of character string recognition may be calculated.
- as the recognition reliability of the character string, for example, the likelihood value in character recognition corresponding to each character in the character string image, the reciprocal of the average of the distance values, or the like can be used.
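The second option above, the reciprocal of the average distance value, is straightforward to compute; a sketch (the function name is ours, and the per-character distance values would come from the character recognizer):

```python
def recognition_reliability(distance_values) -> float:
    """Reliability of a recognized character string as the reciprocal of the
    average per-character distance value: smaller distances mean a closer match
    and hence higher reliability."""
    if not distance_values:
        return 0.0
    return len(distance_values) / sum(distance_values)

# A string whose characters matched closely (small distances) is more reliable.
close = recognition_reliability([0.5, 0.4, 0.6])   # average 0.5 -> reliability 2.0
far = recognition_reliability([2.0, 3.0, 1.0])     # average 2.0 -> reliability 0.5
print(close > far)  # True
```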
- the character string recognition unit 414 then outputs the obtained recognized character string, the frame identification information of the frame image in which the character string exists, the character string position information, and the recognition reliability obtained as a result of the character string recognition to the video information storage unit 416.
- the video information storage unit 416 stores, as a second index file, the second index information in which the video identification information, the character string existence frame image in which the character string exists, the frame identification information identifying that frame image, and the character string position information output from the character string extraction unit 412 are associated with the recognized character string and the recognition reliability output from the character string recognition unit 414.
- the video information storage unit 416 stores the video identification information, video data, and frame identification information output from the video input unit 410 as video data.
- FIG. 12 shows an example of the second index file.
- in the second index file, the recognized character string and its recognition reliability are accumulated in association with the frame identification information.
- information on the photographing time is used as the frame identification information.
- the structure information presentation unit 418 reads the second index file stored by the video information storage unit 416, generates index list display information, and outputs it to the display device 172.
- the display device 172 displays the index list as shown in FIG. 13 and notifies the user.
- Figure 13 shows an example of index list display.
- the index list display includes the title 120 of the index list display, a video identification information display field 122 for identifying the video file, the shooting time identifying the frame image in which the character string exists, and the like.
- the user can select the desired character string display 126, recognized character string 138, playback point information such as the shooting time, and the like by operating the input device 170, such as a mouse or keyboard.
- when the user selects the desired character string display 126 or the like and designates the playback point of the video, the video file of the selected video identification information is read out and the video from the frame image specified by the corresponding frame identification information 124 onward can be displayed on the display device 172.
- the shooting time is used as playback point information.
- since the character string display 126 based on the index image uses a part of the character string existence frame image, unlike the case where only the character string obtained as a result of character recognition is displayed, the possibility that the character string display 126 fails to match the content of the video is reduced. The user can therefore survey the contents of the video by browsing the index list display, and can easily cue up the video.
- furthermore, since the display method can be switched between display of the character string as an image and display of the recognized character string according to the reliability of the character string recognition result, the user can select an index according to how far the recognized character string can be trusted, which improves the user's work efficiency when searching the video.
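The reliability-dependent choice between the two display methods can be sketched as a simple threshold rule; the threshold value and the function name are our assumptions:

```python
def choose_display(reliability: float, threshold: float = 1.0) -> str:
    """Display the recognized character string when its reliability is high
    enough; otherwise fall back to the cut-out character string image."""
    return "recognized_string" if reliability >= threshold else "string_image"

print(choose_display(2.0))   # high reliability: show the recognized text
print(choose_display(0.4))   # low reliability: show the character string image
```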
- FIG. 14 shows a video structuring apparatus according to the fifth embodiment of the present invention.
- the character string extraction unit 512 receives from the video input unit 510 the video identification information, such as the name of the file in which the video is recorded or the program title, the frame images, and the frame identification information identifying each frame image.
- when the character string extraction unit 512 determines that a character string exists in an input frame image, it outputs the character string position information, such as the coordinate values of the character string, to the video information storage unit 516 as index information, and also outputs the character string existence frame image, the frame identification information, and the character string position information to the character string recognition unit 514.
- the character string recognition unit 514 extracts the character string as a recognized character string (character code) from the image data of the character string existing within the range specified by the character string position information in the character string existence frame image, and outputs the recognized character string, the frame identification information, the character string position information, and the recognition reliability to the video information storage unit 516.
- the structure information presentation unit 518 presents an image of a character string or a recognized character string to the user.
- based on the user's selection, the structure information presenting unit 518 reads the video file of the corresponding video identification information from the video information storage unit 516 and displays the video from the frame image specified by the corresponding frame identification information 124 onward on the display device 172.
- Part of the processing performed by the video input unit 510, the character string extraction unit 512, and the character string recognition unit 514 in the video structuring apparatus 500 of the fifth embodiment, from the storage of information by the video information storage unit 516 to the presentation of structure information by the structure information presentation unit 518, is the same as the processing performed by the video input unit 410, the character string extraction unit 412, the character string recognition unit 414, the video information storage unit 416, and the structure information presentation unit 418 in the video structuring apparatus 400 shown in FIG. 11, so a detailed description thereof is omitted here.
- The video information storage unit 516 stores, as a second index file, second index information that associates the video identification information, the character string existing frame image, the frame identification information for identifying the frame image, the character string position information, the recognized character string, and the recognition reliability output from the character string extraction unit 512 and the character string recognition unit 514.
- The video information storage unit 516 also stores, as video data, the video identification information, the video data, and the frame identification information output from the video input unit 510.
- The structure information presentation unit 518 reads the second index file stored by the video information storage unit 516, generates index list display information, and outputs the index list display to the display device 172.
- the display device 172 displays an index list as shown in FIG. 13 and notifies the user.
- The user can specify the playback start point of the video by operating the input device 170 such as a mouse or a keyboard and selecting a desired character string display 126, recognized character string 138, or playback point information such as a shooting time.
- the structure information presenting unit 518 selects the video identification information and the frame identification information corresponding to the playback start point, and outputs them to the video information storage unit 516.
- When the video information storage unit 516 acquires the video identification information and the frame identification information from the structure information presentation unit 518, it reads the video data corresponding to the acquired video identification information and outputs the video data together with the frame identification information to the video playback unit 520.
- When the video playback unit 520 is configured to decode a video file, the video information storage unit 516 outputs the video file and the frame identification information to the video playback unit 520. In this case, the video playback unit 520 decodes the acquired video file, displays the frame images at and after the one specified by the frame identification information, and presents the video after the playback point to the user.
- When the video playback unit 520 is configured to acquire and display time-series frame images, the video information storage unit 516 outputs the time-series frame images at and after the one specified by the frame identification information to the video playback unit 520. In this case, the video playback unit 520 displays those frame images and presents the video after the playback point to the user.
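The playback behavior described above, displaying the frame images at and after the one specified by the frame identification information, can be sketched as follows. This is a minimal illustration in Python; the representation of a video as `(frame_id, image)` pairs and the function name are assumptions, not part of the patent:

```python
def play_from(frames, start_frame_id):
    """Yield the frame images at and after the frame specified by the
    frame identification information (here modeled as an integer index)."""
    for frame_id, image in frames:
        if frame_id >= start_frame_id:
            yield image

# hypothetical three-frame video, playback starting at frame 1
frames = [(0, "img0"), (1, "img1"), (2, "img2")]
print(list(play_from(frames, 1)))  # ['img1', 'img2']
```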
- Since the character string display 126 on the index uses a part of the character string existing frame image itself, the possibility that the character string display 126 does not match the content of the video is reduced, unlike the case where only the character string obtained as a result of character recognition is displayed. The user can grasp the contents of the video by browsing the index list display, and can easily cue the video. In addition, since the display method can be switched between the character string display by the image and the display of the recognized character string according to the reliability of the character string recognition result, the user can select an index while trusting the recognized character string, and the work efficiency when the user searches the video can be improved.
- FIG. 15 shows a video structuring apparatus according to the sixth embodiment of the present invention.
- The character string extraction unit 612 determines whether a character string exists in the input frame image. When it determines that a character string exists, it outputs the character string existing frame image and the character string position information such as the coordinate values of the character string existing in the frame image to the structure information presentation unit 618. The structure information presentation unit 618 then immediately displays the frame image or the character string image corresponding to the character string position information, or displays information indicating that a character string exists in the frame image, and notifies the user.
- The video input unit 610 can be configured to receive digitized video data or a video signal such as an RGB signal or a composite video signal, and to output video data for display to the structure information presentation unit 618. The video input unit 610 also generates frame images from the input video signal and outputs them to the character string extraction unit 612.
- The character string extraction unit 612 receives a frame image from the video input unit 610 and determines whether or not a character string exists in the frame image. When the character string extraction unit 612 determines that a character string exists in the frame image, it outputs the character string existing frame image and the character string position information such as the coordinate values of the character string existing in the frame image to the structure information presentation unit 618.
- In normal operation, the structure information presentation unit 618 generates a display video based on the video data input from the video input unit 610, outputs the video to the display device 172, and presents it to the user.
- When the structure information presentation unit 618 acquires from the character string extraction unit 612 the character string existing frame image and the character string position information such as the coordinate values of the character string existing in the frame image, it displays information indicating that the character string exists in the frame image and notifies the user. The notification that a character string exists in the frame image may be made by announcing the character string appearance information by voice, or by displaying a new character string display in the index list display as shown in FIG. and updating the index list display.
- Alternatively, the structure information presentation unit 618 may turn on the activation switch of the display device 172 to alert the user at the timing when it is determined that a character string exists in the frame image. If the structure information presentation unit 618 determines that a character string exists in the frame image, it may also send an e-mail notifying the presence of the character string to a predetermined mail address.
- FIG. 16 shows a video structuring apparatus according to the seventh embodiment of the present invention.
- The character string extraction unit 712 receives frame images and frame identification information for identifying each frame image from the video input unit 710. When it determines that a character string exists in an input frame image, it outputs the character string existing frame image, the frame identification information, and the character string position information such as the coordinate values of the character string existing in the frame image to the structure information presentation unit 718 as third index information, and also outputs the character string existing frame image, the frame identification information, and the character string position information to the character string recognition unit 714.
- The character string recognition unit 714 extracts the character string as a recognized character string (character code) from the image data of the character string existing within the range specified by the character string position information in the character string existing frame image, and outputs the recognized character string, the frame identification information, the character string position information, and the recognition reliability to the structure information presentation unit 718.
- The video structuring apparatus 700 includes a video input unit 710 that receives digitized video data or a video signal such as an RGB signal or a composite video signal as an input, and that can output to the structure information presentation unit 718 the frame identification information for identifying each frame image when the frame images of the video data are reproduced. The video input unit 710 receives the digitized video data or video signal, generates frame images or time-series frame images from the input video signal, and outputs the frame images and the frame identification information to the character string extraction unit 712.
- The character string extraction unit 712 first receives the first frame image from the video input unit 710 and determines whether or not a character string exists in that frame image. When it determines that a character string exists in the frame image, it outputs, as the third index information to the structure information presentation unit 718, the video identification information, the character string existing frame image, the frame identification information for identifying the specific frame image in which the character string exists, and the character string position information such as the coordinate values of the character string existing in the frame image. At the same time, the character string extraction unit 712 outputs the character string existing frame image, the frame identification information, and the character string position information to the character string recognition unit 714.
- the character string existence frame image may be a thumbnail image reduced as necessary.
- When the same character string exists in a plurality of frame images, the specific frame image in which the character string exists is preferably the first frame image among those frame images.
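The preference for the first frame of a run in which the same character string persists can be sketched as follows. This is an illustrative Python reading; the `(frame_id, text)` representation and the function name are hypothetical, not part of the patent:

```python
def first_appearance_frames(frames):
    """From (frame_id, text) pairs in time order, keep only the frame at
    which each character string first appears in a consecutive run."""
    index = []
    prev_text = None
    for frame_id, text in frames:
        if text is not None and text != prev_text:
            # a new character string starts here: record its first frame
            index.append((frame_id, text))
        prev_text = text
    return index

# hypothetical clip: "NEWS" persists over frames 1-2, "SPORTS" appears at 4
frames = [(0, None), (1, "NEWS"), (2, "NEWS"), (3, None), (4, "SPORTS")]
print(first_appearance_frames(frames))  # [(1, 'NEWS'), (4, 'SPORTS')]
```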
- When it is determined that no character string exists in the frame image, the character string extraction unit 712 does not output the character string existing frame image, the frame identification information, or the character string position information.
- Next, the character string extraction unit 712 determines whether or not a character string exists in the second frame image. When it determines that a character string exists, it outputs the character string existing frame image in which the character string exists, the frame identification information for identifying the character string existing frame image, and the character string position information such as the coordinate values of the character string existing in the frame image. The character string extraction unit 712 sequentially repeats this process for the subsequent frame images.
- The character string recognition unit 714 extracts the character string included in the image data as a recognized character string (character code) from the image data of the character string existing within the range specified by the character string position information in the character string existing frame image, using dictionary data for character string recognition.
- As the character string recognition process, for example, the character segmentation method and apparatus described in Japanese Patent Laid-Open No. 3-141484, or the high-speed recognition/search system and recognition search speed-up method described in Japanese Patent Laid-Open No. 2001-34709, can be used.
- In the character string recognition, the recognition reliability of the result of the character string recognition may also be calculated.
- As the recognition reliability of the character string, for example, a likelihood value in character recognition corresponding to each character in the character string image, the reciprocal of the average of distance values, or the like can be used.
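As one reading of the reciprocal-of-average-distance measure mentioned above, the reliability could be computed as follows. This is an illustrative sketch, not the patented implementation; the function name and the handling of a zero average are assumptions:

```python
def recognition_reliability(distance_values):
    """Reliability as the reciprocal of the average per-character distance:
    smaller distances (better template matches) give higher reliability."""
    avg = sum(distance_values) / len(distance_values)
    return 1.0 / avg if avg > 0 else float("inf")

# two characters, both with distance 0.5 -> average 0.5 -> reliability 2.0
print(recognition_reliability([0.5, 0.5]))  # 2.0
```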
- The character string recognition unit 714 outputs the obtained recognized character string, the character string position information, the frame identification information of the frame image in which the character string exists, and the recognition reliability of the character string recognition result to the structure information presentation unit 718.
- In normal operation, the structure information presentation unit 718 generates a display video based on the video data input from the video input unit 710, outputs the video to the display device 172, and presents it to the user.
- When the structure information presentation unit 718 acquires from the character string extraction unit 712 and the character string recognition unit 714 the third index information, which includes the character string existing frame image, the character string position information such as the coordinate values of the character string existing in the frame image, and the frame identification information, it displays information indicating that the character string exists in the frame image and notifies the user.
- the new character string display 126 or the recognized character string 138 is displayed in the index list display shown in FIG. 13, and the index list display is updated.
- notification that a character string exists in the frame image may be performed by notifying the character string appearance information by voice.
- the structure information presenting unit 718 may turn on the activation switch of the display device 172 to alert the user at the timing when it is determined that a character string exists in the frame image.
- The user may register a desired character string to be used for the notification in a recording unit or the like in advance. In this case, the structure information presentation unit 718 reads the pre-registered character string from the recording unit or the like and displays it on the display device 172. Furthermore, the form and content of the notification to the user that the character string exists in the frame image may be changed according to the recognition reliability.
- the user may be notified of the presence of the character string when a preset specific character string exists in the video.
- When the structure information presentation unit 718 acquires the recognized character string from the character string recognition unit 714, it determines whether or not the acquired recognized character string is a character string included in a preset keyword group.
- When it is determined that the acquired recognized character string is included in the preset keyword group, information indicating that the character string exists in the video is displayed on the display device 172, or a sound is output from the audio output device, so that the user is notified that the preset character string has appeared.
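The keyword-group check described above can be sketched as follows. This is a minimal Python illustration; the keyword set, the function name, and the exact-match policy are assumptions, not part of the patent:

```python
KEYWORD_GROUP = {"BREAKING", "WEATHER", "SPORTS"}  # hypothetical preset keywords

def should_notify(recognized_string, keywords=KEYWORD_GROUP):
    """Return True when the recognized character string is in the preset
    keyword group, i.e. when the user should be notified of its appearance."""
    return recognized_string in keywords

print(should_notify("WEATHER"))  # True
print(should_notify("MENU"))     # False
```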
- the structure information presenting unit 718 may transmit an e-mail notifying the presence of the character string to a predetermined e-mail address.
- the user may be notified of the recognized character string itself by embedding the recognized character string recognized and output by the character string recognition unit 714 in this e-mail.
- The embedding of the recognized character string may be executed according to the recognition reliability when the character string was recognized. For example, the recognized character string may be included in the e-mail only when the recognition reliability is 50% or higher.
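The reliability-gated embedding could look like the following. This is an illustrative sketch; the mail body text, the function name, and the representation of "50%" as 0.5 are assumptions:

```python
def build_notification_mail(recognized_string, reliability, threshold=0.5):
    """Compose the notification e-mail body, embedding the recognized string
    only when the recognition reliability reaches the threshold."""
    body = "A character string was detected in the video."
    if reliability >= threshold:
        body += " Recognized text: " + recognized_string
    return body

print(build_notification_mail("NEWS", 0.8))  # body includes the recognized text
print(build_notification_mail("NEWS", 0.3))  # body omits the recognized text
```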
- FIG. 17 shows a video structuring apparatus according to the eighth embodiment of the present invention.
- This video structuring apparatus 800 has both the functions of the video structuring apparatus 400 shown in FIG. 11 and the functions of the video structuring apparatus 700 shown in FIG.
- the structure information presentation unit 818 is configured to display an index list and notify the user of the presence of a character string.
- The video input unit 810 of the video structuring apparatus 800 has the functions of the video input unit 410 in the video structuring apparatus 400 shown in FIG. 11 and the functions of the video input unit 710 in the video structuring apparatus 700 shown in FIG. 16.
- The character string extraction unit 812 of the video structuring apparatus 800 has the function of the character string extraction unit 412 shown in FIG. 11 and the function of the character string extraction unit 712 shown in FIG. 16. Similarly, the character string recognition unit 814 has the function of the character string recognition unit 414 shown in FIG. 11 and the function of the character string recognition unit 714 shown in FIG. 16.
- The video information storage unit 816 of the video structuring apparatus 800 has the function of the video information storage unit 716 shown in FIG. 16, and the structure information presentation unit 818 has the function of the structure information presentation unit 418 shown in FIG. 11 and the function of the structure information presentation unit 718 shown in FIG. 16.
- the structure information presenting unit 818 displays an index list as shown in Fig. 13 on the display device 172, and notifies the user.
- When it is determined that a character string exists in the frame image, the structure information presentation unit 818 displays information indicating that the character string exists in the frame image and notifies the user. The new character string display 126 or the recognized character string 138 is displayed in the index list display, and the index list display is updated.
- notification that a character string exists in the frame image may be performed by notifying the character string appearance information by voice.
- Alternatively, the structure information presentation unit 818 may turn on the activation switch of the display device 172 to alert the user at the timing when it is determined that a character string exists in the frame image.
- When a desired character string has been registered in advance, the structure information presentation unit 818 reads out the pre-registered character string from the recording unit or the like and displays it on the display device 172. Furthermore, the form and content of the notification to the user that the character string exists in the frame image may be changed according to the recognition reliability.
- the structure information presenting unit 818 may transmit an e-mail notifying the presence of the character string to a predetermined mail address.
- The recognized character string recognized and output by the character string recognition unit 814 may be embedded in this e-mail. The embedding of the recognized character string may be executed according to the recognition reliability when the character string was recognized. For example, the recognized character string may be included in the e-mail only when the recognition reliability is 50% or higher.
- FIG. 18 shows a video structuring apparatus according to the ninth embodiment of the present invention.
- This video structuring apparatus 900 has both the functions of the video structuring apparatus 500 shown in FIG. 14 and the functions of the video structuring apparatus 700 shown in FIG.
- The video playback unit 920 is configured to display on the display device 172 the video after the playback point selected by the user.
- The video input unit 910 of the video structuring apparatus 900 has the functions of the video input unit 510 of the video structuring apparatus 500 shown in FIG. 14 and the functions of the video input unit 710 of the video structuring apparatus 700 shown in FIG. 16.
- The character string extraction unit 912 of the video structuring apparatus 900 has the function of the character string extraction unit 512 shown in FIG. 14 and the function of the character string extraction unit 712 shown in FIG. 16. The character string recognition unit 914 has the function of the character string recognition unit 514 shown in FIG. 14 and the function of the character string recognition unit 714 shown in FIG. 16.
- The video information storage unit 916 of the video structuring apparatus 900 has the function of the video information storage unit 716 shown in FIG. 16, and the structure information presentation unit 918 has the function of the structure information presentation unit 518 shown in FIG. 14 and the function of the structure information presentation unit 718 shown in FIG. 16.
- the structure information presentation unit 918 displays the index list as shown in FIG. 13 on the display device 172, and notifies the user.
- the structure information presentation unit 918 notifies the user by displaying the information that the character string exists in the frame image. Further, the new character string display 126 or the recognized character string 138 is displayed in the index list display to update the index list display.
- Notification that a character string exists in the frame image may be made by notifying the character string appearance information by voice.
- Alternatively, the structure information presentation unit 918 may turn on the activation switch of the display device 172 to alert the user at the timing when it is determined that a character string exists in the frame image.
- When a desired character string has been registered in advance, the structure information presentation unit 918 reads the pre-registered character string from the recording unit or the like and displays it on the display device 172. Furthermore, the form and content of the notification to the user that the character string exists in the frame image may be changed according to the recognition reliability.
- When the structure information presentation unit 918 determines that a character string exists in the frame image, it may send an e-mail notifying the existence of the character string to a predetermined e-mail address. The recognized character string recognized and output by the character string recognition unit 914 may be included in this e-mail. The embedding of the recognized character string may be executed according to the recognition reliability when the character string was recognized.
- The user browses the index list display displayed on the display device 172 and operates the input device 170 such as a mouse or a keyboard to select a desired character string display 126, recognized character string 138, or playback point information such as the shooting time, thereby specifying the playback start point of the video.
- The structure information presentation unit 918 selects the video identification information and the frame identification information corresponding to the playback start point and outputs them to the video information storage unit 916.
- When the video information storage unit 916 acquires the video identification information and the frame identification information from the structure information presentation unit 918, it reads the video data corresponding to the acquired video identification information and outputs the video data together with the frame identification information to the video playback unit 920.
- When the video playback unit 920 is configured to be able to decode a video file and acquire time-series frame images, the video information storage unit 916 outputs the video file and the frame identification information to the video playback unit 920. In this case, the video playback unit 920 decodes the acquired video file, displays the frame images at and after the one specified by the frame identification information, and presents the video after the playback point to the user.
- When the video playback unit 920 is configured to acquire and display time-series frame images, the video information storage unit 916 outputs the time-series frame images at and after the one specified by the frame identification information to the video playback unit 920. In this case, the video playback unit 920 displays those frame images and presents the video after the playback point to the user.
- When the structure information presentation unit 918 acquires the recognized character string from the character string recognition unit 914, it determines whether or not the acquired recognized character string is a character string included in the preset keyword group.
- When the structure information presentation unit 918 determines that the acquired recognized character string is included in the preset keyword group, it displays a message indicating that the character string is present in the video on the display device 172, or outputs a sound from the audio output device, to notify the user that the preset character string has appeared.
- Since the character string display 126 on the index uses a part of the character string existing frame image itself, the possibility that the character string display 126 does not match the content of the video is reduced, unlike the case where only the character string obtained as a result of character recognition is displayed. The user can grasp the contents of the video by browsing the index list display, and can easily cue the video. In addition, since the display method can be controlled according to the reliability of the character string recognition result, the user can select an index while trusting the recognized character string, and the work efficiency when the user searches the video can be improved.
- The index list display in the present invention is not limited to the index list displays shown in FIG. 6 and FIG. 13.
- FIG. 19 shows another example of index list display.
- In the index list displays shown in FIG. 6 and FIG. 13, the range in which the character string exists is cut out from the character string existing frame image based on the character string position information, and the character string display by the cut-out image is displayed on the display device in association with the frame identification information. In the example shown in FIG. 19, the character string existing frame image 128 is reduced and displayed on the index list display.
- FIG. 20 shows still another example of the index list display.
- In the index list display shown in FIG. 13, the character string display 126 by the image and the recognized character string 138 are displayed simultaneously, whereas in the case shown in FIG. 20, the display is switched between the character string display 126 by the image and the recognized character string 139 according to the recognition reliability. Here, a case will be described in which the threshold θ1 for judging whether or not to display a recognized character string is set to 50%, the threshold θ3 for judging whether or not to highlight a recognized character string is set to 80%, and the threshold θ2 for judging whether or not to display a character string by an image is set to 90%.
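One plausible reading of the three thresholds, with θ1 = 50% (display the recognized string at all), θ3 = 80% (highlight it), and θ2 = 90% (the image-based display becomes unnecessary), can be sketched as follows. The exact switching policy is an interpretation of the description, not the patent's definitive specification:

```python
def choose_display(reliability, t1=0.5, t3=0.8, t2=0.9):
    """Decide how to render one index entry from its recognition reliability."""
    return {
        "image": reliability < t2,       # keep the image display unless very reliable
        "text": reliability >= t1,       # show the recognized string when usable
        "highlight": reliability >= t3,  # emphasize highly reliable strings
    }

print(choose_display(0.95))  # {'image': False, 'text': True, 'highlight': True}
print(choose_display(0.60))  # {'image': True, 'text': True, 'highlight': False}
```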
- Since the display method can be controlled between the character string display by the image and the display of the recognized character string in accordance with the recognition reliability of the result of the character string recognition, the user can select an index while trusting the recognized character string, and the work efficiency when the user searches the video can be improved.
- The video structuring apparatuses according to the first and third to ninth embodiments of the present invention described above can also be realized, like the video structuring apparatus according to the second embodiment, by installing a program for executing the above-described processes in a computer system. Therefore, computer programs for realizing the video structuring apparatuses of the first to ninth embodiments are also included in the scope of the present invention.
- According to the present invention, by displaying an index list for video search based on the presence of a character string, it is possible to facilitate video search and video cueing by a user.
- the present invention can be applied to systems such as video recorders, video cameras, and digital still cameras.
- The present invention can also be applied to a camera-equipped mobile phone, a PHS (Personal Handyphone System), a notebook computer, a PDA (Personal Data Assistance, Personal Digital Assistant), and the like.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2006549078A JPWO2006068269A1 (ja) | 2004-12-24 | 2005-12-26 | 映像構造化装置及び方法 |
US11/793,807 US7949207B2 (en) | 2004-12-24 | 2005-12-26 | Video structuring device and method |
US13/111,551 US8126294B2 (en) | 2004-12-24 | 2011-05-19 | Video structuring device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2004-374715 | 2004-12-24 | ||
JP2004374715 | 2004-12-24 |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11793807 A-371-Of-International | 2005-12-26 | ||
US13/111,551 Division US8126294B2 (en) | 2004-12-24 | 2011-05-19 | Video structuring device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2006068269A1 true WO2006068269A1 (ja) | 2006-06-29 |
Family
ID=36601861
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2005/023748 WO2006068269A1 (ja) | 2004-12-24 | 2005-12-26 | 映像構造化装置及び方法 |
Country Status (3)
Country | Link |
---|---|
US (2) | US7949207B2 (ja) |
JP (1) | JPWO2006068269A1 (ja) |
WO (1) | WO2006068269A1 (ja) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008085700A (ja) * | 2006-09-28 | 2008-04-10 | Sanyo Electric Co Ltd | 映像再生装置及び再生用プログラム |
JP2008166988A (ja) * | 2006-12-27 | 2008-07-17 | Sony Corp | 情報処理装置および方法、並びにプログラム |
JP2009188827A (ja) * | 2008-02-07 | 2009-08-20 | Toshiba Corp | 電子機器装置 |
US8120269B2 (en) | 2006-12-18 | 2012-02-21 | Osram Ag | Circuit arrangement and method for operating a high-pressure discharge lamp |
Families Citing this family (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2208149A2 (en) * | 2007-10-04 | 2010-07-21 | Koninklijke Philips Electronics N.V. | Classifying a set of content items |
US8090822B2 (en) | 2008-04-11 | 2012-01-03 | The Nielsen Company (Us), Llc | Methods and apparatus for nonintrusive monitoring of web browser usage |
US8355585B2 (en) * | 2009-05-12 | 2013-01-15 | Red Hat Israel, Ltd. | Data compression of images using a shared dictionary |
JP2012039174A (ja) * | 2010-08-03 | 2012-02-23 | Ricoh Co Ltd | 撮像装置及び撮像方法 |
US9047534B2 (en) * | 2011-08-11 | 2015-06-02 | Anvato, Inc. | Method and apparatus for detecting near-duplicate images using content adaptive hash lookups |
US8650198B2 (en) * | 2011-08-15 | 2014-02-11 | Lockheed Martin Corporation | Systems and methods for facilitating the gathering of open source intelligence |
US9275425B2 (en) * | 2013-12-19 | 2016-03-01 | International Business Machines Corporation | Balancing provenance and accuracy tradeoffs in data modeling |
CN104753546B (zh) * | 2013-12-31 | 2017-06-23 | 鸿富锦精密工业(深圳)有限公司 | Method for eliminating interference signals of a mobile device, and electronic apparatus |
CN106557521B (zh) * | 2015-09-29 | 2020-07-14 | 佳能株式会社 | Object indexing method, object searching method, and object indexing system |
US10474745B1 (en) | 2016-04-27 | 2019-11-12 | Google Llc | Systems and methods for a knowledge-based form creation platform |
US11039181B1 (en) | 2016-05-09 | 2021-06-15 | Google Llc | Method and apparatus for secure video manifest/playlist generation and playback |
US10750216B1 (en) | 2016-05-10 | 2020-08-18 | Google Llc | Method and apparatus for providing peer-to-peer content delivery |
US10771824B1 (en) | 2016-05-10 | 2020-09-08 | Google Llc | System for managing video playback using a server generated manifest/playlist |
US10785508B2 (en) | 2016-05-10 | 2020-09-22 | Google Llc | System for measuring video playback events using a server generated manifest/playlist |
US11069378B1 (en) | 2016-05-10 | 2021-07-20 | Google Llc | Method and apparatus for frame accurate high resolution video editing in cloud using live video streams |
US10595054B2 (en) | 2016-05-10 | 2020-03-17 | Google Llc | Method and apparatus for a virtual online video channel |
US10750248B1 (en) | 2016-05-10 | 2020-08-18 | Google Llc | Method and apparatus for server-side content delivery network switching |
US11032588B2 (en) | 2016-05-16 | 2021-06-08 | Google Llc | Method and apparatus for spatial enhanced adaptive bitrate live streaming for 360 degree video playback |
US9734373B1 (en) * | 2016-08-31 | 2017-08-15 | Vium, Inc. | Method of reading animal marks |
EP3598742B1 (en) * | 2017-03-14 | 2021-06-16 | Sony Corporation | Recording device and recording method |
CN109246410B (zh) * | 2017-05-31 | 2021-04-02 | 江苏慧光电子科技有限公司 | Holographic image formation method, and data generation method and apparatus |
CN110837754B (zh) * | 2018-08-16 | 2022-08-30 | 深圳怡化电脑股份有限公司 | Character segmentation and positioning method and apparatus, computer device, and storage medium |
CN109146910B (zh) * | 2018-08-27 | 2021-07-06 | 公安部第一研究所 | Video content analysis index evaluation method based on target localization |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH07192003A (ja) * | 1993-12-27 | 1995-07-28 | Hitachi Ltd | Moving image retrieval device and method |
JPH11167583A (ja) * | 1997-12-04 | 1999-06-22 | Nippon Telegr & Teleph Corp <Ntt> | Telop character recognition method, video storage and display device, telop character recognition/search terminal, and video search terminal |
JP2002014973A (ja) * | 2000-06-28 | 2002-01-18 | Nippon Telegr & Teleph Corp <Ntt> | Video retrieval device and method, and recording medium storing a video retrieval program |
JP2003245809A (ja) * | 2002-02-21 | 2003-09-02 | Toshiba Tungaloy Co Ltd | Grooving tool |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2518063B2 (ja) | 1989-10-26 | 1996-07-24 | 日本電気株式会社 | Character segmentation method and device |
JPH0432970A (ja) * | 1990-05-23 | 1992-02-04 | Hitachi Eng Co Ltd | Image recognition and correction method and device |
US5450134A (en) * | 1993-01-12 | 1995-09-12 | Visual Automation Systems, Inc. | Video facility management system for encoding and decoding video signals to facilitate identification of the video signals |
JP3340532B2 (ja) * | 1993-10-20 | 2002-11-05 | 株式会社日立製作所 | Video retrieval method and device |
JP3202455B2 (ja) * | 1993-12-06 | 2001-08-27 | 富士通株式会社 | Processing device |
US6415303B1 (en) * | 1995-01-03 | 2002-07-02 | Mediaone Group, Inc. | Method and system for describing functionality of an interactive multimedia application for use on an interactive network |
JP3113814B2 (ja) * | 1996-04-17 | 2000-12-04 | インターナショナル・ビジネス・マシーンズ・コーポレ−ション | Information retrieval method and information retrieval device |
US6366699B1 (en) | 1997-12-04 | 2002-04-02 | Nippon Telegraph And Telephone Corporation | Scheme for extractions and recognitions of telop characters from video data |
ES2195488T3 (es) * | 1998-09-03 | 2003-12-01 | Ricoh Kk | Recording media with video and audio index information, video and audio information management and retrieval methods, and video retrieval system |
US6281940B1 (en) * | 1999-03-31 | 2001-08-28 | Sony Corporation | Display of previewed channels with rotation of multiple previewed channels along an arc |
JP3374793B2 (ja) | 1999-07-21 | 2003-02-10 | 日本電気株式会社 | High-speed recognition and retrieval system, recognition/retrieval acceleration method used therein, and recording medium storing its control program |
US7221796B2 (en) * | 2002-03-08 | 2007-05-22 | Nec Corporation | Character input device, character input method and character input program |
JP2003333265A (ja) | 2002-05-14 | 2003-11-21 | Daiwa Securities Smbc Co Ltd | Information management device, information management method, and program |
JP2003345809A (ja) | 2002-05-30 | 2003-12-05 | Nec System Technologies Ltd | Database construction system, passage retrieval device, database construction method, and program |
JP2004080587A (ja) | 2002-08-21 | 2004-03-11 | Mitsubishi Electric Corp | Television signal recording/playback device and television signal recording/playback method |
2005
- 2005-12-26: US application US 11/793,807, granted as US7949207B2, not active (Expired - Fee Related)
- 2005-12-26: JP application JP 2006-549078, published as JPWO2006068269A1, status pending
- 2005-12-26: WO application PCT/JP2005/023748, published as WO2006068269A1, not active (Application Discontinuation)

2011
- 2011-05-19: US application US 13/111,551, granted as US8126294B2, not active (Expired - Fee Related)
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008085700A (ja) * | 2006-09-28 | 2008-04-10 | Sanyo Electric Co Ltd | Video playback device and playback program |
US8120269B2 (en) | 2006-12-18 | 2012-02-21 | Osram Ag | Circuit arrangement and method for operating a high-pressure discharge lamp |
JP2008166988A (ja) * | 2006-12-27 | 2008-07-17 | Sony Corp | Information processing apparatus and method, and program |
US8213764B2 (en) | 2006-12-27 | 2012-07-03 | Sony Corporation | Information processing apparatus, method and program |
JP2009188827A (ja) * | 2008-02-07 | 2009-08-20 | Toshiba Corp | Electronic apparatus |
Also Published As
Publication number | Publication date |
---|---|
JPWO2006068269A1 (ja) | 2008-08-07 |
US7949207B2 (en) | 2011-05-24 |
US20110217026A1 (en) | 2011-09-08 |
US20080166057A1 (en) | 2008-07-10 |
US8126294B2 (en) | 2012-02-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8126294B2 (en) | Video structuring device | |
JP4469905B2 (ja) | Telop collection device and telop collection method | |
KR100915847B1 (ko) | Streaming video bookmarks | |
US6961446B2 (en) | Method and device for media editing | |
EP1980960A2 (en) | Methods and apparatuses for converting electronic content descriptions | |
TWI457770B (zh) | Keyword extraction method and device, search method and device, and computer-readable storage medium | |
US20110243529A1 (en) | Electronic apparatus, content recommendation method, and program therefor | |
US7904452B2 (en) | Information providing server, information providing method, and information providing system | |
JP2003157288A (ja) | Information association method, terminal device, server device, and program | |
US20110213773A1 (en) | Information processing apparatus, keyword registration method, and program | |
CN110781328A (zh) | Speech-recognition-based video generation method, system, device, and storage medium | |
US20110125731A1 (en) | Information processing apparatus, information processing method, program, and information processing system | |
JP4814849B2 (ja) | Frame identification method | |
KR101100191B1 (ko) | Multimedia playback device and multimedia data search method using the same | |
US20130232407A1 (en) | Systems and methods for producing, reproducing, and maintaining electronic books | |
JP2012238232A (ja) | Interest-section detection device, viewer interest information presentation device, and interest-section detection program | |
JP4192703B2 (ja) | Content processing device, content processing method, and program | |
CN101547303B (zh) | Imaging device, character information association method, and character information association system | |
CN110309324A (zh) | Search method and related device | |
JP4473813B2 (ja) | Automatic metadata generation device, automatic metadata generation method, automatic metadata generation program, and recording medium storing the program | |
JP2006202081A (ja) | Metadata generation device | |
JPH11167583A (ja) | Telop character recognition method, video storage and display device, telop character recognition/search terminal, and video search terminal | |
KR20170043944A (ko) | Display device and control method thereof | |
JP2002032386A (ja) | Data processing method and device, and recording medium storing a program implementing the method | |
JPH08249343A (ja) | Audio information acquisition device and audio information acquisition method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states | Kind code of ref document: A1. Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KN KP KR KZ LC LK LR LS LT LU LV LY MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
AL | Designated countries for regional patents | Kind code of ref document: A1. Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
DPE1 | Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101) | |
WWE | Wipo information: entry into national phase | Ref document number: 2006549078. Country of ref document: JP |
WWE | Wipo information: entry into national phase | Ref document number: 11793807. Country of ref document: US |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | Ep: pct application non-entry in european phase | Ref document number: 05819770. Country of ref document: EP. Kind code of ref document: A1 |
WWW | Wipo information: withdrawn in national office | Ref document number: 5819770. Country of ref document: EP |