CN103838808A - Information processing apparatus and method, and program - Google Patents

Information processing apparatus and method, and program

Info

Publication number
CN103838808A
CN103838808A CN201310579095.2A
Authority
CN
China
Prior art keywords
importance
scene
image
information processing
content
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201310579095.2A
Other languages
Chinese (zh)
Inventor
田中和政
田中健司
中村幸弘
高桥义博
深沢健太郎
吉田恭助
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Corp
Original Assignee
Sony Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corp filed Critical Sony Corp
Publication of CN103838808A publication Critical patent/CN103838808A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Television Signal Processing For Recording (AREA)
  • Indexing, Searching, Synchronizing, And The Amount Of Synchronization Travel Of Record Carriers (AREA)
  • Management Or Editing Of Information On Record Carriers (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

There is provided an information processing apparatus including a plurality of feature amount extraction parts configured to extract, from content, a plurality of feature amounts, a display control part configured to control display of an image of the content and information concerning the feature amounts of the content, and a selecting part configured to select display or non-display of the information concerning the feature amounts. The display control part controls display of importance of a scene found on the basis of the display or non-display of the information concerning the feature amounts which is selected by the selecting part.

Description

Information processing apparatus, information processing method, and program
Technical field
The present disclosure relates to an information processing apparatus, an information processing method, and a program. In particular, it relates to an information processing apparatus, an information processing method, and a program that make the substance of content easy to grasp.
Background art
A preview screen for confirming the substance of moving-image content generally includes a preview area for reproducing the moving image, and a timeline area having a slider that indicates the reproduction position on the timeline.
To grasp the substance of the content, the user can reproduce the moving image and watch the preview or, to grasp it more quickly, move the reproduction position with the slider. Depending on the length of the content, however, grasping the substance in this way may take a long time.
On the other hand, according to Japanese Patent Laid-Open No. 11-284948 and Japanese Patent Laid-Open No. 2000-308003, which are related art, images corresponding to scene changes are displayed along the timeline, so the user can confirm what kind of video exists where.
Summary of the invention
However, the length of the content or the number of its scene changes may increase the number of images corresponding to scene changes, making it difficult for the user to grasp the substance of the content.
The present disclosure has been made in view of the above circumstances, and aims to improve the operability of grasping the substance of content.
An embodiment of the present disclosure provides an information processing apparatus including: a plurality of feature amount extraction parts configured to extract a plurality of feature amounts from content; a display control part configured to control display of an image of the content and of information concerning the feature amounts of the content; and a selecting part configured to select display or non-display of the information concerning the feature amounts. The display control part controls display of the importance of a scene, the importance being found on the basis of the display or non-display, selected by the selecting part, of the information concerning the feature amounts.
The display control part may change the display of the information concerning the feature amounts according to the importance.
The display control part may control, according to the importance, the display of a scene-head image as the information concerning the feature amounts.
The display control part may display a scene-head image of high importance at a larger size than a scene-head image of low importance.
The display control part may display a scene-head image of high importance in front of a scene-head image of low importance.
The display control part may control, according to the importance, the display of an object image in which a predetermined object has been detected as the information concerning the feature amounts.
The display control part may display an object image of high importance at a larger size than an object image of low importance.
The display control part may display an object image of high importance in front of an object image of low importance.
When object images of high importance are consecutively detected along the timeline, the display control part may display one object image of high importance at the head of the interval in which the object images of high importance are consecutively detected.
The information processing apparatus may further include a changing part configured to change the weight of the importance. The display control part may change the display of the information concerning the feature amounts according to the importance whose weight has been changed by the changing part.
The information processing apparatus may further include a scene extraction part configured to extract a scene corresponding to the importance.
The information processing apparatus may further include a summary generation part configured to collect the scenes extracted by the scene extraction part and generate a summary moving image.
The information processing apparatus may further include a metadata generation part configured to generate summary metadata including the start point and the end point of each scene extracted by the scene extraction part.
The information processing apparatus may further include a thumbnail generation part configured to generate a thumbnail image representing the content from an image of a scene extracted by the scene extraction part.
The information processing apparatus may further include a changing part configured to change the weight of the importance. The scene extraction part may extract a scene according to the importance whose weight has been changed by the changing part.
An embodiment of the present disclosure provides an information processing method including the steps of: extracting, by an information processing apparatus, a plurality of feature amounts from content; controlling, by the information processing apparatus, display of an image of the content and of information concerning the feature amounts of the content; selecting, by the information processing apparatus, display or non-display of the information concerning the feature amounts; and controlling, by the information processing apparatus, display of the importance of a scene, the importance being found on the basis of the selected display or non-display of the information concerning the feature amounts.
An embodiment of the present disclosure provides a program that causes a computer to function as: a plurality of feature amount extraction parts configured to extract a plurality of feature amounts from content; a display control part configured to control display of an image of the content and of information concerning the feature amounts of the content; and a selecting part configured to select display or non-display of the information concerning the feature amounts. The display control part controls display of the importance of a scene, the importance being found on the basis of the display or non-display, selected by the selecting part, of the information concerning the feature amounts.
According to an embodiment of the present disclosure, a plurality of feature amounts are extracted from content, and display of an image of the content and of information concerning the feature amounts of the content is controlled. Display or non-display of the information concerning the feature amounts is then selected, and display of the importance of a scene, found on the basis of that selection, is controlled.
According to the embodiments of the present disclosure, the substance of content can be grasped easily.
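The publication does not specify how a scene's importance is computed from the user's display/non-display selections. A minimal sketch under the assumption that importance is a weighted sum of per-scene feature scores, counting only the features currently selected for display (the function and feature names here are illustrative, not from the patent):

```python
def scene_importance(scene_features, selected, weights):
    """Weighted sum of a scene's feature scores, restricted to the
    features the user has selected for display on the timeline.

    scene_features: dict of feature name -> score in [0, 1]
    selected:       set of feature names chosen for display
    weights:        dict of feature name -> weight (default 1.0)
    """
    return sum(weights.get(name, 1.0) * score
               for name, score in scene_features.items()
               if name in selected)

# A scene with a detected face, speech, and slight camera motion.
features = {"face": 1.0, "human_voice": 0.8, "camera_motion": 0.2}

# Only "face" displayed, face weighted 2x:
print(scene_importance(features, {"face"}, {"face": 2.0}))                  # 2.0
# "face" and "human_voice" displayed:
print(scene_importance(features, {"face", "human_voice"}, {"face": 2.0}))   # 2.8
```

Deselecting a feature thus removes its contribution entirely, which matches the behavior described above: the importance display is recomputed from whichever feature amounts the selecting part currently shows, and the changing part's weight adjustment maps onto the `weights` argument.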
Brief description of the drawings
Fig. 1 shows a configuration example of an information processing apparatus to which the present disclosure is applied;
Fig. 2 is a flowchart illustrating content input processing of the information processing apparatus;
Fig. 3 is a flowchart illustrating preview display processing;
Fig. 4 is a flowchart illustrating re-display processing of the preview screen;
Fig. 5 shows an example of the preview screen;
Fig. 6 shows an example of the preview screen;
Fig. 7 shows a display example of a scene change image display part;
Fig. 8 shows another display example of the scene change image display part;
Fig. 9 shows a display example of a face image display part;
Fig. 10 shows another display example of the face image display part;
Fig. 11 shows a configuration example of an information processing apparatus to which the present disclosure is applied;
Fig. 12 is a flowchart illustrating preview display processing;
Fig. 13 is a flowchart illustrating summary generation processing;
Fig. 14 shows a display example of a summary generation display part;
Fig. 15 shows another display example of the summary generation display part;
Fig. 16 illustrates another summary generation method; and
Fig. 17 is a block diagram showing a configuration example of a computer.
Embodiments
Preferred embodiments of the present disclosure are described in detail below with reference to the accompanying drawings. Note that, in this specification and the drawings, structural elements having substantially the same function and structure are denoted by the same reference numerals, and repeated description of these structural elements is omitted.
Embodiments for carrying out the present disclosure (hereinafter referred to as embodiments) are described in the following order.
1. First embodiment (preview screen according to importance)
2. Second embodiment (summary generation according to importance)
3. Third embodiment (computer)
1. First embodiment (preview screen according to importance)
[Configuration of the information processing apparatus]
Fig. 1 shows a configuration example of an information processing apparatus to which the present disclosure is applied.
The information processing apparatus 11 shown in Fig. 1 displays, along a timeline on a screen for previewing content, feature amounts extracted from the content by recognition technologies such as image recognition, speech recognition, and character recognition. The information processing apparatus 11 is constituted by, for example, a personal computer.
In the example of Fig. 1, the information processing apparatus 11 includes a content input part 21, a content file 22, feature amount extraction parts 23-1 to 23-3, a content feature amount database 24, a display control part 25, an operation input part 26, a display part 27, a feature amount extraction part 28, and a search part 29.
The content input part 21 receives content from an external source, not shown, and supplies the received content to the feature amount extraction parts 23-1 to 23-3. The content input part 21 also registers the received content in the content file 22.
The content from the content input part 21 is registered in the content file 22.
The feature amount extraction parts 23-1 to 23-3 perform image recognition, speech recognition, character recognition, and the like on the content to extract each of a plurality of feature amounts, including image feature amounts, speech feature amounts, and so on. The feature amount extraction parts 23-1 to 23-3 register the extracted feature amounts of the content in the content feature amount database 24. Three feature amount extraction parts are shown here, but their number is not limited to three; it varies with the types (number) of feature amounts to be extracted. Hereinafter, when they need not be distinguished from one another, the feature amount extraction parts 23-1 to 23-3 are referred to simply as the feature amount extraction part 23.
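The extraction parts are, in effect, interchangeable recognizers fanned out over the same content, with all results registered in one shared database. A hypothetical sketch of that fan-out, using toy stand-in extractors over a list of per-frame brightness values (the class names, thresholds, and extractors are assumptions for illustration, not from the patent):

```python
class FeatureDatabase:
    """Stand-in for the content feature amount database 24."""
    def __init__(self):
        self.records = {}  # content_id -> {feature name -> extracted value}

    def register(self, content_id, name, value):
        self.records.setdefault(content_id, {})[name] = value


def extract_all(content_id, frames, extractors, db):
    """Run every registered extractor over the content and store each result."""
    for name, extract in extractors.items():
        db.register(content_id, name, extract(frames))


# Toy extractors: a scene-change detector (large brightness jump between
# consecutive frames) and an average-level feature.
extractors = {
    "scene_changes": lambda fs: [i for i in range(1, len(fs))
                                 if abs(fs[i] - fs[i - 1]) > 50],
    "mean_level": lambda fs: sum(fs) / len(fs),
}

db = FeatureDatabase()
extract_all("clip-1", [10, 12, 200, 198], extractors, db)
print(db.records["clip-1"]["scene_changes"])  # [2]
```

Adding a fourth recognizer is then just one more entry in `extractors`, which is consistent with the statement that the number of extraction parts varies with the types of feature amounts extracted.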
The feature amounts of the content extracted by the feature amount extraction part 23 are registered in the content feature amount database 24.
In response to a user instruction from the operation input part 26, the display control part 25 takes out the content to be previewed and the feature amounts of that content from the content file 22 and the content feature amount database 24, respectively. The display control part 25 generates a preview screen based on a preview image of the retrieved content and the information concerning its feature amounts, and controls the display part 27 to display the generated preview screen.
While the preview screen is displayed, when text or image information is input by a user instruction through the operation input part 26 and supplied to the feature amount extraction part 28, the display control part 25 receives the search result that the search part 29 provides in response to the input information, and re-displays the preview screen based on that search result. The display control part 25 also re-displays the preview screen based on the feature amounts whose display or non-display the user has selected through the operation input part 26. At that time, the display control part 25 determines the importance of each scene according to the feature amounts selected by the user, and re-displays the preview screen according to the importance.
Further, while the preview screen is displayed, the display control part 25 corrects and updates the information registered in the content feature amount database 24 based on corrections to the feature amounts input through the operation input part 26.
The operation input part 26 includes, for example, a mouse, a touch panel layered on the display part 27, and the like. The operation input part 26 supplies signals corresponding to user operations to the display control part 25. The display part 27 displays the preview screen generated by the display control part 25.
The feature amount extraction part 28 extracts the feature amount of the text or image information instructed by the user and supplied from the display control part 25, and supplies that feature amount to the search part 29. The search part 29 searches the content feature amount database 24 for feature amounts similar to the feature amount from the feature amount extraction part 28, and supplies the search result to the display control part 25.
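The publication says only that the search part retrieves "similar" feature amounts; a common realization of such a query is nearest-neighbor search by cosine similarity over stored feature vectors. A sketch under that assumption (the vector representation and function names are illustrative, not specified by the patent):

```python
import math

def cosine(a, b):
    """Cosine similarity of two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def search_similar(query_vec, database, top_k=3):
    """Return (timestamp, score) pairs most similar to the query feature."""
    scored = [(t, cosine(query_vec, v)) for t, v in database.items()]
    scored.sort(key=lambda p: -p[1])
    return scored[:top_k]

# Toy database: timestamp (seconds) -> 2-dimensional feature vector.
db = {0.0: [1.0, 0.0], 5.0: [0.9, 0.1], 10.0: [0.0, 1.0]}
hits = search_similar([1.0, 0.0], db, top_k=2)
print([t for t, _ in hits])  # [0.0, 5.0]
```

The returned timestamps are exactly what the display control part needs in order to place the matching thumbnails at the correct positions along the timeline.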
[Operation of the information processing apparatus]
Next, content input processing of the information processing apparatus 11 is described with reference to the flowchart of Fig. 2.
In step S11, the content input part 21 receives content from an external source, not shown, and supplies the received content to the feature amount extraction parts 23-1 to 23-3.
In step S12, the feature amount extraction parts 23-1 to 23-3 perform image recognition, speech recognition, character recognition, and the like on the content from the content input part 21 to extract each of the feature amounts, such as image feature amounts and speech feature amounts. In step S13, the feature amount extraction parts 23-1 to 23-3 register the extracted content feature amounts in the content feature amount database 24.
In step S14, the content input part 21 registers the received content in the content file 22.
Next, the preview display processing performed using the content and content feature amounts registered as described above is described with reference to the flowchart of Fig. 3.
The user operates the operation input part 26 to select the content to be previewed. Information on the content selected by the user is supplied from the operation input part 26 to the display control part 25.
In step S31, the display control part 25 selects the content according to the information from the operation input part 26. In step S32, the display control part 25 obtains the content selected in step S31 from the content file 22.
In step S33, the display control part 25 obtains the feature amounts of the content selected in step S31 from the content feature amount database 24.
In step S34, the display control part 25 displays the preview screen. In other words, the display control part 25 generates a preview screen based on the obtained content and its feature amounts, and controls the display part 27 to display the generated preview screen (a preview screen 51 shown in Fig. 5, described later), on which information concerning the various feature amounts is displayed along the timeline. What is displayed along the timeline is not only the feature amount information itself but information concerning the feature amounts more broadly: the feature amount information, information obtained using the feature amounts, or results retrieved using the feature amounts.
In step S35, the display control part 25 performs re-display processing of the preview screen. Through the re-display processing of the preview screen described later with reference to Fig. 4, the preview screen (a preview screen 51 shown in Fig. 6, described later), updated in response to user instructions supplied from the operation input part 26, is displayed on the display part 27 in the processing of step S35.
In step S36, the display control part 25 determines whether the display of the preview screen is to be ended. If the user has issued an end instruction through the operation input part 26, it is determined in step S36 that the display of the preview screen is to be ended, and the display of the preview screen ends.
On the other hand, if it is determined in step S36 that the display of the preview screen is not to be ended, the processing returns to step S35, and step S35 and the subsequent steps are repeated.
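Steps S31 through S36 amount to a select-fetch-render loop that keeps re-rendering until the user ends the preview. A compact sketch of that control flow, with the component interactions reduced to callables (a structural illustration only, not the patent's implementation):

```python
def preview_loop(select_content, fetch_content, fetch_features, render, events):
    """Drive the preview screen of Fig. 3: pick content, show it, then
    re-render on each user event until an end instruction arrives."""
    content_id = select_content()            # S31: choose content
    content = fetch_content(content_id)      # S32: from the content file
    features = fetch_features(content_id)    # S33: from the feature database
    renders = 0
    render(content, features)                # S34: initial preview screen
    renders += 1
    for ev in events:                        # S35/S36: re-display loop
        if ev == "stop":
            break                            # end instruction: stop display
        render(content, features)            # re-display processing
        renders += 1
    return renders

shown = preview_loop(lambda: "clip-1",
                     lambda cid: cid,
                     lambda cid: {},
                     lambda c, f: None,
                     iter(["select", "select", "stop"]))
print(shown)  # 3
```

Each non-stop event corresponds to one pass through the Fig. 4 re-display processing described next.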
Next, the re-display processing of the preview screen in step S35 of Fig. 3 is described with reference to the flowchart of Fig. 4.
In step S51, the display control part 25 determines whether text to be searched for has been input through the operation input part 26. If it is determined in step S51 that text to be searched for has been input, the display control part 25 supplies the information on the input text to the feature amount extraction part 28, and the processing advances to step S52.
In step S52, the feature amount extraction part 28 and the search part 29 perform a search by speech and OCR. That is, in this case, the feature amount extraction part 28 supplies the text to be searched for from the display control part 25 to the search part 29 as it is. The search part 29 searches the speech recognition or character recognition results in the content feature amount database 24 for the text to be searched for, and supplies the search result to the display control part 25. The processing then advances to step S56.
If it is determined in step S51 that no text to be searched for has been input, the processing advances to step S53. In step S53, the display control part 25 determines whether an image to be searched for has been input through the operation input part 26. If it is determined in step S53 that an image to be searched for has been input, the display control part 25 supplies the information on the input image to the feature amount extraction part 28, and the processing advances to step S54.
In step S54, the feature amount extraction part 28 and the search part 29 search for similar images. In other words, in this case, the feature amount extraction part 28 extracts the feature amount of the image to be searched for supplied from the display control part 25, and supplies the extracted feature amount to the search part 29. The search part 29 searches the content feature amount database 24 for similar images using the feature amount of the image to be searched for, and supplies the search result to the display control part 25. The processing then advances to step S56.
If it is determined in step S53 that no image to be searched for has been input, the processing advances to step S55. In step S55, the display control part 25 determines whether a feature amount to be displayed has been selected through the operation input part 26.
The user can select display or non-display of each feature amount (each item of information concerning the feature amounts) displayed along the timeline on the preview screen. If the user selects display of at least one of the feature amounts, it is determined in step S55 that a feature amount to be displayed has been selected, and the processing advances to step S56.
In step S56, the display control part 25 re-displays the preview screen. In other words, after step S52, the preview screen is re-displayed in step S56 with the search result for the text added to the feature amounts (information concerning the feature amounts) to be displayed along the timeline. After step S54, the preview screen is re-displayed in step S56 with the search result for the image added to the feature amounts to be displayed along the timeline. After step S55, the preview screen is re-displayed in step S56 with the feature amounts to be displayed along the timeline shown or hidden according to the user's selection. Thereafter, the processing returns to step S35 of Fig. 3.
If it is determined in step S55 that no feature amount to be displayed has been selected, the re-display processing of the preview screen ends, and the processing returns to step S35 of Fig. 3.
[example of preview screen]
Fig. 5 shows the example of preview screen.
The preview screen 51 of explanation in the step S34 that the example of Fig. 5 for example shows at Fig. 3 etc.
Preview screen 51 comprises: preview display part 61, can carry out preview to the dynamic image of content therein; With timeline display part 62, it is positioned at the below of preview display part 61 and is shown by selection upper left side label.
Preview display part 61, in response to the user's operation that is arranged on the action button (reproduction button, fast forward button, speed are moved back button, stop button etc.) under preview display part 61, reproduces the also dynamic image of preview content.61 demonstrations of preview display part are for selecting facial frame 71 in shown content, and described face is process face recognition in face-image display part 85 described later.
Timeline display part 62 shows the relevant information of multiple characteristic quantities of extracting to the 23-1 to 23-3 of Characteristic Extraction portion by Fig. 1 along timeline.And, on timeline, arrange wiredly 63, line 63 represents the position of the current image (frame) showing in preview display part 61, user can be by checking that line 63 grasps the reproduction position of content on timeline.
In addition, what show on timeline display part 62 right sides is characteristic quantity list 64, and characteristic quantity list 64 makes it possible to the demonstration on timeline display part 62 or do not show and select.User can tick or not tick select to show or do not show the information relevant with characteristic quantity and only show the information relevant with the characteristic quantity of expecting in the frame that is arranged in this list left side.
Note, in the example of Fig. 5, not only not selected from upper several the 4th frames " correlativity " in characteristic quantity list 64., the timeline display part 62 of Fig. 5 does not show by choosing " correlativity " and shown importance display part 91(Fig. 6 described later).
In addition, in fact summarization generation display part 65 is arranged on the position identical with timeline display part 62, but not shown in the example of Fig. 5.Be arranged on the upper left label of summarization generation display part 65 and timeline display part 62 by selection, can generate display part 65 with replacement time line display part 62 by Display Summary.
Can show that the summarization generation display part 65 describing in detail with reference to Figure 14 is after a while to make to generate summary dynamic image etc.
Timeline display part 62 starts to comprise successively scene changes image displaying part 81, speech waveform display part 82, text retrieval result display part 83, image searching result display part 84, face-image display part 85, object images display part 86, personage's voice region display part 87 and camera action message display part 88 from top.These display parts are all the display parts for showing the information relevant with characteristic quantity.
By choosing " thumbnail (Thumbnail) " in characteristic quantity list 64 with displayed scene modified-image display part 81 in timeline display part 62.In scene modified-image display part 81, the thumbnail image of a two field picture of each scene that demonstration obtains by scene changes on timeline is as a characteristic quantity.Note, hereinafter a scene image (scene head image) is called to scene changes image.
By choosing " waveform (Wave form) " in characteristic quantity list 64 to show speech waveform display part 82 in timeline display part 62.In speech waveform display part 82, on timeline, the speech waveform of displaying contents is as a characteristic quantity.
By choosing " keyword identification (Keyword Spotting) " in characteristic quantity list 64 to show text retrieval result display part 83 in timeline display part 62.In text retrieval result display part 83, the text (" president (president) " in the case of the example of Fig. 5) that shown is based on inputting by operating operation input part 26 for user according to the characteristic quantity of speech recognition or character recognition and the result of retrieval of content characteristic quantity database 24.
By choosing " image recognition (Image Spotting) " in characteristic quantity list 64 to show image searching result display part 84 in timeline display part 62.In image searching result display part 84, shown is based on according to the characteristic quantity of image recognition for user by the similar scene of the selected image of operating operation input part 26 and the result (thumbnail image) of retrieval of content characteristic quantity database 24.
By choosing " face (Face) " in characteristic quantity list 64 to show facial image displaying part 85 in timeline display part 62.In face-image display part 85, shown is from content characteristic amount database 24 with according to the similar characteristic quantity of the characteristic quantity of face recognition (thumbnail image), this characteristic quantity is that the face of being selected by the frame 71 in preview display part 61 by identification obtains.
By choosing " Capitol Hill (Capitol Hill) " in characteristic quantity list 64 to show object images display part 86 in timeline display part 62.Herein, in the example of Fig. 5, " Capitol Hill " is the example of object, but object is not limited to " Capitol Hill " and can be specified by user.In object images display part 86, shown is the result (thumbnail image) of the characteristic quantity retrieval of content characteristic quantity database 24 of the identification of the object to user's appointment (" Capitol Hill " in the situation that of Fig. 5) based on basis.
Note that although this example displays face images and object images separately, a face is also one kind of object. The images shown in the face image display section 85 and the object image display section 86 may be thumbnail images obtained by clipping the extracted object out of the original image.
By choosing " personage's voice (Human Voice) " in characteristic quantity list 64 to show personage's voice region display part 87 in timeline display part 62.In personage's voice region display part 87, shown is personage's voice region or the music block etc. by obtaining according to the characteristic quantity of speech recognition.Here, as shown in Figure 5, personage's voice region display part 87 not only can show the region that people talks, and also can show according to the mark at talker's sex or age.
By choosing " camera action (Camera Motion) " in characteristic quantity list 64 to show camera action message display part 88 in timeline display part 62.In camera action message display part 88, shown is have cameras such as horizontal pans, pitching shooting or zoom and the action message of camera lens (following, be called camera action message) region, described action message is according to the characteristic information of camera action recognition.As camera action message, also can use the information of the sensor of sensing camera action in the time of content of shooting etc.
The preview screen 51 thus displays, along the timeline, the various feature quantities that can be extracted from the content, such as those exemplified above, together with information obtained using those feature quantities.

In the preview screen 51 described above, however, the thumbnail images shown in the scene change image display section 81, the face image display section 85, and the object image display section 86 of Fig. 5 vary with the length of the content, the number of scene changes, and the number of detected objects. This makes it difficult to check each image, and therefore difficult to grasp the substance of the content.
Therefore, in the present invention, the images, including thumbnail images, displayed along the timeline in the scene change image display section 81, the face image display section 85, and the object image display section 86 are displayed effectively according to the feature quantity selected by the user.

In the present invention, for example, according to the feature quantity selected by the user, the images displayed along the timeline are shown effectively by varying their size, their front-to-back arrangement, and so on.
The feature quantity that the user selects in the feature quantity list 64 can be judged to be important to that user for grasping the substance of the content. For example, if shots of people matter, then scenes in which people appear, obtained by face detection, are important; if scenes in which a particular word is spoken matter, then scenes extracted by text retrieval in speech recognition are important.

Accordingly, the display control section 25 judges scenes corresponding to the feature quantity selected by the user to be important scenes, and scenes corresponding to more feature quantities to be more important scenes, and in this way determines the importance of each scene.

Each feature quantity may also be given an importance weight, and sliders for adjusting the weight of each feature quantity may be displayed so that the user can set the weights arbitrarily when the importance is judged.
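As a rough illustration of the importance judgment described here, the per-scene score could be the (optionally weighted) count of user-selected feature quantities the scene matches, quantized into three levels. The function names, default weights, and thresholds below are assumptions for illustration, not part of this disclosure:

```python
def scene_importance(scene_features, selected, weights=None):
    """Score a scene by the user-selected feature quantities it matches."""
    # Default: every selected feature quantity weighs equally (slider value 1.0).
    weights = weights or {f: 1.0 for f in selected}
    # A scene matching more of the selected features scores higher.
    return sum(weights[f] for f in selected if f in scene_features)

def to_level(score, thresholds=(1.0, 2.0, 3.0)):
    """Quantize a raw score into importance levels 0-3 (3 = most important)."""
    return sum(1 for t in thresholds if score >= t)
```

With sliders, the user would simply change the values in `weights`, which shifts scenes between levels without re-running recognition.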
The importance judged as described above is displayed in the timeline display section 62 as shown in Fig. 6.

Fig. 6 shows another example of the preview screen. The timeline display section 62 in the example of Fig. 6 differs from the timeline display section 62 of Fig. 5 in that an importance display section 91 is newly provided between the speech waveform display section 82 and the text retrieval result display section 83.

In other respects, the timeline display section 62 of Fig. 6 is basically identical to that of Fig. 5.

Selecting "Relevance" in the feature quantity list 64 displays the importance display section 91 in the timeline display section 62. The importance display section 91 shows the importance obtained by judging the scenes corresponding to the feature quantities the user selected in the feature quantity list 64 to be important scenes, and the scenes corresponding to more feature quantities to be more important scenes, thereby determining the importance of each scene. Here the importance is divided into three levels, with importance 3 the highest.

For example, the importance display section 91 shows the judged importance of each scene as follows: solid black areas are the most important scenes (importance 3), finely hatched areas are scenes of importance 2, and diagonally hatched areas are scenes of importance 1.
The display control section 25 then uses this importance to change how the information related to the feature quantities is displayed in the scene change image display section 81, the face image display section 85, and the object image display section 86. In other words, in these sections, the images of more important scenes are displayed larger and/or further to the front.
Next, the use of importance in the scene change image display section 81 is described with reference to Fig. 7. In the example of Fig. 7, thumbnail images 101 to 108 are displayed from the left in the scene change image display section 81.

A of Fig. 7 shows the scene change image display section 81 when importance is not considered. In other words, in the scene change image display section 81 of A of Fig. 7, the thumbnail images of all scene changes are displayed at the same size in chronological order along the timeline: the chronologically first thumbnail image 101 is placed backmost, and the chronologically last thumbnail image 108 is placed frontmost.
B of Fig. 7 shows the scene change image display section 81 when the thumbnail images of important scenes are enlarged. In other words, in the scene change image display section 81 of B of Fig. 7, the thumbnail image 103 of the most important scene is displayed larger than the other thumbnail images; the thumbnail images 101 and 106 of important scenes are displayed at the next largest size; and the thumbnail images 102, 104, and 107 of less important scenes are displayed larger than the thumbnail images 105 and 108 of unimportant scenes.

C of Fig. 7 shows the scene change image display section 81 when the display of B of Fig. 7 is changed so that the thumbnail images 101 to 108 are vertically centered.

D of Fig. 8 shows the scene change image display section 81 when the display of C of Fig. 7 is changed so that the thumbnail images of more important scenes are placed further to the front. In other words, in the scene change image display section 81 of D of Fig. 8, the thumbnail image 103 of the most important scene is displayed frontmost, the thumbnail images 101 and 106 of important scenes are displayed next, then the thumbnail images 102, 104, and 107 of less important scenes, and finally the thumbnail images 105 and 108 of unimportant scenes. As a result, the thumbnail images 102, 104, and 105 are in fact hidden.
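The size and front-to-back (z-order) arrangement described for B of Fig. 7 through D of Fig. 8 could be computed as in the following sketch; the scale factors and the z-ordering key are illustrative assumptions:

```python
def layout_thumbnails(thumbs, base=(160, 90)):
    """thumbs: list of (image_id, importance 0-3) in timeline order.
    Returns (image_id, (w, h), z): more important scenes are drawn
    larger and with a higher z (further to the front)."""
    scale = {3: 1.0, 2: 0.8, 1: 0.6, 0: 0.45}
    out = []
    for order, (idx, imp) in enumerate(thumbs):
        s = scale[imp]
        # Importance dominates the z key; timeline order breaks ties,
        # so the most important thumbnail always ends up frontmost.
        z = imp * len(thumbs) + order
        out.append((idx, (int(base[0] * s), int(base[1] * s)), z))
    return out
```

Rendering the results sorted by ascending `z` reproduces the D-of-Fig.-8 layout, where less important thumbnails may be partly or fully covered.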
E of Fig. 8 shows the scene change image display section 81 when the display of D of Fig. 8 is changed so that the images are staggered at their top edges according to importance, so that no thumbnail image is completely hidden.

In other words, in the scene change image display section 81 of E of Fig. 8, each thumbnail image is displayed in such a way that the thumbnail images 102, 104, and 105, hidden in the case of D of Fig. 8, are visible behind the thumbnail images 101, 103, and 106.

The example of E of Fig. 8 staggers the images at the top edge, but the bottom edge may similarly be staggered instead.

Like D of Fig. 8, F of Fig. 8 shows the scene change image display section 81 with the thumbnail images 102, 104, and 105 hidden. In the scene change image display section 81 of F of Fig. 8, however, the scenes of the hidden thumbnail images are indicated as follows: when the arrow M indicating the mouse position hovers over the scene of a hidden thumbnail image in response to a user operation, the outline of the hidden thumbnail image is drawn with a dotted line. Furthermore, when the arrow M hovers over the displayed outline in response to a user operation, the corresponding thumbnail image is brought to the front.
As described above, because the scene change images (thumbnail images) in the scene change image display section 81 are displayed according to the importance based on the feature quantities selected by the user, the user can easily grasp the substance of the content.

Note that the above example judges the importance of the thumbnail images in the scene change image display section 81 according to the feature quantities the user selects in the feature quantity list 64. For the thumbnail images in the face image display section 85 and the object image display section 86, on the other hand, a characteristic of each object (faces included) can be selected by the user, and the object images (thumbnail images) matching the selected characteristic are judged to be the most important images.

For example, the more detailed characteristics extracted from face images obtained by face recognition include sex, age, smile judgment, and name. The more detailed characteristics extracted from object images obtained by object recognition include the owner's name and the color of the object. For human voice information, characteristics such as male or female voice, the speaker, and music recognition are extracted. For camera motion information, characteristics such as panning, tilting, zooming in, and zooming out are extracted.

For the thumbnail images in the face image display section 85 and the object image display section 86, the characteristics extracted as above are made selectable, so that the images (thumbnail images) matching the characteristic selected by the user are judged to be important images. According to the importance judged in this way, each image can be displayed with its size varied or its front-to-back position changed.
Fig. 9 shows an example of the face image display section 85 when a particular person is selected as a detailed characteristic.

In other words, in the face image display section 85 of Fig. 9, the face images of the particular person are extracted from the face images, and the extracted face images are displayed larger than the other face images.

This allows the user to easily identify important scenes for object images as well.
Next, the object images (thumbnail images) in the face image display section 85 and the object image display section 86 are described with reference to Fig. 10.

Taking the face image display section 85 of Fig. 5 as an example of object images, thumbnail images are displayed along the timeline for every frame from which a face image was extracted. As shown in A of Fig. 10, the same object (the face of a particular person) is therefore displayed continuously, and the object images are shown overlapping one another.
To address this, the identity of the detected objects is recognized, and in a region where the same object appears continuously, the display control section 25 displays one representative of the multiple consecutive object images, as shown in B of Fig. 10. The display control section 25 then displays a marker, such as an arrow or a rectangle, for that interval.

The image selected as the representative object image may be the first or middle image of the consecutive object images, the image with the highest object recognition accuracy in object detection, the average image of the consecutive object images, or an image judged important from the user's selection of an object characteristic.

The rectangle showing the above interval may, for example, be drawn in a representative color of the series of object images. The representative color may be determined, for example, from the most frequent color in the detected object or in the background of the object. If, within an interval where the same object appears continuously, the object goes undetected for a very short interval because of detection accuracy, that short interval may be interpolated and judged to be an interval in which the object is detected.

If the interval in which the same object appears is long enough that two object images can be displayed without overlapping, the number of displayed object images is not limited to one. In such a case, as shown in C of Fig. 10, the first and last images of the interval in which the same object appears may be displayed, for example.

Likewise, if the interval in which the same object appears is long, or is lengthened by zooming in on the timeline, the displayed object images are not limited to a single representative image. In such a case, as shown in D of Fig. 10, object images at the moments corresponding to positions within the interval can be displayed at those positions so as to fill the interval. This allows the display control section 25 to display multiple object images at intervals, according to the length of the interval, without overlapping them.

When consecutive images of the same object are displayed without overlap as in B to D of Fig. 10, the object images can also be displayed with varied size or changed front-to-back position according to the importance judged from the characteristic selected by the user. In such a case, the display control section 25 judges the importance of the same object within the interval in which it appears, and varies the size or front-to-back position of the display. Alternatively, the display control section 25 may judge the importance of each image within the interval in which the same object appears and, if the importance differs between images, allow overlap within the interval so that the more important image is displayed larger and further to the front. Alternatively, taking the images displayed in this way into account, the display control section 25 may display further object images at moments where they do not overlap the other object images, so that the corresponding parts of the interval are filled.
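A minimal sketch of collapsing consecutive detections of the same object into one interval with a representative image, including the short-gap interpolation mentioned above, is shown below. The middle frame is used as the representative here; the function name and the `max_gap` default are assumptions:

```python
def collapse_runs(frames, max_gap=2):
    """frames: sorted frame numbers in which one object was detected.
    Returns (start, end, representative) intervals; detection gaps of
    at most max_gap frames are bridged (interpolated) into one run."""
    intervals = []
    start = prev = frames[0]
    for f in frames[1:]:
        if f - prev <= max_gap:
            prev = f  # same run, possibly bridging a short miss
        else:
            intervals.append((start, prev, (start + prev) // 2))
            start = prev = f
    intervals.append((start, prev, (start + prev) // 2))
    return intervals
```

Each returned interval would then be drawn as one representative thumbnail plus an arrow or rectangle marker, instead of one overlapping thumbnail per frame.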
As described above, in the preview screen on which the user checks the substance of moving image content, information related to the various feature quantities of the content is displayed along the timeline, allowing the user to grasp the substance of the content easily.

Furthermore, the user can select each feature quantity, or weight the importance of each feature quantity and select accordingly, so as to choose the scenes the user considers important; according to those important scenes, the scene change images can be displayed with varied size or changed front-to-back position. This makes it easy to identify the scenes important to the user, so that the substance of the content can be grasped more effectively.

In addition, the objects extracted from the content can be displayed with less overlap, and importance can be judged from the characteristic selected by the user so that the important images are displayed with varied size or changed front-to-back position. In this way, too, the substance of the content can be grasped more effectively.
2. Second Embodiment (Summary Generation According to Importance)

[Configuration of an Information Processing Apparatus of the Present Invention]

Fig. 11 shows another configuration example of an information processing apparatus to which the present invention is applied.

In the example of Fig. 11, like the information processing apparatus 11 of Fig. 1, the information processing apparatus 111 displays, along the timeline on a screen for previewing content, information related to content feature quantities extracted from the content by recognition technologies such as image recognition, speech recognition, and character recognition.

Also like the information processing apparatus 11 of Fig. 1, the information processing apparatus 111 judges the importance of each scene according to the feature quantities selected by the user. Unlike the information processing apparatus 11 of Fig. 1, however, the information processing apparatus 111 extracts the scenes corresponding to that importance and collects the extracted scenes to generate a summary moving image, or records their start and end points as metadata.

The information processing apparatus 111 includes the content input section 21, content file 22, feature quantity extraction sections 23-1 to 23-3, content feature quantity database 24, display control section 25, operation input section 26, display section 27, feature quantity extraction section 28, and retrieval section 29, which are the same as in the information processing apparatus 11 of Fig. 1.

The information processing apparatus 111 differs from the information processing apparatus 11 of Fig. 1 in that an important scene determination section 121 and a summary generation section 122 are added.
In other words, when displaying the preview screen, the display control section 25 redisplays the preview screen based on the retrieval results and on the feature quantities (information related to feature quantities) the user has selected for display or non-display through the operation input section 26. At that time, the display control section 25 judges the importance of each scene according to the feature quantities selected by the user, and redisplays the preview screen 51 of Fig. 6 including the importance.

In addition, when it receives a signal with which the user requests summary generation through the operation input section 26, the display control section 25 displays the summary generation display section 65 in the preview screen 51. Then, when the user enters a desired importance through the operation input section 26, the display control section 25 controls the important scene determination section 121 to extract the scenes corresponding to that importance, and displays thumbnail images of the extracted scenes in the summary generation display section 65.

The important scene determination section 121 extracts the scenes corresponding to the importance under the control of the display control section 25, and supplies the extracted scenes to the display control section 25 and the summary generation section 122. For example, the important scene determination section 121 stores the start point and end point information of the extracted important scenes as metadata in the content feature quantity database 24. Alternatively, the important scene determination section 121 generates one or more thumbnail images representing the content, using still images captured from those scenes.

Alternatively, the summary generation section 122 generates a summary moving image from the scenes supplied by the important scene determination section 121. The generated summary moving image is recorded in a storage section (not shown).

In other words, when the judged importance is divided into multiple levels, the display control section 25 has the user select the required importance. The important scene determination section 121 then extracts the scenes corresponding to that importance and stores their metadata or generates thumbnail images, or the summary generation section 122 generates a summary moving image.
[Operation of the Information Processing Apparatus]

Note that the content input processing of the information processing apparatus 111 is performed basically in the same way as the content input processing of the information processing apparatus 11 described above with reference to Fig. 2, and its description is omitted to avoid repetition.

Next, the content preview display processing in the information processing apparatus 111 is described with reference to the flowchart of Fig. 12. Steps S111 to S115 and S118 in Fig. 12 perform basically the same processing as steps S31 to S36 of Fig. 3, so their description is abbreviated as appropriate to avoid repetition.
In step S111, the display control section 25 selects content according to information from the operation input section 26. In step S112, the display control section 25 obtains the content selected in step S111 from the content file 22.

In step S113, the display control section 25 obtains the feature quantities of the content selected in step S111 from the content feature quantity database 24.

In step S114, the display control section 25 displays the preview screen. In other words, based on the obtained content and content feature quantities, the display control section 25 generates a preview screen in which information about the various feature quantities is displayed along the timeline, and controls the display section 27 to display the generated preview screen (the preview screen 51 shown in Fig. 5).

In step S115, the display control section 25 performs the preview screen redisplay processing described above with reference to Fig. 4. In the processing of step S115, the preview screen, updated in response to user instructions given through the operation input section 26, is displayed on the display section 27. In other words, importance is obtained by a judgment according to the feature quantities the user selects in the feature quantity list 64, and the preview screen 51 of Fig. 6 including the importance is displayed on the display section 27.
In step S116, the display control section 25 determines whether a summary is to be generated.

For example, the user operates the operation input section 26 to select, of the tabs provided at the upper left of the timeline display section 62 and the summary generation display section 65 in the preview screen 51, the tab of the summary generation display section 65.

In response, the display control section 25 determines in step S116 that a summary is to be generated, and the processing proceeds to step S117. In step S117, the important scene determination section 121 and the summary generation section 122 perform summary generation processing, which is described later with reference to Fig. 13. The processing of step S117 generates a summary moving image, stores metadata, or generates thumbnail images according to the selected importance.

If the tab of the summary generation display section 65 is not selected, it is determined in step S116 that no summary is to be generated, the processing of step S117 is skipped, and the processing proceeds to step S118.

In step S118, the display control section 25 determines whether the display of the preview screen is to be terminated. If the user gives a termination instruction through the operation input section 26, it is determined in step S118 that the display of the preview screen is to be terminated, and the display of the preview screen ends.

If, on the other hand, it is determined in step S118 that the display of the preview screen is not to be terminated, the processing returns to step S115 and the steps from S115 onward are repeated.
The summary generation processing of step S117 in Fig. 12 is described below with reference to the flowchart of Fig. 13.

For example, in step S115 of Fig. 12 the preview screen 51 is redisplayed, and the importance is shown in the importance display section 91 of Fig. 6. When the tab of the summary generation display section 65 is selected in this preview screen 51, the summary generation display section 65 is displayed in place of the timeline display section 62, as shown in Fig. 14.

In the summary generation display section 65 of Fig. 14, a band indicating the importance of the scene is superimposed on each of the scene change images. Here the importance is divided into three levels, with importance 3 the highest.

The solid black bands of Fig. 14 correspond to the solid black areas in the importance display section 91 of Fig. 6 and represent the most important scenes (importance 3). The finely hatched bands of Fig. 14 correspond to the finely hatched areas in the importance display section 91 of Fig. 6 and represent scenes of importance 2. The diagonally hatched bands of Fig. 14 correspond to the diagonally hatched areas in the importance display section 91 of Fig. 6 and represent scenes of importance 1.

In the example of Fig. 14, no band is superimposed on scenes whose importance is lower than importance 1.
The user then selects an importance. For example, as shown in A of Fig. 15, an importance selection section 141 is displayed on the right side of the summary generation display section 65 for selecting a priority (importance) from "most" (most important), "more" (more important), and "relevant" (suitably important).

The user operates the operation input section 26 to select an importance in the importance selection section 141. In response to this operation, the display control section 25 controls the important scene determination section 121 in step S132 to extract the scenes corresponding to that importance. Information about the extracted scenes is supplied to the display control section 25, which updates the display as shown in A to C of Fig. 15.

For example, if "relevant" is selected, the thumbnail images of the scenes of importance 1 or higher are extracted, and as shown in A of Fig. 15 the summary generation display section 65 displays the thumbnail images of the scenes of importance 1 or higher. If "more" is selected, the thumbnail images of the scenes of importance 2 or higher are extracted, and as shown in B of Fig. 15 the summary generation display section 65 displays the thumbnail images of the scenes of importance 2 or higher. If "most" is selected, the thumbnail images of the scenes of importance 3 are extracted, and as shown in C of Fig. 15 the summary generation display section 65 displays the thumbnail images of the scenes of importance 3.
Then, in step S133-1, the important scene determination section 121 generates one or more thumbnail images representing the content, using still images captured from these scenes.

Alternatively, in step S133-2, the important scene determination section 121 stores information about the start and end points of the extracted important scenes as metadata in the content feature quantity database 24.

Alternatively, in step S133-3, the summary generation section 122 generates a summary moving image from the scenes supplied by the important scene determination section 121. The generated summary moving image is recorded in a storage section (not shown).

Steps S133-1 to S133-3 are shown side by side because any one of them may be performed, and at least two of them may be performed in parallel.
In step S134, the display control section 25 determines whether the summary generation processing is to be terminated. For example, the user operates the operation input section 26 to select, of the tabs provided at the upper left of the timeline display section 62 and the summary generation display section 65 in the preview screen 51, the tab of the timeline display section 62.

In response to this operation, the display control section 25 determines in step S134 that the summary generation processing is to be terminated, displays the timeline display section 62 in place of the summary generation display section 65, and ends the summary generation processing.

If, on the other hand, it is determined in step S134 that the summary generation processing is not to be terminated, the processing returns to step S131 and the steps from S131 onward are repeated.

As described above, the user can select an importance according to the scenes required and generate a summary from the extracted scenes. Alternatively, the user can store information about the start and end points of the extracted scenes as metadata for use in other applications and the like. Representative images such as scene change images can also be used to generate one or more thumbnail images representing the content. Because these thumbnail images are extracted from important scenes, compared with the related-art method of using the first image of a scene as the thumbnail image, the substance of the content can be easily understood merely by looking at the thumbnail image.
Regarding the selection of importance, the length of the summary moving image that would be generated from the extracted scenes can be displayed as the importance is switched, so that an importance can be selected that brings the length of the moving image close to the length the user desires, and the summary moving image can then be generated.

Alternatively, the user may enter the desired length into the information processing apparatus 111 in advance; an importance is then selected automatically so that a summary moving image of a length close to that length is generated according to the importance, and the summary is generated.
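The automatic selection of an importance from a pre-entered target length could be sketched as follows, choosing the threshold whose total extracted scene duration is closest to the desired length (an illustrative sketch, not the patented implementation):

```python
def auto_select_level(scenes, target_len):
    """scenes: list of (start, end, importance). Return the importance
    threshold (1-3) whose total extracted duration is nearest target_len."""
    def total(th):
        return sum(end - start for start, end, imp in scenes if imp >= th)
    # Try all three thresholds and keep the best fit to the target length.
    return min((1, 2, 3), key=lambda th: abs(total(th) - target_len))
```

Displaying `total(th)` for each threshold as the user switches importance corresponds to the interactive variant described in the previous paragraph.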
[Another Example of Summary Generation]

Next, another method for generating a summary more easily is described, in which similar scenes are extracted from one or more images selected by the user and a summary is generated.

For example, in the image retrieval result display section 84 of the preview screen 51 of Fig. 5, when retrieving scenes whose feature quantities are similar to those of an input image, the user can input not only one image but multiple images to retrieve the scenes similar to each image. The relevant ranges can then be extracted from the retrieval results of the similar scenes as important scenes, and a summary moving image and thumbnail images can be generated from them.

Fig. 16 illustrates an example in which four characteristic images 151 to 154 are input, the scenes similar to each image are retrieved, and important scenes are extracted from the retrieved similar scenes.

Displayed along the timeline 141 are an interval 154A of scenes similar to the image 154, an interval 151A of scenes similar to the image 151, an interval 153A of scenes similar to the image 153, and an interval 152A of scenes similar to the image 152. From these intervals, the intervals 161 shown filled in black are then extracted by selection parameters as the material intervals of the summary moving image; the parameters include the detection accuracy, noise compensation for erroneously detected intervals, and the user's selection of intervals within a specific time period.
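Merging the similar-scene intervals such as 151A to 154A into candidate material intervals, with a simple stand-in for the noise-compensation parameters (bridging small gaps, dropping very short results), might look like this sketch; the names and parameter defaults are assumptions:

```python
def merge_intervals(intervals, min_len=0, max_gap=0):
    """intervals: (start, end) pairs from the similar-scene retrievals.
    Returns their union as material intervals: gaps of at most max_gap
    are bridged, and results shorter than min_len (likely noise or
    erroneous detections) are dropped."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start - merged[-1][1] <= max_gap:
            merged[-1][1] = max(merged[-1][1], end)  # extend current interval
        else:
            merged.append([start, end])
    return [(s, e) for s, e in merged if e - s >= min_len]
```

A user's manual selection within a specific time period could be applied as a final filter over the returned intervals.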
As further feature amount, can use scenes change information, about information of sound interruption etc. more flexibly, suitably to extract scene.According to the scene in the interval of these extractions, can generate summary dynamic image and thumbnail image, and can extract starting point and the terminal of important scenes.
As mentioned above, from dynamic image content, extract various characteristic quantities so that must user can select arbitrarily each characteristic quantity owing to utilizing such as the recognition technology such as speech recognition and image recognition, the intention that therefore, can reflect in more detail user is to extract the important scenes of content.
In addition, due to from optional more than one scene of retrieval of similar characteristic image of user, the important scenes that therefore can select neatly user to want.
About dynamic image content, the utilization of this importance is made it possible to generate to thumbnail image and the summary dynamic image of the intention that more reflects user.
The series of processes described above can be executed by hardware or by software. When the series of processes is executed by software, a program constituting the software is installed in a computer. Here, the term "computer" includes a computer incorporated in dedicated hardware, a general-purpose personal computer capable of executing various functions when various programs are installed, and the like.
3. Third embodiment (computer)
[Example configuration of the computer]
Fig. 17 illustrates an example hardware configuration of a computer that executes the series of processes described above by means of a program.
In the computer 300, a central processing unit (CPU) 301, a read-only memory (ROM) 302, and a random-access memory (RAM) 303 are interconnected by a bus 304.
An input/output interface 305 is also connected to the bus 304. An input unit 306, an output unit 307, a storage unit 308, a communication unit 309, and a drive 310 are connected to the input/output interface 305.
The input unit 306 includes a keyboard, a mouse, a microphone, and the like. The output unit 307 includes a display, a speaker, and the like. The storage unit 308 includes a hard disk, a nonvolatile memory, and the like. The communication unit 309 includes a network interface and the like. The drive 310 drives a removable recording medium 311 such as a magnetic disk, an optical disc, a magneto-optical disk, or a semiconductor memory.
In the computer configured as described above, the CPU 301 loads a program stored in, for example, the storage unit 308 onto the RAM 303 via the input/output interface 305 and the bus 304, and executes the program. The series of processes described above is thereby performed.
As one example, the program executed by the computer (CPU 301) can be provided by being recorded on the removable recording medium 311 serving as a packaged medium or the like. The program can also be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
In the computer, the program can be installed into the storage unit 308 via the input/output interface 305 by loading the removable recording medium 311 into the drive 310. The program can also be received by the communication unit 309 via a wired or wireless transmission medium and installed into the storage unit 308. Alternatively, the program can be installed in advance in the ROM 302 or the storage unit 308.
It should be noted that the program executed by the computer may be a program that is processed in time series in the order described in this specification, or a program that is processed in parallel or at necessary timing, such as when called.
In the present invention, the steps describing the series of processes above may include processes performed in time series in the order described, as well as processes executed in parallel or individually rather than in time series.
Embodiments of the present invention are not limited to the embodiments described above. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations, and alterations may occur depending on design requirements and other factors insofar as they are within the scope of the appended claims or the equivalents thereof.
For example, the present invention can adopt a cloud computing configuration in which one function is shared by a plurality of devices via a network and processed jointly.
Further, each step described in the above flowcharts can be executed by one device or shared among a plurality of devices.
In addition, when a single step includes a plurality of processes, the plurality of processes included in that step can be executed by one device or shared among a plurality of devices.
In addition, an element described above as a single device (or processing unit) may be divided and configured as a plurality of devices (or processing units). Conversely, elements described above as a plurality of devices (or processing units) may be configured collectively as a single device (or processing unit). Further, an element other than those described above may be added to each device (or processing unit). Furthermore, a part of an element of a given device (or processing unit) may be included in an element of another device (or another processing unit) as long as the configuration and operation of the system as a whole are substantially the same. In other words, embodiments of the present invention are not limited to those described above, and various changes and modifications may be made without departing from the scope of the present technology.
Although preferred embodiments of the present invention have been described in detail with reference to the accompanying drawings, the present invention is not limited thereto. It should be understood by those skilled in the art that various modifications and alterations may occur within the technical scope of the appended claims or the equivalents thereof, and that such modifications and alterations naturally fall within the technical scope of the present invention.
Additionally, the present invention may also be configured as below.
(1) An information processing apparatus including:
a plurality of feature quantity extraction units configured to extract a plurality of feature quantities from content;
a display control unit configured to control display of an image of the content and of information related to the feature quantities of the content; and
a selection unit configured to select whether or not to display the information related to the feature quantities;
wherein the display control unit controls display of an importance of a scene, the importance being obtained based on whether display of the information related to the feature quantities has been selected by the selection unit.
(2) The information processing apparatus according to (1), wherein
the display control unit changes the display of the information related to the feature quantities in accordance with the importance.
(3) The information processing apparatus according to (2), wherein
the display control unit controls, in accordance with the importance, display of a scene image serving as the information related to the feature quantities.
(4) The information processing apparatus according to (3), wherein
the display control unit displays a scene image having a high importance in such a manner that the size of the scene image having the high importance is larger than the size of a scene image having a low importance.
(5) The information processing apparatus according to (3), wherein
the display control unit displays a scene image having a high importance in front of a scene image having a low importance.
(6) The information processing apparatus according to (2), wherein
the display control unit controls display of an object image in accordance with the importance, a predetermined object being detected in the object image as the information related to the feature quantities.
(7) The information processing apparatus according to (6), wherein
the display control unit displays an object image having a high importance in such a manner that the size of the object image having the high importance is larger than the size of an object image having a low importance.
(8) The information processing apparatus according to (6), wherein
the display control unit displays an object image having a high importance in front of an object image having a low importance.
(9) The information processing apparatus according to (6), wherein
in a case where object images having a high importance are detected continuously along a timeline, the display control unit displays one object image having the high importance for an interval in which the object images having the high importance are continuously detected.
(10) The information processing apparatus according to any one of (1) to (9), further including:
a changing unit configured to change a weight of the importance;
wherein the display control unit changes the display of the information related to the feature quantities in accordance with the importance whose weight has been changed by the changing unit.
(11) The information processing apparatus according to (1), further including:
a scene extraction unit configured to extract a scene corresponding to the importance.
(12) The information processing apparatus according to (11), further including:
a summary generation unit configured to collect scenes extracted by the scene extraction unit and to generate a summary moving image.
(13) The information processing apparatus according to (11), further including:
a metadata generation unit configured to generate summary metadata including a start point and an end point of the scene extracted by the scene extraction unit.
(14) The information processing apparatus according to (11), further including:
a thumbnail generation unit configured to generate, from an image of the scene extracted by the scene extraction unit, a thumbnail image representing the content.
(15) The information processing apparatus according to any one of (11) to (14), further including:
a changing unit configured to change a weight of the importance;
wherein the scene extraction unit extracts a scene in accordance with the importance whose weight has been changed by the changing unit.
(16) An information processing method including the steps of:
extracting, by an information processing apparatus, a plurality of feature quantities from content;
controlling, by the information processing apparatus, display of an image of the content and of information related to the feature quantities of the content;
selecting, by the information processing apparatus, whether or not to display the information related to the feature quantities; and
controlling, by the information processing apparatus, display of an importance of a scene, the importance being obtained based on whether display of the information related to the feature quantities has been selected.
(17) A program for causing a computer to function as:
a plurality of feature quantity extraction units configured to extract a plurality of feature quantities from content;
a display control unit configured to control display of an image of the content and of information related to the feature quantities of the content; and
a selection unit configured to select whether or not to display the information related to the feature quantities;
wherein the display control unit controls display of an importance of a scene, the importance being obtained based on whether display of the information related to the feature quantities has been selected by the selection unit.
The present application contains subject matter related to that disclosed in Japanese Priority Patent Application JP 2012-257826 filed in the Japan Patent Office on November 26, 2012, the entire content of which is hereby incorporated by reference.

Claims (18)

1. An information processing apparatus comprising:
a plurality of feature quantity extraction units configured to extract a plurality of feature quantities from content;
a display control unit configured to control display of an image of the content and of information related to the feature quantities of the content; and
a selection unit configured to select whether or not to display the information related to the feature quantities;
wherein the display control unit controls display of an importance of a scene, the importance being obtained based on whether display of the information related to the feature quantities has been selected by the selection unit.
2. The information processing apparatus as claimed in claim 1, wherein
the display control unit changes the display of the information related to the feature quantities in accordance with the importance.
3. The information processing apparatus as claimed in claim 2, wherein
the display control unit controls, in accordance with the importance, display of a scene image serving as the information related to the feature quantities.
4. The information processing apparatus as claimed in claim 3, wherein
the display control unit displays a scene image having a high importance in such a manner that the size of the scene image having the high importance is larger than the size of a scene image having a low importance.
5. The information processing apparatus as claimed in claim 3, wherein
the display control unit displays a scene image having a high importance in front of a scene image having a low importance.
6. The information processing apparatus as claimed in claim 5, wherein the display control unit is capable of displaying, with a dotted line, the outline of the hidden scene image having the low importance.
7. The information processing apparatus as claimed in claim 2, wherein
the display control unit controls display of an object image in accordance with the importance, a predetermined object being detected in the object image as the information related to the feature quantities.
8. The information processing apparatus as claimed in claim 7, wherein
the display control unit displays an object image having a high importance in such a manner that the size of the object image having the high importance is larger than the size of an object image having a low importance.
9. The information processing apparatus as claimed in claim 7, wherein
the display control unit displays an object image having a high importance in front of an object image having a low importance.
10. The information processing apparatus as claimed in claim 7, wherein
in a case where object images having a high importance are detected continuously along a timeline, the display control unit displays one object image having the high importance for an interval in which the object images having the high importance are continuously detected.
11. The information processing apparatus as claimed in any one of claims 1 to 9, further comprising:
a changing unit configured to change a weight of the importance;
wherein the display control unit changes the display of the information related to the feature quantities in accordance with the importance whose weight has been changed by the changing unit.
12. The information processing apparatus as claimed in claim 1, further comprising:
a scene extraction unit configured to extract a scene corresponding to the importance.
13. The information processing apparatus as claimed in claim 12, further comprising:
a summary generation unit configured to collect scenes extracted by the scene extraction unit and to generate a summary moving image.
14. The information processing apparatus as claimed in claim 12, further comprising:
a metadata generation unit configured to generate summary metadata including a start point and an end point of the scene extracted by the scene extraction unit.
15. The information processing apparatus as claimed in claim 12, further comprising:
a thumbnail generation unit configured to generate, from an image of the scene extracted by the scene extraction unit, a thumbnail image representing the content.
16. The information processing apparatus as claimed in any one of claims 12 to 15, further comprising:
a changing unit configured to change a weight of the importance;
wherein the scene extraction unit extracts a scene in accordance with the importance whose weight has been changed by the changing unit.
17. An information processing method comprising the steps of:
extracting, by an information processing apparatus, a plurality of feature quantities from content;
controlling, by the information processing apparatus, display of an image of the content and of information related to the feature quantities of the content;
selecting, by the information processing apparatus, whether or not to display the information related to the feature quantities; and
controlling, by the information processing apparatus, display of an importance of a scene, the importance being obtained based on whether display of the information related to the feature quantities has been selected.
18. A program for causing a computer to function as:
a plurality of feature quantity extraction units configured to extract a plurality of feature quantities from content;
a display control unit configured to control display of an image of the content and of information related to the feature quantities of the content; and
a selection unit configured to select whether or not to display the information related to the feature quantities;
wherein the display control unit controls display of an importance of a scene, the importance being obtained based on whether display of the information related to the feature quantities has been selected by the selection unit.
CN201310579095.2A 2012-11-26 2013-11-18 Information processing apparatus and method, and program Pending CN103838808A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2012257826A JP2014106637A (en) 2012-11-26 2012-11-26 Information processor, method and program
JP2012-257826 2012-11-26

Publications (1)

Publication Number Publication Date
CN103838808A true CN103838808A (en) 2014-06-04

Family

ID=50774438

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310579095.2A Pending CN103838808A (en) 2012-11-26 2013-11-18 Information processing apparatus and method, and program

Country Status (3)

Country Link
US (1) US20140149865A1 (en)
JP (1) JP2014106637A (en)
CN (1) CN103838808A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106231233A * 2016-08-05 2016-12-14 北京邮电大学 Real-time screen fusion method based on weights
CN106775243A * 2016-12-16 2017-05-31 厦门幻世网络科技有限公司 Information processing method and electronic device
CN114979496A (en) * 2019-04-22 2022-08-30 夏普株式会社 Electronic device

Families Citing this family (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9185387B2 (en) 2012-07-03 2015-11-10 Gopro, Inc. Image blur based on 3D depth information
US9728230B2 (en) * 2014-02-20 2017-08-08 International Business Machines Corporation Techniques to bias video thumbnail selection using frequently viewed segments
US9685194B2 (en) 2014-07-23 2017-06-20 Gopro, Inc. Voice-based video tagging
US10074013B2 (en) 2014-07-23 2018-09-11 Gopro, Inc. Scene and activity identification in video summary generation
US9734870B2 (en) 2015-01-05 2017-08-15 Gopro, Inc. Media identifier generation for camera-captured media
JP6062474B2 (en) * 2015-03-20 2017-01-18 ヤフー株式会社 Information processing apparatus, information processing method, and information processing program
US9639560B1 (en) 2015-10-22 2017-05-02 Gopro, Inc. Systems and methods that effectuate transmission of workflow between computing platforms
US9871994B1 (en) 2016-01-19 2018-01-16 Gopro, Inc. Apparatus and methods for providing content context using session metadata
US9787862B1 (en) * 2016-01-19 2017-10-10 Gopro, Inc. Apparatus and methods for generating content proxy
US10078644B1 (en) 2016-01-19 2018-09-18 Gopro, Inc. Apparatus and methods for manipulating multicamera content using content proxy
US10129464B1 (en) 2016-02-18 2018-11-13 Gopro, Inc. User interface for creating composite images
KR20170098079A (en) * 2016-02-19 2017-08-29 삼성전자주식회사 Electronic device method for video recording in electronic device
US9972066B1 (en) 2016-03-16 2018-05-15 Gopro, Inc. Systems and methods for providing variable image projection for spherical visual content
US10402938B1 (en) 2016-03-31 2019-09-03 Gopro, Inc. Systems and methods for modifying image distortion (curvature) for viewing distance in post capture
US9838730B1 (en) 2016-04-07 2017-12-05 Gopro, Inc. Systems and methods for audio track selection in video editing
US10229719B1 (en) 2016-05-09 2019-03-12 Gopro, Inc. Systems and methods for generating highlights for a video
US9953679B1 (en) 2016-05-24 2018-04-24 Gopro, Inc. Systems and methods for generating a time lapse video
US9967515B1 (en) 2016-06-15 2018-05-08 Gopro, Inc. Systems and methods for bidirectional speed ramping
US9922682B1 (en) 2016-06-15 2018-03-20 Gopro, Inc. Systems and methods for organizing video files
US10045120B2 (en) 2016-06-20 2018-08-07 Gopro, Inc. Associating audio with three-dimensional objects in videos
US10395119B1 (en) 2016-08-10 2019-08-27 Gopro, Inc. Systems and methods for determining activities performed during video capture
US9953224B1 (en) 2016-08-23 2018-04-24 Gopro, Inc. Systems and methods for generating a video summary
US10268898B1 (en) 2016-09-21 2019-04-23 Gopro, Inc. Systems and methods for determining a sample frame order for analyzing a video via segments
US10282632B1 (en) 2016-09-21 2019-05-07 Gopro, Inc. Systems and methods for determining a sample frame order for analyzing a video
US10397415B1 (en) 2016-09-30 2019-08-27 Gopro, Inc. Systems and methods for automatically transferring audiovisual content
US10044972B1 (en) 2016-09-30 2018-08-07 Gopro, Inc. Systems and methods for automatically transferring audiovisual content
US11106988B2 (en) 2016-10-06 2021-08-31 Gopro, Inc. Systems and methods for determining predicted risk for a flight path of an unmanned aerial vehicle
US10002641B1 (en) 2016-10-17 2018-06-19 Gopro, Inc. Systems and methods for determining highlight segment sets
JP6270975B2 (en) * 2016-12-14 2018-01-31 ヤフー株式会社 Information processing apparatus, information processing method, and information processing program
US10339443B1 (en) 2017-02-24 2019-07-02 Gopro, Inc. Systems and methods for processing convolutional neural network operations using textures
US9916863B1 (en) 2017-02-24 2018-03-13 Gopro, Inc. Systems and methods for editing videos based on shakiness measures
US10360663B1 (en) 2017-04-07 2019-07-23 Gopro, Inc. Systems and methods to create a dynamic blur effect in visual content
US10395122B1 (en) 2017-05-12 2019-08-27 Gopro, Inc. Systems and methods for identifying moments in videos
JP6946729B2 (en) * 2017-05-12 2021-10-06 富士通株式会社 Information processing equipment, information processing system and information processing method
US10402698B1 (en) 2017-07-10 2019-09-03 Gopro, Inc. Systems and methods for identifying interesting moments within videos
US10614114B1 (en) 2017-07-10 2020-04-07 Gopro, Inc. Systems and methods for creating compilations based on hierarchical clustering
US10743085B2 (en) * 2017-07-21 2020-08-11 Microsoft Technology Licensing, Llc Automatic annotation of audio-video sequences
CN109756767B (en) * 2017-11-06 2021-12-14 腾讯科技(深圳)有限公司 Preview data playing method, device and storage medium
US10897639B2 (en) * 2018-12-14 2021-01-19 Rovi Guides, Inc. Generating media content keywords based on video-hosting website content
KR20210108691A (en) * 2020-02-26 2021-09-03 한화테크윈 주식회사 apparatus and method for multi-channel image back-up based on event, and network surveillance camera system including the same
WO2023189520A1 (en) * 2022-03-30 2023-10-05 ソニーグループ株式会社 Information processing system, information processing method, and program

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3472659B2 (en) * 1995-02-20 2003-12-02 株式会社日立製作所 Video supply method and video supply system
JP4227241B2 (en) * 1999-04-13 2009-02-18 キヤノン株式会社 Image processing apparatus and method
EP1182584A3 (en) * 2000-08-19 2005-12-28 Lg Electronics Inc. Method and apparatus for video skimming
US7203380B2 (en) * 2001-11-16 2007-04-10 Fuji Xerox Co., Ltd. Video production and compaction with collage picture frame user interface
WO2005050986A1 (en) * 2003-11-19 2005-06-02 National Institute Of Information And Communications Technology, Independent Administrative Agency Method and device for presenting video content
TWI259719B (en) * 2004-01-14 2006-08-01 Mitsubishi Electric Corp Apparatus and method for reproducing summary
US7945142B2 (en) * 2006-06-15 2011-05-17 Microsoft Corporation Audio/visual editing tool
JP5010292B2 (en) * 2007-01-18 2012-08-29 株式会社東芝 Video attribute information output device, video summarization device, program, and video attribute information output method
JP5421627B2 (en) * 2009-03-19 2014-02-19 キヤノン株式会社 Video data display apparatus and method
US8881013B2 (en) * 2009-04-30 2014-11-04 Apple Inc. Tool for tracking versions of media sections in a composite presentation
JP2011055190A (en) * 2009-09-01 2011-03-17 Fujifilm Corp Image display apparatus and image display method
JP2011239075A (en) * 2010-05-07 2011-11-24 Sony Corp Display device, display method and program
JP5649425B2 (en) * 2010-12-06 2015-01-07 株式会社東芝 Video search device

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106231233A * 2016-08-05 2016-12-14 北京邮电大学 Real-time screen fusion method based on weights
CN106231233B (en) * 2016-08-05 2019-12-20 北京邮电大学 Real-time screen fusion method based on weight
CN106775243A * 2016-12-16 2017-05-31 厦门幻世网络科技有限公司 Information processing method and electronic device
CN106775243B (en) * 2016-12-16 2020-02-11 厦门黑镜科技有限公司 Information processing method and electronic equipment
CN114979496A (en) * 2019-04-22 2022-08-30 夏普株式会社 Electronic device

Also Published As

Publication number Publication date
JP2014106637A (en) 2014-06-09
US20140149865A1 (en) 2014-05-29

Similar Documents

Publication Publication Date Title
CN103838808A (en) Information processing apparatus and method, and program
US8416332B2 (en) Information processing apparatus, information processing method, and program
CN109803180B (en) Video preview generation method and device, computer equipment and storage medium
US10031649B2 (en) Automated content detection, analysis, visual synthesis and repurposing
CN1538351B (en) Method and computer for generating visually representative video thumbnails
US8935169B2 (en) Electronic apparatus and display process
US9313444B2 (en) Relational display of images
KR20110043612A (en) Image processing
KR20100018988A (en) Multi contents display system and method thereof
JP2006236218A (en) Electronic album display system, electronic album display method, and electronic album display program
KR101440168B1 (en) Method for creating a new summary of an audiovisual document that already includes a summary and reports and a receiver that can implement said method
US9131207B2 (en) Video recording apparatus, information processing system, information processing method, and recording medium
CN105814905B (en) Method and system for synchronizing use information between the device and server
CN105556947A (en) Method and apparatus for color detection to generate text color
CN110418148B (en) Video generation method, video generation device and readable storage medium
JP2006079460A (en) System, method and program for displaying electronic album and device, method, and program for classifying image
CN113992973A (en) Video abstract generation method and device, electronic equipment and storage medium
EP3185137A1 (en) Method, apparatus and arrangement for summarizing and browsing video content
JP5146282B2 (en) Information processing apparatus, display control method, and program
JP2012178028A (en) Album creation device, control method thereof, and program
US20140189769A1 (en) Information management device, server, and control method
JP2008090526A (en) Conference information storage device, system, conference information display device, and program
WO2018035829A1 (en) Advertisement playback device
CN114245174B (en) Video preview method and related equipment
CN105812699A (en) Method for generating dynamic pictures and electronic device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20140604