JP4045768B2 - Video processing device - Google Patents


Info

Publication number
JP4045768B2
Authority
JP
Japan
Prior art keywords
data
video data
video
partial
link
Prior art date
Legal status
Expired - Fee Related
Application number
JP2001308282A
Other languages
Japanese (ja)
Other versions
JP2003116095A (en)
JP2003116095A5 (en)
Inventor
宏樹 吉村
和貴 平田
Original Assignee
富士ゼロックス株式会社
Priority date
Filing date
Publication date
Application filed by 富士ゼロックス株式会社
Priority to JP2001308282A
Publication of JP2003116095A
Publication of JP2003116095A5
Application granted
Publication of JP4045768B2

Description

[0001]
BACKGROUND OF THE INVENTION
The present invention relates to an apparatus and method for presenting links to video data in a video processing apparatus, and more particularly to a video processing apparatus and video processing method that secure a work area, extract partial video data from video data, and associate link data — whose contents may be text data, audio data, image data, related document file data, video data, and the like — with the partial video data, so that link data can be associated easily and appropriately by a single user or among multiple users.
[0002]
[Prior art]
In recent years, communication tools and conference systems for sharing information using multimedia data over the Internet have spread among individuals and companies. Among them, systems have been proposed for adding text annotations to digital documents or video images, much like writing with a marker or attaching a memo to conventional printed matter. Japanese Laid-Open Patent Publication No. Hei 8-272789, "Material Creation Support System Based on Video Specifications," discloses that text information and video information can be associated with each other and handled as materials. Hereinafter, this technique will be referred to as the first conventional technique.
[0003]
Next, Japanese Patent Laid-Open No. 2000-250864, "Collaborative Work Support System," discloses a technology that allows annotations in various formats: text data such as memos and questions can be added to streaming data such as presentation materials and shared among multiple clients. Hereinafter, this technique will be referred to as the second conventional technique.
[0004]
In Japanese Patent Application Laid-Open No. Hei 6-274552, "Multimedia Data Link System," data associated with an arbitrary area in a moving image displayed on a screen, or with an arbitrary screen in a series of moving image data, can be displayed by designating that area or screen. Hereinafter, this technique will be referred to as the third conventional technique.
[0005]
Further, Y. Yamamoto proposed "Time-ART" at CHI2001, a tool with a user interface that allows the user to freely clip video and audio data while viewing it, and that has a text annotation function. Hereinafter, this technique will be referred to as the fourth conventional technique.
On the other hand, Japanese Patent Laid-Open No. 10-21029, "Telop Display Device," discloses a display device that allows a user to easily create a telop (caption) and easily add audio information and image information as additional information. Hereinafter, this technique will be referred to as the fifth conventional technique.
[0006]
Conventionally, when a home page is browsed with a web browser for the World Wide Web, link information may be embedded in the page as a so-called image map. Users can access the linked information by moving the mouse over an area composing the image map presented by the web browser and clicking. Hereinafter, this technique will be referred to as the sixth conventional technique.
[0007]
Japanese Patent Laid-Open No. 8-329096, "Image Data Retrieval Device," discloses an image data search apparatus that has means for setting, as additional information in image data, an icon briefly representing a feature of the image; the icon is arranged at a predetermined position on a map having one or more axes, and image data related to the icon is searched using the icon. Hereinafter, this technique will be referred to as the seventh conventional technique.
[0008]
Further, Japanese Patent Laid-Open No. 8-329097, "Image Data Retrieval Device," discloses an image data search apparatus that has means for setting a keyword for an image as additional information in the image data and that retrieves the image data using the keyword. Hereinafter, this technique will be referred to as the eighth conventional technique.
[0009]
Japanese Patent Laid-Open No. 8-329098, "Image Data Retrieval Device," discloses an image data search apparatus that can search image data by associating image data on a first map having one or more axes with additional information on a second map having one or more axes. Hereinafter, this technique will be referred to as the ninth conventional technique.
[0010]
Japanese Patent Laid-Open No. 11-39120, "Content Display/Selection Device and Content Display/Selection Method, and Recording Medium on which a Content Display/Selection Method Program is Recorded," discloses a technology in which HTML document contents are arranged in a two-dimensional array, enabling browsing (a list of contents) without a mouse pointer. Hereinafter, this technique will be referred to as the tenth conventional technique.
[0011]
[Problems to be solved by the invention]
However, the conventional techniques have various problems as described below.
First, the first to fifth conventional systems described above share a common problem: during playback of video data, the user cannot extract partial video data to another screen and add link data while referring to the content of the video data, including its audio data.
[0012]
Further, link data added to partial video data cannot be attached to an arbitrary location on the partial video data, so it is not clear where it was added. For example, when multiple objects such as people and documents appear in the video data, the prior art cannot determine which object a link data comment points to when link data is added to the partial video data.
Furthermore, the additional information of related link data cannot be superimposed on an arbitrarily designated portion of the partial video data.
[0013]
Next, in the sixth prior art, when HTML document content including an image map is presented, the user cannot know of the image map's existence without moving the mouse over the corresponding area of the HTML document content in the browser.
[0014]
Next, the seventh, eighth, and ninth prior arts can associate an icon, text data, or other additional information with image data, but they provide no visual feedback about the link to the user. When a plurality of links are added to the same image data, it is therefore impossible to give the user visual feedback, to distinguish each link, and to use the linked information.
[0015]
Similarly, even with the tenth prior art, the presence of an image map associated with a specific area, such as a person or object represented in HTML document content — in particular in image data or video data — cannot be presented to the user.
Also, none of the sixth to tenth prior arts can be used in cooperation between a specific area, such as a person or an object represented in the video data, and a so-called electronic bulletin board system or a telephony/communication system such as a telephone.
[0016]
The present invention has been made to solve such conventional problems, and an object thereof is to provide a video processing apparatus and the like that can effectively present the existence of data associated with partial video data specified from video data.
[0017]
[Means for Solving the Problems]
In order to achieve the above object, in the video processing apparatus according to the present invention, partial video data specifying means specifies, from video data, partial video data that is a part of the video data, and data association means associates data with the specified partial video data so that the presence of the data can be presented.
Therefore, partial video data can be identified from the video data, and data can be associated with the partial video data in a manner that allows its presence to be presented; as a result, the existence of data associated with the partial video data can be presented.
[0018]
Here, the video processing apparatus may be configured as various apparatuses, for example, using a computer.
Further, as the video data, temporally continuous video data is used, for example — specifically, data in which planar image data in a frame changes continuously over time. In this case, one point in the video data can be indicated by coordinate values (horizontal and vertical axes) representing a position in the frame and a value on the time axis.
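As an illustrative sketch (not taken from the publication — the type and field names are assumptions), a point in such video data can be modeled as a coordinate triple of frame position and time:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VideoPoint:
    """A single point in temporally continuous video data:
    a position (x, y) within a frame plus a time-axis value t."""
    x: int  # horizontal position within the frame
    y: int  # vertical position within the frame
    t: int  # time axis, e.g. a frame number

# The same frame position observed at two different times is
# two distinct points in the video data.
p1 = VideoPoint(x=15, y=20, t=120)
p2 = VideoPoint(x=15, y=20, t=150)
print(p1 == p2)  # prints False: the time axis distinguishes them
```

This mirrors the (horizontal axis, vertical axis, time axis) description above; an actual implementation might of course use floating-point timestamps instead of frame numbers.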
[0019]
Various data may be used as the partial video data: for example, image data of one frame, data of a specific target in the image data of one frame, image data of frames having a time width — that is, image data of a plurality of temporally continuous frames — or data of a specific target having a time width.
[0020]
Various methods may be used to specify the partial video data: for example, specification based on a designation from the user, automatic specification by the video processing device in accordance with a predetermined procedure, or a combination of both.
[0021]
Various data may be associated with the partial video data; text data, audio data, image data, and the like can be used.
Further, the number of data items associated with the partial video data may be one or more.
[0022]
In the video processing apparatus according to the present invention, the partial video data specifying unit specifies partial video data having a time width for the same target data included in the video data.
Therefore, data can be associated with the same target data, having a time width, included in the video data.
[0023]
Here, various target data may be used as the same target data: for example, data targeting a person, data targeting an object, or data targeting a predetermined area in a frame. Various methods may also be used to identify the same object: for a stationary object, an object existing in the same place across frames can be regarded as the same object; for a moving object, objects having the same characteristics, such as the same shape, can be regarded as the same object.
Various time widths can be used as the time width.
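One hedged way to sketch the "same place across frames" heuristic for stationary objects (an illustration only — the publication does not prescribe a specific algorithm) is to compare the bounding rectangles of detections in consecutive frames:

```python
def overlaps(rect_a, rect_b):
    """Rectangles given as (x_min, y_min, x_max, y_max).
    Two detections in consecutive frames are treated as the same
    stationary object if their rectangles overlap."""
    ax0, ay0, ax1, ay1 = rect_a
    bx0, by0, bx1, by1 = rect_b
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

# A detection in frame n and a nearby detection in frame n+1:
print(overlaps((10, 10, 20, 30), (11, 12, 21, 31)))  # True: same object
print(overlaps((10, 10, 20, 30), (40, 40, 50, 50)))  # False: different
```

For moving objects, as the text notes, a shape- or feature-based comparison would be needed instead of this purely positional test.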
[0024]
In the video processing apparatus according to the present invention, the video data corresponds to audio data. The partial video data specifying means then specifies, for data of one or more persons included in the video data, partial video data having a time width in which the audio data corresponding to that person data is valid.
Therefore, for one or more persons, data having a time width in which the sound corresponding to the target is valid can be specified as partial video data.
[0025]
Here, as the audio data, for example, audio emitted by a person or the like in the corresponding video data is used; it corresponds to the video data on the time axis.
For data of a single person, for example, the time width during which voice considered to be emitted by that person continues — ignoring silent periods shorter than a predetermined threshold — can be determined as the time width in which the audio data corresponding to that person's data is valid.
[0026]
Similarly, for data of a plurality of persons, the time width during which a state in which at least one of the persons is considered to be emitting sound continues — again ignoring silent periods shorter than a predetermined threshold — can be determined as the time width in which the audio data corresponding to those persons' data is valid.
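The "ignore silent periods shorter than a threshold" idea can be sketched as merging speech intervals whose gaps fall below the threshold (the function name is hypothetical; the publication does not prescribe an algorithm):

```python
def valid_time_width(speech_intervals, silence_threshold):
    """speech_intervals: sorted list of (start, end) times at which a
    person is considered to be emitting sound. Gaps shorter than
    silence_threshold are ignored, i.e. adjacent intervals are merged.
    Returns the merged intervals; each is a candidate time width in
    which the corresponding audio data is valid."""
    merged = []
    for start, end in speech_intervals:
        if merged and start - merged[-1][1] < silence_threshold:
            # Silent gap below the threshold: extend the current width.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

# Two utterances separated by a 1-second pause (below a 2-second
# threshold) count as one valid time width; a 5-second pause does not.
print(valid_time_width([(0, 4), (5, 8), (13, 15)], silence_threshold=2))
# → [(0, 8), (13, 15)]
```

For the multi-person case described above, the input intervals would simply be the union of the per-person speech intervals before merging.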
[0027]
In the video processing apparatus according to the present invention, the partial video data specifying means specifies the partial video data using data that identifies the region where the partial video data is located in each frame of the video data.
Therefore, for example, by using coordinate position data in a frame, the image area in each frame constituting the partial video data can be identified, and thereby the partial video data itself can be specified.
[0028]
In the video processing apparatus according to the present invention, the partial video data specifying unit specifies a plurality of partial video data candidates by partial video data candidate specifying means, accepts from the user, by partial video data designation receiving means, the designation of partial video data included among the specified candidates, and sets the partial video data whose designation was accepted as the specified partial video data.
Therefore, after the video processing device automatically specifies a plurality of partial video data candidates, the user selects the partial video data from among the candidates, and the selected data becomes the finally specified partial video data.
[0029]
Here, various numbers of partial video data candidates may be used; the number may even be one.
Various methods may be used to specify partial video data candidates: for example, data for each target existing in a frame of the video data can be specified as a candidate.
As the partial video data designation receiving means, for example, a keyboard or a mouse operated by the user can be used.
[0030]
In the video processing apparatus according to the present invention, related partial video data specifying means specifies the partial video data from the data associated with it.
Therefore, for example, when the user designates data associated with partial video data, the partial video data associated with that data can be specified.
[0031]
In the video processing apparatus according to the present invention, the related data presenting means presents data indicating the presence of data associated with the partial video data in visual association with the partial video data in the video data.
Therefore, the presence of data associated with the partial video data can be presented in visual association with it, allowing the user to visually grasp both the existence of the associated data and the association itself.
[0032]
Here, as the data indicating the existence of data associated with the partial video data, icon data can be used, for example; various other data can be used, as described later.
Various methods may be used to visually associate this indicator data with the partial video data: for example, arranging the two in the vicinity of each other, or arranging them so that they partially overlap.
As a presentation method, for example, display output on a screen or print output on paper can be used.
[0033]
In the video processing apparatus according to the present invention, the related data presenting means presents data having a shape based on the shape of the partial video data as data indicating the presence of data associated with the partial video data.
Therefore, by presenting data having a shape based on the shape of the partial video data, it is possible to make it easier for the user to visually grasp the association between the data and the partial video data.
[0034]
Here, various data may be used as the data having a shape based on the shape of the partial video data. For example, shadow data having a shape based on the shape of the partial video data may be used.
[0035]
In the video processing apparatus according to the present invention, the related data presenting means presents, as data indicating the presence of data associated with the partial video data, data indicating the horizontal position and data indicating the vertical position of the partial video data within the frame, inside a border provided outside the frame of the video data.
Accordingly, the data indicating the presence of associated data is presented not within the frame of the video data but in a border outside it, so the image in the frame remains easy to view as it is. In addition, the presented data can indicate the horizontal and vertical position of the partial video data within the frame.
[0036]
Various borders may be used as the border provided outside the frame of the video data: for example, a border slightly larger than the frame, inside which no video data is presented.
The partial video data exists at the position where the vertical line through the indicated horizontal position and the horizontal line through the indicated vertical position intersect.
[0037]
In the video processing apparatus according to the present invention, the data indicating the presence of data associated with the partial video data is associated with a predetermined process. Presentation data designation accepting means accepts from the user the designation of the presented data (the data indicating the existence of associated data), and presentation data corresponding process execution means executes the process associated with the designated data.
Therefore, the user can execute the process associated with the data by designating the presented data.
[0038]
Here, various processes may be used as the predetermined process: for example, a process for starting a program related to documents, e-mail, the Internet, or the like, or a process for displaying or transmitting data related to the presented data. More specifically, for example, a process for displaying data related to the presented data on the screen, for transmitting that data by e-mail to a set address, or for transmitting that data by voice to a set telephone number can be used.
As the presentation data designation receiving means, for example, a keyboard or a mouse operated by the user can be used.
[0039]
Further, in the video processing apparatus according to the present invention, operations related to the same video data can be executed by a plurality of terminal devices.
Accordingly, operations on the same video data can be performed not only by one terminal device (for example, one user) but also by a plurality of terminal devices (for example, a plurality of users), so that partial video data related to the same video data, the data associated with it, and the like can be shared and edited together.
[0040]
Here, various devices may be used as the terminal device; for example, a computer can be used.
Various numbers of terminal devices may be used.
Various operations relating to the same video data may be used: for example, an operation for specifying partial video data from the video data, or an operation for associating data with the specified partial video data.
[0041]
As one configuration example, a plurality of terminal devices are communicably connected via a wired or wireless network, and a common storage device accessible by the plurality of terminal devices is provided; the data to be operated on is saved in that storage device.
[0042]
Further, in the video processing device according to the present invention, plural related data presenting means presents data indicating the existence of a plurality of data items associated with the partial video data — a part of the video data specified from the video data — in visual association with the partial video data in the video data.
Accordingly, the presence of a plurality of data items associated with the partial video data can be presented in visual association with it, so the user can visually grasp the existence of the plural associated data items and their associations.
[0043]
Here, various numbers of data items may be associated with the partial video data.
As the data indicating the presence of a plurality of associated data items, data different from that used when a single item is associated can be used: for example, data representing the number of associated data items.
[0044]
In the video processing apparatus according to the present invention, the plural related data presenting means presents, as data indicating the presence of the plurality of data items associated with the partial video data, the same number of indicators as there are associated data items.
Therefore, the number of data items associated with the partial video data can be presented so that the user can grasp it visually.
[0045]
Here, as the same number of indicators as the associated data items, data having the same or similar shape can be used in a preferred embodiment, or data having mutually different shapes may be used.
[0046]
Further, in the video processing apparatus according to the present invention, the plurality of related data presenting means presents data indicating the presence of each data associated with the partial video data in an identifiable manner for each associated data.
Therefore, for each piece of data associated with the partial video data, the data indicating the presence can be visually identified by the user.
[0047]
Here, as a manner of making the indicator data identifiable for each associated data item, for example, the shape, color, brightness, or arrangement position of the indicator data can be made different for each item.
[0048]
As described above, the technique of the present invention — presenting data indicating the presence of a plurality of data items associated with the same image data in visual association with that image data, presenting the same number of indicators as data items, and making the indicators identifiable per item — is not necessarily limited to partial video data specified from video data, and can also be applied to various other kinds of image data.
[0049]
In addition, the present invention provides video processing methods for realizing the various processes described above.
For example, in the video processing method according to the present invention, partial video data that is a part of the video data is specified from the video data, and data is associated with the specified partial video data so that the presence of the data can be presented.
In another video processing method according to the present invention, data indicating the presence of a plurality of data items associated with partial video data — a part of the video data specified from the video data — is presented in visual association with the partial video data in the video data.
[0050]
Further, the present invention provides programs that realize the various processes described above, and a storage medium storing such a program can also be provided.
For example, a program according to the present invention causes a computer to execute a process of specifying, from video data, partial video data that is a part of the video data, and a process of associating data with the specified partial video data so that the presence of the data can be presented.
Another program according to the present invention causes a computer to execute a process of presenting data indicating the presence of a plurality of data items associated with partial video data — a part of the video data specified from the video data — in visual association with the partial video data in the video data.
[0051]
DETAILED DESCRIPTION OF THE INVENTION
Embodiments according to the present invention will be described with reference to the drawings.
First, a video processing apparatus and a video processing method according to the first embodiment of the present invention will be described.
FIG. 1 is a block diagram showing an example of a video processing apparatus according to the present invention. The video processing device 1 includes a storage unit 11, a link target area designation unit 12, a link generation unit 13, a video presentation unit 14, and a link management unit 15.
[0052]
The storage unit 11 is composed of a general storage device and holds video data serving as one link target (hereinafter also simply referred to as video data), link data (association data), and the linked data serving as the other link target.
The link target area designating unit 12 is composed of a coordinate input device such as a mouse or a digitizer; the coordinate data of an area to be linked in the video data (hereinafter referred to as link target area coordinate data), as designated by the user, is output to the link generation unit 13.
[0053]
The link generation unit 13 receives the identifier or name of the linked data from the user through a dialog-type user interface. It then links the link target area coordinate data input from the link target area designating unit 12 with the linked data specified by the user, and outputs the result to the storage unit 11 as link data.
The video presentation unit 14 includes a display and presents the visualized link data and video data to the user.
The link management unit 15 manages and controls the storage unit 11, the link target area designating unit 12, the link generation unit 13, and the video presentation unit 14.
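A minimal sketch of what the link data produced by the link generation unit 13 might contain — the field names and the example linked-data name are assumptions for illustration, not taken from the publication:

```python
from dataclasses import dataclass

@dataclass
class LinkData:
    """Link data combining the link target area in the video with the
    identifier of the linked data, as assembled by a link generation
    step like that of unit 13."""
    video_name: str      # the video data, e.g. "Video.mpg"
    frame_range: tuple   # (first_frame, last_frame) of the time width
    area: tuple          # (x_min, y_min, x_max, y_max) within the frame
    linked_data_id: str  # identifier/name entered by the user

link = LinkData(
    video_name="Video.mpg",
    frame_range=(120, 150),
    area=(10, 10, 20, 30),
    linked_data_id="meeting-notes.txt",  # hypothetical linked data
)
print(link.linked_data_id)
```

A storage unit like 11 would then hold such records alongside the video data and the linked data they point to.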
[0054]
In this example, video data means data combining moving image data and audio data, or either one of the two. Partial video data means a temporally or spatially (regionally) partial piece of the video data.
Note that video data as referred to in the present invention includes, for example, image-only data, and also covers the case where data such as audio is associated with the image data.
[0055]
FIG. 2 is a detailed block diagram of the video processing apparatus of FIG.
As shown in FIG. 2, the storage unit 11 includes a video storage device 21 and a link data storage device 26. The link target area specifying unit 12 includes an (arbitrary) partial video data specifying device 23 and a partial video data presentation device 24. The link generation unit 13 includes a link / data addition device 25. The video presentation unit 14 includes a video data presentation device 22, a partial video data presentation device 24, and a link / data presentation device 27.
[0056]
The video storage device 21 is configured by a general memory and holds input video data.
The video data presentation device 22 includes a display and presents video data held in the video storage device 21 to the user.
[0057]
The partial video data designating device 23 is constituted by a coordinate input device such as a mouse; it designates an arbitrary part of the video data presented by the video data presentation device 22 and transfers the designated partial video data to the partial video data presentation device 24.
The partial video data presentation device 24 presents the partial video data transferred from the partial video data designating device 23.
[0058]
The link data adding device 25 adds link data to the partial video data presented by the partial video data presentation device 24 and transfers it to the link data storage device 26.
The link data storage device 26 holds the link data added by the link data adding device 25 together with the partial video data.
The link data presentation device 27 presents the link data added by the link data adding device 25 and the group of existing link data.
[0059]
Here, extraction of arbitrary partial video data from video data will be described.
Partial video data can be extracted from video data in several forms: the user may manually designate the outline (contour) or circumscribed rectangle of the partial video data on the image through the user interface provided by the video processing device 1, or the user may select one of the partial video data candidates automatically extracted by the video processing device 1.
[0060]
Here, the method of extracting partial video data when the video processing device 1 automatically extracts partial video data candidates will be described.
Assume that the video data from which partial video data is to be extracted is as shown in FIG. 3. That is, in a certain section of the video data (Video.mpg) 31 — in this example, the 31 frames with frame numbers 120 to 150 — a person to be extracted as a partial video data candidate is recorded in a rectangular area on each frame (in xy orthogonal coordinates, {(10, 30), (10, 10), (20, 10), (20, 30)}). The figure shows an x-coordinate axis representing the horizontal direction, a y-coordinate axis representing the vertical direction, and a time axis t representing the flow of time.
[0061]
As shown in FIG. 4, this partial video data extraction procedure consists of contour extraction processing in each frame (step S1), circumscribed rectangle calculation processing in each frame (step S2), inter-frame difference calculation processing (step S3), partial video data detection processing (step S4), and partial video data candidate presentation processing (step S5).
[0062]
Specifically, first, in the contour extraction processing in each frame, the video processing apparatus 1 performs contour extraction in each frame of the video data 31 in order to specify the rectangular area of the partial video data (step S1). A contour can be extracted by detecting the edges of the person's image with a so-called differential filter used in ordinary image processing and connecting those edges. Further, even when a person is divided into a plurality of small regions by the contour extraction processing, a region (contour) in units of persons can be extracted by conventional region division and integration processing.
[0063]
Next, after the contour of the person unit is extracted, the circumscribed rectangle 33 including the contour is calculated in the circumscribed rectangle calculation processing in each frame (step S2). Here, by this circumscribed rectangle calculation processing, the circumscribed rectangle 33 of {(10,30), (10,10), (20,10), (20,30)} can be calculated in the 31 frames from frame numbers 120 to 150.
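The contour extraction and circumscribed rectangle calculation of steps S1 and S2 can be sketched as follows (a minimal Python illustration; the simple differential filter and the threshold value are assumptions for the sketch, not part of the embodiment):

```python
import numpy as np

def circumscribed_rectangle(frame, threshold=30):
    """Steps S1-S2 sketch: edge extraction with a differential filter,
    then the circumscribed (bounding) rectangle of the detected edges."""
    gray = frame.astype(np.float32)
    # Horizontal and vertical differences act as a crude differential filter.
    gx = np.abs(np.diff(gray, axis=1, prepend=gray[:, :1]))
    gy = np.abs(np.diff(gray, axis=0, prepend=gray[:1, :]))
    edges = (gx + gy) > threshold
    ys, xs = np.nonzero(edges)
    if len(xs) == 0:
        return None  # no contour found in this frame
    # Circumscribed rectangle as (x_min, y_min, x_max, y_max).
    return (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
```

A real implementation would also connect edges and merge split regions into person units, as described above.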
[0064]
Subsequently, in the inter-frame difference calculation processing and the partial video data detection processing, the frames are compared with one another to check whether the person appearing in them can be handled as a single object (partial video data) (steps S3 and S4). That is, by calculating the inter-frame difference between consecutive frames, as used in MPEG2 and the like, it is determined whether one recorded frame and the next frame are the same.
[0065]
Specifically, in the inter-frame difference calculation processing (step S3), in the frame difference between the frame with frame number 119 and the frame with frame number 120, no person is recorded in frame 119 while a person is recorded in frame 120, so the result of the frame difference (for example, the sum of the differences of each pixel) has a large value. Similarly, the frame difference between the frame with frame number 150 and the frame with frame number 151 also has a large value. On the other hand, in the frames with frame numbers 120 to 150, since the person is recorded in the same rectangular area 33, the frame differences between those frames have small values.
[0066]
In the partial video data detection processing (step S4), based on the above frame difference values and on whether the rectangular area 33 exists, it can be seen that a candidate for partial video data is recorded in frames 120 to 150.
Therefore, in the partial video data candidate presentation processing (step S5), the portions of the rectangular area 33 in these frames are presented to the user of the video processing apparatus 1 as a single piece of partial video data 32.
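The inter-frame difference test of steps S3 and S4 can be sketched as follows (Python; the sum-of-absolute-differences measure and the threshold are illustrative assumptions):

```python
import numpy as np

def detect_boundaries(frames, diff_threshold):
    """Return indices i where frame i differs strongly from frame i-1;
    between two such boundaries the same object (a partial video data
    candidate) is assumed to be recorded."""
    boundaries = []
    for i in range(1, len(frames)):
        # Sum of absolute per-pixel differences between consecutive frames.
        diff = np.abs(frames[i].astype(np.int32) - frames[i - 1].astype(np.int32)).sum()
        if diff > diff_threshold:
            boundaries.append(i)
    return boundaries
```

In the example above, the appearance of the person at frame 120 and the disappearance after frame 150 would yield boundaries at 120 and 151, bracketing the candidate range.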
[0067]
Next, as shown in FIG. 5, the processing procedure of the video processing apparatus 1 according to this example will be described.
This processing procedure consists of video presentation (step S11), partial video designation (step S12), partial video presentation (step S13), link data addition (step S14), and link data storage (step S15).
[0068]
First, in video presentation, the video data presentation device 22 presents video data held in the video storage device 21 of the video processing device 1 (step S11).
Next, in the designation of the partial video, so-called time code or frame number of the video data designated by the user and coordinate data are acquired using the partial video data designation device 23 (step S12).
[0069]
Subsequently, in the presentation of the partial video, the partial video data presentation device 24 presents the partial video data designated by the user (step S13).
In the link data addition, the user adds related data (link data) to the partial video data presented by the partial video data presentation device 24, using the link data adding device 25 (step S14).
Finally, in the link data storage, the link data storage device 26 holds the link data added by the user, together with the so-called time code or frame number of the video data and the coordinate data (step S15).
[0070]
FIG. 6 shows the data structure of data stored in the link data storage device 26. FIG. 7 also shows an expanded data structure of data stored in the link data storage device 26.
The link data storage device 26 stores the time code 41 of the partial video data, the linked data 43 input by the link data adding device 25, the arbitrary coordinate data 42 designated on the partial video data presentation device 24, a storage device name 44, and partial image icon data 45. In the expanded data structure, user data 46 is further stored in order to perform collaborative work.
[0071]
For example, when link data is added to the still image portion of a certain frame, the time code of that point in the video data on the partial video data presentation device 24 is recorded in the time code 41. In the case of a "from here to here" designation, information on the start point and end point to which the link data is added is recorded in the time code 41.
[0072]
The coordinate data 42 holds the two-dimensional coordinates (x1, y1), (x1, y2), (x2, y2), (x2, y1) of the link target area, designated on the partial video data presentation device 24 with an input device such as a mouse, together with the positions of the text data and partial image icons plotted there.
The linked data 43 holds comments, electronic data file information, text data, and file storage location information.
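A minimal sketch of this stored record (Python; the field names are illustrative, chosen to mirror reference numerals 41 to 46, and are not the patent's own identifiers):

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class LinkRecord:
    """Sketch of the record held by the link data storage device 26."""
    time_code: str                        # 41: a point, or start/end of a range
    coordinates: List[Tuple[int, int]]    # 42: (x1,y1),(x1,y2),(x2,y2),(x2,y1)
    linked_data: str                      # 43: comment, text data, or file location
    storage_device_name: str = ""         # 44: public/private server destination
    icon_data: Optional[bytes] = None     # 45: partial image icon data
    user: str = ""                        # 46: expanded structure, for collaboration

rec = LinkRecord("00:01:00.00",
                 [(10, 30), (10, 10), (20, 10), (20, 30)],
                 "Question")
```

The three mandatory fields correspond to the "held as one data" grouping of time code, coordinates, and linked data described below.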
[0073]
Next, an operation procedure will be described using the user interface example according to this example of FIG.
The user designates a partial video to which link data is to be added from the video presented on the video data presentation screen 51, whereby the designated partial video is displayed on the partial video data presentation screen 52.
[0074]
At any place designated by the user on the partial video data presentation screen 52, comments with a plurality of pieces of text data or electronic data files can be added to the presented image as partial image icons 61a to 61c from the link data addition screen 53. In this case, as shown in FIGs. 6 and 7, the designated time at which the link data is added is stored as the time code 41, the arbitrary position on the partial video data presentation screen 52 at which the user added the link data is stored in the coordinate data 42, and the comment or electronic data file based on the added text data is held as the linked data 43; these three are held as one piece of data.
[0075]
FIG. 9 shows a data structure after the link data is added to the partial video data.
FIG. 9 shows the relationship among three pieces of link data: the comments (text data) "Question" and "This comment is a point", and the video data named "abc.mpg" at the time code (00:01:00.00). As described above, since the data is held for each designated arbitrary piece of partial video data, link data that has been added can later be erased from the partial video data.
[0076]
In addition, a storage destination can be designated as the storage device name 44. This is used when video data with link data added is stored in a public server or a private server, and when a plurality of users cooperatively add link data to the same video data.
[0077]
Furthermore, as shown in FIG. 8, when link data is added as the same related data to an arbitrary area designated on the partial video data presentation screen 52, link data such as text data comments and related electronic data can be superimposed. Here, not only do icons and comments overlap at the coordinate position, but related link data can also be registered as a group. The link data marked with "*" in FIG. 9 is held as grouped information.
[0078]
As shown in FIG. 10, when link data is added to a person object 71 or a place object 72 on the partial video data presentation screen 52, it is possible to send a message at the time the link data is added to the objects 71 and 72.
For example, when adding link data to the person 71, the person message sending link data 73 is used. With this, in order to ask a certain participant about video data of a meeting that has been held, the user adds a comment and an e-mail address to the participant displayed on the partial video data presentation screen, making it possible to send a message to that participant. Further, when sending the message, not only the comment but also the link data the user has added can be sent. As a result, it is possible to grasp simply and appropriately what kind of question it is, the designated time, and the situation at that point.
[0079]
When link data is added to a place object, the place/space message sending link data 74 is used. The following usage is assumed: if some designated partial video data holds an important person's comment and the user wants to use that information in a future meeting, the data is sent to the meeting place using a message service such as e-mail. When actually used, it is disclosed using the terminal at that location or the user's terminal.
[0080]
On the linked data presentation screen 54, a plurality of partial video data presentation screens 62a to 62e with link data added by the user to the video data are presented. Further, as the link data group presented on the linked data presentation screen 54, not only link data extracted from the video data but also link data other than the video data can be designated.
[0081]
Next, a procedure for adding video data and link data between a plurality of users via a network will be described.
FIG. 11 shows an example of a device and a user interface that are mainly used when used among a plurality of users.
User A and user B take out video data from the video storage device 21 and designate arbitrary partial video data to which link data is added.
[0082]
In FIG. 11, for the same partial video data, user A uses the link data addition input dialog (link data addition screen) 53 to add one piece of link data, "this person is Mr. X" (text data), and user B uses the link data addition input dialog 53 to add two pieces of link data, "related video of this conversation" (text data) and "xyz.mpg" (video data). These data are held in the link data storage device 26. The data structure is shown in FIG. 12, in which the time codes and coordinate data of user A and user B are held respectively. FIG. 13 shows an image diagram in which the partial image icons 81a and 83a to 83c representing the link data added by user A and user B are presented simultaneously. The linked data presentation screens of users A and B show partial image data 82a to 82c and 84a to 84d with their respective link data.
[0083]
Also, user A can designate partial video data to which link data is added in advance, and later tell user B the location of the partial video data by e-mail or the like, enabling asynchronous collaborative work. Furthermore, partial video data and link data created in advance can be re-edited by a single user or by a plurality of users accessing the link data storage device 26.
[0084]
Further, in order to allow link data to be added, held, and presented by a single user or among a plurality of users, a configuration as shown in FIG. 14 can be used. In this configuration, the terminal device of user A, the terminal device of user B, the link data storage device 98 shared among the users, and the video storage device 97 shared among the users are connected through a network. The devices of users A and B respectively include video data presentation devices 91a and 91b, (optional) partial video data designating devices 92a and 92b, partial video data presentation devices 93a and 93b, link data adding devices 94a and 94b, link data presentation devices 95a and 95b, and link data storage devices 96a and 96b.
[0085]
With reference to FIG. 15 and FIG. 16, a description will be given of a case where video data synthesized by reusing video data 1 and video data 2 created in advance composed of linked data and video data is created.
FIG. 15 shows a state in which the link target data 102 is linked to the video data 101.
[0086]
FIG. 16 shows an example in which video data is reused and edited using the video processing apparatus 1 of this example.
Before a certain meeting is held, the meeting organizer and others, in order to understand the process up to now and to share it among the participants, access the video data 1 and video data 2 created in advance in relation to this meeting. While browsing the meeting minutes and materials that are the individual linked data 114a, 115a, 115b, 116a to 116c, 124a, 125a to 125c, and 126a, they can take out the most relevant video data from among the plural video data 111 to 113 and 121 to 123 and edit it, for example by rearranging it, to produce synthesized video data.
[0087]
Next, a process of automatically extracting a video frame that is a target of link data from video data will be described.
As described above, in addition to letting the user designate partial video data, the link data adding device 25 analyzes the video objects and audio data in the video data following the designated arbitrary partial video data, extracts partial audio data estimated to be utterances of the same person and partial audio data estimated to belong to the same dialogue between a plurality of persons on the frames corresponding to the partial video data for which link data is designated, and adds the video data (partial video data) and link data corresponding to the extracted partial audio data.
[0088]
For example, FIG. 17 is an example of extracting the start point and the end point of speech estimation of the same person.
In this case, among a plurality of frames F1 to F7 that are continuous along the axis of time t, the point at which the audio data of the frame to which linked data is to be added (for example, frame F1 or frame F4) continues into the next frame, and the point at which the audio data is interrupted (for example, at frame F2 or frame F7), are presented as the utterance estimation points T1 and T2, and the video frames corresponding to the start point and end point of the audio data are added to the linked data.
[0089]
FIG. 18 is an example of extracting the start point and end point of an estimated dialogue between a plurality of persons. In this case as well, the dialogue estimation points T11 to T14 are extracted from a plurality of frames F11 to F17 continuous along the axis of time t, as in the case of FIG. 17. In this example, however, if the interval between the utterances T21 and T22 occurring during the conversation is Δt, as shown in FIG. 19, they are estimated to belong to the same dialogue part when Δt is shorter than a certain interval.
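The utterance and dialogue estimation of FIGs. 17 to 19 can be sketched as follows (Python; representing the audio as a per-frame voiced/silent flag and expressing the Δt test as a frame-count threshold are simplifying assumptions):

```python
def estimate_dialogue_segments(voiced, max_gap):
    """Runs of voiced frames are utterances (start/end estimation points);
    utterances separated by a silent gap shorter than max_gap frames
    (the Δt test of FIG. 19) are merged into one estimated dialogue part."""
    segments = []
    start = None
    for i, v in enumerate(voiced):
        if v and start is None:
            start = i                        # utterance start point
        elif not v and start is not None:
            segments.append((start, i - 1))  # utterance end point
            start = None
    if start is not None:
        segments.append((start, len(voiced) - 1))
    merged = []
    for seg in segments:
        # Merge when the silent gap Δt is shorter than max_gap.
        if merged and seg[0] - merged[-1][1] - 1 < max_gap:
            merged[-1] = (merged[-1][0], seg[1])
        else:
            merged.append(seg)
    return merged
```

The returned (start, end) pairs correspond to the video frames that would be attached to the linked data.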
[0090]
Next, a video processing apparatus and a video processing method according to the second embodiment of the present invention will be described.
The schematic configuration and operation of the video processing apparatus 1 of this example are the same as those shown in the first embodiment, for example, and in this example, different parts will be described in detail.
[0091]
FIG. 20 is a block diagram showing an example of the data structure of the link data according to this example.
The link data of this example consists of an identifier 131, a video data file name 132, a frame start number 133, a frame end number 134, link target area coordinates 135, a linked target data name (for example, a URL) 136, and visual feedback data 137.
[0092]
The identifier 131 is data for distinguishing the link data itself, and is assigned by the link management unit 15 for each link data.
The video data file name 132 identifies video data to be linked.
The frame start number 133 is a start number of a frame to be linked with the video data.
The frame end number 134 is the end number of the frame to be linked with the video data.
[0093]
The link target area coordinates 135 are coordinate data to be linked in the video data designated by the user.
The linked target data name 136 is a name of data linked to the video data.
The visual feedback data 137 is data used for visually giving feedback to the user that there is a link to the video data.
[0094]
Here, the identifier 131 is set by the link management unit 15.
The video data name 132, the frame start number 133, the frame end number 134, and the link target area coordinates 135 are input from the user by the link target area specifying unit 12.
The link target data name 136 is input from the user by the link generation unit 13 using a dialog-type user interface.
The visual feedback data 137 is generated by the link generation unit 13.
[0095]
FIG. 21 is a diagram showing a main user interface according to the present example.
The main user interface 141 includes a video presentation screen 142, a video playback button 143, a video stop button 144, a link start button 145, a link end button 146, and a link target data name input dialog 147.
[0096]
The video presentation screen 142 presents video data held in the storage unit 11 to the user.
The video playback button 143 makes it possible to start playback of the video data when the user clicks with the mouse or the like.
The video stop button 144 makes it possible to stop the reproduction of the video data when the user clicks it with the mouse or the like.
[0097]
The link start button 145 allows the user to specify the start frame of the video data being reproduced to be linked by clicking with the mouse or the like.
The link end button 146 allows the user to specify the end frame of the video data being reproduced to be linked by clicking with the mouse or the like.
The linked target data name input dialog 147 allows the user to input the linked target data name to be linked to the video data through the dialog.
[0098]
FIG. 22 is a flowchart showing an example of the linking process of the video processing apparatus 1 of this example.
As shown in FIG. 22, the linking process consists of an initialization process (step S21), a video reproduction detection process (step S22), a link start detection process (step S23), a link end detection process (step S24), a link target area definition process (step S25), a linked target input process (step S26), a link generation process (step S27), a link presentation process (step S28), and a video stop detection process (step S29).
[0099]
Next, the processing procedure of the video processing apparatus 1 of this example will be described using the flowchart of FIG.
First, in the initialization process, the storage unit 11, the link target area designating unit 12, the link generation unit 13, the video presentation unit 14, and the link management unit 15 of the video processing device 1 are initialized (step S21).
[0100]
That is, first, link data is generated and initialized by the link management unit 15. Specifically, the file name of the video data to be used held in the storage unit 11 is set as the value of the video data name 132 in the link data by using dialog input or the like. As the link data identifier 131, an identifier unique to the video processing apparatus 1 is set by the link management unit 15. The link management unit 15 sets values such as 0 as default values for the frame start number 133 and the frame end number 134 of the link data. Similarly, the link management unit 15 sets predetermined values for the link target area coordinates 135 of the link data, the linked target data name 136, and the visual feedback data 137. The link data generated by the link management unit 15 is held by the storage unit 11.
[0101]
Next, in the video playback detection process, reproduction of the video data designated by the user is started upon detecting the user's click of the video playback button 143 with the mouse, tablet, or the like (step S22).
Subsequently, in the link start detection process, the link management unit 15 determines, upon detecting the user's click of the link start button 145, the frame start number defining the link area for the video data, and sets that value as the frame start number 133 of the link data (step S23).
[0102]
Subsequently, in the link end detection process, the link management unit 15 determines, upon detecting the user's click of the link end button 146, the frame end number defining the link area for the video data, and sets that value as the frame end number 134 of the link data (step S24). Here, the link management unit 15 temporarily stops the reproduction of the video data.
[0103]
In the link target area definition process, the link management unit 15 first notifies the user that the link target area can now be defined, by superimposing a message on the video data on the video presentation screen 142. Then, the link target area designating unit 12 acquires the coordinate data of the area to be linked, designated with the mouse by the user on the video data presented on the video presentation screen 142. Here, the link target area designating unit 12 gives the user visual feedback, for example by surrounding the area designated by the user with a white line. The link target area designating unit 12 stores the coordinate data defining the link target area acquired from the user (hereinafter also referred to as link target area definition coordinate data) in the storage unit 11, setting it as the value of the link target area coordinates 135 of the link data (step S25).
[0104]
In the linked target input process, the link management unit 15 obtains the linked target data name specified by the user through the linked target data name input dialog, and sets it as the value of the linked target data name 136 of the link data held in the storage unit 11 (step S26).
In the link generation process, the link generation unit 13 generates, using the value of the link target area coordinates 135 of the link data held in the storage unit 11 and the video data corresponding to the frame start number 133 through the frame end number 134, image data and related coordinate data for giving visual feedback to the user. The image data and related coordinate data are set as the visual feedback data of the link data (step S27).
[0105]
In the link presentation process, the image data related to the video data is superimposed and presented on the video presentation screen 142 using the related coordinate data of the visual feedback data 137 of the link data (step S28).
In the video stop detection process, it is detected whether or not the user has clicked the video stop button with the mouse. If the user has clicked, the presentation of the video data is stopped and the linking process is terminated (step S29). On the other hand, if the user has not clicked, the process after the link start detection process is performed again (steps S23 to S29).
[0106]
Here, the link target area definition process (step S25) and the link generation process (step S27) will be described in detail with reference to FIGS.
FIG. 23 shows an example of a video object (logo “Y”) 151 linked as partial video data.
FIG. 23 shows video data (still image data in each frame) corresponding to the frame start number 133 to the frame end number 134 held in the storage unit 11.
[0107]
FIG. 24 shows the video object 151 being selected by the user, operating the mouse, with a frame 152 on the video presentation screen 142.
FIG. 25 shows a diagram of a video object 153 in which the video object 151 is tilted obliquely by image processing.
FIG. 26 shows a diagram in which shadow data 154 is generated by image processing of edge extraction (boundary extraction) and color conversion of the slanted video object 153 of FIG. 25.
[0108]
FIG. 27 shows a diagram in which the original video object 151 in FIG. 23 and the shadow data 154 in FIG. 26 are combined.
FIG. 28 shows a diagram in which an area to be presented to the user using the video presentation screen 142 is extracted from the data of FIG.
[0109]
In the link target area definition process (step S25), as described above, the user is first notified that a link can be made to the video object, for example by changing the color of the frame of the video presentation screen 142 or the color of the link start button 145.
Next, the user operates the mouse while referring to the video object 151 shown in FIG. 23 presented on the video presentation screen 142, and selects the video object (here, the "Y" logo) 151 to be linked. The selection result is indicated by the frame 152 shown in FIG. 24.
[0110]
The coordinates representing the frame 152 (for example, the coordinates of the upper left and lower right corners) are set, as the link target area definition coordinate data, in the link target area coordinates 135 of the link data held in the storage unit 11.
Subsequently, in the link generation process (step S27), image processing such as a projective transformation of the image in FIG. 24 is performed to make it distinguishable from the original video object 151 in FIG. 23. Furthermore, the shadow data 154 is generated by performing contour extraction with a differential filter on the slanted video object 153, determining the boundary between the video object 153 and the background, and color-converting the area of the video object 153.
[0111]
Further, the image of FIG. 27 is obtained by combining the original video object 151 of FIG. 23 and the generated shadow data 154.
Finally, the visual feedback data 137 is generated by clipping the region to be presented to the user on the video presentation screen 142. The coordinate values of the boundary of the clipped shadow data 155 with the background or with the original video object 151 are set as the related coordinate data and held in the storage unit 11 together with the shadow data 154 as the visual feedback data 137. Subsequently, in the link presentation process (step S28), the video of FIG. 28 is presented on the video presentation screen 142.
[0112]
Here, the link generation processing (step S27) when a plurality of links are made to one video object will be described.
When a plurality of links are made to one video object, as shown in FIG. 29, a plurality of video objects inclined at different angles are generated, and shadow data 156a and 156b with different shadow colors are generated, so that the user can distinguish each link. The shadow data 156a and 156b are superimposed on the original video data 151 of FIG. 23 as shown in FIG. 30, and further subjected to clipping processing as shown in FIG. 31, whereby the visual feedback data 137 is generated using the images 157a and 157b after the clipping processing.
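The shadow generation described above (tilting, boundary extraction, color conversion) can be illustrated as follows (Python with NumPy; the horizontal shear stands in for the patent's oblique tilt, and the shear and shade values are assumptions for the sketch):

```python
import numpy as np

def make_shadow(mask, shear=0.5, shade=128):
    """Tilt the video object's silhouette with a horizontal shear and
    color-convert the tilted region into flat shadow data."""
    h, w = mask.shape
    shadow = np.zeros((h, w + int(h * shear)), dtype=np.uint8)
    for y in range(h):
        offset = int((h - 1 - y) * shear)   # rows nearer the top shift more
        xs = np.nonzero(mask[y])[0]
        shadow[y, xs + offset] = shade      # color conversion to shadow grey
    return shadow
```

Generating several shadows with different shear angles and shade values, then compositing them onto the original object, gives distinguishable per-link shadows as in FIGs. 29 to 31.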
[0113]
Next, a description will be given of the state of the user interface when the user instructs the link target presentation by designating the shadow data presented by visual feedback with the mouse.
First, when a link is associated with the video object presented on the video presentation screen 142, the shadow data is superimposed and displayed as described above. When the user clicks the shadow data with the mouse, the link management unit 15 determines whether the clicked position is included in the link data by examining the identifier 131, the video data name 132, the frame start number 133, the frame end number 134, and the visual feedback data 137; if it matches or is included, the value of the linked target data name 136 is displayed in the linked target data name input dialog 147 so that the user can access the linked target data. Alternatively, the contents of the linked target data name 136 are presented in another window or display area (for example, the video presentation screen 142 is divided and the contents are displayed on one of the resulting screens).
[0114]
In the above description, it has been assumed that the video data and the linked target data are stored in the storage unit 11 of the same video processing device 1, but the video data or the linked target data may, for example, be stored in another device that the video processing device 1 accesses via a network. In this case, the video data name 132 or the linked target data name 136 in FIG. 20 can be configured as a so-called URL representing the access destination of the video data or of the linked target data, respectively.
[0115]
Further, the description has been made assuming that the video data and the linked target data are held in the storage unit 11 of the same video processing apparatus 1, but as shown in FIG. 32, with a client 161 and a server 162 connected via a network 163, the functions of the respective units of the video processing device 1 described above can be arranged separately in the client 161 or the server 162 and made to cooperate. For example, as shown in FIG. 32, it is also possible to adopt a configuration in which the link generation unit 173 is arranged in the server 162, and the other processing units, namely the storage unit 171, the link target designation unit 172, the video presentation unit 174, and the link management unit 175, are arranged in the client 161.
[0116]
FIG. 33 shows an example of the format of link data transmitted to the network.
As shown in FIG. 33, by converting the link data structure of FIG. 20 into, for example, the so-called XML format and transmitting it over the network, link data can be transferred and used when the client 161 and server 162 shown in FIG. 32 are connected via the network 163.
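A sketch of such a conversion (Python; the element names are assumptions modelled on the <resource-name> and <audiovisual-data> elements quoted below, since FIG. 33 itself is not reproduced here):

```python
import xml.etree.ElementTree as ET

def link_data_to_xml(link):
    """Serialize a FIG. 20-style link data record into an XML fragment
    suitable for transfer over the network."""
    root = ET.Element("link-data", {"id": link["identifier"]})
    ET.SubElement(root, "audiovisual-data").text = link["video_data_name"]
    ET.SubElement(root, "frame-start").text = str(link["frame_start"])
    ET.SubElement(root, "frame-end").text = str(link["frame_end"])
    ET.SubElement(root, "resource-name").text = link["linked_target_name"]
    return ET.tostring(root, encoding="unicode")
```

The receiving side would parse the fragment back into a link data record with the same fields.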
[0117]
Similarly, FIG. 34 shows another example of the format of link data transmitted over the network. FIG. 34 shows a state in which link data is designated as the linked data.
Specifically, the link data identifier LINK001 is set as the <resource-name> element. When link data is designated as the linked data in this way, the link management unit 175 interprets the XML-format link data of FIG. 34 (the link data with the identifier LINK003) and acquires the link data of LINK001. Further, the link management unit 175 interprets the XML-format link data with the identifier LINK001 and detects that Video.mpg data is set in the <audiovisual-data> element and that Annotation.txt data is set in the <resource-name> element.
[0118]
Subsequently, the link management unit 175 lets the user select whether to use the Video.mpg data or the Annotation.txt data, and presents the selected data on the video presentation screen 142. If a link data identifier is further set as the <resource-name> element, the same operation is repeated to follow the link. In this way, link data can be reused by setting a link data identifier in the linked target data name 136 or the <resource-name> element. In addition, XML-formatted link data can be transferred by e-mail or the like and used on the receiving user's video processing device 1, so that the link data can be reused there as well.
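The link-following behaviour just described can be sketched as follows (Python; treating a resource name as another link whenever it appears as an identifier in the link table is the assumed convention, and the depth limit is a safeguard not stated in the text):

```python
def resolve_resource(link_table, identifier, max_depth=10):
    """Follow <resource-name> chains: while the resource of a link is
    itself a link identifier, interpret that link in turn, until a
    non-link resource (e.g. Annotation.txt) is reached."""
    resource = link_table[identifier]["resource_name"]
    depth = 0
    while resource in link_table and depth < max_depth:
        resource = link_table[resource]["resource_name"]
        depth += 1
    return resource
```

With the FIG. 34 example, resolving LINK003 first yields LINK001 and then the Annotation.txt resource.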
[0119]
Next, means and steps for identifying any partial video data linked from electronic data such as linked text data, audio data, or video data will be described.
Here, it is assumed that link data as shown in FIG. is held.
That is, the value "LINK001" is set in the identifier 131, the value "Video.mpg" in the video data name 132, the value "120" in the frame start number 133, the value "150" in the frame end number 134, the value "{(10,30), (10,10), (20,10), (20,30)}" in the link target area coordinates 135, the value "Annotation.txt" in the linked target data name 136, and "Visual.dat" in the visual feedback data 137.
[0120]
When the user specifies arbitrary partial video data from electronic data such as linked text data, audio data, or video data, the user first enters the desired linked target data name (the name of the electronic data) in the linked target data name input dialog 147 of FIG. 21. That is, the user inputs "Annotation.txt" using the linked target data name input dialog 147. When the linked target data name is input from the dialog, the link management unit 15 of the video processing device 1 searches the link data held in the storage unit 11 and retrieves the link data whose linked target data name matches the entered value.
[0121]
Next, the link management unit 15 refers to the video data name 132 of the link data and acquires the matching video data from the storage unit 11. That is, it refers to the value "Video.mpg" of the video data name 132 and acquires the video data "Video.mpg" from the storage unit 11.
[0122]
Subsequently, the link management unit 15 refers to the values "120" and "150" of the frame start number 133 and the frame end number 134 in the link data, and extracts the corresponding frames, that is, frame numbers 120 to 150, from the video data. Furthermore, the link management unit 15 refers to the link target area coordinates 135 of the link data and obtains the area "{(10,30), (10,10), (20,10), (20,30)}" in each extracted frame.
[0123]
The link management unit 15 then places the data corresponding to the visual feedback data 137 of the link data in the area of each extracted frame enclosed by the link target area coordinates, here (10, 30), (10, 10), (20, 10), (20, 30), and presents it on the video presentation screen 142.
As a result, the user can identify, by the visual feedback, the area in the video data "Video.mpg" that corresponds to the linked target data name "Annotation.txt".
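The frame-range extraction and region test in these steps can be sketched as follows, assuming the link target area coordinates form an axis-aligned rectangle as in the example values; the function names are illustrative.

```python
# Sketch of the steps above: select frames 120-150 and test whether a
# point lies inside the link target area so feedback can be drawn there.
def frames_in_range(n_frames, start, end):
    """Indices of frames whose number falls in [start, end]."""
    return [i for i in range(n_frames) if start <= i <= end]

def point_in_rect(pt, region):
    """Region {(10,30),(10,10),(20,10),(20,30)} is an axis-aligned rectangle."""
    xs = [x for x, _ in region]
    ys = [y for _, y in region]
    return min(xs) <= pt[0] <= max(xs) and min(ys) <= pt[1] <= max(ys)

region = [(10, 30), (10, 10), (20, 10), (20, 30)]
selected = frames_in_range(300, 120, 150)
print(len(selected), point_in_rect((15, 20), region))  # 31 True
```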
[0124]
Next, the means and steps for transferring electronic data such as linked text data, audio data, or video data to a call or communication system such as an electronic bulletin board system or a telephone, and delivering the electronic data to a party related to arbitrary linked partial video data, will be described.
[0125]
FIG. 36 is a block diagram showing an example of an extended video processing device 181 that, like the video processing device 1 of FIG. 1, comprises a storage unit 191, a link target area designating unit 192, a link generation unit 193, a video presentation unit 194, and a link management unit 195, to which a link/data transfer unit 196 and a telephone call unit 197 are added.
[0126]
The link/data transfer unit 196 is composed of a CPU and a buffer storage device. It receives the link target data to be transferred from the storage unit 191 and transfers it to the telephone call unit 197.
The telephone call unit 197 is a subsystem having an ordinary telephone call function, and transmits the link target data received from the link/data transfer unit 196 to an external telephone.
[0127]
Here, assume that link data has been set as follows. That is, the value "LINK002" is set in the link identifier 131, "Video.mpg" in the video data name 132, "120" in the frame start number 133, "150" in the frame end number 134, "{(10,30), (10,10), (20,10), (20,30)}" in the link target area coordinates 135, "Voice.dat" in the linked target data name 136, and "Visual2.dat" in the visual feedback data 137. The voice data Voice.dat is stored in the storage unit 191, and a telephone number "0120-123-4567" for a call corresponding to the voice data "Voice.dat" is likewise held in the storage unit 191 in association with it.
[0128]
Assume that, guided by the visual feedback presented on the video presentation screen 142, the user selects with the mouse the link data identified by the identifier "LINK002".
The link management unit 195 refers to the link data held in the storage unit 191, and when it determines that the linked data is the voice data "Voice.dat", it acquires the telephone number "0120-123-4567" for the call corresponding to "Voice.dat".
[0129]
Next, the link management unit 195 transfers "Voice.dat" to the link/data transfer unit 196. The telephone call unit 197 then dials the call destination using the acquired telephone number "0120-123-4567", and when the call is answered, reproduces the voice data "Voice.dat" and completes the call.
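A hedged sketch of this voice-link handling, resolving the linked target, looking up the associated telephone number, and recording the call through a stub, might be:

```python
# Sketch of the voice-link flow. PHONE_BOOK stands in for the association
# held in the storage unit 191; the call itself is stubbed by appending to
# a list, since dialing and audio playback are outside this example.
PHONE_BOOK = {"Voice.dat": "0120-123-4567"}

def handle_link(target_name, calls_made):
    """If the linked target is voice data with a phone number, 'dial' it."""
    if target_name.endswith(".dat") and target_name in PHONE_BOOK:
        number = PHONE_BOOK[target_name]
        calls_made.append((number, target_name))  # dial, then play the audio
        return number
    return None  # e.g. text data would go to an electronic bulletin board

calls = []
print(handle_link("Voice.dat", calls))  # -> 0120-123-4567
```

The fallback branch corresponds to the variation described next, where text data is forwarded to an electronic bulletin board instead of placing a call.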
[0130]
Here, the call is assumed to be placed to a telephone on an ordinary public telephone network. However, a data transmission function may be provided instead of the telephone call unit 197 so that, when the link target data is text data, it is forwarded to an electronic bulletin board. Similarly, a data transmission function having a so-called Internet telephone function may be provided instead of the ordinary telephone function, so that the link target data is transmitted over an Internet telephone call.
[0131]
Next, a video processing apparatus and a video processing method according to the third embodiment of the present invention will be described.
FIGS. 38 and 39 are diagrams showing examples of a user interface in which video data with one video object 203, a frame 202, and visual feedback 204a, 204b, 205a, 205b, 206a, 206b are presented on the video presentation screen 201 of the video processing apparatus 1 of this example.
FIG. 40 is a diagram showing an example of a user interface in which video data with two video objects 203 and 207, a frame 202, and visual feedback 205a, 205b, 206a, 206b, 208a, 208b are presented on the video presentation screen 201 of the video processing apparatus 1 of this example.
[0132]
FIG. 38 shows a case where one link is set for the video object (the "Y" logo) 203.
FIG. 39, on the other hand, shows a case where two links are set for the same video object (the "Y" logo) 203. When a plurality of links are set in this way, the links can be distinguished by presenting figures of different colors on the frame 202 of the video presentation screen 201.
[0133]
As shown in FIGS. 38, 39, and 40, figures 204a, 204b, 205a, 205b, 206a, 206b, 208a, 208b indicating links are placed on the two sides of the frame 202 nearest to the video objects (the "Y" logos) 203, 207 to which the links are set, at positions corresponding to the horizontal position (for example, on the horizontal axis) and the vertical position (for example, on the vertical axis) of each video object. In this way the user can obtain visual feedback indicating the presence of the links.
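The placement of the frame-edge figures can be illustrated as below: the object's bounding box is projected onto the two nearest sides of the frame 202, yielding the spans where the link figures would be drawn. The coordinate conventions and function name are assumptions for the example.

```python
# Illustrative sketch of the frame-edge feedback above: project the object's
# bounding box onto the nearest horizontal and vertical sides of the frame.
def edge_markers(obj, frame_w, frame_h):
    x0, y0, x1, y1 = obj                     # object bounding box
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2    # object center
    h_edge = "top" if cy < frame_h / 2 else "bottom"  # nearest horizontal side
    v_edge = "left" if cx < frame_w / 2 else "right"  # nearest vertical side
    # marker spans: x-range on the horizontal side, y-range on the vertical side
    return {h_edge: (x0, x1), v_edge: (y0, y1)}

print(edge_markers((30, 20, 60, 40), 320, 240))
# e.g. {'top': (30, 60), 'left': (20, 40)}
```

Two markers per object, one per side, match the paired reference numerals (204a/204b and so on) in the figures.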
[0134]
As described above, the video processing apparatus and the video processing method according to the embodiments of the present invention comprise means for designating arbitrary partial video data in video data, means for presenting the video data and the designated partial video data simultaneously, means for adding link data to the designated partial video data, and means for presenting the target data linked to the designated partial video data. One or more related pieces of link data are added to the partial video data, and one or more partial image icons indicating the link data are presented superimposed on it; alternatively, means are provided for visually presenting the presence of the links adjacent to or superimposed on the partial video data.
[0135]
Comments or related materials in the form of text data are thus attached to the video data: the video data is presented, arbitrary partial video data to which an annotation is to be added is designated, the partial video data and the video data are presented simultaneously, link data is added to the designated partial video data, and the presence of the link data is visually presented adjacent to or superimposed on the linked partial video data.
[0136]
Therefore, by visually presenting the presence of a link adjacent to or superimposed on the linked partial video data, the user is given visual feedback that link data exists for that partial video data.
The user can also associate text data, audio data, image data, related material file data, moving image data, and the like with the partial video data to which link data is to be added, easily and appropriately.
In addition, the user interface for association displays both the video data and the arbitrary partial video data and reproduces the video data, so the user can add link data while referring to the extracted partial video data.
[0137]
In the video processing apparatus and the video processing method according to the embodiments of the present invention, the link data includes the time range of the partial video data, and the means for adding link data comprises means for specifying that time range.
Accordingly, a time range of the video data can also be specified as part of the designation of the partial video data to which link data is added.
[0138]
In the video processing apparatus and the video processing method according to the embodiments of the present invention, the link data or the partial video data includes area information locating the partial video data within the video data, and the means for adding link data acquires the area information on the partial video data and composes the link data from it.
Therefore, link data including area information on the partial video data can be constructed by the means for adding link data.
[0139]
Further, in the video processing apparatus and the video processing method according to the embodiments of the present invention, the link data presentation means presents one or more partial image icons on the time axis and on the video data (on the space axis).
Therefore, it is possible to present one or more partial image icons on the time axis and on the video data (on the space axis).
[0140]
Further, in the video processing apparatus and the video processing method according to the embodiments of the present invention, the means for visually presenting the presence of a link adjacent to or superimposed on the linked partial video data indicates the presence of the link by a luminance change of a shadow of the video object in the arbitrary partial video data or region, or of a similar shape.
Therefore, visual feedback can be given to the user by a luminance change of a shadow, or a similar shape, corresponding to the video object in the arbitrary partial video data or region.
[0141]
In addition, in the video processing apparatus and the video processing method according to the embodiments of the present invention, means are provided for adding partial images in an overlapping manner when a plurality of related data items are associated with an arbitrary location in arbitrary partial data of the video.
Therefore, when a plurality of pieces of link data relate to an arbitrary video object in arbitrary partial data of the video data, the link data can be added in a superimposed manner using partial image icons or the like.
[0142]
Further, in the video processing apparatus and the video processing method according to the embodiments of the present invention, the shadow or similar shape presented as a luminance change for the video object in the arbitrary partial video data or region is generated from the shape of that video object.
Therefore, by generating the presented shadow or similar shape from the shape of the video object in the arbitrary partial video data or region, visual feedback can be given to the user that does not clash with the original video data.
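Generating the shadow from the object's own shape might be sketched as follows, assuming the video object is available as a binary mask; the shadow is a shifted copy of the silhouette, which would then be rendered as a luminance change on the frame.

```python
# Minimal sketch, assuming a binary object mask: a drop-shadow silhouette
# derived from the object's own shape by offsetting the mask, so the
# feedback follows the object's contour rather than a fixed rectangle.
def shadow_mask(mask, dx=1, dy=1):
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask[y][x] and 0 <= y + dy < h and 0 <= x + dx < w:
                out[y + dy][x + dx] = 1  # shifted copy of the silhouette
    return out

obj = [[0, 1, 0],
       [1, 1, 1],
       [0, 1, 0]]
for row in shadow_mask(obj):
    print(row)
```

Varying the offset, color, or luminance of the derived shadow per link is one way to realize the distinguishable multi-link presentation discussed below.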
[0143]
Further, in the video processing apparatus and the video processing method according to the embodiments of the present invention, when a plurality of links are added to the same image data, visual feedback distinguishing each link is given to the user, so the linked information can be used effectively.
[0144]
Further, in the video processing apparatus and the video processing method according to the embodiments of the present invention, the means for visually presenting the presence of a link adjacent to or superimposed on the linked partial video data comprises means for extracting a video object from luminance changes in that partial video data.
Therefore, the video object of arbitrary partial video data can be extracted from the luminance in that partial video data.
[0145]
Further, in the video processing apparatus and the video processing method according to the embodiments of the present invention, the means for adding link data allows the user to select a video object in the video data extracted from luminance changes of the video data.
Therefore, the user can select a video object in the video data extracted from its luminance changes.
[0146]
Further, in the video processing apparatus and the video processing method according to the embodiments of the present invention, when the means for visually presenting the presence of links adjacent to or superimposed on the linked partial video data presents two or more links, the presence of each link is presented by a different shadow, or a different similar shape, with a different luminance change or color change, for the video object in the arbitrary partial video data or region.
Therefore, when a plurality of pieces of link data are added to the same partial video data, the links can be presented so as to be distinguishable.
[0147]
Further, in the video processing apparatus and the video processing method according to the embodiments of the present invention, by adding the designated link data to arbitrary partial video data and presenting the link data for the partial video data both individually and in combination, related link data can be added to a designated partial image of the arbitrary partial video data, and link data can be superimposed on that designated partial image.
[0148]
Further, in the video processing apparatus and the video processing method according to the embodiments of the present invention, means are provided for generating link data that associates the added link data with other arbitrary partial video data or with link data added to other arbitrary partial video data.
Therefore, the added link data can be associated with other arbitrary partial video data and with the link data added to it.
[0149]
As described above, in the video processing apparatus and the video processing method according to the embodiment of the present invention, a plurality of links can be associated with the same area in the video data or the same video object in the area, The specified arbitrary partial video data to which the link data is added can be associated with other arbitrary partial video data and other link data in the video data.
[0150]
In addition, in the video processing apparatus and the video processing method according to the embodiments of the present invention, electronic data such as text data, audio data, or video data, or an electronic file or link data, is described as the content of the linked data.
Accordingly, electronic data such as text data, audio data, or video data can be linked as the content of the data associated with the video data.
[0151]
As described above, in the video processing apparatus and the video processing method according to the embodiments of the present invention, it is possible to associate existing electronic documents such as related e-mail, image data used in a conference, and electronic files such as partial audio data and video data.
[0152]
In addition, the video processing apparatus and the video processing method according to the embodiments of the present invention comprise means for adding, sharing, presenting, or distributing link data by one or a plurality of users.
Thus, for example, using a portable information terminal and the means for adding, sharing, presenting, or distributing link data, a user can obtain stored link data and the linked video data and perform various re-editing operations such as adding further links. Likewise, by using the means for adding, sharing, presenting, or distributing link data among a plurality of users, users can obtain the link data together with the video data to which it is added or the data being linked, and perform various re-editing operations such as compositing the link data, the linked video data, or the linked target data.
[0153]
Further, in the video processing apparatus and the video processing method according to the embodiments of the present invention, the means for adding link data comprises means for extracting, from the moving image data and the audio data, the time range of the partial video data over which the audio data of one or more persons is valid in the designated arbitrary partial video data.
Therefore, by analyzing the audio data in the designated partial video data, the means for adding link data can infer and cut out portions with coherent content, such as a single person's remarks or a question-and-answer exchange between several persons, extract the partial video data corresponding to those portions, and add link data to it.
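A rough sketch of deriving the time range of partial video data from the audio: frames whose audio energy exceeds a threshold are treated as speech, and the first contiguous run gives the start and end frame numbers. The threshold and energy representation are assumptions; the actual inference of remarks and question-and-answer exchanges would be more elaborate.

```python
# Sketch: extract a partial-video time range from per-frame audio energy.
def speech_range(energies, threshold=0.5):
    """Return (start, end) frame indices of the first run above threshold."""
    start = end = None
    for i, e in enumerate(energies):
        if e > threshold:
            if start is None:
                start = i
            end = i
        elif start is not None:
            break  # the first contiguous speech run has ended
    return (start, end)

# e.g. silence, then speech from frame 3 to 6, then silence again
print(speech_range([0.1, 0.2, 0.1, 0.9, 0.8, 0.9, 0.7, 0.1]))  # (3, 6)
```

The resulting (start, end) pair would populate the frame start number 133 and frame end number 134 of the generated link data.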
[0154]
The video processing apparatus and the video processing method according to the embodiments of the present invention further comprise means for specifying the linked arbitrary partial video data from electronic data such as linked text data, audio data, or video data.
Therefore, by specifying the linked arbitrary partial video data from such electronic data, it becomes possible to refer to that partial video data from the linked electronic data.
[0155]
Further, in the video processing apparatus and the video processing method according to the embodiments of the present invention, when the user designates partial video data or a video object in it, linked electronic data such as text data, audio data, or video data is transferred to an electronic bulletin board system or to a call or communication system such as a telephone or electronic mail, and the electronic data is delivered to a party related to the linked partial video data.
Therefore, a user referring to arbitrary partial video data can notify or transfer electronic data and the like to the related party via the electronic bulletin board system or the telephone or communication system.
[0156]
Further, in the video processing apparatus and the video processing method according to the embodiments of the present invention, in a configuration in which video data is presented and link data for that video data is held and processed, means are provided for visually presenting the presence of a link in the outer frame of the video data, corresponding to the linked arbitrary partial video data.
Therefore, the presence of a link to the partial video data can be presented using the outer frame without disturbing the presented video data.
[0157]
As described above, in the video processing apparatus and the video processing method according to the embodiments of the present invention, when video data with link data is presented, visual feedback about the links can be given without requiring the user to move the mouse over the area of the video data presented on the video presentation screen of the video processing apparatus, so the user can be informed of the presence of one or more links. It is also possible to refer to arbitrary partial video data from linked electronic data such as text data, audio data, or video data. Furthermore, through the visual feedback, a party related to the arbitrary partial video data can be referred to or contacted from that partial video data via the electronic bulletin board system or the telephone or communication system.
[0158]
In the video processing apparatus according to the embodiments of the present invention, the partial video data specifying means is constituted by the function of the link target area designating unit 12, which specifies partial video data from the video data, and the data associating means is constituted by functions such as the link generation unit 13, which associates (links) data with the partial video data.
Further, within the partial video data specifying means, the partial video data candidate specifying means is constituted by the function of specifying partial video data candidates, and the partial video data designation accepting means is constituted by the function of accepting from the user the designation of partial video data from among those candidates.
[0159]
In the video processing apparatus according to the embodiments of the present invention, the related partial video data specifying means is constituted by the function of the link management unit 15, which specifies the partial video data from the data associated with it, and the related data presenting means is constituted by the function of the video presentation unit 14, which visually presents data (visual feedback data) indicating the presence of data associated with the partial video data in association with that partial video data.
[0160]
In the video processing apparatus according to the embodiments of the present invention, visual feedback data and predetermined processing are associated with each other in the storage unit 11; the presentation data designation receiving means, which receives from the user the designation of presented visual feedback data, is constituted by the function of the video presentation unit 14 and the like, and the presentation data corresponding processing means is constituted by the function of the link management unit 15, which executes the processing associated with the designated visual feedback data.
[0161]
In addition, in the video processing apparatus according to the embodiments of the present invention, the plural related data presenting means, which presents a plurality of visual feedback data items indicating the presence of a plurality of data items associated with the partial video data in association with that partial video data, is constituted by the functions of the video presentation unit 14 and the like.
[0162]
Here, the configurations and modes of the video processing apparatus and the video processing method according to the present invention are not necessarily limited to those described above, and various configurations and modes may be used.
The application field of the present invention is not necessarily limited to the above-described fields, and the present invention can be applied to various fields.
[0163]
In addition, the various kinds of processing performed in the video processing apparatus and the video processing method according to the present invention may be controlled, for example, by a processor executing a control program stored in a ROM (Read Only Memory) within hardware resources comprising a processor and memory, or each functional unit executing the processing may be configured as an independent hardware circuit.
The present invention can also be understood as a computer-readable recording medium, such as a floppy (registered trademark) disk or CD (Compact Disc)-ROM, storing the above control program, or as the program itself; by loading the program from the recording medium into a computer and having the processor execute it, the processing according to the present invention can be performed.
[0164]
[Effects of the Invention]
As described above, in the video processing apparatus and the video processing method according to the present invention, partial video data constituting a part of the video data is specified from the video data, data is associated with the specified partial video data, and the presence of the associated data is presented; thus the existence of data associated with the partial video data can be made known.
That is, in the video processing apparatus and the video processing method according to the present invention, data indicating the presence of data associated with the partial video data is presented in visual association with that partial video data in the video data, so the user can visually grasp both the existence of the associated data and the association itself.
[0165]
Further, in the video processing apparatus and the video processing method according to the present invention, data indicating the presence of a plurality of data items associated with partial video data specified from the video data is presented in visual association with that partial video data in the video data, so the user can visually recognize the presence of the plurality of associated data items and their associations.
[Brief description of the drawings]
FIG. 1 is a diagram illustrating a configuration example of a video processing apparatus according to the present invention.
FIG. 2 is a diagram showing a detailed configuration example of a video processing apparatus according to the present invention.
FIG. 3 is a diagram illustrating a state in which partial video data is extracted from video data.
FIG. 4 is a diagram illustrating an example of a processing procedure for extracting partial video data.
FIG. 5 is a diagram illustrating an example of a processing procedure for adding link data to partial video data.
FIG. 6 is a diagram illustrating an example of a data structure of a link data additional storage device.
FIG. 7 is a diagram showing an example of an extended data structure of a link data additional storage device.
FIG. 8 is a diagram illustrating an example of a user interface.
FIG. 9 is a diagram illustrating an example of a data structure after link data is added to partial video data.
FIG. 10 is a diagram showing an example of a user interface for link data addition presentation.
FIG. 11 is a diagram illustrating a specific example of a device configuration and a user interface in cooperative work.
FIG. 12 is a diagram showing another example of a data structure after link data is added to partial video data.
FIG. 13 is a diagram showing an example in which partial image icons representing link data added by a plurality of users are presented.
FIG. 14 is a diagram illustrating a configuration example of a system that performs editing work.
FIG. 15 is a diagram illustrating an example of a structure of linked data / target data and video data.
FIG. 16 is a diagram illustrating an example of a state in which video data synthesized from a plurality of video data portions is generated.
FIG. 17 is a diagram showing an example of speech estimation when link data is added.
FIG. 18 is a diagram showing an example of dialog estimation when link data is added.
FIG. 19 is a diagram for explaining an example of a method of guessing a dialog when link data is added.
FIG. 20 is a diagram illustrating an example of a data structure of link data.
FIG. 21 is a diagram illustrating an example of a user interface.
FIG. 22 is a diagram illustrating an example of a procedure of a linking process.
FIG. 23 is a diagram illustrating an example of a video object.
FIG. 24 is a diagram illustrating an example of a video object surrounded by a frame.
FIG. 25 is a diagram showing an example in which a video object is tilted obliquely.
FIG. 26 is a diagram showing an example of shadow data.
FIG. 27 is a diagram illustrating an example of a composition of a video object and shadow data.
FIG. 28 is a diagram illustrating an example of an extracted region where video objects and shadow data should be presented.
FIG. 29 is a diagram illustrating an example of a plurality of shadow data.
FIG. 30 is a diagram illustrating an example of a composite of a video object and a plurality of shadow data.
FIG. 31 is a diagram illustrating an example of an extracted region where video objects and a plurality of shadow data are to be presented.
FIG. 32 is a diagram illustrating an example of a configuration for performing a linking process via a network.
FIG. 33 is a diagram illustrating an example of a format of link data transmitted to a network.
FIG. 34 is a diagram showing another example of the format of link data transmitted to the network.
FIG. 35 is a diagram illustrating an example of link data values.
FIG. 36 is a diagram illustrating a configuration example of an extended video processing apparatus.
FIG. 37 is a diagram illustrating an example of link data values.
FIG. 38 is a diagram illustrating an example of a user interface in which video data including one video object, a frame, and visual feedback are presented on a video presentation screen of the video processing apparatus.
FIG. 39 is a diagram illustrating an example of a user interface in which video data including one video object, a frame, and visual feedback are presented on a video presentation screen of the video processing apparatus.
FIG. 40 is a diagram illustrating an example of a user interface in which video data including two video objects, a frame, and visual feedback are presented on a video presentation screen of the video processing apparatus.
[Explanation of symbols]
1, 181··Video processing device, 11, 171, 191··Storage unit,
12, 172, 192 .. link target area designating part,
13, 173, 193 .. link generation unit,
14, 174, 194 .. Video presentation part,
15, 175, 195 ··· link management unit, 21, 97 ·· video storage device,
22, 91a, 91b .. video data presentation device,
23, 92a, 92b .. Arbitrary partial video data designation device,
24, 93a, 93b .. Partial video data presentation device,
25, 94a, 94b,.
26, 96a, 96b, 98... Link data storage device,
27, 95a, 95b .. link data presentation device,
31, 101, 111-113, 121-123, F1-F7, F11-F17 ..video data,
32··Partial video data, 33··Circumscribed rectangle, 41··Time code,
42 .. coordinate data,
43, 102, 114a, 115a, 115b, 116a, 116b, 116c, 124a, 125a, 125b, 125c, 126a ...
44··Storage device name, 45··Partial image icon data,
46 ・ ・ User data, 51 ・ ・ Video data presentation screen,
52..Partial video data presentation screen, 53.Link data addition screen,
54 ..Linked data presentation screen,
62a to 62e, 82a to 82c, 84a to 84d··Partial video data with link data,
71, 72 ... objects,
73, 74 .. Link data for sending messages,
81a, 83a to 83c··Partial image icons,
T1, T2,.
T11 to T14, T21, T22 .. dialog guessing point, 131 .. identifier,
132··Video data name, 133··Frame start number,
134··Frame end number, 135··Link target area coordinates,
136 ..Linked data name,
137 .. Visual feedback data.
141··User interface, 142, 201··Video presentation screen,
143··Video playback button, 144··Video stop button,
145 ... Link start button, 146 ... Link end button,
147 .... Link target data name input dialog,
151, 203, 207 ... Video object, 152 ... Frame
153 .. Video object tilted diagonally,
154, 156a, 156b, 157a, 157b··Shadow data,
155 .. Extracted area to be presented with shadow data,
161..Client, 162..Server, 163..Network,
196 ··· Link and data transfer section, 197 · · Telephone call section,
202 .. Frame of video presentation screen,
204a, 204b, 205a, 205b, 206a, 206b, 208a, 208b .. visual feedback,

Claims (13)

  1. Partial video data specifying means for specifying, from video data, partial video data that is a part of the video data;
    Data association means for associating data with the specified partial video data;
    Associated data presenting means for visually presenting, in association with the partial video data in the video data, data indicating the presence of the data associated with the partial video data;
    wherein the associated data presenting means presents, as the data indicating the presence of the data associated with the partial video data, shadow data having a shape based on the shape of the partial video data.
    A video processing apparatus characterized by that.
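Claim 1's shadow data keeps the shape of the partial video data. A minimal sketch of the idea, assuming the region is represented by its circumscribed rectangle (function name and offsets are illustrative, not from the specification):

```python
def shadow_rect(region, dx=6, dy=6):
    """Return a shadow rectangle offset from the circumscribed rectangle
    of a partial-video region (x1, y1, x2, y2). The shadow preserves the
    region's shape, shifted diagonally, as in claim 1's shadow data."""
    x1, y1, x2, y2 = region
    return (x1 + dx, y1 + dy, x2 + dx, y2 + dy)

# Example: a partial-video region and the shadow presented for it
region = (40, 30, 200, 160)
print(shadow_rect(region))  # (46, 36, 206, 166)
```

Because the shadow is derived from the region's own geometry, its presence signals "this object carries associated data" without obscuring the object itself.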
  2. Partial video data specifying means for specifying, from video data, partial video data that is a part of the video data;
    Data association means for associating data with the specified partial video data;
    Associated data presenting means for visually presenting, in association with the partial video data in the video data, data indicating the presence of the data associated with the partial video data;
    wherein the associated data presenting means presents, as the data indicating the presence of the data associated with the partial video data, data indicating the horizontal position and data indicating the vertical position of the partial video data within the frame, outside the frame of the video data and inside a frame provided outside that frame.
    A video processing apparatus characterized by that.
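Claim 2 places position indicators in a band outside the video frame: one marking the partial video data's horizontal position, one its vertical position. A sketch of how such marker coordinates could be computed (the center-based placement and the `margin` parameter are assumptions for illustration):

```python
def position_markers(region, frame_w, frame_h, margin=10):
    """Compute indicator positions in an outer frame surrounding a video
    frame of size (frame_w, frame_h): one marker below the frame showing
    the region's horizontal position, one beside it showing the vertical
    position, as in claim 2's presentation."""
    x1, y1, x2, y2 = region
    cx, cy = (x1 + x2) // 2, (y1 + y2) // 2
    horizontal_marker = (cx, frame_h + margin)   # in the band below the frame
    vertical_marker = (frame_w + margin, cy)     # in the band right of the frame
    return horizontal_marker, vertical_marker

print(position_markers((40, 30, 200, 160), 320, 240))
# ((120, 250), (330, 95))
```

Keeping the indicators outside the frame lets the user see where linked objects are without any overlay covering the video content.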
  3. In the video processing device according to claim 1 or 2,
    The partial video data specifying means specifies partial video data having a time width for the same target data included in the video data.
    A video processing apparatus characterized by that.
  4. The video processing apparatus according to claim 3.
    wherein the video data is associated with audio data, and
    the partial video data specifying means specifies, for data of one or more persons included in the video data, partial video data having a time width over which the audio data corresponding to the data of the person is valid.
    A video processing apparatus characterized by that.
  5. The video processing apparatus according to any one of claims 1 to 4,
    The partial video data specifying means specifies the partial video data using data for specifying an area where the partial video data is located in the frame of the video data.
    A video processing apparatus characterized by that.
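Claim 5's region data (an area within the frame) combines with the frame range to decide whether a given point in a given frame belongs to the partial video data. A minimal hit-test sketch — the dictionary keys are assumed for illustration, not taken from the specification:

```python
def hits_link(frame_no, x, y, link):
    """Check whether a point (x, y) on the given frame falls inside a
    link's target area during its frame range -- i.e. whether it hits
    the partial video data specified by claim 5's region data."""
    x1, y1, x2, y2 = link["target_area"]
    return (link["frame_start"] <= frame_no <= link["frame_end"]
            and x1 <= x <= x2 and y1 <= y <= y2)

link = {"frame_start": 120, "frame_end": 480, "target_area": (40, 30, 200, 160)}
print(hits_link(200, 100, 90, link))   # True
print(hits_link(500, 100, 90, link))   # False: outside the frame range
```

The same test can drive both presentation (where to draw the indicator) and interaction (which link a user designated).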
  6. The video processing apparatus according to any one of claims 1 to 5,
    The partial video data specifying means includes partial video data candidate specifying means for specifying a plurality of partial video data candidates, and partial video data designation receiving means for receiving, from a user, designation of partial video data included in the specified partial video data candidates, and specifies the designated candidate as the partial video data.
    A video processing apparatus characterized by that.
  7. The video processing apparatus according to any one of claims 1 to 6,
    further comprising associated partial video data specifying means for specifying the partial video data from the data associated with the partial video data.
    A video processing apparatus characterized by that.
  8. The video processing apparatus according to any one of claims 1 to 7,
    wherein the data indicating the presence of data associated with the partial video data is associated with a predetermined process, the apparatus comprising:
    presented data designation accepting means for accepting, from a user, designation of the presented data indicating the presence of data associated with the partial video data; and
    presented data corresponding process execution means for executing the process associated with the data for which the designation has been accepted.
    A video processing apparatus comprising:
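Claim 8 binds each presented indicator to a predetermined process and executes that process when the user designates the indicator. A minimal dispatch sketch — the indicator kinds and process bodies here are illustrative assumptions, not from the specification:

```python
# Each kind of presented indicator is bound to a predetermined process.
processes = {
    "shadow": lambda name: f"open linked data: {name}",
    "icon":   lambda name: f"play partial video for: {name}",
}

def on_designate(indicator_kind, linked_name):
    """Accept designation of a presented indicator and execute the
    process associated with it (claim 8's accepting + execution means,
    rolled into one function for the sketch)."""
    return processes[indicator_kind](linked_name)

print(on_designate("shadow", "minutes.txt"))  # open linked data: minutes.txt
```

A table-driven dispatch like this keeps the mapping from presented data to process explicit and easy to extend with new indicator kinds.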
  9. The video processing apparatus according to any one of claims 1 to 8,
    wherein operations on the same video data can be executed from a plurality of terminal devices.
    A video processing apparatus characterized by that.
  10. Partial video data specifying means provided in a video processing device specifies, from video data, partial video data that is a part of the video data;
    data association means provided in the video processing device associates data with the specified partial video data; and
    associated data presenting means provided in the video processing device visually presents, in association with the partial video data in the video data, shadow data having a shape based on the shape of the partial video data, as data indicating the presence of the data associated with the partial video data.
    A video processing method characterized by the above.
  11. Partial video data specifying means provided in a video processing device specifies, from video data, partial video data that is a part of the video data;
    data association means provided in the video processing device associates data with the specified partial video data; and
    associated data presenting means provided in the video processing device visually presents, in association with the partial video data in the video data, data indicating the horizontal position and data indicating the vertical position of the partial video data within the frame, outside the frame of the video data and inside a frame provided outside that frame, as data indicating the presence of the data associated with the partial video data.
    A video processing method characterized by the above.
  12. A program to be executed by a computer constituting a video processing device, causing the computer to realize:
    a function of specifying, from video data, partial video data that is a part of the video data;
    a function of associating data with the specified partial video data; and
    a function of presenting, in visual association with the partial video data in the video data, shadow data having a shape based on the shape of the partial video data, as data indicating the presence of the data associated with the partial video data.
    A program characterized by that.
  13. A program to be executed by a computer constituting a video processing device, causing the computer to realize:
    a function of specifying, from video data, partial video data that is a part of the video data;
    a function of associating data with the specified partial video data; and
    a function of presenting, in visual association with the partial video data in the video data, data indicating the horizontal position and data indicating the vertical position of the partial video data within the frame, outside the frame of the video data and inside a frame provided outside that frame, as data indicating the presence of the data associated with the partial video data.
    A program characterized by that.
JP2001308282A 2001-10-04 2001-10-04 Video processing device Expired - Fee Related JP4045768B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2001308282A JP4045768B2 (en) 2001-10-04 2001-10-04 Video processing device

Publications (3)

Publication Number Publication Date
JP2003116095A JP2003116095A (en) 2003-04-18
JP2003116095A5 JP2003116095A5 (en) 2005-06-23
JP4045768B2 true JP4045768B2 (en) 2008-02-13

Family

ID=19127618

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2001308282A Expired - Fee Related JP4045768B2 (en) 2001-10-04 2001-10-04 Video processing device

Country Status (1)

Country Link
JP (1) JP4045768B2 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4081680B2 (en) * 2003-11-10 2008-04-30 ソニー株式会社 Recording apparatus, recording method, recording medium, reproducing apparatus, reproducing method, and content transmission method
JP2006127367A (en) 2004-11-01 2006-05-18 Sony Corp Information management method, information management program, and information management apparatus
JP4434094B2 (en) * 2005-07-06 2010-03-17 ソニー株式会社 Tag information generation apparatus, tag information generation method and program
JP2007018198A (en) * 2005-07-06 2007-01-25 Sony Corp Device for generating index information with link information, device for generating image data with tag information, method for generating index information with link information, method for generating image data with tag information, and program
JP2007079809A (en) * 2005-09-13 2007-03-29 Fuji Xerox Co Ltd Electronic paper system
JP5002997B2 (en) * 2006-03-30 2012-08-15 カシオ計算機株式会社 Projection apparatus and program
US8826322B2 (en) 2010-05-17 2014-09-02 Amazon Technologies, Inc. Selective content presentation engine
JP6565409B2 (en) * 2015-07-17 2019-08-28 沖電気工業株式会社 Communication support apparatus, communication support method and program
JP2017169222A (en) * 2017-05-10 2017-09-21 合同会社IP Bridge1号 Interface device for designating link destination, interface device for viewer, and computer program


Similar Documents

Publication Publication Date Title
JP4772380B2 (en) A method to provide just-in-time user support
US8806355B2 (en) Method and apparatus for visualizing and navigating within an immersive collaboration environment
US6816887B1 (en) Method and apparatus for sending private messages within a single electronic message
US5943055A (en) Computer interface method and system
US8370745B2 (en) Method for video seamless contraction
CA2820108C (en) Annotation method and system for conferencing
US7260771B2 (en) Internet-based system for multimedia meeting minutes
US20040078435A1 (en) Method, computer program product and apparatus for implementing professional use of instant messaging
US20030105816A1 (en) System and method for real-time multi-directional file-based data streaming editor
JP5050060B2 (en) Shared space for communicating information
AU2001241645B2 (en) Communication system and method including rich media tools
EP2645267A1 (en) Application sharing
EP2192732A2 (en) System and method for synchronized authoring and access of chat and graphics
CN1119763C (en) Apparatus and method for collaborative dynamic video annotation
KR100220042B1 (en) Presentation supporting method and apparatus therefor
CN100444099C (en) Method for capturing picture, capturer and instant-telecommunication customer terminal
US20070022159A1 (en) conference recording system
DE69433189T2 (en) Display control for computer arrangement with collaboration
US7124372B2 (en) Interactive communication between a plurality of users
KR20150087405A (en) Providing note based annotation of content in e-reader
US7640502B2 (en) Presentation facilitation
US20020085030A1 (en) Graphical user interface for an interactive collaboration system
JP2013518351A (en) Web browser interface for spatial communication environment
US7496582B2 (en) Identification of relationships in an environment
US7458013B2 (en) Concurrent voice to text and sketch processing with synchronized replay

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20040917

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20041005

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20070115

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20070206

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20070329

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20070522

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20070719

RD01 Notification of change of attorney

Free format text: JAPANESE INTERMEDIATE CODE: A7421

Effective date: 20071003

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20071030

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20071112

R150 Certificate of patent or registration of utility model

Free format text: JAPANESE INTERMEDIATE CODE: R150

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20101130

Year of fee payment: 3

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20111130

Year of fee payment: 4

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20121130

Year of fee payment: 5

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20131130

Year of fee payment: 6

LAPS Cancellation because of no payment of annual fees