JP2005182460A - Information processor, annotation processing method, information processing program, and recording medium having information processing program stored therein - Google Patents


Publication number
JP2005182460A
Authority
JP
Japan
Prior art keywords
annotation
data
search key
document data
search
Legal status
Withdrawn
Application number
JP2003422341A
Other languages
Japanese (ja)
Inventor
Tadahiko Iijima
Tadashi Kimura
Kisho Sato
Original Assignee
Canon Inc
Application filed by Canon Inc
Priority to JP2003422341A
Publication of JP2005182460A
Application status: Withdrawn


Abstract

PROBLEM TO BE SOLVED: To extract search information from the corresponding part of electronic document data related to an added annotation, and to make it searchable, when annotation data is added to document data.
SOLUTION: The information processor is provided with: a document data storage for storing the document data; an annotation input means for inputting the annotation data (characters, graphics, and lines) in association with the document data; a means for registering the annotation data as a search keyword; a means for extracting search keywords (characters) from the target area corresponding to the annotation data and registering them; a means for listing the registered search keywords and displaying the relevant document data; a search means for searching by the registered keywords; and a means for displaying the retrieved document data on a screen together with the annotation data.
COPYRIGHT: (C)2005,JPO&NCIPI

Description

  The present invention relates to an information processing apparatus capable of inputting annotation data into document data. In particular, the present invention relates to an information processing apparatus having a function of searching based on information related to annotation data.

  Conventionally, information processing apparatuses of this type have provided search functions for registered documents, such as an annotation function, a bookmark function, and a marker function. The annotation function allows searching by registered character strings, including searching by partial matches of registered character strings. The bookmark function marks important pages and allows jumping to them from a bookmark list. The marker function marks a region of interest with a marker so that the corresponding page can be found and displayed by searching for the marker color used.

As a prior art example, Patent Document 1 proposes a document processing apparatus capable of registering an annotation image in association with a designated position in document data. In that proposal, the corresponding location in the document data is designated by the initial handwriting stroke, and the annotation image is created and registered from the second stroke onward.
JP 2000-250903 A

  In the conventional annotation data registration function, the annotation data itself added to the document can be used as search information to call a specific page of the document. Also in the prior art examples, it has been proposed to register and call document data and annotation images in association with each other using handwritten annotation images.

  However, in none of the cases described above has it been possible to search or recall pages using the information the user actually focused on, such as the characters or image information in the original data in the vicinity of where the annotation data was added.

  The present invention has been made in view of the above points, and provides an information processing apparatus, an annotation processing method, an information processing program, and a recording medium storing the information processing program in which, when annotation data is added to document data, search information is extracted from the corresponding portion of the electronic document data related to the added annotation so that retrieval can be performed on it.

  When annotation data is added to a document, the present invention obtains search information, using a character recognition function (OCR), from the region of the displayed document data that corresponds to the annotation data, and registers it. The document data and the corresponding annotation data can then be recalled in association with each other.

  Further, the registration target area on the display screen is changed according to the type of annotation data to be added, and search data is acquired from the vicinity of the annotation data input position without requiring any special operation.
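As an illustration only (not the claimed implementation), the following minimal Python sketch shows how an annotation event could yield the two kinds of search keys described above: the annotation itself, and text obtained by OCR from a region of the display screen chosen according to the annotation type. All names (register_annotation, target_rect_for, the margin value) and the injected OCR callable are assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

Rect = Tuple[int, int, int, int]  # left, top, right, bottom in screen pixels


@dataclass
class Annotation:
    kind: str        # "line", "square", "circle" or "character"
    rect: Rect       # bounding rectangle of the drawn annotation
    text: str = ""   # entered text for character annotations


@dataclass
class IndexEntry:
    number: int            # first search key: order of input
    kind: str
    annotation_text: str
    target_rect: Rect      # region of the display screen near the annotation
    target_text: str = ""  # second search key extracted from that region


def target_rect_for(a: Annotation, margin: int = 8) -> Rect:
    """Pick the capture region near the annotation (changes with its type)."""
    l, t, r, b = a.rect
    if a.kind == "line":
        # For an underline, look just above the line (the embodiment also
        # considers the text below; this sketch keeps only one side).
        return (l, t - 4 * margin, r, t)
    # For squares, circles and character notes, pad the drawn rectangle.
    return (l - margin, t - margin, r + margin, b + margin)


def register_annotation(index: list, a: Annotation,
                        screen_crop_ocr: Callable[[Rect], str]) -> IndexEntry:
    """Store both search keys: the annotation itself and OCR text from nearby."""
    region = target_rect_for(a)
    entry = IndexEntry(
        number=len(index) + 1,
        kind=a.kind,
        annotation_text=a.text,
        target_rect=region,
        target_text=screen_crop_ocr(region),  # second search key via OCR
    )
    index.append(entry)
    return entry


if __name__ == "__main__":
    def fake_ocr(rect: Rect) -> str:   # stand-in for a real OCR call
        return "Toyota and Honda"

    index: list = []
    register_annotation(index, Annotation("square", (100, 200, 300, 240)), fake_ocr)
    print(index[0].target_text)        # -> Toyota and Honda
```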

  That is, the present invention solves the above-described problems by including the following configurations.

  (1) An information processing apparatus comprising document data storage means for storing document data including characters, figures, images, and the like, display control means for displaying the contents of the document data storage on a display, and document input means for inputting the document data, the apparatus further comprising: display screen storage means for temporarily storing the display screen of the document data being displayed as image information; annotation input means for inputting annotation data corresponding to an arbitrary position on the display screen; first search key registration means for registering the annotation data as a first search key in association with the document data; annotation target area extraction means for extracting an annotation target area from the vicinity of the input position of the annotation data; second search key extraction means for extracting a second search key from the annotation target area; second search key registration means for registering the extracted second search key; and index storage means for storing both the first search key registered by the first search key registration means and the second search key registered by the second search key registration means.

  (2) The information processing apparatus according to (1), wherein the annotation data uses one or more of handwritten lines, straight lines, characters, and figures, and the annotation target area taken from the display screen is changed according to the type of the annotation data.

  (3) The information processing apparatus according to (1), further comprising annotation target image registration means for registering image data of the annotation target area in the display screen as a reduced image in association with the first search key.

  (4) The information processing apparatus according to (1), further comprising second search key extraction means for extracting search information from the annotation target area of the display screen storage means by a character recognition function (OCR).

  (5) The information processing apparatus according to (1), further comprising: document search means for searching for the corresponding position in the document data with reference to the index storage means registered in (1); and search result display control means for combining and displaying the annotation data together with the searched document data.

  (6) An information processing apparatus comprising: document search means for searching for the corresponding position in the document data with reference to the index storage means registered in (1); and search result display control means for combining and displaying the annotation data together with the searched document data.

  (7) An annotation processing method in an information processing apparatus comprising a document data storage step of storing document data including characters, figures, images, and the like, a display control step of displaying the contents of the document data storage on a display, and a document input step of inputting the document data, the method comprising: a display screen storage step of temporarily storing the display screen of the document data being displayed; an annotation input step of inputting annotation data corresponding to an arbitrary position on the display screen; a first search key registration step of registering the annotation data as a first search key in association with the document data; an annotation target area extraction step of extracting an annotation target area from the vicinity of the input position of the annotation data; a second search key extraction step of extracting a second search key from the annotation target area; a second search key registration step of registering the extracted second search key; and an index storage step of storing both the first search key registered in the first search key registration step and the second search key registered in the second search key registration step.

  (8) The annotation processing method according to (7), wherein the annotation data uses one or more of handwritten lines, straight lines, characters, and figures, and the annotation target area taken from the display screen is changed according to the type of the annotation data.

  (9) The annotation processing method according to (6) above, further comprising an annotation target image registration step of registering image data of the annotation target area in the display screen as a reduced image in association with the first search key.

  (10) The annotation processing method according to (7), further comprising a second search key extraction step of extracting search information from the annotation target area of the display screen storage by a character recognition function (OCR).

  (11) The annotation processing method according to (7), further comprising: a document search step of searching for the corresponding position in the document data with reference to the index storage registered in (7); and a search result display control step of combining and displaying the annotation data together with the searched document data.

  (12) An annotation processing method comprising: a document search step of searching for the corresponding position in the document data with reference to the index storage registered in (7); and a search result display control step of combining and displaying the annotation data together with the searched document data.

  (13) An information processing program comprising document data storage processing for storing document data including characters, figures, images, and the like, display control processing for displaying the contents of the document data storage on a display, and document input processing for inputting the document data, the program further comprising: a display screen storage process of temporarily storing the display screen of the document data being displayed; an annotation input process of inputting annotation data corresponding to an arbitrary position on the display screen; a first search key registration process of registering the annotation data as a first search key in association with the document data; an annotation target area extraction process of extracting an annotation target area from the vicinity of the input position of the annotation data; a second search key extraction process of extracting a second search key from the annotation target area; a second search key registration process of registering the extracted second search key; and an index storage process of storing both the first search key registered by the first search key registration process and the second search key registered by the second search key registration process.

  (14) The information processing program according to (13), wherein the annotation data uses one or more of handwritten lines, straight lines, characters, and figures, and the annotation target area taken from the display screen is changed according to the type of the annotation data.

  (15) The information processing program according to (13), further comprising an annotation target image registration process of registering image data of the annotation target area in the display screen as a reduced image in association with the first search key.

  (16) The information processing program according to (13), further comprising a second search key extraction process of extracting search information from the annotation target area of the display screen storage by a character recognition function (OCR).

  (17) The information processing program according to (13), further comprising: a document search process of searching for the corresponding position in the document data with reference to the index storage registered in (13); and a search result display control process of combining and displaying the annotation data together with the searched document data.

  (18) An information processing program comprising: a document search process of searching for the corresponding position in the document data with reference to the index storage registered in (11); and a search result display control process of combining and displaying the annotation data together with the searched document data.

  (19) A recording medium storing an information processing program, the program comprising document data storage processing for storing document data including characters, figures, images, and the like, display control processing for displaying the contents of the document data storage on a display, and document input processing for inputting the document data, and further comprising: a display screen storage process of temporarily storing the display screen of the document data being displayed; an annotation input process of inputting annotation data corresponding to an arbitrary position on the display screen; a first search key registration process of registering the annotation data as a first search key in association with the document data; an annotation target area extraction process of extracting an annotation target area from the vicinity of the input position of the annotation data; a second search key extraction process of extracting a second search key from the annotation target area; a second search key registration process of registering the extracted second search key; and an index storage process of storing both the first search key registered by the first search key registration process and the second search key registered by the second search key registration process.

  (20) A recording medium storing the information processing program according to (19), wherein the annotation data uses one or more of handwritten lines, straight lines, characters, and figures, and the annotation target area taken from the display screen is changed according to the type of the annotation data.

  (21) A recording medium storing the information processing program according to (19), further comprising an annotation target image registration process of registering image data of the annotation target area in the display screen as a reduced image in association with the first search key.

  (22) A recording medium storing the information processing program according to (19), further comprising a second search key extraction process of extracting search information from the annotation target area of the display screen storage by a character recognition function (OCR).

  (23) A recording medium storing the information processing program according to (19), further comprising a document search process of searching for the corresponding position in the document data with reference to the index storage registered in (19), and a search result display control process of combining and displaying the annotation data together with the searched document data.

  (24) A recording medium storing an information processing program comprising a document search process of searching for the corresponding position in the document data with reference to the index storage registered in (19), and a search result display control process of combining and displaying the annotation data together with the searched document data.

  According to the present invention, when annotation data is input, search information can be generated, without requiring any special operation, from a predetermined registration range, determined by the type of the annotation data, in the vicinity of the annotation data input position in the corresponding document data. The corresponding page can then be searched for and recalled without any separate registration operation for searching.

  Hereinafter, embodiments according to the present invention will be described in detail with reference to the accompanying drawings.

  In the following description of the drawings, to make the description easier to understand, the first search key defined in claim 1 is referred to as the annotation key, and the second search key is referred to as the annotation target key.

  FIG. 1 is a claim related diagram showing an annotation processing method by the information processing apparatus according to the present embodiment.

  In the drawing, the input operation means 100 is used to input characters and figures with an input device such as the keyboard or mouse described later, and to perform document creation, annotation input, search instructions, and the like.

  The input document data is stored in the document data storage unit 101, and the document data is displayed on the display device 103 by the display control unit 102. The display control unit 102 also displays annotation data stored in an index storage unit 108 (to be described later) in association with the document data in the document data storage unit 101.

  The display screen storage unit 104 temporarily stores the display screen of the document data displayed on the display device 103. The stored display screen is used as input data for extraction area determination and character recognition by the annotation target area extraction means 106 described later.

  The annotation key registration unit 105 registers the annotation data on the screen displayed on the display device 103. In this embodiment, a case where annotation data is input using handwritten lines, straight lines, characters, figures, etc. is shown as an example. The annotation data is input from the input operation means 100, and the annotation type, size, and text are stored in the index storage 108 according to the attribute.

  The annotation target area extraction means 106 is a means for determining the capture target area in the display screen corresponding to the input annotation data. In the present embodiment, different target areas can be captured depending on the type of annotation data, such as lines, characters, and figures. The annotation target image registration unit 107 registers a predetermined area of the display screen corresponding to the annotation data in the index storage unit 108 as a reduced image.

  Annotation target key extraction means 109 is means for extracting a search key from the target area on the display screen obtained by the annotation target area extraction means 106. The annotation target key registration unit 110 is a unit that registers the extracted search key in the index storage unit 108.

  The document search unit 111 instructs a search for the corresponding document using the search data registered in the index storage unit 108. The document search unit 111 executes the search in response to an operation from the input operation means 100 and, referring to the index storage unit 108, stores a list of the corresponding annotation data in the search result storage 112.

  The search result is displayed on the display device 103 by the search result display control means 113 via the display control means 102. When the corresponding document data of the search result is selected, the annotation data stored in the index storage 108 and the corresponding page of the document data are combined and displayed.

  FIG. 2 is a functional block diagram showing the configuration of the information processing apparatus for explaining the annotation processing according to this embodiment. In the figure, a CPU 20 is a microprocessor and executes programs such as document data display, annotation data input, retrieval information extraction, and index registration.

  The keyboard / mouse 21 is an input device for inputting characters, specifying an input position, and the like.

  The memory 22 is a random access memory and stores and executes a program describing a series of processing procedures such as input of annotation data and registration in an index according to this embodiment. It is also used as temporary storage required during program execution. The display 23 is a display device for displaying document data, annotation data, search results, and the like. The document data storage device 24 stores document data for display.

  The index data storage device 25 is a storage device that stores annotation data and search information extracted from document data when the annotation data is input. The DISK 26 is an external storage device for storing and calling document data.

  The system bus 27 connects and controls the various devices used in the present embodiment.

  FIG. 3 is a display example of the embodiment according to the present invention.

  Reference numeral 30 denotes an example of a state in which document data is displayed, and this display state is an initial display example of an annotation. 31 is an example of an annotation input, and an annotation tool 32 for inputting an annotation is displayed.

  By pressing a button such as a straight line of the annotation tool 32, various types of annotation input can be performed.

  When the pen button of the annotation tool 32 is pressed, the pen input mode is entered, and handwriting line input is possible by clicking and dragging.

  When a straight line, square, or circle button of the annotation tool 32 is pressed, a corresponding input mode is set. Lines, squares, and circles are entered by clicking and dragging to specify the start and end points.

  When the character button of the annotation tool 32 is pressed, the character input mode is set. For character input, an annotation target is selected in the same manner as in the drawing of a square, and after the next click, a text input box is displayed at the clicked position, and the character can be input by the input device.

  In the annotation input example 31, the square tool is selected from the annotation tool 32, and the square 33 is drawn by clicking and dragging.

  FIG. 4 is an example of annotation target extraction corresponding to various annotation inputs and annotation types.

  Reference numeral 40 denotes an example of annotation input by a straight line on text in document data. In the straight line annotation input example 40, a straight line annotation is drawn under "IM Team Fardenand", and the broken-line rectangular portion is extracted as the annotation target by the straight line annotation target extraction S206 described later.

  Reference numeral 41 denotes an example of annotation input by a square on text and a figure in document data. In the square annotation input example 41, "IM Team Fardenand" and the car in the image are annotated with squares, and the broken-line rectangular portions are extracted as annotation targets by the square annotation target extraction S208 described later.

  Reference numeral 42 denotes an example of annotation input by a circle on text and a figure in document data. In the circle annotation input example 42, "IM Team Fardenand" and the car in the image are annotated with circles, and the broken-line rectangular portions are extracted as annotation targets by the circle annotation target extraction S210 described later.

  Reference numeral 43 denotes an example of annotation input by characters on text and a figure in document data. In the character annotation input example 43, "IM Team Fardenand" and the car in the image are annotated with characters, and the broken-line rectangular portions are extracted as annotation targets by the character annotation target extraction S212 described later.

  In the case of the pen mode, which is one of the annotation input means, a handwritten line is input by clicking and dragging; it is recognized as a straight line, square, circle, or character by the handwritten annotation recognition process S202 described later, and the corresponding extraction process is then performed.

  FIG. 5 is an example of an index generated from an annotation target. The index consists of an annotation key, an annotation target, and an annotation target key. The annotation key consists of the annotation number assigned in input order, the annotation type, the annotation size, and the annotation text. The annotation target is an image obtained by cutting out part of the annotation target range from the display screen storage 104. The annotation target key consists of the page number, line number, digit number, and number of characters, which are position information about where the annotation was made, and the text included in the annotation target.
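Restated as data, an index record of the kind shown in FIG. 5 could be modeled as follows; the class and field names are illustrative assumptions, but the fields mirror the annotation key, annotation target, and annotation target key just described.

```python
from dataclasses import dataclass


@dataclass
class AnnotationKey:
    number: int        # order in which the annotation was input
    kind: str          # "line", "square", "circle", "character", ...
    size: str          # e.g. "small", "medium", or character count for text notes
    text: str          # entered text (blank for shape annotations)


@dataclass
class AnnotationTargetKey:
    page: int          # position of the target in the document data
    line: int
    digit: int
    char_count: int    # 0 / blank when the target is an image
    text: str          # OCR result, e.g. "Toyota and Honda"


@dataclass
class IndexRecord:
    key: AnnotationKey
    target_image: str               # file name of the clipped image, e.g. "character 1"
    target_key: AnnotationTargetKey
```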

  Reference numerals 50 and 51 are examples of states after annotation with straight lines, squares, circles, and characters. The post-annotation state example 50 is the first page of the document data, the post-annotation state example 51 is the second page of the document data, and the index example generated thereby is 55.

  Since the annotation 52 is the first annotation made, its annotation number is "1". The annotation type of the annotation 52 is registered as "square", and the size is registered as "medium". Since the annotation 52 is a square annotation, the annotation text is blank. Because text is extracted from the annotation target by the text extraction process S213 described later, the clipped image of the annotation target of the annotation 52 is registered as "character 1", the text extraction result "Toyota and Honda" is registered as the text of the annotation target key, and the number of characters "10" is registered. The page number, line number, and digit number of the annotation target key of the annotation 52 are also registered as the annotation target key, based on the position information of the annotation target extracted by the square annotation target extraction S208 described later.

  Since the annotation 53 is the second annotation input, its annotation number is "2". The annotation type of the annotation 53 is registered as "circle", and the size is registered as "small". Since the annotation 53 is a circle, the annotation text is blank. Because no text is extracted from the annotation target by the text extraction process S213 described later, the clipped image of the annotation target is registered as "image 1". Since the annotation target of the annotation 53 is an image, the text and the number of characters of the annotation target key are blank. The page number, line number, and digit number of the annotation target key of the annotation 53 are also registered as the annotation target key, based on the position information of the annotation target extracted by the circle annotation target extraction S210 described later.

  If the annotation 54 and the annotation 55 are the third and fourth input annotations, respectively, the index is registered in the same manner as the annotation 52.

  If the annotation 56 is the fifth input annotation, the annotation number of the annotation 56 is “5”. The annotation type of the annotation 56 is registered as “character”, the text of the annotation key is registered as the input character “cooperation”, and the size is registered as “2” which is the number of characters of the text of the annotation key. The annotation target and annotation target key of the annotation 56 are registered in the same manner as the annotation 52.

  FIG. 6 is an example showing the flow of the search operation.

  Reference numeral 600 denotes an example in which a list display mode and a search mode are selected. The selection result in the mode selection 600 is reflected in a search instruction S300 described later.

  Reference numeral 601 denotes an example displayed when the list display mode is selected in the mode selection 600. Reference numeral 602 denotes a tab for selecting the display mode of the display target index, which includes a list display mode and an image display mode. In the list display mode example 601, the list display mode is selected. A list display mode and an image display mode can be switched at any time by a display mode selection tab 602. Reference numeral 603 denotes a display list of display target indexes. In the list display mode example 601, a part of the above-described index example 55 is displayed. Reference numeral 604 denotes a scroll bar for scrolling the displayed list. By operating the scroll bar 604, a display target index that is not displayed can be displayed.

  In 605, the annotation target of the annotation selected from the display target index is displayed as a preview. In the list display example 601, the annotation number 5 is selected in the display list 603, and the annotation target with the annotation number 5 is displayed in the annotation target preview 605. When one of the annotations in the display list 603 is double-clicked or the annotation target preview 605 is clicked, a screen 610 in which the annotation is added to the annotation target document data is displayed.

  606 is an example displayed when the list display mode is selected by the mode selection 600 as in the list mode display example 601. The list display mode display example 606 is different from the list display mode display example 601 in that the image display mode is selected on the display mode selection tab 602.

  In the list display mode display example 606, since the image display mode is selected in the display mode selection tab 602, a list of annotations is displayed side by side with a part of the annotation target preview and the text of the annotation target key. When the annotation target preview is clicked, a screen 610 in which an annotation is added to the annotation target document data is displayed.

  Reference numeral 607 denotes a search mode display example in which search is selected in the mode selection 600. Reference numeral 608 denotes a search item selection unit, which can select a search item or a combination of search items. A list display is also provided in the search item selection unit 608; when it is selected, the same display as the list display mode display examples 601 and 606 is shown. Reference numeral 609 denotes a search key input unit for entering the text or annotation type to be searched for. The search key input unit 609 provides an input interface suited to the search item selected in the search item selection unit 608. The indexes that match the search become the display indexes and are displayed as a list or as images in the same manner as in the list display mode display examples 601 and 606, with the same operations available. When a search item is selected and a search key is entered while search results are displayed, a narrowing search is performed on the display indexes that constitute the current search results.

  FIG. 7 shows a flowchart of the present embodiment which is an annotation processing method in the information processing apparatus.

  The start in FIG. 7 indicates that the annotation processing in the information processing apparatus begins in a state where power is supplied to each device used for the annotation processing shown in FIG. 2 and the operating system has been read from the DISK 26.

  After the annotation processing in the information processing apparatus starts, the user first selects one of the document data reading mode, the annotation mode, and the search mode in S10. A document reading process is performed in the document reading mode, an annotation adding process is performed on the document in the annotation mode, and in the search mode a search is performed on the index stored in the index data storage device 25.

  In S11, it is determined whether the processing mode is the document reading mode. If so, the document data is selected in S12. In the document data selection S12, the document data that is active on the operating system is selected. At the same time, if no index data storage area corresponding to the selected document data exists in the index data storage device 25, a storage area is created in the index data storage device. In this embodiment, annotation and search for a single document data are described for ease of understanding.

  Alternatively, by registering in the index which document each index entry belongs to, it is possible to avoid providing a separate index data storage area for each document in the index data storage device.

  In S13, it is determined whether the processing mode is the annotation mode. If it is the annotation mode, the index registration in S14 is performed. In the index registration S14, an annotation input process is performed, and the system generates and registers an index based on the input annotation data.

  In S15, it is determined whether the processing mode is the search mode. If so, the search process in S16 is performed. In the search process S16, the user selects either the list mode or the search mode and searches for the desired annotation in the selected mode; based on the search result, the document data is read and combined with the annotation data.

  In S17, the processing result of each processing mode is displayed on the display 23.

  In S18, the end of the process is determined. If the process is not terminated, the process returns to the process mode selection S10. If the process is terminated, the annotation process in the information processing apparatus is terminated.

  FIG. 8 is a flowchart illustrating a processing example when the index registration processing S14 is performed.

  In S200, an annotation input process is performed by the annotation input shown in FIG. 3.

  In S201, it is determined whether or not the input annotation is a handwritten annotation in the pen mode in which the pen button is pressed by the annotation tool 32. If it is a handwritten annotation, the handwritten annotation recognition process in S202 is performed.

  In the handwritten annotation recognition process S202, handwritten annotation data is recognized using a figure recognition system and a character recognition system. Through the handwritten annotation recognition process S202, handwritten annotation data is converted into the same form of annotation data as annotations made with the drawing tools, and is used as the annotation data in the subsequent processing.
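The embodiment only assumes that "a figure recognition system and a character recognition system" exist. As a purely hypothetical stand-in, a single handwritten stroke could be sorted into straight line, square, or circle from its geometry, with anything else deferred to character recognition; the thresholds below are illustrative guesses, not values from the patent.

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]


def classify_stroke(points: List[Point]) -> str:
    """Very rough shape guess for a single handwritten stroke."""
    if len(points) < 3:
        return "line"
    path = sum(math.dist(points[i], points[i + 1]) for i in range(len(points) - 1))
    if path == 0:
        return "character"
    chord = math.dist(points[0], points[-1])
    if chord / path > 0.95:          # barely deviates from the straight chord
        return "line"
    if chord / path < 0.2:           # start and end meet: a closed shape
        # Shoelace area of the stroke polygon vs. its bounding box:
        area = 0.5 * abs(sum(x0 * y1 - x1 * y0
                             for (x0, y0), (x1, y1) in zip(points, points[1:])))
        xs, ys = [p[0] for p in points], [p[1] for p in points]
        box = (max(xs) - min(xs)) * (max(ys) - min(ys))
        if box == 0:
            return "character"
        fill = area / box            # circle ~ pi/4 = 0.79, square ~ 1.0
        return "square" if fill > 0.88 else "circle"
    return "character"               # anything else goes to character recognition


if __name__ == "__main__":
    circle = [(50 + 40 * math.cos(t / 20 * 2 * math.pi),
               50 + 40 * math.sin(t / 20 * 2 * math.pi)) for t in range(21)]
    print(classify_stroke(circle))   # -> circle
```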

  In S203, the annotation number assigned according to the order in which the annotation data was input, the annotation type, the annotation size, and the annotation text are registered in the index data storage device 25 as the annotation key.

  In S204, the screen displayed on the display 23 is read as an image including the annotation target. In the annotation target reading process S204, besides reading the displayed screen as an image including the annotation target, it is also conceivable to take out, as data, the portion of the document data selected in the document data selection S12 that is currently displayed on the display; since the subsequent processing differs from that of the method above, that approach is described later as another embodiment.

  From S205 to S212, based on the annotation data, the annotation target is extracted as a rectangular image from the image that includes it, and position information in the annotation target document data is obtained by taking the correspondence between the position of the rectangle on the screen and the document data.

  In S205, it is determined whether the annotation data type is a straight line. If so, the straight line annotation target extraction process S206 is performed. In the straight line annotation target extraction process S206 of the present embodiment, the target of a straight line annotation is taken to be the text above or below the line; starting from the length of the line, the annotation target range is expanded upward and downward until text is extracted, and the nearer text is extracted as the annotation target. At the same time, the correspondence between the position of the rectangle on the screen and the document data is taken, and the position information in the annotated document data is acquired.

  In S207, it is determined whether the annotation data type is a square, and if the annotation type is a square, a square annotation target extraction process S208 is performed. In the square annotation target extraction process S208 in the present embodiment, it is assumed that the annotation is performed on the inside of the square, and the rectangle surrounding the square is set as the annotation target, and the annotation target is cut out from the image including the annotation target as a rectangular image. At the same time, the correspondence between the position on the rectangular screen and the document data is taken, and the position information in the document data to be annotated is acquired. In this embodiment, an allowable range of annotation input error is provided by providing a margin proportional to the size of the square between the annotation by the square and the rectangle.

  In S209, it is determined whether the annotation data type is a circle. If the annotation type is a circle, a circle annotation target extraction process S210 is performed. In the circle annotation target extraction process S210 in the present embodiment, it is assumed that an annotation is performed on the inside of the circle, and a rectangle surrounding the circle is set as the annotation target, and the annotation target is cut out from the image including the annotation target as a rectangular image. At the same time, the correspondence between the position on the rectangular screen and the document data is taken, and the position information in the document data to be annotated is acquired. In this embodiment, an allowable range of annotation input error is provided by providing a margin proportional to the size of the circle between the circle annotation and the rectangle.
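A sketch of the square and circle cases (S208, S210), assuming the display screen has already been captured as a Pillow image: the drawn shape's bounding box is padded by a margin proportional to its size (the tolerance for input error mentioned above), clipped to the screen, and cropped. The 10% margin ratio is an assumed value for illustration.

```python
from typing import Tuple

from PIL import Image

Rect = Tuple[int, int, int, int]   # left, top, right, bottom


def annotation_target_rect(shape_rect: Rect, screen_size: Tuple[int, int],
                           margin_ratio: float = 0.1) -> Rect:
    """Bounding box of a square/circle annotation, padded proportionally to its size."""
    l, t, r, b = shape_rect
    mx = int((r - l) * margin_ratio)    # tolerance for imprecise input
    my = int((b - t) * margin_ratio)
    w, h = screen_size
    return (max(0, l - mx), max(0, t - my), min(w, r + mx), min(h, b + my))


def crop_annotation_target(screen: Image.Image, shape_rect: Rect) -> Image.Image:
    """Cut the annotation target out of the captured display screen as a rectangle."""
    return screen.crop(annotation_target_rect(shape_rect, screen.size))


if __name__ == "__main__":
    screen = Image.new("RGB", (1024, 768), "white")   # stand-in for the stored screen
    target = crop_annotation_target(screen, (100, 200, 300, 260))
    print(target.size)   # -> (240, 72)
```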

  In S211, it is determined whether the annotation data type is a character. If the annotation type is a character, the character annotation target extraction process S212 is performed; if not, the index registration ends. In this embodiment, in the case of a character annotation, the annotation target is selected in the same manner as drawing a square before the character is entered, so the annotation target extraction is performed based on that drawn square. At the same time, the correspondence between the position of the annotated rectangle on the screen and the document data is taken, and the position information in the annotated document data is acquired. In this embodiment, however, since the image containing both the character annotation and the annotation target extracted by the character annotation target extraction process S212 is used as the annotation target for index registration, a rectangular image is cut out so as to include both the character annotation and the extracted annotation target.

  Alternatively, the character annotation target range may be set to text or an image in the vicinity of the characters. In that case, the character annotation target extraction process S212 searches for text or an image while expanding the annotation target range evenly in the vertical and horizontal directions from the character annotation; when part of a text or image is found, the annotation target range is enlarged so that the whole of it becomes the target range, and it then becomes unnecessary to specify the annotation target range with a square when inputting the character annotation. However, since the annotation target extracted in this way is expected to be part of a sentence or phrase, it is effective to combine this with a process that determines, by matching against a dictionary or by morphological or syntactic analysis, whether the text included in the annotation target is meaningful, and a process that expands the annotation target until meaningful text is extracted.

  In S213, processing for extracting text from the annotation target is performed. In this embodiment, since the annotation target is extracted as an image, it is performed by character recognition processing such as OCR.
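The patent specifies only "character recognition processing such as OCR" for S213. One possible stand-in, shown here as an assumption rather than the actual implementation, is to hand the cropped annotation target to Tesseract through the pytesseract wrapper (which assumes the Tesseract engine and the pytesseract package are installed) and normalize the result before registering it as the annotation target key.

```python
import pytesseract               # assumes the Tesseract OCR engine is installed
from PIL import Image


def extract_target_text(target: Image.Image, lang: str = "eng") -> str:
    """OCR the clipped annotation target and collapse whitespace."""
    raw = pytesseract.image_to_string(target, lang=lang)
    return " ".join(raw.split())


# Usage sketch: the text and its length become the annotation target key (S215).
# text = extract_target_text(cropped_image)
# char_count = len(text)
```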

  In S214, the extracted annotation target is registered in the index data storage device 25 as the annotation target. When text has been extracted in the text extraction process S213, the image file of the annotation target is named with the word "character" followed by the serial number of annotation targets that contain text; when no text has been extracted, it is named with the word "image" followed by the serial number of annotation targets that contain no text.

  In S215, the annotation target key is registered in the index data storage device 25. In the annotation target key registration process S215, it is determined whether text was extracted from the annotation target; if so, the text is registered as the text of the annotation target key and its character count is registered as the number of characters of the annotation target key. At the same time, the position information in the annotation target document data is registered as the page number, line number, and digit number of the annotation target key.

  FIG. 9 is a flowchart illustrating a processing example when the search processing S16 is performed.

  In S300, the index data is read from the index storage device 25 and set as the corresponding index list.

  In S301, based on the search operation input by the search operation shown in FIG. 6, it is instructed whether to search using the annotation key, search using the annotation target key, or search using the document data position.

  When the list mode is selected in the mode selection 600, or when the list display is selected in the search item selection unit 608 in the search mode, no search is performed; when something other than the list display is selected in the search item selection unit 608 in the search mode, the corresponding search is instructed.

  In S302, it is determined whether the search instruction S301 is a search using the annotation key. If so, the user inputs the annotation key serving as the search key in S303, and the corresponding index list editing S304 is performed. In the corresponding index list editing S304, the list is edited so that only the indexes matching the search remain.

  In S305, it is determined whether the search instruction S301 is a search using the annotation target key. If so, the user inputs the annotation target key serving as the search key, and the corresponding index list editing S304 is performed.

  In S307, it is determined whether the search instruction S301 is a search based on the document data position. If so, the user inputs the document data position serving as the search key in S308, the input search key is matched against the index data, and the corresponding index list editing S304 is performed.

  In S309, it is determined whether the search results are to be displayed in the list display mode. If so, the corresponding index list is displayed as a list; if not, the corresponding index list is displayed as images in S311.

  In S312, the user inputs the next operation.

  In S313, it is determined whether the operation input by the user in the operation input S312 is a narrow search. If the search is a narrow search, the corresponding index list is set as a search target and the process returns to the search instruction S301. If the operation input S312 is not a narrow search, it is determined in S314 whether the user has selected an index from the corresponding index list. If the user has selected an index from the corresponding index list, the search process is terminated, and if it has not been selected, it is determined whether the search is terminated in S315.

  In the search end determination S315, if the search is completed, the process ends. If not, the process returns to the index data reading S300.
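Stripped of the user interface, the search flow of FIG. 9 amounts to filtering the index and, for a narrowing search, filtering the previous result set again. The following sketch operates on dictionary records shaped like the index of FIG. 5; the field names are assumptions made for the example.

```python
from typing import Dict, List, Optional

IndexEntry = Dict[str, object]


def search(entries: List[IndexEntry], *,
           kind: Optional[str] = None,
           ann_text: Optional[str] = None,
           target_text: Optional[str] = None,
           page: Optional[int] = None) -> List[IndexEntry]:
    """Keep only the entries that satisfy every supplied criterion (S302-S308, S304)."""
    hits = entries
    if kind is not None:
        hits = [e for e in hits if e.get("kind") == kind]
    if ann_text is not None:
        hits = [e for e in hits if ann_text in str(e.get("ann_text", ""))]
    if target_text is not None:
        hits = [e for e in hits if target_text in str(e.get("target_text", ""))]
    if page is not None:
        hits = [e for e in hits if e.get("page") == page]
    return hits


if __name__ == "__main__":
    index = [
        {"number": 1, "kind": "square", "ann_text": "",
         "target_text": "Toyota and Honda", "page": 1},
        {"number": 5, "kind": "character", "ann_text": "cooperation",
         "target_text": "Toyota and Honda", "page": 2},
    ]
    hits = search(index, target_text="Toyota")     # search by annotation target key
    narrowed = search(hits, kind="character")      # narrowing search (S313)
    print([e["number"] for e in narrowed])         # -> [5]
```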

[Other embodiments]
In the embodiment of the present invention, the case where the search keyword is extracted by optical character recognition (OCR) from the image information of the display screen has been described. However, since the positional correspondence between the annotation data input position on the screen and the document data can easily be established by a known method using an index table, the search keyword can also be obtained by extracting it directly from the document data.

FIG. 1 is a claim-related diagram for explaining the features of the present invention.
FIG. 2 is a functional block diagram showing the configuration of the present embodiment.
FIG. 3 shows the initial display and an example of annotation data input in the present embodiment.
FIG. 4 is an explanatory diagram of the operation according to the type of annotation data in the present embodiment.
FIG. 5 shows the relationship between the document data and the index storage in the present embodiment.
FIG. 6 illustrates the flow of the search operation in the present embodiment.
FIG. 7 is the main flowchart for explaining the operation of the present embodiment.
FIG. 8 is a flowchart explaining the annotation data input processing in the present embodiment.
FIG. 9 is a flowchart explaining the search processing in the present embodiment.

Explanation of symbols

20 Microprocessor (CPU)
21 Input device such as keyboard and mouse
22 Random access memory
23 Display
24 Document data storage storing document data input from the input device
25 Index storage storing annotation data and search keys
26 External storage device such as FD, CD-ROM, or hard disk
27 System bus connecting each device

Claims (24)

  1. In an information processing apparatus comprising document data storage means for storing document data including characters, figures, images, etc., display control means for displaying the contents of the document data storage on a display, and document input means for inputting the document data,
    Display screen storage means for temporarily storing the display screen of the document data being displayed as image information;
    Annotation input means for inputting annotation data corresponding to an arbitrary position on the display screen,
    First search key registration means for registering the annotation data in association with the document data as a first search key;
    Annotation target area extracting means for extracting the annotation target area from the vicinity of the input position of the annotation data;
    Second search key extraction means for extracting a second search key from the annotation target area;
    Second search key registration means for registering the extracted second search key;
    and index storage means for storing both the first search key registered by the first search key registration means and the second search key registered by the second search key registration means.
  2.   The information processing apparatus according to claim 1, wherein the annotation data uses one or more of handwritten lines, straight lines, characters, and figures, and the annotation target area taken from the display screen is changed according to the type of the annotation data.
  3.   2. The information processing apparatus according to claim 1, further comprising annotation target image registration means for registering image data of the annotation target area in the display screen in association with the first search key as a reduced image.
  4.   The information processing apparatus according to claim 1, further comprising second search key extraction means for extracting search information from the annotation target area of the display screen storage means by a character recognition function (OCR).
  5.   The information processing apparatus according to claim 1, further comprising: document search means for searching for the corresponding position in the document data with reference to the index storage means registered in claim 1; and search result display control means for combining and displaying the annotation data together with the searched document data.
  6.   An information processing apparatus comprising: document search means for searching for the corresponding position in the document data with reference to the index storage means registered in claim 1; and search result display control means for combining and displaying the annotation data together with the searched document data.
  7. In an annotation processing method comprising a document data storage step of storing document data including characters, graphics, images, etc., a display control step of displaying the contents of the document data storage on a display, and a document input step of inputting the document data,
    A display screen storage step for temporarily storing a display screen of the document data being displayed;
    An annotation input process for inputting annotation data corresponding to an arbitrary position on the display screen,
    A first search key registration step of registering the annotation data in association with the document data as a first search key;
    An annotation target area extracting step of extracting an annotation target area from the vicinity of the input position of the annotation data;
    A second search key extracting step of extracting a second search key from the annotation target area;
    A second search key registration step of registering the extracted second search key;
    and an index storage step of storing both the first search key registered in the first search key registration step and the second search key registered in the second search key registration step.
  8.   The annotation processing method according to claim 7, wherein the annotation data uses one or more of handwritten lines, straight lines, characters, and figures, and the annotation target area taken from the display screen is changed according to the type of the annotation data.
  9.   The annotation processing method according to claim 6, further comprising an annotation target image registration step of registering image data of the annotation target area in the display screen as a reduced image in association with the first search key.
  10.   The annotation processing method according to claim 7, further comprising a second search key extraction step of extracting search information from the annotation target area of the display screen storage by a character recognition function (OCR).
  11.   The annotation processing method according to claim 7, further comprising: a document search step of searching for the corresponding position in the document data with reference to the index storage registered in claim 7; and a search result display control step of combining and displaying the annotation data together with the searched document data.
  12.   An annotation processing method comprising: a document search step of searching for the corresponding position in the document data with reference to the index storage registered in claim 7; and a search result display control step of combining and displaying the annotation data together with the searched document data.
  13. An information processing program comprising: document data storage processing for storing document data including characters, graphics, images, etc.; display control processing for displaying the contents of the document data storage on a display; document input processing for inputting the document data;
    Display screen storage processing for temporarily storing the display screen of the document data being displayed;
    Annotation input processing that inputs annotation data corresponding to any position on the display screen,
    A first search key registration process for registering the annotation data in association with the document data as a first search key;
    An annotation target area extraction process for extracting an annotation target area from the vicinity of the input position of the annotation data;
    A second search key extraction process for extracting a second search key from the annotation target area;
    A second search key registration process for registering the extracted second search key;
    and an index storage process for storing both the first search key registered by the first search key registration process and the second search key registered by the second search key registration process.
  14.   The information processing program according to claim 13, wherein the annotation data uses one or more of handwritten lines, straight lines, characters, and figures, and the annotation target area taken from the display screen is changed according to the type of the annotation data.
  15.   The information processing program according to claim 13, further comprising an annotation target image registration process for registering image data of the annotation target area in the display screen as a reduced image in association with the first search key.
  16.   The information processing program according to claim 13, further comprising a second search key extraction process for extracting search information from the annotation target area of the display screen storage by a character recognition function (OCR).
  17.   The information processing program according to claim 13, further comprising: a document search process for searching for the corresponding position in the document data with reference to the index storage registered in claim 13; and a search result display control process for combining and displaying the annotation data together with the searched document data.
  18.   An information processing program comprising: a document search process for searching for the corresponding position in the document data with reference to the index storage registered in claim 11; and a search result display control process for combining and displaying the annotation data together with the searched document data.
  19. A recording medium storing an information processing program, the program comprising:
    document data storage processing for storing document data including characters, graphics, and images;
    display control processing for displaying the contents of the document data storage on a display;
    document input processing for inputting the document data;
    display screen storage processing for temporarily storing the display screen of the document data being displayed;
    annotation input processing for inputting annotation data at an arbitrary position on the display screen;
    a first search key registration process for registering the annotation data, in association with the document data, as a first search key;
    an annotation target area extraction process for extracting an annotation target area from the vicinity of the input position of the annotation data;
    a second search key extraction process for extracting a second search key from the annotation target area;
    a second search key registration process for registering the extracted second search key; and
    an index storage process for storing both the first search key registered by the first search key registration process and the second search key registered by the second search key registration process.
  20.   A recording medium storing an information processing program according to claim 19, wherein the annotation data uses one or more of handwritten lines, straight lines, characters, and figures, and the annotation target area extracted from the display screen is changed according to the type of the annotation data.
  21.   A recording medium storing an information processing program according to claim 19, the program further comprising an annotation target image registration process for registering image data of the annotation target area in the display screen as a reduced image in association with the first search key.
  22.   A recording medium storing an information processing program according to claim 19, the program further comprising a second search key extraction process for extracting search information from the annotation target area of the display screen storage by a character recognition function (OCR).
  23.   A recording medium storing an information processing program according to claim 19, the program further comprising: a document search process for searching for a corresponding position in the document data with reference to the index storage registered in claim 19; and a search result display control process for combining the annotation data with the searched document data and displaying them.
  24.   A recording medium storing an information processing program, the program comprising: a document search process for searching for a corresponding position in the document data with reference to the index storage registered in claim 19; and a search result display control process for combining the annotation data with the searched document data and displaying them.
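Finally, a compact, self-contained Python usage sketch tying registration and search together in the spirit of claims 13 and 17; the document name, page number, and text strings are invented for the example:

index = {}   # search key -> list of (document id, page, annotation text)

def register(index, doc_id, page, annotation_text, target_area_text):
    # both the annotation itself (first key) and each word found in the
    # annotation target area (second keys) point at the same annotated spot
    for key in [annotation_text, *target_area_text.split()]:
        index.setdefault(key, []).append((doc_id, page, annotation_text))

def search(index, query):
    return [hit for key, hits in index.items() if query in key for hit in hits]

register(index, "contract.pdf", 3, "check this", "payment due within 30 days")
print(search(index, "payment"))   # one hit: page 3 of contract.pdf with its annotation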
JP2003422341A 2003-12-19 2003-12-19 Information processor, annotation processing method, information processing program, and recording medium having information processing program stored therein Withdrawn JP2005182460A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2003422341A JP2005182460A (en) 2003-12-19 2003-12-19 Information processor, annotation processing method, information processing program, and recording medium having information processing program stored therein

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2003422341A JP2005182460A (en) 2003-12-19 2003-12-19 Information processor, annotation processing method, information processing program, and recording medium having information processing program stored therein

Publications (1)

Publication Number Publication Date
JP2005182460A true JP2005182460A (en) 2005-07-07

Family

ID=34783251

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2003422341A Withdrawn JP2005182460A (en) 2003-12-19 2003-12-19 Information processor, annotation processing method, information processing program, and recording medium having information processing program stored therein

Country Status (1)

Country Link
JP (1) JP2005182460A (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007233695A (en) * 2006-03-01 2007-09-13 Just Syst Corp Annotation management device, web display terminal, annotation management method and web display method
JP2008234203A (en) * 2007-03-19 2008-10-02 Ricoh Co Ltd Image processing apparatus
JP2010525497A (en) * 2007-05-11 2010-07-22 ジェネラル・インスツルメント・コーポレーションGeneral Instrument Corporation Method and apparatus for annotating video content with metadata generated using speech recognition technology
JP2009098763A (en) * 2007-10-15 2009-05-07 Hitachi Ltd Handwritten annotation management apparatus and interface
JP2009187401A (en) * 2008-02-07 2009-08-20 Canon Inc Document management system, document management apparatus, and document managing method and program
JP2010134876A (en) * 2008-12-08 2010-06-17 Canon Inc Information processing device and method
JP2010200225A (en) * 2009-02-27 2010-09-09 Sharp Corp Image forming apparatus and program
JP2011170418A (en) * 2010-02-16 2011-09-01 Lenovo Singapore Pte Ltd Method for generating tag data for retrieving image
US20120154436A1 (en) * 2010-12-21 2012-06-21 Casio Computer Co., Ltd Information display apparatus and information display method
JP2012133060A (en) * 2010-12-21 2012-07-12 Casio Comput Co Ltd Information display device and information display program
CN102708108A (en) * 2010-12-21 2012-10-03 卡西欧计算机株式会社 Information display apparatus and information display method
JP2014186366A (en) * 2013-03-21 2014-10-02 Toshiba Corp Commodity comparison device, method and program
JP2015148951A (en) * 2014-02-06 2015-08-20 シャープ株式会社 handwriting input device and handwriting input method
JP2016009418A (en) * 2014-06-26 2016-01-18 京セラドキュメントソリューションズ株式会社 Document processor and document processing program

Similar Documents

Publication Publication Date Title
US5960448A (en) System and method for displaying a graphically enhanced view of a region of a document image in which the enhanced view is correlated with text derived from the document image
JP4229507B2 (en) Method and system for generating document summaries using location information
KR910002745B1 (en) Apparatus for searching information
CN1928865B (en) Method and apparatus for synchronizing, displaying and manipulating text and image documents
FI124000B (en) Method and arrangement for processing data retrieval results
US5832474A (en) Document search and retrieval system with partial match searching of user-drawn annotations
US6658408B2 (en) Document information management system
JP5791861B2 (en) Information processing apparatus and information processing method
KR100489913B1 (en) Document display system and electronic dictionary
JP4746136B2 (en) Rank graph
JP2006164254A (en) System, device, method and program for indicating video search result
US6002798A (en) Method and apparatus for creating, indexing and viewing abstracted documents
US20050261891A1 (en) System and method for text segmentation and display
JP4445985B2 (en) Information processing apparatus and document search method
US5799325A (en) System, method, and computer program product for generating equivalent text files
US20020138476A1 (en) Document managing apparatus
JP3478725B2 (en) Document information management system
JP2007265251A (en) Information retrieval device
US5623679A (en) System and method for creating and manipulating notes each containing multiple sub-notes, and linking the sub-notes to portions of data objects
DE102012202558A1 (en) Generation of a query from displayed text documents using virtual magnets
DE69630928T2 (en) Device and method for displaying a translation
JP2004334334A (en) Document retrieval system, document retrieval method, and storage medium
US5809498A (en) Method of locating a penstroke sequence in a computer
US6966030B2 (en) Method, system and computer program product for implementing acronym assistance
US8645812B1 (en) Methods and apparatus for automated redaction of content in a document

Legal Events

Date Code Title Description
A300 Withdrawal of application because of no request for examination

Free format text: JAPANESE INTERMEDIATE CODE: A300

Effective date: 20070306