WO2021033603A1 - Image-reading device - Google Patents


Info

Publication number
WO2021033603A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
unit
search
character string
text
Prior art date
Application number
PCT/JP2020/030642
Other languages
French (fr)
Japanese (ja)
Inventor
卓志 段床
Original Assignee
京セラドキュメントソリューションズ株式会社 (Kyocera Document Solutions Inc.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 京セラドキュメントソリューションズ株式会社 (Kyocera Document Solutions Inc.)
Publication of WO2021033603A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 - Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 - Querying
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/10 - Image acquisition
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 1/00 - Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof

Definitions

  • the present invention relates to an image reader, and more particularly to a technique for acquiring and displaying text through a network.
  • in an image forming apparatus, the image of a document is read by the image reading unit, and that image is printed on recording paper by the image forming unit.
  • in the device described in Patent Document 1, web pages can be accessed, displayed, and browsed through a network; the URL (Uniform Resource Locator) of each browsed web page is recorded in a URL table.
  • any hypertext character string specified by the user on a web page is recorded in a character string table; the web pages corresponding to the URLs in the URL table and the web pages linked from the hypertext character strings in the character string table are then acquired, combined, and printed.
  • since the image forming apparatus includes an image reading unit that reads the image of a document, as described above, convenience would be improved if text related to the read document image could be searched for and obtained from a database on the network.
  • in Patent Document 1, the web pages corresponding to the URLs in the URL table and the web pages linked from the hypertext character strings in the character string table are acquired, combined, and printed; however, text related to the image of the document read by the image reading unit is not acquired.
  • the present invention has been made in view of the above circumstances, and an object of the present invention is to make it easier than before to specify a search target when searching for text.
  • the image reading device according to one aspect includes: a display unit; an image reading unit that reads an image of a document; a communication unit that performs data communication through a network; and a control unit that recognizes the text contained in the document image read by the image reading unit, extracts and selects from the recognized text a character string satisfying a preset selection condition, acquires a search result from a search engine using the selected character string as a search condition, and causes the display unit to display the search result.
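The recognize, select, search, and display flow described in this aspect can be sketched as follows. This is a minimal illustration under assumed interfaces, not the patented implementation: `recognize_text` stands in for the OCR engine, and the search engine is a stub.

```python
def recognize_text(document_image):
    # Stand-in for a known OCR function; a real device would run OCR
    # on the scanned image held in image memory.
    return "annual maintenance report for the fixing device and transfer belt"

def select_strings(text, condition):
    # Extract the words of the recognized text and keep those that
    # satisfy the preset selection condition.
    return [w for w in text.split() if condition(w)]

def search_pipeline(document_image, condition, search_fn):
    text = recognize_text(document_image)
    candidates = select_strings(text, condition)
    # Each selected character string becomes a search condition.
    return {c: search_fn(c) for c in candidates}

# Example: select words longer than seven characters, with a stub search engine.
results = search_pipeline(
    document_image=None,
    condition=lambda w: len(w) > 7,
    search_fn=lambda q: ["hit for " + q],
)
```

In the device itself the user picks which displayed candidate to search; here all candidates are searched for brevity.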
  • FIG. 1 is a cross-sectional view showing an image forming apparatus that is an example of an image reading device according to one embodiment of the present invention. FIG. 2 is a functional block diagram showing the main internal configuration of the image forming apparatus. FIG. 3 is a flowchart showing the control procedure for searching for and acquiring, through a network, text and images related to the image of the document read by the image reading unit. FIG. 4 shows the initial screen displayed on the display unit. FIG. 5A shows the screen of the display unit displaying text extracted from an image and a browser. FIG. 5B shows the screen of the display unit displaying an image and a browser.
  • FIG. 1 is a cross-sectional view showing an image forming apparatus which is an example of an image reading apparatus according to an embodiment of the present invention.
  • the image forming apparatus 10 includes an image reading unit 11 and an image forming unit 12.
  • the image reading unit 11 has an image sensor that optically reads the image of the document, and the analog output of the image sensor is converted into a digital signal to generate image data indicating the image of the document.
  • the image forming unit 12 prints an image indicated by the above image data, or by image data received from the outside, on recording paper, and includes an image forming unit 3M for magenta, an image forming unit 3C for cyan, an image forming unit 3Y for yellow, and an image forming unit 3Bk for black. In each of the image forming units 3M, 3C, 3Y, and 3Bk, the surface of the photoconductor drum 4 is uniformly charged and then exposed, so that an electrostatic latent image is formed on the surface of the photoconductor drum 4.
  • the electrostatic latent image on the surface of the photoconductor drum 4 is developed into a toner image, and the toner image on the surface of the photoconductor drum 4 is transferred to the intermediate transfer belt 5.
  • a color toner image is formed on the intermediate transfer belt 5.
  • the toner image of this color is secondarily transferred to the recording paper P conveyed from the paper feeding unit 14 through the transfer path 8 in the nip area N between the intermediate transfer belt 5 and the secondary transfer roller 6.
  • the recording paper P is heated and pressurized by the fixing device 15, the toner image on the recording paper P is fixed by thermocompression bonding, and the recording paper P is further discharged to the discharge tray 17 through the discharge roller 16.
  • FIG. 2 is a functional block diagram showing a main internal configuration of the image forming apparatus 10.
  • the image forming apparatus 10 includes an image reading unit 11, an image forming unit 12, a display unit 41, an operation unit 42, a touch panel 43, a network communication unit (NW communication unit) 45, an image memory 46, a storage unit 48, and a control unit 49. These components can transmit and receive data or signals to and from one another through a bus.
  • the display unit 41 is a display device such as a liquid crystal display (LCD: Liquid Crystal Display) or an organic EL (OLED: Organic Light-Emitting Diode) display.
  • the operation unit 42 includes physical keys such as a numeric keypad, an enter key, and a start key.
  • a touch panel 43 is arranged on the screen of the display unit 41.
  • the touch panel 43 is a so-called resistive or capacitive touch panel; it detects contact (touch) of a user's finger or the like with the touch panel 43 together with the contact position, and outputs a detection signal indicating the coordinates of the contact position to the control unit 51 (described later) of the control unit 49.
  • together with the operation unit 42, the touch panel 43 serves as an operation section through which user operations on the screen of the display unit 41 are input.
  • the network communication unit 45 includes a communication module such as a LAN board and performs data communication through the network.
  • the network communication unit 45 is an example of the communication unit recited in the claims.
  • the image memory 46 stores image data indicating an image of the original document read by the image reading unit 11.
  • the storage unit 48 is a large-capacity storage device such as an SSD (Solid State Drive) or an HDD (Hard Disk Drive), and stores various application programs and various data.
  • the control unit 49 is composed of a processor, RAM (Random Access Memory), ROM (Read Only Memory), and the like.
  • the processor is, for example, a CPU (Central Processing Unit), an ASIC (Application Specific Integrated Circuit), an MPU (Micro Processing Unit), or the like.
  • the control unit 49 functions as the control unit 51 when the control program stored in the ROM or the storage unit 48 is executed by the processor.
  • the control unit 49 comprehensively controls the image forming apparatus 10.
  • the control unit 49 is connected to the image reading unit 11, the image forming unit 12, the display unit 41, the operation unit 42, the touch panel 43, the network communication unit 45, the image memory 46, the storage unit 48, and so on, controls the operation of these components, and exchanges signals and data with each of them.
  • the control unit 51 serves as a processing unit that executes various processes necessary for image formation by the image forming apparatus 10. Further, the control unit 51 receives an operation instruction input by the user based on the detection signal output from the touch panel 43 or the operation of the physical key of the operation unit 42. Further, the control unit 51 has a function of controlling the display operation of the display unit 41 and a function of controlling the communication operation of the network communication unit 45. Further, the control unit 51 processes the image data stored in the image memory 46.
  • the control unit 51 causes the image reading unit 11 to read the image of a document, temporarily stores the image data indicating this image in the image memory 46, inputs the image data to the image forming unit 12, and causes the image forming unit 12 to form the image indicated by the image data on recording paper.
  • control unit 51 executes the search function in response to an instruction input by the user operating the touch panel 43.
  • when the control unit 51 has received an instruction to execute the search function and the image reading unit 11 reads the image of a document and stores the image data indicating this image in the image memory 46, the control unit 51 recognizes and extracts the text contained in the document image in the image memory 46 by a known OCR (Optical Character Recognition) function and displays this text on the screen of the display unit 41. The control unit 51 also extracts and selects, from the recognized text, a character string satisfying a preset selection condition, and displays the selected character string on the screen of the display unit 41.
  • the user specifies the character string displayed on the screen of the display unit 41 by touch operation.
  • the control unit 51 determines the character string specified by the touch operation through the touch panel 43 and, through the network communication unit 45, sends the determined character string as a search condition to an existing search engine on the network via a browser. The control unit 51 then receives, through the network communication unit 45, the result of the search that the search engine performs on its database using this search condition, and displays the search result in the browser displayed on the display unit 41.
  • this database stores the data held by each web page, collected from the web pages existing on the Internet.
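As a rough illustration, handing a determined character string to a web search engine amounts to building a query URL. The endpoint and parameter name below are placeholders, not an API named in this document.

```python
from urllib.parse import urlencode

def build_search_url(character_string, base="https://search.example.com/search"):
    # The specified character string is sent to the search engine as the
    # search condition; here it becomes a URL-encoded query parameter.
    return base + "?" + urlencode({"q": character_string})

url = build_search_url("intermediate transfer belt")
```

A real device would issue this request through its network communication unit and render the response in the on-screen browser.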
  • the control unit 51 also transmits the image in the image memory 46 as a search condition to an existing image search engine on the network through the network communication unit 45, receives from the image search engine the result of the search performed using the image as a search condition, and displays the search result on the screen of the display unit 41.
  • in this way, the text contained in the image of the document read by the image reading unit 11 is recognized, the character strings extracted from the text based on the preset selection condition are displayed, and the character string that the user specifies from among the extracted and displayed character strings is used as a search condition.
  • using this search condition, various data are retrieved from the database by the search engine, and using the document image as a search condition, other images are retrieved from the database by the image search engine; these data and images are displayed on the screen of the display unit 41.
  • the user can thereby search for and acquire, through the network, other images and text related to the image of the document read by the image reading unit 11.
  • as the search engine on the network, a known system provided by a search engine operator is used.
  • the control unit 51 displays the initial screen IS as shown in FIG. 4 on the display unit 41.
  • on this initial screen IS, a plurality of function keys 61a to 61h associated with the respective functions are displayed.
  • the control unit 51 receives, through the touch panel 43, an instruction to execute the search function as the instruction corresponding to the function key 61h, and activates the search function based on that instruction (S101).
  • the user sets the document in the image reading unit 11 and operates the start key of the operation unit 42.
  • when the control unit 51 receives the document reading instruction associated with the operation of the start key, the image reading unit 11 reads the image of the document and the image data indicating this image is stored in the image memory 46 (S102).
  • the control unit 51 recognizes and extracts the text contained in the document image in the image memory 46 by the known OCR function, displays the extracted text T1 on the screen of the display unit 41 as shown, for example, in FIG. 5A, and starts up a browser B1 and displays it on the screen of the display unit 41 (S103). The control unit 51 further extracts character strings contained in the text T1 based on a preset selection condition, selects them, and displays the selected character strings on the screen of the display unit 41 (S104). For example, the control unit 51 refers to a word dictionary stored in advance in the storage unit 48 and determines all the words constituting the text T1 as the character strings C.
  • FIG. 5A shows an example in which a list LC, in which the character strings C selected in this way are arranged in descending order of character size, is displayed on the screen of the display unit 41.
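The ordering of the list LC can be sketched as follows, assuming the OCR step supplies each character string together with its rendered character size. The pairing is an assumption for illustration; the text only states that the list is sorted by character size in descending order.

```python
def build_list_lc(words_with_sizes):
    # Arrange the selected character strings in descending order of
    # character size, as in the list LC shown on the display unit.
    ordered = sorted(words_with_sizes, key=lambda ws: ws[1], reverse=True)
    return [word for word, _size in ordered]

# Hypothetical (word, point size) pairs from an OCR pass.
lc = build_list_lc([("toner", 10), ("HEADING", 24), ("subheading", 16)])
```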
  • the control unit 51 processes the image G1 in the image memory 46 and a browser B2 so that they can be displayed on the screen of the display unit 41, switchably with the text T1, the list LC, and the browser B1 (S105).
  • the control unit 51 may extract the image area contained in the document image G1 based on a layout analysis of the document image G1 by the OCR function and, when displaying the image G1 and the browser B2, display the extracted image area as the image G1.
  • the control unit 51 indicates the existence of the browser B2 on the screen of the display unit 41 by a tab ta2 in a tab format (FIG. 5A).
  • the control unit 51 receives the display instruction for the image G1 and the browser B2 via the touch panel 43 and displays the image G1 and the browser B2 on the screen of the display unit 41 (FIG. 5B).
  • likewise, the control unit 51 receives the display instruction for the text T1, the list LC, and the browser B1 via the touch panel 43 and displays the text T1, the list LC, and the browser B1 on the screen of the display unit 41 (FIG. 5A).
  • the control unit 51 receives, through the touch panel 43, an instruction specifying an arbitrary character string C as a search condition, and sets the specified character string C as the search condition.
  • when a plurality of character strings C are touch-operated, the control unit 51 accepts, through the touch panel 43, an instruction specifying each touch-operated character string C as a search condition each time a touch operation is performed, and sets these character strings C as the search conditions.
  • control unit 51 transmits the character string C as the search condition to the search engine on the network by the network communication unit 45 via the browser B1 (S106).
  • the search engine searches the database using the search conditions and sends the search results to the image forming apparatus 10.
  • when the control unit 51 of the image forming apparatus 10 receives, through the network communication unit 45, the search result, that is, the data hit by the search engine's search, it displays the data as the search result in the browser B1 on the screen of the display unit 41 (S107).
  • the control unit 51 causes the browser B1 to display the text T2 as the data hit by the search by the search engine together with the text T1 on the screen of the display unit 41.
  • the control unit 51 displays the texts T2 side by side on the browser B1.
  • control unit 51 transmits the image G1 (or the image area included in the image G1) as a search condition to the search engine on the network by the network communication unit 45 via the browser B2 (S108).
  • the search engine searches the database using the search conditions and sends the search results to the image forming apparatus 10.
  • when the control unit 51 of the image forming apparatus 10 receives, through the network communication unit 45, the search result, that is, the images hit by the search engine's search, it displays the images as the search result in the browser B2 on the screen of the display unit 41 (S109).
  • the browser B2 in which the image G2 searched by the image search engine is arranged is displayed together with the image G1 in the image memory 46.
  • the control unit 51 displays these images G2 side by side on the browser B2.
  • as described above, when the image of a document is read by the image reading unit 11 with the search function set, the text contained in this image is recognized and character strings of larger character size are extracted from this text; using these as search conditions, various data are retrieved from the database by the search engine, and other images are retrieved from the database by the image search engine. The retrieved data and/or images are then displayed on the screen of the display unit 41.
  • the retrieved images and data can be used, for example, by being stored in the storage unit 48 or formed on recording paper by the image forming unit 12 in accordance with an operation on the touch panel 43 or the operation unit 42.
  • accordingly, the search can be performed with these as the search target, and it is also possible to obtain, as the search result, the above image and character string in complete form, without any part missing.
  • in the above embodiment, a selection condition is applied under which each word (character string) is found in the text contained in the image of the document read by the image reading unit 11 and character strings of a predetermined character size are extracted and selected; however, other selection conditions can also be applied.
  • for example, the control unit 51 may detect the appearance frequency of each distinct word (character string) in the text and use, as the above selection condition, a condition under which character strings whose appearance frequency in the text is higher than a predetermined value are extracted and selected.
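A frequency-based selection condition of this kind can be sketched with a word counter; the threshold value below is illustrative, not taken from the document.

```python
from collections import Counter

def select_by_frequency(text, threshold):
    # Detect the appearance frequency of each distinct word and keep
    # those appearing more often than the predetermined value.
    counts = Counter(text.split())
    return sorted(word for word, n in counts.items() if n > threshold)

frequent = select_by_frequency("drum belt drum toner drum belt belt drum", 2)
```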
  • control unit 51 may set the above selection condition as a condition of extracting and selecting a word (character string) in a preset display mode in the text.
  • the preset display mode is, for example, a color, a background color, an underline, a bold character, or the like given to a character string.
  • control unit 51 may set the above selection condition as a condition for extracting and selecting a word (character string) composed of preset types of characters in the text.
  • the preset types of characters are, for example, character types such as kanji, katakana, hiragana, and alphabets, or language types (characters corresponding to language types).
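Selection by character type can be sketched using standard Unicode block ranges for hiragana, katakana, and a common kanji block; these ranges are Unicode conventions, not values given in this document.

```python
import re

# Unicode ranges for each character type (standard blocks, chosen
# here for illustration).
CHAR_TYPE_PATTERNS = {
    "hiragana": re.compile(r"[\u3041-\u3096]+"),
    "katakana": re.compile(r"[\u30a1-\u30fa\u30fc]+"),
    "kanji": re.compile(r"[\u4e00-\u9fff]+"),
    "alphabet": re.compile(r"[A-Za-z]+"),
}

def select_by_char_type(words, char_type):
    # Keep only words composed entirely of the preset type of characters.
    pattern = CHAR_TYPE_PATTERNS[char_type]
    return [w for w in words if pattern.fullmatch(w)]

katakana_words = select_by_char_type(["トナー", "drum", "感光体", "ベルト"], "katakana")
```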
  • the control unit 51 displays the character strings selected under such a selection condition on the screen of the display unit 41, sets the character string specified by the user's touch operation via the touch panel 43 as a search condition, transmits the character string as the search condition to an existing search engine on the network through the network communication unit 45, receives the search result from the search engine, and displays the search result on the screen of the display unit 41.
  • in the above embodiment, the search condition is the text contained in the image of one document read by the image reading unit 11; however, the control unit 51 may cause the image reading unit 11 to read a plurality of documents, display a browser on the screen of the display unit 41 for the image of each document, send each search condition to the search engine from the corresponding browser, receive the search results from the search engine separately for each search condition, and display each result in the corresponding browser.
  • for example, the image reading unit 11 reads each document in its entirety.
  • the control unit 51 transmits each image G1, G2, and G3 to the search engine as search conditions.
  • control unit 51 may send the texts T1, T3, and T4 extracted from each of the three originals shown in FIG. 7 to the search engine as search conditions.
  • the control unit 51 causes the browser B1 to display the text T2 as the data hit by the search by the search engine together with the text T1 on the screen of the display unit 41.
  • likewise, the control unit 51 displays, on the display unit 41, the text T3 together with the text retrieved as its search result in the browser B2 as shown in FIG. 8B, and the text T4 together with the text retrieved as its search result in the browser B3 as shown in FIG. 8C, so that these can be switched and displayed.
  • in the above embodiment, a search engine and a database on the network are illustrated; however, the image forming apparatus 10 may, for example, store in advance in the storage unit 48 the data held by each web page collected from the web pages existing on the Internet, and itself act as the search engine by searching the data stored in the storage unit 48 using the search condition set as described above.
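The storage-unit variant can be sketched as a simple in-memory substring search over locally stored page data; the data layout below is invented for illustration.

```python
def local_search(stored_pages, search_condition):
    # Search the web-page data stored in advance in the storage unit,
    # using the same search condition, instead of a network search engine.
    return [title for title, body in stored_pages.items()
            if search_condition in body]

pages = {
    "maintenance guide": "replace the intermediate transfer belt yearly",
    "toner faq": "shake the toner cartridge before installing",
}
hits = local_search(pages, "transfer belt")
```

A production device would use an indexed store rather than a linear scan, but the interface (condition in, matching entries out) is the same.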

Abstract

In the present invention, an image forming device (10) comprises: a display unit (41); an image reading unit (11) that reads an image of a document; a network communication unit (45) that performs data communication through a network; and a control unit (51) that recognizes text contained in the document image read by the image reading unit (11), extracts and selects from the recognized text a character string satisfying a preset selection condition, uses the selected character string as a search condition to acquire search results from a search engine, and causes the display unit (41) to display the search results.

Description

Image reading device
The present invention relates to an image reading device, and more particularly to a technique for acquiring and displaying text through a network.

In an image forming apparatus, the image of a document is read by the image reading unit, and that image is printed on recording paper by the image forming unit. In the device described in Patent Document 1, on the other hand, web pages can be accessed, displayed, and browsed through a network. The URL (Uniform Resource Locator) of each browsed web page is recorded in a URL table, and any hypertext character string specified by the user on a web page is recorded in a character string table. The web pages corresponding to the URLs in the URL table and the web pages linked from the hypertext character strings in the character string table are then acquired, combined, and printed.
Japanese Unexamined Patent Publication No. 2006-85376
Since the image forming apparatus includes an image reading unit that reads the image of a document, as described above, convenience would be improved if text related to the read document image could be searched for and obtained from a database on the network.

In Patent Document 1, the web pages corresponding to the URLs in the URL table and the web pages linked from the hypertext character strings in the character string table are acquired, combined, and printed; however, text related to the image of the document read by the image reading unit is not acquired.

The present invention has been made in view of the above circumstances, and an object of the present invention is to make it easier than before to specify a search target when searching for text.
The image reading device according to one aspect of the present invention includes: a display unit; an image reading unit that reads an image of a document; a communication unit that performs data communication through a network; and a control unit that recognizes the text contained in the document image read by the image reading unit, extracts and selects from the recognized text a character string satisfying a preset selection condition, acquires a search result from a search engine using the selected character string as a search condition, and causes the display unit to display the search result.

According to the present invention, a search target can be specified more easily than before when searching for text.
FIG. 1 is a cross-sectional view showing an image forming apparatus that is an example of an image reading device according to one embodiment of the present invention.
FIG. 2 is a functional block diagram showing the main internal configuration of the image forming apparatus.
FIG. 3 is a flowchart showing the control procedure for searching for and acquiring, through a network, text and images related to the image of the document read by the image reading unit.
FIG. 4 shows the initial screen displayed on the display unit.
FIG. 5A shows the screen of the display unit displaying text extracted from an image, together with a browser.
FIG. 5B shows the screen of the display unit displaying an image together with a browser.
FIG. 6A shows the screen of the display unit displaying a browser in which text extracted from an image and data retrieved by a search engine are arranged.
FIG. 6B shows the screen of the display unit displaying a browser in which an image and other images retrieved by an image search engine are arranged.
FIG. 7 illustrates a document containing a plurality of images.
FIGS. 8A to 8C show the browsers displayed on the display unit for the respective texts of the plurality of images.
Hereinafter, an embodiment of the present invention will be described with reference to the drawings.

FIG. 1 is a cross-sectional view showing an image forming apparatus that is an example of an image reading device according to one embodiment of the present invention. The image forming apparatus 10 includes an image reading unit 11 and an image forming unit 12.

The image reading unit 11 has an image sensor that optically reads the image of a document; the analog output of the image sensor is converted into a digital signal to generate image data representing the document image.

The image forming unit 12 prints an image indicated by the above image data, or by image data received from the outside, on recording paper, and includes an image forming unit 3M for magenta, an image forming unit 3C for cyan, an image forming unit 3Y for yellow, and an image forming unit 3Bk for black. In each of the image forming units 3M, 3C, 3Y, and 3Bk, the surface of the photoconductor drum 4 is uniformly charged and then exposed, so that an electrostatic latent image is formed on the surface of the photoconductor drum 4; the electrostatic latent image is developed into a toner image, and the toner image on the surface of the photoconductor drum 4 is transferred to the intermediate transfer belt 5. As a result, a color toner image is formed on the intermediate transfer belt 5. This color toner image is secondarily transferred, in the nip area N between the intermediate transfer belt 5 and the secondary transfer roller 6, to the recording paper P conveyed from the paper feeding unit 14 through the conveyance path 8.

Thereafter, the recording paper P is heated and pressed by the fixing device 15 so that the toner image on the recording paper P is fixed by thermocompression, and the recording paper P is then discharged to the discharge tray 17 through the discharge roller 16.
 Next, the configuration related to control of the image forming apparatus 10 will be described. FIG. 2 is a functional block diagram showing the main internal configuration of the image forming apparatus 10. As shown in FIG. 2, the image forming apparatus 10 includes an image reading unit 11, an image forming unit 12, a display unit 41, an operation unit 42, a touch panel 43, a network communication unit (NW communication unit) 45, an image memory 46, a storage unit 48, and a control unit 49. These components can exchange data and signals with one another over a bus.
 The display unit 41 is a display device such as a liquid crystal display (LCD) or an organic EL (OLED: Organic Light-Emitting Diode) display.
 The operation unit 42 includes physical keys such as a numeric keypad, an enter key, and a start key.
 A touch panel 43 is arranged over the screen of the display unit 41. The touch panel 43 is a so-called resistive or capacitive touch panel; it detects contact (a touch) by a user's finger or the like together with the contact position, and outputs a detection signal indicating the coordinates of that position to the controller 51 (described later) of the control unit 49. Together with the operation unit 42, the touch panel 43 serves as an operation interface through which user operations on the screen of the display unit 41 are input.
 The network communication unit 45 includes a communication module such as a LAN board and performs data communication over a network. The network communication unit 45 is an example of the communication unit recited in the claims.
 The image memory 46 stores image data representing the document image read by the image reading unit 11.
 The storage unit 48 is a large-capacity storage device such as an SSD (Solid State Drive) or an HDD (Hard Disk Drive), and stores various application programs and various data.
 The control unit 49 is composed of a processor, RAM (Random Access Memory), ROM (Read Only Memory), and the like. The processor is, for example, a CPU (Central Processing Unit), an ASIC (Application Specific Integrated Circuit), or an MPU (Micro Processing Unit). The control unit 49 functions as the controller 51 when the control program stored in the ROM or the storage unit 48 is executed by the processor.
 The control unit 49 controls the image forming apparatus 10 as a whole. The control unit 49 is connected to the image reading unit 11, the image forming unit 12, the display unit 41, the operation unit 42, the touch panel 43, the network communication unit 45, the image memory 46, the storage unit 48, and so on; it controls the operation of these components and exchanges signals and data with each of them.
 The controller 51 serves as a processing unit that executes the various processes required for image formation by the image forming apparatus 10. The controller 51 also accepts operation instructions input by the user, based on detection signals output from the touch panel 43 or on operations of the physical keys of the operation unit 42. The controller 51 further has a function for controlling the display operation of the display unit 41 and a function for controlling the communication operation of the network communication unit 45, and it processes the image data stored in the image memory 46.
 In the image forming apparatus 10 configured as described above, when, for example, the user sets a document on the image reading unit 11 and operates the start key of the operation unit 42, the controller 51 causes the image reading unit 11 to read the document image, temporarily stores image data representing this image in the image memory 46, inputs the image data to the image forming unit 12, and causes the image forming unit 12 to form the image represented by the data on recording paper.
 In the present embodiment, the controller 51 also executes a search function in response to an instruction input by the user through an operation on the touch panel 43. When, with an instruction to execute the search function accepted, the controller 51 causes the image reading unit 11 to read a document image and stores image data representing this image in the image memory 46, the controller 51 recognizes and extracts the text contained in the document image in the image memory 46 using a known OCR (Optical Character Recognition) function and displays this text on the screen of the display unit 41. The controller 51 also extracts and selects, from the recognized text, character strings that satisfy a preset selection condition, and displays the selected character strings on the screen of the display unit 41.
 The user specifies one of the character strings displayed on the screen of the display unit 41 by a touch operation. The controller 51 determines, through the touch panel 43, the character string specified by the touch operation and, through the network communication unit 45, sends the determined character string as a search condition to an existing search engine on the network via a browser. The controller 51 then receives, through the network communication unit 45, the results the search engine obtained by searching its database using that search condition, and displays the results in the browser shown on the display unit 41. This database stores data collected from the web pages existing on the Internet.
 Similarly, the controller 51 sends, through the network communication unit 45, the image in the image memory 46 as a search condition to an existing image search engine on the network, receives the results the image search engine obtained using that image as the search condition, and displays the results on the screen of the display unit 41.
 Thus, the text contained in the document image read by the image reading unit 11 is recognized, and character strings extracted from that text based on the preset selection condition are displayed. A character string that the user specifies from among the extracted and displayed strings is used as a search condition, with which the search engine retrieves various data from its database; the document image itself is likewise used as a search condition, with which the image search engine retrieves other images from its database. The retrieved data and images are displayed on the screen of the display unit 41.
 This allows the user to search for and obtain, over the network, other images and text related to the image of the document read by the image reading unit 11.
 The search engine on the network is assumed to be a known system provided by a search engine operator.
 Next, the control procedure for searching for and obtaining, over the network, text and images related to the document image read by the image reading unit 11 will be described in detail with reference to the flowchart shown in FIG. 3 and other figures.
 First, assume that the controller 51 is displaying the initial screen IS shown in FIG. 4 on the display unit 41. This initial screen IS shows a plurality of function keys 61a to 61h, each associated with a function. When the user touches the function key 61h for setting the search function while this initial screen is displayed, the controller 51 accepts, through the touch panel 43, an instruction to execute the search function as the instruction corresponding to the function key 61h, and starts up the search function based on that instruction (S101).
 With the search function started, the user sets a document on the image reading unit 11 and operates the start key of the operation unit 42. Upon accepting the document reading instruction associated with the operation of the start key, the controller 51 causes the image reading unit 11 to read the document image and stores image data representing this image in the image memory 46 (S102).
 The controller 51 recognizes and extracts the text contained in the document image in the image memory 46 using a known OCR (Optical Character Recognition) function, displays the extracted text T1 on the screen of the display unit 41 as shown in FIG. 5A, for example, and launches a browser B1 and displays it on the screen of the display unit 41 (S103). The controller 51 also extracts from the text T1 and selects the character strings that satisfy a preset selection condition, and displays the selected character strings on the screen of the display unit 41 (S104). For example, the controller 51 refers to a word dictionary stored in advance in the storage unit 48 and identifies every word constituting the text T1 as a character string C. The controller 51 then determines the character size of each identified character string C and, using as the selection condition that the character size be a predetermined size, extracts and selects the character strings that satisfy this condition. For example, the controller 51 may define the "predetermined size" as (1) the largest of the determined character sizes, or (2) the n largest of the determined character sizes, counting the largest as the first (where n is an integer greater than 1). In the present embodiment, the case where the controller 51 selects the character strings C whose character sizes are among the three largest of the determined sizes will be described as an example. FIG. 5A shows an example in which a list LC, in which the character strings C selected in this way are arranged in descending order of character size, is displayed on the screen of the display unit 41.
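 A minimal sketch of this size-based selection, assuming the OCR output is available as (word, character-height) pairs; the function name and data format are illustrative conveniences, not part of the embodiment:

```python
# Sketch of the size-based selection condition (S104): given OCR output as
# (word, character-height) pairs, keep the words whose height is among the
# n largest distinct sizes, and arrange them in descending order of size,
# as in the list LC.
def select_by_size(ocr_words, n=3):
    # ocr_words: list of (word, height) tuples from OCR layout analysis
    distinct_sizes = sorted({h for _, h in ocr_words}, reverse=True)
    largest_sizes = set(distinct_sizes[:n])  # the n largest character sizes
    selected = [(w, h) for w, h in ocr_words if h in largest_sizes]
    selected.sort(key=lambda wh: wh[1], reverse=True)
    return [w for w, _ in selected]

ocr_words = [("Title", 24), ("Subheading", 18), ("body", 10),
             ("Keyword", 14), ("text", 10), ("note", 8)]
print(select_by_size(ocr_words, n=3))  # ['Title', 'Subheading', 'Keyword']
```

With n=1 the same routine implements alternative (1), selecting only the largest-size strings.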
 As shown in FIG. 5B, for example, the controller 51 also makes the image G1 in the image memory 46 and a browser B2 displayable on the screen of the display unit 41, switchable with the text T1, the list LC, and the browser B1 (S105). At this time, the controller 51 may extract the image region contained in the document image G1 based on the layout analysis of the document image G1 performed by the OCR function, and display the extracted image region as the image G1 when displaying the image G1 and the browser B2.
 For example, while displaying the text T1, the list LC, and the browser B1, the controller 51 indicates the existence of the browser B2 on the screen of the display unit 41 with a tab ta2, in tab form (FIG. 5A). In this state, when the user touches the tab ta2 of the browser B1, the controller 51 accepts, via the touch panel 43, an instruction to display the image G1 and the browser B2, and displays the image G1 and the browser B2 on the screen of the display unit 41 (FIG. 5B). Conversely, when the tab ta1 of the browser B2 is touched while the image G1 and the browser B2 are displayed, the controller 51 accepts, via the touch panel 43, an instruction to display the text T1, the list LC, and the browser B1, and displays them on the screen of the display unit 41 (FIG. 5A).
 When the user touches any character string C in the list LC on the screen of the display unit 41, the controller 51 accepts, through the touch panel 43, an instruction specifying that character string C as a search condition, and sets the specified character string C as the search condition. When the user touches a plurality of character strings C in the list LC in succession, the controller 51 accepts, through the touch panel 43 at each touch, an instruction specifying all of the touched character strings C as search conditions, and sets those character strings C as the search conditions.
 Having set the search condition as described above, the controller 51 transmits the character string C serving as the search condition to the search engine on the network through the network communication unit 45, via the browser B1 (S106).
 The search engine searches its database using this search condition and sends the search results to the image forming apparatus 10. When the controller 51 of the image forming apparatus 10 receives the search results, that is, the data hit by the search engine's search, through the network communication unit 45, it displays the received data in the browser B1 on the screen of the display unit 41 (S107). As a result, as shown in FIG. 6A, for example, the controller 51 causes the browser B1 on the screen of the display unit 41 to display, together with the text T1, text T2 as the data hit by the search engine's search. When there are multiple pieces of text T2, the controller 51 displays them side by side in the browser B1.
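 The transmission in S106 amounts to issuing an ordinary keyword query to the search engine. A hedged sketch, assuming a generic GET-style endpoint; the URL and the `q` parameter name are placeholders, since the embodiment deliberately relies on an existing third-party search engine whose actual interface is not specified here:

```python
# Sketch of S106: the specified character strings C are joined into one query
# string and encoded into a search-engine request URL. Multiple selected
# strings become a single combined search condition.
from urllib.parse import urlencode

def build_search_url(selected_strings, endpoint="https://search.example/search"):
    query = " ".join(selected_strings)  # several strings C -> one condition
    return endpoint + "?" + urlencode({"q": query})

url = build_search_url(["複合機", "トナー"])
print(url)  # non-ASCII strings are percent-encoded by urlencode
```

The browser B1 would then load this URL and render the returned result page (S107).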
 The controller 51 also transmits the image G1 (or the image region contained in the image G1) as a search condition to the search engine on the network through the network communication unit 45, via the browser B2 (S108).
 The search engine searches its database using this search condition and sends the search results to the image forming apparatus 10. When the controller 51 of the image forming apparatus 10 receives the search results, that is, the images hit by the search engine's search, through the network communication unit 45, it displays the received images in the browser B2 on the screen of the display unit 41 (S109). As a result, as shown in FIG. 6B, for example, the screen of the display unit 41 shows the browser B2, in which the image G2 retrieved by the image search engine is arranged, together with the image G1 in the image memory 46. When there are multiple images G2, the controller 51 displays them side by side in the browser B2.
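 Sending the image G1 as a search condition (S108) would typically mean posting the raw image bytes to the image search engine. A sketch of building a multipart/form-data request body with only the standard library; the field name `image`, the filename, and the fixed boundary are assumptions, and a real image search API defines its own upload format:

```python
# Sketch of S108: wrap the document image bytes (image G1 from the image
# memory) in a multipart/form-data body suitable for an HTTP POST to an
# image search endpoint.
def build_image_search_body(image_bytes, boundary="kyocera-boundary"):
    head = (f"--{boundary}\r\n"
            'Content-Disposition: form-data; name="image"; filename="g1.png"\r\n'
            "Content-Type: application/octet-stream\r\n\r\n").encode("ascii")
    tail = f"\r\n--{boundary}--\r\n".encode("ascii")
    return head + image_bytes + tail

body = build_image_search_body(b"\x89PNG...")  # placeholder image bytes
```

The body would be sent with a `Content-Type: multipart/form-data; boundary=...` header; the engine's response is then rendered in the browser B2 (S109).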
 As described above, in the present embodiment, when a document image is read by the image reading unit 11 with the search function set, the text contained in the image is recognized, the character strings of larger size in that text are set as search conditions, and the search engine retrieves various data from its database. When the read document image is set as a search condition, the image search engine retrieves other images from its database. The retrieved data, images, or both are then displayed on the screen of the display unit 41. The retrieved images and data can be used, for example, by storing them in the storage unit 48 or forming them on recording paper with the image forming unit 12, in response to operations on the touch panel 43 or the operation unit 42.
 According to the present embodiment, moreover, even when some characters of a character string are missing or part of the read image is missing, searching with these as the search targets also makes it possible to obtain, as search results, the complete image or character string with no part missing.
 In the above embodiment, the applied selection condition is to identify each word (character string) in the text contained in the document image read by the image reading unit 11 and to extract and select the character strings whose character size is a predetermined size; however, other selection conditions can be applied.
 For example, the controller 51 may detect the frequency with which each distinct word (character string) appears in the text, and use as the selection condition that the character strings whose frequency of appearance in the text is higher than a predetermined value are extracted and selected.
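 The frequency-based selection condition can be sketched as follows; the threshold value is an example, not a value given in the embodiment:

```python
# Sketch of the frequency-based selection condition: count how often each
# distinct word appears in the recognized text and keep the words whose
# count exceeds a predetermined value.
from collections import Counter

def select_by_frequency(words, min_count=2):
    counts = Counter(words)
    return [w for w, c in counts.items() if c > min_count]

words = ["toner", "drum", "toner", "belt", "toner", "drum", "drum"]
print(select_by_frequency(words, min_count=2))  # ['toner', 'drum']
```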
 The controller 51 may also use as the selection condition that the words (character strings) having a preset display attribute in the text are extracted and selected. The preset display attribute is, for example, a color, a background color, an underline, or bold type applied to a character string.
 The controller 51 may also use as the selection condition that the words (character strings) composed of a preset type of character in the text are extracted and selected. The preset type of character is, for example, a character category such as kanji, katakana, hiragana, or the Latin alphabet, or a language type (the characters corresponding to a given language).
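 The character-type selection condition can be sketched by classifying words through the Unicode names of their characters. Here katakana is used as the preset type; the name-based test is a simplification of full script detection, not the embodiment's method:

```python
# Sketch of the character-type selection condition: keep only words whose
# characters all belong to a preset category (here, katakana, which in
# Japanese text often marks loanwords and product names).
import unicodedata

def is_katakana_word(word):
    # The prolonged sound mark "ー" is named KATAKANA-HIRAGANA PROLONGED
    # SOUND MARK, so the substring test also accepts it.
    return all("KATAKANA" in unicodedata.name(ch, "") for ch in word)

def select_by_char_type(words):
    return [w for w in words if is_katakana_word(w)]

words = ["トナー", "感光体", "ベルト", "drum"]
print(select_by_char_type(words))  # ['トナー', 'ベルト']
```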
 The controller 51 displays the character strings selected under such a selection condition on the screen of the display unit 41, sets as the search condition the character string specified via the touch panel 43 by the user's touch operation, transmits that character string as the search condition to an existing search engine on the network through the network communication unit 45, receives the search engine's results, and displays them on the screen of the display unit 41.
 In the above embodiment, the text contained in the image of a single document read by the image reading unit 11 is used as the search condition; however, when a plurality of documents are read by the image reading unit 11 and each document contains an image, the controller 51 may display a separate browser on the screen of the display unit 41 for each document image, transmit each search condition to the search engine from the corresponding browser, receive the search results for each search condition separately from the search engine, and display each result in the corresponding browser.
 For example, as shown in FIG. 7, when there are three documents, with document J1 containing image G1, document J2 containing image G2, and document J3 containing image G3, and the full images of these documents are each read by the image reading unit 11 and stored in the image memory 46, the controller 51 transmits each of the images G1, G2, and G3 to the search engine as a search condition.
 The controller 51 may also transmit, as search conditions, the texts T1, T3, and T4 extracted from the three documents shown in FIG. 7. In this case, as shown in FIG. 8A, the controller 51 causes the browser B1 on the screen of the display unit 41 to display, together with the text T1, the text T2 as the data hit by the search engine's search. Similarly, the controller 51 displays the text T3 and the text obtained as its search result in a browser B2 as shown in FIG. 8B, and the text T4 and the text obtained as its search result in a browser B3 as shown in FIG. 8C, on the display unit 41 in a switchable manner.
 The above embodiment illustrates a search engine and database on the network; alternatively, in the image forming apparatus 10, data possessed by the web pages existing on the Internet may, for example, be collected and stored in the storage unit 48 in advance, and the controller 51 may act as a search engine, searching the data stored in the storage unit 48 using the search condition set as described above.
 The configuration and processing of the above embodiment described with reference to FIGS. 1 through 8C are merely an example of the present invention and are not intended to limit the present invention to that configuration and processing.

Claims (9)

  1.  An image reading device comprising:
      a display unit;
      an image reading unit that reads an image of a document;
      a communication unit that performs data communication through a network; and
      a controller that recognizes text contained in the document image read by the image reading unit, extracts and selects, from the recognized text, a character string satisfying a preset selection condition, obtains a search result from a search engine using the selected character string as a search condition, and displays the search result on the display unit.
  2.  The image reading device according to claim 1, wherein the selection condition is that character strings whose character size is a predetermined size are extracted from the text and selected.
  3.  The image reading device according to claim 1, wherein the selection condition is that character strings whose frequency of appearance in the text is higher than a predetermined value are extracted and selected.
  4.  The image reading device according to claim 1, wherein the selection condition is that character strings having a preset display attribute in the text are extracted and selected.
  5.  The image reading device according to claim 1, wherein the selection condition is that character strings composed of a preset type of character in the text are extracted and selected.
  6.  The image reading device according to claim 2, further comprising an operation unit through which a user inputs instructions,
      wherein the controller extracts and selects a plurality of character strings satisfying the selection condition, arranges the character strings in an order based on a predetermined condition and displays them on the display unit, and, when any one of the character strings is specified by an operation of the operation unit, obtains a search result from the search engine using the specified character string as a search condition and displays the search result on the display unit.
  7.  The image reading device according to claim 1, wherein the search engine is a search engine on a network, and
      the controller transmits the search condition to the search engine on the network through data communication by the communication unit, obtains the search result from the search engine, and displays the search result on the display unit.
  8.  The image reading device according to claim 1, further comprising a storage unit,
      wherein the controller uses a word dictionary stored in advance in the storage unit to identify, in the recognized text, every word constituting the recognized text as a character string, and extracts and selects, from among the identified character strings, those satisfying the selection condition.
  9.  The image reading device according to claim 1, wherein the controller displays the recognized text on the display unit, launches a browser and displays it on the display unit, displays the selected character string on the display unit, and displays the obtained search result in the browser displayed on the display unit.
PCT/JP2020/030642 2019-08-21 2020-08-12 Image-reading device WO2021033603A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019-151525 2019-08-21
JP2019151525 2019-08-21

Publications (1)

Publication Number Publication Date
WO2021033603A1 true WO2021033603A1 (en) 2021-02-25

Family

ID=74660821

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/030642 WO2021033603A1 (en) 2019-08-21 2020-08-12 Image-reading device

Country Status (1)

Country Link
WO (1) WO2021033603A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006085376A (en) * 2004-09-15 2006-03-30 Canon Inc Image forming device, image forming method, computer program and computer-readable storage medium
JP2012175406A (en) * 2011-02-22 2012-09-10 Sharp Corp Image forming apparatus and image forming method
JP2012190313A (en) * 2011-03-11 2012-10-04 Fuji Xerox Co Ltd Image processing device and program
JP2017016549A (en) * 2015-07-06 2017-01-19 株式会社日立システムズ Character recognition device, character recognition method, and program


Similar Documents

Publication Publication Date Title
US7911635B2 (en) Method and apparatus for automated download and printing of Web pages
JP6885318B2 (en) Image processing device
JP2019109628A5 (en)
CN111510576B (en) Image forming apparatus with a toner supply device
JP6885319B2 (en) Image processing device
WO2021033603A1 (en) Image-reading device
JP7363188B2 (en) Image reading device and image forming device
JP2019109629A5 (en)
US11064094B2 (en) Image forming apparatus for forming image represented by image data on recording paper sheet
JP2018077794A (en) Image processing device and image forming apparatus
US10725414B2 (en) Image forming apparatus that displays job list
JP7419942B2 (en) Image processing device and image forming device
JP2020086536A (en) Electronic apparatus and image forming device
US11825041B2 (en) Image processing apparatus and image forming apparatus capable of classifying respective images of plurality of pages of original document based on plurality of topic words
US11849086B2 (en) Image processing apparatus capable of extracting portion of document image specified by preset index and subjecting character string in extracted portion to processing associated with index
JP2020038576A (en) Electronic apparatus, image formation apparatus, electronic mail preparation support method and electronic mail preparation support program
JP2021166333A (en) Document combining method, image forming system, and image forming apparatus
JP2010067208A (en) Display controller, image forming apparatus, and display control program
JP6624027B2 (en) Image processing apparatus and image forming apparatus
JP2024060455A (en) Image reading device and image forming device
JP3971764B2 (en) Image forming apparatus
JP2022074865A (en) Image processing apparatus and image forming apparatus
US9400949B2 (en) Display device and image forming apparatus capable of switching a display language of an authentication screen to a display language of a user
JP2020134964A (en) Image processing apparatus and image processing method
JP2011257946A (en) Image processing apparatus, image processing method, and computer program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20854098

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20854098

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP