WO2020017045A1 - Image processing device and image processing method - Google Patents

Image processing device and image processing method

Info

Publication number
WO2020017045A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
medium
read
image processing
medium image
Prior art date
Application number
PCT/JP2018/027354
Other languages
French (fr)
Japanese (ja)
Inventor
大和 河谷
暁 岩山
雅信 本江
Original Assignee
株式会社Pfu
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社Pfu
Priority to PCT/JP2018/027354
Priority to JP2020530857A (granted as JP6956269B2)
Publication of WO2020017045A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/387Composing, repositioning or otherwise geometrically modifying originals

Definitions

  • the disclosed technology relates to an image processing device and an image processing method.
  • Some books are read page by page using a scanner and converted into electronic data.
  • The disclosed technology has been made in view of the above, and an object thereof is to improve the work efficiency of an operator.
  • the image processing device includes a storage unit and an image processing unit.
  • the storage unit stores a series of a plurality of read images each including a medium image.
  • The image processing unit sets a first reference position in a first area where the medium image exists and a second reference position in a second area other than the first area, changes the arrangement of the medium image in the read image based on the positional relationship between the first reference position and the second reference position, sets the same extraction area in each of the plurality of read images based on the medium images after the arrangement change, and extracts the medium image from each of the plurality of read images according to the extraction area.
  • the work efficiency of the operator can be improved.
  • FIG. 1 is a diagram illustrating a configuration example of the image processing apparatus according to the first embodiment.
  • FIG. 2 is a diagram illustrating an example of a processing flow of the image processing apparatus according to the first embodiment.
  • FIG. 3 is a diagram illustrating an example of a processing flow of the image processing apparatus according to the first embodiment.
  • FIG. 4A is a diagram provided for describing an operation example of the image processing apparatus according to the first embodiment.
  • FIG. 4B is a diagram provided for describing an operation example of the image processing apparatus according to the first embodiment.
  • FIG. 4C is a diagram provided for describing an operation example of the image processing apparatus according to the first embodiment.
  • FIG. 4D is a diagram provided for describing an operation example of the image processing apparatus according to the first embodiment.
  • FIG. 4E is a diagram provided for describing an operation example of the image processing apparatus according to the first embodiment.
  • FIG. 5 is a diagram illustrating an example of a processing flow of the image processing apparatus according to the first embodiment.
  • FIG. 6A is a diagram provided for describing an operation example of the image processing apparatus according to the first embodiment.
  • FIG. 6B is a diagram provided for describing an operation example of the image processing apparatus according to the first embodiment.
  • FIG. 7 is a diagram illustrating an example of a processing flow of the image processing apparatus according to the first embodiment.
  • FIG. 8A is a diagram for describing an operation example of the image processing apparatus according to the first embodiment.
  • FIG. 8B is a diagram for explaining an operation example of the image processing apparatus according to the first embodiment.
  • FIG. 8C is a diagram provided for describing an operation example of the image processing apparatus according to the first embodiment.
  • FIG. 9 is a diagram illustrating an example of a processing flow at the time of the first reading according to the first embodiment.
  • FIG. 10A is a diagram for describing an operation example of the image processing apparatus according to the first embodiment.
  • FIG. 10B is a diagram provided for describing an operation example of the image processing apparatus according to the first embodiment.
  • FIG. 10C is a diagram for describing an operation example of the image processing apparatus according to the first embodiment.
  • FIG. 11A is a diagram provided for describing an operation example of the image processing apparatus according to the first embodiment.
  • FIG. 11B is a diagram provided for describing an operation example of the image processing apparatus according to the first embodiment.
  • FIG. 11C is a diagram provided for describing an operation example of the image processing apparatus according to the first embodiment.
  • FIG. 12A is a diagram for describing an operation example of the image processing apparatus according to the first embodiment.
  • FIG. 12B is a diagram provided for describing an operation example of the image processing apparatus according to the first embodiment.
  • FIG. 12C is a diagram for describing an operation example of the image processing apparatus according to the first embodiment.
  • FIG. 13A is a diagram for describing an operation example of the image processing apparatus according to the first embodiment.
  • FIG. 13B is a diagram for explaining an operation example of the image processing apparatus according to the first embodiment.
  • FIG. 13C is a diagram for describing an operation example of the image processing apparatus according to the first embodiment.
  • FIG. 14A is a diagram for describing an operation example of the image processing apparatus according to the first embodiment.
  • FIG. 14B is a diagram provided for describing an operation example of the image processing apparatus according to the first embodiment.
  • FIG. 14C is a diagram for describing an operation example of the image processing apparatus according to the first embodiment.
  • FIG. 14D is a diagram provided for describing an operation example of the image processing apparatus according to the first embodiment.
  • FIG. 15A is a diagram for describing an operation example of the image processing apparatus according to the first embodiment.
  • FIG. 15B is a diagram provided for describing an operation example of the image processing apparatus according to the first embodiment.
  • FIG. 15C is a diagram for describing an operation example of the image processing apparatus according to the first embodiment.
  • FIG. 15D is a diagram for describing an operation example of the image processing apparatus according to the first embodiment.
  • FIG. 16A is a diagram provided for describing an operation example of the image processing apparatus according to the first embodiment.
  • FIG. 16B is a diagram provided for describing an operation example of the image processing apparatus according to the first embodiment.
  • FIG. 16C is a diagram for describing an operation example of the image processing apparatus according to the first embodiment.
  • FIG. 17 is a diagram provided for describing an operation example of the image processing apparatus according to the first embodiment.
  • FIG. 18A is a diagram for describing an operation example of the image processing apparatus according to the first embodiment.
  • FIG. 18B is a diagram provided for describing an operation example of the image processing apparatus according to the first embodiment.
  • FIG. 18C is a diagram for describing an operation example of the image processing apparatus according to the first embodiment.
  • FIG. 19A is a diagram provided for describing an operation example of the image processing apparatus according to the first embodiment.
  • FIG. 19B is a diagram for describing an operation example of the image processing apparatus according to the first embodiment.
  • FIG. 19C is a diagram for describing an operation example of the image processing apparatus according to the first embodiment.
  • FIG. 20 is a diagram provided for describing an operation example of the image processing apparatus according to the first embodiment.
  • FIG. 21 is a diagram illustrating an example of an old book according to the second embodiment.
  • FIG. 22 is a diagram illustrating an example of a processing flow of the image processing apparatus according to the second embodiment.
  • FIG. 23 is a diagram illustrating an example of a processing flow of the image processing apparatus according to the second embodiment.
  • FIG. 24 is a diagram illustrating an example of a processing flow of the image processing apparatus according to the second embodiment.
  • FIG. 25 is a diagram illustrating an example of a processing flow of the image processing apparatus according to the second embodiment.
  • FIG. 26A is a diagram for describing an operation example of the image processing apparatus according to the second embodiment.
  • FIG. 26B is a diagram for describing an operation example of the image processing apparatus according to the second embodiment.
  • FIG. 26C is a diagram for describing an operation example of the image processing apparatus according to the second embodiment.
  • FIG. 27A is a diagram for describing an operation example of the image processing apparatus according to the second embodiment.
  • FIG. 27B is a diagram provided for describing an operation example of the image processing apparatus according to the second embodiment.
  • FIG. 27C is a diagram for describing an operation example of the image processing apparatus according to the second embodiment.
  • FIG. 28A is a diagram for describing an operation example of the image processing apparatus according to the second embodiment.
  • FIG. 28B is a diagram provided for describing an operation example of the image processing apparatus according to the second embodiment.
  • FIG. 28C is a diagram for describing an operation example of the image processing apparatus according to the second embodiment.
  • FIG. 29A is a diagram for describing an operation example of the image processing apparatus according to the second embodiment.
  • FIG. 29B is a diagram provided for describing an operation example of the image processing apparatus according to the second embodiment.
  • FIG. 29C is a diagram for describing an operation example of the image processing apparatus according to the second embodiment.
  • FIG. 30A is a diagram provided for describing an operation example of the image processing apparatus according to the second embodiment.
  • FIG. 30B is a diagram provided for describing an operation example of the image processing apparatus according to the second embodiment.
  • FIG. 30C is a diagram for describing an operation example of the image processing apparatus according to the second embodiment.
  • FIG. 30D is a diagram for describing an operation example of the image processing apparatus according to the second embodiment.
  • FIG. 31A is a diagram for describing an operation example of the image processing apparatus according to the second embodiment.
  • FIG. 31B is a diagram for describing an operation example of the image processing apparatus according to the second embodiment.
  • FIG. 31C is a diagram for describing an operation example of the image processing apparatus according to the second embodiment.
  • FIG. 31D is a diagram provided for describing an operation example of the image processing apparatus according to the second embodiment.
  • FIG. 32 is a diagram provided for describing an operation example of the image processing apparatus according to the second embodiment.
  • FIG. 33A is a diagram for describing an operation example of the image processing apparatus according to the second embodiment.
  • FIG. 33B is a diagram provided for describing an operation example of the image processing apparatus according to the second embodiment.
  • FIG. 33C is a diagram for describing an operation example of the image processing apparatus according to the second embodiment.
  • FIG. 34 is a diagram for explaining an operation example of the image processing apparatus according to the second embodiment.
  • FIG. 35A is a diagram for describing an operation example of the image processing apparatus according to the second embodiment.
  • FIG. 35B is a diagram provided for describing an operation example of the image processing apparatus according to the second embodiment.
  • FIG. 35C is a diagram for describing an operation example of the image processing apparatus according to the second embodiment.
  • FIG. 36A is a diagram for describing an operation example of the image processing apparatus according to the second embodiment.
  • FIG. 36B is a diagram for describing an operation example of the image processing apparatus according to the second embodiment.
  • FIG. 36C is a diagram for describing an operation example of the image processing apparatus according to the second embodiment.
  • FIG. 37 is a diagram illustrating an example of a processing flow of the image processing apparatus according to the second embodiment.
  • FIG. 38A is a diagram for describing an operation example of the image processing apparatus according to the second embodiment.
  • FIG. 38B is a diagram for describing an operation example of the image processing apparatus according to the second embodiment.
  • FIG. 38C is a diagram for describing an operation example of the image processing apparatus according to the second embodiment.
  • FIG. 39 is a diagram illustrating an example of a processing flow of the image processing apparatus according to the third embodiment.
  • FIG. 40A is a diagram provided for describing an operation example of the image processing apparatus according to the third embodiment.
  • FIG. 40B is a diagram for describing an operation example of the image processing apparatus according to the third embodiment.
  • FIG. 40C is a diagram for describing an operation example of the image processing apparatus according to the third embodiment.
  • FIG. 41A is a diagram provided for describing an operation example of the image processing apparatus according to the third embodiment.
  • FIG. 41B is a diagram provided for describing an operation example of the image processing apparatus according to the third embodiment.
  • FIG. 41C is a diagram for describing an operation example of the image processing apparatus according to the third embodiment.
  • FIG. 42A is a diagram for describing an operation example of the image processing apparatus according to the third embodiment.
  • FIG. 42B is a diagram for describing an operation example of the image processing apparatus according to the third embodiment.
  • FIG. 42C is a diagram for describing an operation example of the image processing apparatus according to the third embodiment.
  • FIG. 43A is a diagram for describing an operation example of the image processing apparatus according to the third embodiment.
  • FIG. 43B is a diagram provided for describing an operation example of the image processing apparatus according to the third embodiment.
  • FIG. 43C is a diagram for describing an operation example of the image processing apparatus according to the third embodiment.
  • FIG. 44A is a diagram for describing an operation example of the image processing apparatus according to the third embodiment.
  • FIG. 44B is a diagram provided for describing an operation example of the image processing apparatus according to the third embodiment.
  • FIG. 44C is a diagram for describing an operation example of the image processing apparatus according to the third embodiment.
  • FIG. 44D is a diagram for describing an operation example of the image processing apparatus according to the third embodiment.
  • FIG. 45A is a diagram provided for describing an operation example of the image processing apparatus according to the third embodiment.
  • FIG. 45B is a diagram for describing an operation example of the image processing apparatus according to the third embodiment.
  • FIG. 45C is a diagram for describing an operation example of the image processing apparatus according to the third embodiment.
  • FIG. 45D is a diagram for explaining an operation example of the image processing apparatus according to the third embodiment.
  • FIG. 46 is a diagram for describing an operation example of the image processing apparatus according to the third embodiment.
  • FIG. 47A is a diagram for describing an operation example of the image processing apparatus according to the third embodiment.
  • FIG. 47B is a diagram for describing an operation example of the image processing apparatus according to the third embodiment.
  • FIG. 47C is a diagram for describing an operation example of the image processing apparatus according to the third embodiment.
  • FIG. 48 is a diagram for describing an operation example of the image processing apparatus according to the third embodiment.
  • FIG. 49A is a diagram for describing an operation example of the image processing apparatus according to the third embodiment.
  • FIG. 49B is a diagram for describing an operation example of the image processing apparatus according to the third embodiment.
  • FIG. 49C is a diagram for describing an operation example of the image processing apparatus according to the third embodiment.
  • FIG. 50A is a diagram for describing an operation example of the image processing apparatus according to the third embodiment.
  • FIG. 50B is a diagram provided for describing an operation example of the image processing apparatus according to the third embodiment.
  • FIG. 50C is a diagram for describing an operation example of the image processing apparatus according to the third embodiment.
  • FIG. 1 is a diagram illustrating a configuration example of the image processing apparatus according to the first embodiment.
  • the image processing device 1 includes a control unit 11, a storage unit 13, an image processing unit 15, and a display unit 17.
  • the storage unit 13, the image processing unit 15, and the display unit 17 operate under the control of the control unit 11.
  • The image processing device 1 is used, for example, mounted on a scanner or connected to a scanner.
  • The storage unit 13 stores images read by a scanner (hereinafter sometimes referred to as "read images"). For example, when each page of one book is read by a scanner and the book is converted into electronic data, the storage unit 13 stores a series of a plurality of read images whose pages are continuous over the whole book.
  • the read image includes an image of a medium to be read (hereinafter, may be referred to as a “medium image”), and the medium image is arranged in the read image.
  • the medium image is an image of each page. That is, the storage unit 13 stores a series of a plurality of read images each including a medium image (in each of which the medium image is arranged).
  • FIG. 2, FIG. 3, FIG. 5, and FIG. 7 are diagrams illustrating an example of the processing flow of the image processing apparatus according to the first embodiment.
  • FIGS. 4A to 4E, 6A and 6B, 8A to 8C, 10A to 10C, 11A to 11C, 12A to 12C, 13A to 13C, 14A to 14D, 15A to 15D, 16A to 16C, 17, 18A to 18C, 19A to 19C, and 20 are diagrams for explaining operation examples of the image processing apparatus according to the first embodiment.
  • In the first embodiment, each page of one old book is read by a scanner and the book is converted into electronic data.
  • the first embodiment will exemplify a case in which each page of an old book placed in a two-page spread state on a document table is read by an overhead scanner.
  • the process flow shown in FIG. 2 is started when, for example, a process start button (not shown) of the image reading apparatus 10 is pressed by an operator.
  • In step S101, the control unit 11 selects a read image to be processed by the image processing unit 15 from the series of read images stored in the storage unit 13, acquires the selected read image from the storage unit 13, and outputs it to the image processing unit 15.
  • Blank images are removed in advance and are not stored in the storage unit 13.
  • For example, when a front cover image, a spread image of the first and second pages (hereinafter sometimes referred to as the "spread 12 image"), a spread image of the third and fourth pages (hereinafter sometimes referred to as the "spread 34 image"), ..., and a back cover image are stored in the storage unit 13, the control unit 11 repeats steps S101 to S109.
  • In step S101 of the first processing loop of steps S101 to S109, the front cover image is acquired from the storage unit 13.
  • In step S101 of the second processing loop, the spread 12 image is acquired, and in step S101 of the third processing loop, the spread 34 image is acquired; the read images are thus acquired sequentially from the storage unit 13.
  • The control unit 11 acquires the back cover image from the storage unit 13 in step S101 of the last processing loop of steps S101 to S109. Further, for example, the series of images consisting of the front cover image, the spread 12 image, the spread 34 image, ..., and the back cover image is stored in the storage unit 13 as one file, and each of these images is sequentially selected.
  • the file to be processed is arbitrarily selected by an operator, for example.
  • In step S103, the image processing unit 15 performs a medium image position detection process on the read image selected in step S101.
  • FIG. 3 shows an example of the processing flow of the medium image position detection processing.
  • FIG. 4A shows an example of the read image RI selected in step S101.
  • The read image RI is composed of, for example, medium images MIA and MIB and an image of the document table on which the medium to be read is placed (hereinafter sometimes referred to as the "document table image") MTI.
  • the medium image MIA is an image of a spread of an old book
  • the medium image MIB is an image of a color chart placed beside the old book.
  • In step S151 in FIG. 3, the image processing unit 15 performs edge detection ED on the read image RI (FIG. 4A) using neighboring tone differences (FIG. 4B).
  • In step S153, the image processing unit 15 performs straight line detection SLD on the read image RI after the edge detection ED, using the Hough transform and the least squares method (FIG. 4C).
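The least-squares half of step S153 can be sketched as follows; this is only an illustrative reading of the step (the function name and the y = a·x + b parameterization are assumptions, and a real implementation would also handle near-vertical lines, which this form cannot represent):

```python
import numpy as np

def fit_line_lsq(xs, ys):
    """Least-squares fit of y = a*x + b to edge points that a Hough
    transform has already grouped into one candidate line.
    Returns the slope a and intercept b."""
    a, b = np.polyfit(np.asarray(xs, float), np.asarray(ys, float), 1)
    return float(a), float(b)

# Points lying exactly on y = 2x + 1 recover that line.
a, b = fit_line_lsq([0, 1, 2, 3], [1, 3, 5, 7])
```

Pairing the Hough transform with a least-squares refinement in this way is a common pattern: the transform votes for a coarse line, and the fit sharpens it against the actual edge pixels.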
  • In step S155, the image processing unit 15 performs curve detection CLD on the read image RI after the edge detection ED, using labeling (FIG. 4D).
  • In step S157, the image processing unit 15 performs circumscribed rectangle detection RTD on the read image RI based on the results of the straight line detection SLD and the curve detection CLD (FIG. 4E).
  • The circumscribed rectangle BR1 of the medium image MIA is detected as the position of the medium image MIA in the read image RI, and the circumscribed rectangle BR2 of the medium image MIB is detected as the position of the medium image MIB in the read image RI.
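A minimal version of the circumscribed-rectangle step can be sketched with NumPy alone, assuming the line and curve detection results have already been merged into one binary mask per medium image (the function name and the (x, y, width, height) return convention are illustrative, not from the patent):

```python
import numpy as np

def circumscribed_rect(mask):
    """Axis-aligned circumscribed rectangle (x, y, width, height) of
    the nonzero pixels in a binary mask, standing in for the rectangle
    detection RTD of step S157."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None  # no contour pixels found for this medium image
    return (int(xs.min()), int(ys.min()),
            int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1))

# A 5-wide, 3-tall blob of contour pixels yields a 5x3 rectangle.
mask = np.zeros((10, 10), dtype=np.uint8)
mask[2:5, 3:8] = 1
rect = circumscribed_rect(mask)  # (3, 2, 5, 3)
```

For tilted media, production code would use a rotated minimum-area rectangle instead of this axis-aligned box, so that the tilt computed in step S105 is meaningful.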
  • In step S105, the image processing unit 15 calculates the tilt and the center of gravity of each medium image based on each circumscribed rectangle detected in step S157. For example, the image processing unit 15 uses the tilt of the circumscribed rectangle as the tilt of the medium image, and the center of gravity of the circumscribed rectangle as the center of gravity of the medium image.
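Given the four corners of a (possibly rotated) circumscribed rectangle, both quantities of step S105 follow directly; a sketch, where the assumption that the first two corners are the ends of the upper side is ours, not the patent's:

```python
import math

def rect_centroid_and_tilt(corners):
    """Center of gravity and tilt (degrees) of a circumscribed
    rectangle given its four (x, y) corners; step S105 uses these as
    the centroid and tilt of the medium image itself."""
    cx = sum(x for x, _ in corners) / 4.0
    cy = sum(y for _, y in corners) / 4.0
    (x1, y1), (x2, y2) = corners[0], corners[1]  # assumed upper side
    tilt = math.degrees(math.atan2(y2 - y1, x2 - x1))
    return (cx, cy), tilt

# An untilted 4x2 rectangle: centroid (2.0, 1.0), tilt 0 degrees.
c, t = rect_centroid_and_tilt([(0, 0), (4, 0), (4, 2), (0, 2)])
```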
  • In step S107, the image processing unit 15 performs a medium image reference position setting process.
  • FIG. 5 shows an example of the processing flow of the medium image reference position setting processing.
  • The processes in steps S151, S153, and S155 in FIG. 5 are the same as those shown in FIG. 3.
  • In step S161 in FIG. 5, the image processing unit 15 detects, for each medium image, a figure indicating the outline of the medium image (hereinafter sometimes referred to as a "medium outline figure") based on the results of the straight line detection SLD and the curve detection CLD.
  • In step S163, the image processing unit 15 determines, for each medium image, whether the upper side of the medium outline figure is a straight line. If the upper side is a straight line (step S163: Yes), the process proceeds to step S165; if it is not a straight line (step S163: No), the process proceeds to step S167.
  • In step S165, the image processing unit 15 sets the midpoint of the upper side of the medium outline figure (that is, the midpoint of the straight line) as the medium image reference position, and causes the storage unit 13 to store the read image after the medium image reference position has been set.
  • In step S167, the image processing unit 15 sets, for the medium image, a straight line connecting the corner points at both ends of the upper side of the medium outline figure (hereinafter sometimes referred to as an "approximate straight line").
  • In step S169, the image processing unit 15 sets the midpoint of the approximate straight line as the medium image reference position, and causes the storage unit 13 to store the read image after the medium image reference position has been set.
  • FIGS. 6A and 6B show examples of setting the medium image reference position.
  • FIG. 6A shows an example in which the midpoint of the approximate straight line set for the medium image is set as the medium image reference position.
  • the read image RI includes a medium image MI and a document table image MTI.
  • a circumscribed rectangle BR for the medium image MI is detected.
  • a medium outline figure DL is detected for the medium image MI.
  • In step S163, since the upper side of the medium outline figure DL is not a straight line (step S163: No), the processing of step S167 sets, for the medium image MI, an approximate straight line SL connecting the corner points CR1 and CR2 at both ends of the upper side of the medium outline figure DL.
  • In step S169, the midpoint of the approximate straight line SL is set as the medium image reference position RRP.
  • When the upper side of the medium outline figure DL is not a straight line, the image processing unit 15 may, as shown in FIG. 6B, set the midpoint of the upper side as the medium image reference position RRP instead of setting the midpoint of the approximate straight line SL as the medium image reference position RRP.
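When the reference position is taken from the endpoints of the upper side, both branches of steps S163 to S169 reduce to the same computation: the midpoint of a straight upper side and the midpoint of the approximate straight line through the side's corner points are each the midpoint between those two endpoints. A sketch under that reading (the FIG. 6B variant, which takes the midpoint along the curved side itself, is not covered here):

```python
def medium_image_reference_position(upper_side):
    """Midpoint between the two end corner points of the medium
    outline figure's upper side (steps S165/S169); `upper_side` is an
    ordered list of (x, y) points along that side."""
    (x1, y1), (x2, y2) = upper_side[0], upper_side[-1]
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

# A bowed upper side: the reference position is the chord midpoint.
rrp = medium_image_reference_position([(0.0, 4.0), (3.0, 3.2), (6.0, 4.0)])
```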
  • In step S109, the control unit 11 determines whether or not the processing in steps S101 to S107 has been completed for all of the series of read images stored in the storage unit 13.
  • If the processing has been completed (step S109: Yes), the process proceeds to step S111.
  • If not (step S109: No), the process returns to step S101.
  • For example, when a series consisting of a front cover image, a spread 12 image, a spread 34 image, ..., and a back cover image is stored in the storage unit 13 as the read images of one old book, the process proceeds to step S111 once steps S101 to S107 have been completed for the back cover image.
  • In step S111, the image processing unit 15 sets a read image reference position for each read image.
  • FIGS. 6A and 6B show examples of setting the read image reference position.
  • The image processing unit 15 sets, for example, the midpoint of the upper side of the document table image MTI included in the read image RI as the read image reference position MRP1.
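The "arrangement change" described in the overview, which keeps the relationship between the medium image reference position and the read image reference position consistent across pages so that one extraction area fits every read image, can be sketched as an integer-pixel shift. This is only a stand-in: np.roll wraps pixels around, whereas a real implementation would translate with interpolation and pad the vacated border.

```python
import numpy as np

def align_to_reference(image, medium_ref, read_ref):
    """Shift the read image so the medium image reference position
    lands on the read image reference position (both given as (x, y)).
    Sub-pixel accuracy and border handling are omitted."""
    dx = int(round(read_ref[0] - medium_ref[0]))
    dy = int(round(read_ref[1] - medium_ref[1]))
    return np.roll(image, shift=(dy, dx), axis=(0, 1))

img = np.zeros((4, 4), dtype=np.uint8)
img[2, 1] = 255  # pixel at the medium image reference position
aligned = align_to_reference(img, medium_ref=(1, 2), read_ref=(2, 0))
```

After alignment, the same extraction rectangle can be applied to every read image in the series, which is what makes the batch extraction of the overview possible.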
  • In step S113, similarly to the processing in step S101, the control unit 11 selects, from the series of read images stored in the storage unit 13 (that is, the series of read images after the medium image reference position and the read image reference position have been set), a read image to be processed by the image processing unit 15, acquires it from the storage unit 13, and outputs the acquired read image to the image processing unit 15.
  • In step S115, the image processing unit 15 performs erect correction on the medium image included in the read image selected in step S113. That is, in step S115, the image processing unit 15 corrects the tilt of the medium image calculated in step S105.
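Erect correction amounts to rotating the medium image by the negative of the tilt from step S105 about its centroid; shown here for a single coordinate (a full implementation would rotate every pixel with interpolation, e.g. an affine warp, and the function name is illustrative):

```python
import math

def erect_correct_point(pt, center, tilt_deg):
    """Rotate one (x, y) point by -tilt_deg about the medium image
    centroid, undoing the tilt measured in step S105."""
    t = math.radians(-tilt_deg)
    x, y = pt[0] - center[0], pt[1] - center[1]
    return (center[0] + x * math.cos(t) - y * math.sin(t),
            center[1] + x * math.sin(t) + y * math.cos(t))

# Undoing a 90-degree tilt moves (1, 0) to (0, -1) about the origin.
p = erect_correct_point((1.0, 0.0), (0.0, 0.0), 90.0)
```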
  • In step S117, the image processing unit 15 determines whether or not the read image selected in step S113 is the first read image in the series of read images. If it is the first read image (step S117: Yes), the process proceeds to step S119. On the other hand, if it is not the first read image (step S117: No), the process proceeds to step S121.
• In step S121, the image processing unit 15 performs a pair image detection process.
• FIG. 7 illustrates an example of the processing flow of the pair image detection process.
• In step S171, the image processing unit 15 calculates the center of gravity of the medium image included in the Nth selected read image, that is, the currently selected read image (this medium image is hereinafter referred to as the "current medium image", and its center of gravity as the "current medium image center of gravity"), and the center of gravity of the medium image included in the (N-1)th selected read image, that is, the previously selected read image (hereinafter the "previous medium image" and the "previous medium image center of gravity").
• The image processing unit 15 calculates the center of gravity of the circumscribed rectangle detected in step S157 as the current medium image center of gravity and as the previous medium image center of gravity, respectively.
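Since each center of gravity is taken as the center of the circumscribed (axis-aligned bounding) rectangle, it can be computed directly from the extreme coordinates of the medium image region. A minimal sketch, assuming the region is available as a list of (x, y) points (the data representation is an assumption):

```python
def circumscribed_rect(points):
    """Axis-aligned circumscribed rectangle of a point set,
    returned as (x_min, y_min, x_max, y_max)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

def rect_centroid(rect):
    """Center of gravity of a rectangle (x_min, y_min, x_max, y_max)."""
    x0, y0, x1, y1 = rect
    return ((x0 + x1) / 2.0, (y0 + y1) / 2.0)
```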
• In step S173, the image processing unit 15 calculates the distance between the read image reference position and the current medium image center of gravity (hereinafter the "current distance") and the distance between the read image reference position and the previous medium image center of gravity (hereinafter the "previous distance").
• In step S175, the image processing unit 15 calculates the vertical length H and the horizontal length W of the current medium image, and the vertical length H' and the horizontal length W' of the previous medium image.
• Specifically, the image processing unit 15 calculates the vertical and horizontal lengths of the circumscribed rectangle of the current medium image as the vertical length H and the horizontal length W of the current medium image, and the vertical and horizontal lengths of the circumscribed rectangle of the previous medium image as the vertical length H' and the horizontal length W' of the previous medium image.
• In step S177, the image processing unit 15 determines whether or not the absolute value of the difference between the current distance and the previous distance (hereinafter the "distance difference") is less than a threshold TH1. If the distance difference is less than the threshold TH1 (step S177: Yes), the process proceeds to step S179; if the distance difference is equal to or greater than the threshold TH1 (step S177: No), the process proceeds to step S185.
• In step S179, the image processing unit 15 determines whether or not the absolute value of the difference between the vertical length H of the current medium image and the vertical length H' of the previous medium image (hereinafter the "vertical length difference") is less than a threshold TH2. If the vertical length difference is less than the threshold TH2 (step S179: Yes), the process proceeds to step S181; if it is equal to or greater than the threshold TH2 (step S179: No), the process proceeds to step S191.
• In step S181, the image processing unit 15 determines whether or not the absolute value of the difference between the horizontal length W of the current medium image and the horizontal length W' of the previous medium image (hereinafter the "horizontal length difference") is less than a threshold TH3. If the horizontal length difference is less than the threshold TH3 (step S181: Yes), the process proceeds to step S183; if it is equal to or greater than the threshold TH3 (step S181: No), the process proceeds to step S187.
• In step S183, the image processing unit 15 determines that the current medium image and the previous medium image are either both two-page spread images or both color chart images, and that they form a pair with each other. That is, in step S183 the image processing unit 15 determines that the current and previous medium images are a pair of two-page spread images or a pair of color chart images. The image processing unit 15 distinguishes the two cases based on, for example, the horizontal length of the current medium image.
• When the horizontal length of the current medium image is equal to or greater than a threshold TH4, the image processing unit 15 determines that the current and previous medium images are a pair of two-page spread images; when it is less than the threshold TH4, it determines that they are a pair of color chart images.
• In step S185, the image processing unit 15 determines whether or not the vertical length difference is less than the threshold TH2. If the vertical length difference is less than the threshold TH2 (step S185: Yes), the process proceeds to step S187; if it is equal to or greater than the threshold TH2 (step S185: No), the process proceeds to step S191.
• In step S187, the image processing unit 15 determines whether the horizontal length W of the current medium image is within a predetermined range relative to the horizontal length W' of the previous medium image. For example, the image processing unit 15 determines whether the horizontal length W of the current medium image is within the range of not less than half (that is, W'/2) and less than twice (that is, 2W') the horizontal length W' of the previous medium image. If the horizontal length W of the current medium image is within this predetermined range (step S187: Yes), the process proceeds to step S189; if not (step S187: No), the process proceeds to step S191.
• In step S189, the image processing unit 15 determines that one of the current and previous medium images is a cover image, that the other is a two-page spread image, and that the two images form a pair with each other. That is, in step S189 the image processing unit 15 determines that the current and previous medium images are a pair of a cover image and a spread image.
• In step S191, the image processing unit 15 determines that the current medium image and the previous medium image do not form a pair with each other; that is, that they are not paired images.
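Steps S177 to S191 form a small decision tree over the distance difference and the two size differences. The sketch below restates that flow; the dictionary representation, function name, return labels, and any concrete threshold values are illustrative assumptions, not part of the description:

```python
import math

def classify_pair(cur, prev, ref, th1, th2, th3, th4):
    """Pair determination of steps S171-S191.
    cur, prev: dicts with 'centroid' (x, y), 'h' (vertical length), and
    'w' (horizontal length); ref: read image reference position.
    Returns 'spread pair', 'chart pair', 'cover/spread pair', or 'no pair'."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    d_cur = dist(ref, cur['centroid'])    # current distance (step S173)
    d_prev = dist(ref, prev['centroid'])  # previous distance (step S173)
    dh = abs(cur['h'] - prev['h'])        # vertical length difference
    dw = abs(cur['w'] - prev['w'])        # horizontal length difference

    if abs(d_cur - d_prev) < th1:         # step S177
        if dh >= th2:                     # step S179: No
            return 'no pair'              # step S191
        if dw < th3:                      # step S181: Yes -> step S183
            return 'spread pair' if cur['w'] >= th4 else 'chart pair'
    else:
        if dh >= th2:                     # step S185: No
            return 'no pair'              # step S191
    # step S187: is W within [W'/2, 2W')?
    if prev['w'] / 2 <= cur['w'] < 2 * prev['w']:
        return 'cover/spread pair'        # step S189
    return 'no pair'                      # step S191
```

Note that both the "distance close, widths differ" path (S181: No) and the "distance differs, heights match" path (S185: Yes) converge on the same range check of step S187.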
• FIGS. 8A, 8B, and 8C show examples of the pair image determination.
• In FIG. 8A, the medium image MIA included in the read image RIA is the previous medium image, and the medium image MIB included in the read image RIB is the current medium image.
• The distance difference between the previous distance DTA, from the read image reference position MRP1 to the previous medium image center of gravity GBA, and the current distance DTB, from the read image reference position MRP1 to the current medium image center of gravity GBB, is less than the threshold TH1 (step S177: Yes).
• The vertical length difference between the vertical length H5 of the medium image MIA and the vertical length H6 of the medium image MIB is less than the threshold TH2 (step S179: Yes).
• The horizontal length difference between the horizontal length W5 of the medium image MIA and the horizontal length W6 of the medium image MIB is less than the threshold TH3 (step S181: Yes). Therefore, it is determined that the medium image MIA and the medium image MIB are a pair of two-page spread images (step S183).
• In FIG. 8B, the medium image MIC included in the read image RIC is the previous medium image, and the medium image MID included in the read image RID is the current medium image.
• The distance difference between the previous distance DTC, from the read image reference position MRP1 to the previous medium image center of gravity GBC, and the current distance DTD, from the read image reference position MRP1 to the current medium image center of gravity GBD, is less than the threshold TH1 (step S177: Yes).
• The vertical length difference between the vertical length H3 of the medium image MIC and the vertical length H4 of the medium image MID is less than the threshold TH2 (step S179: Yes).
• The horizontal length difference between the horizontal length W3 of the medium image MIC and the horizontal length W4 of the medium image MID is equal to or greater than the threshold TH3 (step S181: No).
• The horizontal length W4 of the medium image MID is within the range of not less than half and less than twice the horizontal length W3 of the medium image MIC (step S187: Yes). Therefore, it is determined that the medium image MIC and the medium image MID are a pair of a cover image and a spread image (step S189).
• In FIG. 8C, the medium image MIE included in the read image RIE is the previous medium image, and the medium image MIF included in the read image RIF is the current medium image.
• The distance difference between the previous distance DTE, from the read image reference position MRP1 to the previous medium image center of gravity GBE, and the current distance DTF, from the read image reference position MRP1 to the current medium image center of gravity GBF, is equal to or greater than the threshold TH1 (step S177: No).
• The vertical length difference between the vertical length H1 of the medium image MIE and the vertical length H2 of the medium image MIF is less than the threshold TH2 (step S185: Yes).
• The horizontal length W2 of the medium image MIF is within the range of not less than half and less than twice the horizontal length W1 of the medium image MIE (step S187: Yes). Therefore, it is determined that the medium image MIE and the medium image MIF are a pair of a cover image and a spread image (step S189).
• The image processing unit 15 performs the processing of steps S171 to S191 (FIG. 7) for all of the medium images included in the currently selected read image and the previously selected read image.
• In step S123, the image processing unit 15 determines whether or not a paired image was found in the processing of steps S171 to S191 (FIG. 7).
• When step S183 or step S189 was reached, it is determined that a paired image exists; when step S191 was reached, it is determined that no paired image exists.
• If a paired image exists (step S123: Yes), the process proceeds to step S125; if not (step S123: No), the process proceeds to step S131.
• In step S125, the image processing unit 15 determines whether the two paired images are two two-page spread images or two color chart images.
• When the process of step S183 was performed in FIG. 7, it is determined that the paired images are two spread images or two color chart images; when the process of step S189 was performed in FIG. 7, it is determined that they are not.
• If the two paired images are two spread images or two color chart images (step S125: Yes), the process proceeds to step S127; if not (step S125: No), the process proceeds to step S129.
• The image processing unit 15 performs the position setting process A in step S127, and the position setting process B in step S129. Details of the position setting processes A and B will be described later. After the processing in step S127 or S129, the process proceeds to step S135.
• In step S131, the image processing unit 15 outputs a warning indicating that there is no corresponding image.
• For example, the image processing unit 15 displays, on the display unit 17, a warning message indicating that the Nth selected read image contains no medium image corresponding to a medium image in the (N-1)th selected read image. That is, the image processing unit 15 outputs a warning when there is no current medium image that pairs with the previous medium image (step S123: No).
• In step S133, the image processing unit 15 determines whether or not to continue the processing. For example, when the operator instructs the image processing apparatus 10 to continue in response to the warning output in step S131, the image processing unit 15 determines that the processing is to be continued (step S133: Yes); when the operator instructs the apparatus to stop, it determines that the processing is not to be continued (step S133: No). If the processing is to be continued (step S133: Yes), the process proceeds to step S119; if not (step S133: No), the process ends.
• In step S119, the image processing unit 15 performs the position setting process C. Details of the position setting process C will be described later. After the processing in step S119, the process proceeds to step S135.
• In step S135, the image processing unit 15 rearranges the medium image in the read image based on the result of the position setting process A, B, or C (step S127, S129, or S119). Details of the processing in step S135 will be described later.
• In step S136, the image processing unit 15 determines whether or not the processing in steps S115 to S135 has been completed for all the medium images included in the read image selected in step S113. If so (step S136: Yes), the process proceeds to step S137; if there remains a medium image for which steps S115 to S135 have not been performed (step S136: No), the process returns to step S115.
• In step S137, the control unit 11 determines whether or not the processing in steps S113 to S135 has been completed for all of the series of read images stored in the storage unit 13.
• If so (step S137: Yes), the processing proceeds to step S139.
• If not (step S137: No), the processing returns to step S113.
• In step S139, the image processing unit 15 sets an extraction area for the read image. Details of the processing in step S139 will be described later.
• In step S141, the image processing unit 15 extracts the medium image based on the extraction area set in step S139. Details of the processing in step S141 will be described later.
• In the following, the image processing unit 15 takes the upper-left vertex of each rectangular read image as the origin, with the X coordinate increasing to the right and the Y coordinate increasing downward. That is, in a read image, the X coordinate is the horizontal coordinate and the Y coordinate is the vertical coordinate.
• The read images RI11, RI12, RI13, and RI14 correspond to a series of read images of one old book.
• The read image RI11 includes a medium image MI11, which is an image of the front cover (FIG. 10A); that is, the medium image MI11 is arranged in the read image RI11.
• The read image RI12 includes a medium image MI121, which is a spread image of pages 1-2, and a medium image MI122, which is an image of a color chart (FIG. 11A); that is, the medium images MI121 and MI122 are arranged in the read image RI12.
• The read image RI13 includes a medium image MI131, which is a two-page spread image, and a medium image MI132, which is a color chart image (FIG. 12A).
• The read image RI14 includes a medium image MI14, which is an image of the back cover (FIG. 13A); that is, the medium image MI14 is arranged in the read image RI14. Each of the read images RI11, RI12, RI13, and RI14 also includes the document table image MTI.
• The image processing unit 15 operates on the read images RI11, RI12, RI13, and RI14 as shown in FIGS. 10B, 11B, 12B, and 13B, respectively.
• As shown in FIG. 10B, the image processing unit 15 detects the circumscribed rectangle BR11 of the medium image MI11 and calculates the center of gravity GB11 of the detected circumscribed rectangle BR11. Further, since the upper side of the medium outline figure of the medium image MI11 is a straight line, the image processing unit 15 sets the midpoint of the upper side of the medium outline figure in the medium image MI11 as the medium image reference position RRP11.
• As shown in FIG. 11B, the image processing unit 15 detects the circumscribed rectangle BR121 of the medium image MI121 and calculates the center of gravity GB121 of the detected circumscribed rectangle BR121. Since the upper side of the medium outline figure of the medium image MI121 is not a straight line, the image processing unit 15 sets the midpoint of the approximate straight line SL121 connecting the corner points CR121a and CR121b at both ends of the upper side of the medium outline figure in the medium image MI121 as the medium image reference position RRP121. The image processing unit 15 also detects the circumscribed rectangle BR122 of the medium image MI122 and calculates the center of gravity GB122 of the detected circumscribed rectangle BR122. Since the upper side of the medium outline figure of the medium image MI122 is a straight line, the image processing unit 15 sets the midpoint of the upper side of the medium outline figure in the medium image MI122 as the medium image reference position RRP122.
• As shown in FIG. 12B, the image processing unit 15 detects the circumscribed rectangle BR131 of the medium image MI131 and calculates the center of gravity GB131 of the detected circumscribed rectangle BR131. Since the upper side of the medium outline figure of the medium image MI131 is not a straight line, the image processing unit 15 sets the midpoint of the approximate straight line SL131 connecting the corner points CR131a and CR131b at both ends of the upper side of the medium outline figure in the medium image MI131 as the medium image reference position RRP131. The image processing unit 15 also detects the circumscribed rectangle BR132 of the medium image MI132 and calculates the center of gravity GB132 of the detected circumscribed rectangle BR132. Since the upper side of the medium outline figure of the medium image MI132 is a straight line, the image processing unit 15 sets the midpoint of the upper side of the medium outline figure in the medium image MI132 as the medium image reference position RRP132.
• As shown in FIG. 13B, the image processing unit 15 detects the circumscribed rectangle BR14 of the medium image MI14 and calculates the center of gravity GB14 of the detected circumscribed rectangle BR14. Since the upper side of the medium outline figure of the medium image MI14 is a straight line, the image processing unit 15 sets the midpoint of the upper side of the medium outline figure in the medium image MI14 as the medium image reference position RRP14.
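In each of these examples, the medium image reference position is the midpoint of the top side of the medium outline when that side is straight, and otherwise the midpoint of the approximate straight line joining the two top corner points; in both cases it is the midpoint of the segment between the end corner points. A sketch of that rule, where the tolerance-based straightness test is an assumed simplification not specified in the description:

```python
import math

def top_side_is_straight(top_points, tol=1.0):
    """True if every sampled point of the top side lies within tol of the
    chord joining the first and last point (assumed straightness test)."""
    (x0, y0), (x1, y1) = top_points[0], top_points[-1]
    length = math.hypot(x1 - x0, y1 - y0)
    if length == 0:
        return True
    for (px, py) in top_points:
        # perpendicular distance from (px, py) to the chord
        d = abs((x1 - x0) * (y0 - py) - (x0 - px) * (y1 - y0)) / length
        if d > tol:
            return False
    return True

def reference_position(top_points):
    """Medium image reference position: the midpoint of the segment joining
    the two end corner points of the top side. When the side is straight this
    is the midpoint of the side itself; otherwise it is the midpoint of the
    approximate straight line between the corner points."""
    (x0, y0), (x1, y1) = top_points[0], top_points[-1]
    return ((x0 + x1) / 2.0, (y0 + y1) / 2.0)
```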
• The circumscribed rectangle BR11 indicates the area where the medium image MI11 exists in the read image RI11.
• The circumscribed rectangle BR121 indicates the area where the medium image MI121 exists in the read image RI12, and the circumscribed rectangle BR122 indicates the area where the medium image MI122 exists in the read image RI12.
• The circumscribed rectangle BR131 indicates the area where the medium image MI131 exists in the read image RI13, the circumscribed rectangle BR132 indicates the area where the medium image MI132 exists in the read image RI13, and the circumscribed rectangle BR14 indicates the area where the medium image MI14 exists in the read image RI14.
• In this way, the image processing unit 15 sets the medium image reference positions RRP11, RRP121, RRP122, RRP131, RRP132, and RRP14 in the areas where the medium images MI11, MI121, MI122, MI131, MI132, and MI14 exist in the read images RI11, RI12, RI13, and RI14, respectively.
• Since the read image RI14 is the last image in the series of read images of the one old book, when the processing of steps S101 to S107 is completed for the read image RI14 (step S109: Yes), the process proceeds to step S111.
• In the processing of step S111 (FIG. 2), the image processing unit 15 operates on each of the read images RI11, RI12, RI13, and RI14 as shown in FIGS. 10B, 11B, 12B, and 13B. That is, as shown in FIGS. 10B, 11B, 12B, and 13B, the image processing unit 15 sets the midpoint of the upper side of the document table image MTI in each of the read images RI11, RI12, RI13, and RI14 as the read image reference position MRP1.
• In other words, in each of the read images RI11, RI12, RI13, and RI14, the image processing unit 15 sets the read image reference position in an area other than the areas where the medium images MI11, MI121, MI122, MI131, MI132, and MI14 exist.
• In the read image RI11 (FIG. 10B) selected by the control unit 11 in the processing of FIG. 2, the image processing unit 15 corrects the tilt of the medium image MI11 by rotating the circumscribed rectangle BR11 clockwise about the center of gravity GB11 (step S115, FIG. 10C).
• Since the read image RI11 is the first read image, the image processing unit 15 performs the position setting process C on the medium image MI11 (step S119).
• In the position setting process C, the image processing unit 15 sets a vector Vec_b0 starting at the read image reference position MRP1 (coordinates (x_s0, y_s0)) and ending at the medium image reference position RRP11 (coordinates (x_b0, y_b0)) of the erect-corrected medium image MI11 as the position of the medium image MI11 with respect to the document table image MTI. That is, the vector Vec_b0 set for the medium image MI11 indicates the positional relationship between the medium image reference position RRP11 and the read image reference position MRP1.
• The image processing unit 15 rearranges the medium image MI11 in the read image RI11 based on the vector Vec_b0 set for the medium image MI11 (step S135). That is, the image processing unit 15 rearranges the medium image MI11 such that the medium image reference position RRP11 matches the coordinates (x_b0, y_b0) of the end point of the vector Vec_b0.
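The rearrangement in step S135 is a pure translation: the medium image is shifted so that its reference position lands on the end point of its position vector. A minimal sketch, where the tuple coordinates and function names are assumptions for illustration:

```python
def rearrange_offset(medium_ref, read_ref, vec):
    """Translation (dx, dy) that moves the medium image reference position
    medium_ref onto the end point of the position vector vec, where vec
    starts at the read image reference position read_ref."""
    end_x, end_y = read_ref[0] + vec[0], read_ref[1] + vec[1]
    return (end_x - medium_ref[0], end_y - medium_ref[1])

def translate(points, delta):
    """Apply the same translation to every point of the medium image region."""
    dx, dy = delta
    return [(x + dx, y + dy) for (x, y) in points]
```

When the vector was set from the image's own corrected reference position (process C), the resulting offset is zero, which is why the arrangement of MI11 does not change.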
• The image processing unit 15 causes the storage unit 13 to store the read image RI11F obtained after the rearrangement of the medium image MI11.
• The arrangement of the medium image MI11 after the rearrangement is the same as its arrangement before the rearrangement (that is, its arrangement after the erect correction).
• In the read image RI12 (FIG. 11B) selected by the control unit 11 in the processing of FIG. 2, the image processing unit 15 corrects the tilt of the medium image MI121 by rotating the circumscribed rectangle BR121 counterclockwise about the center of gravity GB121 (step S115, FIG. 11C).
• Similarly, the image processing unit 15 corrects the tilt of the medium image MI122 by rotating the circumscribed rectangle BR122 clockwise about the center of gravity GB122 (step S115, FIG. 11C).
• Since the read image RI12 is not the first read image, the image processing unit 15 performs the pair image detection process on the read image RI12 (step S121).
• In the pair image detection process, the medium image MI11 and the medium image MI121 are determined to be a pair of a cover image and a spread image, while the medium image MI11 and the medium image MI122 are determined not to be a pair.
• Since the medium image MI11 and the medium image MI121 are determined to be a pair of a cover image and a spread image, the image processing unit 15 performs the position setting process B on the medium image MI121 (step S129).
• In the position setting process B, the image processing unit 15 sets a vector Vec_b1 starting at the read image reference position MRP1 (coordinates (x_s0, y_s0)) and ending at the medium image reference position RRP121 (coordinates (x_b1, y_b0)) as the position of the medium image MI121 with respect to the document table image MTI.
• That is, the vector Vec_b1 set for the medium image MI121 indicates the positional relationship between the medium image reference position RRP121 and the read image reference position MRP1.
• Here, the X coordinate value x_b1 of the end point is the X coordinate value x_b1 of the medium image reference position RRP121 in the erect-corrected medium image MI121, while the Y coordinate value y_b0 of the end point is the Y coordinate value y_b0 of the medium image reference position RRP11 in FIG. 10C.
• On the other hand, since the medium image MI122 has no paired image, the image processing unit 15 outputs a warning (step S131), and when it determines in step S133 that the processing is to be continued, it performs the position setting process C on the medium image MI122 (step S119).
• In the position setting process C, the image processing unit 15 sets a vector Vec_c1 starting at the read image reference position MRP1 (coordinates (x_s0, y_s0)) and ending at the medium image reference position RRP122 (coordinates (x_c0, y_c0)) of the erect-corrected medium image MI122 as the position of the medium image MI122 with respect to the document table image MTI. That is, the vector Vec_c1 set for the medium image MI122 indicates the positional relationship between the medium image reference position RRP122 and the read image reference position MRP1.
• The image processing unit 15 rearranges the medium image MI121 in the read image RI12 based on the vector Vec_b1 set for the medium image MI121, and rearranges the medium image MI122 based on the vector Vec_c1 set for the medium image MI122 (step S135). That is, the image processing unit 15 rearranges the medium image MI121 such that the medium image reference position RRP121 matches the coordinates (x_b1, y_b0) of the end point of the vector Vec_b1, and rearranges the medium image MI122 such that the medium image reference position RRP122 matches the coordinates (x_c0, y_c0) of the end point of the vector Vec_c1.
• The image processing unit 15 causes the storage unit 13 to store the read image RI12F obtained after the rearrangement of the medium images MI121 and MI122. Due to the rearrangement, the arrangement of the medium image MI121 changes from its arrangement before the rearrangement (that is, its arrangement after the erect correction), whereas the arrangement of the medium image MI122 remains the same as before the rearrangement (that is, the same as its arrangement after the erect correction).
• As described above, the image processing unit 15 rearranges the medium image MI121 such that the medium image reference position RRP121 matches the coordinates (x_b1, y_b0) of the end point of the vector Vec_b1 (FIG. 11C). The X coordinate value x_b1 is that of the medium image reference position RRP121 in the erect-corrected medium image MI121, while the Y coordinate value y_b0 is that of the medium image reference position RRP11 in FIG. 10C.
• This is because the medium image MI11 and the medium image MI121 are a pair of a cover image and a spread image. One medium image and the other medium image are determined to be such a pair (step S189) when the difference between their vertical lengths is less than the threshold TH2 (step S179: Yes), the difference between their horizontal lengths is equal to or greater than the threshold TH3 (step S181: No), and the horizontal length of the other medium image is within the predetermined range of the horizontal length of the one medium image (step S187: Yes).
• Accordingly, when the difference between the vertical lengths of the medium images MI11 and MI121 is less than the threshold TH2, the difference between their horizontal lengths is equal to or greater than the threshold TH3, and the horizontal length of the medium image MI121 is within the predetermined range of the horizontal length of the medium image MI11, the image processing unit 15 changes the vertical arrangement of the medium image MI121 but does not change its horizontal arrangement when rearranging the medium image MI121.
• In the read image RI13 (FIG. 12B) selected by the control unit 11 in the processing of FIG. 2, the image processing unit 15 corrects the tilt of the medium image MI131 by rotating the circumscribed rectangle BR131 clockwise about the center of gravity GB131 (step S115, FIG. 12C).
• Similarly, the image processing unit 15 corrects the tilt of the medium image MI132 by rotating the circumscribed rectangle BR132 counterclockwise about the center of gravity GB132 (step S115, FIG. 12C).
• Since the read image RI13 is not the first read image, the image processing unit 15 performs the pair image detection process on the read image RI13 (step S121).
• In the pair image detection process, the medium image MI121 and the medium image MI131 are determined to be a pair of spread images, and the medium image MI122 and the medium image MI132 are determined to be a pair of color chart images.
• Accordingly, the image processing unit 15 performs the position setting process A on the medium image MI131 (step S127).
• In the position setting process A for the medium image MI131, as shown in FIG. 12C, the image processing unit 15 sets a vector Vec_b1 starting at the read image reference position MRP1 (coordinates (x_s0, y_s0)) and ending at the medium image reference position RRP131 (coordinates (x_b1, y_b0)) as the position of the medium image MI131 with respect to the document table image MTI.
• That is, the vector Vec_b1 set for the medium image MI131 indicates the positional relationship between the medium image reference position RRP131 and the read image reference position MRP1.
• The X coordinate value x_b1 of the medium image reference position RRP131 is the same as the X coordinate value x_b1 of the medium image reference position RRP121 in FIG. 11C, and the Y coordinate value y_b0 of the medium image reference position RRP131 is the same as the Y coordinate value y_b0 of the medium image reference position RRP121 in FIG. 11C. That is, the vector Vec_b1 set for the medium image MI131 is the same as the vector Vec_b1 set for the medium image MI121.
• Similarly, the image processing unit 15 performs the position setting process A on the medium image MI132 (step S127).
• In the position setting process A for the medium image MI132, as shown in FIG. 12C, the image processing unit 15 sets a vector Vec_c1 starting at the read image reference position MRP1 (coordinates (x_s0, y_s0)) and ending at the medium image reference position RRP132 (coordinates (x_c0, y_c0)) as the position of the medium image MI132 with respect to the document table image MTI.
• That is, the vector Vec_c1 set for the medium image MI132 indicates the positional relationship between the medium image reference position RRP132 and the read image reference position MRP1.
• The X coordinate value x_c0 of the medium image reference position RRP132 is the same as the X coordinate value x_c0 of the medium image reference position RRP122 in FIG. 11C, and the Y coordinate value y_c0 of the medium image reference position RRP132 is the same as the Y coordinate value y_c0 of the medium image reference position RRP122 in FIG. 11C. That is, the vector Vec_c1 set for the medium image MI132 is the same as the vector Vec_c1 set for the medium image MI122.
  • the image processing unit 15 rearranges the medium image MI131 in the read image RI13 based on the vector Vec b1 set in the medium image MI131, and rearranges the medium image MI132 based on the vector Vec c1 set in the medium image MI132 (step S135). That is, the image processing unit 15 rearranges the medium image MI131 such that the medium image reference position RRP131 coincides with the coordinates (x b1, y b0) of the end point of the vector Vec b1, and rearranges the medium image MI132 such that the medium image reference position RRP132 coincides with the coordinates (x c0, y c0) of the end point of the vector Vec c1.
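The rearrangement in step S135 amounts to translating a medium image so that its reference position lands on the end point of the vector stored for it. A minimal Python sketch; the function names, the circumscribed rectangle, and the coordinate values are illustrative, not taken from the embodiment:

```python
def reposition(medium_ref, vector_end):
    """Return the (dx, dy) translation that moves a medium image so its
    reference position lands on the end point of the set vector."""
    dx = vector_end[0] - medium_ref[0]
    dy = vector_end[1] - medium_ref[1]
    return dx, dy

def apply_translation(bbox, dx, dy):
    """Translate a circumscribed rectangle given as (left, top, right, bottom)."""
    left, top, right, bottom = bbox
    return (left + dx, top + dy, right + dx, bottom + dy)

# Hypothetical example: the reference position is the upper-left corner of
# the circumscribed rectangle, and the vector ends at (500, 40).
bbox = (100, 60, 400, 460)      # circumscribed rectangle before rearrangement
ref = (100, 60)                 # current reference corner
dx, dy = reposition(ref, (500, 40))
moved = apply_translation(bbox, dx, dy)
```

The whole medium image moves rigidly; only the offset between its reference corner and the vector end point is corrected.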
  • the image processing unit 15 causes the storage unit 13 to store the read image RI13F after the rearrangement of the medium images MI131 and MI132. Due to the rearrangement of the medium image MI131, the arrangement of the medium image MI131 after the rearrangement is changed from the arrangement of the medium image MI131 before the rearrangement (that is, the arrangement of the medium image MI131 after the erecting correction). Further, due to the rearrangement of the medium image MI132, the arrangement of the medium image MI132 after the rearrangement is changed from the arrangement of the medium image MI132 before the rearrangement (that is, the arrangement of the medium image MI132 after the erecting correction).
  • the medium image MI121 and the medium image MI131 are a pair of facing images. Further, as described above, the vector Vec b1 set in the medium image MI131 is the same as the vector Vec b1 set in the medium image MI121. Further, as described above, the vector Vec b1 set in the medium image MI121 indicates the positional relationship between the medium image reference position RRP121 and the read image reference position MRP1, and the vector Vec b1 set in the medium image MI131 indicates the positional relationship between the medium image reference position RRP131 and the read image reference position MRP1.
  • the image processing unit 15 rearranges the medium image MI131 paired with the medium image MI121 such that the positional relationship between the medium image MI121 and the medium image MI131 is the same.
  • the medium image MI122 and the medium image MI132 are a pair of color chart images. Further, as described above, the vector Vec c1 set in the medium image MI132 is the same as the vector Vec c1 set in the medium image MI122. Further, as described above, the vector Vec c1 set in the medium image MI122 indicates the positional relationship between the medium image reference position RRP122 and the read image reference position MRP1, and the vector Vec c1 set in the medium image MI132 indicates the positional relationship between the medium image reference position RRP132 and the read image reference position MRP1.
  • the image processing unit 15 rearranges the medium image MI132 paired with the medium image MI122 such that the positional relationship between the medium image MI122 and the medium image MI132 is the same.
  • Since the circumscribed rectangle BR14 of the medium image MI14 included in the read image RI14 selected by the control unit 11 in the process of step S113 (FIG. 2) has no inclination (FIG. 13B), the medium image MI14 is not rotated even if the erecting correction is performed on it (step S115).
  • the image processing unit 15 performs the image detection process on the read image RI14 (step S121).
  • the medium image MI131 and the medium image MI14 are determined to be paired images of the cover image and the facing image.
  • Since it is determined that the medium image MI131 and the medium image MI14 are a pair of the cover image and the facing image, the image processing unit 15 performs the position setting process B on the medium image MI14 (step S129).
  • In the position setting process B for the medium image MI14, the image processing unit 15, as shown in FIG. 13C, sets the vector Vec b3 starting at the read image reference position MRP1 (coordinates (x s0, y s0)) and ending at the medium image reference position RRP14 (coordinates (x b3, y b0)) as the position of the medium image MI14 with respect to the document table image MTI.
  • the vector Vec b3 set in the medium image MI14 indicates a positional relationship between the medium image reference position RRP14 and the read image reference position MRP1.
  • the value x b3 of the X coordinate of the medium image reference position RRP14 is the same as the value x b3 of the X coordinate of the medium image reference position RRP14 of the medium image MI14 after the erecting correction
  • the value y b0 of the Y coordinate of the medium image reference position RRP14 is the same as the value y b0 of the Y coordinate of the medium image reference position RRP131 in FIG. 12C.
  • the image processing unit 15 rearranges the medium image MI14 in the read image RI14 based on the vector Vec b3 set in the medium image MI14 (step S135). That is, the image processing unit 15 rearranges the medium image MI14 such that the medium image reference position RRP14 coincides with the coordinates (x b3, y b0) of the end point of the vector Vec b3.
  • the image processing unit 15 causes the storage unit 13 to store the read image RI14F after the rearrangement of the medium image MI14. Due to the rearrangement of the medium image MI14, the arrangement of the medium image MI14 after the rearrangement is changed from the arrangement of the medium image MI14 before the rearrangement (that is, the arrangement of the medium image MI14 after the erecting correction).
  • the image processing unit 15 rearranges the medium image MI14 such that the medium image reference position RRP14 coincides with the coordinates (x b3, y b0) of the end point of the vector Vec b3 (FIG. 13C).
  • the value x b3 of the X coordinate of the medium image reference position RRP14 is the same as the value x b3 of the X coordinate of the medium image reference position RRP14 in the medium image MI14 after the erecting correction, and the value y b0 of the Y coordinate of the medium image reference position RRP14 is the same as the value y b0 of the Y coordinate of the medium image reference position RRP131 in FIG. 12C.
  • the medium image MI14 and the medium image MI131 are a pair of the cover image and the facing image. One medium image and the other medium image are determined to be a pair of the cover image and the facing image (step S189) when all of the following hold:
  • the difference between the vertical length of the one medium image and the vertical length of the other medium image is less than the threshold value TH2 (step S179: Yes)
  • the difference between the horizontal length of the one medium image and the horizontal length of the other medium image is equal to or larger than the threshold value TH3 (step S181: No)
  • the horizontal length of the other medium image is within a predetermined range of the horizontal length of the one medium image (step S187: Yes).
  • When the image processing unit 15 determines that the difference between the vertical length of the medium image MI131 and the vertical length of the medium image MI14 is less than the threshold TH2, that the difference between the horizontal length of the medium image MI131 and the horizontal length of the medium image MI14 is equal to or larger than the threshold TH3, and that the horizontal length of the medium image MI131 is within the predetermined range of the horizontal length of the medium image MI14, it rearranges the medium image MI14 in the vertical direction but does not change the arrangement of the medium image MI14 in the horizontal direction.
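The pair determination above can be summarized as a small decision function. This is a hedged sketch only: the threshold values TH2 and TH3, the tolerance `tol`, and the reading of "within a predetermined range" (a half-width match between a cover and a two-page spread) are assumptions for illustration, not values from the embodiment.

```python
TH2 = 10   # max vertical-length difference for a pair (hypothetical, pixels)
TH3 = 50   # horizontal-length difference separating equal-width pairs from
           # a cover/facing-image pair (hypothetical, pixels)

def classify_pair(h1, w1, h2, w2, tol=20):
    """Classify two medium images with heights h1/h2 and widths w1/w2."""
    if abs(h1 - h2) >= TH2:
        return None                   # heights differ too much (S179: No)
    if abs(w1 - w2) < TH3:
        return "facing-pages"         # widths nearly equal (S181: Yes)
    # Widths differ greatly: check whether the narrow image is roughly half
    # as wide as the wide one (a cover next to a spread), one hypothetical
    # reading of "within a predetermined range" (S187).
    if abs(min(w1, w2) - max(w1, w2) / 2) <= tol:
        return "cover-and-facing"     # step S189
    return None                       # no pair (S191)
```

Only the vertical position of a cover-and-facing pair is aligned afterward, which matches the behavior described for MI14 above.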
  • Extraction area setting (steps S137 to S141)
  • the image processing unit 15 acquires the read images RI11F, RI12F, RI13F, and RI14F from the storage unit 13.
  • the read image RI11F includes the medium image MI11 after the erecting correction (FIG. 14A), and the read image RI12F includes the medium image MI121 after the erecting correction and the arrangement change, and the medium image MI122 after the erecting correction (FIG. 14B).
  • the read image RI13F includes a medium image MI131 after the erecting correction and the arrangement change, and a medium image MI132 after the erecting correction and the arrangement change (FIG. 14C).
  • the read image RI14F includes the medium image MI14 after the erecting correction and the arrangement change (FIG. 14D).
  • the image processing unit 15 sets, based on the medium images MI11 and MI122 after the erecting correction and the medium images MI121, MI131, MI132, and MI14 after the erecting correction and the arrangement change, the extraction area EA11 having the same position and the same size for each of the read images RI11F, RI12F, RI13F, and RI14F (step S139). That is, the vertical length DstH1 of the rectangular extraction area EA11 is the same in all of the read images RI11F, RI12F, RI13F, and RI14F.
  • the horizontal length DstW1 of the rectangular extraction area EA11 is the same in all of the read images RI11F, RI12F, RI13F, and RI14F.
  • the vector VecL d0 starting from the read image reference position MRP1 and ending at the upper left vertex of the rectangular extraction area EA11 is the same in all of the read images RI11F, RI12F, RI13F, and RI14F.
  • the vector VecR d0 starting from the read image reference position MRP1 and ending at the upper right vertex of the rectangular extraction area EA11 is the same in all of the read images RI11F, RI12F, RI13F, and RI14F.
  • a medium image MI11 exists in the extraction area EA11 set in the read image RI11F (FIG. 14A), and medium images MI121 and MI122 exist in the extraction area EA11 set in the read image RI12F (FIG. 14B).
  • Medium images MI131 and MI132 exist in the extraction area EA11 set in the read image RI13F (FIG. 14C), and a medium image MI14 exists in the extraction area EA11 set in the read image RI14F (FIG. 14D).
  • the image processing unit 15 extracts the medium image MI11 from the read image RI11F and extracts the medium images MI121 and MI122 from the read image RI12F according to the same extraction area EA11 set in step S139. Then, the medium images MI131 and MI132 are extracted from the read image RI13F, and the medium image MI14 is extracted from the read image RI14F (step S141).
  • FIGS. 16A to 16C, FIG. 17, FIGS. 18A to 18C, and FIGS. 19A to 19C show examples of setting an extraction area for a read image after rearrangement of a medium image.
  • the read image RI1F includes the rearranged medium image MI1 having the circumscribed rectangle BR1 (FIG. 16A), and the read image RI2F includes the rearranged medium image MI2 having the circumscribed rectangle BR2 (FIG. 16B).
  • the read image RI3F includes a rearranged medium image MI3 having the circumscribed rectangle BR3 and a rearranged medium image MI4 having the circumscribed rectangle BR4 (FIG. 16C).
  • the read image reference position MRP1 is set for all of the read images RI1F, RI2F, and RI3F (FIGS. 16A to 16C).
  • the image processing unit 15 sets the extraction area EA1 based on a circumscribed rectangle including all of the rearranged medium images MI1, MI2, MI3, and MI4 in the read images RI1F, RI2F, and RI3F. For example, as shown in FIG. 17, the image processing unit 15 sets a rectangle including all of the circumscribed rectangles BR1, BR2, BR3, and BR4 as the extraction area EA1. Further, as shown in FIG. 17, the image processing unit 15 sets, for the extraction area EA1, the vector VEA1a starting from the read image reference position MRP1 and ending at the upper left vertex of the extraction area EA1, and the vector VEA1b starting from the read image reference position MRP1 and ending at the upper right vertex of the extraction area EA1.
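The extraction area EA1 is thus the union bounding box of all circumscribed rectangles, pinned to each read image by the two corner vectors from MRP1. A sketch with hypothetical coordinates (all rectangle values and the reference position are illustrative):

```python
def union_rect(rects):
    """Smallest rectangle (left, top, right, bottom) containing all rects."""
    lefts, tops, rights, bottoms = zip(*rects)
    return (min(lefts), min(tops), max(rights), max(bottoms))

def corner_vectors(mrp, area):
    """Vectors from the read image reference position to the upper corners
    of the area (VEA1a to the upper-left, VEA1b to the upper-right)."""
    left, top, right, _ = area
    return (left - mrp[0], top - mrp[1]), (right - mrp[0], top - mrp[1])

# Circumscribed rectangles BR1..BR4 of the rearranged medium images
# (hypothetical values):
brs = [(100, 50, 400, 500), (120, 60, 410, 505),
       (90, 55, 700, 498), (95, 52, 705, 502)]
ea1 = union_rect(brs)
vea1a, vea1b = corner_vectors((350, 0), ea1)
```

Because the two vectors are the same in every read image, placing EA1 by them yields an extraction area of identical position and size across the whole series.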
  • the image processing unit 15 sets the extraction area EA1, in which the vectors VEA1a and VEA1b are set, in each of the read images RI1F, RI2F, and RI3F (FIGS. 18A to 18C). Therefore, the medium image MI1 exists in the extraction area EA1 set in the read image RI1F (FIG. 18A), the medium image MI2 exists in the extraction area EA1 set in the read image RI2F (FIG. 18B), and the medium images MI3 and MI4 exist in the extraction area EA1 set in the read image RI3F (FIG. 18C).
  • the extraction area EA1 set in each of the images RI1F, RI2F, and RI3F has the same position and size in all of the images RI1F, RI2F, and RI3F.
  • the image processing unit 15 extracts the medium image MI1 from the read image RI1F and the medium image MI2 from the read image RI2F according to the extraction area EA1 having the same position and size.
  • the medium images MI3 and MI4 are extracted from the read image RI3F.
  • FIG. 20 illustrates an example of a series of read images of the old book (the front cover image, the spread 1-2 image, the spread 3-4 image, ..., the back cover image) and the corresponding images after processing.
  • the image processing device 10 includes the storage unit 13 and the image processing unit 15.
  • the storage unit 13 stores a series of a plurality of read images each including a medium image.
  • the image processing unit 15 sets a medium image reference position in an area where a medium image exists (hereinafter, may be referred to as a “medium image area”) in each of the plurality of read images, and Set the read image reference position. Further, the image processing unit 15 rearranges the medium image in the read image based on the positional relationship between the medium image reference position and the read image reference position. Further, the image processing unit 15 sets the same extraction region in the plurality of read images based on the rearranged medium image. Then, the image processing unit 15 extracts a medium image from each of the plurality of read images according to the set extraction region.
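The last step of the summary above, cropping the same extraction region from every read image, can be sketched as a plain slice. The pixel representation (a 2-D list of pixels) and the area convention (right/bottom exclusive) are assumptions for illustration:

```python
def extract(read_image, area):
    """Crop extraction region `area` = (left, top, right, bottom) from a
    read image given as a list of pixel rows (right/bottom exclusive)."""
    left, top, right, bottom = area
    return [row[left:right] for row in read_image[top:bottom]]

# Applying the identical area to each read image yields crops of identical
# size, so the extracted medium images line up across the series.
img = [[(x, y) for x in range(6)] for y in range(4)]
crop = extract(img, (1, 1, 4, 3))
```

Since every read image is cropped with the same region, the output pages share a common geometry, which is the point of setting a single extraction region for the whole series.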
  • the storage unit 13 stores at least two read images.
  • the image processing unit 15 rearranges the medium image that is included in the other of the two read images and forms a pair with the medium image included in one of the two read images, such that the positional relationship of the paired medium image with respect to the read image reference position becomes the same as that of the medium image included in the one read image.
  • the image processing unit 15 sets an extraction region based on a rectangle including a circumscribed rectangle of all rearranged medium images in a plurality of read images.
  • an appropriate extraction region can be set particularly when the medium image is a two-page spread image. Further, when the read images include different types of medium images such as a double-page spread image and a color chart image, an appropriate extraction region can be set.
  • the storage unit 13 stores at least two read images.
  • When the difference between the vertical length of the medium image included in one of the two read images and the vertical length of the medium image included in the other of the two read images is less than the threshold value TH2, the difference between the horizontal length of the medium image included in the one read image and the horizontal length of the medium image included in the other read image is equal to or larger than the threshold value TH3, and the horizontal length of the medium image included in the other read image is within a predetermined range of the horizontal length of the medium image included in the one read image, the image processing unit 15, when rearranging the medium image included in the other read image, changes its arrangement in the vertical direction but does not change its arrangement in the horizontal direction.
  • the image processing unit 15 outputs a warning when there is no paired medium image.
  • the operator can know that there is no paired medium image. Therefore, the operator can appropriately select whether or not to continue the processing when there is no paired medium image.
  • the first embodiment has been described above.
  • Example 2: In the second embodiment, as in the first embodiment, a case where each page of one old book is read by a scanner to convert the book into electronic data will be described as an example. However, in the second embodiment, as shown in FIG. 21, a case will be described as an example in which the operator cuts off the spine of a right-turning old book BK to separate the pages one by one, and the separated pages are read continuously by a scanner with an ADF (Auto Document Feeder).
  • FIG. 21 is a diagram illustrating an example of the old book according to the second embodiment. A plurality of originals generated by cutting one old book BK are placed together on a chute by the operator. The plurality of originals placed on the chute are loaded into the scanner one by one by the ADF, and the scanner reads the front (front side) image and the back (back side) image of each original.
  • FIGS. 22, 23, 24, 25, and 37 are diagrams illustrating an example of a processing flow of the image processing apparatus according to the second embodiment.
  • FIGS. 26A to 26C, 27A to 27C, 28A to 28C, 29A to 29C, 30A to 30D, 31A to 31D, 32, 33A to 33C, 34, 35A to 35C, 36A to 36C, and 38A to 38C are diagrams for explaining an operation example of the image processing apparatus according to the second embodiment.
  • The processing flow illustrated in FIG. 22 is started when, for example, a processing start button (not shown) of the image processing apparatus 10 is pressed by the operator.
  • In step S101 of FIG. 22, the control unit 11 selects a read image to be processed by the image processing unit 15 from a series of a plurality of read images stored in the storage unit 13, and outputs the selected read image to the image processing unit 15.
  • Blank images are removed in advance and are not stored in the storage unit 13.
  • For example, in step S101 of the third processing loop of steps S101 to S109, the image of the third page is acquired from the storage unit 13. Further, in step S101 of the last processing loop of steps S101 to S109, the control unit 11 acquires the image of the back cover from the storage unit 13.
  • For example, a series of images consisting of the front cover image, the first page image, the second page image, ..., and the back cover image is stored in the storage unit 13 as one file, and the control unit 11 sequentially selects, from the file to be processed, the front cover image, the first page image, the second page image, ..., and the back cover image.
  • the file to be processed is arbitrarily selected by an operator, for example.
  • steps S103 to S105 are the same as those in the first embodiment (FIG. 2), and thus description thereof will be omitted.
  • step S201 in FIG. 22 the image processing unit 15 performs a medium image reference position setting process.
  • FIG. 23 shows an example of a processing flow of the medium image reference position setting processing.
  • the processing in steps S151, S153, S155, and S161 in FIG. 23 is the same as that shown in the first embodiment (FIGS. 3 and 5), and a description thereof will not be repeated.
  • step S211 in FIG. 23 the image processing unit 15 performs a cut position detection process.
  • FIGS. 24 and 25 show an example of the processing flow of the cut location detection processing.
  • FIG. 24 shows a first processing example of the cut position detection processing
  • FIG. 25 shows a second processing example of the cut position detection processing.
  • the cut position detection processing will be described separately for the first processing example and the second processing example.
  • In step S221 in FIG. 24, the image processing unit 15 determines whether the series of read images sequentially selected in step S101 are left-turn images. Whether the old book to be converted into electronic data is a left-turning book or a right-turning book is instructed to the image processing apparatus 10 by the operator in advance, and the instruction result of "left-turning" or "right-turning" is stored in the storage unit 13 in advance. When the instruction result stored in the storage unit 13 is "left-turning", the image processing unit 15 determines that the series of read images are left-turn images.
  • If the series of read images is a left-turn image (step S221: Yes), the process proceeds to step S223; if the series of read images is a right-turn image (step S221: No), the process proceeds to step S229.
  • In step S223, the image processing unit 15 determines whether the read image selected in step S101 is an image of an odd page. If the read image selected in step S101 is an odd page image (step S223: Yes), the process proceeds to step S225; if it is an even page image (step S223: No), the process proceeds to step S227.
  • In step S229, the image processing unit 15 determines whether the read image selected in step S101 is an image of an odd page. If the read image selected in step S101 is an odd page image (step S229: Yes), the process proceeds to step S227; if it is an even page image (step S229: No), the process proceeds to step S225.
  • step S225 the image processing unit 15 determines that the cut portion in the medium image (hereinafter, may be referred to as “medium cut portion”) exists on the left side of the medium image.
  • step S227 the image processing unit 15 determines that the medium cutting portion exists on the right side of the medium image.
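The first processing example (steps S221 to S229) reduces to a parity rule on the page number and the book's turning direction. A sketch; the boolean/string interface is an assumption for illustration:

```python
def cut_side(left_turning, page_number):
    """Side of the medium image on which the cut (spine) edge lies,
    following steps S221-S229: turning direction plus page parity."""
    odd = page_number % 2 == 1
    if left_turning:
        return "left" if odd else "right"    # S223 -> S225 / S227
    return "right" if odd else "left"        # S229 -> S227 / S225
```

For a left-turning book the odd pages carry the spine on their left edge, and the relationship inverts for a right-turning book, which is exactly the branch structure of FIG. 24.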
  • step S231 in FIG. 25 the image processing unit 15 calculates the corner angle LA on the left side of the medium image included in the read image selected in step S101. For example, the image processing unit 15 calculates the angle of the upper left corner of the medium image as the corner angle LA.
  • step S233 the image processing unit 15 calculates a difference angle DLA which is an absolute value of a difference between the corner angle LA and a right angle (90 °).
  • step S235 the image processing unit 15 calculates a corner angle RA on the right side of the medium image included in the read image selected in step S101. For example, the image processing unit 15 calculates the angle of the upper right corner of the medium image as the corner angle RA.
  • step S237 the image processing unit 15 calculates a difference angle DRA that is an absolute value of a difference between the corner angle RA and a right angle (90 °).
  • In step S239, the image processing unit 15 determines whether or not the difference angle DLA is less than the difference angle DRA. If the difference angle DLA is less than the difference angle DRA (step S239: Yes), the process proceeds to step S227. If the difference angle DLA is equal to or larger than the difference angle DRA (step S239: No), the process proceeds to step S225.
  • step S227 the image processing unit 15 determines that the medium cutting portion exists on the right side of the medium image.
  • step S225 the image processing unit 15 determines that the medium cutting portion exists on the left side of the medium image.
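The second processing example (steps S231 to S239) instead compares how far each upper corner deviates from a right angle: the cleaner corner is assumed to be on the uncut side. A sketch with illustrative angles in degrees:

```python
def cut_side_from_corners(left_angle, right_angle):
    """Side of the cut judged from the upper corner angles (steps S231-S239).
    The corner nearer 90 degrees is on the uncut side, so the cut is on
    the opposite side."""
    dla = abs(left_angle - 90.0)     # difference angle DLA (S233)
    dra = abs(right_angle - 90.0)    # difference angle DRA (S237)
    if dla < dra:                    # left corner is cleaner (S239: Yes)
        return "right"               # -> step S227
    return "left"                    # -> step S225
```

Unlike the first example, this variant needs no prior knowledge of the turning direction; it infers the cut side from the geometry of the scanned page itself.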
  • step S213 the image processing unit 15 determines whether or not the medium cutting portion exists on the left side of the medium image. If the medium cutting portion exists on the left side of the medium image (that is, if the process of step S225 has been performed) (step S213: Yes), the process proceeds to step S215. On the other hand, if the medium cut position exists on the right side of the medium image (that is, if the process of step S227 is performed) (step S213: No), the process proceeds to step S217.
  • step S215 the image processing unit 15 sets the upper right corner point of the medium image included in the read image as the medium image reference position.
  • step S217 the image processing unit 15 sets the upper left corner point of the medium image included in the read image as the medium image reference position.
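Steps S213 to S217 then pick, as the medium image reference position, the upper corner on the side opposite the cut. A sketch; the (left, top, right, bottom) rectangle convention and the function name are assumptions:

```python
def medium_ref_corner(bbox, cut):
    """Medium image reference position for a circumscribed rectangle `bbox`:
    cut on the left -> upper-right corner (step S215),
    cut on the right -> upper-left corner (step S217)."""
    left, top, right, _ = bbox
    return (right, top) if cut == "left" else (left, top)
```

Anchoring on the corner away from the cut makes the reference position insensitive to the ragged cut edge, so pages align on their intact outer edges.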
  • step S109 the control unit 11 determines whether or not the processing in steps S101 to S201 has been completed for all of a series of a plurality of read images stored in the storage unit 13.
  • step S109: Yes the processing proceeds to step S111.
  • step S109: No the process proceeds to step S101.
  • For example, when a series of images consisting of the front cover image, the first page image, the second page image, ..., and the back cover image is stored in the storage unit 13 as the read images of one old book, the process proceeds to step S111 when the processes of steps S101 to S201 are completed for the back cover image.
  • Since the processes in steps S111 to S133 in FIG. 22 are the same as those shown in the first embodiment (FIG. 2), description thereof will be omitted. However, since a spread image and a color chart image are not included in the read images of the second embodiment, when the process of step S183 or S189 is performed in the target image detection process (FIG. 7) performed in step S121 of the second embodiment, it is simply determined that the current medium image and the previous medium image are images that form a pair.
  • In step S123, the image processing unit 15 determines whether or not a paired image exists based on the results of the processing in steps S171 to S191 (FIG. 7).
  • When the process in step S183 or step S189 in FIG. 7 is performed, it is determined that the paired image exists; when the process in step S191 is performed, it is determined that the paired image does not exist. If the paired image exists (step S123: Yes), the process proceeds to step S203. If the paired image does not exist (step S123: No), the process proceeds to step S131.
  • step S203 the image processing unit 15 performs a position setting process D. Details of the position setting processing D will be described later. After the process in step S203, the process proceeds to step S135.
  • step S135 the image processing unit 15 rearranges the medium image in the read image based on the results of the position setting processes C and D (steps S119 and S203). Details of the processing in step S135 will be described later.
  • In step S137, the control unit 11 determines whether the processes in steps S113 to S135 and S203 have been completed for all of the series of a plurality of read images stored in the storage unit 13. If the processes of steps S113 to S135 and S203 have been completed for all of the series of read images stored in the storage unit 13 (step S137: Yes), the process proceeds to step S205. On the other hand, if the series of read images stored in the storage unit 13 still includes a read image for which the processes of steps S113 to S135 and S203 have not been performed (step S137: No), the process returns to step S113.
  • step S205 the image processing unit 15 sets an extraction area for the read image. Details of the processing in step S205 will be described later.
  • step S141 the image processing unit 15 extracts a medium image based on the extraction region set in step S205. Details of the processing in step S141 will be described later.
  • the image processing unit 15 sets the X coordinate in the right direction and the Y coordinate in the downward direction on all the rectangular read images, with the upper left vertex of the read image as the origin. That is, in the read image, the X coordinate is the coordinate in the horizontal direction, and the Y coordinate is the coordinate in the vertical direction.
  • the read images RI21, RI22, RI23, and RI24 correspond to a series of a plurality of read images of one old book.
  • the read image RI21 includes a medium image MI21 which is an image of the front cover, and the medium cut portion CP in the medium image MI21 exists on the right side of the medium image MI21 (FIG. 26A). That is, the medium image MI21 is arranged in the read image RI21.
  • the read image RI22 includes the medium image MI22 which is the image of the first page, and the medium cut portion CP in the medium image MI22 exists on the left side of the medium image MI22 (FIG. 27A).
  • the medium image MI22 is arranged in the read image RI22.
  • the read image RI23 includes the medium image MI23 which is the image of the second page, and the medium cut portion CP in the medium image MI23 exists on the right side of the medium image MI23 (FIG. 28A). That is, the medium image MI23 is arranged on the read image RI23.
  • the read image RI24 includes the medium image MI24 that is the image of the back cover, and the medium cutting point CP in the medium image MI24 is on the left side of the medium image MI24 (FIG. 29A). That is, the medium image MI24 is arranged on the read image RI24.
  • the image processing unit 15 operates on each of the read images RI21, RI22, RI23, and RI24 as shown in FIGS. 26B, 27B, 28B, and 29B.
  • the image processing unit 15 detects the circumscribed rectangle BR21 of the medium image MI21, and calculates the center of gravity GB21 of the detected circumscribed rectangle BR21. Further, since the medium cutting portion CP exists on the right side of the medium image MI21, the image processing unit 15 sets the upper left vertex of the circumscribed rectangle BR21 as the medium image reference position RRP21.
  • the image processing unit 15 detects the circumscribed rectangle BR22 of the medium image MI22, and calculates the center of gravity GB22 of the detected circumscribed rectangle BR22. Further, since the medium cutting portion CP exists on the left side of the medium image MI22, the image processing unit 15 sets the upper right vertex of the circumscribed rectangle BR22 as the medium image reference position RRP22.
  • the image processing unit 15 detects the circumscribed rectangle BR23 of the medium image MI23, and calculates the center of gravity GB23 of the detected circumscribed rectangle BR23. Further, since the medium cutting portion CP exists on the right side of the medium image MI23, the image processing unit 15 sets the upper left vertex of the circumscribed rectangle BR23 as the medium image reference position RRP23.
  • the image processing unit 15 detects the circumscribed rectangle BR24 of the medium image MI24, and calculates the center of gravity GB24 of the detected circumscribed rectangle BR24. Further, since the medium cutting portion CP exists on the left side of the medium image MI24, the image processing unit 15 sets the upper right vertex of the circumscribed rectangle BR24 as the medium image reference position RRP24.
  • the circumscribed rectangle BR21 indicates the area where the medium image MI21 exists in the read image RI21
  • the circumscribed rectangle BR22 indicates the area where the medium image MI22 exists in the read image RI22
  • the circumscribed rectangle BR23 indicates the area where the medium image MI23 exists in the read image RI23
  • the circumscribed rectangle BR24 indicates the area where the medium image MI24 exists in the read image RI24.
  • the image processing unit 15 sets the medium image reference positions RRP21, RRP22, RRP23, and RRP24 in the areas where the medium images MI21, MI22, MI23, and MI24 exist in the read images RI21, RI22, RI23, and RI24, respectively.
  • Since the read image RI24 is the last image in the series of a plurality of read images of one old book, when the processes of steps S101 to S105 and S201 are completed for the read image RI24 (step S109: Yes), the process proceeds to step S111.
  • the image processing unit 15 operates as shown in FIGS. 26B, 27B, 28B, and 29B for each of the read images RI21, RI22, RI23, and RI24 in accordance with the process of step S111 (FIG. 2). That is, as shown in FIGS. 26B, 27B, 28B, and 29B, the image processing unit 15 sets the middle point of the upper side of each of the rectangular read images RI21, RI22, RI23, and RI24, which have the same vertical length and the same horizontal length, as the read image reference position MRP2.
  • the horizontal length of the rectangular read images RI21, RI22, RI23, RI24 is expressed as SrcW
  • the coordinates of the read image reference position MRP2 are expressed as (SrcW/2, 0).
  • the image processing unit 15 sets the read image reference position MRP2 in an area other than the areas where the medium images MI21, MI22, MI23, and MI24 exist in each of the read images RI21, RI22, RI23, and RI24.
  • the image processing unit 15 performs the position setting process C on the medium image MI21 (step S119).
  • the image processing unit 15 sets the vector VecL b0 starting at the read image reference position MRP2 (coordinates (SrcW/2, 0)) and ending at the medium image reference position RRP21 (coordinates (x b0, y b0)) of the medium image MI21 after the erecting correction as the position of the medium image MI21 with respect to the read image RI21. That is, the vector VecL b0 set in the medium image MI21 indicates the positional relationship between the medium image reference position RRP21 and the read image reference position MRP2.
  • as shown in FIG. 26C, the image processing unit 15 rearranges the medium image MI21 in the read image RI21 based on the vector VecL b0 set in the medium image MI21 (step S135). That is, the image processing unit 15 rearranges the medium image MI21 so that the medium image reference position RRP21 coincides with the coordinates (x b0, y b0) of the end point of the vector VecL b0.
  • the image processing unit 15 causes the storage unit 13 to store the read image RI21F after the rearrangement of the medium image MI21.
  • the arrangement of the medium image MI21 after the rearrangement is the same as the arrangement of the medium image MI21 before the rearrangement (that is, the arrangement of the medium image MI21 after the erecting correction).
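The position setting process C described above reduces to computing a vector from the read image reference position to the medium image reference position. A minimal sketch, assuming the coordinate conventions of the bullets above (MRP2 at the middle of the top edge); the function name is illustrative:

```python
# Sketch of the position setting process C: the position of a medium image is
# recorded as the vector from the read image reference position
# MRP2 = (SrcW/2, 0) to the medium image reference position RRP.
# Names are illustrative, not from the patent.

def position_vector(src_w, rrp):
    """Return the vector (dx, dy) from MRP2 = (SrcW/2, 0) to RRP = (x, y)."""
    mrp_x, mrp_y = src_w / 2, 0
    return (rrp[0] - mrp_x, rrp[1] - mrp_y)
```

Because the vector is anchored at a position that is the same in every read image, it can be compared and reused across pages of the same book.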
  • since the circumscribed rectangle BR22 of the medium image MI22 included in the read image RI22 selected by the control unit 11 in the process of FIG. 22 has no inclination (FIG. 27B), the medium image MI22 is not rotated even if the erecting correction is performed (step S115).
  • the image processing unit 15 performs the paired image detection process on the read image RI22 (step S121). In this paired image detection process, it is determined that the medium image MI21 and the medium image MI22 are paired images.
  • the image processing unit 15 performs the position setting processing D on the medium image MI22 (Step S203).
  • the image processing unit 15 sets the vector VecR b0 starting at the read image reference position MRP2 (coordinates (SrcW/2, 0)) and ending at the medium image reference position RRP22 (coordinates (x b1, y b0)) as the position of the medium image MI22 with respect to the read image RI22.
  • the vector VecR b0 set in the medium image MI22 indicates a positional relationship between the medium image reference position RRP22 and the read image reference position MRP2.
  • the vector VecR b0 set in the medium image MI22 is a vector symmetrical, in the X coordinate direction with respect to the read image reference position MRP2, to the vector VecL b0 set in the medium image MI21. That is, the Y coordinate y b0 of the end point of the vector VecR b0 is the same as the Y coordinate y b0 of the end point of the vector VecL b0.
  • the distance from the read image reference position MRP2 to the X coordinate x b1 of the end point of the vector VecR b0 is the same as the distance from the read image reference position MRP2 to the X coordinate x b0 of the end point of the vector VecL b0.
  • the image processing unit 15 rearranges the medium image MI22 in the read image RI22 based on the vector VecR b0 set in the medium image MI22 (step S135). That is, the image processing unit 15 rearranges the medium image MI22 so that the medium image reference position RRP22 coincides with the coordinates (x b1, y b0) of the end point of the vector VecR b0.
  • the image processing unit 15 causes the storage unit 13 to store the read image RI22F after the rearrangement of the medium image MI22. Due to the rearrangement of the medium image MI22, the arrangement of the medium image MI22 after the rearrangement is changed from the arrangement of the medium image MI22 before the rearrangement (that is, the arrangement of the medium image MI22 after the erecting correction).
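The mirroring and relocation just described can be sketched in two steps: reflect the paired page's vector about MRP2 in the X direction, then translate the image so its reference position lands on the mirrored end point. A hedged sketch; all names are assumptions, not the patent's implementation:

```python
# Sketch of the position setting process D and the rearrangement of step S135:
# the vector VecR for a right-cut page is the X-mirror, about MRP2, of the
# vector VecL of its paired left-cut page; the medium image is then shifted so
# that its reference position coincides with the mirrored end point.
# Names are illustrative.

def mirror_vector_x(vec):
    """Mirror a (dx, dy) vector in the X coordinate direction about MRP2."""
    return (-vec[0], vec[1])

def relocation_offset(src_w, vec, rrp):
    """Translation (dx, dy) that moves the medium image so that its
    reference position RRP lands on the end point MRP2 + vec."""
    target = (src_w / 2 + vec[0], 0 + vec[1])
    return (target[0] - rrp[0], target[1] - rrp[1])
```

Applying `relocation_offset` to every pixel (or to the image as a whole) reproduces the "coincide with the end point of the vector" behavior of step S135.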
  • the image processing unit 15 rotates the circumscribed rectangle BR23 clockwise around the center of gravity GB23 in the read image RI23 (FIG. 28B) selected by the control unit 11 in the process of FIG. 22, so that the inclination of the medium image MI23 is corrected (step S115, FIG. 28C).
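The erecting correction above rotates the image about the centroid of its circumscribed rectangle. A minimal sketch of that rotation, assuming image coordinates with the origin at the top-left and y increasing downward (so a positive angle appears clockwise on screen); the helper name is an assumption:

```python
import math

# Sketch of the erecting (skew) correction of step S115: points of the medium
# image are rotated about the center of gravity of its circumscribed rectangle
# until the rectangle's inclination is removed. In a y-down image coordinate
# system, a positive angle with the standard rotation matrix appears clockwise.

def rotate_about(point, center, angle_rad):
    """Rotate `point` about `center` by `angle_rad` using the standard
    rotation matrix (appears clockwise on screen for y-down coordinates)."""
    dx, dy = point[0] - center[0], point[1] - center[1]
    cos_a, sin_a = math.cos(angle_rad), math.sin(angle_rad)
    return (center[0] + dx * cos_a - dy * sin_a,
            center[1] + dx * sin_a + dy * cos_a)
```

Rotating about the centroid keeps the medium image centered in place while its inclination is removed.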
  • the image processing unit 15 performs the paired image detection process on the read image RI23 (step S121). In this paired image detection process, it is determined that the medium image MI22 and the medium image MI23 are paired images.
  • since it is determined that the medium image MI22 and the medium image MI23 are paired images, the image processing unit 15 performs the position setting process D on the medium image MI23 (step S203).
  • the image processing unit 15 sets the vector VecL b0 starting at the read image reference position MRP2 (coordinates (SrcW/2, 0)) and ending at the medium image reference position RRP23 (coordinates (x b0, y b0)) as the position of the medium image MI23 with respect to the read image RI23.
  • the vector VecL b0 set in the medium image MI23 indicates a positional relationship between the medium image reference position RRP23 and the read image reference position MRP2.
  • the vector VecL b0 set in the medium image MI23 is the same vector as the vector VecL b0 set in the medium image MI21.
  • as shown in FIG. 28C, the image processing unit 15 rearranges the medium image MI23 in the read image RI23 based on the vector VecL b0 set in the medium image MI23 (step S135). That is, the image processing unit 15 rearranges the medium image MI23 so that the medium image reference position RRP23 coincides with the coordinates (x b0, y b0) of the end point of the vector VecL b0.
  • the image processing unit 15 causes the storage unit 13 to store the read image RI23F after the rearrangement of the medium image MI23. Due to the rearrangement of the medium image MI23, the arrangement of the medium image MI23 after the rearrangement is changed from the arrangement of the medium image MI23 before the rearrangement (that is, the arrangement of the medium image MI23 after the erecting correction).
  • the medium image MI21 and the medium image MI22 are paired images, and the medium image MI22 and the medium image MI23 are paired images. Therefore, the medium image MI21 and the medium image MI23 are paired images.
  • the vector VecL b0 set in the medium image MI23 is the same as the vector VecL b0 set in the medium image MI21.
  • the vector VecL b0 set in the medium image MI21 indicates the positional relationship between the medium image reference position RRP21 and the read image reference position MRP2, and the vector VecL b0 set in the medium image MI23 indicates the positional relationship between the medium image reference position RRP23 and the read image reference position MRP2.
  • in this way, the image processing unit 15 rearranges the medium image MI23, which is paired with the medium image MI21, so that the positional relationship of the medium image MI23 with respect to the read image reference position MRP2 becomes the same as that of the medium image MI21.
  • step S115 since the circumscribed rectangle BR24 of the medium image MI24 included in the read image RI24 selected by the control unit 11 in the process of step S113 (FIG. 22) has no inclination (FIG. 29B), the medium image MI24 is not rotated even if the erecting correction is performed (step S115).
  • the image processing unit 15 performs the paired image detection process on the read image RI24 (step S121). In this paired image detection process, it is determined that the medium image MI23 and the medium image MI24 are paired images.
  • the image processing unit 15 performs the position setting processing D on the medium image MI24 (Step S203).
  • the image processing unit 15 sets the vector VecR b0 starting at the read image reference position MRP2 (coordinates (SrcW/2, 0)) and ending at the medium image reference position RRP24 (coordinates (x b1, y b0)) as the position of the medium image MI24 with respect to the read image RI24.
  • the vector VecR b0 set in the medium image MI24 indicates a positional relationship between the medium image reference position RRP24 and the read image reference position MRP2.
  • the vector VecR b0 set in the medium image MI24 is the same vector as the vector VecR b0 set in the medium image MI22.
  • the image processing unit 15 rearranges the medium image MI24 in the read image RI24 based on the vector VecR b0 set in the medium image MI24 (step S135). That is, the image processing unit 15 rearranges the medium image MI24 so that the medium image reference position RRP24 coincides with the coordinates (x b1, y b0) of the end point of the vector VecR b0.
  • the image processing unit 15 causes the storage unit 13 to store the read image RI24F after the rearrangement of the medium image MI24. Due to the rearrangement of the medium image MI24, the arrangement of the medium image MI24 after the rearrangement is changed from the arrangement of the medium image MI24 before the rearrangement (that is, the arrangement of the medium image MI24 after the erecting correction).
  • the medium image MI22 and the medium image MI23 are paired images, and the medium image MI23 and the medium image MI24 are paired images. Therefore, the medium image MI22 and the medium image MI24 are paired images.
  • the vector VecR b0 set in the medium image MI22 is the same as the vector VecR b0 set in the medium image MI24.
  • the vector VecR b0 set in the medium image MI22 indicates the positional relationship between the medium image reference position RRP22 and the read image reference position MRP2, and the vector VecR b0 set in the medium image MI24 indicates the positional relationship between the medium image reference position RRP24 and the read image reference position MRP2.
  • in this way, the image processing unit 15 rearranges the medium image MI24, which is paired with the medium image MI22, so that the positional relationship of the medium image MI24 with respect to the read image reference position MRP2 becomes the same as that of the medium image MI22.
  • step S137 extraction area setting
  • the image processing unit 15 acquires the read images RI21F, RI22F, RI23F, and RI24F from the storage unit 13.
  • the read image RI21F includes the medium image MI21 (FIG. 30A)
  • the read image RI22F includes the medium image MI22 after the arrangement change (FIG. 30B)
  • the read image RI23F includes the medium image MI23 after the erecting correction and the rearrangement (FIG. 30C)
  • the read image RI24F includes the medium image MI24 after the rearrangement (FIG. 30D).
  • the image processing unit 15 sets the extraction areas EA21 and EA22, which have the same size, for the read images RI21F, RI22F, RI23F, and RI24F based on the medium images MI21, MI22, MI23, and MI24 (step S139). That is, the vertical length DstH2 of the rectangular extraction areas EA21 and EA22 is the same in all of the read images RI21F, RI22F, RI23F, and RI24F, and the horizontal length DstW2 of the rectangular extraction areas EA21 and EA22 is the same in all of the read images RI21F, RI22F, RI23F, and RI24F.
  • an extraction area EA21 having the same position and size is set for the read image RI21F and the read image RI23F.
  • an extraction area EA22 having the same position and size is set for the read image RI22F and the read image RI24F.
  • the extraction area EA21 and the extraction area EA22 have the same size and the same position in the Y coordinate direction on the read image, but have different positions in the X coordinate direction on the read image.
  • the distance from the read image reference position MRP2 to the upper left vertex of the extraction area EA21 is the same as the distance from the read image reference position MRP2 to the upper right vertex of the extraction area EA22.
  • a main part of the medium image MI21 exists in the extraction area EA21 set in the read image RI21F (FIG. 30A)
  • a main part of the medium image MI22 exists in the extraction area EA22 set in the read image RI22F (FIG. 30B)
  • a main part of the medium image MI23 exists in the extraction area EA21 set in the read image RI23F (FIG. 30C)
  • a main part of the medium image MI24 exists in the extraction area EA22 set in the read image RI24F (FIG. 30D).
  • the “main part” of the medium image is a part of the medium image in which at least a part of some information such as characters, photographs, figures, and tables exists.
  • in accordance with the extraction areas EA21 and EA22 set in step S139, the image processing unit 15 extracts the main part MI21C of the medium image MI21 from the read image RI21F, extracts the main part MI22C of the medium image MI22 from the read image RI22F, extracts the main part MI23C of the medium image MI23 from the read image RI23F, and extracts the main part MI24C of the medium image MI24 from the read image RI24F (step S141).
  • the image processing unit 15 causes the display unit 17 to display the main parts MI21C, MI22C, MI23C, and MI24C as shown in FIG. That is, the image processing unit 15 sequentially displays the main part MI21C of the front cover image, the main part MI22C of the image of the first page, the main part MI23C of the image of the second page, ..., and the main part MI24C of the back cover image on the display unit 17. Further, the image processing unit 15 causes the display unit 17 to display the main part MI21C of the front cover image and the main part MI24C of the back cover image as single pages, while the main part MI22C of the image of the first page and the main part MI23C of the image of the second page are joined at the cut portion and displayed on the display unit 17 as a double-page spread.
  • FIGS. 33A to 33C, 34, 35A to 35C, and 36A to 36C show examples of setting an extraction area for a read image after rearrangement of a medium image.
  • the read image RI6F includes the medium image MI6 after rearrangement (FIG. 33A)
  • the read image RI7F includes the medium image MI7 after rearrangement (FIG. 33B)
  • the read image RI8F includes the medium image MI8 after rearrangement (FIG. 33C).
  • the image processing unit 15 translates the right side of the circumscribed rectangle of the medium image MI6 leftward, thereby creating, in the read image RI6F, a rectangle IR6 not including the medium cutting portion CP.
  • the image processing unit 15 translates the left side of the circumscribed rectangle of the medium image MI7 rightward, thereby creating, in the read image RI7F, a rectangle IR7 not including the medium cutting portion CP.
  • the image processing unit 15 translates the right side of the circumscribed rectangle of the medium image MI8 leftward, thereby creating, in the read image RI8F, a rectangle IR8 not including the medium cutting portion CP.
  • the image processing unit 15 determines a rectangular area where all of the rectangles IR6, IR7, and IR8 overlap as the extraction area EA2. Further, the image processing unit 15 obtains a vector VL2 starting from the read image reference position MRP2 and ending at the upper left vertex of the rectangular extraction area EA2.
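Determining "the rectangular area where all of the rectangles overlap" is a rectangle intersection over IR6, IR7, and IR8, followed by computing the vector VL2 from MRP2 to the area's upper-left vertex. A sketch under those assumptions (function names are illustrative):

```python
from functools import reduce

# Sketch of determining the extraction area EA2 as the rectangular area where
# the rectangles IR6, IR7, IR8 (each excluding the medium cutting portion) all
# overlap, plus the vector VL2 from MRP2 to the area's upper-left vertex.
# Names are illustrative, not from the patent.

def intersect(r1, r2):
    """Intersection of two (left, top, right, bottom) rectangles, or None."""
    left, top = max(r1[0], r2[0]), max(r1[1], r2[1])
    right, bottom = min(r1[2], r2[2]), min(r1[3], r2[3])
    if left >= right or top >= bottom:
        return None
    return (left, top, right, bottom)

def extraction_area(rects):
    """Rectangular area where all rectangles overlap (None if disjoint)."""
    return reduce(lambda acc, r: intersect(acc, r) if acc else None,
                  rects[1:], rects[0])

def vector_vl(src_w, area):
    """Vector from MRP2 = (SrcW/2, 0) to the upper-left vertex of `area`."""
    return (area[0] - src_w / 2, area[1])
```

Because the intersection is contained in every IR rectangle, the resulting extraction area never includes a medium cutting portion in any of the read images.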
  • the image processing unit 15 sets, for the read image RI6F, the extraction area EA2 positioned by the vector VL2.
  • the image processing unit 15 sets, for the read image RI7F, the extraction area EA3 positioned by the vector VL3.
  • the size of the extraction area EA3 is the same as the size of the extraction area EA2.
  • the vector VL3 is a vector symmetrical to the vector VL2 in the X coordinate direction with respect to the read image reference position MRP2. That is, the position of the extraction area EA3 in the Y coordinate direction is the same as the position of the extraction area EA2 in the Y coordinate direction.
  • the distance from the read image reference position MRP2 to the upper right vertex of the extraction area EA3 is the same as the distance from the read image reference position MRP2 to the upper left vertex of the extraction area EA2.
  • the image processing unit 15 sets, for the read image RI8F, the extraction area EA2 positioned by the vector VL2. Therefore, the extraction area EA2 having the same position and size is set for the read image RI6F and the read image RI8F.
  • the main part of the medium image MI6 exists in the extraction area EA2 set in the read image RI6F
  • the main part of the medium image MI7 exists in the extraction area EA3 set in the read image RI7F
  • the main part of the medium image MI8 exists in the extraction area EA2 set in the read image RI8F.
  • in accordance with the extraction areas EA2 and EA3, the image processing unit 15 extracts the main part MI6C of the medium image MI6 from the read image RI6F, extracts the main part MI7C of the medium image MI7 from the read image RI7F, and extracts the main part MI8C of the medium image MI8 from the read image RI8F.
  • FIG. 37 shows an example of a process of creating a rectangle that does not include a medium cutting portion.
  • step S251 of FIG. 37 the image processing unit 15 detects inflection points in the medium contour figure.
  • step S253 the image processing unit 15 determines whether the medium cutting portion exists on the left side of the medium image. If the medium cutting portion exists on the left side of the medium image (step S253: Yes), the process proceeds to step S255. If the medium cutting portion exists on the right side of the medium image (step S253: No), the process proceeds to step S257.
  • step S255 the image processing unit 15 detects the position of the inflection point having the maximum X coordinate among the inflection points detected in step S251 (hereinafter sometimes referred to as the "X coordinate maximum value position").
  • step S257 the image processing unit 15 detects the position of the inflection point having the minimum X coordinate among the inflection points detected in step S251 (hereinafter sometimes referred to as the "X coordinate minimum value position").
  • step S259 when the medium cutting portion exists on the left side of the medium image, the image processing unit 15 translates the left side of the circumscribed rectangle of the medium image rightward to the X coordinate maximum value position detected in step S255, thereby creating, in the read image, a rectangle that does not include the medium cutting portion.
  • when the medium cutting portion exists on the right side of the medium image, the image processing unit 15 translates the right side of the circumscribed rectangle of the medium image leftward to the X coordinate minimum value position detected in step S257, thereby creating, in the read image, a rectangle that does not include the medium cutting portion.
  • the image processing unit 15 detects a plurality of inflection points IP in the medium contour figure of the medium image MI. Since the medium cutting portion CP exists on the left side of the medium image MI, the image processing unit 15 then detects, as shown in FIG. 38C, the position IPM of the inflection point IP having the maximum X coordinate among the plurality of inflection points IP. Then, as shown in FIG. 38C, the image processing unit 15 translates the left side of the circumscribed rectangle BR rightward to the position IPM (that is, the X coordinate maximum value position), thereby creating a rectangle IR that does not include the medium cutting portion CP.
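The shrinking rule of steps S251 to S259 can be sketched as follows, assuming the inflection points have already been detected and only their X coordinates matter. This is an illustrative sketch with assumed names, not the patent's implementation:

```python
# Sketch of steps S251-S259: given the circumscribed rectangle and the X
# coordinates of the inflection points of the medium contour figure, shrink
# the rectangle from its cut side so the medium cutting portion is excluded.
# Names are illustrative.

def rect_without_cut(bbox, inflection_xs, cut_side):
    """bbox = (left, top, right, bottom). When the cut is on the left, the
    left side is moved rightward to the maximum inflection X coordinate;
    when the cut is on the right, the right side is moved leftward to the
    minimum inflection X coordinate."""
    left, top, right, bottom = bbox
    if cut_side == "left":
        return (max(inflection_xs), top, right, bottom)  # steps S255, S259
    if cut_side == "right":
        return (left, top, min(inflection_xs), bottom)   # steps S257, S259
    raise ValueError("cut_side must be 'left' or 'right'")
```

Using the extreme inflection coordinate guarantees that the deepest notch of the cut contour lies outside the resulting rectangle.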
  • the image processing device 10 includes the storage unit 13 and the image processing unit 15.
  • the storage unit 13 stores a series of a plurality of read images each including a medium image.
  • the image processing unit 15 sets a medium image reference position in a medium image area and sets a read image reference position in an area other than the medium image area in each of the plurality of read images. Further, the image processing unit 15 rearranges the medium image in the read image based on the positional relationship between the medium image reference position and the read image reference position. Further, the image processing unit 15 sets the same extraction region in the plurality of read images based on the rearranged medium image. Then, the image processing unit 15 extracts a medium image from each of the plurality of read images according to the set extraction region.
  • the storage unit 13 stores at least two read images.
  • the image processing unit 15 rearranges the medium image included in the other of the two read images, which forms a pair with the medium image included in the one of the two read images, so that the positional relationships of the paired medium images become the same.
  • the image processing unit 15 sets an extraction area based on a rectangular area where circumscribed rectangles of all rearranged medium images in a plurality of read images overlap.
  • the image processing unit 15 detects a medium cutting position in a medium image, and sets a medium image reference position on the medium image based on the detected medium cutting position.
  • Example 3 The third embodiment differs from the second embodiment in the method of setting the medium image reference position and the method of setting the extraction area. Hereinafter, points different from the second embodiment will be described.
  • FIG. 39 is a diagram illustrating an example of a processing flow of the image processing apparatus according to the third embodiment.
  • FIGS. 9A to 9C are diagrams for explaining an operation example of the image processing apparatus according to the third embodiment.
  • FIG. 39 shows an example of the processing flow of the medium image reference position setting processing.
  • the processing in steps S151, S153, S155, and S161 in FIG. 39 is the same as that shown in the second embodiment (FIG. 23), and thus the description is omitted.
  • step S271 of FIG. 39 the image processing unit 15 sets the center of gravity of the circumscribed rectangle of the medium image as the medium image reference position.
  • the read images RI31, RI32, RI33, and RI34 correspond to a series of a plurality of read images of one old book.
  • the read image RI31 includes a medium image MI31 which is an image of the front cover, and the medium cutting portion CP in the medium image MI31 exists on the right side of the medium image MI31 (FIG. 40A). That is, the medium image MI31 is arranged in the read image RI31.
  • the read image RI32 includes the medium image MI32 which is the image of the first page, and the medium cutting portion CP in the medium image MI32 exists on the left side of the medium image MI32 (FIG. 41A). That is, the medium image MI32 is arranged in the read image RI32.
  • the read image RI33 includes the medium image MI33 that is the image of the second page, and the medium cut portion CP in the medium image MI33 exists on the right side of the medium image MI33 (FIG. 42A). That is, the medium image MI33 is arranged in the read image RI33.
  • the read image RI34 includes the medium image MI34 that is the image of the back cover, and the medium cutting portion CP in the medium image MI34 exists on the left side of the medium image MI34 (FIG. 43A). That is, the medium image MI34 is arranged in the read image RI34.
  • the image processing unit 15 detects the circumscribed rectangle BR31 of the medium image MI31 and calculates the center of gravity GB31 of the detected circumscribed rectangle BR31. The image processing unit 15 then sets the center of gravity GB31 as the medium image reference position of the medium image MI31.
  • the image processing unit 15 detects the circumscribed rectangle BR32 of the medium image MI32 and calculates the center of gravity GB32 of the detected circumscribed rectangle BR32. The image processing unit 15 then sets the center of gravity GB32 as the medium image reference position of the medium image MI32.
  • the image processing unit 15 detects the circumscribed rectangle BR33 of the medium image MI33 and calculates the center of gravity GB33 of the detected circumscribed rectangle BR33. The image processing unit 15 then sets the center of gravity GB33 as the medium image reference position of the medium image MI33.
  • the image processing unit 15 detects the circumscribed rectangle BR34 of the medium image MI34 and calculates the center of gravity GB34 of the detected circumscribed rectangle BR34, as shown in FIG. 43B. The image processing unit 15 then sets the center of gravity GB34 as the medium image reference position of the medium image MI34.
  • the image processing unit 15 sets a rectangular cropping area CA31 for the read image RI31 (FIG. 40B).
  • the cropping area CA31 includes the circumscribed rectangle BR31; the vertical length of the cropping area CA31 is obtained by adding a predetermined margin MG1 to the vertical length of the circumscribed rectangle BR31, and the horizontal length of the cropping area CA31 is obtained by adding a predetermined margin MG2 to the horizontal length of the circumscribed rectangle BR31.
  • the image processing unit 15 cuts out the cropped image RI31F from the read image RI31 according to the cropping area CA31 (FIG. 40C).
  • the image processing unit 15 sets the rectangular cropping area CA32 for the read image RI32 (FIG. 41B).
  • the cropping area CA32 includes the circumscribed rectangle BR32; the vertical length of the cropping area CA32 is obtained by adding a predetermined margin MG1 to the vertical length of the circumscribed rectangle BR32, and the horizontal length of the cropping area CA32 is obtained by adding a predetermined margin MG2 to the horizontal length of the circumscribed rectangle BR32.
  • the image processing unit 15 cuts out the cropped image RI32F from the read image RI32 according to the cropping area CA32 (FIG. 41C).
  • the image processing unit 15 sets a rectangular cropping area CA33 for the read image RI33 (FIG. 42B).
  • the cropping area CA33 includes the circumscribed rectangle BR33; the vertical length of the cropping area CA33 is obtained by adding a predetermined margin MG1 to the vertical length of the circumscribed rectangle BR33, and the horizontal length of the cropping area CA33 is obtained by adding a predetermined margin MG2 to the horizontal length of the circumscribed rectangle BR33.
  • the image processing unit 15 cuts out the cropped image RI33F after the erecting correction from the read image RI33 according to the cropping area CA33 (FIG. 42C).
  • the image processing unit 15 sets the rectangular cropping area CA34 for the read image RI34 (FIG. 43B).
  • the cropping area CA34 includes the circumscribed rectangle BR34; the vertical length of the cropping area CA34 is obtained by adding a predetermined margin MG1 to the vertical length of the circumscribed rectangle BR34, and the horizontal length of the cropping area CA34 is obtained by adding a predetermined margin MG2 to the horizontal length of the circumscribed rectangle BR34.
  • the image processing unit 15 cuts out the cropped image RI34F from the read image RI34 according to the cropping area CA34 (FIG. 43C).
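The cropping areas above all follow one rule: the circumscribed rectangle enlarged by a vertical margin MG1 and a horizontal margin MG2. A minimal sketch; the even split of each margin on the two sides is an assumption, as are the names:

```python
# Sketch of the cropping-area rule used above: the cropping area contains the
# circumscribed rectangle, with vertical length (height + MG1) and horizontal
# length (width + MG2). Splitting each margin evenly on both sides is an
# assumption, not stated in the text.

def cropping_area(bbox, mg1, mg2):
    """bbox = (left, top, right, bottom) of the circumscribed rectangle.
    Returns the enlarged (left, top, right, bottom) cropping area."""
    left, top, right, bottom = bbox
    return (left - mg2 / 2, top - mg1 / 2, right + mg2 / 2, bottom + mg1 / 2)
```

The margins keep the whole medium image, including its contour, inside the cropped image even if the circumscribed rectangle is slightly imprecise.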
  • the image processing unit 15 sets an extraction area EA31 having the same position and the same size for the cropping images RI31F, RI32F, RI33F, and RI34F.
  • the vertical length DstH3 of the rectangular extraction area EA31 is the same in all of the cropping images RI31F, RI32F, RI33F, and RI34F.
  • the horizontal length DstW3 of the rectangular extraction area EA31 is the same in all of the cropping images RI31F, RI32F, RI33F, and RI34F.
  • a main part of the medium image MI31 exists in the extraction area EA31 set in the cropping image RI31F (FIG. 44A), a main part of the medium image MI32 exists in the extraction area EA31 set in the cropping image RI32F (FIG. 44B), a main part of the medium image MI33 exists in the extraction area EA31 set in the cropping image RI33F (FIG. 44C), and a main part of the medium image MI34 exists in the extraction area EA31 set in the cropping image RI34F (FIG. 44D).
  • the image processing unit 15 extracts the main part MI31C of the medium image MI31 from the cropping image RI31F and extracts the main part MI32C of the medium image MI32 from the cropping image RI32F according to the extraction area EA31. Then, the main part MI33C of the medium image MI33 is extracted from the cropping image RI33F, and the main part MI34C of the medium image MI34 is extracted from the cropping image RI34F.
  • the image processing unit 15 causes the display unit 17 to display the main parts MI31C, MI32C, MI33C, and MI34C as shown in FIG. That is, the image processing unit 15 sequentially displays the main part MI31C of the front cover image, the main part MI32C of the image of the first page, the main part MI33C of the image of the second page, ..., and the main part MI34C of the back cover image on the display unit 17. Further, the image processing unit 15 causes the display unit 17 to display the main part MI31C of the front cover image and the main part MI34C of the back cover image as individual pages, while the main part MI32C of the image of the first page and the main part MI33C of the image of the second page are joined at the cut portion and displayed on the display unit 17 as a double-page spread.
  • FIGS. 47A to 47C, 48, 49A to 49C, and 50A to 50C show examples of setting an extraction area for a read image.
  • the cropping image RI41F includes the medium image MI41 (FIG. 47A)
  • the cropping image RI42F includes the medium image MI42 (FIG. 47B)
  • the cropping image RI43F includes the medium image MI43 (FIG. 47C).
  • the center of gravity GB41 of the circumscribed rectangle of the medium image MI41 is set in the cropping image RI41F
  • the center of gravity GB42 of the circumscribed rectangle of the medium image MI42 is set in the cropping image RI42F
  • the center of gravity GB43 of the circumscribed rectangle of the medium image MI43 is set in the cropping image RI43F.
  • the image processing unit 15 translates the right side of the circumscribed rectangle of the medium image MI41 leftward, thereby creating, in the cropping image RI41F, a rectangle IR41 not including the medium cutting portion CP.
  • the image processing unit 15 translates the left side of the circumscribed rectangle of the medium image MI42 rightward, thereby creating, in the cropping image RI42F, a rectangle IR42 not including the medium cutting portion CP.
  • the image processing unit 15 translates the right side of the circumscribed rectangle of the medium image MI43 leftward, thereby creating, in the cropping image RI43F, a rectangle IR43 not including the medium cutting portion CP.
  • the image processing unit 15 arranges the rectangles IR41, IR42, IR43 with the centers of gravity GB41, GB42, GB43 coincident with each other. Then, the image processing unit 15 determines a rectangular area where all of the rectangles IR41, IR42, and IR43 overlap as the extraction area EA4. Therefore, the center of gravity GB5 of the rectangular extraction area EA4 matches the centers of gravity GB41, GB42, and GB43.
  • the image processing unit 15 sets the extraction area EA4 to the cropping image RI41F by matching the center of gravity GB5 with the center of gravity GB41.
  • the image processing unit 15 sets the extraction area EA4 to the cropping image RI42F by matching the center of gravity GB5 with the center of gravity GB42.
  • the image processing unit 15 sets the extraction area EA4 to the cropping image RI43F by matching the center of gravity GB5 with the center of gravity GB43.
  • the main part of the medium image MI41 exists in the extraction area EA4 set in the cropping image RI41F
  • the main part of the medium image MI42 exists in the extraction area EA4 set in the cropping image RI42F.
  • the main part of the medium image MI43 exists in the extraction area EA4 set in the cropping image RI43F.
  • in accordance with the extraction area EA4, the image processing unit 15 extracts the main part MI41C of the medium image MI41 from the cropping image RI41F, the main part MI42C of the medium image MI42 from the cropping image RI42F, and the main part MI43C of the medium image MI43 from the cropping image RI43F.
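Extracting the main part by centering the common extraction area EA4 on each cropping image's center of gravity can be sketched as below. The image is modeled as a list of pixel rows; the function name and this representation are assumptions of the sketch, not the patented implementation.

```python
def crop_at_centroid(image, centroid, area_size):
    """Crop the region of `image` obtained by centering an extraction
    area of `area_size` (width, height) on `centroid` (cx, cy).
    `image` is a 2-D list of pixel rows."""
    cx, cy = centroid
    w, h = area_size
    left = int(round(cx - w / 2))  # left edge of the extraction area
    top = int(round(cy - h / 2))   # top edge of the extraction area
    return [row[left:left + w] for row in image[top:top + h]]
```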
  • the third embodiment has been described above.
  • the image processing unit 15 sets the middle point of the upper side of the medium outline figure or the middle point of a straight line connecting both corner points of the upper side of the medium outline figure as the medium image reference position.
  • the image processing unit 15 may set the middle point of the lower side of the medium outline figure or the middle point of a straight line connecting both corner points of the lower side of the medium outline figure as the medium image reference position.
  • the image processing unit 15 may set the middle point of the left side of the medium outline figure or the middle point of a straight line connecting both corner points of the left side of the medium outline figure as the medium image reference position.
  • the image processing unit 15 may set the middle point of the right side of the medium outline figure or the middle point of a straight line connecting both corner points of the right side of the medium outline figure as the medium image reference position.
  • the image processing unit 15 may set the upper end of the book gutter in the medium image as the medium image reference position.
  • the image processing unit 15 sets the middle point of the upper side of the document table image included in the read image as the read image reference position.
  • the image processing unit 15 may set the middle point of the lower side of the document table image included in the read image as the read image reference position.
  • the image processing unit 15 may set any vertex of the rectangular read image as the read image reference position.
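The reference-position choices above largely reduce to taking the midpoint of one side of a rectangle (the medium outline figure or the document table image). A minimal sketch, with an assumed (left, top, right, bottom) rectangle convention and hypothetical function name:

```python
def side_midpoint(rect, side):
    """Return the middle point of one side of a rectangle such as a
    medium outline figure; rect is (left, top, right, bottom)."""
    left, top, right, bottom = rect
    if side == "top":
        return ((left + right) / 2, top)
    if side == "bottom":
        return ((left + right) / 2, bottom)
    if side == "left":
        return (left, (top + bottom) / 2)
    if side == "right":
        return (right, (top + bottom) / 2)
    raise ValueError(side)
```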
  • the image processing unit 15 sets the extraction area based on a rectangle that includes the circumscribed rectangles of all the rearranged medium images in the plurality of read images.
  • the image processing unit 15 may set the extraction area based on a rectangular area where the circumscribed rectangles of all the rearranged medium images in the plurality of read images overlap.
  • the image processing unit 15 may set the extraction area based on an area obtained by adding a predetermined margin to the circumscribed rectangle. Doing so reduces the possibility that a part of the medium image is missing when the medium image is extracted from the read image.
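Adding a predetermined margin to a circumscribed rectangle, clamped to the read-image bounds, might look like the following sketch (the function name and conventions are assumptions, not the patented implementation):

```python
def add_margin(rect, margin, image_size):
    """Expand a circumscribed rectangle (left, top, right, bottom) by a
    predetermined margin on every side, clamped to the read-image
    bounds (width, height), to reduce the chance that part of the
    medium image is cut off on extraction."""
    left, top, right, bottom = rect
    w, h = image_size
    return (max(0, left - margin), max(0, top - margin),
            min(w, right + margin), min(h, bottom + margin))
```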
  • the storage unit 13 is realized by, for example, a memory as hardware.
  • examples of the memory include a random access memory (RAM) such as a synchronous dynamic random access memory (SDRAM), a read only memory (ROM), and a flash memory.
  • the control unit 11 and the image processing unit 15 are realized by, for example, a processor as hardware.
  • examples of the processor include a CPU (Central Processing Unit), a DSP (Digital Signal Processor), and an FPGA (Field Programmable Gate Array).
  • the control unit 11 and the image processing unit 15 may be realized by an LSI (Large Scale Integrated Circuit) including a processor and peripheral circuits. Further, the control unit 11 and the image processing unit 15 may be realized using a GPU (Graphics Processing Unit), an ASIC (Application Specific Integrated Circuit), or the like.
  • the display unit 17 is realized by, for example, a display as hardware.
  • an example of the display used in the image processing apparatus 10 is a liquid crystal display.
  • all or part of each process described above for the image processing apparatus 10 may be realized by causing a processor of the image processing apparatus 10 to execute a program corresponding to that process.
  • a program corresponding to each process in the above description may be stored in the memory, and the program may be read from the memory and executed by the processor.
  • the program may be stored in a program server connected to the image processing apparatus 10 via an arbitrary network, downloaded from the program server to the image processing apparatus 10, and executed.
  • the program may be stored in a recording medium, read from the recording medium, and executed.
  • the recording medium readable by the image processing apparatus 10 includes, for example, a memory card, a USB memory, an SD card, a flexible disk, a magneto-optical disk, a CD-ROM, a DVD, and a Blu-ray (registered trademark) disk.
  • the program may be written in an arbitrary language or by an arbitrary description method, and may be in any format such as source code or binary code. The program is not necessarily limited to a single program; it includes a program distributed as a plurality of modules or a plurality of libraries, and a program that achieves its function in cooperation with a separate program, typified by an OS.
  • the specific form of distribution and integration of the image processing device 10 is not limited to the illustrated one; all or part of the image processing device 10 can be configured to be functionally or physically distributed or integrated in arbitrary units according to various loads, usage conditions, and the like.
  • 10 image processing device; 11 control unit; 13 storage unit; 15 image processing unit; 17 display unit


Abstract

This image processing device can improve work efficiency for an operator. An image processing device (10) wherein a storage unit (13) stores a plurality of read images in a series, each of the read images including a medium image, and wherein an image processing unit (15): sets, in each of the plurality of read images, a first reference position in a first region in which the medium image is present; sets a second reference position in a second region, the second region being a region other than the first region; changes the location of the medium image within each of the read images on the basis of the positional relationship between the first reference position and the second reference position; sets an identical extraction region in the plurality of read images on the basis of the medium image that has undergone the location change; and extracts a medium image from each of the plurality of read images according to the set extraction region.

Description

Image processing apparatus and image processing method
 The disclosed technology relates to an image processing apparatus and an image processing method.
 A scanner is sometimes used to read a book page by page and convert it into electronic data.
JP 09-186854 A
JP 2000-011192 A
 If the positions of a series of read multi-page images shift between pages, a person viewing the series of page images in order feels a sense of unnaturalness due to the positional shift of the images between pages. Conventionally, to reduce this discomfort, the operator had to carefully place the book on the scanner's document table each time a page was read so that no positional shift of the images occurred between pages. The operator's work efficiency was therefore poor.
 The disclosed technology has been made in view of the above, and its object is to improve the work efficiency of an operator.
 According to an aspect of the disclosure, an image processing device includes a storage unit and an image processing unit. The storage unit stores a series of read images, each including a medium image. In each of the read images, the image processing unit sets a first reference position in a first area where the medium image exists and a second reference position in a second area other than the first area, changes the arrangement of the medium image within the read image based on the positional relationship between the first reference position and the second reference position, sets the same extraction area in the plurality of read images based on the rearranged medium images, and extracts the medium image from each of the read images according to the extraction area.
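The arrangement change based on the two reference positions can be illustrated with a small sketch. Assume that, for each read image, the offset from the second reference position (fixed relative to the read image) to the first reference position (on the medium image) has already been measured. The helper below returns the translation that gives every medium the same placement; the function name and the choice of the average placement as the common target are assumptions of this sketch, not the claimed method.

```python
def realign(offsets):
    """Given, per read image, the offset (dx, dy) from the second
    reference position to the first (medium) reference position,
    return per-image translations that move each medium image to a
    common placement (here, the average placement of the series)."""
    n = len(offsets)
    mx = sum(dx for dx, dy in offsets) / n  # mean horizontal offset
    my = sum(dy for dx, dy in offsets) / n  # mean vertical offset
    return [(mx - dx, my - dy) for dx, dy in offsets]
```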
 According to the aspect of the disclosure, the work efficiency of the operator can be improved.
FIG. 1 is a diagram illustrating a configuration example of the image processing apparatus according to the first embodiment.
FIGS. 2, 3, 5, and 7 are diagrams illustrating examples of processing flows of the image processing apparatus according to the first embodiment.
FIGS. 4A to 4E, 6A and 6B, and 8A to 8C are diagrams for describing operation examples of the image processing apparatus according to the first embodiment.
FIG. 9 is a diagram illustrating an example of a processing flow at the time of the first reading according to the first embodiment.
FIGS. 10A to 10C, 11A to 11C, 12A to 12C, 13A to 13C, 14A to 14D, 15A to 15D, 16A to 16C, 17, 18A to 18C, 19A to 19C, and 20 are diagrams for describing operation examples of the image processing apparatus according to the first embodiment.
FIG. 21 is a diagram illustrating an example of an old book according to the second embodiment.
FIGS. 22 to 25 and 37 are diagrams illustrating examples of processing flows of the image processing apparatus according to the second embodiment.
FIGS. 26A to 26C, 27A to 27C, 28A to 28C, 29A to 29C, 30A to 30D, 31A to 31D, 32, 33A to 33C, 34, 35A to 35C, 36A to 36C, and 38A to 38C are diagrams for describing operation examples of the image processing apparatus according to the second embodiment.
FIG. 39 is a diagram illustrating an example of a processing flow of the image processing apparatus according to the third embodiment.
FIGS. 40A to 40C, 41A to 41C, 42A to 42C, 43A to 43C, 44A to 44D, 45A to 45D, 46, 47A to 47C, 48, 49A to 49C, and 50A to 50C are diagrams for describing operation examples of the image processing apparatus according to the third embodiment.
 Hereinafter, embodiments of the image processing apparatus and the image processing method disclosed in the present application will be described with reference to the drawings. Note that these embodiments do not limit the image processing apparatus and the image processing method disclosed in the present application. In the embodiments, the same reference numerals are given to configurations having the same functions and to steps performing the same processing.
[Embodiment 1]
<Configuration of image processing device>
 FIG. 1 is a diagram illustrating a configuration example of the image processing apparatus according to the first embodiment. As illustrated in FIG. 1, the image processing apparatus 10 includes a control unit 11, a storage unit 13, an image processing unit 15, and a display unit 17. The storage unit 13, the image processing unit 15, and the display unit 17 operate under the control of the control unit 11. The image processing apparatus 10 is used, for example, mounted on a scanner or connected to a scanner.
 The storage unit 13 stores images read by a scanner (hereinafter sometimes called "read images"). For example, when each page of one book is read by a scanner to convert the book into electronic data, the storage unit 13 stores a series of read images whose pages run through the whole book. Each read image includes an image of the medium that was read (hereinafter sometimes called a "medium image"), and the medium image is arranged within the read image. For example, when each page of one book is read by a scanner to convert the book into electronic data, the medium image is the image of each page. That is, the storage unit 13 stores a series of read images, each including a medium image (each having a medium image arranged in it).
 <画像処理装置の処理・動作>
 図2、図3、図5、及び、図7は、実施例1の画像処理装置の処理フローの一例を示す図である。図4A~E、図6A,B、図8A~C、図9、図10A~C、図11A~C、図12A~C、図13A~C、図14A~D、図15A~D、図16A~C、図17、図18A~C、図19A~C、及び、図20は、実施例1の画像処理装置の動作例の説明に供する図である。
<Processing and operation of image processing device>
FIG. 2, FIG. 3, FIG. 5, and FIG. 7 are diagrams illustrating an example of the processing flow of the image processing device according to the first embodiment. FIGS. 4A to 4E, FIGS. 6A and 6B, FIGS. 8A to 8C, FIG. 9, FIGS. 10A to 10C, FIGS. 11A to 11C, FIGS. 12A to 12C, FIGS. 13A to 13C, FIGS. 14A to 14D, FIGS. 15A to 15D, FIGS. 16A to 16C, FIG. 17, FIGS. 18A to 18C, FIGS. 19A to 19C, and FIG. 20 are diagrams for explaining an operation example of the image processing device according to the first embodiment.
In the first embodiment, an example will be described in which each page of an old book is read by a scanner to convert the book into electronic data. Also in the first embodiment, a case will be described as an example in which each page of the old book, placed in a two-page spread state on a document table, is read by an overhead scanner.
The processing flow illustrated in FIG. 2 is started, for example, when a processing start button (not illustrated) of the image processing device 10 is pressed by an operator.
In FIG. 2, in step S101, the control unit 11 selects, from the series of read images stored in the storage unit 13, a read image to be processed by the image processing unit 15, acquires the selected read image from the storage unit 13, and outputs the acquired read image to the image processing unit 15. Note that blank-page images are removed in advance and are not stored in the storage unit 13.
For example, suppose that the following series of images is stored in the storage unit 13 as the read images of one old book: an image of the front cover, a two-page spread image of the first and second pages (hereinafter referred to as the "spread 1-2 image"), a two-page spread image of the third and fourth pages (hereinafter referred to as the "spread 3-4 image"), ..., and an image of the back cover. In this case, the control unit 11 acquires from the storage unit 13 the front cover image in step S101 of the first iteration of the processing loop of steps S101 to S109, the spread 1-2 image in step S101 of the second iteration, the spread 3-4 image in step S101 of the third iteration, and so on. The control unit 11 then acquires the back cover image from the storage unit 13 in step S101 of the last iteration of the processing loop of steps S101 to S109. As another example, the series of images consisting of the front cover image, the spread 1-2 image, the spread 3-4 image, ..., and the back cover image is stored in the storage unit 13 as a single file, and the control unit 11 sequentially selects the front cover image, the spread 1-2 image, the spread 3-4 image, ..., and the back cover image from the file to be processed. The file to be processed is, for example, arbitrarily selected by the operator.
In step S103, the image processing unit 15 performs medium image position detection processing on the read image selected in step S101.
FIG. 3 illustrates an example of the processing flow of the medium image position detection processing. FIG. 4A illustrates an example of the read image RI selected in step S101. In FIG. 4A, the read image RI includes, for example, medium images MIA and MIB, and an image of the document table on which the media to be read are placed (hereinafter referred to as the "document table image") MTI. For example, the medium image MIA is a two-page spread image of the old book, and the medium image MIB is an image of a color chart placed beside the old book.
In step S151 in FIG. 3, the image processing unit 15 performs edge detection ED on the read image RI (FIG. 4A) using gradation differences between neighboring pixels (FIG. 4B).
Next, in step S153, the image processing unit 15 performs straight line detection SLD on the read image RI after the edge detection ED, using the Hough transform and the least squares method (FIG. 4C).
Next, in step S155, the image processing unit 15 performs curve detection CLD on the read image RI after the edge detection ED, using labeling (FIG. 4D).
Then, in step S157, the image processing unit 15 performs circumscribed rectangle detection RTD on the read image RI based on the results of the straight line detection SLD and the curve detection CLD (FIG. 4E). By this circumscribed rectangle detection RTD, the circumscribed rectangle BR1 of the medium image MIA is detected as the position of the medium image MIA in the read image RI, and the circumscribed rectangle BR2 of the medium image MIB is detected as the position of the medium image MIB in the read image RI.
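For illustration, the circumscribed rectangle detection of step S157 can be sketched as follows. This is a minimal sketch, not the embodiment's actual implementation: it assumes the edge, line, and curve detection results have already been reduced to a set of contour points, and the point coordinates and function name are hypothetical.

```python
def circumscribed_rectangle(points):
    """Return the axis-aligned circumscribed rectangle (x, y, width, height)
    of a set of (x, y) contour points obtained from edge/line/curve detection."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    left, top = min(xs), min(ys)
    return (left, top, max(xs) - left, max(ys) - top)

# Hypothetical contour points of one medium image within the read image
contour = [(120, 80), (520, 85), (515, 390), (118, 395)]
print(circumscribed_rectangle(contour))  # (118, 80, 402, 315)
```

The rectangle's position in the read image is then taken as the position of the medium image, as described above.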
Returning to FIG. 2, in step S105, the image processing unit 15 calculates the tilt and the center of gravity of each medium image based on the corresponding circumscribed rectangle detected in step S157. For example, the image processing unit 15 calculates the tilt of the circumscribed rectangle as the tilt of the medium image, and calculates the center of gravity of the circumscribed rectangle as the center of gravity of the medium image.
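The calculation of step S105 can be sketched as follows, assuming the circumscribed rectangle is given by its four corner points. The corner ordering, the tilt convention (angle of the top edge, with y increasing downward in image coordinates), and all coordinates are illustrative assumptions.

```python
import math

def tilt_and_centroid(corners):
    """Given the four corner points of a circumscribed rectangle in order
    (top-left, top-right, bottom-right, bottom-left), return the tilt in
    degrees (angle of the top edge from horizontal) and the center of
    gravity (mean of the corner points)."""
    (x0, y0), (x1, y1) = corners[0], corners[1]
    tilt = math.degrees(math.atan2(y1 - y0, x1 - x0))
    cx = sum(x for x, _ in corners) / 4.0
    cy = sum(y for _, y in corners) / 4.0
    return tilt, (cx, cy)

# A slightly tilted circumscribed rectangle (hypothetical image coordinates)
corners = [(100, 110), (500, 100), (505, 300), (105, 310)]
tilt, centroid = tilt_and_centroid(corners)
print(round(tilt, 2), centroid)  # -1.43 (302.5, 205.0)
```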
Next, in step S107, the image processing unit 15 performs medium image reference position setting processing.
FIG. 5 illustrates an example of the processing flow of the medium image reference position setting processing. Steps S151, S153, and S155 in FIG. 5 are the same as those illustrated in FIG. 3, and their description is therefore omitted.
In step S161 in FIG. 5, the image processing unit 15 detects, for each medium image, a figure indicating the outline of the medium image (hereinafter referred to as a "medium outline figure") based on the results of the straight line detection SLD and the curve detection CLD.
Next, in step S163, the image processing unit 15 determines, for each medium image, whether the upper side of the medium outline figure is a straight line. If the upper side of the medium outline figure is a straight line (step S163: Yes), the processing proceeds to step S165; if it is not a straight line (step S163: No), the processing proceeds to step S167.
In step S165, the image processing unit 15 sets the middle point of the upper side of the medium outline figure (that is, the middle point of the straight line) as the medium image reference position, and stores the read image with the medium image reference position set in the storage unit 13.
Meanwhile, in step S167, the image processing unit 15 sets, for the medium image, a straight line connecting the corner points at both ends of the upper side of the medium outline figure (hereinafter referred to as an "approximate straight line").
Then, in step S169, the image processing unit 15 sets the middle point of the approximate straight line as the medium image reference position, and stores the read image with the medium image reference position set in the storage unit 13.
FIGS. 6A and 6B illustrate examples of setting the medium image reference position. FIG. 6A illustrates an example in which the middle point of the approximate straight line set for the medium image is set as the medium image reference position. In FIG. 6A, the read image RI includes a medium image MI and a document table image MTI. By the processing in step S157, the circumscribed rectangle BR of the medium image MI is detected. By the processing in step S161, the medium outline figure DL is detected for the medium image MI. In FIG. 6A, since the upper side of the medium outline figure DL is not a straight line (step S163: No), the processing in step S167 sets, for the medium image MI, the approximate straight line SL connecting the corner points CR1 and CR2 at both ends of the upper side of the medium outline figure DL. Then, by the processing in step S169, the middle point of the approximate straight line SL is set as the medium image reference position RRP.
Note that, when the upper side of the medium outline figure DL is not a straight line, the image processing unit 15 may, instead of setting the middle point of the approximate straight line SL as the medium image reference position RRP, set the middle point of the upper side of the medium outline figure DL as the medium image reference position RRP, as illustrated in FIG. 6B.
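The branch in steps S163 to S169 can be sketched as follows. The polyline representation of the upper side, the straightness test (maximum deviation from the chord), and the tolerance value are assumptions for illustration; the embodiment does not specify how straightness is judged. Note that in both branches the resulting reference position is the midpoint of the line segment between the two end corner points.

```python
import math

def medium_image_reference_position(upper_side, tolerance=1.0):
    """Return (reference position, is_straight) for a medium outline figure
    whose upper side is given as a left-to-right polyline [(x, y), ...].
    If the upper side is (nearly) straight, the reference position is its
    midpoint (step S165); otherwise it is the midpoint of the approximate
    straight line connecting the end corner points (steps S167 and S169)."""
    (x0, y0), (x1, y1) = upper_side[0], upper_side[-1]
    chord = math.hypot(x1 - x0, y1 - y0)
    # Maximum perpendicular deviation of intermediate points from the chord
    deviation = max(
        (abs((x1 - x0) * (y - y0) - (x - x0) * (y1 - y0)) / chord
         for (x, y) in upper_side[1:-1]),
        default=0.0,
    )
    return ((x0 + x1) / 2.0, (y0 + y1) / 2.0), deviation <= tolerance

# A sagging upper side, as in FIG. 6A (hypothetical coordinates)
ref, is_straight = medium_image_reference_position([(0, 0), (50, 6), (100, 0)])
print(ref, is_straight)  # (50.0, 0.0) False
```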
Returning to FIG. 2, in step S109, the control unit 11 determines whether the processing of steps S101 to S107 has been completed for all of the series of read images stored in the storage unit 13. If the processing of steps S101 to S107 has been completed for all of the series of read images stored in the storage unit 13 (step S109: Yes), the processing proceeds to step S111. Meanwhile, if any read image for which the processing of steps S101 to S107 has not been performed remains in the series of read images stored in the storage unit 13 (step S109: No), the processing returns to step S101. For example, when the series of images consisting of the front cover image, the spread 1-2 image, the spread 3-4 image, ..., and the back cover image is stored in the storage unit 13 as the read images of one old book, the processing proceeds to step S111 when the processing of steps S101 to S107 has been completed for the back cover image.
In step S111, the image processing unit 15 sets a read image reference position for each read image. FIGS. 6A and 6B illustrate examples of setting the read image reference position. As illustrated in FIGS. 6A and 6B, the image processing unit 15 sets, for example, the middle point of the upper side of the document table image MTI included in the read image RI as the read image reference position MRP1.
Next, in step S113, similarly to the processing in step S101, the control unit 11 selects, from the series of read images stored in the storage unit 13 (that is, the series of read images with the medium image reference positions and the read image reference positions set), a read image to be processed by the image processing unit 15, acquires the selected read image from the storage unit 13, and outputs the acquired read image to the image processing unit 15.
Next, in step S115, the image processing unit 15 performs erecting correction on the medium image included in the read image selected in step S113. That is, in step S115, the image processing unit 15 corrects the tilt of the medium image calculated in step S105.
Next, in step S117, the image processing unit 15 determines whether the read image selected in step S113 is the first read image of the series of read images. If the read image selected in step S113 is the first read image (step S117: Yes), the processing proceeds to step S119. Meanwhile, if the read image selected in step S113 is not the first read image (step S117: No), the processing proceeds to step S121.
In step S121, the image processing unit 15 performs paired image detection processing. FIG. 7 illustrates an example of the processing flow of the paired image detection processing.
In FIG. 7, in step S171, the image processing unit 15 calculates the center of gravity of the medium image included in the N-th selected read image (that is, the currently selected read image; this medium image is hereinafter referred to as the "current medium image", and its center of gravity as the "current medium image center of gravity"), and the center of gravity of the medium image included in the (N-1)-th selected read image (that is, the previously selected read image; this medium image is hereinafter referred to as the "previous medium image", and its center of gravity as the "previous medium image center of gravity"). For example, the image processing unit 15 calculates the centers of gravity of the circumscribed rectangles detected in step S157 as the current medium image center of gravity and the previous medium image center of gravity.
Next, in step S173, the image processing unit 15 calculates the distance between the read image reference position and the current medium image center of gravity (hereinafter referred to as the "current distance"), and the distance between the read image reference position and the previous medium image center of gravity (hereinafter referred to as the "previous distance").
Next, in step S175, the image processing unit 15 calculates the vertical length H and the horizontal length W of the current medium image, and the vertical length H' and the horizontal length W' of the previous medium image. For example, the image processing unit 15 calculates the vertical length and the horizontal length of the circumscribed rectangle of the current medium image as the vertical length H and the horizontal length W of the current medium image, and calculates the vertical length and the horizontal length of the circumscribed rectangle of the previous medium image as the vertical length H' and the horizontal length W' of the previous medium image.
Next, in step S177, the image processing unit 15 determines whether the absolute value of the difference between the current distance and the previous distance (hereinafter referred to as the "distance difference") is less than a threshold TH1. If the distance difference is less than the threshold TH1 (step S177: Yes), the processing proceeds to step S179; if the distance difference is equal to or greater than the threshold TH1 (step S177: No), the processing proceeds to step S185.
In step S179, the image processing unit 15 determines whether the absolute value of the difference between the vertical length H of the current medium image and the vertical length H' of the previous medium image (hereinafter referred to as the "vertical length difference") is less than a threshold TH2. If the vertical length difference is less than the threshold TH2 (step S179: Yes), the processing proceeds to step S181; if the vertical length difference is equal to or greater than the threshold TH2 (step S179: No), the processing proceeds to step S191.
In step S181, the image processing unit 15 determines whether the absolute value of the difference between the horizontal length W of the current medium image and the horizontal length W' of the previous medium image (hereinafter referred to as the "horizontal length difference") is less than a threshold TH3. If the horizontal length difference is less than the threshold TH3 (step S181: Yes), the processing proceeds to step S183; if the horizontal length difference is equal to or greater than the threshold TH3 (step S181: No), the processing proceeds to step S187.
In step S183, the image processing unit 15 determines that the current medium image and the previous medium image are both two-page spread images or both color chart images, and that they are images forming a pair with each other. That is, in step S183, the image processing unit 15 determines that the current medium image and the previous medium image are a pair of two-page spread images or a pair of color chart images. The image processing unit 15 determines which of these two cases applies based on, for example, the horizontal length of the current medium image. For example, if the horizontal length of the current medium image is equal to or greater than a threshold TH4, the image processing unit 15 determines that the current medium image and the previous medium image are a pair of two-page spread images; if the horizontal length of the current medium image is less than the threshold TH4, the image processing unit 15 determines that they are a pair of color chart images.
Meanwhile, in step S185, the image processing unit 15 determines whether the vertical length difference is less than the threshold TH2. If the vertical length difference is less than the threshold TH2 (step S185: Yes), the processing proceeds to step S187; if the vertical length difference is equal to or greater than the threshold TH2 (step S185: No), the processing proceeds to step S191.
In step S187, the image processing unit 15 determines whether the horizontal length W of the current medium image is within a predetermined range of the horizontal length W' of the previous medium image. For example, the image processing unit 15 determines whether the horizontal length W of the current medium image is within the predetermined range of not less than half (that is, W'/2) and less than twice (that is, 2W') the horizontal length W' of the previous medium image. If the horizontal length W of the current medium image is within this predetermined range (step S187: Yes), the processing proceeds to step S189; if not (step S187: No), the processing proceeds to step S191.
In step S189, the image processing unit 15 determines that one of the current medium image and the previous medium image is a cover image and the other is a two-page spread image, and that they are images forming a pair with each other. That is, in step S189, the image processing unit 15 determines that the current medium image and the previous medium image are a pair of a cover image and a two-page spread image.
Meanwhile, in step S191, the image processing unit 15 determines that the current medium image and the previous medium image are images that do not form a pair with each other. That is, in step S191, the image processing unit 15 determines that the current medium image and the previous medium image are not paired images.
FIGS. 8A, 8B, and 8C illustrate examples of the paired image determination.
In FIG. 8A, for example, the medium image MIA included in the read image RIA is the previous medium image, and the medium image MIB included in the read image RIB is the current medium image. In FIG. 8A, the distance difference between the previous distance DTA, between the read image reference position MRP1 and the previous medium image center of gravity GBA, and the current distance DTB, between the read image reference position MRP1 and the current medium image center of gravity GBB, is less than the threshold TH1 (step S177: Yes). The vertical length difference between the vertical length H5 of the medium image MIA and the vertical length H6 of the medium image MIB is less than the threshold TH2 (step S179: Yes). Furthermore, the horizontal length difference between the horizontal length W5 of the medium image MIA and the horizontal length W6 of the medium image MIB is less than the threshold TH3 (step S181: Yes). Therefore, the medium image MIA and the medium image MIB are determined to be a pair of two-page spread images (step S183).
In FIG. 8B, for example, the medium image MIC included in the read image RIC is the previous medium image, and the medium image MID included in the read image RID is the current medium image. In FIG. 8B, the distance difference between the previous distance DTC, between the read image reference position MRP1 and the previous medium image center of gravity GBC, and the current distance DTD, between the read image reference position MRP1 and the current medium image center of gravity GBD, is less than the threshold TH1 (step S177: Yes). The vertical length difference between the vertical length H3 of the medium image MIC and the vertical length H4 of the medium image MID is less than the threshold TH2 (step S179: Yes). However, the horizontal length difference between the horizontal length W3 of the medium image MIC and the horizontal length W4 of the medium image MID is equal to or greater than the threshold TH3 (step S181: No). Nevertheless, the horizontal length W4 of the medium image MID is within the range of not less than half and less than twice the horizontal length W3 of the medium image MIC (step S187: Yes). Therefore, the medium image MIC and the medium image MID are determined to be a pair of a cover image and a two-page spread image (step S189).
In FIG. 8C, for example, the medium image MIE included in the read image RIE is the previous medium image, and the medium image MIF included in the read image RIF is the current medium image. In FIG. 8C, the distance difference between the previous distance DTE, between the read image reference position MRP1 and the previous medium image center of gravity GBE, and the current distance DTF, between the read image reference position MRP1 and the current medium image center of gravity GBF, is equal to or greater than the threshold TH1 (step S177: No). However, the vertical length difference between the vertical length H1 of the medium image MIE and the vertical length H2 of the medium image MIF is less than the threshold TH2 (step S185: Yes). Furthermore, the horizontal length W2 of the medium image MIF is within the range of not less than half and less than twice the horizontal length W1 of the medium image MIE (step S187: Yes). Therefore, the medium image MIE and the medium image MIF are determined to be a pair of a cover image and a two-page spread image (step S189).
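The decision tree of steps S177 to S191 can be sketched as the following function. The concrete threshold values and measurements are hypothetical, since the embodiment leaves TH1 to TH4 unspecified; only the branch structure follows the flow described above.

```python
def classify_pair(cur, prev, th1, th2, th3, th4):
    """Classify the current and previous medium images following steps
    S177 to S191. `cur` and `prev` are (distance, height, width) tuples:
    the distance from the read image reference position to the medium image
    center of gravity, and the vertical/horizontal lengths of the
    circumscribed rectangle."""
    d, h, w = cur
    d_prev, h_prev, w_prev = prev
    if abs(d - d_prev) < th1:                       # S177: Yes
        if abs(h - h_prev) >= th2:                  # S179: No
            return "not a pair"                     # S191
        if abs(w - w_prev) < th3:                   # S181: Yes -> S183
            return ("pair of spread images" if w >= th4
                    else "pair of color chart images")
    else:                                           # S177: No
        if abs(h - h_prev) >= th2:                  # S185: No
            return "not a pair"                     # S191
    if w_prev / 2 <= w < 2 * w_prev:                # S187
        return "pair of a cover image and a spread image"  # S189
    return "not a pair"                             # S191

# Hypothetical measurements corresponding to FIGS. 8A and 8B
TH1, TH2, TH3, TH4 = 50, 30, 30, 400
print(classify_pair((120, 300, 500), (118, 302, 498), TH1, TH2, TH3, TH4))
print(classify_pair((110, 300, 480), (112, 298, 250), TH1, TH2, TH3, TH4))
```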
Note that the image processing unit 15 performs the processing of steps S171 to S191 (FIG. 7) for all the medium images included in the currently selected read image and the previously selected read image.
Returning to FIG. 2, in step S123, the image processing unit 15 determines whether a paired image was found in the processing of steps S171 to S191 (FIG. 7). If the processing of step S183 or step S189 was performed in FIG. 7, it is determined that a paired image exists; if the processing of step S191 was performed in FIG. 7, it is determined that no paired image exists. If a paired image exists (step S123: Yes), the processing proceeds to step S125; if no paired image exists (step S123: No), the processing proceeds to step S131.
In step S125, the image processing unit 15 determines whether the two images forming a pair are two two-page spread images or two color chart images. If the processing of step S183 was performed in FIG. 7, it is determined that the two paired images are two two-page spread images or two color chart images; if the processing of step S189 was performed in FIG. 7, it is determined that they are neither. If the two paired images are two two-page spread images or two color chart images (step S125: Yes), the processing proceeds to step S127; otherwise (step S125: No), the processing proceeds to step S129.
The image processing unit 15 performs position setting processing A in step S127, and performs position setting processing B in step S129. Details of the position setting processing A and the position setting processing B will be described later. After the processing of step S127 or S129, the processing proceeds to step S135.
Meanwhile, in step S131, the image processing unit 15 outputs a warning indicating that no paired image exists. For example, the image processing unit 15 causes the display unit 17 to display a warning message indicating that the N-th selected read image contains no medium image that forms a pair with the medium image in the (N-1)-th selected read image. That is, the image processing unit 15 outputs the warning when no current medium image forming a pair with the previous medium image exists (step S123: No).
Next, in step S133, the image processing unit 15 determines whether to continue the processing. For example, if, in response to the warning output in step S131, the operator gives the image processing device 10 an instruction to continue the processing, the image processing unit 15 determines that the processing is to be continued (step S133: Yes). Meanwhile, if, in response to the warning output in step S131, the operator gives the image processing device 10 an instruction to stop the processing, the image processing unit 15 determines that the processing is not to be continued (step S133: No). If the processing is to be continued (step S133: Yes), the processing proceeds to step S119; if not (step S133: No), the processing ends.
In step S119, the image processing unit 15 performs position setting process C. Details of position setting process C will be described later. After the processing in step S119, the processing proceeds to step S135.
Next, in step S135, the image processing unit 15 rearranges the medium image within the read image based on the result of position setting process A, B, or C (step S127, S129, or S119). Details of the processing in step S135 will be described later.
Next, in step S136, the image processing unit 15 determines whether the processing of steps S115 to S135 has been completed for all the medium images included in the read image selected in step S113. When it has (step S136: Yes), the processing proceeds to step S137. When a medium image for which the processing of steps S115 to S135 has not yet been performed remains in the read image selected in step S113 (step S136: No), the processing returns to step S115.
In step S137, the control unit 11 determines whether the processing of steps S113 to S135 has been completed for all of the series of read images stored in the storage unit 13. When it has (step S137: Yes), the processing proceeds to step S139. When a read image for which the processing of steps S113 to S135 has not yet been performed remains in the series (step S137: No), the processing returns to step S113.
In step S139, the image processing unit 15 sets an extraction area for the read image. Details of the processing in step S139 will be described later.
Next, in step S141, the image processing unit 15 extracts the medium image based on the extraction area set in step S139. Details of the processing in step S141 will be described later.
An operation example based on the series of processes shown in FIGS. 2, 3, 5, and 7 will now be described.
In the following, as shown in FIG. 9, the image processing unit 15 sets, on every rectangular read image, an X coordinate extending rightward and a Y coordinate extending downward, with the upper-left vertex of the read image as the origin. That is, in a read image, the X coordinate is the horizontal coordinate and the Y coordinate is the vertical coordinate.
For example, the read images RI11, RI12, RI13, and RI14 (FIGS. 10A, 11A, 12A, and 13A) correspond to a series of read images of a single old book. The read image RI11 contains the medium image MI11, an image of the front cover (FIG. 10A); that is, the medium image MI11 is arranged in the read image RI11. The read image RI12 contains the medium image MI121, a spread image of pages 1 and 2, and the medium image MI122, an image of a color chart (FIG. 11A); that is, the medium images MI121 and MI122 are arranged in the read image RI12. The read image RI13 contains the medium image MI131, a spread image of pages 3 and 4, and the medium image MI132, an image of a color chart (FIG. 12A); that is, the medium images MI131 and MI132 are arranged in the read image RI13. The read image RI14 contains the medium image MI14, an image of the back cover (FIG. 13A); that is, the medium image MI14 is arranged in the read image RI14. Each of the read images RI11, RI12, RI13, and RI14 also contains the platen image MTI.
First, in accordance with the processing of steps S103 to S107 (FIG. 2), the image processing unit 15 operates on each of the read images RI11, RI12, RI13, and RI14 as shown in FIGS. 10B, 11B, 12B, and 13B.
That is, as shown in FIG. 10B, the image processing unit 15 detects the circumscribed rectangle BR11 of the medium image MI11 and calculates the center of gravity GB11 of the detected circumscribed rectangle BR11. In addition, because the upper side of the medium outline figure of the medium image MI11 is straight, the image processing unit 15 sets the midpoint of that upper side as the medium image reference position RRP11.
Similarly, as shown in FIG. 11B, the image processing unit 15 detects the circumscribed rectangle BR121 of the medium image MI121 and calculates its center of gravity GB121. Because the upper side of the medium outline figure of the medium image MI121 is not straight, the image processing unit 15 sets, as the medium image reference position RRP121, the midpoint of the approximate straight line SL121 connecting the corner points CR121a and CR121b at the two ends of that upper side. The image processing unit 15 also detects the circumscribed rectangle BR122 of the medium image MI122 and calculates its center of gravity GB122. Because the upper side of the medium outline figure of the medium image MI122 is straight, the image processing unit 15 sets the midpoint of that upper side as the medium image reference position RRP122.
Likewise, as shown in FIG. 12B, the image processing unit 15 detects the circumscribed rectangle BR131 of the medium image MI131 and calculates its center of gravity GB131. Because the upper side of the medium outline figure of the medium image MI131 is not straight, the image processing unit 15 sets, as the medium image reference position RRP131, the midpoint of the approximate straight line SL131 connecting the corner points CR131a and CR131b at the two ends of that upper side. The image processing unit 15 also detects the circumscribed rectangle BR132 of the medium image MI132 and calculates its center of gravity GB132. Because the upper side of the medium outline figure of the medium image MI132 is straight, the image processing unit 15 sets the midpoint of that upper side as the medium image reference position RRP132.
As shown in FIG. 13B, the image processing unit 15 detects the circumscribed rectangle BR14 of the medium image MI14 and calculates its center of gravity GB14. Because the upper side of the medium outline figure of the medium image MI14 is straight, the image processing unit 15 sets the midpoint of that upper side as the medium image reference position RRP14.
Here, the circumscribed rectangle BR11 indicates the area of the read image RI11 in which the medium image MI11 exists, the circumscribed rectangle BR121 indicates the area of the read image RI12 in which the medium image MI121 exists, and the circumscribed rectangle BR122 indicates the area of the read image RI12 in which the medium image MI122 exists. Similarly, the circumscribed rectangle BR131 indicates the area of the read image RI13 in which the medium image MI131 exists, the circumscribed rectangle BR132 indicates the area of the read image RI13 in which the medium image MI132 exists, and the circumscribed rectangle BR14 indicates the area of the read image RI14 in which the medium image MI14 exists.
In this way, in each of the read images RI11, RI12, RI13, and RI14, the image processing unit 15 sets the medium image reference positions RRP11, RRP121, RRP122, RRP131, RRP132, and RRP14 in the areas where the medium images MI11, MI121, MI122, MI131, MI132, and MI14 respectively exist.
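The reference-position setup of steps S103 to S107 can be sketched as follows. This is a minimal illustration, not the embodiment's implementation; the function names and the tuple-based geometry are assumptions. Note that whether the upper side is straight (as for MI11) or approximated by the line SL joining its corner points (as for MI121), the reference position reduces to the midpoint between the two top corner points.

```python
# Sketch (assumed helpers): circumscribed rectangle, its center of gravity,
# and the medium image reference position as the midpoint of the top edge.

def bounding_box(points):
    """Circumscribed rectangle of an outline: (min_x, min_y, max_x, max_y)."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return min(xs), min(ys), max(xs), max(ys)

def box_centroid(box):
    """Center of gravity GB of a circumscribed rectangle BR."""
    min_x, min_y, max_x, max_y = box
    return ((min_x + max_x) / 2.0, (min_y + max_y) / 2.0)

def top_edge_midpoint(top_left_corner, top_right_corner):
    """Medium image reference position RRP: midpoint of the top edge, or of
    the approximate line SL through its corner points when it is not straight;
    either way, the midpoint of the two corner points."""
    (x1, y1), (x2, y2) = top_left_corner, top_right_corner
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
```
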
Because the read image RI14 is the last image in the series of read images of the single old book, when the processing of steps S101 to S107 has been completed for the read image RI14 (step S109: Yes), the processing proceeds to step S111.
Next, in accordance with the processing of step S111 (FIG. 2), the image processing unit 15 operates on each of the read images RI11, RI12, RI13, and RI14 as shown in FIGS. 10B, 11B, 12B, and 13B. That is, in each of the read images RI11, RI12, RI13, and RI14, the image processing unit 15 sets the midpoint of the upper side of the platen image MTI as the read image reference position MRP1.
In this way, in each of the read images RI11, RI12, RI13, and RI14, the image processing unit 15 sets the read image reference position MRP1 in an area other than the areas where the medium images MI11, MI121, MI122, MI131, MI132, and MI14 exist.
Next, in the read image RI11 (FIG. 10B) selected by the control unit 11 in step S113 (FIG. 2), the image processing unit 15 corrects the tilt of the medium image MI11 by rotating the circumscribed rectangle BR11 clockwise about the center of gravity GB11 (step S115, FIG. 10C).
Because the read image RI11 is the first read image in the series (step S117: Yes), the image processing unit 15 performs position setting process C on the medium image MI11 (step S119). In position setting process C for the medium image MI11, as shown in FIG. 10C, the image processing unit 15 sets, as the position of the medium image MI11 relative to the platen image MTI, the vector Vecb0 whose start point is the read image reference position MRP1 (coordinates (xs0, ys0)) and whose end point is the medium image reference position RRP11 (coordinates (xb0, yb0)) of the erect-corrected medium image MI11. That is, the vector Vecb0 set for the medium image MI11 indicates the positional relationship between the medium image reference position RRP11 and the read image reference position MRP1.
Then, as shown in FIG. 10C, the image processing unit 15 rearranges the medium image MI11 in the read image RI11 based on the vector Vecb0 set for the medium image MI11 (step S135). That is, the image processing unit 15 rearranges the medium image MI11 so that the medium image reference position RRP11 coincides with the coordinates (xb0, yb0) of the end point of the vector Vecb0. The image processing unit 15 stores the read image RI11F after the rearrangement of the medium image MI11 in the storage unit 13. Note that the arrangement of the medium image MI11 after the rearrangement is identical to its arrangement before the rearrangement (that is, its arrangement after erecting correction).
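Position setting process C together with the rearrangement of step S135 amounts to recording a vector from the read image reference position to the medium image reference position, then translating the medium image so its reference position lands on that vector's end point. A minimal sketch under assumed names (not the embodiment's code):

```python
def set_position_vector(read_ref, medium_ref):
    """Process C: vector Vec from the read image reference position MRP
    to the (erect-corrected) medium image reference position RRP."""
    return (medium_ref[0] - read_ref[0], medium_ref[1] - read_ref[1])

def rearrange(read_ref, vector, current_medium_ref):
    """Step S135: translation that moves the medium image so its reference
    position coincides with the vector's end point."""
    target = (read_ref[0] + vector[0], read_ref[1] + vector[1])
    return (target[0] - current_medium_ref[0], target[1] - current_medium_ref[1])
```

For the first read image the vector is taken from the image itself, so the resulting translation is zero and the arrangement is unchanged, matching the behavior described for MI11.
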
In the read image RI12 (FIG. 11B) selected by the control unit 11 in step S113 (FIG. 2), the image processing unit 15 corrects the tilt of the medium image MI121 by rotating the circumscribed rectangle BR121 counterclockwise about the center of gravity GB121 (step S115, FIG. 11C). Similarly, the image processing unit 15 corrects the tilt of the medium image MI122 by rotating the circumscribed rectangle BR122 clockwise about the center of gravity GB122 (step S115, FIG. 11C).
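The tilt correction of step S115 rotates the medium image about the center of gravity of its circumscribed rectangle. A sketch of rotating one point about a center, assuming the tilt angle has already been estimated; in the read-image coordinate system (X rightward, Y downward), a positive angle appears clockwise on the image:

```python
import math

def rotate_about(point, center, angle_deg):
    """Rotate `point` about `center` by `angle_deg`. With Y pointing down,
    a positive angle is a visually clockwise rotation."""
    theta = math.radians(angle_deg)
    dx, dy = point[0] - center[0], point[1] - center[1]
    return (center[0] + dx * math.cos(theta) - dy * math.sin(theta),
            center[1] + dx * math.sin(theta) + dy * math.cos(theta))
```

Applying the inverse of the detected tilt angle to every corner of the circumscribed rectangle (and to the image content) yields the erect-corrected medium image.
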
Because the read image RI12 is not the first read image in the series (step S117: No), the image processing unit 15 performs the paired image detection process on the read image RI12 (step S121). Through this paired image detection process, the medium images MI11 and MI121 are determined to be a cover image/spread image pair, while the medium images MI11 and MI122 are determined not to be a pair.
Because the medium images MI11 and MI121 were determined to be a cover image/spread image pair, the image processing unit 15 performs position setting process B on the medium image MI121 (step S129). In position setting process B for the medium image MI121, as shown in FIG. 11C, the image processing unit 15 sets, as the position of the medium image MI121 relative to the platen image MTI, the vector Vecb1 whose start point is the read image reference position MRP1 (coordinates (xs0, ys0)) and whose end point is the medium image reference position RRP121 (coordinates (xb1, yb0)). That is, the vector Vecb1 set for the medium image MI121 indicates the positional relationship between the medium image reference position RRP121 and the read image reference position MRP1. Here, in FIG. 11C, the X coordinate value xb1 of the medium image reference position RRP121 is identical to the X coordinate value xb1 of the medium image reference position RRP121 of the erect-corrected medium image MI121, while the Y coordinate value yb0 of the medium image reference position RRP121 is identical to the Y coordinate value yb0 of the medium image reference position RRP11 in FIG. 10C.
Because the medium images MI11 and MI122 were determined not to be a pair, the image processing unit 15, after the processing in step S131 and upon determining in step S133 that the processing is to be continued, performs position setting process C on the medium image MI122 (step S119). In position setting process C for the medium image MI122, as shown in FIG. 11C, the image processing unit 15 sets, as the position of the medium image MI122 relative to the platen image MTI, the vector Vecc1 whose start point is the read image reference position MRP1 (coordinates (xs0, ys0)) and whose end point is the medium image reference position RRP122 (coordinates (xc0, yc0)) of the erect-corrected medium image MI122. That is, the vector Vecc1 set for the medium image MI122 indicates the positional relationship between the medium image reference position RRP122 and the read image reference position MRP1.
Then, as shown in FIG. 11C, the image processing unit 15 rearranges, in the read image RI12, the medium image MI121 based on the vector Vecb1 set for it, and the medium image MI122 based on the vector Vecc1 set for it (step S135). That is, the image processing unit 15 rearranges the medium image MI121 so that the medium image reference position RRP121 coincides with the coordinates (xb1, yb0) of the end point of the vector Vecb1, and rearranges the medium image MI122 so that the medium image reference position RRP122 coincides with the coordinates (xc0, yc0) of the end point of the vector Vecc1. The image processing unit 15 stores the read image RI12F after the rearrangement of the medium images MI121 and MI122 in the storage unit 13. As a result of the rearrangement, the arrangement of the medium image MI121 is changed from its arrangement before the rearrangement (that is, its arrangement after erecting correction), whereas the arrangement of the medium image MI122 remains identical to its arrangement before the rearrangement (that is, its arrangement after erecting correction).
As described above, the image processing unit 15 rearranges the medium image MI121 so that the medium image reference position RRP121 coincides with the coordinates (xb1, yb0) of the end point of the vector Vecb1 (FIG. 11C). Also as described above, the X coordinate value xb1 of the medium image reference position RRP121 is identical to the X coordinate value xb1 of the medium image reference position RRP121 of the erect-corrected medium image MI121, while its Y coordinate value yb0 is identical to the Y coordinate value yb0 of the medium image reference position RRP11 in FIG. 10C. Also as described above, the medium images MI11 and MI121 are a cover image/spread image pair. One medium image and another medium image are determined to be a cover image/spread image pair (step S189) when the difference between their vertical lengths is less than the threshold TH2 (step S179: Yes), the difference between their horizontal lengths is equal to or greater than the threshold TH3 (step S181: No), and the horizontal length of the other medium image is within a predetermined range of the horizontal length of the one medium image (step S187: Yes). In other words, when the difference between the vertical lengths of the medium images MI11 and MI121 is less than the threshold TH2, the difference between their horizontal lengths is equal to or greater than the threshold TH3, and the horizontal length of the medium image MI121 is within the predetermined range of the horizontal length of the medium image MI11, the image processing unit 15, in rearranging the medium image MI121, changes its vertical arrangement while leaving its horizontal arrangement unchanged.
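The cover image/spread image pair test of steps S179, S181, and S187 can be sketched as a size comparison. The concrete threshold values for TH2 and TH3 and the width-ratio window standing in for the "predetermined range" are illustrative assumptions only, not values from this disclosure:

```python
# Assumed values for illustration only.
TH2 = 10                   # max vertical-length difference for a pair (S179)
TH3 = 10                   # horizontal-length difference at/above this (S181: No)
RATIO_RANGE = (1.8, 2.2)   # spread width roughly twice the cover width (assumed)

def is_cover_spread_pair(prev_w, prev_h, cur_w, cur_h):
    """True when the previous (cover) and current (spread) medium images
    satisfy the conditions of steps S179 (Yes), S181 (No), and S187 (Yes)."""
    if abs(prev_h - cur_h) >= TH2:    # S179: vertical lengths must match
        return False
    if abs(prev_w - cur_w) < TH3:     # S181: widths too close for cover/spread
        return False
    ratio = cur_w / prev_w            # S187: width within the predetermined range
    return RATIO_RANGE[0] <= ratio <= RATIO_RANGE[1]
```
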
In the read image RI13 (FIG. 12B) selected by the control unit 11 in step S113 (FIG. 2), the image processing unit 15 corrects the tilt of the medium image MI131 by rotating the circumscribed rectangle BR131 clockwise about the center of gravity GB131 (step S115, FIG. 12C). Similarly, the image processing unit 15 corrects the tilt of the medium image MI132 by rotating the circumscribed rectangle BR132 counterclockwise about the center of gravity GB132 (step S115, FIG. 12C).
Because the read image RI13 is not the first read image in the series (step S117: No), the image processing unit 15 performs the paired image detection process on the read image RI13 (step S121). Through this paired image detection process, the medium images MI121 and MI131 are determined to be a pair of spread images, and the medium images MI122 and MI132 are determined to be a pair of color chart images.
Because the medium images MI121 and MI131 were determined to be a pair of spread images, the image processing unit 15 performs position setting process A on the medium image MI131 (step S127). In position setting process A for the medium image MI131, as shown in FIG. 12C, the image processing unit 15 sets, as the position of the medium image MI131 relative to the platen image MTI, the vector Vecb1 whose start point is the read image reference position MRP1 (coordinates (xs0, ys0)) and whose end point is the medium image reference position RRP131 (coordinates (xb1, yb0)). That is, the vector Vecb1 set for the medium image MI131 indicates the positional relationship between the medium image reference position RRP131 and the read image reference position MRP1. Here, in FIG. 12C, the X coordinate value xb1 of the medium image reference position RRP131 is identical to the X coordinate value xb1 of the medium image reference position RRP121 in FIG. 11C, and the Y coordinate value yb0 of the medium image reference position RRP131 is identical to the Y coordinate value yb0 of the medium image reference position RRP121 in FIG. 11C. That is, the vector Vecb1 set for the medium image MI131 is identical to the vector Vecb1 set for the medium image MI121.
Likewise, because the medium images MI122 and MI132 were determined to be a pair of color chart images, the image processing unit 15 performs position setting process A on the medium image MI132 (step S127). In position setting process A for the medium image MI132, as shown in FIG. 12C, the image processing unit 15 sets, as the position of the medium image MI132 relative to the platen image MTI, the vector Vecc1 whose start point is the read image reference position MRP1 (coordinates (xs0, ys0)) and whose end point is the medium image reference position RRP132 (coordinates (xc0, yc0)). That is, the vector Vecc1 set for the medium image MI132 indicates the positional relationship between the medium image reference position RRP132 and the read image reference position MRP1. Here, in FIG. 12C, the X coordinate value xc0 of the medium image reference position RRP132 is identical to the X coordinate value xc0 of the medium image reference position RRP122 in FIG. 11C, and the Y coordinate value yc0 of the medium image reference position RRP132 is identical to the Y coordinate value yc0 of the medium image reference position RRP122 in FIG. 11C. That is, the vector Vecc1 set for the medium image MI132 is identical to the vector Vecc1 set for the medium image MI122.
Then, as shown in FIG. 12C, the image processing unit 15 rearranges, in the read image RI13, the medium image MI131 based on the vector Vecb1 set for it, and the medium image MI132 based on the vector Vecc1 set for it (step S135). That is, the image processing unit 15 rearranges the medium image MI131 so that the medium image reference position RRP131 coincides with the coordinates (xb1, yb0) of the end point of the vector Vecb1, and rearranges the medium image MI132 so that the medium image reference position RRP132 coincides with the coordinates (xc0, yc0) of the end point of the vector Vecc1. The image processing unit 15 stores the read image RI13F after the rearrangement of the medium images MI131 and MI132 in the storage unit 13. As a result of the rearrangement, the arrangements of the medium images MI131 and MI132 are both changed from their arrangements before the rearrangement (that is, their arrangements after erecting correction).
As described above, the medium images MI121 and MI131 are a pair of spread images, and the vector Vecb1 set for the medium image MI131 is identical to the vector Vecb1 set for the medium image MI121. The vector Vecb1 set for the medium image MI121 indicates the positional relationship between the medium image reference position RRP121 and the read image reference position MRP1, and the vector Vecb1 set for the medium image MI131 indicates the positional relationship between the medium image reference position RRP131 and the read image reference position MRP1. The medium image MI121 is rearranged so that the medium image reference position RRP121 coincides with the coordinates (xb1, yb0) of the end point of the vector Vecb1, and the medium image MI131 is rearranged so that the medium image reference position RRP131 coincides with those same coordinates. In other words, the image processing unit 15 rearranges the medium image MI131, paired with the medium image MI121, so that the positional relationship is the same for the medium images MI121 and MI131.
 Also as described above, the medium image MI122 and the medium image MI132 are a pair of color chart images. The vector Vecc1 set for the medium image MI132 is identical to the vector Vecc1 set for the medium image MI122. Further, the vector Vecc1 set for the medium image MI122 indicates the positional relationship between the medium image reference position RRP122 and the read image reference position MRP1, while the vector Vecc1 set for the medium image MI132 indicates the positional relationship between the medium image reference position RRP132 and the read image reference position MRP1. The medium image MI122 is rearranged so that the medium image reference position RRP122 coincides with the coordinates (xc0, yc0) of the end point of the vector Vecc1, and the medium image MI132 is rearranged so that the medium image reference position RRP132 coincides with those same coordinates (xc0, yc0). In other words, the image processing unit 15 rearranges the medium image MI132 paired with the medium image MI122 so that the positional relationship is the same between the medium image MI122 and the medium image MI132.
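As an illustrative sketch (not part of the disclosed embodiment), the rearrangement of step S135 reduces to translating each medium image so that its reference position coincides with the end point of its assigned vector. The coordinate tuples and the function name below are assumptions:

```python
def rearrange(bounding_box, reference_pos, vector_end):
    """Translate a medium image's bounding box so that its reference
    position coincides with the end point of the assigned vector.

    bounding_box  -- (left, top, right, bottom) of the medium image
    reference_pos -- (x, y) medium image reference position (e.g. RRP131)
    vector_end    -- (x, y) end point of the vector (e.g. Vecb1)
    """
    dx = vector_end[0] - reference_pos[0]
    dy = vector_end[1] - reference_pos[1]
    left, top, right, bottom = bounding_box
    new_box = (left + dx, top + dy, right + dx, bottom + dy)
    new_ref = (reference_pos[0] + dx, reference_pos[1] + dy)
    return new_box, new_ref
```

Because a paired medium image is assigned the same vector, applying this translation to both members of a pair leaves their reference positions at identical coordinates, which is what makes the positional relationship the same between the two read images.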
 Further, since the circumscribed rectangle BR14 of the medium image MI14 included in the read image RI14 selected by the control unit 11 in step S113 (FIG. 2) has no inclination (FIG. 13B), the medium image MI14 is not rotated even when erecting correction is performed on it (step S115).
 Further, since the read image RI14 is not the first read image in the series of read images (step S117: No), the image processing unit 15 performs the paired-image detection process on the read image RI14 (step S121). By this paired-image detection process, the medium image MI131 and the medium image MI14 are determined to be a pair of a cover image and a facing-page image.
 Since the medium image MI131 and the medium image MI14 are determined to be a pair of a cover image and a facing-page image, the image processing unit 15 performs the position setting process B on the medium image MI14 (step S129). In the position setting process B for the medium image MI14, as shown in FIG. 13C, the image processing unit 15 sets the vector Vecb3, whose start point is the read image reference position MRP1 (coordinates (xs0, ys0)) and whose end point is the medium image reference position RRP14 (coordinates (xb3, yb0)), as the position of the medium image MI14 relative to the platen image MTI. That is, the vector Vecb3 set for the medium image MI14 indicates the positional relationship between the medium image reference position RRP14 and the read image reference position MRP1. Here, in FIG. 13C, the X coordinate value xb3 of the medium image reference position RRP14 is the same as the X coordinate value xb3 of the medium image reference position RRP14 of the medium image MI14 in FIG. 13B, while the Y coordinate value yb0 of the medium image reference position RRP14 is the same as the Y coordinate value yb0 of the medium image reference position RRP131 in FIG. 12C.
 Then, as shown in FIG. 13C, the image processing unit 15 rearranges the medium image MI14 in the read image RI14 based on the vector Vecb3 set for the medium image MI14 (step S135). That is, the image processing unit 15 rearranges the medium image MI14 so that the medium image reference position RRP14 coincides with the coordinates (xb3, yb0) of the end point of the vector Vecb3. The image processing unit 15 stores the read image RI14F after the rearrangement of the medium image MI14 in the storage unit 13. By rearranging the medium image MI14, the placement of the medium image MI14 after the rearrangement is changed from its placement before the rearrangement (that is, its placement after the erecting correction).
 As described above, the image processing unit 15 rearranges the medium image MI14 so that the medium image reference position RRP14 coincides with the coordinates (xb3, yb0) of the end point of the vector Vecb3 (FIG. 13C). Also as described above, the X coordinate value xb3 of the medium image reference position RRP14 is the same as the X coordinate value xb3 of the medium image reference position RRP14 in the medium image MI14 after erecting correction, while the Y coordinate value yb0 of the medium image reference position RRP14 is the same as the Y coordinate value yb0 of the medium image reference position RRP131 in FIG. 12C. Also as described above, the medium image MI14 and the medium image MI131 are a pair of a cover image and a facing-page image. One medium image and another medium image are determined to be a pair of a cover image and a facing-page image (step S189) when the difference between the vertical length of the one medium image and the vertical length of the other medium image is less than the threshold TH2 (step S179: Yes), the difference between the horizontal length of the one medium image and the horizontal length of the other medium image is equal to or greater than the threshold TH3 (step S181: No), and the horizontal length of the other medium image is within a predetermined range of the horizontal length of the one medium image (step S187: Yes). In other words, when the difference between the vertical length of the medium image MI131 and that of the medium image MI14 is less than the threshold TH2, the difference between the horizontal length of the medium image MI131 and that of the medium image MI14 is equal to or greater than the threshold TH3, and the horizontal length of the medium image MI131 is within a predetermined range of the horizontal length of the medium image MI14, the image processing unit 15 changes the vertical placement of the medium image MI14 when rearranging it, but does not change its horizontal placement.
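The pair decision of steps S179, S181, and S187 can be sketched as follows. This is an illustrative reading of the conditions, not the patent's implementation; the threshold values in the test and the width-ratio range standing in for the "predetermined range" are assumptions:

```python
def is_cover_spread_pair(size_a, size_b, th2, th3, ratio_range=(0.4, 0.6)):
    """Decide whether two medium images form a cover/facing-page pair.

    size_a -- (width, height) of the facing-page medium image
    size_b -- (width, height) of the candidate cover medium image
    th2    -- threshold on the vertical-length difference (step S179)
    th3    -- threshold on the horizontal-length difference (step S181)
    ratio_range -- assumed stand-in for the 'predetermined range' of
                   step S187 (a cover is roughly half a spread's width)
    """
    w_a, h_a = size_a
    w_b, h_b = size_b
    if abs(h_a - h_b) >= th2:   # S179: vertical lengths must be close
        return False
    if abs(w_a - w_b) < th3:    # S181: horizontal lengths must differ clearly
        return False
    lo, hi = ratio_range        # S187: other width within range of one width
    return lo <= w_b / w_a <= hi
```

When this predicate holds, only the vertical placement of the cover image is adjusted during rearrangement, as described above.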
 Since the read image RI14 is the last image in the series of read images of the single old book, when the processes of steps S113 to S135 are completed for the read image RI14 (step S137: Yes), the process proceeds to step S139 (extraction area setting).
 In setting the extraction area, the image processing unit 15 acquires the read images RI11F, RI12F, RI13F, and RI14F from the storage unit 13. The read image RI11F includes the medium image MI11 after erecting correction (FIG. 14A), and the read image RI12F includes the medium image MI121 after erecting correction and rearrangement and the medium image MI122 after erecting correction (FIG. 14B). The read image RI13F includes the medium image MI131 after erecting correction and rearrangement and the medium image MI132 after erecting correction and rearrangement (FIG. 14C). The read image RI14F includes the medium image MI14 after erecting correction and rearrangement (FIG. 14D).
 As shown in FIGS. 14A to 14D, based on the medium images MI11 and MI122 after erecting correction and the medium images MI121, MI131, MI132, and MI14 after erecting correction and rearrangement, the image processing unit 15 sets the extraction area EA11, identical in position and size, for the read images RI11F, RI12F, RI13F, and RI14F (step S139). That is, the vertical length DstH1 of the rectangular extraction area EA11 is the same in all of the read images RI11F, RI12F, RI13F, and RI14F, and the horizontal length DstW1 of the rectangular extraction area EA11 is likewise the same in all of them. The vector VecLd0, whose start point is the read image reference position MRP1 and whose end point is the upper left vertex of the rectangular extraction area EA11, is the same in all of the read images RI11F, RI12F, RI13F, and RI14F, and so is the vector VecRd0, whose start point is the read image reference position MRP1 and whose end point is the upper right vertex of the rectangular extraction area EA11. Further, the medium image MI11 exists in the extraction area EA11 set in the read image RI11F (FIG. 14A), the medium images MI121 and MI122 exist in the extraction area EA11 set in the read image RI12F (FIG. 14B), the medium images MI131 and MI132 exist in the extraction area EA11 set in the read image RI13F (FIG. 14C), and the medium image MI14 exists in the extraction area EA11 set in the read image RI14F (FIG. 14D).
 Therefore, as shown in FIGS. 15A to 15D, the image processing unit 15 extracts the medium image MI11 from the read image RI11F, the medium images MI121 and MI122 from the read image RI12F, the medium images MI131 and MI132 from the read image RI13F, and the medium image MI14 from the read image RI14F, according to the identical extraction area EA11 set in step S139 (step S141).
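The extraction of step S141 is a crop by a rectangle shared across all read images. A minimal sketch, assuming images are represented as nested lists of pixels (the representation and function name are assumptions):

```python
def extract(read_image, area):
    """Crop the region of a read image covered by the shared extraction
    area.

    read_image -- list of pixel rows (each row a list of pixel values)
    area       -- (left, top, right, bottom); right/bottom are exclusive
    """
    left, top, right, bottom = area
    return [row[left:right] for row in read_image[top:bottom]]
```

Because the same `area` tuple is used for every read image, all extracted pages have identical dimensions, which is what keeps the medium images aligned across pages.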
 FIGS. 16A to 16C, FIG. 17, FIGS. 18A to 18C, and FIGS. 19A to 19C show examples of setting an extraction area for read images after rearrangement of the medium images. As an example, the read image RI1F includes the rearranged medium image MI1 having the circumscribed rectangle BR1 (FIG. 16A), the read image RI2F includes the rearranged medium image MI2 having the circumscribed rectangle BR2 (FIG. 16B), and the read image RI3F includes the rearranged medium image MI3 having the circumscribed rectangle BR3 and the rearranged medium image MI4 having the circumscribed rectangle BR4 (FIG. 16C). The read image reference position MRP1 is set in all of the read images RI1F, RI2F, and RI3F (FIGS. 16A to 16C).
 The image processing unit 15 sets the extraction area EA1 based on a rectangle enclosing all of the rearranged medium images MI1, MI2, MI3, and MI4 in the read images RI1F, RI2F, and RI3F. For example, as shown in FIG. 17, the image processing unit 15 sets a rectangle enclosing all of the circumscribed rectangles BR1, BR2, BR3, and BR4 as the extraction area EA1. Further, as shown in FIG. 17, the image processing unit 15 sets, for the extraction area EA1, the vector VEA1a, whose start point is the read image reference position MRP1 and whose end point is the upper left vertex of the extraction area EA1, and the vector VEA1b, whose start point is the read image reference position MRP1 and whose end point is the upper right vertex of the extraction area EA1.
 Next, the image processing unit 15 sets the extraction area EA1, together with the vectors VEA1a and VEA1b set for it, in each of the read images RI1F, RI2F, and RI3F (FIGS. 18A to 18C). As a result, the medium image MI1 exists in the extraction area EA1 set in the read image RI1F (FIG. 18A), the medium image MI2 exists in the extraction area EA1 set in the read image RI2F (FIG. 18B), and the medium images MI3 and MI4 exist in the extraction area EA1 set in the read image RI3F (FIG. 18C). The extraction area EA1 set in each of the read images RI1F, RI2F, and RI3F is identical in position and size across all of them.
 Therefore, as shown in FIGS. 19A to 19C, the image processing unit 15 extracts the medium image MI1 from the read image RI1F, the medium image MI2 from the read image RI2F, and the medium images MI3 and MI4 from the read image RI3F, according to the extraction area EA1, which is identical in position and size.
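The construction of the extraction area EA1 in FIG. 17 amounts to taking the smallest rectangle that encloses the circumscribed rectangles of all rearranged medium images. A hedged sketch, assuming (left, top, right, bottom) tuples in read-image coordinates:

```python
def union_extraction_area(circumscribed_rects):
    """Return the smallest rectangle enclosing every circumscribed
    rectangle (e.g. BR1..BR4), to be used as the shared extraction area.

    circumscribed_rects -- iterable of (left, top, right, bottom) tuples
    """
    lefts, tops, rights, bottoms = zip(*circumscribed_rects)
    return (min(lefts), min(tops), max(rights), max(bottoms))
```

Since all rectangles are expressed relative to the common read image reference position MRP1, the resulting area can be applied unchanged to every read image.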
 FIG. 20 shows an example of a series of read images of an old book, namely the image of the front cover, the facing-page image of pages 1-2, the facing-page image of pages 3-4, ..., and the image of the back cover, together with an example of the images obtained after the processing shown in FIG. 2 has been performed on those read images (that is, the processed images).
 As described above, in the first embodiment, the image processing device 10 has the storage unit 13 and the image processing unit 15. The storage unit 13 stores a series of read images, each of which includes a medium image. In each of the read images, the image processing unit 15 sets a medium image reference position in the area where the medium image exists (hereinafter sometimes referred to as the "medium image area") and sets a read image reference position in an area other than the medium image area. Based on the positional relationship between the medium image reference position and the read image reference position, the image processing unit 15 rearranges the medium image within the read image. Based on the rearranged medium images, the image processing unit 15 then sets an identical extraction area in the plurality of read images, and extracts a medium image from each of the read images according to the set extraction area.
 In this way, positional deviation between the medium images included in the series of read images can be suppressed. This reduces the sense of incongruity felt by a person viewing the medium images of the series in succession. For example, when the operator places a book on the platen of an overhead scanner, it is no longer necessary to position the book carefully so that the medium image does not shift between pages. Therefore, the operator's work efficiency can be improved when, for example, a book is converted into electronic data using an overhead scanner.
 Also, in the first embodiment, the storage unit 13 stores at least two read images. The image processing unit 15 rearranges the medium image included in the other of the two read images so that the positional relationship is the same between the medium image included in one of the two read images and the medium image that is included in the other read image and paired with the medium image included in the one read image.
 In this way, positional deviation between paired medium images (for example, between facing-page images or between color chart images) can be suppressed.
 Also, in the first embodiment, the image processing unit 15 sets the extraction area based on a rectangle enclosing the circumscribed rectangles of all the rearranged medium images in the plurality of read images.
 In this way, an appropriate extraction area can be set, particularly when the medium images are facing-page images. An appropriate extraction area can also be set when the read images contain medium images of mutually different types, such as a facing-page image and a color chart image.
 Also, in the first embodiment, the storage unit 13 stores at least two read images. When the difference between the vertical length of the medium image included in one of the two read images and the vertical length of the medium image included in the other read image is less than the threshold TH2, the difference between the horizontal length of the medium image included in the one read image and the horizontal length of the medium image included in the other read image is equal to or greater than the threshold TH3, and the horizontal length of the medium image included in the other read image is within a predetermined range of the horizontal length of the medium image included in the one read image, the image processing unit 15 changes the vertical placement of the medium image included in the other read image when rearranging it, but does not change its horizontal placement.
 In this way, when, for example, the medium images include a mixture of cover images (that is, single-page images) and facing-page images (that is, two-page images), excessive repositioning of the medium images can be suppressed. This reduces the sense of incongruity felt by a person viewing the cover image and the facing-page images in succession.
 Also, in the first embodiment, the image processing unit 15 outputs a warning when no paired medium image exists.
 In this way, the operator can know that no paired medium image exists. The operator can therefore choose as appropriate whether to continue the processing when no paired medium image exists.
 The first embodiment has been described above.
 [Second Embodiment]
 In the second embodiment, as in the first embodiment, a case where each page of a single old book is read by a scanner to convert the book into electronic data will be described as an example. In the second embodiment, however, as shown in FIG. 21, the operator cuts the right-turning old book BK and separates the spine, and the resulting loose pages are read one by one in succession by a scanner equipped with an ADF (Auto Document Feeder). FIG. 21 is a diagram illustrating an example of the old book of the second embodiment. The plurality of sheets produced by cutting the single old book BK are placed together on the chute by the operator. The sheets placed on the chute are fed into the scanner one by one in succession by the ADF, and the scanner reads the image of the front side and the image of the back side of each sheet.
 <Configuration of the Image Processing Device>
 Since the configuration example of the image processing device of the second embodiment is the same as that of the first embodiment (FIG. 1), a description thereof is omitted.
 <Processing and Operation of the Image Processing Device>
 FIGS. 22, 23, 24, 25, and 37 are diagrams illustrating an example of the processing flow of the image processing device of the second embodiment. FIGS. 26A to 26C, 27A to 27C, 28A to 28C, 29A to 29C, 30A to 30D, 31A to 31D, 32, 33A to 33C, 34, 35A to 35C, 36A to 36C, and 38A to 38C are diagrams for explaining an operation example of the image processing device of the second embodiment.
 As in the first embodiment, the processing flow shown in FIG. 22 is started, for example, when the operator presses a process start button (not shown) of the image processing device 10.
 In step S101 of FIG. 22, the control unit 11 selects, from the series of read images stored in the storage unit 13, a read image to be processed by the image processing unit 15, acquires it from the storage unit 13, and outputs the acquired read image to the image processing unit 15. Blank-page images, however, are removed in advance and are not stored in the storage unit 13.
 For example, when a series of images consisting of the image of the front cover, the image of the first page, the image of the second page, ..., and the image of the back cover is stored in the storage unit 13 as the read images of a single old book, the control unit 11 acquires from the storage unit 13 the image of the front cover in step S101 of the first processing loop of steps S101 to S109, the image of the first page in step S101 of the second processing loop, and the image of the second page in step S101 of the third processing loop, in order. The control unit 11 also acquires the image of the back cover from the storage unit 13 in step S101 of the last processing loop of steps S101 to S109. Alternatively, for example, the series of images consisting of the image of the front cover, the image of the first page, the image of the second page, ..., and the image of the back cover is stored in the storage unit 13 as a single file, and the control unit 11 sequentially selects the image of the first page, the image of the second page, ..., and the image of the back cover from the file to be processed. The file to be processed is selected arbitrarily by the operator, for example.
 In FIG. 22, the processes of steps S103 to S105 are the same as those of the first embodiment (FIG. 2), and a description thereof is omitted.
 In step S201 of FIG. 22, the image processing unit 15 performs the medium image reference position setting process.
 FIG. 23 shows an example of the processing flow of the medium image reference position setting process. The processes of steps S151, S153, S155, and S161 in FIG. 23 are the same as those shown in the first embodiment (FIGS. 3 and 5), and a description thereof is omitted.
 In step S211 of FIG. 23, the image processing unit 15 performs the cut location detection process.
 FIGS. 24 and 25 show examples of the processing flow of the cut location detection process: FIG. 24 shows a first processing example, and FIG. 25 shows a second processing example. The cut location detection process will be described below separately for the first processing example and the second processing example.
 <First Processing Example of the Cut Location Detection Process (FIG. 24)>
 In step S221 of FIG. 24, the image processing unit 15 determines whether the series of read images sequentially selected in step S101 are left-turning images. Whether the old book to be converted into electronic data is a left-turning book or a right-turning book is specified in advance to the image processing device 10 by the operator, and the specification result of "left-turning" or "right-turning" is stored in advance in the storage unit 13. When the specification result stored in the storage unit 13 is "left-turning", the image processing unit 15 determines that the series of read images are left-turning images; when the specification result is "right-turning", it determines that the series of read images are right-turning images. When the series of read images are left-turning images (step S221: Yes), the process proceeds to step S223; when they are right-turning images (step S221: No), the process proceeds to step S229.
 In step S223, the image processing unit 15 determines whether the read image selected in step S101 is an odd-page image. When the read image selected in step S101 is an odd-page image (step S223: Yes), the process proceeds to step S225; when it is an even-page image (step S223: No), the process proceeds to step S227.
 In step S229 as well, the image processing unit 15 determines whether the read image selected in step S101 is an odd-page image. When the read image selected in step S101 is an odd-page image (step S229: Yes), the process proceeds to step S227; when it is an even-page image (step S229: No), the process proceeds to step S225.
 In step S225, the image processing unit 15 determines that the cut location in the medium image (hereinafter sometimes referred to as the "medium cut location") is on the left side of the medium image.
 In step S227, on the other hand, the image processing unit 15 determines that the medium cut location is on the right side of the medium image.
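Summarizing the branches of FIG. 24, the first processing example determines the cut side purely from the book's turning direction and the page parity. An illustrative sketch (the function name and the string return values are assumptions):

```python
def cut_side_by_parity(left_turning, page_number):
    """First processing example (FIG. 24): infer which side of the
    medium image the cut (spine) edge is on.

    left_turning -- True for a left-turning book, False for right-turning
    page_number  -- 1-based page number of the selected read image
    """
    odd = page_number % 2 == 1
    if left_turning:
        # S221: Yes -> S223; odd page -> S225 (left), even -> S227 (right)
        return "left" if odd else "right"
    # S221: No -> S229; odd page -> S227 (right), even -> S225 (left)
    return "right" if odd else "left"
```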
 <Second Processing Example of the Cut Location Detection Process (FIG. 25)>
 In step S231 of FIG. 25, the image processing unit 15 calculates the corner angle LA on the left side of the medium image included in the read image selected in step S101. For example, the image processing unit 15 calculates the angle of the upper left corner of the medium image as the corner angle LA.
 Next, in step S233, the image processing unit 15 calculates the difference angle DLA, which is the absolute value of the difference between the corner angle LA and a right angle (90°).
 次いで、ステップS235では、画像処理部15は、ステップS101で選択された読取画像に含まれる媒体画像の右辺側のコーナー角度RAを算出する。例えば、画像処理部15は、媒体画像の右上コーナーの角度をコーナー角度RAとして算出する。 Next, in step S235, the image processing unit 15 calculates a corner angle RA on the right side of the medium image included in the read image selected in step S101. For example, the image processing unit 15 calculates the angle of the upper right corner of the medium image as the corner angle RA.
 次いで、ステップS237では、画像処理部15は、コーナー角度RAと直角(90°)との差分の絶対値である差分角度DRAを算出する。 Next, in step S237, the image processing unit 15 calculates a difference angle DRA that is an absolute value of a difference between the corner angle RA and a right angle (90 °).
 次いで、ステップS239では、画像処理部15は、差分角度DLAが差分角度DRA未満であるか否かを判定する。差分角度DLAが差分角度DRA未満である場合は(ステップS239:Yes)、処理はステップS227へ進み、差分角度DLAが差分角度DRA以上である場合は(ステップS239:No)、処理はステップS225へ進む。 Next, in step S239, the image processing unit 15 determines whether or not the difference angle DLA is less than the difference angle DRA. If the difference angle DLA is smaller than the difference angle DRA (Step S239: Yes), the process proceeds to Step S227. If the difference angle DLA is equal to or larger than the difference angle DRA (Step S239: No), the process proceeds to Step S225. move on.
 ステップS227では、画像処理部15は、媒体裁断箇所が媒体画像の右側に存在すると判定する。 In step S227, the image processing unit 15 determines that the medium cutting portion exists on the right side of the medium image.
 一方で、ステップS225では、画像処理部15は、媒体裁断箇所が媒体画像の左側に存在すると判定する。 On the other hand, in step S225, the image processing unit 15 determines that the medium cutting portion exists on the left side of the medium image.
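The comparison in steps S231 to S239 can be sketched as follows. The function name and the degree-valued inputs are assumptions for illustration only.

```python
def detect_cut_side_by_corner_angle(corner_angle_la: float,
                                    corner_angle_ra: float) -> str:
    """Sketch of the second processing example (FIG. 25).

    corner_angle_la / corner_angle_ra: the left-side and right-side corner
    angles LA and RA of the medium image, in degrees (steps S231, S235).
    """
    dla = abs(corner_angle_la - 90.0)  # step S233: difference angle DLA
    dra = abs(corner_angle_ra - 90.0)  # step S237: difference angle DRA
    # Step S239: DLA < DRA means the left corner is closer to a right angle,
    # so the medium cut portion lies on the right (step S227); otherwise it
    # lies on the left (step S225).
    return "right" if dla < dra else "left"
```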
 This concludes the processing examples of the cut-portion detection process.
 Returning to FIG. 23, in step S213, the image processing unit 15 determines whether the medium cut portion lies on the left side of the medium image. If the medium cut portion lies on the left side of the medium image (that is, if the process of step S225 was performed) (step S213: Yes), the process proceeds to step S215. If the medium cut portion lies on the right side of the medium image (that is, if the process of step S227 was performed) (step S213: No), the process proceeds to step S217.
 In step S215, the image processing unit 15 sets the right corner point of the upper side of the medium image included in the read image as the medium image reference position.
 In step S217, on the other hand, the image processing unit 15 sets the left corner point of the upper side of the medium image included in the read image as the medium image reference position.
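Steps S213 to S217 can be sketched as follows, using the circumscribed rectangle of the medium image in read-image coordinates. The `(x, y, w, h)` tuple layout and the function name are assumptions.

```python
def medium_image_reference_position(bounding_rect, cut_side: str):
    """Sketch of steps S213-S217: pick the medium image reference position
    from the top edge of the medium image's circumscribed rectangle.

    bounding_rect: (x, y, w, h) in read-image coordinates (origin at the
    upper-left vertex, X rightward, Y downward) -- an assumed layout.
    """
    x, y, w, h = bounding_rect
    if cut_side == "left":
        # Step S215: cut on the left -> the RIGHT corner of the top edge.
        return (x + w, y)
    # Step S217: cut on the right -> the LEFT corner of the top edge.
    return (x, y)
```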
 Returning to FIG. 22, in step S109, the control unit 11 determines whether the processing of steps S101 to S201 has been completed for all of the series of read images stored in the storage unit 13. If the processing of steps S101 to S201 has been completed for all of the series of read images stored in the storage unit 13 (step S109: Yes), the process proceeds to step S111. If any read image in the series stored in the storage unit 13 has not yet undergone the processing of steps S101 to S201 (step S109: No), the process returns to step S101. For example, when a series of images of one old book (the front-cover image, the first-page image, the second-page image, ..., the back-cover image) is stored in the storage unit 13 as its read images, the process proceeds to step S111 once the processing of steps S101 to S201 has been completed for the back-cover image.
 The processing of steps S111 to S133 in FIG. 22 is the same as that described in embodiment 1 (FIG. 2), so its description is omitted. However, because the read images of embodiment 2 include neither a two-page-spread image nor a color-chart image, the paired-image detection process (FIG. 7) performed in step S121 of embodiment 2 simply determines, when the process of step S183 or step S189 is performed, that the current medium image and the previous medium image form a pair.
 In step S123, the image processing unit 15 determines whether a paired image was found in the processing of steps S171 to S191 (FIG. 7). If the process of step S183 or step S189 was performed in FIG. 7, a paired image is determined to exist; if the process of step S191 was performed in FIG. 7, no paired image is determined to exist. If a paired image exists (step S123: Yes), the process proceeds to step S203; if no paired image exists (step S123: No), the process proceeds to step S131.
 In step S203, the image processing unit 15 performs position setting process D, described in detail later. After step S203, the process proceeds to step S135.
 Next, in step S135, the image processing unit 15 rearranges the medium image in the read image based on the results of position setting processes C and D (steps S119 and S203). The details of step S135 are described later.
 Next, in step S137, the control unit 11 determines whether the processing of steps S113 to S135 and S203 has been completed for all of the series of read images stored in the storage unit 13. If the processing of steps S113 to S135 and S203 has been completed for all of the series of read images stored in the storage unit 13 (step S137: Yes), the process proceeds to step S205. If any read image in the series stored in the storage unit 13 has not yet undergone the processing of steps S113 to S135 and S203 (step S137: No), the process returns to step S113.
 In step S205, the image processing unit 15 sets an extraction region for the read image. The details of step S205 are described later.
 Next, in step S141, the image processing unit 15 extracts the medium image based on the extraction region set in step S205. The details of step S141 are described later.
 An operation example based on the series of processes shown in FIGS. 22, 23, 24, and 25 is described below.
 In the following, as in embodiment 1, the image processing unit 15 sets, on every rectangular read image, an X coordinate increasing rightward and a Y coordinate increasing downward with the upper-left vertex of the read image as the origin. That is, in a read image, the X coordinate is the horizontal coordinate and the Y coordinate is the vertical coordinate.
 As an example, the read images RI21, RI22, RI23, and RI24 (FIGS. 26A, 27A, 28A, and 29A) correspond to a series of read images of one old book. The read image RI21 contains the medium image MI21, which is the front-cover image, and the medium cut portion CP of the medium image MI21 lies on its right side (FIG. 26A); that is, the medium image MI21 is arranged in the read image RI21. The read image RI22 contains the medium image MI22, which is the first-page image, and the medium cut portion CP of the medium image MI22 lies on its left side (FIG. 27A); that is, the medium image MI22 is arranged in the read image RI22. The read image RI23 contains the medium image MI23, which is the second-page image, and the medium cut portion CP of the medium image MI23 lies on its right side (FIG. 28A); that is, the medium image MI23 is arranged in the read image RI23. The read image RI24 contains the medium image MI24, which is the back-cover image, and the medium cut portion CP of the medium image MI24 lies on its left side (FIG. 29A); that is, the medium image MI24 is arranged in the read image RI24.
 First, following the processing of steps S103 to S105 and S201 (FIG. 22), the image processing unit 15 operates on each of the read images RI21, RI22, RI23, and RI24 as shown in FIGS. 26B, 27B, 28B, and 29B.
 That is, as shown in FIG. 26B, the image processing unit 15 detects the circumscribed rectangle BR21 of the medium image MI21 and calculates the center of gravity GB21 of the detected circumscribed rectangle BR21. Because the medium cut portion CP lies on the right side of the medium image MI21, the image processing unit 15 sets the upper-left vertex of the circumscribed rectangle BR21 as the medium image reference position RRP21.
 Likewise, as shown in FIG. 27B, the image processing unit 15 detects the circumscribed rectangle BR22 of the medium image MI22 and calculates the center of gravity GB22 of the detected circumscribed rectangle BR22. Because the medium cut portion CP lies on the left side of the medium image MI22, the image processing unit 15 sets the upper-right vertex of the circumscribed rectangle BR22 as the medium image reference position RRP22.
 Likewise, as shown in FIG. 28B, the image processing unit 15 detects the circumscribed rectangle BR23 of the medium image MI23 and calculates the center of gravity GB23 of the detected circumscribed rectangle BR23. Because the medium cut portion CP lies on the right side of the medium image MI23, the image processing unit 15 sets the upper-left vertex of the circumscribed rectangle BR23 as the medium image reference position RRP23.
 Likewise, as shown in FIG. 29B, the image processing unit 15 detects the circumscribed rectangle BR24 of the medium image MI24 and calculates the center of gravity GB24 of the detected circumscribed rectangle BR24. Because the medium cut portion CP lies on the left side of the medium image MI24, the image processing unit 15 sets the upper-right vertex of the circumscribed rectangle BR24 as the medium image reference position RRP24.
 Here, the circumscribed rectangle BR21 indicates the region of the read image RI21 in which the medium image MI21 exists; the circumscribed rectangle BR22 indicates the region of the read image RI22 in which the medium image MI22 exists; the circumscribed rectangle BR23 indicates the region of the read image RI23 in which the medium image MI23 exists; and the circumscribed rectangle BR24 indicates the region of the read image RI24 in which the medium image MI24 exists.
 In this way, in each of the read images RI21, RI22, RI23, and RI24, the image processing unit 15 sets the medium image reference positions RRP21, RRP22, RRP23, and RRP24 in the regions in which the medium images MI21, MI22, MI23, and MI24 respectively exist.
 Because the read image RI24 is the last image in the series of read images of the old book, the process proceeds to step S111 once the processing of steps S101 to S105 and S201 has been completed for the read image RI24 (step S109: Yes).
 Next, following the processing of step S111 (FIG. 2), the image processing unit 15 operates on each of the read images RI21, RI22, RI23, and RI24 as shown in FIGS. 26B, 27B, 28B, and 29B. That is, as shown in FIGS. 26B, 27B, 28B, and 29B, in each of the rectangular read images RI21, RI22, RI23, and RI24, which have identical heights and widths, the image processing unit 15 sets the midpoint of the upper side of the read image as the read image reference position MRP2. When the width of the rectangular read images RI21, RI22, RI23, and RI24 is denoted SrcW, the coordinates of the read image reference position MRP2 are (SrcW/2, 0).
 In this way, in each of the read images RI21, RI22, RI23, and RI24, the image processing unit 15 sets the read image reference position MRP2 in a region other than the regions in which the medium images MI21, MI22, MI23, and MI24 respectively exist.
 Next, because the circumscribed rectangle BR21 of the medium image MI21 included in the read image RI21 selected by the control unit 11 in step S113 (FIG. 22) has no inclination (FIG. 26B), the medium image MI21 is not rotated even when erecting correction is applied to it (step S115).
 Because the read image RI21 is the first read image in the series (step S117: Yes), the image processing unit 15 performs position setting process C on the medium image MI21 (step S119). In position setting process C for the medium image MI21, as shown in FIG. 26C, the image processing unit 15 sets the vector VecLb0, whose start point is the read image reference position MRP2 (coordinates (SrcW/2, 0)) and whose end point is the medium image reference position RRP21 (coordinates (xb0, yb0)) of the medium image MI21 after erecting correction, as the position of the medium image MI21 relative to the read image RI21. In other words, the vector VecLb0 set for the medium image MI21 represents the positional relationship between the medium image reference position RRP21 and the read image reference position MRP2.
 Then, as shown in FIG. 26C, the image processing unit 15 rearranges the medium image MI21 in the read image RI21 based on the vector VecLb0 set for the medium image MI21 (step S135). That is, the image processing unit 15 rearranges the medium image MI21 so that the medium image reference position RRP21 coincides with the end-point coordinates (xb0, yb0) of the vector VecLb0. The image processing unit 15 stores the read image RI21F after the rearrangement of the medium image MI21 in the storage unit 13. Note that the arrangement of the medium image MI21 after the rearrangement is identical to its arrangement before the rearrangement (that is, its arrangement after erecting correction).
 Likewise, because the circumscribed rectangle BR22 of the medium image MI22 included in the read image RI22 selected by the control unit 11 in step S113 (FIG. 22) has no inclination (FIG. 27B), the medium image MI22 is not rotated even when erecting correction is applied to it (step S115).
 Because the read image RI22 is not the first read image in the series (step S117: No), the image processing unit 15 performs the paired-image detection process on the read image RI22 (step S121). This paired-image detection process determines that the medium image MI21 and the medium image MI22 are paired images.
 Because the medium image MI21 and the medium image MI22 were determined to be paired images, the image processing unit 15 performs position setting process D on the medium image MI22 (step S203). In position setting process D for the medium image MI22, as shown in FIG. 27C, the image processing unit 15 sets the vector VecRb0, whose start point is the read image reference position MRP2 (coordinates (SrcW/2, 0)) and whose end point is the medium image reference position RRP22 (coordinates (xb1, yb0)), as the position of the medium image MI22 relative to the read image RI22. In other words, the vector VecRb0 set for the medium image MI22 represents the positional relationship between the medium image reference position RRP22 and the read image reference position MRP2. Here, the vector VecRb0 set for the medium image MI22 is symmetric, in the X-coordinate direction about the read image reference position MRP2, to the vector VecLb0 set for the medium image MI21. That is, the Y coordinate yb0 of the end point of the vector VecRb0 is identical to the Y coordinate yb0 of the end point of the vector VecLb0, and the distance from the read image reference position MRP2 to the X coordinate xb1 of the end point of the vector VecRb0 is identical to the distance from the read image reference position MRP2 to the X coordinate xb0 of the end point of the vector VecLb0.
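The symmetry described here fully determines the end point of VecRb0 from the end point of VecLb0. A minimal sketch, with the function name assumed for illustration:

```python
def mirror_endpoint_about_mrp2(src_w: float, vecl_endpoint):
    """Sketch of position setting process D (step S203): the end point of
    VecRb0 is the mirror image of the end point of VecLb0 about the read
    image reference position MRP2 = (SrcW/2, 0) in the X direction.

    Equal distances from SrcW/2 on either side give xb1 = SrcW - xb0,
    while the Y coordinate yb0 is unchanged.
    """
    xb0, yb0 = vecl_endpoint
    return (src_w - xb0, yb0)
```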
 Then, as shown in FIG. 27C, the image processing unit 15 rearranges the medium image MI22 in the read image RI22 based on the vector VecRb0 set for the medium image MI22 (step S135). That is, the image processing unit 15 rearranges the medium image MI22 so that the medium image reference position RRP22 coincides with the end-point coordinates (xb1, yb0) of the vector VecRb0. The image processing unit 15 stores the read image RI22F after the rearrangement of the medium image MI22 in the storage unit 13. Through this rearrangement, the arrangement of the medium image MI22 is changed from its arrangement before the rearrangement (that is, its arrangement after erecting correction).
 Next, in the read image RI23 (FIG. 28B) selected by the control unit 11 in step S113 (FIG. 22), the image processing unit 15 corrects the inclination of the medium image MI23 by rotating the circumscribed rectangle BR23 clockwise about the center of gravity GB23 (step S115, FIG. 28C).
 Because the read image RI23 is not the first read image in the series (step S117: No), the image processing unit 15 performs the paired-image detection process on the read image RI23 (step S121). This paired-image detection process determines that the medium image MI22 and the medium image MI23 are paired images.
 Because the medium image MI22 and the medium image MI23 were determined to be paired images, the image processing unit 15 performs position setting process D on the medium image MI23 (step S203). In position setting process D for the medium image MI23, as shown in FIG. 28C, the image processing unit 15 sets the vector VecLb0, whose start point is the read image reference position MRP2 (coordinates (SrcW/2, 0)) and whose end point is the medium image reference position RRP23 (coordinates (xb0, yb0)), as the position of the medium image MI23 relative to the read image RI23. In other words, the vector VecLb0 set for the medium image MI23 represents the positional relationship between the medium image reference position RRP23 and the read image reference position MRP2. Here, the vector VecLb0 set for the medium image MI23 is the same vector as the vector VecLb0 set for the medium image MI21.
 Then, as shown in FIG. 28C, the image processing unit 15 rearranges the medium image MI23 in the read image RI23 based on the vector VecLb0 set for the medium image MI23 (step S135). That is, the image processing unit 15 rearranges the medium image MI23 so that the medium image reference position RRP23 coincides with the end-point coordinates (xb0, yb0) of the vector VecLb0. The image processing unit 15 stores the read image RI23F after the rearrangement of the medium image MI23 in the storage unit 13. Through this rearrangement, the arrangement of the medium image MI23 is changed from its arrangement before the rearrangement (that is, its arrangement after erecting correction).
 As described above, the medium image MI21 and the medium image MI22 are paired images, and the medium image MI22 and the medium image MI23 are paired images; hence the medium image MI21 and the medium image MI23 are paired images. Also as described above, the vector VecLb0 set for the medium image MI23 is identical to the vector VecLb0 set for the medium image MI21; the vector VecLb0 set for the medium image MI21 represents the positional relationship between the medium image reference position RRP21 and the read image reference position MRP2, while the vector VecLb0 set for the medium image MI23 represents the positional relationship between the medium image reference position RRP23 and the read image reference position MRP2. Furthermore, the medium image MI21 is rearranged so that the medium image reference position RRP21 coincides with the end-point coordinates (xb0, yb0) of the vector VecLb0, while the medium image MI23 is rearranged so that the medium image reference position RRP23 coincides with those same end-point coordinates (xb0, yb0). In other words, the image processing unit 15 rearranges the medium image MI23, which is paired with the medium image MI21, so that the positional relationship is identical between the medium image MI21 and the medium image MI23.
 Likewise, because the circumscribed rectangle BR24 of the medium image MI24 included in the read image RI24 selected by the control unit 11 in step S113 (FIG. 22) has no inclination (FIG. 29B), the medium image MI24 is not rotated even when erecting correction is applied to it (step S115).
 Because the read image RI24 is not the first read image in the series (step S117: No), the image processing unit 15 performs the paired-image detection process on the read image RI24 (step S121). This paired-image detection process determines that the medium image MI23 and the medium image MI24 are paired images.
 Because the medium image MI23 and the medium image MI24 were determined to be paired images, the image processing unit 15 performs position setting process D on the medium image MI24 (step S203). In position setting process D for the medium image MI24, as shown in FIG. 29C, the image processing unit 15 sets the vector VecRb0, whose start point is the read image reference position MRP2 (coordinates (SrcW/2, 0)) and whose end point is the medium image reference position RRP24 (coordinates (xb1, yb0)), as the position of the medium image MI24 relative to the read image RI24. In other words, the vector VecRb0 set for the medium image MI24 represents the positional relationship between the medium image reference position RRP24 and the read image reference position MRP2. Here, the vector VecRb0 set for the medium image MI24 is the same vector as the vector VecRb0 set for the medium image MI22.
 Then, as shown in FIG. 29C, the image processing unit 15 rearranges the medium image MI24 in the read image RI24 based on the vector VecRb0 set for the medium image MI24 (step S135). That is, the image processing unit 15 rearranges the medium image MI24 so that the medium image reference position RRP24 coincides with the end-point coordinates (xb1, yb0) of the vector VecRb0. The image processing unit 15 stores the read image RI24F after the rearrangement of the medium image MI24 in the storage unit 13. Through this rearrangement, the arrangement of the medium image MI24 is changed from its arrangement before the rearrangement (that is, its arrangement after erecting correction).
 As described above, the medium image MI22 and the medium image MI23 are paired images, and the medium image MI23 and the medium image MI24 are paired images; hence the medium image MI22 and the medium image MI24 are paired images. Also as described above, the vector VecRb0 set for the medium image MI22 is identical to the vector VecRb0 set for the medium image MI24; the vector VecRb0 set for the medium image MI22 represents the positional relationship between the medium image reference position RRP22 and the read image reference position MRP2, while the vector VecRb0 set for the medium image MI24 represents the positional relationship between the medium image reference position RRP24 and the read image reference position MRP2. Furthermore, the medium image MI22 is rearranged so that the medium image reference position RRP22 coincides with the end-point coordinates (xb1, yb0) of the vector VecRb0, while the medium image MI24 is rearranged so that the medium image reference position RRP24 coincides with those same end-point coordinates (xb1, yb0). In other words, the image processing unit 15 rearranges the medium image MI24, which is paired with the medium image MI22, so that the positional relationship is identical between the medium image MI22 and the medium image MI24.
Since the read image RI24 is the last image in the series of read images of the single old book, when the processing of steps S113 to S135 and S203 has been completed for the read image RI24 (step S137: Yes), the processing proceeds to step S205 (extraction area setting).
In setting the extraction areas, the image processing unit 15 acquires the read images RI21F, RI22F, RI23F, and RI24F from the storage unit 13. The read image RI21F includes the medium image MI21 (FIG. 30A), the read image RI22F includes the medium image MI22 after the rearrangement (FIG. 30B), the read image RI23F includes the medium image MI23 after the erecting correction and the rearrangement (FIG. 30C), and the read image RI24F includes the medium image MI24 after the rearrangement (FIG. 30D).
As shown in FIGS. 30A to 30D, the image processing unit 15 sets extraction areas EA21 and EA22 of the same size for the read images RI21F, RI22F, RI23F, and RI24F based on the medium images MI21, MI22, MI23, and MI24 (step S139). That is, the vertical length DstH2 of the rectangular extraction area EA21 is the same in all of the read images RI21F, RI22F, RI23F, and RI24F, and the horizontal length DstW2 of the rectangular extraction area EA21 is likewise the same in all of them. Also, as described above, the vector VecLb0 set for the medium image MI21 is identical to the vector VecLb0 set for the medium image MI23, and the vector VecRb0 set for the medium image MI22 is identical to the vector VecRb0 set for the medium image MI24. Therefore, an extraction area EA21 of the same position and size is set for the read images RI21F and RI23F, and an extraction area EA22 of the same position and size is set for the read images RI22F and RI24F.
The extraction area EA21 and the extraction area EA22 have the same size and the same position in the Y coordinate direction on the read image, but different positions in the X coordinate direction. The distance from the read image reference position MRP2 to the upper left vertex of the extraction area EA21 is the same as the distance from the read image reference position MRP2 to the upper right vertex of the extraction area EA22. A main part of the medium image MI21 exists in the extraction area EA21 set in the read image RI21F (FIG. 30A), a main part of the medium image MI22 exists in the extraction area EA22 set in the read image RI22F (FIG. 30B), a main part of the medium image MI23 exists in the extraction area EA21 set in the read image RI23F (FIG. 30C), and a main part of the medium image MI24 exists in the extraction area EA22 set in the read image RI24F (FIG. 30D).
Here, the "main part" of a medium image is the part of the medium image in which at least a portion of some information, such as characters, photographs, figures, or tables, exists.
Accordingly, as shown in FIGS. 31A to 31D, the image processing unit 15 extracts, in accordance with the extraction areas EA21 and EA22 set in step S139, the main part MI21C of the medium image MI21 from the read image RI21F, the main part MI22C of the medium image MI22 from the read image RI22F, the main part MI23C of the medium image MI23 from the read image RI23F, and the main part MI24C of the medium image MI24 from the read image RI24F (step S141).
Then, as shown in FIG. 32, the image processing unit 15 causes the display unit 17 to display the main parts MI21C, MI22C, MI23C, and MI24C. That is, the image processing unit 15 causes the display unit 17 to display, in order, the main part MI21C of the front cover image, the main part MI22C of the first page image, the main part MI23C of the second page image, ..., and the main part MI24C of the back cover image. The image processing unit 15 causes the display unit 17 to display the main part MI21C of the front cover image and the main part MI24C of the back cover image each as a single page, while joining the main part MI22C of the first page image and the main part MI23C of the second page image at the cut position and displaying them as a two-page spread.
FIGS. 33A to 33C, 34, 35A to 35C, and 36A to 36C show examples of setting extraction areas for read images after rearrangement of the medium images. As an example, the read image RI6F includes the medium image MI6 after rearrangement (FIG. 33A), the read image RI7F includes the medium image MI7 after rearrangement (FIG. 33B), and the read image RI8F includes the medium image MI8 after rearrangement (FIG. 33C).
As shown in FIG. 33A, since the medium cut position CP in the medium image MI6 exists on the right side of the medium image MI6, the image processing unit 15 creates, in the read image RI6F, a rectangle IR6 that does not include the medium cut position CP by translating the right side of the circumscribed rectangle of the medium image MI6 leftward.
As shown in FIG. 33B, since the medium cut position CP in the medium image MI7 exists on the left side of the medium image MI7, the image processing unit 15 creates, in the read image RI7F, a rectangle IR7 that does not include the medium cut position CP by translating the left side of the circumscribed rectangle of the medium image MI7 rightward.
As shown in FIG. 33C, since the medium cut position CP in the medium image MI8 exists on the right side of the medium image MI8, the image processing unit 15 creates, in the read image RI8F, a rectangle IR8 that does not include the medium cut position CP by translating the right side of the circumscribed rectangle of the medium image MI8 leftward.
Next, as shown in FIG. 34, the image processing unit 15 determines the rectangular area in which all of the rectangles IR6, IR7, and IR8 overlap as the extraction area EA2. The image processing unit 15 also obtains a vector VL2 whose start point is the read image reference position MRP2 and whose end point is the upper left vertex of the rectangular extraction area EA2.
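The overlap computation of FIG. 34 is an axis-aligned rectangle intersection. A hedged sketch follows, with rectangles given as (left, top, right, bottom) in image coordinates (Y growing downward); the names are illustrative, not from the specification:

```python
def intersect_rects(rects):
    """Return the rectangular area common to all rectangles, i.e. a
    candidate extraction area (EA2 in FIG. 34), or None if they share no area."""
    left = max(r[0] for r in rects)
    top = max(r[1] for r in rects)
    right = min(r[2] for r in rects)
    bottom = min(r[3] for r in rects)
    if left >= right or top >= bottom:
        return None
    return (left, top, right, bottom)

def vector_to_top_left(ref, area):
    """Vector such as VL2: from the read image reference position to the
    upper left vertex of the extraction area."""
    return (area[0] - ref[0], area[1] - ref[1])
```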
Then, as shown in FIG. 35A, the image processing unit 15 sets the extraction area EA2, for which the vector VL2 has been set, in the read image RI6F.
As shown in FIG. 35B, the image processing unit 15 sets an extraction area EA3, for which a vector VL3 has been set, in the read image RI7F. Here, the size of the extraction area EA3 is the same as that of the extraction area EA2. The vector VL3 is symmetric to the vector VL2 in the X coordinate direction with respect to the read image reference position MRP2. That is, the position of the extraction area EA3 in the Y coordinate direction is the same as that of the extraction area EA2, and the distance from the read image reference position MRP2 to the upper right vertex of the extraction area EA3 is the same as the distance from the read image reference position MRP2 to the upper left vertex of the extraction area EA2.
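Deriving VL3 from VL2 only flips the X component of the vector about the read image reference position; the Y component is kept, which is why EA3 shares EA2's vertical position. A one-line sketch (illustrative names):

```python
def mirror_vector_x(vec):
    """Reflect a vector (e.g. VL2) across the vertical axis through its
    start point, yielding the symmetric vector (e.g. VL3)."""
    return (-vec[0], vec[1])
```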
As shown in FIG. 35C, the image processing unit 15 sets the extraction area EA2, for which the vector VL2 has been set, in the read image RI8F. Thus, an extraction area EA2 of the same position and size is set for the read images RI6F and RI8F.
Accordingly, a main part of the medium image MI6 exists in the extraction area EA2 set in the read image RI6F, a main part of the medium image MI7 exists in the extraction area EA3 set in the read image RI7F, and a main part of the medium image MI8 exists in the extraction area EA2 set in the read image RI8F.
Therefore, as shown in FIGS. 36A to 36C, the image processing unit 15 extracts, in accordance with the extraction areas EA2 and EA3, the main part MI6C of the medium image MI6 from the read image RI6F, the main part MI7C of the medium image MI7 from the read image RI7F, and the main part MI8C of the medium image MI8 from the read image RI8F.
FIG. 37 shows an example of the process of creating a rectangle that does not include the medium cut position.
In step S251 of FIG. 37, the image processing unit 15 detects inflection points in the medium contour figure.
Next, in step S253, the image processing unit 15 determines whether the medium cut position exists on the left side of the medium image. If the medium cut position exists on the left side of the medium image (step S253: Yes), the processing proceeds to step S255; if it exists on the right side (step S253: No), the processing proceeds to step S257.
In step S255, the image processing unit 15 detects the position of the inflection point that has the maximum X coordinate among the inflection points detected in step S251 (hereinafter sometimes called the "maximum X coordinate position").
In step S257, on the other hand, the image processing unit 15 detects the position of the inflection point that has the minimum X coordinate among the inflection points detected in step S251 (hereinafter sometimes called the "minimum X coordinate position").
Then, in step S259, when the medium cut position exists on the left side of the medium image, the image processing unit 15 creates, in the read image, a rectangle that does not include the medium cut position by translating the left side of the circumscribed rectangle of the medium image rightward to the maximum X coordinate position detected in step S255. When the medium cut position exists on the right side of the medium image, the image processing unit 15 creates, in the read image, a rectangle that does not include the medium cut position by translating the right side of the circumscribed rectangle of the medium image leftward to the minimum X coordinate position detected in step S257.
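Steps S253 to S259 can be sketched as follows, assuming the inflection points of the medium contour figure have already been detected as (x, y) pairs; the helper name and the rectangle representation are illustrative:

```python
def rect_excluding_cut(bbox, inflection_points, cut_on_left):
    """bbox is the circumscribed rectangle (left, top, right, bottom).
    If the cut is on the left, move the left edge rightward to the
    maximum inflection-point X (step S255); otherwise move the right
    edge leftward to the minimum inflection-point X (step S257)."""
    left, top, right, bottom = bbox
    xs = [p[0] for p in inflection_points]
    if cut_on_left:
        left = max(xs)   # maximum X coordinate position
    else:
        right = min(xs)  # minimum X coordinate position
    return (left, top, right, bottom)
```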
An operation example will now be described for the case shown in FIG. 38A, in which the read image RI contains the medium image MI, the circumscribed rectangle BR has been detected, and the medium cut position exists on the left side of the medium image MI. As shown in FIG. 38B, the image processing unit 15 detects a plurality of inflection points IP in the medium contour figure of the medium image MI. Since the medium cut position CP exists on the left side of the medium image MI, the image processing unit 15 then detects, as shown in FIG. 38C, the position IPM of the inflection point IP that has the maximum X coordinate among the plurality of inflection points IP. Then, as shown in FIG. 38C, the image processing unit 15 creates, in the read image RI, a rectangle IR that does not include the medium cut position CP by translating the left side of the circumscribed rectangle BR rightward to the position IPM (that is, the maximum X coordinate position).
As described above, in Example 2, the image processing device 10 has the storage unit 13 and the image processing unit 15. The storage unit 13 stores a series of read images, each of which includes a medium image. In each of the read images, the image processing unit 15 sets a medium image reference position in the medium image area and sets a read image reference position in an area other than the medium image area. The image processing unit 15 rearranges the medium image within the read image based on the positional relationship between the medium image reference position and the read image reference position, sets the same extraction area in the read images based on the rearranged medium images, and extracts the medium image from each of the read images in accordance with the set extraction area.
This suppresses positional deviation between the medium images included in the series of read images. For example, when a single book is cut apart and the separated pages are digitized one by one using an ADF, positional deviation between medium images caused by skew during conveyance can be suppressed. This reduces the sense of incongruity felt by a person who views the digitized medium images continuously over a plurality of pages.
In Example 2, the storage unit 13 stores at least two read images. The image processing unit 15 rearranges the medium image included in the other of the two read images, which is paired with the medium image included in the one read image, so that the positional relationship is the same between the two medium images.
This suppresses positional deviation between paired medium images (for example, between medium images of odd pages, or between medium images of even pages).
In Example 2, the image processing unit 15 sets the extraction area based on the rectangular area in which the circumscribed rectangles of all the rearranged medium images in the read images overlap.
This makes it possible to set an appropriate extraction area, particularly when a medium image includes a medium cut position.
In Example 2, the image processing unit 15 detects the medium cut position in the medium image and sets the medium image reference position on the medium image based on the detected medium cut position.
This makes it possible to set an appropriate medium image reference position for a medium image that includes a medium cut position.
Example 2 has been described above.
[Example 3]
Example 3 differs from Example 2 in the method of setting the medium image reference position and the method of setting the extraction area. The differences from Example 2 are described below.
<Processing and operation of the image processing device>
FIG. 39 is a diagram illustrating an example of the processing flow of the image processing device of Example 3. FIGS. 40A to 40C, 41A to 41C, 42A to 42C, 43A to 43C, 44A to 44D, 45A to 45D, 46, 47A to 47C, 48, 49A to 49C, and 50A to 50C are diagrams for explaining an operation example of the image processing device of Example 3.
FIG. 39 shows an example of the processing flow of the medium image reference position setting process. The processing in steps S151, S153, S155, and S161 in FIG. 39 is the same as that shown in Example 2 (FIG. 23), and its description is therefore omitted.
In step S271 of FIG. 39, the image processing unit 15 sets the center of gravity of the circumscribed rectangle of the medium image as the medium image reference position.
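Step S271 reduces to taking the midpoint of the circumscribed rectangle. A trivial sketch (illustrative names, rectangle given as (left, top, right, bottom)):

```python
def bbox_centroid(bbox):
    """Center of gravity of a circumscribed rectangle, used as the
    medium image reference position in Example 3 (step S271)."""
    left, top, right, bottom = bbox
    return ((left + right) / 2, (top + bottom) / 2)
```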
An operation example of the image processing device of Example 3 is described below.
For example, the read images RI31, RI32, RI33, and RI34 (FIGS. 40A, 41A, 42A, and 43A) correspond to a series of read images of a single old book. The read image RI31 includes the medium image MI31, which is the image of the front cover, and the medium cut position CP in the medium image MI31 exists on the right side of the medium image MI31 (FIG. 40A); that is, the medium image MI31 is arranged in the read image RI31. The read image RI32 includes the medium image MI32, which is the image of the first page, and the medium cut position CP in the medium image MI32 exists on the left side of the medium image MI32 (FIG. 41A); that is, the medium image MI32 is arranged in the read image RI32. The read image RI33 includes the medium image MI33, which is the image of the second page, and the medium cut position CP in the medium image MI33 exists on the right side of the medium image MI33 (FIG. 42A); that is, the medium image MI33 is arranged in the read image RI33. The read image RI34 includes the medium image MI34, which is the image of the back cover, and the medium cut position CP in the medium image MI34 exists on the left side of the medium image MI34 (FIG. 43A); that is, the medium image MI34 is arranged in the read image RI34.
First, as shown in FIG. 40B, the image processing unit 15 detects the circumscribed rectangle BR31 of the medium image MI31, calculates the center of gravity GB31 of the detected circumscribed rectangle BR31, and sets the center of gravity GB31 as the medium image reference position of the medium image MI31.
Similarly, as shown in FIG. 41B, the image processing unit 15 detects the circumscribed rectangle BR32 of the medium image MI32, calculates its center of gravity GB32, and sets the center of gravity GB32 as the medium image reference position of the medium image MI32.
As shown in FIG. 42B, the image processing unit 15 detects the circumscribed rectangle BR33 of the medium image MI33, calculates its center of gravity GB33, and sets the center of gravity GB33 as the medium image reference position of the medium image MI33.
As shown in FIG. 43B, the image processing unit 15 detects the circumscribed rectangle BR34 of the medium image MI34, calculates its center of gravity GB34, and sets the center of gravity GB34 as the medium image reference position of the medium image MI34.
Next, the image processing unit 15 sets a rectangular cropping area CA31 for the read image RI31 (FIG. 40B). The cropping area CA31 contains the circumscribed rectangle BR31; its vertical length is the vertical length of the circumscribed rectangle BR31 plus a predetermined margin MG1, and its horizontal length is the horizontal length of the circumscribed rectangle BR31 plus a predetermined margin MG2. The image processing unit 15 then cuts out a cropped image RI31F from the read image RI31 in accordance with the cropping area CA31 (FIG. 40C).
Likewise, the image processing unit 15 sets a rectangular cropping area CA32, which contains the circumscribed rectangle BR32 with the margins MG1 and MG2, for the read image RI32 (FIG. 41B), and cuts out a cropped image RI32F from the read image RI32 in accordance with the cropping area CA32 (FIG. 41C).
The image processing unit 15 sets a rectangular cropping area CA33, which contains the circumscribed rectangle BR33 with the margins MG1 and MG2, for the read image RI33 (FIG. 42B), and cuts out a cropped image RI33F after erecting correction from the read image RI33 in accordance with the cropping area CA33 (FIG. 42C).
The image processing unit 15 sets a rectangular cropping area CA34, which contains the circumscribed rectangle BR34 with the margins MG1 and MG2, for the read image RI34 (FIG. 43B), and cuts out a cropped image RI34F from the read image RI34 in accordance with the cropping area CA34 (FIG. 43C).
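The cropping areas CA31 to CA34 all follow the same rule: the circumscribed rectangle enlarged so that its height grows by the margin MG1 and its width by the margin MG2. The specification does not say how a margin is distributed between the two sides; the sketch below assumes an even split, and the names are illustrative:

```python
def cropping_area(bbox, mg1, mg2):
    """Expand a circumscribed rectangle (left, top, right, bottom) so that
    its height grows by mg1 and its width by mg2 (assumed split evenly)."""
    left, top, right, bottom = bbox
    return (left - mg2 / 2, top - mg1 / 2, right + mg2 / 2, bottom + mg1 / 2)
```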
Next, as shown in FIGS. 44A to 44D, the image processing unit 15 sets an extraction area EA31 of the same position and size for the cropped images RI31F, RI32F, RI33F, and RI34F based on the medium images MI31, MI32, MI33, and MI34. The vertical length DstH3 of the rectangular extraction area EA31 is the same in all of the cropped images RI31F, RI32F, RI33F, and RI34F, and so is its horizontal length DstW3. A main part of the medium image MI31 exists in the extraction area EA31 set in the cropped image RI31F (FIG. 44A), a main part of the medium image MI32 exists in the extraction area EA31 set in the cropped image RI32F (FIG. 44B), a main part of the medium image MI33 exists in the extraction area EA31 set in the cropped image RI33F (FIG. 44C), and a main part of the medium image MI34 exists in the extraction area EA31 set in the cropped image RI34F (FIG. 44D).
Accordingly, as shown in FIGS. 45A to 45D, the image processing unit 15 extracts, in accordance with the extraction area EA31, the main part MI31C of the medium image MI31 from the cropped image RI31F, the main part MI32C of the medium image MI32 from the cropped image RI32F, the main part MI33C of the medium image MI33 from the cropped image RI33F, and the main part MI34C of the medium image MI34 from the cropped image RI34F.
Then, as shown in FIG. 46, the image processing unit 15 causes the display unit 17 to display the main parts MI31C, MI32C, MI33C, and MI34C. That is, the image processing unit 15 causes the display unit 17 to display, in order, the main part MI31C of the front cover image, the main part MI32C of the first page image, the main part MI33C of the second page image, ..., and the main part MI34C of the back cover image. The image processing unit 15 causes the display unit 17 to display the main part MI31C of the front cover image and the main part MI34C of the back cover image each as a single page, while joining the main part MI32C of the first page image and the main part MI33C of the second page image at the cut position and displaying them as a two-page spread.
FIGS. 47A to 47C, 48, 49A to 49C, and 50A to 50C show examples of setting extraction areas for read images. As an example, the cropped image RI41F includes the medium image MI41 (FIG. 47A), the cropped image RI42F includes the medium image MI42 (FIG. 47B), and the cropped image RI43F includes the medium image MI43 (FIG. 47C). The center of gravity GB41 of the circumscribed rectangle of the medium image MI41 is set in the cropped image RI41F, the center of gravity GB42 of the circumscribed rectangle of the medium image MI42 is set in the cropped image RI42F, and the center of gravity GB43 of the circumscribed rectangle of the medium image MI43 is set in the cropped image RI43F.
 図47Aに示すように、媒体画像MI41における媒体裁断箇所CPは媒体画像MI41の右側に存在するため、画像処理部15は、媒体画像MI41の外接矩形の右辺を左方向に平行移動させることにより、クロッピング画像RI41Fにおいて、媒体裁断箇所CPを含まない矩形IR41を作成する。 As shown in FIG. 47A, since the medium cutting location CP in the medium image MI41 is on the right side of the medium image MI41, the image processing unit 15 translates the right side of the circumscribed rectangle of the medium image MI41 in the leftward direction, thereby creating, in the cropping image RI41F, a rectangle IR41 that does not include the medium cutting location CP.
 また、図47Bに示すように、媒体画像MI42における媒体裁断箇所CPは媒体画像MI42の左側に存在するため、画像処理部15は、媒体画像MI42の外接矩形の左辺を右方向に平行移動させることにより、クロッピング画像RI42Fにおいて、媒体裁断箇所CPを含まない矩形IR42を作成する。 Similarly, as shown in FIG. 47B, since the medium cutting location CP in the medium image MI42 is on the left side of the medium image MI42, the image processing unit 15 translates the left side of the circumscribed rectangle of the medium image MI42 in the rightward direction, thereby creating, in the cropping image RI42F, a rectangle IR42 that does not include the medium cutting location CP.
 また、図47Cに示すように、媒体画像MI43における媒体裁断箇所CPは媒体画像MI43の右側に存在するため、画像処理部15は、媒体画像MI43の外接矩形の右辺を左方向に平行移動させることにより、クロッピング画像RI43Fにおいて、媒体裁断箇所CPを含まない矩形IR43を作成する。 Similarly, as shown in FIG. 47C, since the medium cutting location CP in the medium image MI43 is on the right side of the medium image MI43, the image processing unit 15 translates the right side of the circumscribed rectangle of the medium image MI43 in the leftward direction, thereby creating, in the cropping image RI43F, a rectangle IR43 that does not include the medium cutting location CP.
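The side-translation step of FIGS. 47A to 47C can be sketched as follows. This is an illustrative reading of the embodiment rather than its actual implementation; in particular, the translation amount `shift` is a hypothetical parameter, since the text does not specify how far the side is moved.

```python
def exclude_cut_side(rect, cut_side, shift):
    """Translate inward the side of a circumscribed rectangle that faces
    the medium cutting location CP, producing a rectangle (such as IR41,
    IR42, or IR43) that no longer covers the cut edge.

    `rect` is (left, top, right, bottom); `cut_side` is "left" or "right".
    """
    left, top, right, bottom = rect
    if cut_side == "right":      # FIGS. 47A and 47C: move the right side leftward
        right -= shift
    elif cut_side == "left":     # FIG. 47B: move the left side rightward
        left += shift
    else:
        raise ValueError("cut_side must be 'left' or 'right'")
    if left >= right:
        raise ValueError("shift leaves no rectangle")
    return (left, top, right, bottom)
```

For example, a rectangle spanning columns 0 to 10 with the cut on the right becomes one spanning columns 0 to 7 after an inward shift of 3.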
 次いで、画像処理部15は、図48に示すように、重心GB41,GB42,GB43を互いに一致させて矩形IR41,IR42,IR43を配置する。そして、画像処理部15は、矩形IR41,IR42,IR43のすべてが重複する矩形領域を抽出領域EA4として決定する。よって、矩形の抽出領域EA4の重心GB5は、重心GB41,GB42,GB43に一致する。 Next, as shown in FIG. 48, the image processing unit 15 arranges the rectangles IR41, IR42, IR43 with the centers of gravity GB41, GB42, GB43 coincident with each other. Then, the image processing unit 15 determines a rectangular area where all of the rectangles IR41, IR42, and IR43 overlap as the extraction area EA4. Therefore, the center of gravity GB5 of the rectangular extraction area EA4 matches the centers of gravity GB41, GB42, and GB43.
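The overlap computation of FIG. 48 reduces to an axis-aligned rectangle intersection once the rectangles are expressed relative to their coinciding centers of gravity. The sketch below assumes each rectangle IR4x is given as (left, top, right, bottom) offsets from its center of gravity GB4x (left and top negative, right and bottom positive); this coordinate convention is an assumption made for illustration.

```python
def intersect_about_common_center(rects):
    """Intersect rectangles placed so that their centers of gravity
    coincide at the origin; the result corresponds to the extraction
    area EA4, bounded by the innermost side in each direction."""
    left = max(r[0] for r in rects)
    top = max(r[1] for r in rects)
    right = min(r[2] for r in rects)
    bottom = min(r[3] for r in rects)
    if left >= right or top >= bottom:
        raise ValueError("the rectangles do not all overlap")
    return (left, top, right, bottom)

# Three rectangles whose right or left side was translated inward.
ea4 = intersect_about_common_center([(-5, -4, 2, 4), (-3, -4, 5, 4), (-5, -4, 2, 4)])
```

Because each input rectangle is centred on its own GB4x, the resulting area's centre GB5 automatically coincides with all of them, matching the statement above.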
 そして、画像処理部15は、図49Aに示すように、重心GB5を重心GB41に一致させて抽出領域EA4をクロッピング画像RI41Fに設定する。 Then, as shown in FIG. 49A, the image processing unit 15 sets the extraction area EA4 in the cropping image RI41F by matching the center of gravity GB5 with the center of gravity GB41.
 また、画像処理部15は、図49Bに示すように、重心GB5を重心GB42に一致させて抽出領域EA4をクロッピング画像RI42Fに設定する。 Similarly, as shown in FIG. 49B, the image processing unit 15 sets the extraction area EA4 in the cropping image RI42F by matching the center of gravity GB5 with the center of gravity GB42.
 また、画像処理部15は、図49Cに示すように、重心GB5を重心GB43に一致させて抽出領域EA4をクロッピング画像RI43Fに設定する。 Similarly, as shown in FIG. 49C, the image processing unit 15 sets the extraction area EA4 in the cropping image RI43F by matching the center of gravity GB5 with the center of gravity GB43.
 よって、クロッピング画像RI41Fに設定された抽出領域EA4には媒体画像MI41の主要部分が存在し、クロッピング画像RI42Fに設定された抽出領域EA4には媒体画像MI42の主要部分が存在し、クロッピング画像RI43Fに設定された抽出領域EA4には媒体画像MI43の主要部分が存在する。 Therefore, the main part of the medium image MI41 exists in the extraction area EA4 set in the cropping image RI41F, the main part of the medium image MI42 exists in the extraction area EA4 set in the cropping image RI42F, and the main part of the medium image MI43 exists in the extraction area EA4 set in the cropping image RI43F.
 そこで、画像処理部15は、図50A~Cに示すように、抽出領域EA4に従って、クロッピング画像RI41Fから媒体画像MI41の主要部分MI41Cを抽出し、クロッピング画像RI42Fから媒体画像MI42の主要部分MI42Cを抽出し、クロッピング画像RI43Fから媒体画像MI43の主要部分MI43Cを抽出する。 Therefore, as shown in FIGS. 50A to 50C, the image processing unit 15, according to the extraction area EA4, extracts the main part MI41C of the medium image MI41 from the cropping image RI41F, extracts the main part MI42C of the medium image MI42 from the cropping image RI42F, and extracts the main part MI43C of the medium image MI43 from the cropping image RI43F.
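The extraction of FIGS. 49A to 50C, placing the center GB5 of the extraction area EA4 on each image's center of gravity GB4x and slicing out the covered pixels, can be sketched as follows. As before, images are modeled as lists of pixel rows and EA4 is given in centroid-relative coordinates; both representations are assumptions for illustration.

```python
def crop_extraction_area(image, centroid, ea_rel):
    """Place the extraction area (given as (left, top, right, bottom)
    offsets from its center GB5) so that GB5 coincides with the image's
    centroid, then slice out the covered region -- the main part MI4xC."""
    cx, cy = centroid
    left, top, right, bottom = ea_rel
    return [row[cx + left : cx + right] for row in image[cy + top : cy + bottom]]

# A 5x6 test image whose pixel value equals its column index.
image = [[col for col in range(6)] for _ in range(5)]
main_part = crop_extraction_area(image, (3, 2), (-1, -1, 2, 2))
```

Because the same centroid-relative area is applied to every cropping image, the extracted main parts all have identical dimensions, which is what allows them to be displayed as uniformly sized pages.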
 以上、実施例3について説明した。 The third embodiment has been described above.
 [実施例4]
 実施例1では、画像処理部15は、媒体輪郭図形の上辺の中点、または、媒体輪郭図形の上辺の両端のコーナー点を結ぶ直線の中点を媒体画像基準位置に設定した。
[Example 4]
In the first embodiment, the image processing unit 15 sets the middle point of the upper side of the medium outline figure or the middle point of a straight line connecting both corner points of the upper side of the medium outline figure as the medium image reference position.
 しかし、画像処理部15は、媒体輪郭図形の下辺の中点、または、媒体輪郭図形の下辺の両端のコーナー点を結ぶ直線の中点を媒体画像基準位置に設定しても良い。また、画像処理部15は、媒体輪郭図形の左辺の中点、または、媒体輪郭図形の左辺の両端のコーナー点を結ぶ直線の中点を媒体画像基準位置に設定しても良い。また、画像処理部15は、媒体輪郭図形の右辺の中点、または、媒体輪郭図形の右辺の両端のコーナー点を結ぶ直線の中点を媒体画像基準位置に設定しても良い。 However, the image processing unit 15 may set the middle point of the lower side of the medium outline figure or the middle point of a straight line connecting both corner points of the lower side of the medium outline figure as the medium image reference position. In addition, the image processing unit 15 may set the middle point of the left side of the medium outline figure or the middle point of a straight line connecting both corner points of the left side of the medium outline figure as the medium image reference position. In addition, the image processing unit 15 may set the middle point of the right side of the medium outline figure or the middle point of a straight line connecting both corner points of the right side of the medium outline figure as the medium image reference position.
 また、画像処理部15は、媒体画像において、本のノド部の上端を媒体画像基準位置に設定しても良い。 The image processing unit 15 may also set, in the medium image, the upper end of the gutter of the book as the medium image reference position.
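All of the side-based variants above reduce to the same computation: the midpoint of the straight line connecting the two corner points at the ends of the chosen side. A minimal sketch, with corner points as (x, y) tuples (a representation assumed here for illustration):

```python
def side_midpoint(corner_a, corner_b):
    """Midpoint of the straight line connecting two corner points of a
    side of the medium outline figure; whichever side is chosen (top,
    bottom, left, or right), this point can serve as the medium image
    reference position."""
    return ((corner_a[0] + corner_b[0]) / 2, (corner_a[1] + corner_b[1]) / 2)
```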
 [実施例5]
 実施例1では、画像処理部15は、読取画像に含まれる原稿台画像の上辺の中点を読取画像基準位置に設定した。
[Example 5]
In the first embodiment, the image processing unit 15 sets the middle point of the upper side of the document table image included in the read image as the read image reference position.
 しかし、画像処理部15は、読取画像に含まれる原稿台画像の下辺の中点を読取画像基準位置に設定しても良い。 However, the image processing unit 15 may set the middle point of the lower side of the document table image included in the read image as the read image reference position.
 また、画像処理部15は、矩形の読取画像の何れかの頂点を読取画像基準位置に設定しても良い。 The image processing unit 15 may also set any vertex of the rectangular read image as the read image reference position.
 [実施例6]
 実施例1では、画像処理部15は、複数の読取画像における再配置後のすべての媒体画像の外接矩形を包含する矩形に基づいて抽出領域を設定した。また、実施例2では、画像処理部15は、複数の読取画像における再配置後のすべての媒体画像の外接矩形が重複する矩形領域に基づいて抽出領域を設定した。実施例1,2における抽出領域の設定の際に、画像処理部15は、外接矩形に所定の余白を付した領域に基づいて抽出領域を設定しても良い。こうすることで、読取画像から媒体画像を抽出する際に媒体画像の一部が欠けてしまう可能性を低くすることができる。
[Example 6]
In the first embodiment, the image processing unit 15 set the extraction area based on a rectangle encompassing the circumscribed rectangles of all the rearranged medium images in the plurality of read images. In the second embodiment, the image processing unit 15 set the extraction area based on a rectangular area where the circumscribed rectangles of all the rearranged medium images in the plurality of read images overlap. When setting the extraction area in the first and second embodiments, the image processing unit 15 may set the extraction area based on an area obtained by adding a predetermined margin to the circumscribed rectangles. By doing so, it is possible to reduce the possibility that a part of a medium image is cut off when the medium image is extracted from the read image.
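Adding the margin to a circumscribed rectangle before the extraction area is built can be sketched as below; the text only says the margin is "predetermined", so the concrete value used in the example is hypothetical.

```python
def pad_rect(rect, margin):
    """Expand a circumscribed rectangle (left, top, right, bottom) by a
    predetermined margin on every side, so the extraction area built
    from it is less likely to clip the medium image."""
    left, top, right, bottom = rect
    return (left - margin, top - margin, right + margin, bottom + margin)

padded = pad_rect((2, 3, 10, 12), 1)
```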
 [実施例7]
 記憶部13は、ハードウェアとして、例えば、メモリにより実現される。メモリの一例として、SDRAM(Synchronous Dynamic Random Access Memory)等のRAM(Random Access Memory)、ROM(Read Only Memory)、フラッシュメモリ等が挙げられる。
[Example 7]
The storage unit 13 is realized by, for example, a memory as hardware. Examples of the memory include a random access memory (RAM) such as a synchronous dynamic random access memory (SDRAM), a read only memory (ROM), and a flash memory.
 制御部11及び画像処理部15は、ハードウェアとして、例えばプロセッサにより実現される。プロセッサの一例として、CPU(Central Processing Unit)、DSP(Digital Signal Processor)、FPGA(Field Programmable Gate Array)等が挙げられる。また、制御部11及び画像処理部15は、プロセッサと周辺回路とを含むLSI(Large Scale Integrated circuit)によって実現されても良い。さらに、制御部11及び画像処理部15は、GPU(Graphics Processing Unit)、ASIC(Application Specific Integrated Circuit)等を用いて実現されても良い。 The control unit 11 and the image processing unit 15 are realized by, for example, a processor as hardware. Examples of the processor include a CPU (Central Processing Unit), a DSP (Digital Signal Processor), and an FPGA (Field Programmable Gate Array). The control unit 11 and the image processing unit 15 may be realized by an LSI (Large Scale Integrated Circuit) including a processor and peripheral circuits. Further, the control unit 11 and the image processing unit 15 may be realized using a GPU (Graphics Processing Unit), an ASIC (Application Specific Integrated Circuit), or the like.
 表示部17は、ハードウェアとして、例えばディスプレイにより実現される。画像処理装置10に用いられるディスプレイの一例として、液晶ディスプレイが挙げられる。 The display unit 17 is realized by, for example, a display as hardware. As an example of a display used in the image processing apparatus 10, a liquid crystal display is given.
 画像処理装置10での上記説明における各処理の全部または一部は、各処理に対応するプログラムを画像処理装置10が有するプロセッサに実行させることによって実現しても良い。例えば、上記説明における各処理に対応するプログラムがメモリに記憶され、プログラムがプロセッサによってメモリから読み出されて実行されても良い。また、プログラムは、任意のネットワークを介して画像処理装置10に接続されたプログラムサーバに記憶され、そのプログラムサーバから画像処理装置10にダウンロードされて実行されたり、画像処理装置10が読み取り可能な記録媒体に記憶され、その記録媒体から読み出されて実行されても良い。画像処理装置10が読み取り可能な記録媒体には、例えば、メモリカード、USBメモリ、SDカード、フレキシブルディスク、光磁気ディスク、CD-ROM、DVD、及び、Blu-ray(登録商標)ディスク等の可搬の記憶媒体が含まれる。また、プログラムは、任意の言語や任意の記述方法にて記述されたデータ処理方法であり、ソースコードやバイナリコード等の形式を問わない。また、プログラムは必ずしも単一的に構成されるものに限られず、複数のモジュールや複数のライブラリとして分散構成されるものや、OSに代表される別個のプログラムと協働してその機能を達成するものも含む。 All or a part of each process described above for the image processing apparatus 10 may be realized by causing a processor of the image processing apparatus 10 to execute a program corresponding to each process. For example, a program corresponding to each process in the above description may be stored in the memory, and the program may be read from the memory and executed by the processor. Alternatively, the program may be stored in a program server connected to the image processing apparatus 10 via an arbitrary network, downloaded from the program server to the image processing apparatus 10, and executed, or may be stored in a recording medium readable by the image processing apparatus 10, read from the recording medium, and executed. Recording media readable by the image processing apparatus 10 include, for example, portable storage media such as a memory card, a USB memory, an SD card, a flexible disk, a magneto-optical disk, a CD-ROM, a DVD, and a Blu-ray (registered trademark) disc. The program is a data processing method described in an arbitrary language or by an arbitrary description method, and may be in any form, such as source code or binary code. Furthermore, the program is not necessarily limited to a single program, but also includes a program distributed as a plurality of modules or a plurality of libraries, and a program that achieves its functions in cooperation with a separate program typified by an OS.
 画像処理装置10の分散・統合の具体的形態は図示するものに限られず、画像処理装置10の全部または一部を、各種の付加等に応じて、または、機能負荷に応じて、任意の単位で機能的または物理的に分散・統合して構成することができる。 The specific form of the distribution and integration of the image processing apparatus 10 is not limited to the illustrated one; all or a part of the image processing apparatus 10 can be configured by being functionally or physically distributed or integrated in arbitrary units according to various additions or the like, or according to functional loads.
1 画像処理装置
11 制御部
13 記憶部
15 画像処理部
17 表示部
1 image processing device
11 control unit
13 storage unit
15 image processing unit
17 display unit

Claims (12)

  1.  各々が媒体画像を含む一連の複数の読取画像を記憶する記憶部と、
     前記複数の読取画像の各々において、前記媒体画像が存在する第一領域に第一基準位置を設定し、前記第一領域以外の第二領域に第二基準位置を設定し、前記第一基準位置と前記第二基準位置との間の位置関係に基づいて、前記読取画像内で前記媒体画像を再配置し、
     再配置後の前記媒体画像に基づいて、前記複数の読取画像において同一の抽出領域を設定し、前記抽出領域に従って、前記複数の読取画像の各々から前記媒体画像を抽出する画像処理部と、
     を具備する画像処理装置。
    A storage unit for storing a series of a plurality of read images each including a medium image,
    an image processing unit that, in each of the plurality of read images, sets a first reference position in a first area where the medium image is present, sets a second reference position in a second area other than the first area, and rearranges the medium image in the read image based on a positional relationship between the first reference position and the second reference position,
    and that sets the same extraction area in the plurality of read images based on the medium image after the rearrangement and extracts the medium image from each of the plurality of read images according to the extraction area,
    An image processing apparatus comprising:
  2.  前記複数の読取画像は、二つの読取画像を含み、
     前記画像処理部は、前記二つの読取画像のうちの一方の読取画像に含まれる第一媒体画像と、前記二つの読取画像のうちの他方の読取画像に含まれ、かつ、前記第一媒体画像と対になる第二媒体画像との間において、前記位置関係が同一になるように前記第二媒体画像を再配置する、
     請求項1に記載の画像処理装置。
    The plurality of read images include two read images,
    wherein the image processing unit rearranges a second medium image that is included in the other of the two read images and that is paired with a first medium image included in one of the two read images, such that the positional relationship is the same between the first medium image and the second medium image,
    The image processing device according to claim 1.
  3.  前記画像処理部は、前記複数の読取画像における再配置後のすべての前記媒体画像の外接矩形を包含する矩形に基づいて前記抽出領域を設定する、
     請求項1に記載の画像処理装置。
    The image processing unit sets the extraction area based on a rectangle encompassing the circumscribed rectangles of all the medium images after the rearrangement in the plurality of read images,
    The image processing device according to claim 1.
  4.  前記画像処理部は、前記複数の読取画像における再配置後のすべての前記媒体画像の外接矩形が重複する矩形領域に基づいて前記抽出領域を設定する、
     請求項1に記載の画像処理装置。
    The image processing unit sets the extraction area based on a rectangular area where circumscribed rectangles of all the medium images after rearrangement in the plurality of read images overlap.
    The image processing device according to claim 1.
  5.  前記画像処理部は、前記読取画像に含まれる原稿台画像の上辺の中点を前記第二基準位置に設定する、
     請求項1に記載の画像処理装置。
    The image processing unit sets a middle point of an upper side of the platen image included in the read image as the second reference position,
    The image processing device according to claim 1.
  6.  前記画像処理部は、矩形の前記読取画像の頂点を前記第二基準位置に設定する、
     請求項1に記載の画像処理装置。
    The image processing unit sets a vertex of the rectangular read image as the second reference position,
    The image processing device according to claim 1.
  7.  前記画像処理部は、前記媒体画像上の二つのコーナー点を結ぶ直線の中点を前記第一基準位置に設定する、
     請求項1に記載の画像処理装置。
    The image processing unit sets the middle point of a straight line connecting two corner points on the medium image as the first reference position,
    The image processing device according to claim 1.
  8.  前記画像処理部は、前記外接矩形に所定の余白を付した領域に基づいて前記抽出領域を設定する、
     請求項3または4に記載の画像処理装置。
    The image processing unit sets the extraction area based on an area obtained by adding a predetermined margin to the circumscribed rectangle,
    The image processing device according to claim 3 or 4.
  9.  前記画像処理部は、前記媒体画像において媒体の裁断箇所を検出し、前記裁断箇所に基づいて前記媒体画像上に前記第一基準位置を設定する、
     請求項1に記載の画像処理装置。
    The image processing unit detects a cut portion of the medium in the medium image, and sets the first reference position on the medium image based on the cut portion,
    The image processing device according to claim 1.
  10.  前記複数の読取画像は、二つの読取画像を含み、
     前記画像処理部は、前記二つの読取画像のうちの一方の読取画像に含まれる第一媒体画像の縦の長さと、前記二つの読取画像のうちの他方の読取画像に含まれる第二媒体画像の縦の長さとの差が第一閾値未満であり、前記第一媒体画像の横の長さと前記第二媒体画像の横の長さとの差が第二閾値以上であり、かつ、前記第二媒体画像の横の長さが前記第一媒体画像の横の長さの所定範囲にあるときは、前記第二媒体画像の縦方向の配置を変更する一方で、前記第二媒体画像の横方向の配置を変更しない、
     請求項1に記載の画像処理装置。
    The plurality of read images include two read images,
    wherein, when a difference between a vertical length of a first medium image included in one of the two read images and a vertical length of a second medium image included in the other of the two read images is less than a first threshold, a difference between a horizontal length of the first medium image and a horizontal length of the second medium image is equal to or greater than a second threshold, and the horizontal length of the second medium image is within a predetermined range of the horizontal length of the first medium image, the image processing unit changes the vertical arrangement of the second medium image while not changing the horizontal arrangement of the second medium image,
    The image processing device according to claim 1.
  11.  前記画像処理部は、前記第一媒体画像と対になる媒体画像が存在しないときに、警告を出力する、
     請求項2に記載の画像処理装置。
    The image processing unit outputs a warning when there is no medium image that is paired with the first medium image,
    The image processing device according to claim 2.
  12.  各々が媒体画像を含む一連の複数の読取画像を記憶し、
     前記複数の読取画像の各々において、前記媒体画像が存在する第一領域に第一基準位置を設定し、前記第一領域以外の第二領域に第二基準位置を設定し、前記第一基準位置と前記第二基準位置との間の位置関係に基づいて、前記読取画像内で前記媒体画像を再配置し、
     再配置後の前記媒体画像に基づいて、前記複数の読取画像において同一の抽出領域を設定し、前記抽出領域に従って、前記複数の読取画像の各々から前記媒体画像を抽出する、
     画像処理方法。
    storing a series of a plurality of read images each including a medium image;
    setting, in each of the plurality of read images, a first reference position in a first area where the medium image is present, setting a second reference position in a second area other than the first area, and rearranging the medium image in the read image based on a positional relationship between the first reference position and the second reference position; and
    setting the same extraction area in the plurality of read images based on the medium image after the rearrangement, and extracting the medium image from each of the plurality of read images according to the extraction area.
    Image processing method.
PCT/JP2018/027354 2018-07-20 2018-07-20 Image processing device and image processing method WO2020017045A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2018/027354 WO2020017045A1 (en) 2018-07-20 2018-07-20 Image processing device and image processing method
JP2020530857A JP6956269B2 (en) 2018-07-20 2018-07-20 Image processing device and image processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2018/027354 WO2020017045A1 (en) 2018-07-20 2018-07-20 Image processing device and image processing method

Publications (1)

Publication Number Publication Date
WO2020017045A1 true WO2020017045A1 (en) 2020-01-23

Family

ID=69163698

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/027354 WO2020017045A1 (en) 2018-07-20 2018-07-20 Image processing device and image processing method

Country Status (2)

Country Link
JP (1) JP6956269B2 (en)
WO (1) WO2020017045A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000004348A (en) * 1997-07-24 2000-01-07 Ricoh Co Ltd Image processor
JP2005269095A (en) * 2004-03-17 2005-09-29 Fuji Xerox Co Ltd Image processing method and image processor
JP2013004088A (en) * 2011-06-15 2013-01-07 Fujitsu Ltd Image processing method, image processing device, scanner and computer program


Also Published As

Publication number Publication date
JP6956269B2 (en) 2021-11-02
JPWO2020017045A1 (en) 2021-02-25


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18926814

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020530857

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18926814

Country of ref document: EP

Kind code of ref document: A1