WO2020026739A1 - Image processing device - Google Patents

Image processing device

Info

Publication number
WO2020026739A1
Authority
WO
WIPO (PCT)
Prior art keywords: image, value, integral, path, differential
Application number: PCT/JP2019/027390
Other languages: French (fr), Japanese (ja)
Inventor
Junichi Taguchi (順一 田口)
Original Assignee
Hitachi, Ltd. (株式会社日立製作所)
Application filed by Hitachi, Ltd. (株式会社日立製作所)
Publication of WO2020026739A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 5/60
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/20212 Image combination
    • G06T 2207/20224 Image subtraction

Definitions

  • The present invention relates to an image processing device.
  • In image processing, a converted image is obtained by directly processing an original image or by applying a Fourier transform, a wavelet transform, or the like to the original image; after the converted image is processed, an inverse transform is performed to obtain a processed image.
  • An edge image is used as material for extracting and recognizing features, or as material that is added to the original image to sharpen it.
  • For such processing, a process such as CLAHE (Contrast Limited Adaptive Histogram Equalization), described in Prior Art Document 2, is used.
  • In CLAHE, an image is divided into small areas of 8*8 pixels, and adaptive processing is performed by equalizing a contrast-limited histogram for each small area.
  • Bilinear interpolation is performed in order to eliminate pseudo contours appearing at the boundaries between adjacent small areas.
  • In other prior art, smoothing is performed based on a local deviation that takes direction into account: adaptive processing is performed in which a direction and a degree are decided and parameters such as the smoothing strength are changed according to whether a portion is an edge portion or a flat portion.
  • The present invention has been made in view of the above circumstances, and an object of the present invention is to provide an image processing apparatus capable of realizing a process of modifying a differential image and creating an integral image from the modified differential image, or a process equivalent thereto.
  • To achieve this object, an image processing apparatus includes a differential processing unit that generates a differential image by differentiating an input image, a differential image change processing unit that generates a modified differential image by changing the differential image, and a path integration processing unit that creates a path integral image by path-integrating the modified differential image.
  • FIG. 1 is a block diagram illustrating a configuration of the image processing system according to the first embodiment.
  • FIG. 2 is a block diagram showing an example of the configuration of the image processing apparatus of FIG. 1 and a processed image.
  • FIG. 3 is a diagram illustrating an example of the directions of a processed image of the image processing apparatus of FIG. 1.
  • FIG. 4 is a flowchart illustrating the path integration processing of the image processing apparatus of FIG. 1.
  • FIG. 5 is a diagram showing a path matching integration method for the entire processed image of the image processing apparatus of FIG. 1.
  • FIG. 6 is a diagram showing a path matching integration method for a part of the processed image of the image processing apparatus of FIG. 1.
  • FIG. 7 is a flowchart illustrating a method of changing the offset of an integral image according to the second embodiment.
  • FIG. 8 is a diagram showing a setting example of the average value calculation area in FIG. 7.
  • FIG. 9 is a diagram illustrating a method for creating an integral image according to the third embodiment.
  • FIG. 10 is a flowchart illustrating a method for creating an integral image according to the third embodiment.
  • FIG. 11 is a diagram showing an example of block division of a block division integral image according to the fourth embodiment.
  • FIG. 12 is a diagram showing the positional relationship between the four blocks in FIG. 11 and the pixel of interest.
  • FIG. 13 is a flowchart illustrating a method for removing noise from a block-divided integral image according to the fourth embodiment.
  • FIG. 14 is a diagram showing integral images with and without the processing of FIG. 13, for the input image of FIG. 2.
  • FIG. 15 is a flowchart illustrating the level adjustment processing of the integral image according to the fifth embodiment.
  • FIG. 16 is a diagram showing integral images with and without the processing of FIG. 15.
  • FIG. 17 is a flowchart illustrating the display processing of the integral image according to the sixth embodiment.
  • FIG. 18 is a diagram showing integral images with and without the processing of FIG. 17.
  • FIG. 19 is a block diagram illustrating an example of a configuration and a processed image of the image processing apparatus according to the seventh embodiment.
  • FIG. 20 is a flowchart showing the binarization processing of the integral image of FIG. 19.
  • FIG. 21 is a diagram showing binarized images of the input image and of the integral image of FIG. 19.
  • FIG. 22 is a diagram illustrating an example of another processed image of the image processing apparatus in FIG. 19.
  • FIG. 23 is a flowchart illustrating a method for generating a modified differential image of the image processing device according to the eighth embodiment.
  • FIG. 24 is a block diagram illustrating a configuration of the path matching integration processing unit of the image processing device according to the ninth embodiment.
  • FIG. 25 is a flowchart illustrating a block image creation method of the image processing apparatus according to the tenth embodiment.
  • FIG. 26 is a flowchart illustrating a block distortion removal method for a block image according to the eleventh embodiment.
  • FIG. 27 is a block diagram illustrating a configuration of an image processing device according to the twelfth embodiment.
  • FIG. 28 is a diagram illustrating a state of a gradient in an adjacent four-point mesh when a matched differential image is created by the image processing apparatus in FIG. 27.
  • FIG. 29 is a flowchart showing the matched differential image creating process of the image processing apparatus of FIG. 27.
  • FIG. 30 is a diagram illustrating an example of the distance between adjacent four-point meshes when the matched differential image is created by the image processing apparatus in FIG. 27.
  • FIG. 31 is a block diagram illustrating a hardware configuration example of the image processing apparatus.
  • FIG. 1 is a block diagram illustrating a configuration of the image processing system according to the first embodiment.
  • the image processing system includes a photographing device 100, image processing devices 111, 121, 131, display devices 112, 122, 132, input devices 113, 123, 133, and storage devices 114, 124, 134.
  • the image processing apparatuses 111, 121, and 131 are connected to each other via a communication network 140 such as the Internet.
  • the image processing device 111 is connected to the imaging device 100, the display device 112, the input device 113, and the storage device 114.
  • the image processing device 121 is connected to the display device 122, the input device 123, and the storage device 124.
  • the image processing device 131 is connected to the display device 132, the input device 133, and the storage device 134.
  • the image capturing apparatus 100 captures an image of a subject and generates image data.
  • The imaging device 100 is, for example, a digital camera, a camera attached to a smartphone or a mobile phone, a scanner, an X-ray photography device or an MRI (Magnetic Resonance Imaging) device used at a medical site, a monitoring camera used at a monitoring site, or any of various imaging devices used at inspection sites that capture images of ultrasonic waves, infrared rays, visible light, ultraviolet rays, X-rays, γ-rays, electron beams, and the like.
  • the image processing device 111 can receive image data from the photographing device 100 and perform various image processing based on input information from the input device 113. Further, the image processing device 111 can display the processing result including the processed image data on the display device 112, store the processing result in the storage device 114, or transmit the processing result to the communication network 140. Further, the image processing device 111 can receive request information from the outside and transmit various information such as image data stored in the storage device 114 to the outside.
  • the image processing device 111 may be, for example, a general-purpose computer such as a workstation, a desktop personal computer, a notebook personal computer, a tablet terminal, or a smartphone, or may be hardware dedicated to image processing.
  • As the display device 112, a display, a television, or the like can be used.
  • As the input device 113, a keyboard, a mouse, or the like can be used.
  • As the storage device 114, a magnetic disk device, an optical disk device, an SSD (Solid State Drive), a USB (Universal Serial Bus) memory, or the like can be used.
  • There are also cases where the image processing device 111, the display device 112, the input device 113, and the storage device 114 are integrated, as in a notebook personal computer, a tablet terminal, or a smartphone.
  • The communication network 140 is a line capable of transmitting and receiving various information data, including image data, and can connect locations all over the world.
  • As the communication network 140, for example, the Internet can be used.
  • The communication network 140 may also be a dedicated line within a local area.
  • the image processing device 121 can receive image data from the storage device 114 connected to the image processing device 111 and perform various image processing based on input information from the input device 123. Further, the image processing device 121 can display a processing result including the processed image data on the display device 122, store the processing result in the storage device 124, or transmit the processing result to the communication network 140. Further, the image processing device 121 can receive request information from the outside and transmit various information such as image data stored in the storage device 124 to the outside.
  • the image processing apparatus 131 can receive image data from the storage device 124 connected to the image processing apparatus 121 and perform various image processing based on input information from the input apparatus 133. Further, the image processing device 131 can display a processing result including the processed image data on the display device 132, store the processing result in the storage device 134, or transmit the processing result to the communication network 140.
  • Each of the image processing apparatuses 111, 121, and 131 can be implemented by installing software (a program) that implements the image processing.
  • An apparatus with such software built in can also perform the image processing, and the image processing can also be performed by installing hardware dedicated to image processing.
  • For example, a person can use a digital camera as the photographing device 100 and a notebook personal computer with a built-in storage device 114 as the image processing device 111. Image data photographed with the digital camera is stored in the storage device 114 and can be uploaded, via the communication network 140, to the storage device 124 connected to the image processing device 121 of an information service company operating an external SNS (social networking service), where the images can be made accessible to the general public. A user can then view the uploaded images on the display device 132 connected to the image processing device 131 owned by the user.
  • As another example, in a hospital, image data from various image capturing apparatuses such as an X-ray photographing apparatus or an MRI serving as the imaging device 100 can be sent, via an image processing device 111 connected to or incorporated in these apparatuses, to the image processing device 121 serving as a data server in the hospital. A doctor can then refer to the images on the display device 132 connected to the image processing device 131.
  • The image processing device 111 creates a differential image by differentiating the captured image from the imaging device 100, creates a modified differential image by modifying the differential image, and can create an integral image by path-integrating the modified differential image.
  • Depending on how the differential image is changed, various characteristics can be given to the path integral image obtained by path-integrating the changed differential image: for example, a sharpened image with improved contrast, or a noise-reduced image in which noise components are reduced while edges are preserved.
  • FIG. 2 is a block diagram showing an example of the configuration of the image processing apparatus of FIG. 1 and a processed image.
  • the image processing device 111 includes a differential processing unit 211, a differential image change processing unit 212, and a path matching integration processing unit 213.
  • the differential processing unit 211 creates a differential image obtained by differentiating the input image 201.
  • the input image 201 may be a color image or a black and white image.
  • The input image 201 may be an image photographed by the photographing device 100 of FIG. 1. In FIG. 2, a monochrome image having a grayscale value at each pixel is taken as an example of the input image 201.
  • the differentiation direction can be set in two directions, the X direction and the Y direction.
  • the differential processing unit 211 creates an X-direction partial differential image 202 and a Y-direction partial differential image 203 by performing one-dimensional differentiation in each of the X direction and the Y direction.
  • FIG. 3 is a diagram illustrating an example of the directions of a processed image of the image processing apparatus of FIG. 1.
  • The input image 201 is an image on a two-dimensional plane with two directions, vertical and horizontal. The upper left point of the two-dimensional plane is defined as the coordinate origin 221, the rightward direction as the X direction 222, and the downward direction as the Y direction 223.
  • The pixel value of the input image 201 at the point (pixel) at coordinates (x, y) is expressed as I(x, y). When the X-direction partial differential image 202 is written as Dx and the Y-direction partial differential image 203 as Dy, the respective differential values at coordinates (x, y) can be given by the following equations (1) and (2).
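  • Equations (1) and (2) themselves are not reproduced in this text; the following sketch assumes the common forward-difference reading Dx(x, y) = I(x+1, y) - I(x, y) and Dy(x, y) = I(x, y+1) - I(x, y), which is sufficient for the line integration described below.

```python
import numpy as np

def partial_differentials(I):
    """Forward-difference partial differential images (one plausible reading of
    equations (1) and (2); the exact definitions are in the patent).
    X increases to the right, Y increases downward (FIG. 3 convention)."""
    I = I.astype(np.float64)
    Dx = np.zeros_like(I)
    Dy = np.zeros_like(I)
    Dx[:, :-1] = I[:, 1:] - I[:, :-1]   # last column stays 0 (no right neighbor)
    Dy[:-1, :] = I[1:, :] - I[:-1, :]   # last row stays 0 (no lower neighbor)
    return Dx, Dy
```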
  • The above differential images are partial differential images, but the differential image may also be an image linked to the partial differential images by a transform and its inverse.
  • For example, the X-direction and Y-direction partial differential images can be converted into an absolute value image and an angle image, and the absolute value image and the angle image can be inversely transformed into the X-direction and Y-direction partial differential images. Since the differential image is a partial differential image, or an image that can be converted into one, line integration along a path is possible.
  • Note that there are differential images that differ from the above definition.
  • In Reference 1, a differential image is introduced that is defined as the inverse operation of the integral image referred to in Reference 2 below.
  • However, the integral image of Reference 2 has a definition different from that of the integral image of the present embodiment, because it does not use line integration along a path.
  • The differential image of Reference 1 cannot be line-integrated along a path, and therefore cannot by itself be a target differential image in the present embodiment.
  • The differential image targeted in the present embodiment provides information that allows line integration along a path.
  • The differential image change processing unit 212 creates an X-direction changed partial differential image 204 and a Y-direction changed partial differential image 205 by changing the X-direction partial differential image 202 and the Y-direction partial differential image 203.
  • When the X-direction changed partial differential image 204 is expressed as Ex and the Y-direction changed partial differential image 205 as Ey, they can be given by, for example, the following Expressions 3 and 4, where |Dx(x, y)| denotes the absolute value of Dx(x, y).
  • In Expression 3, where the absolute value is smaller than lv, the value is made smaller still according to a square law; where the absolute value is larger than lv, the value is multiplied by k.
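  • Expressions 3 and 4 are likewise not reproduced here. The following is a minimal sketch of a change function matching the description above (square law below lv, gain k above lv); the exact functional form is an assumption.

```python
import numpy as np

def change_differential(D, lv, k):
    """One plausible form of Expressions 3 and 4 (assumed, not quoted from the
    patent): below the threshold lv the value shrinks by a square law, above lv
    it is multiplied by the gain k. Continuous at |D| = lv when k = 1."""
    A = np.abs(D)
    small = np.sign(D) * A * A / lv      # |D| < lv: square law pulls values down
    large = k * D                        # |D| >= lv: constant gain
    return np.where(A < lv, small, large)

# Ex = change_differential(Dx, lv=8.0, k=1.0)
# Ey = change_differential(Dy, lv=8.0, k=1.0)
```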
  • The path matching integration processing unit 213 creates a path matching integral image 206 by performing path matching integration of the X-direction changed partial differential image 204 and the Y-direction changed partial differential image 205.
  • The path matching integration processing unit 213 refers to the input image 201 when setting the brightness of the initial point at which the integration starts to the brightness of the input image 201.
  • It also refers to the input image 201 when the image is divided into blocks for integration and the average brightness in each block is to be matched with the corresponding block average of the input image 201, and when setting the display level of the path matching integral image 206.
  • The path integral of the present embodiment is a line integral along a path from one point on an image to another point. It is a different concept from the path integral used in the field of physics to deal with quantum mechanical state transitions.
  • line integration is performed by adding and subtracting pixel values on a path set in an image.
  • the path matching integration processing unit 213 refers to the X-direction changed partial differential image 204 when performing path integration in the X direction, and refers to the Y-direction changed partial differential image 205 when performing path integration in the Y direction.
  • If the differential image is path-integrated without being changed, then even when there are a plurality of paths from one pixel to another, the line integral values of the plurality of paths do not differ.
  • Once the differential image has been changed, however, the line integral values of the plurality of paths may differ.
  • In the present embodiment, an integral that includes a process of calculating one integral value based on a plurality of line integral values obtained by path-integrating the modified differential image is referred to as path matching integration.
  • As the method of calculating the one integral value, an average of the plurality of line integral values may be taken, or one integral value may be obtained from the plurality of line integral values using a neural network.
  • Alternatively, the line integral value of any one of the paths may be selected as the integral value, or a value obtained by adding an appropriate value to that line integral value may be used; the maximum or the minimum of the plurality of line integral values may also be selected.
  • In the present embodiment, a computer implements the differential processing unit 211, the differential image change processing unit 212, and the path matching integration processing unit 213.
  • the differential processing unit 211, the differential image change processing unit 212, and the path matching integration processing unit 213 may be realized by dedicated hardware.
  • FIG. 4 is a flowchart illustrating the path integration processing of the image processing apparatus of FIG.
  • In step S11 of FIG. 4, initial values such as the map and the initial point are set.
  • Specifically, a map M having the same size as the input image 201 and an image F into which integral values are substituted are prepared; when the processing ends, the image F is the path matching integral image 206.
  • Let the size of the input image 201 be nx in the X direction and ny in the Y direction, let the predetermined range for calculating the integral values be x1 to x2 in the X direction and y1 to y2 in the Y direction, and let the initial point at which the integration starts be (x0, y0).
  • The initial point (x0, y0) is registered in the 0-adjacent-point array as an adjacent point whose distance from the initial point is 0 (a 0-adjacent point), so the number of entries in the 0-adjacent-point array is 1. In general, for an integer k of 0 or more, a k-adjacent point is a point at distance k from the initial point, stored in the k-adjacent-point array.
  • At the initial point, the integral value of the image F is set to the value of the input image 201; that is, Expression 5 is F(x0, y0) = I(x0, y0).
  • The initial number of entries in the k-adjacent-point arrays for k other than 0 is 0.
  • The unreferenced flag of each k-adjacent-point array, indicating that its k-adjacent points have not yet been focused on, is set to true.
  • The map value of the map M is set to 0 within the predetermined range for obtaining the integral values and to -1 outside that range.
  • In step S12, a point of interest Pa is set.
  • The k-adjacent points in the k-adjacent-point array are sequentially set as the point of interest Pa: when the unreferenced flag of the k-adjacent-point array is true, the point stored at the first position of the array is taken as the point of interest Pa and the flag is set to false.
  • Otherwise, the k-adjacent point stored at the position following the k-adjacent point focused on previously is taken from the k-adjacent-point array.
  • In step S13, a point of interest Pb is set.
  • The four points adjacent to the point of interest Pa are sequentially set as the point of interest Pb.
  • The point of interest Pa has four adjacent points, left, right, upper, and lower; the point of interest Pb is determined, for example, in this order.
  • In step S14, it is determined whether the point of interest Pb should be integrated. If the map value at the position of the point of interest Pb is 0 or 1, the process proceeds to step S15; otherwise, the process proceeds to step S16.
  • In step S15, processing such as integration of the point of interest Pb, namely path matching integration, is performed.
  • If the map value at the position of the point of interest Pb is 0, the map value is set to 1, and the value of the point of interest Pb in the image F is obtained by line integration from the integral value of the point of interest Pa with reference to the X-direction changed partial differential image 204 and the Y-direction changed partial differential image 205. The point is then added to the (k+1)-adjacent-point array, and the number of entries in that array is increased by one.
  • If the map value at the position of the point of interest Pb is 1, the map value is set to 2, and an integral value for the position is calculated by line integration from the integral value of the point of interest Pa with reference to the X-direction changed partial differential image 204 and the Y-direction changed partial differential image 205. This value is then averaged with the value of the point of interest Pb already stored in the image F, and the average is newly set as the value of the point of interest Pb in the image F.
  • In step S16, it is determined whether the point of interest Pb is the last point adjacent to the point of interest Pa. If so, the process proceeds to step S17; if not, the process returns to step S13.
  • In step S17, it is determined whether the point of interest Pa is the last of the k-adjacent points. If so, the process proceeds to step S18; if not, the process returns to step S12.
  • In step S18, it is determined whether any (k+1)-adjacent point exists. If the number of entries in the (k+1)-adjacent-point array is not 0, a (k+1)-adjacent point exists and the process proceeds to step S19. If the number of entries is 0, there is no (k+1)-adjacent point and the path matching integration ends; at this time, integral values have been substituted into the predetermined range of the image F, and the path matching integral image 206 of FIG. 2 is obtained.
  • In step S19, the reference is advanced to the (k+1)-adjacent points. Locations where the map value is 1 are set to 2, and the process returns to step S12 with the (k+1)-adjacent points now treated as the k-adjacent points and the unreferenced flag of the k-adjacent-point array set to true.
  • the processing from step S12 to step S19 in FIG. 4 is referred to as S10.
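  • As a concrete illustration, the following is a minimal sketch of steps S11 to S19 for 4-connected pixels, averaging all equal-distance arrivals at a point (on a 4-connected grid a point has at most two equal-distance predecessors, which matches the single averaging of step S15). The forward-difference sign convention and the function name are assumptions; the predetermined range is taken to be the whole image.

```python
import numpy as np
from collections import deque

def path_matching_integral(I, Ex, Ey, x0, y0):
    """Breadth-first sketch of the path matching integration of FIG. 4.
    I : input image (used only for the initial value, Expression 5)
    Ex, Ey : changed partial differential images (Expressions 3 and 4),
    assumed to follow Dx(x, y) = I(x+1, y) - I(x, y), Dy analogously."""
    ny, nx = I.shape
    F = np.zeros((ny, nx), dtype=np.float64)   # image F: integral values
    M = np.zeros((ny, nx), dtype=np.int8)      # map M: 0 unvisited, 1 frontier, 2 fixed
    cnt = np.zeros((ny, nx), dtype=np.int32)   # how many equal-distance paths reached a point
    F[y0, x0] = I[y0, x0]                      # Expression 5: F(x0, y0) = I(x0, y0)
    M[y0, x0] = 2
    frontier = deque([(x0, y0)])               # the k-adjacent points (k = 0)
    while frontier:
        nxt = deque()                          # the (k+1)-adjacent points
        for xa, ya in frontier:                # point of interest Pa
            for dx, dy in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                xb, yb = xa + dx, ya + dy      # point of interest Pb
                if not (0 <= xb < nx and 0 <= yb < ny):
                    continue                   # outside the range (map value -1)
                if M[yb, xb] == 2:
                    continue                   # already fixed at a smaller distance
                # one line-integration step along the changed differential image
                if dx == 1:
                    v = F[ya, xa] + Ex[ya, xa]
                elif dx == -1:
                    v = F[ya, xa] - Ex[ya, xb]
                elif dy == 1:
                    v = F[ya, xa] + Ey[ya, xa]
                else:
                    v = F[ya, xa] - Ey[yb, xa]
                if M[yb, xb] == 0:             # first equal-distance path (step S15)
                    M[yb, xb] = 1
                    F[yb, xb] = v
                    cnt[yb, xb] = 1
                    nxt.append((xb, yb))
                else:                          # map value 1: average with earlier path
                    F[yb, xb] = (F[yb, xb] * cnt[yb, xb] + v) / (cnt[yb, xb] + 1)
                    cnt[yb, xb] += 1
        M[M == 1] = 2                          # step S19: fix the new points
        frontier = nxt
    return F
```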
  • FIG. 5 is a diagram showing a path matching integration method for the entire processed image of the image processing apparatus of FIG. 1. In FIG. 5, the entire image 231, which has the same size as the input image 201, is sequentially integrated by the path matching integration processing of FIG. 4, thereby creating the path matching integral image 206 of FIG. 2.
  • Here, the path matching integral image 206 created by the path matching integration processing of FIG. 4 is referred to as an equal-distance path average integral image 206a.
  • The equal-distance path average integral image 206a is displayed with its display level set to the range from the minimum value 3 to the maximum value 244 of the input image 201.
  • The paths at a distance of 1 from the initial point P0 are shown at 233a, and the paths at a distance of 2 are shown at 233b.
  • The paths from the initial point P0 to the point P11 include two paths L1 and L2; the path matching integral value at the point P11 can be obtained, for example, by taking the average of the integral value along the path L1 and the integral value along the path L2.
  • the path matching integral value from the initial point P0 to the points P1 to P4 can be given by the following equations 6 to 9.
  • the path matching integral value from the initial point P0 to the points P5 to P12 can be given by the following equations 10 to 17.
  • Equations 6 to 13 are for the case where there is only one shortest path along which the line integration is performed.
  • Equations 14 to 17 are for the case where there are two shortest paths; in that case, the average of the integral values of the two paths is taken as the path matching integral value.
  • In FIG. 5, the entire image 231, having the same size as the input image 201, is subjected to path matching integration.
  • The portion near the center of the image, that is, near the initial point P0 at which the integration starts, is close to the values of the input image 201, but the image generally deviates from the values of the input image 201 toward the edges. The overall balance may then be disturbed, and an unnatural oblique strip-shaped portion Z1 may appear.
  • FIG. 6 is a diagram showing a path matching integration method for a part of the processed image of the image processing apparatus of FIG. 1, similar to FIG. 5. FIGS. 5 and 6 differ in the integration range: the integration range in FIG. 5 is the same size as the input image 201, whereas the integration range in FIG. 6 is a region 234 near the center that is smaller than the input image.
  • In FIG. 6, the range of the small area 234 near the center of the image 231, which has the same size as the input image 201, is sequentially integrated to create the equal-distance path average integral image 206b.
  • Since the equal-distance path average integral image 206b is path-matching-integrated only within the small area 234 near the center, the unnatural band-like portion Z1 that stands out toward the edges in FIG. 5 is not seen.
  • The extent to which such an unnatural band-like portion appears depends on the magnitude of the originally undesirable changes in the X-direction changed partial differential image 204 and the Y-direction changed partial differential image 205, and on the size of the range 234 over which the integral values are obtained.
  • In the present embodiment, the X-direction partial differential image 202 and the Y-direction partial differential image 203 are taken as examples of differential images, but differential images created by other methods may be used.
  • For example, partial differential images in the oblique directions may also be created, and the partial differential images in these four directions may be used as the differential images.
  • In that case, distances are defined for vertical, horizontal, and diagonal connections: for example, vertical and horizontal steps may be given a distance of 1 and diagonal steps a distance of 2, and the distance traveled along a path is measured accordingly.
  • The integral value of a point can then be the average of the integral values over the plurality of paths to that point. For example, there are three routes to the point diagonally up and to the right: the diagonal route, the route going up and then right, and the route going right and then up.
  • The average of the integral values of these routes can be used as the integral value of the upper-right point.
  • In the integral image thus created, the integral value of the initial point P0 is set to a predetermined value, and the integral value of a point adjacent to the initial point P0 is obtained by line integration, that is, by a predetermined addition/subtraction operation on the integral value of the initial point P0 with reference to the value at the corresponding position of the modified differential image.
  • Although the line integral of the modified differential image obtained by modifying the differential image takes different values depending on the path, one integral value can still be determined uniquely by path matching integration. Therefore, even when the differential image is changed, an integral image can be created from the changed differential image.
  • In the first embodiment, the integral value F(x0, y0) of the initial point P0(x0, y0) was set to the value I(x0, y0) of the input image 201.
  • In the path matching integration, addition, subtraction, and averaging are performed starting from the initial point P0(x0, y0). Therefore, if the integral value F(x0, y0) of the initial point is increased by a, the integral value increases by a at every point; that is, an image with an offset of a is obtained. Accordingly, when the integral value of the initial point P0(x0, y0) is set to 0, the resulting integral image differs only by an offset of -I(x0, y0).
  • Hence, even if the integral value of the initial point P0(x0, y0) is set not to I(x0, y0) but to any other value, including 0 or a random value, the final integral image whose average value in a predetermined range is adjusted to a predetermined value does not change.
  • FIG. 7 is a flowchart illustrating a method of changing the offset of the integral image according to the second embodiment.
  • FIG. 8 is a diagram illustrating an example of setting the average value calculation area in FIG. 7.
  • an average value calculation area 236 can be provided in the area 234 for calculating the integral value.
  • In FIG. 8, the equal-distance path average integral image 206c with the offset changed was created from the equal-distance path average integral image 206b of FIG. 6.
  • The equal-distance path average integral image 206c is displayed with its display level set to the range from the minimum value 3 to the maximum value 244 of the input image 201.
  • the process S20 in FIG. 7 includes a process S21 and a process S23.
  • The process S21 is a process for obtaining an integral value; in it, the integral value F(x0, y0) at the initial point P0 at which the integration starts is set to 0 instead of I(x0, y0). The process S21 includes steps S22 and S10.
  • The process S23 is a process of changing the offset of the integral image; it executes an offset adjustment that sets the average value in the average value calculation area 236 to a predetermined value. The process S23 includes steps S24 to S27.
  • In step S22, initial values such as the map are set, and the integral value F(x0, y0) at the initial point where the integration starts is set to 0; otherwise the processing is the same as step S11 of FIG. 4. Next, processing similar to the process S10 of FIG. 4 is performed.
  • In step S24, the average value of the input image 201 is calculated: the average value h0 in the average value calculation area 236 is obtained for the input image 201.
  • In step S25, the average value of the integral image is calculated: the average value h1 in the average value calculation area 236 is obtained for the integral image obtained in the process S21.
  • In step S26, the offset value is calculated: the difference h0 - h1 between the average values h0 and h1 is set as the offset change amount.
  • In step S27, the offset of the integral image is changed: the offset change amount is added to the integral image obtained in the process S21 to create an integral image with the offset changed.
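  • A minimal sketch of the process S23 (steps S24 to S27) follows, assuming the average value calculation area 236 is given as a pair of slices; the helper name is hypothetical.

```python
import numpy as np

def adjust_offset(F, I, area):
    """Process S23 sketch: shift the integral image F so that its average in
    the average value calculation area 236 matches that of the input image 201.
    `area` is a (slice_y, slice_x) pair."""
    sy, sx = area
    h0 = I[sy, sx].mean()        # step S24: average of the input image
    h1 = F[sy, sx].mean()        # step S25: average of the integral image
    return F + (h0 - h1)         # steps S26 and S27: add the offset h0 - h1
```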
  • In the process S21 of FIG. 7, the integral value F(x0, y0) of the initial point P0 at which the integration starts is set to 0; however, the integral image obtained in step S27 does not change whether this value is set to I(x0, y0) or to a random value. Therefore, in the process S21, the integral value of the initial point P0 may be set to any value.
  • The target whose average is matched is not limited to the input image 201; it may be an image obtained by processing the input image 201, or a uniform image with a value such as 0 or 128.
  • As described above, in the second embodiment, offset adjustment is performed on the path matching integral image so that the average value in the average value calculation area 236 becomes a predetermined value.
  • Thereby, a path matching integral image 206 can be obtained in which the average value in the average value calculation area 236 does not change from that of the input image 201.
  • When the average is matched to a processed image of the input image 201, a path matching integral image 206 can be obtained in which the average value in the average value calculation area 236 does not change from that of the processed image.
  • When the average of the integral image obtained in the process S10 is matched to a uniform image, a path matching integral image 206 can be obtained in which the average value in the average value calculation area 236 becomes a uniform value.
  • FIG. 9 is a diagram illustrating a method for creating an integral image according to the third embodiment.
  • In the third embodiment, the image 231 is divided into blocks of a small size such as 16*16.
  • A processing order 235 of the blocks of interest 240 is determined, and the path matching integration is executed for each block of interest 240 according to the order 235.
  • In FIG. 9, a block division integral image 206d was obtained for the input image 201 of FIG. 2.
  • Thereby, the band Z1 of FIG. 5 can be made inconspicuous at the block ends.
  • The block division integral image 206d is displayed with its display level set to the range from the minimum value 3 to the maximum value 244 of the input image 201.
  • FIG. 10 is a flowchart illustrating a method for creating an integral image according to the third embodiment.
  • The process S30 in FIG. 10 includes steps S31, S20, and S32.
  • In step S31, the image 231 is divided into blocks, and the block of interest 240 is selected in order from among the blocks.
  • In step S20, the process S20 of FIG. 7 is executed for the block of interest 240.
  • In step S32, if the block of interest 240 is the last block of the image 231, the process ends; otherwise, the process returns to step S31. When the processing has proceeded through the last block, a block division integral image 206d having the same size as the input image 201 is obtained.
  • Here, the size of the image 231 is assumed to be an integral multiple of the block size, so that the divided blocks all have the same size. If the size of the image 231 is not an integral multiple of the block size, blocks of smaller size occur at the right end or the lower end; for these small blocks as well, the process S30 of FIG. 10 is executed in the same way as for the other blocks, and the block division integral image can be created.
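  • A minimal sketch of the process S30 follows, reusing path_matching_integral() and adjust_offset() from the sketches above; the block size, the choice of the block center as the initial point, and the use of the whole block as the average value calculation area are assumptions.

```python
import numpy as np

def block_division_integral(I, Ex, Ey, bs=16):
    """Process S30 sketch: integrate each block independently and offset-adjust
    it to the block average of the input image 201 (process S20). Undersized
    blocks at the right and lower ends are handled by the same loop."""
    ny, nx = I.shape
    G = np.zeros((ny, nx), dtype=np.float64)
    whole = (slice(None), slice(None))          # area 236 = the entire block here
    for y in range(0, ny, bs):
        for x in range(0, nx, bs):
            sy, sx = slice(y, min(y + bs, ny)), slice(x, min(x + bs, nx))
            Ib, Exb, Eyb = I[sy, sx], Ex[sy, sx], Ey[sy, sx]
            Fb = path_matching_integral(Ib, Exb, Eyb,
                                        x0=Ib.shape[1] // 2,   # block center
                                        y0=Ib.shape[0] // 2)
            G[sy, sx] = adjust_offset(Fb, Ib, whole)
    return G
```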
  • The target to which the average value of each block is matched is not limited to the input image; it may be an image obtained by processing the input image, or a uniform image with a value such as 0 or 128.
  • As described above, in the third embodiment, the image 231 is divided into predetermined blocks, and a block division integral image is created over all the divided blocks; offset adjustment that sets the average value in the predetermined average value calculation area 236 to a predetermined value is performed for every block.
  • In the block division integral image, block distortion Z2 may occur at the connections between blocks.
  • In the fourth embodiment, therefore, a weighted average of block division integral images is taken.
  • Hereinafter, the image obtained by taking this weighted average is referred to as a block average integral image.
  • FIG. 11 is a diagram showing an example of block division of a block division integral image according to the fourth embodiment.
  • In FIG. 11, four division methods are determined for creating four block division integral images 250a to 250d whose divisions differ.
  • The block division integral image 250a is an image obtained by the same division as the image 231 of FIG. 9.
  • The block division integral image 250b is an image whose division is shifted by half a block in the X direction relative to the block division integral image 250a.
  • The block division integral image 250c is an image whose division is shifted by half a block in the Y direction relative to the block division integral image 250a.
  • The block division integral image 250d is an image whose division is shifted by half a block in both the X direction and the Y direction relative to the block division integral image 250a.
  • Although the upper and lower ends of the block division integral image 250c have only half blocks, a block division integral image can be created from these half blocks; for simplicity, however, the values at the corresponding positions of the block division integral image 250a can be substituted for these half blocks.
  • Likewise, although the left and right ends and the upper and lower ends of the block division integral image 250d have only half blocks, and its four corners have only quarter blocks, block division integral images can be created from them.
  • Alternatively, for the left and right half blocks, the values at the corresponding positions of the block division integral image 250c can be substituted; for the upper and lower half blocks, the values at the corresponding positions of the block division integral image 250b can be substituted; and for the corners, the average of the values at the corresponding positions of the block division integral images 250b and 250c can be substituted.
  • A pixel of interest 251 belongs to one block in each of the four block division integral images 250a to 250d, namely the four blocks 252a to 252d.
  • Which blocks 252a to 252d of the block division integral images 250a to 250d the pixel 251 belongs to is determined by the position of the pixel 251.
  • FIG. 12 is a diagram showing the positional relationship between the four blocks in FIG. 11 and the pixel of interest.
  • When the blocks 252a to 252d are overlaid, a region 253 containing all four blocks is obtained.
  • The weights of the blocks 252a to 252d are determined from the positional relationship between the pixel of interest 251 and the centers 254a to 254d of the blocks 252a to 252d, and the weighted average of the four blocks 252a to 252d is taken.
  • The pixel of interest 251 is separated from the center 254a by a distance a in the X direction and a distance b in the Y direction. When the block division integral images of the blocks 252a to 252d are denoted F0, F1, F2, and F3, and the size of each block is sx in the X direction and sy in the Y direction, the weighted average image G can be given by the following Equations 18 to 22.
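  • Equations 18 to 22 are not reproduced in this text; the following sketch assumes bilinear weights derived from the half-block offsets between the centers 254a to 254d, which matches the qualitative description above but is not quoted from the patent.

```python
def block_average_pixel(F0, F1, F2, F3, a, b, sx, sy):
    """One plausible reading of Equations 18 to 22 (bilinear weighting is an
    assumption): F0..F3 are the values of the pixel of interest 251 in the four
    block division integral images 250a..250d, a and b are its X and Y offsets
    from the center 254a, and sx, sy are the block sizes. Because the divisions
    are shifted by half a block, the other centers lie sx/2 and sy/2 away."""
    wx = a / (sx / 2.0)          # 0 at center 254a, 1 at center 254b
    wy = b / (sy / 2.0)          # 0 at center 254a, 1 at center 254c
    return ((1 - wx) * (1 - wy) * F0 + wx * (1 - wy) * F1 +
            (1 - wx) * wy * F2 + wx * wy * F3)
```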
  • FIG. 13 is a flowchart illustrating a method for removing noise from a block-divided integral image according to the fourth embodiment.
  • In step S41 of FIG. 13, division methods are determined for creating four block division integral images 250a to 250d whose divisions differ by half a block.
  • In step S30a, the block division integral image 250a is created by executing the process S30 of FIG. 10 according to the division Ba for the block division integral image 250a.
  • In step S30b, the block division integral image 250b is created by executing the process S30 of FIG. 10 according to the division Bb for the block division integral image 250b.
  • In step S30c, the block division integral image 250c is created by executing the process S30 of FIG. 10 according to the division Bc for the block division integral image 250c.
  • In step S30d, the block division integral image 250d is created by executing the process S30 of FIG. 10 according to the division Bd for the block division integral image 250d.
  • In step S42, weights are determined according to the distances between the pixel of interest 251 and the centers 254a to 254d of the corresponding blocks 252a to 252d of the four block division integral images to which the pixel belongs, and a weighted average of the four block division integral images 250a to 250d is taken. The image obtained by this weighted averaging is referred to as a block average integral image.
  • In the present embodiment, the weighted averaging uses four block division integral images whose divisions differ by half a block; alternatively, 16 block division integral images whose divisions differ by a quarter block can be created and averaged.
  • In the present embodiment, the weights are determined according to the distances in the X and Y directions between the pixel of interest 251 and the centers 254a to 254d of the blocks 252a to 252d.
  • Alternatively, the weights can be determined according to the Euclidean distances between the pixel of interest 251 and the centers 254a to 254d.
  • In the above, the blocks within any one block division integral image do not overlap.
  • As another method, overlapping block division may be used, in which adjacent blocks overlap at their ends: an average is calculated in the overlapped portion, the difference from that average is taken as the error at the block end, and the interior of each block is corrected by interpolating from the errors at the block ends.
  • Equivalently, a discrete integral image can be created by calculating integral values in discrete blocks that skip the overlapping portions, and a plurality of such images can be created. Interpolating the interior of a block from the differences from the average, taken as errors at the overlapping end portions, is equivalent to taking a weighted average of the errors according to distance; by this processing, a weighted average integral image with a different weighting scheme can therefore be created.
  • As described above, in the fourth embodiment, a plurality of block division integral images with different divisions are created, and a block average integral image 206e is created by weighted averaging according to the distances between the target pixel and the centers of the blocks, in the respective block division integral images, to which the target pixel belongs.
  • Thereby, a path matching integral image of the entire image 231, having the same size as the input image 201, can be created from the modified differential image while reducing the band Z1 of FIG. 5 and the block distortion Z2 of FIG. 9.
  • FIG. 14 is a diagram showing integral images with and without the processing of FIG. 13, for the input image of FIG. 2.
  • In the block division integral image 206d obtained from the input image 201, block distortion Z2 occurs.
  • In the block average integral image 206e, the block distortion Z2 is reduced.
  • The block distortion Z2 is also reduced in the block average integral image 206f, which was created from the block average integral image 206e with changed parameters.
  • The block division integral image 206d and the block average integral images 206e and 206f are displayed with their display levels set to the range from the minimum value 3 to the maximum value 244 of the input image 201.
  • The change from the X-direction partial differential image 202 and the Y-direction partial differential image 203 to the X-direction changed partial differential image 204 and the Y-direction changed partial differential image 205 can be performed by various methods other than Expressions 3 and 4.
  • For example, the change may be given by the following Expressions 23 to 25.
  • D(x, y) = sqrt(Dx(x, y) * Dx(x, y) + Dy(x, y) * Dy(x, y)) (Expression 23), where sqrt() is the operation that takes the square root.
  • If lv is set to an appropriate value obtained by evaluating the noise level and k is set to 1, changes whose absolute value is lower than lv are reduced while larger changes, that is, edges, are preserved. By this change, therefore, noise removal and flattening that suppress minute changes can be performed while preserving the edges of the differential image, and an edge-preserving smoothed image can be obtained. If lv is set to a predetermined value and k to a value larger than 1, values whose absolute value is lower than lv are reduced and values higher than lv are multiplied by k.
  • This change includes a process of reducing values whose absolute value is lower than lv.
  • This change also includes a process of multiplying values of the differential image by a constant, since values whose absolute value is higher than lv are multiplied by a constant.
  • In the former case, the block average integral image shows noise reduction, since small changes at the noise level are reduced.
  • In the latter case, local contrast improvement can be achieved in which changes larger than the noise level but smaller than a threshold lv2 are increased, while portions where the change is larger than lv2 keep a local contrast faithful to the change; if the gain applied where the change is larger than lv2 is set to a value smaller than 1, the local contrast in those portions can instead be suppressed.
  • When D(x, y) is smaller than lv, noise removal can also be performed simply by setting the value to 0.
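  • A minimal sketch in the spirit of Expressions 23 to 25 follows, combining Expression 23 with the simple zeroing below lv just described and the gain k above lv; Expressions 24 and 25 are not reproduced here, so their exact form is an assumption.

```python
import numpy as np

def change_by_magnitude(Dx, Dy, lv, k):
    """The gradient magnitude D = sqrt(Dx^2 + Dy^2) (Expression 23) selects the
    treatment of both components together: changes below the noise level lv
    are set to 0 (noise removal), larger changes are multiplied by k."""
    D = np.sqrt(Dx * Dx + Dy * Dy)
    keep = D >= lv                      # edges and larger changes survive
    Ex = np.where(keep, k * Dx, 0.0)
    Ey = np.where(keep, k * Dy, 0.0)
    return Ex, Ey
```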
  • The term Dx(x, y) * D(x, y) in Expression 24 can also be converted into a form using sqrt() and sgn(), where sgn() is a function that returns 1 when the value is positive, -1 when the value is negative, and 0 when the value is 0. In this case, changes with a low absolute value can be emphasized using the square root function.
  • Equations 26 and 27 show the case of averaging in a 3 * 3 area around the point of interest.
  • Ex(x, y) = (Dx(x-1, y-1) + Dx(x, y-1) + Dx(x+1, y-1) + Dx(x-1, y) + Dx(x, y) + Dx(x+1, y) + Dx(x-1, y+1) + Dx(x, y+1) + Dx(x+1, y+1)) / 9 (Expression 26)
  • Ey(x, y) = (Dy(x-1, y-1) + Dy(x, y-1) + Dy(x+1, y-1) + Dy(x-1, y) + Dy(x, y) + Dy(x+1, y) + Dy(x-1, y+1) + Dy(x, y+1) + Dy(x+1, y+1)) / 9 (Expression 27)
  • the integrated image obtained by integrating the differential images subjected to the averaging process described above is an image very similar to the averaged image. Equations 28 and 29 show the case where the image is sharpened.
  • Ex(x, y) = 2 * Dx(x, y) - (Dx(x-1, y-1) + Dx(x, y-1) + Dx(x+1, y-1) + Dx(x-1, y) + Dx(x, y) + Dx(x+1, y) + Dx(x-1, y+1) + Dx(x, y+1) + Dx(x+1, y+1)) / 9 (Expression 28)
  • Ey(x, y) = 2 * Dy(x, y) - (Dy(x-1, y-1) + Dy(x, y-1) + Dy(x+1, y-1) + Dy(x-1, y) + Dy(x, y) + Dy(x+1, y) + Dy(x-1, y+1) + Dy(x, y+1) + Dy(x+1, y+1)) / 9 (Expression 29)
  • The integral image formed by integrating a differential image subjected to the above sharpening process is very similar to an image obtained by sharpening the input image itself.
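  • The following sketch implements the 3*3 averaging of Expressions 26 and 27 and the sharpening of Expressions 28 and 29 as reconstructed above; the factor 2 in the sharpening form is an assumption based on the garbled source.

```python
import numpy as np

def box3(D):
    """3*3 box average with zero padding (Expressions 26 and 27)."""
    P = np.pad(D.astype(np.float64), 1)
    return sum(P[1 + dy:P.shape[0] - 1 + dy, 1 + dx:P.shape[1] - 1 + dx]
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0

def smooth_differentials(Dx, Dy):
    return box3(Dx), box3(Dy)                  # Expressions 26 and 27

def sharpen_differentials(Dx, Dy):
    # Expressions 28 and 29 as reconstructed: add the high-frequency residual
    # (D - mean) back onto the differential, i.e. E = 2*D - mean(D).
    return 2 * Dx - box3(Dx), 2 * Dy - box3(Dy)
```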
  • The processing of Non-Patent Document 3 can also be performed not on an input image but on a differential image.
  • In that case, the absolute value image (the square root of the sum of squares) of the X-direction partial differential image and the Y-direction partial differential image is calculated, and the direction of minimum change among eight directions is found for the absolute value image.
  • Processing along that direction can then be applied to the X-direction partial differential image and to the Y-direction partial differential image respectively.
  • In this way, edge-preserving processing, including the processing of Non-Patent Document 3, can be performed.
  • More generally, adaptive processing that changes the values of the differential image according to the result of evaluating whether the point of interest of the differential image is an edge portion or a flat portion can be performed; such adaptive processing is effective for filtering that preserves edge portions and smooths flat portions.
  • The above description assumed that the input image 201 is a monochrome grayscale image.
  • For a color image, the process of changing the differential image can be performed in the same manner.
  • When the RGB colors are indexed by a color number c, the input image 201 is represented by I[c], the partial differential images by Dx[c] and Dy[c], and the changed partial differential images by Ex[c] and Ey[c]; each expression is obtained by attaching [c] to each image, and each color can be computed independently.
  • Expression 23, however, can be replaced by the following Expression 30, in which the sum is taken over the color numbers 0 to 2.
  • D(x, y) = sqrt(Dx[0](x, y) * Dx[0](x, y) + Dy[0](x, y) * Dy[0](x, y) + Dx[1](x, y) * Dx[1](x, y) + Dy[1](x, y) * Dy[1](x, y) + Dx[2](x, y) * Dx[2](x, y) + Dy[2](x, y) * Dy[2](x, y)) (Expression 30)
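  • A minimal sketch of Expression 30, assuming the color planes are stacked along the first array axis.

```python
import numpy as np

def color_magnitude(Dx, Dy):
    """Expression 30: Dx and Dy hold the color planes c = 0..2 stacked along
    the first axis (shape (3, ny, nx)); the squared changes of all colors are
    summed before the square root is taken."""
    return np.sqrt((Dx * Dx + Dy * Dy).sum(axis=0))
```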
  • The same applies to three-dimensional images: the path matching integral value can be the average of the line integral values over the shortest paths.
  • The block average integral image is obtained from Equations 18 to 22 in the two-dimensional case; in the three-dimensional case, it is created from the symmetric expressions obtained by adding the z direction to Equations 18 to 22.
  • Images of four or more dimensions can be handled similarly.
  • In the above, the input image 201 is prepared as the image input, but another input image may be prepared in addition to the input image 201.
  • In that case, a differential image of each input image is created; an image obtained by integrating the modified differential image formed by adding the two is equal to the image obtained by adding the two input images.
  • Alternatively, in an overlapping portion, the values of the differential image of the input image 201 can be moved to the nearest contour portion, the overlapping portion set to 0, and the differential image of the other input image added there.
  • Thereby, a block average integral image with the other input image added can be created without impairing the overall luminance of the original input image 201.
  • FIG. 15 is a flowchart illustrating the level adjustment processing of the integral image according to the fifth embodiment.
  • In the fifth embodiment, the image processing apparatus 111 creates the path matching integral image 206 and then adjusts its level based on the input image 201, so that a level-adjusted path matching integral image 271 is created.
  • Specifically, pixel values of the path matching integral image 206 that exceed the maximum value of the input image 201 are set to that maximum value, and pixel values smaller than the minimum value of the input image 201 are set to that minimum value.
  • Alternatively, pixel values of the path matching integral image 206 smaller than 0 (a predetermined minimum value) can be set to 0, and pixel values larger than 255 (a predetermined maximum value) can be set to 255.
  • Thereby, the path matching integral image 206 can be displayed at an appropriate level: even when the display level of the path matching integral image 206 differs from that of the input image 201, the image can be displayed in the range given by the maximum and minimum values of the input image 201 rather than in the range given by its own maximum and minimum values, and unnatural display of the path matching integral image 206 can be prevented.
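  • A minimal sketch of this level adjustment; the function name is hypothetical.

```python
import numpy as np

def level_adjust(F, I):
    """Fifth-embodiment sketch: clip the path matching integral image to the
    range of the input image (or to fixed limits such as 0 and 255)."""
    return np.clip(F, I.min(), I.max())        # or: np.clip(F, 0, 255)
```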
  • FIG. 16 is a diagram showing an integrated image according to the presence or absence of the processing of FIG.
  • the block average integral image 206e is displayed with the display level adjusted to the range from the minimum value 3 to the maximum value 244 of the input image 201.
  • the block average integral image 206e2 is displayed with the display level adjusted to the range from its own minimum value of -128 to its own maximum value of 356.
  • although the block average integral images 206e and 206e2 are created with the same parameters, the appearance, such as the contrast, differs depending on the display level.
  • As another method for displaying the path matching integral image 206 of FIG. 2 at an appropriate level, information on a default display level of the path matching integral image 206 can be generated; the image display processing can refer to this default display level when displaying the path matching integral image 206, and a user who has viewed the displayed image can change the default display level, whereupon the display level is adjusted according to the change and the path matching integral image 206 is displayed again.
  • FIG. 17 is a flowchart illustrating the display processing of the integral image according to the sixth embodiment.
  • a default display level of the path matching integral image 206 is determined based on the input image 201.
  • a default value of the display level is calculated or set, and an attribute-added integral image 282, in which the default value is added as attribute data to the path matching integral image 206, is created and stored.
  • the attribute-added integrated image 282 is stored in the storage device 124.
  • the default value of the display level includes an upper limit value indicating the upper end of the display range and a lower limit value indicating the lower end.
  • the upper limit can be set to the maximum value of the input image 201
  • the lower limit can be set to the minimum value of the input image 201.
  • the upper limit may be set to 255 and the lower limit to 0.
  • in step S61, image display processing of the attribute-added integral image 282 is executed.
  • the attribute-added integral image 282 stored in step S60 is read and displayed on the display device.
  • the attribute-added integral image 282 is read from the storage device 124 and displayed on the display device 122.
  • the attribute-added integral image 282 gives default upper and lower display levels as attribute data together with the path matching integral image 206.
  • the image processing device 121 displays the path matching integral image 206 in the range between the upper limit and the lower limit.
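  • Displaying in the range between the upper limit and the lower limit can be sketched as a windowing operation (the exact mapping is an assumption, since the patent does not fix it; names are illustrative):

```python
import numpy as np

def to_display(integral_img, lower, upper):
    """Map [lower, upper] onto the 8-bit display range, clipping outside it."""
    x = np.clip(integral_img.astype(float), lower, upper)
    return ((x - lower) / max(float(upper - lower), 1e-9) * 255).astype(np.uint8)

# e.g. default display levels taken from the input image:
# shown = to_display(integral_img, input_img.min(), input_img.max())
```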
  • FIG. 18 is a diagram showing an integral image according to the presence or absence of the processing of FIG.
  • an upper limit value or a lower limit value of the display level can be designated from the input device 123 in FIG.
  • the block average integral image 206e whose display level is set from the minimum value 3 to the maximum value 244 of the input image 201 can be displayed at the default display level.
  • a block average integrated image 206e3 in which the display level is set from the minimum value 100 to the maximum value 244 of the input image 201 can be displayed.
  • FIG. 19 is a block diagram illustrating an example of a configuration and a processed image of the image processing apparatus according to the seventh embodiment.
  • the image processing apparatus includes a differential processing unit 311, a differential image change processing unit 312, and a path matching integration processing unit 313.
  • the differential processing unit 311 creates a differential image obtained by differentiating the input image 301.
  • the differentiation processing unit 311 performs differentiation in the one-dimensional direction in each of the X direction and the Y direction, thereby creating an X-direction partial differential image 302 and a Y-direction partial differential image 303.
  • the input image 301 is not limited to a photographed image photographed by the photographing device 100 of FIG. 1, but may be a computer graphic image created by an image processing device, or may be a copy image or a scanner image.
  • a character image in which gradual shading occurs in the background is taken as an example.
  • the gradual shading of the background of the input image 301 may be caused by lighting conditions when the imaging device 100 is a camera.
  • the image size of the input image 301 is 256 * 256; when the pixel (x, y) is in the background portion, the pixel value is 128 + ((128 - x) + (128 - y)) * 0.5, and when the pixel (x, y) is in a character portion, the pixel value is set to a value 50 lower than that of the background portion.
  • the pixel values of the X-direction partial differential image 302 and the Y-direction partial differential image 303 are 0.5 in the background portion; at the edges of the character portion, the square root of the sum of squares of the two values ranges from 49 to 51 depending on the angle of the character; and the pixel value inside the character portion is 0.
  • the differential image change processing unit 312 creates an X direction changed partial differential image 304 and a Y direction changed partial differential image 305 obtained by changing the X direction partial differential image 302 and the Y direction partial differential image 303.
  • the differential image change processing unit 312 sets the value to 0 if the absolute value of the pixel value of the X-direction partial differential image 302 is equal to or smaller than the threshold lv3, and likewise sets the value to 0 if the absolute value of the pixel value of the Y-direction partial differential image 303 is equal to or smaller than the threshold lv3.
  • the threshold value lv3 is set to a value that makes the pixel value due to the gentle inclination of the background portion zero.
  • the threshold value lv3 was set to 1 because the input image 301 was created such that the luminance changes by 0.5 per pixel in the X and Y directions in the background portion.
  • as a result, the background portion becomes 0, the edges of the character portion hardly change, and the inside of the character portion remains 0.
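  • The synthetic input image and the threshold processing with lv3 described above can be reproduced as follows (a sketch: the character mask is hypothetical, and forward differences stand in for the patent's differential operators):

```python
import numpy as np

H = W = 256
y, x = np.mgrid[0:H, 0:W]
img = 128 + ((128 - x) + (128 - y)) * 0.5      # background with gentle shading
# character pixels would be set 50 lower than the background, e.g.:
# img[char_mask] -= 50                         # char_mask is hypothetical

# partial differential images (forward differences, as an assumption)
dx = np.zeros_like(img, dtype=float)
dy = np.zeros_like(img, dtype=float)
dx[:, :-1] = img[:, 1:] - img[:, :-1]          # magnitude 0.5 in the background
dy[:-1, :] = img[1:, :] - img[:-1, :]

# change processing: zero all gradients whose absolute value is <= lv3
lv3 = 1.0
dx[np.abs(dx) <= lv3] = 0.0
dy[np.abs(dy) <= lv3] = 0.0
```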
  • the path matching integration processing unit 313 creates a path matching integration image 306 by performing path matching integration of the X direction changed partial differential image 304 and the Y direction changed partial differential image 305.
  • the path matching integration processing unit 313 can refer to the uniform image 307.
  • the path matching integration processing unit 313 creates a block average integration image as the path matching integration image 306 by the processing of FIG.
  • a block division integral image is created by the processing of FIG.
  • the average value of the block division integral image of each block is adjusted to the average value of the uniform image 307 (that is, a predetermined uniform value).
  • the image size of each block of the block division integral image is set to half the image size of the input image 301; that is, since the image size of the input image 301 is 256 * 256, the block size of the block division integral image is 128 * 128.
  • the value of the uniform image 307 was set to an intermediate value of 128.
  • by differentiating the input image 301, the X-direction partial differential image 302 and the Y-direction partial differential image 303 can be created, in which the value of the background portion, where the gradation of the input image 301 changes gradually, is reduced while the edges of the characters are preserved.
  • the value of the background portion of the X-direction partial differential image 302 and the Y-direction partial differential image 303 is thus much smaller than the value at the character edge portions.
  • by the change processing, an X-direction changed partial differential image 304 and a Y-direction changed partial differential image 305 can be created in which the value of the background portion, where the gradation of the input image 301 changes gently, is set to 0 while the edges of the characters are preserved.
  • since the value of the background portion is 0, it remains 0 when integrated. Therefore, by integrating the X-direction changed partial differential image 304 and the Y-direction changed partial differential image 305, a path matching integral image 306 can be created in which the gradual shading of the background portion is removed and the density of the background is uniform, while the edges of the characters of the input image 301 are preserved.
  • in the input image 301, the characters are bright in a bright portion of the background and dark in a dark portion of the background; for example, the letter A is brighter than the letter F.
  • in the path matching integral image 306, the shading between characters is made uniform, and the brightness of the character A and that of the character F become substantially equal.
  • FIG. 20 is a flowchart showing the binarization processing of the path matching integral image 306 of FIG.
  • the image processing apparatus 111 creates a binarized image 322 by binarizing the path matching integral image 306 created by the processing in FIG. 19.
  • the binarization process is a process of setting the pixel value to 0 when the pixel value of the image is equal to or smaller than the threshold, and setting the pixel value to 1 when the pixel value is higher than the threshold.
  • the binarized image is not limited to the case where the pixel value takes a binary value of 0 and 1, and the pixel value may be multiplied by 255 to take the binary value of 0 and 255.
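  • A minimal sketch of this binarization process (the function name is illustrative):

```python
import numpy as np

def binarize(img, threshold, high=1):
    """0 where the pixel value is <= threshold, high where it exceeds it.
    Use high=255 for a 0/255 binary image."""
    return np.where(img > threshold, high, 0)
```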
  • FIG. 21 is a diagram showing a binarized image of the input image 301 and the path matching integral image 306 of FIG.
  • a binarized image 308 is created by performing a binarization process on an input image 301 in which the density of the background is gently inclined.
  • the binarization threshold was set to 103.
  • in the binarized image 308, in a portion with a bright background, both the characters and the background become white and cannot be distinguished; in a portion with a dark background, both become black and cannot be distinguished.
  • on the other hand, when the path matching integral image 306 is created from the input image 301 by the processing of FIG. 19 and a binarized image 309 is created by binarizing the path matching integral image 306, it is possible to make only the background white and only the characters black. A binarized image 309 in which the background and the characters were separated was obtained for binarization thresholds from 98 to 135.
  • the input image 301 is created so as to be in the range of a luminance level from 0 to 255, and the path matching integral image 306 is adjusted to have a maximum value of 255 and a minimum value of 0.
  • a path matching integral image 306 can be created in which the edges of the characters of the input image 301 are preserved and the gradual gradient of the background density is removed, and a binarization threshold that separates the characters from the background can be set.
  • FIG. 22 is a diagram illustrating an example of another processed image of the image processing apparatus in FIG.
  • the input image 301c has been input to the image processing apparatus in FIG.
  • the background of the input image 301c is given a density gradient of 0.1 per pixel from right to left, centered at 128.
  • a uniform square having a density of 128 was created at the center of the input image 301c.
  • the differential image changing process 312 executes a changing process using Expressions 3 and 4, and sets lv to 1.
  • the input image 301c is displayed with image values from 115 to 142 mapped from the minimum to the maximum luminance.
  • the square placed at the center of the input image 301c has a uniform density, but the background has a gradation. For this reason, even though the density inside the square is actually uniform, a human illusion makes it appear as if a gradation were applied inside the square.
  • in the block average integral image 306b, a gradation appears inside the square located at the center.
  • the background also shows a slight change in shading; for example, the background becomes darker around a bright portion inside the square and brighter around a dark portion inside the square.
  • from this block average integral image 306b, by setting the value of the background other than the central square to a constant value of 128, a block average integral image 306c can be obtained.
  • in the block average integral image 306c, the change in shading of the background is removed while the gradation of the central square is preserved, and an image reproducing the human illusion is obtained.
  • FIG. 23 is a flowchart illustrating a method for generating a modified differential image of the image processing device according to the eighth embodiment.
  • this image processing apparatus includes differential image change processing units 400a and 400b.
  • the differential image change processing unit 400b includes a convolutional neural network 401.
  • the convolutional neural network 401 includes a convolution layer 402 and a neural net layer 403.
  • the differential image change processing unit 400a includes a convolution layer 402.
  • the convolution layer 402 performs convolution of the input image.
  • the input image can be an edge image created by subtracting a locally averaged image from the image.
  • the convolution layer 402 slides convolutions over small areas such as 3 * 3 or 5 * 5 across the entire image and stacks the results to extract features of the image (a plain sketch of such a convolution is given after this group of items).
  • the neural network layer 403 performs calculations for recognizing the features captured by the convolution layer 402.
  • the neural network layer 403 can improve the recognition rate by deep learning in which deep calculations are performed over many layers.
  • the neural network layer 403 outputs a recognition result 404.
  • as the recognition result, the recognized object is, for example, displayed surrounded by a rectangle, its outline is displayed, or its interior is painted black.
  • the network structure of the convolutional neural network 401 and the method of deep learning are described in, for example, the following Reference 4.
  • the method of learning the convolution layer 402 and the neural network layer 403 includes a method of performing pre-learning and a method of performing deep learning.
  • Reference 4 Takayuki Okaya, "Deep Learning", Kodansha
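  • As an illustration of the small-area convolution performed by the convolution layer 402, a plain-NumPy sketch follows (this is ordinary 2-D convolution with a 3 * 3 kernel, not the patent's trained network; the kernel is an arbitrary example):

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Slide a small kernel (e.g. 3*3 or 5*5) over the whole image,
    as a convolution layer does. A teaching sketch, not an efficient
    or trained implementation."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

edge_kernel = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], float)
# feature_map = conv2d_valid(image, edge_kernel)
```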
  • when the differential image change processing units 400a and 400b are used instead of the differential image change processing unit 212 in FIG. 2, differential images are used as the input to the convolutional neural network 401. Since there are two differential images, the X-direction partial differential image 202 and the Y-direction partial differential image 203, the amount of input data is twice as large as usual and the network capacity is correspondingly about twice as large. However, the differential images carry information from which the original image can be reproduced except for its offset, and the original image can be reproduced by giving some information regarding the offset or by limiting the images to those that can be calibrated internally.
  • the differential image change processing unit 400a performs the simplification processing 411 on the data appearing in the convolution layer 402.
  • in the simplification processing 411, sparse processing is performed to limit the data appearing in the convolution layer 402 to the convolution having the maximum value for each pixel.
  • the differential image change processing unit 400a performs the image reconstruction processing 412 on the sparse data.
  • in the image reconstruction processing 412, learning is performed on the sparse data so that the output image becomes as close as possible to the input image.
  • in this way, an output image in which the features of the input image appear well can be created, which can be used for noise removal and for creating deformed images.
  • the differential image change processing unit 400a outputs the images obtained by the image reconstruction processing 412 as the X-direction changed partial differential image 204a and the Y-direction changed partial differential image 205a. At this time, output images with little noise that express the characteristics of the input image are obtained as the X-direction changed partial differential image 204a and the Y-direction changed partial differential image 205a.
  • the differential image change processing unit 400b learns the extraction processing 421 of the object recognized via the neural network layer 403. For this learning, deep learning methods using learning input images, object recognition flags, and data indicating extraction areas are widely known.
  • the differential image change processing unit 400b performs the simplification processing 422 when the recognition object extraction processing 421 is performed on the output of the neural network layer 403.
  • in the simplification processing 422, based on the information of the object extracted in the recognition object extraction processing 421, the convolutional neural network 401 is traced backward, and selection and sparse processing of the parts of the convolution layer 402 related to the extracted object are performed.
  • the differential image change processing unit 400b performs the image reconstruction processing 423 on the selected sparse data.
  • in the image reconstruction processing 423, learning is performed on the selected sparse data so that the output image becomes as close as possible to the input image.
  • the differential image change processing unit 400b outputs the images obtained by the image reconstruction processing 423 as the X-direction changed partial differential image 204b and the Y-direction changed partial differential image 205b. At this time, since output images with little noise that well express the features of the object extracted from the input image are obtained as the X-direction changed partial differential image 204b and the Y-direction changed partial differential image 205b, it is possible to obtain, as the integral image, an output image in which noise is reduced and the shape of the extracted object appears well.
  • FIG. 24 is a block diagram illustrating a configuration of the path matching integration processing unit of the image processing device according to the ninth embodiment.
  • the image processing apparatus includes a neural network layer 431 and a parameter update amount calculation unit 432.
  • the neural network layer 431 creates a path matching integral image 433 based on the input image 201, the X-direction changed partial differential image 204, and the Y-direction changed partial differential image 205.
  • the parameter update amount calculation unit 432 calculates the parameter update amount of the neural network layer 431 based on the path matching integral images 206 and 433, and updates the parameters of the neural network layer 431.
  • the parameter update amount calculation unit 432 can use, for example, a block average integral image as the path matching integral image 206, as the reference data image for learning. Then, the parameter update amount calculation unit 432 can make the neural network layer 431 learn so that the path matching integral image 433 output from the neural network layer 431 approaches the reference data image as closely as possible. In this learning, the sum of squares over all pixels of the difference image between the path matching integral images 206 and 433 is used as the evaluation value, and a parameter update amount that reduces the evaluation value is obtained; a toy sketch of this learning loop is given below.
  • the calculation for obtaining the parameter update amount can be performed by deep learning such as a back propagation method for reducing the evaluation value.
  • the neural network layer 431 can have a function of creating an image substantially equivalent to the path matching integral image 206.
  • the neural network layer 431 trained in this way has some individuality depending on the reference data images used for learning and on the network structure, but machine learning can give it the function of performing a calculation similar to the path matching integration that created the path matching integral image 206 used for learning. That is, the neural network layer 431 trained in this way can execute path matching integration, or a slight variation of it, and can be used as the path matching integration processing unit 213 in FIG. 2.
  • the neural network layer 431 trained in this way can function as a unit that performs calculation including path matching integration by itself.
  • the neural network layer 431 can also create a path matching integral image based on an image obtained by adding noise to the input image 201, together with the X-direction changed partial differential image 204 and the Y-direction changed partial differential image 205. Then, the parameter update amount calculation unit 432 uses the path matching integral image 206 as the reference data image for learning and causes the neural network layer 431 to learn so that its output image approaches the reference data image as closely as possible. At this time, the neural network layer 431 can perform a process in which a noise removal function is added to the integration process by which the path matching integral image 206 used for learning was created. The neural network layer 431 thus trained can perform the path matching integration by which the path matching integral image 206 was created, and can be used as the path matching integration processing unit 213 in FIG. 2.
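  • The learning loop of FIG. 24 can be sketched as follows, assuming a toy linear layer in place of the (unspecified) neural network layer 431; the evaluation value is the sum of squares of the difference image, as in the text, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=64)               # stand-in for the flattened inputs
t = rng.normal(size=64)               # reference data image (e.g. image 206)
W = np.eye(64) * 0.5                  # toy "network": one linear layer

lr = 0.001
for step in range(200):
    y = W @ x                         # network output (cf. integral image 433)
    diff = y - t                      # difference image
    eval_value = np.sum(diff ** 2)    # evaluation value: sum of squares
    grad_W = 2.0 * np.outer(diff, x)  # gradient of the evaluation value
    W -= lr * grad_W                  # parameter update (cf. back propagation)
```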
  • an image obtained by multiplying the differential image by k (k being a real number greater than 1) has the same line integral value regardless of the path.
  • therefore, when the modified differential image is limited to the differential image multiplied by k, the mathematically equivalent result can be obtained by a simple process, without performing the path matching integration in which a line integral value is obtained for each path and the average value is used as the integral value.
  • in this case, the path matching integral image is equal to the image obtained by simply multiplying the original image by k, except for an offset.
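  • The k-times property can be checked numerically: integrating the k-multiplied differential image along any one path reproduces k times the original image up to a constant offset. A sketch with forward differences and an arbitrary offset choice:

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.uniform(0, 255, size=(32, 32))
k = 2.0

dx = np.diff(img, axis=1)              # X-direction differential
dy = np.diff(img, axis=0)              # Y-direction differential

# integrate k * gradient along one path: row 0 first, then down each column
F = np.zeros_like(img)
F[0, 0] = img[0, 0]                    # offset choice (an assumption)
F[0, 1:] = F[0, 0] + np.cumsum(k * dx[0])
F[1:, :] = F[0, :] + np.cumsum(k * dy, axis=0)

# equal to k * img except for a constant offset:
assert np.allclose(F - k * img, (F - k * img)[0, 0])
```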
  • FIG. 25 is a flowchart illustrating a block image creation method of the image processing apparatus according to the tenth embodiment. The processing of FIG. 25 includes steps S81 to S88. In step S81, the image 231 is divided into blocks, and the block of interest 240 is selected in order from among the blocks.
  • in step S82, the input image 201 is input and the image of the block of interest 240 is multiplied by a constant (for example, by k).
  • in step S84, the average value of the input image 201 is calculated; at this time, the average value in the average value calculation area 236 is obtained.
  • in step S85, the average value of the constant-multiplied image is calculated; at this time, the average value in the average value calculation area 236 is obtained for the constant-multiplied image obtained in step S82.
  • in step S86, an offset value is calculated; the difference between the average value obtained in step S84 and the average value obtained in step S85 is set as the offset change amount.
  • in step S87, the offset of the constant-multiplied image is changed: the offset change amount is added to the constant-multiplied image obtained in step S82 to create an offset-changed constant-multiplied image.
  • in step S88, if the block of interest 240 is the last block of the image 231, the process ends; otherwise, the process returns to step S81. When the last block has been processed, a block division constant-multiplied image 206j having the same size as the input image 201 is obtained.
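  • Steps S81 to S88 in sketch form (the average value calculation area is taken to be the whole block here, which is an assumption; the patent allows other areas, and the names are illustrative):

```python
import numpy as np

def block_constant_multiply(img, k, block=64):
    """Per block: multiply by k (step S82), then shift the offset so the
    block's average matches the input's average (steps S84-S87)."""
    out = img.astype(float).copy()
    H, W = img.shape
    for by in range(0, H, block):
        for bx in range(0, W, block):
            blk = img[by:by + block, bx:bx + block].astype(float)
            scaled = k * blk                     # step S82
            offset = blk.mean() - scaled.mean()  # steps S84-S86
            out[by:by + block, bx:bx + block] = scaled + offset  # step S87
    return out
```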
  • the block division constant-multiplied image 206j can be displayed at the same display level as the input image 201. Since the image within each block is multiplied by k, the local contrast can be made k times larger than that of the input image 201.
  • in other words, an image equivalent to a block division integral image in which the local shading change is multiplied by k is obtained.
  • the process of creating a block division integral image involves differentiating the input image and integrating each block; however, since the image obtained by multiplying the differential image by k and performing path matching integration is equal, except for an offset, to the image obtained by multiplying the input image by k before differentiation, the differentiation and per-block integration can be replaced by a constant multiplication for each block.
  • offset adjustment is then performed to match the average value in each block to the input image 201, the uniform image 307, or the like.
  • thus, multiplying the image in each block by k and matching the in-block average value to the input image 201, the uniform image 307, or the like realizes a mathematically equivalent process, including the offset.
  • the local contrast in the block increases by k times, but block distortion occurs due to discontinuity in density between the blocks.
  • to eliminate the discontinuity in density between blocks, a plurality of block division constant-multiplied images can be created and combined.
  • the local contrast then deviates slightly from exactly k times, becoming approximately k times.
  • if the resulting display is unsuitable, the display level of the image may be changed, or an image obtained by processing the input image 201 so that the minimum value or the maximum value of the average in-block luminance is changed may be used as the reference image for matching the average values.
  • FIG. 26 is a flowchart illustrating a block distortion removal method for a block image according to the eleventh embodiment.
  • in step S41 of FIG. 26, a division method is determined for creating four block division constant-multiplied images whose divisions differ from each other by half a block.
  • in step S80a, the processing S80 of FIG. 25 is executed in accordance with the first division Ba to create the first block division constant-multiplied image.
  • in step S80b, the processing S80 of FIG. 25 is executed in accordance with the second division Bb to create the second block division constant-multiplied image.
  • in step S80c, the processing S80 of FIG. 25 is executed in accordance with the third division Bc to create the third block division constant-multiplied image.
  • in step S80d, the processing S80 of FIG. 25 is executed in accordance with the fourth division Bd to create the fourth block division constant-multiplied image.
  • in step S42, weights are determined according to the distance between the pixel of interest and the center of the block, among the four block division constant-multiplied images, to which the pixel of interest belongs, and the weighted average of the four block division constant-multiplied images is calculated to create a block average constant-multiplied image 206k.
  • the reference image for matching the average value may be the uniform image 307 or an image obtained by processing the input image 201.
  • in summary, the image is divided into blocks, the image in each block is multiplied by a constant, and average value matching processing is performed to set the average value in the average value calculation area of each block to a desired value, thereby obtaining a block division constant-multiplied image.
  • a plurality of block division constant-multiplied images with different divisions are created, and a weighted average, weighted according to the distance between the pixel of interest and the center of the block of each block division constant-multiplied image to which the pixel of interest belongs, is taken to create the block average constant-multiplied image.
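  • A simplified sketch of this weighted averaging over several block division constant-multiplied images follows; the weight function is an assumption (the text only requires weights that depend on the distance to the block center), and all names are illustrative:

```python
import numpy as np

def blend_block_images(images, block, offsets):
    """Weighted average of block division constant-multiplied images whose
    block grids are shifted by the given offsets (e.g. half a block).
    Weights fall off with the distance from the pixel to the center of the
    block it belongs to in each division."""
    H, W = images[0].shape
    yy, xx = np.mgrid[0:H, 0:W].astype(float)
    num = np.zeros((H, W))
    den = np.zeros((H, W))
    for img, (oy, ox) in zip(images, offsets):
        # center of the block containing each pixel in this division
        cy = ((yy - oy) // block) * block + oy + block / 2
        cx = ((xx - ox) // block) * block + ox + block / 2
        dist = np.hypot(yy - cy, xx - cx)
        w = np.maximum(block - dist, 1e-6)   # nearer center -> larger weight
        num += w * img
        den += w
    return num / den

# e.g. four divisions shifted by half a block (64-pixel blocks):
# blended = blend_block_images([i1, i2, i3, i4], 64,
#                              [(0, 0), (0, 32), (32, 0), (32, 32)])
```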
  • instead of directly multiplying the image in a block by a constant, an image obtained by Fourier-transforming the image in the block may be multiplied by a constant and inverse-Fourier-transformed, or an image obtained by wavelet-transforming the image in the block may be multiplied by a constant and inverse-wavelet-transformed.
  • in general, the process of multiplying an image by a constant includes the process of multiplying a transformed image by a constant and performing the inverse transformation, and the process of multiplying the image by a constant except for the offset.
  • in the embodiments above, the method of calculating one integral value based on a plurality of line integral values, when the line integral values of the plurality of paths obtained by path integration of the modified differential image differ, has been described.
  • alternatively, the modified differential image may be matched so that its line integral value is uniquely determined regardless of the path, and the matched differential image may then be line-integrated along an arbitrary path to obtain a mathematically equivalent result.
  • FIG. 27 is a block diagram illustrating a configuration of an image processing device according to the twelfth embodiment.
  • the image processing apparatus includes a differential processing unit 211, a differential image change processing unit 212, a matched differential image creation processing unit 214, and a matched differential image integration processing unit 215.
  • the differential processing unit 211 and the differential image change processing unit 212 have the same configuration as in FIG.
  • the matched differential image creation processing unit 214 creates an X-direction matched partial differential image 207 and a Y-direction matched partial differential image 207 in which the X-direction changed partial differential image 204 and the Y-direction changed partial differential image 205 are matched.
  • the X direction matched partial differential image 207 and the Y direction matched partial differential image 207 are images in which the line integral value is uniquely determined regardless of the path.
  • the matched differential image integration processing unit 215 creates a path integral image 209 obtained by performing path integration on the X direction matched partial differential image 207 and the Y direction matched partial differential image 207.
  • the matched differential image integration processing unit 215 can refer to the input image 201.
  • the process of creating the path integral image 209 is mathematically equivalent to the process of creating the path matching integral image 206 in FIG.
  • the matched differential image integration processing unit 215 obtains an integral value by performing line integration with reference to the X-direction matched partial differential image 207 and the Y-direction matched partial differential image 207 for successive points starting from the initial point.
  • since the matched differential images are matched, the value obtained by line integration does not change whichever path is followed.
  • therefore, the route can be narrowed down to one for the line integration; for example, a route can be determined by an ordering in which left-right movement is given priority over up-down movement.
  • first, the changed differential image E is substituted for the differential image E'; that is, the value of the X-direction changed partial differential image 204 is substituted for the X-direction differential image Ex', and the value of the Y-direction changed partial differential image 205 is substituted for the Y-direction differential image Ey'. At this point, the differential image E' is not yet matched; the matched differential image is finally obtained by sequentially performing the following matching processing.
  • Matching means changing the value of the differential image E' so that the line integral value is the same regardless of the path.
  • FIG. 28 is a diagram illustrating a state of a gradient in an adjacent four-point mesh when a matched differential image is created by the image processing apparatus in FIG. 27.
  • consider line integration along paths from the initial point (x0, y0) to the point (x0+1, y0+1). The routes include a route R1 of (x0, y0) → (x0+1, y0) → (x0+1, y0+1) and a route R2 of (x0, y0) → (x0, y0+1) → (x0+1, y0+1).
  • the path matching integration of the twelfth embodiment can be executed using the following equations 31 to 34.
  • F is a path matching integral image, and the value of (x0, y0) of the input image 201 is substituted for F (x0, y0).
  • the matching process is also performed on the other adjacent four-point meshes including the initial point (x0, y0).
  • in an adjacent four-point mesh, there are two paths from a point at distance 1 from the initial point (x0, y0) to a point at distance 2.
  • the average of the line integral values of the two paths is set as the new line integral value so that the line integral values of the two paths become the same, and the value of the differential image E', which is the gradient from the point at distance 1 toward the point at distance 2, is changed accordingly.
  • similarly, for an adjacent four-point mesh at distance k+1, the value of the differential image E', which is the gradient from a point at distance k+1 toward a point at distance k+2, can be changed so that the average of the line integrals along each path becomes the new line integral value.
  • when the matching process is started from the adjacent four-point meshes including the initial point (x0, y0) and performed on successively more distant adjacent four-point meshes, after the matching process for all adjacent four-point meshes in the area for which integral values are to be obtained, a matched differential image is obtained as the differential image E'.
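  • Since Expressions 31 to 43 are not reproduced in this text, the following is only one plausible reading of the matching step for a single adjacent four-point mesh: the two two-leg routes are forced to the average of their line integral values.

```python
import numpy as np

def match_mesh(Ex, Ey, x, y):
    """Matching for one adjacent four-point mesh whose nearest point is
    (x, y) and farthest point is (x+1, y+1); the orientation is assumed
    for simplicity. Ex[y, x] is the gradient from (x, y) to (x+1, y);
    Ey[y, x] is the gradient from (x, y) to (x, y+1)."""
    e1 = Ex[y, x] + Ey[y, x + 1]   # route 1: right, then up
    e2 = Ey[y, x] + Ex[y + 1, x]   # route 2: up, then right
    e = (e1 + e2) / 2              # new common line integral value
    Ey[y, x + 1] = e - Ex[y, x]    # adjust the second leg of route 1
    Ex[y + 1, x] = e - Ey[y, x]    # adjust the second leg of route 2
```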
  • FIG. 29 is a flowchart showing a matched differential image creating process of the image processing apparatus of FIG.
  • first, the matched differential image creation processing unit 214 in FIG. 27 calculates the distance from the initial point P0 for each adjacent four-point mesh of the image 231 having the same size as the input image 201.
  • the distance from the initial point P0 to the nearest of the four points of an adjacent four-point mesh is defined as the distance between that adjacent four-point mesh and the initial point P0.
  • in step S92, the matched differential image creation processing unit 214 substitutes the changed differential image E for the differential image E': the value of the X-direction changed partial differential image 204 is substituted for the X-direction differential image Ex', and the value of the Y-direction changed partial differential image 205 is substituted for the Y-direction differential image Ey'. At this stage, the differential image E' has not yet been matched.
  • in step S93, the matched differential image creation processing unit 214 performs the matching processing on the differential image E' in ascending order of the distance of the adjacent four-point meshes; after the processing, the differential image E' becomes a matched differential image.
  • in the adjacent four-point mesh of interest, matching is performed on the two routes from the point closest to the initial point P0 to the point farthest from it. For example, assuming that, in the adjacent four-point mesh of interest, the point closest to the initial point P0 is (x, y) and the point farthest from it is (x+α, y+β), the following Expressions 39 to 43 are used for the matching, where α and β are 1 or -1.
  • in Expression 41, e' is set to the average value (e1' + e2')/2, but it may instead be set to one of e1' and e2', to a weighted average with an appropriate weight, or to a value deviating slightly from the average.
  • the process of creating a differential image matched by the averaging of Expression 41 and then performing line integration is substantially equivalent to the process of taking, for the unmatched differential image, the average of the different line integral values over different paths as the integral value; it is therefore included in the same-distance path average integration.
  • likewise, the process of changing the modified differential images (the X-direction changed partial differential image 204 and the Y-direction changed partial differential image 205) so as to obtain the matched differential image is included in the matching process.
  • the configuration is shown in which the differential processing unit 211, the differential image change processing unit 212, the matched differential image creation processing unit 214, and the matched differential image integration processing unit 215 are provided in the image processing apparatus.
  • the differential processing unit 211, the differential image change processing unit 212, the matched differential image creation processing unit 214, and the matched differential image integration processing unit 215 may instead be realized by separate programs or provided in separate devices and operated separately.
  • for example, the matched differential image integration processing unit 215, which receives the output of the matched differential image creation processing unit 214 as its input, may be realized by another program or provided in another device, and the integration processing of the matched differential image may be performed by that other program or device.
  • FIG. 30 is a diagram illustrating an example of the distance between adjacent four-point meshes when the matched differential image is created by the image processing apparatus in FIG. 27.
  • a value obtained by calculating the distance from the initial point P0 is assigned to each adjacent four-point mesh. For example, it can be seen that there are four adjacent four-point meshes at distance 0 including the initial point P0, eight adjacent four-point meshes at distance 1 adjacent to them, and twelve adjacent four-point meshes at distance 2 adjacent to those.
  • in Reference 5, the radiographed data is X-ray CT or MRI data, and matching processing is performed on a differential image created from such data.
  • the matching process of Reference 5 is described in "V. IMAGE RECONSTRUCTION FROM GRADIENTS" in that literature.
  • Reference 6 creates, from the shadows and rays of a photographed image, a differential image representing irregularities in the direction perpendicular to the photographing surface of the subject, and performs matching processing of that differential image representing irregularities.
  • however, the above-mentioned References 5 and 6 differ from the present embodiment in the overall configuration, which is characterized in that the differential image of the input image is obtained, the differential image is changed, and the integration processing of line integration or path matching integration is performed on the changed differential image.
  • FIG. 31 is a block diagram illustrating a hardware configuration example of the image processing apparatus of FIG. 1. In FIG. 31, the image processing apparatus 111 includes a processor 11, a communication control device 12, a communication interface 13, a main storage device 14, and an external storage device 15.
  • the processor 11, the communication control device 12, the communication interface 13, the main storage device 14, and the external storage device 15 are mutually connected via an internal bus 16.
  • the main storage device 14 and the external storage device 15 are accessible from the processor 11.
  • an input device 20 and an output device 21 are provided outside the image processing device 111.
  • the input device 20 and the output device 21 are connected to the internal bus 16 via the input / output interface 17.
  • the input device 20 is, for example, a keyboard, a mouse, a touch panel, a card reader, or a voice input device.
  • the output device 21 is, for example, a screen display device (a liquid crystal monitor, an organic EL (Electro Luminescence) display, a graphic card, or the like), an audio output device (such as a speaker), or a printing device.
  • the processor 11 is hardware that controls the operation of the entire image processing apparatus 111.
  • the processor 11 may be a general-purpose processor or a dedicated processor specialized in image processing.
  • the main storage device 14 can be composed of, for example, a semiconductor memory such as an SRAM or a DRAM.
  • the main storage device 14 can store a program being executed by the processor 11 or provide a work area for the processor 11 to execute the program.
  • the external storage device 15 is a storage device having a large storage capacity, such as a hard disk drive or an SSD.
  • the external storage device 15 can hold executable files of various programs and data used for executing the programs.
  • the external storage device 15 can store an image processing program 15A.
  • the image processing program 15A may be software that can be installed in the image processing device 111, or may be incorporated in the image processing device 111 as firmware.
  • the communication control device 12 is hardware having a function of controlling communication with the outside.
  • the communication control device 12 is connected to a network 19 via a communication interface 13.
  • the network 19 may be a WAN (Wide Area Network) such as the Internet, a LAN (Local Area Network) such as WiFi, or a mixture of WAN and LAN.
  • the input / output interface 17 converts data input from the input device 20 into a data format that can be processed by the processor 11, and converts data output from the processor 11 into a data format that can be output by the output device 21.
  • the processor 11 reads the image processing program 15A into the main storage device 14 and executes it, whereby it can create a differential image by differentiating the input image, create a modified differential image by changing the differential image, and create a path integral image by path-integrating the modified differential image.
  • the image processing program 15A can realize the functions of the differential processing unit 211, the differential image change processing unit 212, and the path matching integration processing unit 213 in FIG.
  • the execution of the image processing program 15A may be shared by a plurality of processors or computers.
  • the processor 11 may instruct a cloud computer or the like to execute all or part of the image processing program 15A via the network 19 and receive the execution result.
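  • Pulling the pieces together, the framework of differentiation, change processing, and integration can be sketched end to end. Here the path matching integration is reduced to the average of just two line-integral routes, which only illustrates the idea; the embodiments average many shortest paths or use block averaging, and the names and operators below are ours:

```python
import numpy as np

def path_matching_integral(input_img, lv=1.0):
    """Differentiate the input, change the differential images, then
    integrate, averaging the line integrals of two representative paths
    (row-0-first and column-0-first)."""
    f = input_img.astype(float)
    dx = np.zeros_like(f)
    dy = np.zeros_like(f)
    dx[:, 1:] = f[:, 1:] - f[:, :-1]     # X-direction partial differential
    dy[1:, :] = f[1:, :] - f[:-1, :]     # Y-direction partial differential

    dx[np.abs(dx) <= lv] = 0.0           # differential image change:
    dy[np.abs(dy) <= lv] = 0.0           # drop gentle background gradients

    # line integral, route A: along row 0, then down each column
    Fa = np.cumsum(dx[0:1, :], axis=1) + np.cumsum(dy, axis=0)
    # line integral, route B: down column 0, then along each row
    Fb = np.cumsum(dy[:, 0:1], axis=0) + np.cumsum(dx, axis=1)

    return f[0, 0] + (Fa + Fb) / 2       # average as the integral value
```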

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The present invention makes it possible to implement a process for creating an integral image from changed differential images in which a differential image has been processed and changed, or to implement an equivalent process thereto. A differential processing unit 211 carries out a one-dimensional direction differentiation for both an X direction and a Y direction, thereby creating an X direction partial differential image 202 and a Y direction partial differential image 203. A differential image change processing unit 212 creates an X direction changed partial differential image 204 and a Y direction changed partial differential image 205 in which the X direction partial differential image 202 and the Y direction partial differential image 203 have been subjected to a change process. A route matching integration processing unit 213 creates a route matching integrated image 206 in which the X direction changed partial differential image 204 and the Y direction changed partial differential image 205 have been subjected to a route matching integration.

Description

Image processing device
However, in the related art, although a differential image, which is a kind of edge image, is created, a modified differential image obtained by processing and changing the original differential image may have line integral values that differ depending on the path. Strictly speaking, therefore, no inverse transformation exists for such a modified differential image, and an integral image must be created by adding some kind of matching processing. Until now, no image processing framework has been known for creating an integral image from a modified differential image obtained by processing and changing the original differential image.
To achieve the above object, an image processing apparatus according to a first aspect includes a differential processing unit that creates a differential image by differentiating an input image, a differential image change processing unit that creates a modified differential image by changing the differential image, and a path integration processing unit that creates a path integral image by path-integrating the modified differential image.
According to the present invention, it is possible to realize a process of creating an integral image from a modified differential image obtained by processing and changing a differential image, or a process equivalent thereto.
FIG. 1 is a block diagram illustrating the configuration of the image processing system according to the first embodiment.
FIG. 2 is a block diagram showing an example of the configuration of the image processing apparatus of FIG. 1 and of its processed images.
FIG. 3 is a diagram illustrating an example of the directions of a processed image of the image processing apparatus of FIG. 1.
FIG. 4 is a flowchart illustrating the path integration processing of the image processing apparatus of FIG. 1.
FIG. 5 is a diagram showing a path matching integration method for the entire processed image of the image processing apparatus of FIG. 1.
FIG. 6 is a diagram showing a path matching integration method for a part of the processed image of the image processing apparatus of FIG. 1.
FIG. 7 is a flowchart illustrating a method of changing the offset of an integral image according to the second embodiment.
FIG. 8 is a diagram showing a setting example of the average value calculation area of FIG. 7.
FIG. 9 is a diagram illustrating a method for creating an integral image according to the third embodiment.
FIG. 10 is a flowchart illustrating the method for creating an integral image according to the third embodiment.
FIG. 11 is a diagram showing an example of block division of a block division integral image according to the fourth embodiment.
FIG. 12 is a diagram showing the positional relationship between the four blocks of FIG. 11 and the pixel of interest.
FIG. 13 is a flowchart illustrating a noise removal method for a block division integral image according to the fourth embodiment.
FIG. 14 is a diagram showing integral images of the input image of FIG. 2 with and without the processing of FIG. 13.
FIG. 15 is a flowchart illustrating the level adjustment processing of an integral image according to the fifth embodiment.
FIG. 16 is a diagram showing integral images with and without the processing of FIG. 15.
FIG. 17 is a flowchart illustrating the display processing of an integral image according to the sixth embodiment.
FIG. 18 is a diagram showing integral images with and without the processing of FIG. 17.
FIG. 19 is a block diagram illustrating an example of the configuration and processed images of the image processing apparatus according to the seventh embodiment.
FIG. 20 is a flowchart showing the binarization processing of the integral image of FIG. 19.
FIG. 21 is a diagram showing binarized images of the input image and of the integral image of FIG. 19.
FIG. 22 is a diagram illustrating an example of other processed images of the image processing apparatus of FIG. 19.
FIG. 23 is a flowchart illustrating a method for creating a modified differential image in the image processing apparatus according to the eighth embodiment.
FIG. 24 is a block diagram illustrating the configuration of the path matching integration processing unit of the image processing apparatus according to the ninth embodiment.
FIG. 25 is a flowchart illustrating a block image creation method of the image processing apparatus according to the tenth embodiment.
FIG. 26 is a flowchart illustrating a block distortion removal method for a block image according to the eleventh embodiment.
FIG. 27 is a block diagram illustrating the configuration of an image processing apparatus according to the twelfth embodiment.
FIG. 28 is a diagram illustrating gradients in an adjacent four-point mesh when a matched differential image is created by the image processing apparatus of FIG. 27.
FIG. 29 is a flowchart showing the matched differential image creation processing of the image processing apparatus of FIG. 27.
FIG. 30 is a diagram illustrating an example of the distances of adjacent four-point meshes when a matched differential image is created by the image processing apparatus of FIG. 27.
FIG. 31 is a block diagram illustrating a hardware configuration example of the image processing apparatus of FIG. 1.
The embodiments will be described with reference to the drawings. The embodiments described below do not limit the invention according to the claims, and not all of the elements and combinations thereof described in the embodiments are necessarily essential to the solution of the invention.
FIG. 1 is a block diagram illustrating the configuration of the image processing system according to the first embodiment. In FIG. 1, the image processing system includes an imaging device 100, image processing devices 111, 121, and 131, display devices 112, 122, and 132, input devices 113, 123, and 133, and storage devices 114, 124, and 134.
The image processing devices 111, 121, and 131 are connected to each other via a communication network 140 such as the Internet. The image processing device 111 is connected to the imaging device 100, the display device 112, the input device 113, and the storage device 114. The image processing device 121 is connected to the display device 122, the input device 123, and the storage device 124. The image processing device 131 is connected to the display device 132, the input device 133, and the storage device 134.
The photographing device 100 photographs a subject and generates image data. The photographing device 100 is, for example, a digital camera, a camera attached to a smartphone or mobile phone, a scanner, an X-ray photography device or MRI (Magnetic Resonance Imaging) device used in medical settings, a surveillance camera used at monitoring sites, or any of various imaging devices used at inspection sites that capture images using ultrasonic waves, infrared light, visible light, ultraviolet light, X-rays, gamma rays, electron beams, and the like.
The image processing device 111 can receive image data from the photographing device 100 and perform various kinds of image processing based on input information from the input device 113. The image processing device 111 can also display processing results including processed image data on the display device 112, store them in the storage device 114, and transmit them over the communication network 140. Furthermore, the image processing device 111 can receive request information from the outside and transmit various kinds of information, such as image data stored in the storage device 114, to the outside. The image processing device 111 may be, for example, a general-purpose computer such as a workstation, desktop personal computer, notebook personal computer, tablet terminal, or smartphone, or may be dedicated image processing hardware.
A display, a television, or the like can be used as the display device 112. A keyboard, a mouse, or the like can be used as the input device 113. A magnetic disk device, an optical disk device, an SSD (Solid State Drive), a USB (Universal Serial Bus) memory, or the like can be used as the storage device 114.
In a notebook personal computer, tablet terminal, smartphone, or the like, the image processing device 111, the display device 112, the input device 113, and the storage device 114 are integrated.
The communication network 140 is a line capable of transmitting and receiving various kinds of information data including image data, and can connect to locations around the world. For example, the Internet can be used as the communication network 140. The communication network 140 may also include a dedicated local line.
The image processing device 121 can receive image data from the storage device 114 connected to the image processing device 111 and perform various kinds of image processing based on input information from the input device 123. The image processing device 121 can also display processing results including processed image data on the display device 122, store them in the storage device 124, and transmit them over the communication network 140. Furthermore, the image processing device 121 can receive request information from the outside and transmit various kinds of information, such as image data stored in the storage device 124, to the outside.
The image processing device 131 can receive image data from the storage device 124 connected to the image processing device 121 and perform various kinds of image processing based on input information from the input device 133. The image processing device 131 can also display processing results including processed image data on the display device 132, store them in the storage device 134, and transmit them over the communication network 140.
The image processing function of each of the image processing devices 111, 121, and 131 can be implemented by installing software (a program) that realizes the image processing. When the photographing device 100 has a built-in image processing device, that built-in image processing device can also perform the image processing, and the image processing can also be performed by mounting dedicated image processing hardware.
In this image processing system, for example, an individual can use a digital camera as the photographing device 100 and a notebook personal computer with a built-in storage device 114 as the image processing device 111. Image data photographed by the individual with the digital camera is stored in the storage device 114 and uploaded via the communication network 140 to the storage device 124 connected to the image processing device 121 of an external SNS (social networking service) company, so that the images can be made widely accessible to the public. A user can then view the uploaded images on the display device 132 connected to the user's own image processing device 131.
In a medical setting, image data held in various imaging devices serving as the photographing device 100, such as an X-ray photography device or MRI, can be sent from an image processing device 111 connected to, or built into, those imaging devices to an image processing device 121 serving as a data server in the hospital. A doctor can then view the images on the display device 132 connected to the image processing device 131.
Here, the image processing device 111 can create a differential image by differentiating the photographed image captured by the photographing device 100, create a modified differential image by applying modification processing to the differential image, and create a path integral image by path-integrating the modified differential image. By performing the modification processing on the differential image, various characteristics can be given to the resulting path integral image, depending on how the modification is made. For example, a sharpened image with improved contrast while preserving edges, or a noise-reduced image with reduced noise components, can be obtained.
FIG. 2 is a block diagram illustrating the configuration of the image processing apparatus of FIG. 1 together with an example of processed images.
As shown in FIG. 2, the image processing device 111 includes a differential processing unit 211, a differential image modification processing unit 212, and a path-matched integration processing unit 213.
The differential processing unit 211 creates a differential image by differentiating an input image 201. The input image 201 may be a color image or a monochrome image, and may be an image photographed by the photographing device 100 of FIG. 1. In FIG. 2, a monochrome image having a grayscale value at each pixel is taken as an example of the input image 201. When the input image 201 is a two-dimensional image, the differentiation direction can be set in two directions, the X direction and the Y direction. In this case, the differential processing unit 211 creates an X-direction partial differential image 202 and a Y-direction partial differential image 203 by performing one-dimensional differentiation in each of the X direction and the Y direction.
FIG. 3 is a diagram illustrating an example of the directions of a processed image in the image processing apparatus of FIG. 1.
As shown in FIG. 3, the input image 201 has two directions, vertical and horizontal, on a two-dimensional plane. The upper left point of the two-dimensional plane is defined as the coordinate origin 221, the rightward direction as the X direction 222, and the downward direction as the Y direction 223.
Denoting the input image 201 by I, the pixel value of the input image 201 at the point (pixel) at coordinates (x, y) can be expressed as I(x, y). Denoting the X-direction partial differential image 202 by Dx and the Y-direction partial differential image 203 by Dy, the differential values at coordinates (x, y) can be given by the following Equations 1 and 2.
(Equation 1)
 Dx(x, y) = I(x+1, y) - I(x, y)

(Equation 2)
 Dy(x, y) = I(x, y+1) - I(x, y)
The differential images described above are partial differential images. Alternatively, a differential image may be an image linked to partial differential images by a transform and its inverse. For example, the X-direction and Y-direction partial differential images can be converted into a magnitude image and an angle image, and the magnitude image and angle image can be inversely converted back into the X-direction and Y-direction partial differential images. Since a differential image is either a partial differential image or an image convertible into one, line integration along a path is possible.
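As a concrete illustration of Equations 1 and 2, the following is a minimal sketch in Python/NumPy (the language and the function name are choices made here, not part of the original disclosure), assuming a two-dimensional grayscale array indexed as [y, x]:

```python
import numpy as np

def partial_differentials(image):
    """Forward-difference partial differential images of Equations 1 and 2.

    `image` is assumed to be a 2-D grayscale array indexed as [y, x].
    The last column of Dx and the last row of Dy are left at zero so
    both differentials keep the input size.
    """
    img = image.astype(np.float64)
    dx = np.zeros_like(img)
    dy = np.zeros_like(img)
    dx[:, :-1] = img[:, 1:] - img[:, :-1]  # Dx(x, y) = I(x+1, y) - I(x, y)
    dy[:-1, :] = img[1:, :] - img[:-1, :]  # Dy(x, y) = I(x, y+1) - I(x, y)
    return dx, dy
```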
Note that in the field of general image processing there exist differential images that differ from the definition above. For example, Reference 1 below introduces a differential image defined as the inverse operation of the integral image referred to in Reference 2 below. The integral image of Reference 2 does not use line integration along a path, so its definition differs from that of the integral image of this embodiment. Moreover, the differential image of Reference 1 cannot be line-integrated along a path, so by itself it is not a differential image targeted by this embodiment; to make it a target, some information would have to be added so that path integration becomes possible. As described above, a differential image targeted by this embodiment provides information that permits line integration along a path.
Reference 1: Kohei Inoue, Kenji Hara, Kiichi Urahama, "Integral Image-Based Differential Filters," International Journal of Computer, Electrical, Automation, Control and Information Engineering, vol. 8, no. 5, pp. 812-821, 2014.
The differential image modification processing unit 212 creates an X-direction modified partial differential image 204 and a Y-direction modified partial differential image 205 by applying modification processing to the X-direction partial differential image 202 and the Y-direction partial differential image 203. There are various ways to modify the partial differential images. For example, the absolute value of each pixel value can be scaled up or down according to its magnitude. Denoting the X-direction modified partial differential image 204 by Ex and the Y-direction modified partial differential image 205 by Ey, the modification from Dx and Dy to Ex and Ey can be given, for example, by the following Equations 3 and 4.
(Equation 3)
 Ex(x, y) = k*(Dx(x, y) - lv) + lv    (when Dx(x, y) ≥ lv)
 Ex(x, y) = Dx(x, y)*|Dx(x, y)|/lv    (when |Dx(x, y)| < lv)
 Ex(x, y) = k*(Dx(x, y) + lv) - lv    (when Dx(x, y) ≤ -lv)

(Equation 4)
 Ey(x, y) = k*(Dy(x, y) - lv) + lv    (when Dy(x, y) ≥ lv)
 Ey(x, y) = Dy(x, y)*|Dy(x, y)|/lv    (when |Dy(x, y)| < lv)
 Ey(x, y) = k*(Dy(x, y) + lv) - lv    (when Dy(x, y) ≤ -lv)
Here, k and lv are predetermined values, and |Dx(x, y)| is the absolute value of Dx(x, y). By the processing of Equation 3, where the absolute value of Dx(x, y) is smaller than lv, the value follows the squared expression and its absolute value becomes even smaller; where the absolute value is larger than lv, the portion exceeding lv is multiplied by k.
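The following is a minimal sketch of the modification of Equations 3 and 4, under the NumPy conventions of the sketch above; the default values of k and lv are illustrative only:

```python
import numpy as np

def modify_differential(d, k=2.0, lv=8.0):
    """Modification of Equations 3 and 4 applied elementwise to a
    partial differential image d (Dx or Dy).

    Components with |d| < lv follow the squared rule d*|d|/lv and are
    attenuated further, while components beyond +lv or -lv have the
    excess over lv multiplied by k.
    """
    e = d * np.abs(d) / lv                        # |d(x, y)| < lv
    e = np.where(d >= lv, k * (d - lv) + lv, e)   # d(x, y) >= lv
    e = np.where(d <= -lv, k * (d + lv) - lv, e)  # d(x, y) <= -lv
    return e
```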
The path-matched integration processing unit 213 creates a path-matched integral image 206 by path-matched integration of the X-direction modified partial differential image 204 and the Y-direction modified partial differential image 205. The path-matched integration processing unit 213 refers to the input image 201 when the brightness of the initial point at which integration starts is set to the brightness of the input image 201. It also refers to the input image 201 when, for example, the image is divided into blocks for integration and the average brightness within each block is to be matched to the average brightness of the corresponding block of the input image 201. Furthermore, the path-matched integration processing unit 213 can refer to the input image 201 when setting the display level of the path-matched integral image 206.
Here, the path integration of this embodiment is line integration along a path from one point on an image to another point. It is a different concept from the path integral used in the field of physics for handling quantum-mechanical state transitions. The path integration of this embodiment performs line integration by adding and subtracting pixel values along a path set on the image. The path-matched integration processing unit 213 refers to the X-direction modified partial differential image 204 when integrating along the X direction, and to the Y-direction modified partial differential image 205 when integrating along the Y direction. If a differential image is path-integrated without modification processing, then even when there are multiple paths from one pixel to another, the line integral values of those paths never differ.
On the other hand, when a modified differential image obtained by modifying the differential image is path-integrated, the line integral values of multiple paths from one pixel to another may differ. Integration that includes processing to calculate a single integral value from the multiple line integral values obtained by path integration of the modified differential image is here called path-matched integration. As the method of calculating the single integral value, the average of the multiple line integral values may be taken, or a neural network may be used to derive one integral value from the multiple line integral values. Alternatively, the line integral value of any one of the paths may be selected as the integral value, or a value obtained by adding an appropriate value to those line integral values may be used. For example, the maximum or the minimum of the multiple line integral values may be selected.
Note that in the field of general image processing there exist integral images that differ from the definition of the integral image of this embodiment described above. For example, in Reference 2 below, for fast computation, an image is created in which each pixel value is the sum of the pixel values within the rectangle bounded by the upper left point of the image and the point of interest, and that image is called an integral image.
Reference 2: Paul Viola, Michael Jones, "Rapid Object Detection using a Boosted Cascade of Simple Features," Conference on Computer Vision and Pattern Recognition 2001, volume I, pp. 511-518.
Likewise, Reference 3 below creates, for fast computation, a summed-area table (image) holding the sum of the pixel values within the rectangle bounded by the lower left point of the image and the point of interest. The integral image of Reference 2 and the summed-area table of Reference 3 differ only in how the rectangle is taken; the speed-up principle is the same. Neither the integral image of Reference 2 nor the summed-area table is the integral image of this embodiment.
Reference 3: Franklin C. Crow, "Summed-Area Tables for Texture Mapping," Computer Graphics, Volume 18, Number 3, pp. 207-212, July 1984.
When software (a program) realizing the above-described procedures is created and installed on a computer, the computer realizes the differential processing unit 211, the differential image modification processing unit 212, and the path-matched integration processing unit 213. These units may also be realized by dedicated hardware.
FIG. 4 is a flowchart illustrating the path integration processing of the image processing apparatus of FIG. 1.
In step S11 of FIG. 4, initial values such as the map and the initial point are set. At this point, a map M of the same size as the input image 201 and an image F into which integral values are substituted are prepared. When integral values have been substituted into a predetermined range of the image F by the subsequent processing, the path-matched integral image 206 is obtained.
Here, let the size of the input image 201 be nx in the X direction and ny in the Y direction. The predetermined range over which integral values are obtained is from x1 to x2 inclusive in the X direction and from y1 to y2 inclusive in the Y direction. When x1 = 0, x2 = nx-1, y1 = 0, and y2 = ny-1, integral values are obtained over the entire size of the input image 201.
Let the initial point at which integration starts be (x0, y0). The initial point (x0, y0) is registered in the 0-adjacent-point array as an adjacent point at distance 0 from the initial point (a 0-adjacent point). The number of entries in the 0-adjacent-point array is 1. That is, for the k-adjacent-point array (k is an integer of 0 or more), k is set to 0 and the number of entries to 1. A k-adjacent point is an adjacent point at distance k from the initial point. The integral value of the image F at the initial point is set to the value of the input image 201; it can be given by Equation 5 below.
(Equation 5)
 F(x0, y0) = I(x0, y0)
The initial value of the number of entries of the k-adjacent-point arrays for k other than 0 is set to 0. The unreferenced flag of each k-adjacent-point array, which indicates that its k-adjacent points have not yet been examined, is set to true. The map value of the map M is set to 0 within the predetermined range over which integral values are obtained and to -1 outside that range, except that the map value of the initial point is set to 2; that is, M(x0, y0) = 2.
Next, in step S12, a point of interest Pa is set. The k-adjacent points in the k-adjacent-point array are taken as the point of interest Pa in order. That is, when the unreferenced flag of the k-adjacent-point array is true, the point stored at the first position of the array is taken as the point of interest Pa and the flag is set to false. When the unreferenced flag is false, the k-adjacent point stored at the position following the k-adjacent point examined so far is taken as the point of interest Pa.
Next, in step S13, a point of interest Pb is set. The four points adjacent to the point of interest Pa are taken as the point of interest Pb in order. There are four adjacent points, to the left, right, above, and below, and the point of interest Pb is determined, for example, in this order.
Next, in step S14, it is determined whether the point of interest Pb is a point to be integrated. If the map value at the position of the point of interest Pb is 0 or 1, the processing proceeds to step S15; otherwise, the processing proceeds to step S16.
In step S15, processing such as integration of the point of interest Pb is performed; in this processing, path-matched integration is carried out. If the map value at the position of the point of interest Pb is 0, the map value is set to 1, the integral value at that position is obtained by line integration with reference to the integral value of the point of interest Pa, the X-direction modified partial differential image 204, and the Y-direction modified partial differential image 205, and the result is set as the value of the image F at the point of interest Pb. The point is then added to the (k+1)-adjacent-point array, and the number of entries of that array is increased by 1.
If the map value at the position of the point of interest Pb is 1, the map value is set to 2, and the integral value at that position is obtained by line integration with reference to the integral value of the point of interest Pa, the X-direction modified partial differential image 204, and the Y-direction modified partial differential image 205. The result is averaged with the value of the image F at the point of interest Pb, which is the integral value already obtained for that position, and the average becomes the new value of the image F at the point of interest Pb.
Next, in step S16, it is determined whether the point of interest Pb is the last of the points adjacent to the point of interest Pa. If so, the processing proceeds to step S17; if not, the processing returns to step S13.
In step S17, it is determined whether the point of interest Pa is the last of the k-adjacent points. If so, the processing proceeds to step S18; if not, the processing returns to step S12.
Next, in step S18, it is determined whether there are any (k+1)-adjacent points. If the number of entries of the (k+1)-adjacent-point array is not 0, there are (k+1)-adjacent points and the processing proceeds to step S19. If the number of entries is 0, there are no (k+1)-adjacent points and the path-matched integration processing ends. At this point, integral values have been substituted into the predetermined range of the image F, and the path-matched integral image 206 of FIG. 2 is obtained.
Next, in step S19, the (k+1)-adjacent points are made the points to be referenced. At this time, locations with a map value of 1 are set to 2. The processing then returns to step S12 so that the (k+1)-adjacent points are referenced from then on; that is, in the processing from step S12 onward, the (k+1)-adjacent points are read as the k-adjacent points, and the unreferenced flag of the k-adjacent-point array is set to true. The processing from step S12 to step S19 in FIG. 4 is referred to as S10.
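The following is a compact sketch of the flow of FIG. 4 under the conventions of the sketches above. It replaces the explicit map values and k-adjacent-point arrays with an equivalent breadth-first traversal in which every pixel at discrete distance k+1 averages the line integrals arriving from all of its already-finalized distance-k neighbors; this reproduces the equal-distance-path averaging of Equations 6 to 17 below, but it is only one possible realization of steps S11 to S19, not the definitive implementation:

```python
import numpy as np

def path_matched_integral(ex, ey, x0, y0, f0=0.0):
    """Breadth-first sketch of the path-matched integration of FIG. 4.

    ex, ey: modified partial differential images Ex and Ey in the
    forward-difference convention of Equations 1 and 2, indexed [y, x].
    The initial point (x0, y0) receives the integral value f0.
    """
    ny, nx = ex.shape
    f = np.zeros((ny, nx))
    dist = np.full((ny, nx), -1)        # -1: not yet integrated
    f[y0, x0], dist[y0, x0] = f0, 0
    frontier, k = [(x0, y0)], 0
    while frontier:
        sums, counts = {}, {}
        for xa, ya in frontier:         # point of interest Pa at distance k
            # the four neighbors Pb with the line-integral step to reach them
            steps = []
            if xa > 0:
                steps.append((xa - 1, ya, -ex[ya, xa - 1]))  # cf. Equation 6
            if xa < nx - 1:
                steps.append((xa + 1, ya, +ex[ya, xa]))      # cf. Equation 7
            if ya > 0:
                steps.append((xa, ya - 1, -ey[ya - 1, xa]))  # cf. Equation 8
            if ya < ny - 1:
                steps.append((xa, ya + 1, +ey[ya, xa]))      # cf. Equation 9
            for xb, yb, step in steps:
                if dist[yb, xb] == -1:  # Pb still to be integrated
                    sums[(xb, yb)] = sums.get((xb, yb), 0.0) + f[ya, xa] + step
                    counts[(xb, yb)] = counts.get((xb, yb), 0) + 1
        frontier = []
        for (xb, yb), s in sums.items():
            f[yb, xb] = s / counts[(xb, yb)]  # average over shortest routes
            dist[yb, xb] = k + 1
            frontier.append((xb, yb))
        k += 1
    return f
```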
FIG. 5 is a diagram illustrating the path-matched integration method over the entire processed image of the image processing apparatus of FIG. 1.
In FIG. 5, the path-matched integral image 206 of FIG. 2 was created by sequentially integrating the entire image 231, which has the same size as the input image 201, based on the path-matched integration processing of FIG. 4. A path-matched integral image 206 created based on the path-matched integration processing of FIG. 4 will be called an equal-distance-path average integral image 206a.
At this time, the center of the image 231 was taken as the initial point P0 at which integration starts; that is, x0 = nx/2 and y0 = ny/2. The display level of the equal-distance-path average integral image 206a was matched to the range from the minimum value 3 to the maximum value 244 of the input image 201.
In the neighborhood 233 of the initial point P0 at which integration starts, the path at distance 1 from the initial point P0 is shown at 233a, and the path at distance 2 is shown at 233b. Points P1 to P4 lie on the path 233a at distance 1 from the initial point P0, and points P5 to P12 lie on the path 233b at distance 2. Here, for example, there are two routes L1 and L2 from the initial point P0 to the point P11. In this case, the path-matched integral value at the point P11 can be obtained, for example, by averaging the integral value along the route L1 and the integral value along the route L2.
Specifically, the path-matched integral values from the initial point P0 to the points P1 to P4 can be given by the following Equations 6 to 9.
(Equation 6)
 F(x0-1, y0) = F(x0, y0) - Ex(x0-1, y0)

(Equation 7)
 F(x0+1, y0) = F(x0, y0) + Ex(x0, y0)

(Equation 8)
 F(x0, y0-1) = F(x0, y0) - Ey(x0, y0-1)

(Equation 9)
 F(x0, y0+1) = F(x0, y0) + Ey(x0, y0)
The path-matched integral values from the initial point P0 to the points P5 to P12 can be given by the following Equations 10 to 17.
(Equation 10)
 F(x0-2, y0) = F(x0-1, y0) - Ex(x0-2, y0)

(Equation 11)
 F(x0+2, y0) = F(x0+1, y0) + Ex(x0+1, y0)

(Equation 12)
 F(x0, y0-2) = F(x0, y0-1) - Ey(x0, y0-2)

(Equation 13)
 F(x0, y0+2) = F(x0, y0+1) + Ey(x0, y0+1)

(Equation 14)
 F(x0-1, y0-1) = (F(x0-1, y0) - Ey(x0-1, y0-1) + F(x0, y0-1) - Ex(x0-1, y0-1)) / 2

(Equation 15)
 F(x0+1, y0-1) = (F(x0+1, y0) - Ey(x0+1, y0-1) + F(x0, y0-1) + Ex(x0, y0-1)) / 2

(Equation 16)
 F(x0+1, y0+1) = (F(x0+1, y0) + Ey(x0+1, y0) + F(x0, y0+1) + Ex(x0, y0+1)) / 2

(Equation 17)
 F(x0-1, y0+1) = (F(x0-1, y0) + Ey(x0-1, y0) + F(x0, y0+1) - Ex(x0-1, y0+1)) / 2
Equations 6 to 13 cover the cases where there is only one shortest-distance route for the line integration. Equations 14 to 17 cover the cases where there are two shortest-distance routes; in these cases, the average of the integral values along the two routes was taken as the path-matched integral value.
For example, in the case of Equation 17, the average of the integral value F(x0-1, y0) + Ey(x0-1, y0) along the first route and the integral value F(x0, y0+1) - Ex(x0-1, y0+1) along the second route was taken as the path-matched integral value.
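The following small sketch (NumPy assumed, names illustrative) checks the earlier claim that an unmodified differential image yields identical line integrals along both shortest routes to a diagonal neighbor; applying a modification such as Equations 3 and 4 generally breaks this equality, which is why the averaging above is needed:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(4, 4)).astype(float)
dx = img[:, 1:] - img[:, :-1]   # Dx(x, y) = I(x+1, y) - I(x, y)
dy = img[1:, :] - img[:-1, :]   # Dy(x, y) = I(x, y+1) - I(x, y)

x0, y0 = 1, 1
route1 = img[y0, x0] + dy[y0, x0] + dx[y0 + 1, x0]  # down, then right
route2 = img[y0, x0] + dx[y0, x0] + dy[y0, x0 + 1]  # right, then down
assert np.isclose(route1, route2)  # exact for unmodified differentials
```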
In the equal-distance-path average integral image 206a of FIG. 5, path-matched integration was performed over the entire image 231, which has the same size as the input image 201. Near the center of the image, that is, near the initial point P0 of integration, the equal-distance-path average integral image 206a is close to the values of the input image 201, but it generally departs from the values of the input image 201 toward the edges. At this point, the overall balance may break down, the values may depart from those of the input image, and unnatural oblique band-like portions Z1 may appear. This is because the X-direction modified partial differential image 204 and the Y-direction modified partial differential image 205 obtained by modifying the X-direction partial differential image 202 and the Y-direction partial differential image 203 are generally not modified in a well-balanced manner over the whole image, and the influence of the modified portions that upset the overall balance grows larger and more conspicuous with distance from the initial point P0 of integration. In particular, the oblique directions, in which equal-distance routes are averaged, impose more contradictory modifications on the integral image, so obliquely running, unnatural-looking band-like portions Z1 may appear.
FIG. 6 is a diagram illustrating, similarly to FIG. 5, the path-matched integration method over a part of the processed image of the image processing apparatus of FIG. 1. FIGS. 5 and 6 differ in the integration range: the integration range in FIG. 5 is the same size as the input image 201, whereas the integration range in FIG. 6 is a region 234 near the center that is smaller than the input image.
In FIG. 6, an equal-distance-path average integral image 206b was created by sequentially integrating, based on the path-matched integration processing of FIG. 4, the small region 234 near the center of the image 231, which has the same size as the input image 201.
This equal-distance-path average integral image 206b is path-match integrated within the small region 234 near the center. Therefore, the unnatural band-like portions Z1 that become conspicuous toward the edges, as seen in FIG. 5, are not observed. The extent to which such unnatural band-like portions appear depends on the magnitude of the inherently undesirable modifications carried by the X-direction modified partial differential image 204 and the Y-direction modified partial differential image 205, and on the size of the region 234 over which the integral values are obtained.
In the above description, the X-direction partial differential image 202 and the Y-direction partial differential image 203 were taken as examples of differential images, but differential images created by other methods may be used. For example, partial differential images in the oblique directions may be created in addition to those in the X and Y directions, and the partial differential images in these four directions may be used as the differential images. In this case, distances are defined for the vertical, horizontal, and oblique connections; for example, the vertical and horizontal distances may be defined as 1 and the oblique distance as 2, and the distance from one point to another can be measured according to that definition.
In this case, when multiple shortest routes, based on the definition of the discrete distance from the initial point P0 to the point whose integral value is to be obtained, appear, the path-matched integration can take the average of the integral values of those routes as the integral value of that point. For example, there are three routes to the diagonally upper-right point at distance 2: the oblique route, the route going up and then right, and the route going right and then up; the average of the integral values of these three routes can be taken as the integral value of the diagonally upper-right point. A weighted average giving greater weight to the oblique route may also be used. In this specification, weighted averages are included in the term "average".
As described above, according to the first embodiment, the integral value at the initial point P0 of the integral image to be created is set to a predetermined value. Next, for a point adjacent to the initial point P0 (as shown in Equations 6 to 9 for partial differentiation in the two directions X and Y), the integral value of that adjacent point is obtained by line integration using a predetermined addition/subtraction operation, with reference to the integral value of the initial point P0 and the value of the modified differential image at the corresponding position.
Then, for each point adjacent to a point whose integral value has already been obtained, the integral value is obtained sequentially by line integration using the predetermined addition/subtraction operation, with reference to the integral value of the already-integrated point and the value of the differential image at the corresponding position. When there are multiple routes with the same discrete distance between the point whose integral value is to be obtained and the initial point, the average of the line-integrated values over those routes is taken as the integral value of that point. By repeating the above processing sequentially up to a predetermined range and obtaining the line-integrated values of all points in the predetermined range, an equal-distance-path average integral image within the predetermined range is created.
Thus, even when the integral value of a modified differential image obtained by modifying the differential image differs depending on the route, the integral value of the modified differential image can be determined uniquely. Therefore, even when the differential image has been modified, an integral image can be created from the modified differential image.
As shown in Equation 5, the integral value F(x0, y0) of the initial point P0(x0, y0) was set to the value I(x0, y0) of the input image 201. Path-matched integration performs addition, subtraction, and averaging starting from the initial point P0(x0, y0). Therefore, if the integral value F(x0, y0) of the initial point P0(x0, y0) is increased by a, the integral value increases by a at every point; that is, an image with an offset of a added is obtained. Accordingly, when the integral value of the initial point P0(x0, y0) is set to 0, the resulting integral image differs only by an offset of -I(x0, y0).
Therefore, when the average value over a predetermined area of the integral image is to be set to a predetermined value, setting the integral value of the initial point P0(x0, y0) to any predetermined value, including 0 or a random value, instead of I(x0, y0), does not change the final result, namely the integral image whose average over the predetermined area has been set to the predetermined value.
FIG. 7 is a flowchart illustrating a method of changing the offset of an integral image according to the second embodiment, and FIG. 8 is a diagram showing a setting example of the average value calculation area in FIG. 7.
In FIG. 8, in order to change the offset of the integral image, an average value calculation area 236 can be provided within the area 234 over which integral values are obtained. An equal-distance-path average integral image 206c with a changed offset was created from the equal-distance-path average integral image 206b of FIG. 6. The display level of the equal-distance-path average integral image 206c was matched to the range from the minimum value 3 to the maximum value 244 of the input image 201.
The processing S20 in FIG. 7 includes processing S21 and processing S23. The processing S21 obtains the integral values; in it, the integral value F(x0, y0) of the initial point P0 at which integration starts is set to 0 instead of I(x0, y0). The processing S21 includes steps S22 and S10. The processing S23 changes the offset of the integral image; it executes offset adjustment so that the average value within the average value calculation area 236 becomes a predetermined value. The processing S23 includes steps S24 to S27.
In step S22, initial values such as the map and the initial point are set, with the integral value F(x0, y0) of the initial point at which integration starts set to 0; otherwise, the processing is the same as step S11 of FIG. 4. Next, the same processing as step S10 of FIG. 4 is performed.
Next, in step S24, the average value of the input image 201 is calculated. At this time, the average value h0 of the input image 201 within the average value calculation area 236 of FIG. 8 is obtained.
Next, in step S25, the average value of the integral image is calculated. At this time, the average value h1 within the average value calculation area 236 is obtained for the integral image obtained in the processing S21.
Next, in step S26, the offset value is calculated. At this time, the difference h0 - h1 between the average values h0 and h1 is taken as the offset change amount.
Next, in step S27, the offset of the integral image is changed. At this time, the offset change amount is added to the integral image obtained in the processing S21 to create an offset-changed integral image.
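A minimal sketch of the processing S23 (steps S24 to S27), assuming NumPy and the illustrative names used above; `area` stands in for the average value calculation area 236:

```python
import numpy as np

def adjust_offset(f, reference, area):
    """Offset adjustment of processing S23 (steps S24 to S27).

    f: integral image from processing S21.
    reference: image whose average is to be matched (the input image
    201, a processed version of it, or a uniform image).
    area: (y-slice, x-slice) pair for the average value calculation
    area 236.
    """
    h0 = float(np.mean(reference[area]))  # step S24: reference average
    h1 = float(np.mean(f[area]))          # step S25: integral-image average
    return f + (h0 - h1)                  # steps S26, S27: add offset h0 - h1
```

For example, an 8*8 area centered in a 16*16 block corresponds to area = (slice(4, 12), slice(4, 12)).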
In the processing S21 of FIG. 7, the integral value F(x0, y0) of the initial point P0 at which integration starts was set to 0, but as shown in the first embodiment, the integral image obtained in step S27 does not change whether it is set to the value I(x0, y0) of the input image 201 or to a random value. Therefore, the integral value of the initial point P0 may be set to any value in the processing S21.
In the second embodiment described above, the method of matching the average value of the path-matched integral image 206 to the average value of the input image 201 within the average value calculation area 236 was described. The target whose average is matched is not limited to the input image 201; it may be an image obtained by processing the input image 201, or a uniform image having a value such as 0 or 128.
As described above, according to the second embodiment, offset adjustment is performed on the path-matched integral image so that the average value within the average value calculation area 236 becomes a predetermined value.
Here, when the average value of the integral image obtained in step S10 is matched to the average value of the input image 201, a path-matched integral image 206 whose average value within the average value calculation area 236 is unchanged from that of the input image 201 can be obtained. When it is matched to the average value of a processed version of the input image 201, a path-matched integral image 206 whose average value within the average value calculation area 236 is unchanged from that of the processed image can be obtained. When it is matched to a uniform image, a path-matched integral image 206 whose average value within the average value calculation area 236 becomes the uniform value can be obtained.
In FIG. 5, when the entire image 231 of the same size as the input image 201 is sequentially integrated, obliquely running, unnatural-looking band-like portions Z1 that grow larger and more conspicuous with distance from the initial point P0 of integration may appear, as described above. In contrast, as shown in FIG. 6, when only the small region 234 near the center is integrated, the oblique unnatural band-like portions Z1 become inconspicuous. Since this processing must therefore be limited to small areas, the image 231 needs to be divided into small blocks in order to obtain an integral image of the entire image 231 of the same size as the input image 201. In the following third embodiment, the image 231 is divided into blocks and path-matched integration is executed for each block. An image produced by path-matched integration for each block will be called a block-division integral image.
FIG. 9 is a diagram illustrating the method for creating an integral image according to the third embodiment.
In FIG. 9, the image 231 is divided into blocks of a small size such as 16*16, for example. The processing order 235 of the block of interest 240 is determined among the blocks, and path-matched integration is executed for each block of interest 240 according to that order 235. In this way, a block-division integral image 206d was obtained for the input image 201 of FIG. 2. With the method of executing path-matched integration for each block of interest 240, the band-like portions Z1 of FIG. 5 can be made inconspicuous at the block edges. The display level of the block-division integral image 206d was matched to the range from the minimum value 3 to the maximum value 244 of the input image 201.
FIG. 10 is a flowchart illustrating the method for creating an integral image according to the third embodiment.
The processing S30 in FIG. 10 includes steps S31, S20, and S32. In step S31, the image 231 is divided into blocks, and the block of interest 240 is selected in order from among the blocks. Next, in step S20, the processing S20 of FIG. 7 is executed for the block of interest 240.
When the processing S20 of FIG. 7 is executed for each block, the average value calculation area 236 is set to a predetermined area within each block of interest 240. For example, letting the image size of the block of interest 240 be sx*sy and the block size of the average value calculation area 236 be hx*hy, one can set sx = 16, sy = 16, hx = 8, and hy = 8 and align the center of the block of interest 240 with the center of the average value calculation area 236.
Next, in step S32, if the block of interest 240 is the last block of the image 231, the processing ends; otherwise, the processing returns to step S31. When the last block has been processed, a block-division integral image 206d of the same size as the input image 201 is obtained.
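Putting the pieces together, the following is a sketch of the processing S30 reusing the hypothetical helpers path_matched_integral() and adjust_offset() from the sketches above. The block sizes and the centered average value calculation area follow the sx = sy = 16, hx = hy = 8 example, and blocks at the right and bottom edges that are smaller than the nominal size go through the same code path:

```python
import numpy as np

def block_division_integral(ex, ey, reference, sx=16, sy=16, hx=8, hy=8):
    """Sketch of processing S30 (FIG. 10): per-block path-matched
    integration followed by a per-block offset adjustment."""
    ny, nx = reference.shape
    out = np.zeros((ny, nx))
    for by in range(0, ny, sy):          # step S31: next block of interest
        for bx in range(0, nx, sx):
            ys = slice(by, min(by + sy, ny))
            xs = slice(bx, min(bx + sx, nx))
            h, w = ys.stop - ys.start, xs.stop - xs.start
            # step S20: integrate the block from its center point
            fb = path_matched_integral(ex[ys, xs], ey[ys, xs],
                                       x0=w // 2, y0=h // 2, f0=0.0)
            # average value calculation area 236 centered in the block
            area = (slice(max(0, (h - hy) // 2), (h + hy) // 2),
                    slice(max(0, (w - hx) // 2), (w + hx) // 2))
            out[ys, xs] = adjust_offset(fb, reference[ys, xs], area)
    return out                           # step S32: done after the last block
```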
In the third embodiment described above, an example with sx = 16, sy = 16, hx = 8, and hy = 8 was shown, but other values can also be set. For example, sx = 16, sy = 16, hx = 16, and hy = 16, or sx = 32, sy = 32, hx = 16, and hy = 16 are possible.
In the example of FIG. 9, the size of the image 231 is an integer multiple of the block size, so the divided blocks all have the same size. When the size of the image 231 is not an integer multiple of the block size, blocks smaller than the others arise at the right and bottom edges. The processing S30 of FIG. 10 is executed for these small blocks in the same way as for the other blocks, and a block-division integral image can still be created.
The target to which the average value is matched for each block is not limited to the input image; it may be an image obtained by processing the input image, or a uniform image having a value such as 0 or 128.
As described above, according to the third embodiment, the image 231 is divided into predetermined blocks, and a block-division integral image is created for all the divided blocks. Furthermore, offset adjustment is performed for all the divided blocks so that the average value within the predetermined average value calculation area 236 becomes a predetermined value.
Thereby, an integral image of the entire image 231, having the same size as the input image 201, can be obtained without making the band Z1 of FIG. 5 conspicuous.
However, since the average value calculation area 236 is set to a predetermined area within each block of interest 240, block distortion Z2 may occur at the joints between blocks. To reduce the block distortion Z2, the following fourth embodiment takes a weighted average of block division integral images. A block division integral image obtained by such weighted averaging is referred to as a block division average image.
FIG. 11 is a diagram showing an example of block division of block division integral images according to the fourth embodiment.
In FIG. 11, division schemes are determined for creating four block division integral images 250a to 250d, each divided in a different way. The block division integral image 250a is obtained with the same division as the image 231 of FIG. 9. The block division integral image 250b is obtained with a division shifted by half a block in the -X direction of FIG. 3 relative to that of the block division integral image 250a. The block division integral image 250c is obtained with a division shifted by half a block in the -Y direction of FIG. 3. The block division integral image 250d is obtained with a division shifted by half a block in both the -X and -Y directions of FIG. 3.
The left and right edges of the block division integral image 250b contain only half blocks, but a block division integral image can still be created from these half blocks. For simplicity, however, the values at the corresponding positions of the block division integral image 250a can instead be substituted for these half blocks.
The upper and lower edges of the block division integral image 250c likewise contain only half blocks, but a block division integral image can be created from them. For simplicity, the values at the corresponding positions of the block division integral image 250a can instead be substituted for these half blocks.
The left, right, upper, and lower edges of the block division integral image 250d contain only half blocks, and its four corners contain only quarter blocks, but a block division integral image can be created from these as well. For simplicity, however, the values at the corresponding positions of the block division integral image 250c can be substituted for the left and right half blocks, the values at the corresponding positions of the block division integral image 250b for the upper and lower half blocks, and the average of the values at the corresponding positions of the block division integral images 250b and 250c for the quarter blocks at the four corners.
Focusing on a certain pixel 251, the pixel 251 belongs to four blocks 252a to 252d, one in each of the block division integral images 250a to 250d. The position of the pixel of interest 251 determines which block of each of the block division integral images 250a to 250d the four blocks 252a to 252d are.
FIG. 12 is a diagram showing the positional relationship between the four blocks of FIG. 11 and the pixel of interest.
In FIG. 12, overlaying the blocks 252a to 252d so that all four are contained yields the block 253. The weight of each of the blocks 252a to 252d is then obtained based on the positional relationship between the pixel of interest 251 and the centers 254a to 254d of the blocks 252a to 252d, and the weighted average of the four blocks 252a to 252d is taken.
Here, the pixel of interest 251 is separated from the center 254a by a distance a in the X direction and a distance b in the Y direction. Letting the block division integral images of the blocks 252a to 252d be F0, F1, F2, and F3, and letting the size of each of the blocks 252a to 252d be sx in the X direction and sy in the Y direction, the weighted average image G can be given by the following Equations 18 to 22.
(Equation 18)
G(x,y) = w0*F0(x,y) + w1*F1(x,y) + w2*F2(x,y) + w3*F3(x,y)
(Equation 19)
w0 = (sx/2 - a)*(sy/2 - b)/(sx*sy/4)
(Equation 20)
w1 = a*(sy/2 - b)/(sx*sy/4)
(Equation 21)
w2 = (sx/2 - a)*b/(sx*sy/4)
(Equation 22)
w3 = a*b/(sx*sy/4)
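The weights of Equations 19 to 22 are ordinary bilinear weights and sum to 1 for any a and b in range. A minimal numpy sketch, assuming F0 to F3 are the four block division integral images as arrays and a, b are precomputed per-pixel offsets from the center 254a:

```python
import numpy as np

def block_average_integral(F0, F1, F2, F3, a, b, sx, sy):
    """Weighted average G of Equations 18-22.
    a, b: per-pixel distances of the pixel of interest from the center 254a
    (arrays of the same shape as the images), 0 <= a <= sx/2, 0 <= b <= sy/2."""
    norm = sx * sy / 4.0
    w0 = (sx / 2 - a) * (sy / 2 - b) / norm   # Equation 19
    w1 = a * (sy / 2 - b) / norm              # Equation 20
    w2 = (sx / 2 - a) * b / norm              # Equation 21
    w3 = a * b / norm                         # Equation 22
    return w0 * F0 + w1 * F1 + w2 * F2 + w3 * F3   # Equation 18
```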
FIG. 13 is a flowchart illustrating a method for removing noise from block division integral images according to the fourth embodiment.
In step S41 of FIG. 13, the division schemes for creating the four block division integral images 250a to 250d, each shifted by half a block, are determined.
Next, in steps S30a, S30b, S30c, and S30d, the block division integral images 250a, 250b, 250c, and 250d are created by executing the process S30 of FIG. 10 according to the division schemes Ba, Bb, Bc, and Bd, respectively.
Next, in step S42, for each pixel of interest 251, weights are determined according to the distances between the pixel 251 and the centers 254a to 254d of the corresponding blocks 252a to 252d of the four block division integral images 250a to 250d to which it belongs, and the weighted average of the four block division integral images 250a to 250d is taken. The image obtained by this weighted average of the four block division integral images 250a to 250d is referred to as a block average integral image.
In the fourth embodiment described above, four block division integral images 250a to 250d, each shifted by half a block, were created and averaged with weights. Alternatively, 16 block division integral images, each shifted by a quarter block, can be created and averaged with weights. As for the weighting, Equations 19 to 22 determine the weights according to the distances in the X and Y directions between the pixel of interest 251 and the centers 254a to 254d of the blocks 252a to 252d; the weights can instead be determined according to the Euclidean distance between the pixel of interest 251 and the centers 254a to 254d.
In the fourth embodiment described above, the blocks of one block division integral image do not overlap. Alternatively, block divisions that overlap at the block edges can be used: the overlapping portions are averaged, the deviation from the average at the edge of each block is treated as an error, the error at non-edge portions is obtained by interpolating from the error at the block edges, and a corrected integral value for the non-edge portions is obtained by subtracting this interpolated error.
In this case, multiple sparse integral images are created, each obtained by computing integral values over disjoint blocks that skip the overlapping portions. Interpolating the inner part of each block from the deviation from the average at the overlapping edge portions, treated as an error, is equivalent to taking a distance-dependent weighted average of the errors. This processing therefore yields a weighted average integral image with a different weighting scheme.
As described above, according to the fourth embodiment, a plurality of block division integral images with different divisions are created, and a weighted average is taken according to the distances between the pixel of interest and the centers of the blocks, in each block division integral image, to which it belongs, creating the block average integral image 206e.
Thereby, a path matching integral image of the entire image 231, having the same size as the input image 201, can be created from the modified differential image while reducing the band Z1 of FIG. 5 and the block distortion Z2 of FIG. 9.
FIG. 14 is a diagram showing integral images of the input image of FIG. 2 with and without the processing of FIG. 13.
In FIG. 14, block distortion Z2 occurs in the block division integral image 206d obtained from the input image 201. In the block average integral image 206e created from the block division integral image 206d, the block distortion Z2 is reduced. The block distortion Z2 is also reduced in the block average integral image 206f, created with parameters different from those of the block average integral image 206e. The display levels of the block division integral image 206d and the block average integral images 206e and 206f were set to the range from the minimum value 3 to the maximum value 244 of the input image 201, and this range was displayed.
The block average integral image 206e was created with the parameters in Equations 3 and 4 set to lv=10 and k=2. Setting lv to 10 suppresses small luminance fluctuations; for example, the clouds above the airplane are flattened. Setting k to 2 roughly doubles the contrast, relative to the input image 201, of the portions with large luminance fluctuations, within a range that does not exceed the display level of the image.
The block average integral image 206f was created with lv=0 and k=3. Since lv is 0, portions with small density changes are not flattened, and since k is 3, the local contrast is improved within a range that does not exceed the display level; for example, the shading changes in the cloud portions are also emphasized.
The change from the X-direction partial differential image 202 and the Y-direction partial differential image 203 to the X-direction modified partial differential image 204 and the Y-direction modified partial differential image 205 can be performed by various methods other than the method using Equations 3 and 4. For example, the change may be given by the following Equations 23 to 25.
(Equation 23)
D(x,y) = sqrt(Dx(x,y)*Dx(x,y) + Dy(x,y)*Dy(x,y))
Here, sqrt() is the operation that takes the square root.
(Equation 24)
Ex(x,y) = k*(Dx(x,y) - lv) + lv   (when D(x,y) ≧ lv and Dx(x,y) ≧ 0)
        = Dx(x,y)*D(x,y)/lv       (when D(x,y) < lv)
        = k*(Dx(x,y) + lv) - lv   (when D(x,y) ≧ lv and Dx(x,y) < 0)
(Equation 25)
Ey(x,y) = k*(Dy(x,y) - lv) + lv   (when D(x,y) ≧ lv and Dy(x,y) ≧ 0)
        = Dy(x,y)*D(x,y)/lv       (when D(x,y) < lv)
        = k*(Dy(x,y) + lv) - lv   (when D(x,y) ≧ lv and Dy(x,y) < 0)
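A minimal numpy sketch of Equations 23 to 25 (not the patent's implementation), assuming Dx and Dy are float arrays, with the lv = 0 case handled as the pure k-fold scaling described below:

```python
import numpy as np

def modify_isotropic(Dx, Dy, k, lv):
    """Equations 23-25: isotropic modification of the differential images."""
    if lv == 0:                                  # lv = 0 simply scales gradients by k
        return k * Dx, k * Dy
    D = np.sqrt(Dx * Dx + Dy * Dy)               # Equation 23
    small = D < lv                               # below the level lv
    Ex = np.where(small, Dx * D / lv,            # shrink small gradients
                  np.where(Dx >= 0,
                           k * (Dx - lv) + lv,   # amplify large gradients
                           k * (Dx + lv) - lv))
    Ey = np.where(small, Dy * D / lv,
                  np.where(Dy >= 0,
                           k * (Dy - lv) + lv,
                           k * (Dy + lv) - lv))
    return Ex, Ey
```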
When the differential images are modified by Equations 23 to 25, the modification is isotropic, independent of the direction of the edges in the image. Different parameter choices in this modification yield various different effects.
For example, when lv=0, the pixel values of the differential images are multiplied by k. This multiplies the local shading changes by k, so when k is greater than 1, an integral image with improved local contrast relative to the input image can be obtained.
When lv is set to an appropriate value that estimates the noise level and k is set to 1, values with absolute value below lv are reduced while larger edge changes are preserved. This modification therefore performs edge-preserving noise removal on the differential images and flattening of minute changes, yielding an edge-preserving denoised and smoothed image. When lv is set to a predetermined value and k is set to a value greater than 1, values with absolute value below lv are reduced and values with absolute value above lv are multiplied by k.
When lv is set to a value that estimates the noise level, an integral image in which noise removal and local contrast improvement are performed simultaneously can be obtained. This modification includes a process of reducing values with absolute value below lv, and since it multiplies values with absolute value above lv by a constant, it also includes a process of multiplying the differential image values by a constant.
Beyond the method using Equations 23 to 25, for example, an additional case for D(x,y) exceeding a second threshold lv2 can be added to Equations 23 to 25, with a separate slope k2 for that case: for example, when Dx(x,y) > lv2, the value is changed to k2*(Dx(x,y) - lv2) + lv2, and the other cases are changed similarly.
In this case, if lv is set to a small value that estimates the noise level, k to a value greater than 1, and k2 to 1, the block average integral image combines noise reduction, in which changes smaller than the noise level are shrunk, with local contrast improvement, in which changes larger than the noise level but smaller than lv2 are enlarged; portions with changes larger than lv2 retain a local contrast faithful to those changes. If k2 is instead set to a value smaller than 1, the local contrast of portions with changes larger than lv2 can be suppressed.
There are many other ways to modify the differential images depending on the purpose. For example, when D(x,y) is smaller than lv, noise removal can be performed by simply setting the value to 0. Also, Dx(x,y)*D(x,y) in Equation 24 can be replaced by sqrt(|Dx(x,y)|)*sgn(Dx(x,y)), and Dy(x,y)*D(x,y) in Equation 25 by sqrt(|Dy(x,y)|)*sgn(Dy(x,y)), where sgn() is the function that returns 1 for positive values, -1 for negative values, and 0 for 0. In this case, changes with small absolute value can be emphasized by the square-root function.
There are various other ways to modify the differential images, and an arbitrary function can be used. Equations 26 and 27 show the case of averaging over the 3*3 region around the point of interest.
(Equation 26)
Ex(x,y) = (Dx(x-1,y-1) + Dx(x,y-1) + Dx(x+1,y-1) + Dx(x-1,y) + Dx(x,y) + Dx(x+1,y) + Dx(x-1,y+1) + Dx(x,y+1) + Dx(x+1,y+1))/9
(Equation 27)
Ey(x,y) = (Dy(x-1,y-1) + Dy(x,y-1) + Dy(x+1,y-1) + Dy(x-1,y) + Dy(x,y) + Dy(x+1,y) + Dy(x-1,y+1) + Dy(x,y+1) + Dy(x+1,y+1))/9
An integral image obtained by integrating differential images averaged in this way closely resembles an image obtained by averaging the original image. Equations 28 and 29 show the case of sharpening the image.
(Equation 28)
Ex(x,y) = Ex(x,y) - (Dx(x-1,y-1) + Dx(x,y-1) + Dx(x+1,y-1) + Dx(x-1,y) + Dx(x,y) + Dx(x+1,y) + Dx(x-1,y+1) + Dx(x,y+1) + Dx(x+1,y+1))/9
(Equation 29)
Ey(x,y) = Ey(x,y) - (Dy(x-1,y-1) + Dy(x,y-1) + Dy(x+1,y-1) + Dy(x-1,y) + Dy(x,y) + Dy(x+1,y) + Dy(x-1,y+1) + Dy(x,y+1) + Dy(x+1,y+1))/9
An integral image obtained by integrating differential images sharpened in this way closely resembles an image obtained by sharpening the original image.
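Equations 26 to 29 are 3*3 box-filter operations on the differential images. A minimal sketch using scipy (an assumption; boundary handling at the image edges follows scipy's default reflection rather than the literal equations):

```python
from scipy.ndimage import uniform_filter

def smooth_diff(Dx, Dy):
    """Equations 26-27: 3*3 averaging of the differential images."""
    return uniform_filter(Dx, size=3), uniform_filter(Dy, size=3)

def sharpen_diff(Ex, Ey, Dx, Dy):
    """Equations 28-29: subtract the 3*3 local mean of Dx, Dy from Ex, Ey."""
    return Ex - uniform_filter(Dx, size=3), Ey - uniform_filter(Dy, size=3)
```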
In addition to the above, for example, the processing of Non-Patent Document 3 can be applied to the differential images rather than to the input image. In that case, the absolute value image (the square root of the sum of squares) of the X-direction and Y-direction partial differential images is taken, the direction of minimum change among eight directions is found on the absolute value image, and along that direction the X-direction partial differential image is processed using the X-direction partial differential image and the Y-direction partial differential image using the Y-direction partial differential image. Further, referring to the absolute value image, adaptive processing can be performed in which the way the differential image values are changed depends on the result of evaluating whether the point of interest is an edge portion or a flat portion.
Edge-preserving processing, including the processing of Non-Patent Document 3, can thus be performed. In edge-preserving processing, adaptive processing can be performed in which the differential image values are changed according to the result of evaluating whether the point of interest of the differential image is an edge portion or a flat portion. This adaptive processing is effective as a filter that preserves edge portions and smooths flat portions.
In the above description, the input image 201 was an image with a monochrome gray scale; for a color image, the differential image modification and related processing can be performed similarly. For example, denoting the RGB colors in order by a color number c and the input image 201 by I[c], the images Ex[c], Ey[c], Dx[c], and Dy[c] are obtained by adding [c] to each image in the above equations and can be computed independently. However, since Equation 23 sums the respective contributions, it can be given by the following Equation 30, which sums over color numbers 0 to 2.
(Equation 30)
D(x,y) = sqrt(Dx[0](x,y)*Dx[0](x,y) + Dy[0](x,y)*Dy[0](x,y) + Dx[1](x,y)*Dx[1](x,y) + Dy[1](x,y)*Dy[1](x,y) + Dx[2](x,y)*Dx[2](x,y) + Dy[2](x,y)*Dy[2](x,y))
In the above description, the colors were the three RGB colors, but multispectral and hyperspectral images have more than three colors. Even in this case, processing can be performed in the same way as for three colors.
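A minimal numpy sketch of Equation 30, generalized to an arbitrary number of channels C, assuming Dx and Dy are stacked as arrays of shape (C, H, W):

```python
import numpy as np

def gradient_magnitude_color(Dx, Dy):
    """Equation 30 for an arbitrary number of channels (RGB: C = 3)."""
    return np.sqrt((Dx * Dx + Dy * Dy).sum(axis=0))
```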
In the above description, the image was a two-dimensional image, but MRI, X-ray CT (Computed Tomography), and the like may yield three-dimensional images. Denoting a three-dimensional image by I(x,y,z), there are three differentiation directions, X, Y, and Z, and Dz(x,y,z) = I(x,y,z+1) - I(x,y,z) can be defined. In this case, as for two-dimensional images, the path matching integral value can be taken as the average of the line integral values along the shortest paths.
For example, moving from the point (x,y,z) to the point (x,y+1,z+1) is a two-dimensional movement within the yz plane, so the value is the average over two paths. Moving from the point (x,y,z) to the point (x+1,y+1,z+1), however, is a three-dimensional movement in xyz space. In this case, the line integral values at the three points (x,y+1,z+1), (x+1,y,z+1), and (x+1,y+1,z) are obtained, line integral values are obtained for the three paths from these points toward the point (x+1,y+1,z+1), and their average can be taken as the integral value. For block division, a two-dimensional image was divided into planar blocks; a three-dimensional image can be divided into cubic blocks.
In two dimensions, four block division integral images with divisions differing by half a block were created; in three dimensions, eight block division integral images of cubes with divisions differing by half a cubic block are created. In two dimensions, the block average integral image was obtained from Equations 18 to 22; in three dimensions, it is created from the symmetric extension of Equations 18 to 22 with the z direction added. Four and more dimensions can be handled similarly.
In the above description, the input image 201 was prepared as the image input, but another input image may be prepared in addition to the input image 201. Creating the differential images of each of these input images and integrating the modified differential image obtained by adding the two yields an image equal to the sum of the two input images.
Suppose now that the other input image is cut out along the contour of a subject and is smaller than the input image 201. In this case, where the differential image of the other input image overlaps the differential image of the input image 201, the values of the overlapped portion of the differential image of the input image 201 can be set to 0 before the differential image of the other input image is added. Integrating this modified differential image yields a seamless composite image in which the luminance gradient of the original input image 201 is ignored in the overlapped portion.
Alternatively, if the values of the differential image of the input image 201 in the overlapped portion are added to the nearest contour portion, the overlapped portion is set to 0, and the differential image of the other input image is then added, a block average integral image in which the other input image is superimposed can be created from the modified differential image without impairing the overall luminance of the original input image 201.
FIG. 15 is a flowchart illustrating level adjustment processing of an integral image according to the fifth embodiment.
In step S50 of FIG. 15, the image processing apparatus 111, after creating the path matching integral image 206, adjusts the level of the path matching integral image 206 based on the input image 201 to create a level-adjusted path matching integral image 271. In the level adjustment, for example, pixel values of the path matching integral image 206 exceeding the maximum value of the input image 201 are set to the maximum value of the input image 201, and pixel values below the minimum value of the input image 201 are set to the minimum value of the input image 201. Alternatively, when image levels are expressed in a predetermined range such as 0 to 255, pixel values of the path matching integral image 206 below 0 (the minimum predetermined value) can be set to 0, and those above 255 (the maximum predetermined value) can be set to 255.
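This level adjustment is a clamp of the integral image to the input image's range; a minimal numpy sketch:

```python
import numpy as np

def level_adjust(integral_img, input_img):
    """Step S50: clamp the path matching integral image to the input's range."""
    return np.clip(integral_img, input_img.min(), input_img.max())
```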
Thereby, even when the maximum or minimum luminance of the path matching integral image 206 differs from that of the input image 201, the path matching integral image 206 can be displayed at an appropriate level. Even when the display level of the path matching integral image 206 differs from that of the input image 201, the path matching integral image 206 can be displayed over the range given by the maximum and minimum values of the input image 201, rather than over the range given by its own maximum and minimum values, preventing an unnatural display of the path matching integral image 206.
FIG. 16 is a diagram showing integral images with and without the processing of FIG. 15.
In FIG. 16, the block average integral image 206e is displayed with its display level set to the range from the minimum value 3 to the maximum value 244 of the input image 201. The block average integral image 206e2, on the other hand, is displayed with its display level set to the range from the minimum value -128 to the maximum value 356. The block average integral images 206e and 206e2 were created with the same parameters, but their appearance, such as contrast, differs because of the difference in display level.
As another way to display the path matching integral image 206 of FIG. 2 at an appropriate level, information on a default display level of the path matching integral image 206 can be generated; the image display processing can then refer to the default display level when displaying the path matching integral image 206, or a user viewing the displayed image can change the default display level, with the display level adjusted according to the change before the path matching integral image 206 is displayed.
FIG. 17 is a flowchart illustrating display processing of an integral image according to the sixth embodiment.
In step S60 of FIG. 17, the default display level of the path matching integral image 206 is determined based on the input image 201. At this time, a default value of the display level is calculated or set, and an attributed integral image 282, in which the default value is added to the path matching integral image 206 as attribute data, is created and stored. For example, when the image processing device 121 of FIG. 1 performs this processing, the attributed integral image 282 is stored in the storage device 124.
The default display level has an upper limit value indicating the upper end of the display level and a lower limit value indicating its lower end. For example, the upper limit can be set to the maximum value of the input image 201 and the lower limit to its minimum value; alternatively, the upper limit can be set to 255 and the lower limit to 0.
Next, in step S61, image display processing of the attributed integral image 282 is executed. At this time, the attributed integral image 282 stored in step S60 is read and displayed on the display device. For example, when the image processing device 121 performs this processing, the attributed integral image 282 is read from the storage device 124 and displayed on the display device 122.
The attributed integral image 282 provides, together with the path matching integral image 206, the default upper and lower limit values of the display level as attribute data. The image processing device 121 displays the path matching integral image 206 over the range between the upper and lower limit values.
FIG. 18 is a diagram showing integral images with and without the processing of FIG. 17.
In FIG. 18, when changing the default display level, the upper and lower limit values of the display level can be specified, for example, from the input device 123 of FIG. 1. For example, the block average integral image 206e, whose display level is set to the range from the minimum value 3 to the maximum value 244 of the input image 201, can be displayed at the default display level. A block average integral image 206e3, whose display level is set to the range from 100 to the maximum value 244 of the input image 201, can also be displayed.
FIG. 19 is a block diagram showing an example of the configuration and processed images of an image processing apparatus according to the seventh embodiment.
In FIG. 19, this image processing apparatus includes a differential processing unit 311, a differential image change processing unit 312, and a path matching integration processing unit 313.
The differential processing unit 311 creates differential images by differentiating an input image 301. When the input image 301 is a two-dimensional image, the differential processing unit 311 performs one-dimensional differentiation in each of the X and Y directions, creating an X-direction partial differential image 302 and a Y-direction partial differential image 303. The input image 301 is not limited to an image captured by the imaging device 100 of FIG. 1; it may be a computer graphics image created by an image processing device, a copied image, or a scanned image.
As the input image 301, a character image with a gentle shading gradient in the background is taken as an example. Such a gentle background gradient can arise from lighting conditions when the imaging device 100 is a camera. The image size of the input image 301 is 256*256; when the pixel (x,y) is in the background, its value is 128 + ((128-x) + (128-y))*0.5, and when the pixel (x,y) is in a character, its value is 50 lower than that of the background. The pixel values of the X-direction partial differential image 302 and the Y-direction partial differential image 303 are then 0.5 in the background; at the edges of the characters, the square root of the sum of squares of the two values ranges from 49 to 51 depending on the angle of the character; and inside the characters the pixel values are 0.
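A minimal sketch of how such a test image could be generated, assuming a 256*256 boolean character mask (the rendered text itself is not specified here):

```python
import numpy as np

def make_input_image(char_mask):
    """Background: 128 + ((128-x) + (128-y))*0.5; characters: background - 50.
    char_mask: 256*256 boolean array, True inside character strokes."""
    y, x = np.mgrid[0:256, 0:256]
    background = 128 + ((128 - x) + (128 - y)) * 0.5
    return background - 50 * char_mask
```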
The differential image change processing unit 312 creates an X-direction modified partial differential image 304 and a Y-direction modified partial differential image 305 by modifying the X-direction partial differential image 302 and the Y-direction partial differential image 303. For example, the differential image change processing 312 can set a value of the X-direction partial differential image 302 to 0 if its absolute value is at or below a threshold lv3, and likewise set a value of the Y-direction partial differential image 303 to 0 if its absolute value is at or below the threshold lv3. The threshold lv3 is set to a value that zeroes out the pixel values arising from the gentle gradient of the background.
In the example of FIG. 19, since the input image 301 was created with a luminance change of 0.5 per pixel in both the X and Y directions in the background, the threshold lv3 was set to 1. The X-direction modified partial differential image 304 and the Y-direction modified partial differential image 305 then have 0 in the background, the edges of the characters are almost unchanged, and the interiors of the characters remain 0.
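A minimal numpy sketch of this zeroing of small partial derivatives, with lv3 as described above:

```python
import numpy as np

def zero_small_gradients(Dx, Dy, lv3=1.0):
    """Set partial derivatives with |value| <= lv3 to 0 (background removal)."""
    Ex = np.where(np.abs(Dx) <= lv3, 0.0, Dx)
    Ey = np.where(np.abs(Dy) <= lv3, 0.0, Dy)
    return Ex, Ey
```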
The path matching integration processing unit 313 creates a path matching integral image 306 by performing path matching integration of the X-direction modified partial differential image 304 and the Y-direction modified partial differential image 305. When setting the display level of the path matching integral image 306, the path matching integration processing unit 313 can refer to a uniform image 307.
Here, the path matching integration processing unit 313 creates, for example, a block average integral image as the path matching integral image 306 by the processing of FIG. 13. In this case, a block division integral image is created for each block of the block average integral image by the processing of FIG. 4. The average value of the block division integral image of each block is matched to the average value of the uniform image 307 (that is, a predetermined uniform value).
In the example of FIG. 19, the image size of each block of the block division integral image was set to half the image size of the input image 301; since the image size of the input image 301 is 256*256, the block size is 128*128. The value of the uniform image 307 was set to the intermediate value 128.
By differentiating the input image 301, the X-direction partial differential image 302 and the Y-direction partial differential image 303 can be created, in which the values arising from the gently shaded background of the input image 301 are made small while the edges of the characters are preserved. At this time, the values at the character edge portions of the X-direction partial differential image 302 and the Y-direction partial differential image 303 can be made larger than the values in the background portions.
Further, by performing the modification that sets the small background values of the X-direction partial differential image 302 and the Y-direction partial differential image 303 to 0, the X-direction modified partial differential image 304 and the Y-direction modified partial differential image 305 can be created, in which the value of the gently shaded background of the input image 301 is 0 while the edges of the characters are preserved.
By setting the background values to 0, the background remains 0 even after integration. Therefore, by integrating the X-direction modified partial differential image 304 and the Y-direction modified partial differential image 305, the gentle shading of the background can be removed while the edges of the characters of the input image 301 are preserved, and a path matching integral image 306 with a uniform background density can be created.
Furthermore, in the input image 301, the characters are bright where the background is bright and dark where the background is dark; for example, the character A is brighter than the character F. By matching the average value of the block division integral image of each block to the average value of the uniform image 307, the shading across characters can be equalized; in the path matching integral image 306, the brightness of the character A and that of the character F are nearly equal.
FIG. 20 is a flowchart showing binarization processing of the path matching integral image 306 of FIG. 19.
In step S70 of FIG. 20, the image processing apparatus 111 binarizes the path matching integral image 306 created by the processing of FIG. 19 to create a binarized image 322. The binarization sets a pixel value to 0 when it is at or below a threshold and to 1 when it is above the threshold. The binarized image need not take the two values 0 and 1; the pixel values may be multiplied by 255 so that the image takes the two values 0 and 255. Here, the binarized image is assumed to take the two values 0 and 255.
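A minimal numpy sketch of step S70 with the 0/255 convention used here, where the threshold is a parameter:

```python
import numpy as np

def binarize(img, threshold):
    """Step S70: 0 at or below the threshold, 255 above it."""
    return np.where(img > threshold, 255, 0).astype(np.uint8)
```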
FIG. 21 is a diagram showing binarized images of the input image 301 and the path matching integral image 306 of FIG. 19.
In FIG. 21, a binarized image 308 was created by binarizing the input image 301, whose background shading is gently sloped, with the binarization threshold set to 103. In the binarized image 308, where the background is bright, both the characters and the background become white and cannot be distinguished; where the background is dark, both become black and likewise cannot be distinguished.
In contrast, when the path matching integral image 306 is created from the input image 301 by the processing of FIG. 19 and a binarized image 309 is created by binarizing the path matching integral image 306, only the background can be made white and only the characters black. Setting the binarization threshold anywhere from 98 to 135 yielded a binarized image 309 in which the background and the characters are separated. Here, the input image 301 was created so as to lie in the luminance range from 0 to 255, and the path matching integral image 306 was level-adjusted to a maximum value of 255 and a minimum value of 0.
As described above, according to the seventh embodiment, even when no binarization threshold that separates the background and the characters can be set because the background shading of the input image 301 has a gentle slope, a path matching integral image 306 can be created in which the edges of the characters of the input image 301 are preserved and the gentle slope of the background shading is removed, and a binarization threshold that separates the background and the characters can then be set.
FIG. 22 is a diagram showing an example of other processed images of the image processing apparatus of FIG. 19.
In FIG. 22, assume that an input image 301c has been input to the image processing apparatus of FIG. 19. The background of the input image 301c is given a shading of 0.1 per pixel from right to left, centered at 128, and a uniform square of density 128 is placed at the center of the input image 301c. In this example, the differential image change processing 312 performs the modification using Equations 3 and 4 with lv set to 1. The input image 301c is displayed with image values from 115 to 142 mapped from minimum to maximum luminance. The square placed at the center of the input image 301c has uniform density, but the background has a gradation; as a result, even though the interior of the square is actually uniform, an optical illusion arises in which a gradation appears to lie across the interior of the square.
In the block average integral image 306b, obtained by matching the average value of the block division integral image of each block created from the input image 301c to the average value of the uniform image 307, a gradation appears inside the square placed at the center, but slight shading changes also appear in the background: for example, the background becomes darker around the bright part of the square's interior and brighter around its dark part.
From this block average integral image 306b, a block average integral image 306c can be obtained by, for example, setting the background outside the central square to a constant value of 128. In the block average integral image 306c, the shading changes of the background are removed while the gradation of the central square is preserved, yielding an image that reproduces the human optical illusion.
FIG. 23 is a flowchart illustrating a method for creating modified differential images in an image processing apparatus according to the eighth embodiment.
In FIG. 23, this image processing apparatus includes differential image change processing units 400a and 400b. The differential image change processing unit 400b includes a convolutional neural network 401, which consists of a convolution layer 402 and a neural net layer 403. The differential image change processing unit 400a includes the convolution layer 402.
The convolution layer 402 performs convolution of the input image. The input image can be an edge image created by subtracting the image from a locally averaged version of the image. The convolution layer 402 performs convolutions over small regions such as 3*3 or 5*5 across the entire image and stacks the resulting images to extract image features.
The neural net layer 403 performs the computation for recognizing the features captured by the convolution layer 402. The recognition rate can be improved by deep learning, in which deep computation is performed over many layers. The neural net layer 403 outputs a recognition result 404.
Further, by performing a recognized-object extraction process 421 on the output of the neural net layer 403, a recognized object can be displayed enclosed in a rectangle, displayed with its contour outlined, or displayed with not only its contour but also its interior filled in black.
The network structure of the convolutional neural network 401 and methods of deep learning are described, for example, in the following Reference 4. As described in Reference 4, the convolution layer 402 and the neural net layer 403 can be trained by pre-training or by deep learning.
Reference 4: Takayuki Okatani, "Deep Learning", Kodansha
When the differential image change processing units 400a and 400b are used in place of the differential image change processing unit 212 of FIG. 2, differential images are used as the input of the convolutional neural network 401. Since there are two differential images, the X-direction partial differential image 202 and the Y-direction partial differential image 203, the amount of input data is twice the usual, and twice the usual network capacity is also required. The differential images carry information from which the original image can be reproduced up to an offset; the original image can be reproduced if some information about the offset is provided, or if the images are restricted to ones that can be calibrated internally.
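Purely as an illustration (the patent specifies no architecture), a convolutional network taking the two differential images as a two-channel input might be sketched in PyTorch as follows; every layer size here is an assumption:

```python
import torch.nn as nn

class DiffImageNet(nn.Module):
    """Toy CNN whose input is the stacked (Dx, Dy) differential images."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(                   # convolution layer 402 (sketch)
            nn.Conv2d(2, 16, kernel_size=3, padding=1),  # 2 input channels: Dx and Dy
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),
        )
        self.classifier = nn.Sequential(                 # neural net layer 403 (sketch)
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, num_classes),
        )

    def forward(self, dx_dy):                            # dx_dy: (N, 2, H, W)
        return self.classifier(self.features(dx_dy))
```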
When the convolution layer 402 has performed convolution of the input image, the differential image change processing unit 400a performs simplification processing 411 on the data appearing in the convolution layer 402. The simplification processing 411 performs sparsification that restricts the data appearing in the convolution layer 402 to the convolution having the maximum value at each pixel.
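One possible reading of this sparsification, shown below as an assumption rather than the specified procedure, keeps at each pixel only the convolution channel with the maximum value and zeroes the rest; the feature maps are assumed to be stacked in a NumPy array of shape (channels, height, width):

    import numpy as np

    def sparsify_per_pixel_max(features: np.ndarray) -> np.ndarray:
        # Simplification 411 (sketch): keep only the strongest channel per pixel
        winner = np.argmax(features, axis=0)  # (H, W) index of the maximal channel
        channels = np.arange(features.shape[0])[:, None, None]
        return np.where(winner[None, :, :] == channels, features, 0.0)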
Next, the differential image change processing unit 400a performs image reconstruction processing 412 on the sparse data. In the image reconstruction processing 412, learning is performed on the sparse data so that the output image becomes as close as possible to the input image. By performing sparsification and obtaining a reconstructed image in which the features of the image appear, an output image that clearly expresses the features of the input image can be created; this can be used for noise removal and for creating deformed images.
The differential image change processing unit 400a takes the images obtained by this image reconstruction processing 412 as the X-direction changed partial differential image 204a and the Y-direction changed partial differential image 205a. Since these images are low-noise output images that well express the features of the input image, their integral image is an output image in which noise is reduced and the form of the input image appears clearly.
On the other hand, the differential image change processing unit 400b learns the extraction processing 421 for objects recognized via the neural net layer 403. For this learning, a widely known deep learning method can be used, with training data consisting of input images, object recognition flags, and data indicating extraction areas.
After performing the recognized-object extraction processing 421 on the output of the neural net layer 403, the differential image change processing unit 400b performs simplification processing 422. In the simplification processing 422, based on the information of the object extracted in the extraction processing 421, the convolutional neural network 401 is traced backward, and the parts of the convolution layer 402 related to the extracted object are selected and sparsified.
Next, the differential image change processing unit 400b performs image reconstruction processing 423 on the selected sparse data. In the image reconstruction processing 423, learning is performed on the selected sparse data so that the output image becomes as close as possible to the input image.
The differential image change processing unit 400b takes the images obtained by this image reconstruction processing 423 as the X-direction changed partial differential image 204b and the Y-direction changed partial differential image 205b. Since these images are low-noise output images that well express the features of the object extracted from the input image, their integral image is an output image in which noise is reduced and the form of the extracted object appears clearly.
FIG. 24 is a block diagram illustrating a configuration of the path matching integration processing unit of the image processing device according to the ninth embodiment.
In FIG. 24, the image processing apparatus includes a neural net layer 431 and a parameter update amount calculation unit 432. The neural net layer 431 creates a path matching integral image 433 based on the input image 201, the X-direction changed partial differential image 204, and the Y-direction changed partial differential image 205. The parameter update amount calculation unit 432 calculates the parameter update amount of the neural net layer 431 based on the path matching integral images 206 and 433, and updates the parameters of the neural net layer 431.
At this time, the parameter update amount calculation unit 432 can use, as the path matching integral image 206, for example a block average integral image as the reference data image for learning. The parameter update amount calculation unit 432 can then make the neural net layer 431 learn so that the path matching integral image 433 output from the neural net layer 431 approaches the reference data image as closely as possible. In this learning, for the difference image between the path matching integral images 206 and 433, the sum of squares over the pixels is used as an evaluation value, and a parameter update amount that reduces this evaluation value is obtained. The calculation of the parameter update amount can be performed with deep learning techniques such as backpropagation, which reduces the evaluation value.
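A minimal training-step sketch of this learning scheme follows, assuming PyTorch as one possible framework (the specification names none); the three input channels stand for the input image 201 and the changed partial differential images 204 and 205, the target is the reference path matching integral image 206, and the network shape is purely illustrative:

    import torch
    import torch.nn as nn

    # Illustrative stand-in for the neural net layer 431:
    # 3 input channels (input image, X/Y changed partial differential images) -> 1 output image
    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, kernel_size=3, padding=1),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()  # squared pixel differences as the evaluation value (up to scale)

    def train_step(inputs: torch.Tensor, reference: torch.Tensor) -> float:
        # inputs: (N, 3, H, W); reference: (N, 1, H, W) path matching integral image 206
        optimizer.zero_grad()
        output = model(inputs)             # corresponds to the output image 433
        loss = loss_fn(output, reference)  # evaluation value (unit 432)
        loss.backward()                    # backpropagation computes the update direction
        optimizer.step()                   # parameter update
        return loss.item()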
By training the neural net layer 431 in this way, the neural net layer 431 can be given the function of creating an image substantially equivalent to the path matching integral image 206. The neural net layer 431 trained in this way has some individuality depending on the reference data images used for learning and on the network structure, but through machine learning it acquires the function of performing a calculation similar to the path matching integration that created the path matching integral image 206 used for learning. That is, the neural net layer 431 trained in this way can execute path matching integration, or perform path matching integration with slight modifications, and can be used as the path matching integration processing unit 213 in FIG. 2. It can function on its own as a unit that performs calculations including path matching integration.
In addition, the neural net layer 431 can create a path matching integral image based on an image obtained by adding noise to the input image 201, the X-direction changed partial differential image 204, and the Y-direction changed partial differential image 205. The parameter update amount calculation unit 432 can then use the path matching integral image 206 as the reference data image for learning and make the neural net layer 431 learn so that its output image approaches the reference data image as closely as possible. In this case, the neural net layer 431 performs a process in which a noise removal function is added to the integration processing that created the path matching integral image 206 used for learning. The neural net layer 431 trained in this way can perform the path matching integration that created the path matching integral image 206, and can be used as the path matching integration processing unit 213 in FIG. 2.
In the embodiments described above, a differential image is created by differentiating the input image, a modified differential image is created by changing the differential image, and a path integral image is created by path-integrating the modified differential image.
However, an image obtained by multiplying the differential image by k (k being a real number greater than 1) has the same line integral value regardless of the path. In this case, there exists a simple process that yields the same, mathematically equivalent result without performing the path matching integration in which a line integral value is obtained for each path of the k-multiplied differential image and the average is taken as the integral value. That is, when restricted to images obtained by multiplying the differential image by k, the path matching integral image is equal to the original image simply multiplied by k, except for an offset.
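In the continuous analogue, this is simply the linearity of the line integral; with f the original image, γ any path from p0 to p, and k a constant (a sketch of the reasoning, not a formula from the specification):

    \int_{\gamma} k\,\nabla f \cdot d\mathbf{r} = k \int_{\gamma} \nabla f \cdot d\mathbf{r} = k\,\bigl(f(p) - f(p_0)\bigr)

Every path therefore yields the same value, and the resulting integral image is k times the original image, up to the offset fixed by the chosen initial value.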
A simple process that yields the same result, mathematically equivalent to the path matching integration that obtains a line integral value for each path of the k-multiplied differential image and takes the average as the integral value, is described below.
FIG. 25 is a flowchart illustrating a block image creation method of the image processing apparatus according to the tenth embodiment.
The process S80 in FIG. 25 includes steps S81 to S88. In step S81, the image 231 is divided into blocks, and a block of interest 240 is selected from the blocks in order.
Next, in step S82, with the input image 201 as input, the image of the block of interest 240 is multiplied by a constant (for example, k).
Next, in step S84, the average value of the input image 201 is calculated. At this time, the average value within the average value calculation area 236 of FIG. 8 is obtained for the input image 201.
Next, in step S85, the average value of the constant-multiplied image is calculated. At this time, the average value within the average value calculation area 236 is obtained for the constant-multiplied image obtained in step S82.
Next, in step S86, an offset value is calculated. At this time, the difference between the average value obtained in step S84 and the average value obtained in step S85 is taken as the offset change amount.
Next, in step S87, the offset of the constant-multiplied image is changed. At this time, the offset change amount is added to the constant-multiplied image obtained in step S82 to create an offset-changed constant-multiplied image.
Next, in step S88, if the block of interest 240 is the last block of the image 231, the process ends; otherwise, the process returns to step S81. When the last block has been processed, a block-division constant-multiplied image 206j having the same size as the input image 201 is obtained.
This process of creating the block-division constant-multiplied image 206j yields a result mathematically equivalent to the process of creating the block average integral image 206j from the k-multiplied differential image, while reducing the amount of computation.
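The sketch below is one way process S80 could be implemented, assuming a grayscale NumPy array, taking the average value calculation area 236 to be the whole block, and using the input image 201 itself as the reference whose block averages are matched (all names are illustrative):

    import numpy as np

    def block_constant_multiply(image: np.ndarray, k: float, B: int) -> np.ndarray:
        # Process S80 (sketch): multiply each BxB block by k, then shift its offset
        # so that the block average matches the reference (here the input image itself).
        out = np.empty(image.shape, dtype=float)
        H, W = image.shape
        for y0 in range(0, H, B):                            # S81: next block of interest
            for x0 in range(0, W, B):
                block = image[y0:y0 + B, x0:x0 + B].astype(float)
                scaled = k * block                           # S82: constant multiplication
                offset = block.mean() - scaled.mean()        # S84-S86: offset change amount
                out[y0:y0 + B, x0:x0 + B] = scaled + offset  # S87: offset-changed block
        return out  # S88: block-division constant-multiplied image 206j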
When the input image 201 is used as the reference image for matching the block averages, the block average constant-multiplied image 206j can be displayed at the same display level as the input image 201. In this case, since the image is multiplied by k within each block, the local contrast of the block average constant-multiplied image 206j is k times that of the input image 201.
For example, if in Equations 23 to 25 the differential image is multiplied by k with lv = 0 and then integrated block by block, a block-division integral image in which local shading changes are multiplied by k is obtained. The process of creating the block-division integral image involves differentiating the input image and integrating block by block; in the process of creating the block-division constant-multiplied image, replacing the differentiation of the input image and the block-by-block integration with block-by-block constant multiplication yields a result mathematically equivalent to the process of creating the block-division integral image.
This is because the image obtained by multiplying the differential image by k and performing path matching integration is equal, except for an offset, to the input image before differentiation multiplied by k. In the block-division integral image, an offset adjustment is performed to match the average value within each block to the input image 201, the uniform image 307, or the like. Similarly, for the block-division constant-multiplied image, multiplying the image within each block by k and matching the average value within the block to the input image 201, the uniform image 307, or the like realizes a mathematically equivalent process, including the offset.
However, if the image within a block is multiplied by k and the average value is matched to the input image 201, the local contrast within the block becomes k times larger, but block distortion arises from density discontinuities between blocks. This can be resolved by creating a plurality of block-division constant-multiplied images according to the eleventh embodiment of FIG. 26, described later. In that case, the shading correction involves slight gentle gradients, so the factor-k local contrast changes slightly and becomes approximately k times.
Note that luminance values exceeding the upper or lower limit of the image display level are clipped to those limits. If densities beyond the limits must also be expressed, this can be handled either by changing the display level of the image, or by using, as the reference image for average matching, an image obtained by processing the input image 201 so as to change the minimum or maximum of the average luminance within blocks.
FIG. 26 is a flowchart illustrating a block distortion removal method for a block image according to the eleventh embodiment.
In step S41 of FIG. 26, the division methods are determined for creating four block-division constant-multiplied images whose divisions differ from one another by half a block.
Next, in step S80a, the first block-division constant-multiplied image is created by executing the process S80 of FIG. 25 in accordance with the first division Ba.
In step S80b, the second block-division constant-multiplied image is created by executing the process S80 of FIG. 25 in accordance with the second division Bb.
In step S80c, the third block-division constant-multiplied image is created by executing the process S80 of FIG. 25 in accordance with the third division Bc.
In step S80d, the fourth block-division constant-multiplied image is created by executing the process S80 of FIG. 25 in accordance with the fourth division Bd.
Next, in step S42, for each pixel of interest, weights are determined according to the distance between the pixel of interest and the center of the corresponding block in each of the four block-division constant-multiplied images to which the pixel belongs, and a weighted average of the four block-division constant-multiplied images is performed to create a block average constant-multiplied image 206k.
The above processing has been described using the input image 201 as the reference image for matching the block averages, but the reference image may instead be the uniform image 307 or an image obtained by processing the input image 201.
Also, while a method of creating four block-division constant-multiplied images with different divisions for the weighted average has been described, more block-division constant-multiplied images may be created, and weighted averaging schemes other than the one described above may be used.
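Building on the block_constant_multiply sketch above, the following sketch shows one plausible realization of steps S41 and S42; it assumes an even block size B, half-block-shifted tilings, and bilinear (triangular) weights toward each tiling's block center, which happen to sum to one at every pixel, although the specification only requires weights that depend on the distance to the block centers:

    import numpy as np

    def block_constant_multiply_shifted(image, k, B, oy, ox):
        # Process S80 (sketch) on a tiling whose block grid is shifted by (oy, ox)
        out = np.empty(image.shape, dtype=float)
        H, W = image.shape
        for y0 in range(-oy, H, B):
            for x0 in range(-ox, W, B):
                ys = slice(max(y0, 0), min(y0 + B, H))
                xs = slice(max(x0, 0), min(x0 + B, W))
                block = image[ys, xs].astype(float)
                scaled = k * block
                out[ys, xs] = scaled + (block.mean() - scaled.mean())
        return out

    def block_average_constant_multiply(image, k, B):
        # Steps S41-S42 (sketch): blend four half-block-shifted images, weighting each
        # pixel by its closeness to the containing block's center in each tiling
        H, W = image.shape
        yy, xx = np.mgrid[0:H, 0:W]
        acc = np.zeros((H, W))
        for oy in (0, B // 2):
            for ox in (0, B // 2):
                img = block_constant_multiply_shifted(image, k, B, oy, ox)
                cy = ((yy + oy) // B) * B - oy + (B - 1) / 2.0  # block centers, this tiling
                cx = ((xx + ox) // B) * B - ox + (B - 1) / 2.0
                w = (1.0 - np.abs(yy - cy) / (B / 2.0)) * (1.0 - np.abs(xx - cx) / (B / 2.0))
                acc += w * img
        return acc  # block average constant-multiplied image 206k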
As described above, according to the eleventh embodiment, the image is divided into blocks, the image within each block is multiplied by a constant, and average matching processing is performed to bring the average value within each block's average value calculation area to a desired value, yielding a block-division constant-multiplied image.
Next, a plurality of block-division constant-multiplied images with different divisions are created, and a weighted average is performed according to the distance between the pixel of interest and the center of each block of the plurality of block-division constant-multiplied images to which the pixel belongs, creating a block average constant-multiplied image.
This makes it possible to multiply the local contrast within blocks by k while reducing the amount of computation, and to eliminate the block distortion caused by density discontinuities between blocks. For example, even for images with large amounts of data, such as four-dimensional CT images in which time information is added to three-dimensional CT images, local contrast can be emphasized while reducing the time required for image processing, improving diagnostic imaging capability.
The process of creating the block average constant-multiplied image multiplies the image within each block by a constant, with an offset adjustment performed in each block. Therefore, performing constant multiplication within each block creates the block average constant-multiplied image up to an image offset. Many processes are mathematically equivalent to multiplying the image within a block by a constant and adjusting the offset.
For example, the image within a block can be Fourier-transformed, multiplied by a constant, and inverse Fourier-transformed, or wavelet-transformed, multiplied by a constant, and inverse wavelet-transformed. Mathematically, for any linear transformation, the transformed image can be multiplied by a constant and then inverse-transformed. Here, the process of multiplying an image by a constant includes the process of multiplying a transformed image by a constant and inverse-transforming it, and the process of multiplying the image by a constant except for the offset.
The embodiments described above explained a method of calculating one integral value from a plurality of line integral values when the line integral values along the plurality of paths obtained by path integration of the modified differential image differ. Alternatively, the modified differential image may be matched so that its line integral value is uniquely determined regardless of the path, and the matched modified differential image may then be line-integrated along an arbitrary path to obtain a mathematically equivalent path matching integral image.
The following describes the process of matching the modified differential image so that its line integral value is uniquely determined regardless of the path, and line-integrating the matched modified differential image.
FIG. 27 is a block diagram illustrating a configuration of an image processing device according to the twelfth embodiment.
In FIG. 27, the image processing apparatus includes a differential processing unit 211, a differential image change processing unit 212, a matched differential image creation processing unit 214, and a matched differential image integration processing unit 215. The differential processing unit 211 and the differential image change processing unit 212 are configured as in FIG. 2.
The matched differential image creation processing unit 214 creates an X-direction matched partial differential image 207 and a Y-direction matched partial differential image 207 in which the X-direction changed partial differential image 204 and the Y-direction changed partial differential image 205 have been matched. The X-direction matched partial differential image 207 and the Y-direction matched partial differential image 207 are images whose line integral values are uniquely determined regardless of the path.
The matched differential image integration processing unit 215 creates a path integral image 209 by path-integrating the X-direction matched partial differential image 207 and the Y-direction matched partial differential image 207. When setting the display level of the path integral image 209, the matched differential image integration processing unit 215 can refer to the input image 201. The process of creating the path integral image 209 is mathematically equivalent to the process of creating the path matching integral image 206 in FIG. 2.
Starting from the initial point, the matched differential image integration processing unit 215 obtains integral values for successive neighboring points by performing line integration with reference to the X-direction matched partial differential image 207 and the Y-direction matched partial differential image 207. In the matched partial differential images, even if there are multiple paths of the same distance to a point, the line integral value does not change, so any path may be followed. For example, the path can be narrowed down to one, such as by deciding on an order that gives left-right steps priority over up-down steps. When the integral values of all desired points have been obtained and substituted into the path integral image 209, the process ends.
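A sketch of this integration with one fixed path choice (first row via the X gradients, then downward via the Y gradients) follows; it assumes matched gradient arrays Ex[y, x] (step from (x, y) to (x+1, y)) and Ey[y, x] (step from (x, y) to (x, y+1)) and an initial value f0 at the top-left point:

    import numpy as np

    def integrate_matched(Ex: np.ndarray, Ey: np.ndarray, f0: float) -> np.ndarray:
        # Line-integrate matched gradients; path independence lets one path suffice.
        # Shapes assumed: Ex is (H, W-1), Ey is (H-1, W) for an (H, W) result.
        H, W = Ey.shape[0] + 1, Ex.shape[1] + 1
        F = np.empty((H, W))
        F[0, 0] = f0
        for x in range(W - 1):            # first row: walk right along Ex
            F[0, x + 1] = F[0, x] + Ex[0, x]
        for y in range(H - 1):            # remaining rows: step down along Ey
            F[y + 1, :] = F[y, :] + Ey[y, :]
        return F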
The following describes a method of creating a matched differential image in which the modified differential image has been matched. First, the modified differential image E is substituted into the differential image E'. At this point, the differential image E' is not yet matched; the matched differential image is finally obtained by sequentially applying the matching processing described below. At the first substitution, the values of the X-direction changed partial differential image 204 are substituted into the X-direction differential image Ex', and the values of the Y-direction changed partial differential image 205 into the Y-direction differential image Ey'.
Next, the differential image E' of the adjacent four-point mesh containing the initial point is matched. Matching means changing the values of the differential image E' so that the line integral value becomes the same regardless of the path.
FIG. 28 is a diagram illustrating gradients within an adjacent four-point mesh when the image processing apparatus of FIG. 27 creates a matched differential image.
In FIG. 28, line integration is performed along paths from the initial point (x0, y0) to the point (x0+1, y0+1). Consider the adjacent four-point mesh whose vertices are the four points (x0, y0), (x0+1, y0), (x0+1, y0+1), and (x0, y0+1). The paths from the initial point (x0, y0) to the point (x0+1, y0+1) are the path R1, (x0, y0) → (x0+1, y0) → (x0+1, y0+1), and the path R2, (x0, y0) → (x0, y0+1) → (x0+1, y0+1).
The path matching integration of the twelfth embodiment can be executed using Equations 31 to 34 below, where F is the path matching integral image and the value of the input image 201 at (x0, y0) is substituted into F(x0, y0).
(Equation 31)
e1 = Ex'(x0, y0) + Ey'(x0+1, y0)
(Equation 32)
e2 = Ey'(x0, y0) + Ex'(x0, y0+1)
(Equation 33)
e = (e1 + e2) / 2 = (Ex'(x0, y0) + Ey'(x0+1, y0) + Ey'(x0, y0) + Ex'(x0, y0+1)) / 2
(Equation 34)
F(x0+1, y0+1) = F(x0, y0) + e
Using Equations 31 to 33, the values of the X-direction differential image Ex' and the Y-direction differential image Ey' are then matched by the calculations of Equations 35 and 36 below.
(Equation 35)
Ey'(x0+1, y0) = e - Ex'(x0, y0)
(Equation 36)
Ex'(x0, y0+1) = e - Ey'(x0, y0)
After this matching processing, as shown in Equations 37 and 38 below, the line integral values of both paths R1 and R2 become identical, so the line integral value is uniquely determined whichever of the paths R1 and R2 is used.
(Equation 37)
F(x0+1, y0+1) = F0(x0, y0) + Ex'(x0, y0) + Ey'(x0+1, y0) = F0(x0, y0) + e
(Equation 38)
F(x0+1, y0+1) = F0(x0, y0) + Ey'(x0, y0) + Ex'(x0, y0+1) = F0(x0, y0) + e
Matching processing is also performed on the other adjacent four-point meshes containing the initial point (x0, y0). Within such a mesh, taking the initial point (x0, y0) as the reference, there are two paths from a point at distance 1 to a point at distance 2. In the matching processing, so that the line integral values of these two paths become the same value, the average of the line integral values along the two paths is taken as the new line integral value, and the values of the differential image E', which are the gradients from the point at distance 1 toward the point at distance 2, are changed.
When the point of an adjacent four-point mesh closest to the initial point (x0, y0) is at distance k from the initial point, there are two paths within the mesh from the point at distance k+1 to the point at distance k+2. Accordingly, the values of the differential image E' representing the gradients from the point at distance k+1 to the point at distance k+2 can be changed so that the average of the line integral values along the two paths becomes the new line integral value.
Starting the matching processing from the adjacent four-point mesh containing the initial point (x0, y0) and proceeding to meshes at successively greater distances, when the matching processing has been completed for all adjacent four-point meshes in the area for which integral values are to be obtained, a matched differential image is obtained as the differential image E'.
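The sketch below illustrates this matching sweep for the simplest case, a quadrant whose initial point sits at the array origin so that α = β = 1 everywhere (Equations 31 to 36); the raster-order sweep, which visits meshes in nondecreasing distance, is an assumed ordering rather than one prescribed by the specification:

    import numpy as np

    def match_gradients(Ex: np.ndarray, Ey: np.ndarray) -> None:
        # In-place matching of Ex[y, x] and Ey[y, x] with the initial point at (0, 0).
        # Shapes assumed: Ex is (H+1, W) and Ey is (H, W+1) so every mesh is covered.
        H, W = Ey.shape[0], Ex.shape[1]
        for y in range(H):
            for x in range(W):
                e1 = Ex[y, x] + Ey[y, x + 1]   # Equation 31: right, then down
                e2 = Ey[y, x] + Ex[y + 1, x]   # Equation 32: down, then right
                e = 0.5 * (e1 + e2)            # Equation 33: average over the two paths
                Ey[y, x + 1] = e - Ex[y, x]    # Equation 35
                Ex[y + 1, x] = e - Ey[y, x]    # Equation 36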
FIG. 29 is a flowchart showing the matched differential image creation processing of the image processing apparatus of FIG. 27.
In step S91 of FIG. 29, the matched differential image creation processing unit 214 of FIG. 27 obtains, for each adjacent four-point mesh of an image 231 of the same size as the input image 201, the distance from the initial point P0. The distance between an adjacent four-point mesh and the initial point P0 is taken as the distance of the mesh point closest to the initial point P0 among the four points of the mesh.
Next, in step S92, the matched differential image creation processing unit 214 substitutes the modified differential image E into the differential image E'. At this time, the values of the X-direction changed partial differential image 204 are substituted into the X-direction differential image Ex', and the values of the Y-direction changed partial differential image 205 into the Y-direction differential image Ey'. At this first substitution, the differential image E' is not yet matched.
Next, in step S93, the matched differential image creation processing unit 214 performs the matching processing on the differential image E' in ascending order of the distances of the adjacent four-point meshes. When the matching processing has been completed for the adjacent four-point meshes containing all points for which integration is desired, the differential image E' has become a matched differential image.
In the matching processing, each adjacent four-point mesh of interest is matched with respect to the two paths from its point closest to the initial point P0 to its point farthest from the initial point P0. For example, if the point of the mesh of interest closest to the initial point P0 is (x, y) and the point farthest from the initial point P0 is (x+α, y+β), matching is performed using Equations 39 to 43 below, where α and β are each 1 or -1.
(Equation 39)
e1' = α*Ex'(x+(α-1)/2, y) + β*Ey'(x+α, y+(β-1)/2)
(Equation 40)
e2' = β*Ey'(x, y+(β-1)/2) + α*Ex'(x+(α-1)/2, y+β)
(Equation 41)
e' = (e1' + e2') / 2
(Equation 42)
Ey'(x+α, y+(β-1)/2) = β(e' - α*Ex'(x+(α-1)/2, y))
(Equation 43)
Ex'(x+(α-1)/2, y+β) = α(e' - β*Ey'(x, y+(β-1)/2))
In the matching processing above, as shown in Equation 41, the values of the differential image are changed so that the average of the line integral values along the two paths within the adjacent four-point mesh becomes the new line integral value. Various other ways of matching are possible. For example, instead of the average value (e1' + e2')/2 in Equation 41, e' may be set to either e1' or e2', to a weighted average with appropriate weights, or to a value slightly deviating from the average.
As described above, the process of matching the values of the differential image so that line integral values along different paths coincide and then performing line integration is substantially equivalent to the process of taking, as the integral value, the average of the line integral values along the different paths of the unmatched differential image. Therefore, the process of creating a differential image matched by the averaging of Equation 41 and line-integrating it is included in the same-distance-path average integration.
Note that line-integrating the matched differential image matched using the average value obtained by Equation 41 yields the same-distance-path average integral image, while differentiating the same-distance-path average integral image yields the matched differential image matched using the average value of Equation 41. The two are therefore images connected by a transformation and its inverse.
Furthermore, since the differential image obtained by differentiating a path matching integral image is a matched differential image, the process of modifying the modified differential images (the X-direction changed partial differential image 204 and the Y-direction changed partial differential image 205) so as to obtain this matched differential image is also included in the matching processing.
The example of FIG. 27 shows a configuration in which the differential processing unit 211, the differential image change processing unit 212, the matched differential image creation processing unit 214, and the matched differential image integration processing unit 215 are provided in one image processing apparatus. These units may also be realized as separate programs or provided in separate devices so as to operate independently. For example, the matched differential image integration processing unit 215, which takes the output of the matched differential image creation processing unit 214 as input, may be realized as a separate program or provided in a separate device, and the integration processing of the matched differential image may be performed by that separate program or device.
FIG. 30 is a diagram showing an example of the distances of the adjacent four-point meshes when the image processing apparatus of FIG. 27 creates a matched differential image.
In FIG. 30, each adjacent four-point mesh is labeled with its distance from the initial point P0. For example, there are four adjacent four-point meshes at distance 0 containing the initial point P0, eight adjacent meshes at distance 1 next to them, and twelve adjacent meshes at distance 2 next to those. Although FIG. 30 shows only 64 adjacent four-point meshes in total, in practice distances can be assigned to the adjacent four-point meshes containing all points for which integral values are desired within an image 231 of the same size as the input image 201.
Besides the method described above, the methods described in References 5 and 6 below, among others, can also be used to match the differential image.
Reference 5: Vishal M. Patel, Ray Maleh, Anna C. Gilbert, and Rama Chellappa, "Gradient-Based Image Recovery Methods from Incomplete Fourier Measurements", IEEE Transactions on Image Processing, Vol. 21, No. 1, January 2012.
Reference 6: Tal Simchony, Rama Chellappa, and M. Shao, "Direct Analytical Methods for Solving Poisson Equations in Computer Vision Problems", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 12, No. 5, May 1990.
In Reference 5, the captured data are X-ray CT and MRI data, and matching processing is performed on differential images created from such data. The matching processing of Reference 5 is described in its section "V. IMAGE RECONSTRUCTION FROM GRADIENTS".
Reference 6 creates, from the shadows and light rays of a captured image, a differential image representing unevenness in the direction perpendicular to the imaging surface of the subject, and performs matching processing on this differential image representing the unevenness.
References 5 and 6 therefore differ from the present embodiment in overall configuration: they do not "obtain a differential image of an input image, change the differential image, and perform matching processing on the changed differential image followed by line integration or path matching integration", which characterizes the overall configuration of the present embodiment.
FIG. 31 is a block diagram illustrating a hardware configuration example of the image processing apparatus of FIG. 1.
In FIG. 31, the image processing apparatus 111 includes a processor 11, a communication control device 12, a communication interface 13, a main storage device 14, and an external storage device 15, which are interconnected via an internal bus 16. The main storage device 14 and the external storage device 15 are accessible from the processor 11.
An input device 20 and an output device 21 are provided outside the image processing apparatus 111 and are connected to the internal bus 16 via an input/output interface 17.
The input device 20 is, for example, a keyboard, a mouse, a touch panel, a card reader, or a voice input device. The output device 21 is, for example, a screen display device (a liquid crystal monitor, an organic EL (Electro Luminescence) display, a graphics card, or the like), an audio output device (a speaker or the like), or a printing device.
The processor 11 is hardware that controls the operation of the entire image processing apparatus 111. The processor 11 may be a general-purpose processor or a dedicated processor specialized for image processing. The main storage device 14 can be composed of semiconductor memory such as SRAM or DRAM. The main storage device 14 can store the program being executed by the processor 11 and provide a work area for the processor 11 to execute the program.
The external storage device 15 is a storage device with a large storage capacity, for example a hard disk drive or an SSD. The external storage device 15 can hold executable files of various programs and data used for executing the programs, and can store an image processing program 15A. The image processing program 15A may be software installable in the image processing apparatus 111, or may be incorporated in the image processing apparatus 111 as firmware.
The communication control device 12 is hardware with a function of controlling communication with the outside. The communication control device 12 is connected to a network 19 via the communication interface 13. The network 19 may be a WAN (Wide Area Network) such as the Internet, a LAN (Local Area Network) such as WiFi, or a mixture of WAN and LAN.
The input/output interface 17 converts data input from the input device 20 into a data format that the processor 11 can process, and converts data output from the processor 11 into a data format that the output device 21 can output.
The processor 11 reads the image processing program 15A into the main storage device 14 and executes it, thereby creating a differential image by differentiating the input image, creating a modified differential image by changing the differential image, and creating a path integral image by path-integrating the modified differential image.
At this time, the image processing program 15A can realize the functions of the differential processing unit 211, the differential image change processing unit 212, and the path matching integration processing unit 213 of FIG. 2.
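As a rough picture of how such a program could wire these functions together, the sketch below chains placeholder implementations of the three units, assuming forward differences for the differentiation 211 and constant multiplication as the change 212; since k-scaled forward differences remain matched, the single-path integration sketch given earlier can serve as the integration 213 (all function names are illustrative, not identifiers from the specification):

    import numpy as np

    def differentiate(image: np.ndarray):
        # Unit 211 (sketch): forward differences as X/Y partial differential images
        Ex = np.diff(image.astype(float), axis=1)  # step from (x, y) to (x+1, y)
        Ey = np.diff(image.astype(float), axis=0)  # step from (x, y) to (x, y+1)
        return Ex, Ey

    def change(Ex, Ey, k: float):
        # Unit 212 (sketch): one possible change, multiplying the gradients by k
        return k * Ex, k * Ey

    def process(image: np.ndarray, k: float) -> np.ndarray:
        # Pipeline 211 -> 212 -> 213 using the integrate_matched sketch above
        Ex, Ey = differentiate(image)
        Ex2, Ey2 = change(Ex, Ey, k)
        return integrate_matched(Ex2, Ey2, f0=float(image[0, 0]))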
The execution of the image processing program 15A may be shared among a plurality of processors or computers. Alternatively, the processor 11 may instruct a cloud computer or the like via the network 19 to execute all or part of the image processing program 15A and receive the execution results.
100: imaging device; 111, 121, 131: image processing device; 112, 122, 132: display device; 113, 123, 133: input device; 114, 124, 134: storage device; 140: communication network

Claims (20)

1. An image processing device comprising an offset adjustment unit that adjusts an offset of a partial area of a first processed image obtained by processing an input image, based on a pixel value of a reference image associated with the partial area of the input image, wherein the reference image is the input image or a second processed image obtained by processing the input image.
2. The image processing device according to claim 1, wherein the first processed image or the second processed image is an image on which image processing for changing a local contrast of the input image has been performed.
3. The image processing device according to claim 1, wherein the offset adjustment unit replaces an average value or a weighted average value of pixel values of the partial area of the first processed image with a pixel value of the reference image associated with the partial area of the input image or with a value calculated from the pixel value.
4. The image processing device according to claim 1, wherein the process of creating the second processed image includes a process of calculating an average value or a weighted average value of an evaluation area of the reference image corresponding to a half-length evaluation area located at the center of the partial area of the input image.
5. The image processing device according to claim 1, wherein the process of creating the second processed image includes a process of calculating an average value or a weighted average value of a half-length evaluation area located at the center of a partial area of an image obtained by processing the input image.
6. The image processing device according to claim 1, wherein the offset adjustment unit adjusts an offset of each block of the first processed image based on pixel values of the reference image associated with blocks into which the input image is divided.
7. An image processing device comprising: a differential processing unit that creates a differential image by differentiating an input image; a differential image change processing unit that creates a modified differential image by changing the differential image; and a path integration processing unit that creates a path integral image by path-integrating the modified differential image.
8. The image processing device according to claim 7, wherein, when the input image is a two-dimensional image, the differential processing unit generates an X-direction partial differential image of the input image and a Y-direction partial differential image of the input image.
9. The image processing device according to claim 8, wherein the differential image change processing unit creates an X-direction changed partial differential image by changing the X-direction partial differential image and a Y-direction changed partial differential image by changing the Y-direction partial differential image, and the path integration processing unit refers to the X-direction changed partial differential image when performing path integration in the X direction and refers to the Y-direction changed partial differential image when performing path integration in the Y direction.
10. The image processing device according to claim 7, wherein the path integration is path matching integration that calculates one integral value based on a plurality of integral values obtained by path integration of the modified differential image.
11. The image processing device according to claim 10, wherein in the path integration: an integral value of an adjacent point adjacent to a first point of interest is obtained based on line integration by addition and subtraction of the integral value of the first point of interest and the pixel value of the modified differential image at the position of the adjacent point; and, taking the adjacent point whose integral value has been obtained as a second point of interest, an integral value of an adjacent point adjacent to the second point of interest is obtained based on line integration by addition and subtraction of the integral value of the second point of interest and the pixel value of the modified differential image at the position of that adjacent point.
  12.  The image processing device according to claim 11, wherein, in the path matching integral,
     when there are a plurality of paths having the same discrete distance between an initial point and a point whose integral value in the path integral image is to be obtained, the average of the values obtained by performing the line integral along each of the plurality of paths is taken as the integral value of that point, and
     this averaging process is repeated sequentially up to a predetermined range, so that the integral values of all points in the predetermined range are obtained.
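(A sketch of one way the path matching integral of claim 12 could be realized, assuming the initial point is the top-left pixel and that "discrete distance" means Manhattan distance: pixels at distance d are computed from already-integrated neighbors at distance d-1, and equal-distance arrivals are averaged into a single integral value.)

    import numpy as np

    def path_matching_integral(dx, dy, init_value=0.0):
        # Visits pixels in order of Manhattan distance from (0, 0). Where
        # several shortest paths reach a pixel (from the left and from
        # above), their line-integral values are averaged, so each pixel
        # receives exactly one integral value (claim 12 sketch).
        h, w = dx.shape
        out = np.zeros((h, w))
        out[0, 0] = init_value
        for d in range(1, h + w - 1):               # discrete distance from the initial point
            for y in range(max(0, d - w + 1), min(d, h - 1) + 1):
                x = d - y
                arrivals = []
                if x > 0:                           # arrive by an X-direction step
                    arrivals.append(out[y, x - 1] + dx[y, x - 1])
                if y > 0:                           # arrive by a Y-direction step
                    arrivals.append(out[y - 1, x] + dy[y - 1, x])
                out[y, x] = sum(arrivals) / len(arrivals)
        return out

When dx and dy come from an unmodified image, this reproduces that image up to the additive init_value; when they have been changed, the averaging resolves the resulting path dependence.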
  13.  The image processing device according to claim 7, wherein the path integration processing unit
     divides the image on which the path integration is performed into blocks,
     creates a path integral image by path integrating the image in each block, and
     adjusts an offset of the path integral image.
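(A sketch of the block scheme of claim 13 under assumed details: each block is path-integrated independently, reusing path_matching_integral from the claim 12 sketch, and the per-block offset — which the integral leaves undetermined — is then adjusted. Matching each block's mean to the mean of the corresponding block of the original input is one plausible offset rule, not necessarily the patent's.)

    import numpy as np

    def blockwise_path_integral(dx, dy, original, block=8):
        # Path-integrate each block independently, then fix its additive
        # offset so the block mean matches the original image's block mean
        # (claim 13 sketch; the offset rule is an assumption).
        h, w = dx.shape
        out = np.zeros((h, w))
        for by in range(0, h, block):
            for bx in range(0, w, block):
                ys = slice(by, min(by + block, h))
                xs = slice(bx, min(bx + block, w))
                tile = path_matching_integral(dx[ys, xs], dy[ys, xs])
                tile += original[ys, xs].mean() - tile.mean()   # offset adjustment
                out[ys, xs] = tile
        return out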
  14.  The image processing device according to claim 13, wherein the path integration processing unit
     determines division methods so as to create a plurality of path integral images whose division methods differ, and
     performs, for the blocks of the plurality of path integral images to which a pixel of interest belongs, a weighted average according to the distance between the center of each block and the pixel of interest.
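(To illustrate the blending of claim 14: given several block-integral images produced with differently-placed block grids, each pixel's results can be averaged with weights that fall off with the distance to the center of the block the pixel fell in. The inverse-distance weight and the half-block grid shift suggested in the usage note are assumptions.)

    import numpy as np

    def weighted_blend(integral_imgs, origins, block=8):
        # Weighted average over path integral images whose block divisions
        # differ (claim 14 sketch). integral_imgs[i] was produced with a
        # block grid whose top-left corner sits at origins[i] (possibly
        # negative); each pixel is weighted by the inverse of its distance
        # to the center of the block containing it in that division.
        h, w = integral_imgs[0].shape
        ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
        num = np.zeros((h, w))
        den = np.zeros((h, w))
        for img, (oy, ox) in zip(integral_imgs, origins):
            cy = np.floor((ys - oy) / block) * block + oy + block / 2.0
            cx = np.floor((xs - ox) / block) * block + ox + block / 2.0
            wgt = 1.0 / (np.hypot(ys - cy, xs - cx) + 1e-6)   # inverse-distance weight
            num += wgt * img
            den += wgt
        return num / den

For example, two divisions shifted by half a block — origins [(0, 0), (-4, -4)] with block=8 — smooth over the block boundaries that a single fixed grid would leave visible.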
  15.  The image processing device according to claim 7, wherein the differential image change processing unit multiplies the pixel values of the differential image by a constant.
  16.  The image processing device according to claim 7, wherein the differential image change processing unit reduces the absolute values of pixel values of the differential image that are lower than a predetermined value.
  17.  An image processing device comprising:
     a differential processing unit that creates a differential image by differentiating an input image;
     a differential image change processing unit that creates a changed differential image by changing the differential image; and
     a matched differential image creation unit that creates a matched differential image by changing the pixel values of the changed differential image so that the integral value of the path integral of the changed differential image is uniquely determined regardless of the path.
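(One assumed way to build the matched differential image of claim 17, shown for illustration only: integrate the possibly inconsistent changed field into a single-valued image with path_matching_integral from the claim 12 sketch, then re-differentiate. Forward differences of any single image are automatically consistent, so every path integral of the re-derived pair is path-independent.)

    import numpy as np

    def matched_differential_images(dx_changed, dy_changed):
        # Claim 17 sketch: changed field -> single-valued image -> matched field.
        u = path_matching_integral(dx_changed, dy_changed)
        mdx = np.zeros_like(u)
        mdy = np.zeros_like(u)
        mdx[:, :-1] = u[:, 1:] - u[:, :-1]   # matched X-direction image
        mdy[:-1, :] = u[1:, :] - u[:-1, :]   # matched Y-direction image
        return mdx, mdy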
  18.  The image processing device according to claim 17, further comprising a path integration processing unit that creates a path integral image by path integrating the matched differential image.
  19.  An image processing device that
     divides an input image into blocks,
     generates a constant-multiplied image by multiplying the pixel values in each block by a constant, and
     adjusts an offset of the constant-multiplied image.
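(A sketch of claim 19 under an assumed offset rule: multiplying a block by a constant scales both its contrast and its mean, so the offset is adjusted here to restore the block's original mean — a local contrast stretch about the block mean. The gain value and offset rule are assumptions.)

    import numpy as np

    def block_constant_multiply(img, gain=1.5, block=8):
        # Claim 19 sketch: per-block constant multiplication followed by
        # offset adjustment. Within each block the result equals
        # gain*(img - mean) + mean, i.e. local contrast is boosted without
        # shifting local brightness.
        img = img.astype(np.float64)
        out = np.empty_like(img)
        h, w = img.shape
        for by in range(0, h, block):
            for bx in range(0, w, block):
                ys = slice(by, min(by + block, h))
                xs = slice(bx, min(bx + block, w))
                tile = img[ys, xs] * gain                                 # constant-multiplied block
                out[ys, xs] = tile + (img[ys, xs].mean() - tile.mean())   # offset adjustment
        return out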
  20.  The image processing device according to claim 19, which
     determines division methods so as to create a plurality of constant-multiplied images whose division methods differ, and
     performs, for the blocks of the plurality of offset-adjusted constant-multiplied images to which a pixel of interest belongs, a weighted average according to the distance between the center of each block and the pixel of interest.

PCT/JP2019/027390 2018-08-01 2019-07-10 Image processing device WO2020026739A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018145435A JP7178818B2 (en) 2018-08-01 2018-08-01 Image processing device
JP2018-145435 2018-08-01

Publications (1)

Publication Number Publication Date
WO2020026739A1

Family

ID=69231061

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/027390 WO2020026739A1 (en) 2018-08-01 2019-07-10 Image processing device

Country Status (2)

Country Link
JP (1) JP7178818B2 (en)
WO (1) WO2020026739A1 (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4799428B2 (en) 2007-01-22 2011-10-26 株式会社東芝 Image processing apparatus and method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010108205A (en) * 2008-10-30 2010-05-13 Hitachi Ltd Super resolution image creating method
JP2012220674A (en) * 2011-04-07 2012-11-12 Canon Inc Image processing device, image processing method, and program
JP2016214615A (en) * 2015-05-21 2016-12-22 コニカミノルタ株式会社 Phase image processing method, phase image processor, image processor and image processing program

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112288620A (en) * 2020-11-04 2021-01-29 北京深睿博联科技有限责任公司 Two-dimensional image line integral calculation method and system based on GPU, electronic equipment and computer storage medium
CN112288620B (en) * 2020-11-04 2024-04-09 北京深睿博联科技有限责任公司 GPU-based two-dimensional image line integral calculation method, system, electronic equipment and computer storage medium

Also Published As

Publication number Publication date
JP7178818B2 (en) 2022-11-28
JP2020021329A (en) 2020-02-06

Similar Documents

Publication Publication Date Title
US11967046B2 (en) Methods and apparatus for enhancing optical images and parametric databases
US11164295B2 (en) Methods and apparatus for enhancing optical images and parametric databases
Chen et al. Robust image and video dehazing with visual artifact suppression via gradient residual minimization
US20180350043A1 (en) Shallow Depth Of Field Rendering
US9922443B2 (en) Texturing a three-dimensional scanned model with localized patch colors
US20160005145A1 (en) Aligning Ground Based Images and Aerial Imagery
Khan et al. Localization of radiance transformation for image dehazing in wavelet domain
CN109817170B (en) Pixel compensation method and device and terminal equipment
JP6623832B2 (en) Image correction apparatus, image correction method, and computer program for image correction
CN106919257B (en) Haptic interactive texture force reproduction method based on image brightness information force
US10475229B2 (en) Information processing apparatus and information processing method
Liu et al. Rank-one prior: Real-time scene recovery
KR20110095797A (en) Image processing apparatus and image processing program
CN113658085B (en) Image processing method and device
CN111798474A (en) Image processing apparatus and image processing method thereof
JP2020061080A (en) Image processing device, imaging device, and image processing method
Zhu et al. Low-light image enhancement network with decomposition and adaptive information fusion
Lei et al. Low-light image enhancement using the cell vibration model
CN113052923B (en) Tone mapping method, tone mapping apparatus, electronic device, and storage medium
JP2021043874A (en) Image processing apparatus, image processing method, and program
WO2020026739A1 (en) Image processing device
JP5652272B2 (en) Image processing apparatus, image processing program, and image processing method
CN110070482B (en) Image processing method, apparatus and computer readable storage medium
JP2015158737A (en) Image processor and image processing method
Peng et al. Detail enhancement for infrared images based on propagated image filter

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19844357

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19844357

Country of ref document: EP

Kind code of ref document: A1