WO2015163171A1 - Image processing apparatus and method and surgical operation system - Google Patents

Image processing apparatus and method and surgical operation system

Info

Publication number
WO2015163171A1
WO2015163171A1 (PCT/JP2015/061311)
Authority
WO
WIPO (PCT)
Prior art keywords
processing
image
range
filter
vertical direction
Prior art date
Application number
PCT/JP2015/061311
Other languages
French (fr)
Japanese (ja)
Inventor
Masato Yamane
Tsuneo Hayashi
Original Assignee
Sony Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corporation
Priority to US15/304,559 priority Critical patent/US10440241B2/en
Priority to EP15782241.2A priority patent/EP3136719A4/en
Priority to JP2016514864A priority patent/JP6737176B2/en
Priority to CN201580020303.2A priority patent/CN106233719B/en
Publication of WO2015163171A1 publication Critical patent/WO2015163171A1/en
Priority to US16/555,236 priority patent/US11245816B2/en

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 3/00 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G 3/20 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix, no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • G09G 3/2085 Special arrangements for addressing the individual elements of the matrix, other than by driving respective rows and columns in combination
    • G09G 3/2088 Special arrangements for addressing the individual elements of the matrix, other than by driving respective rows and columns in combination, with use of a plurality of processors, each processor controlling a number of individual elements of the matrix
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 General purpose image data processing
    • G06T 1/20 Processor architectures; Processor configuration, e.g. pipelining
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 3/00 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G 3/20 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix, no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • G09G 3/34 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix, by control of light from an independent source
    • G09G 3/36 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix, by control of light from an independent source using liquid crystals
    • G09G 3/3611 Control of matrices with row and column drivers
    • G09G 3/3622 Control of matrices with row and column drivers using a passive matrix
    • G09G 3/3644 Control of matrices with row and column drivers using a passive matrix with the matrix divided into sections
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G 5/36 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image

Definitions

  • The present technology relates to an image processing apparatus and method, and a surgical system, and more particularly to an image processing apparatus and method, and a surgical system, capable of realizing image display with low latency.
  • For example, a technique has been proposed that displays an image at high speed by dividing the image in the vertical direction and using a plurality of processors to process each divided area in parallel (see Patent Document 1).
  • The present technology has been made in view of such a situation. In particular, the image is divided in the horizontal direction and assigned to a plurality of processors, and each processor performs time-division processing on its assigned area in the vertical direction. For the areas divided in the vertical direction, the largest overhead is set in the head area and the areas are processed sequentially, so that the captured image can be displayed at high speed.
  • An image processing apparatus according to one aspect of the present technology includes a plurality of arithmetic processing units that perform time-division processing for each range obtained by vertically dividing an image of a patient's surgical site, in which the arithmetic processing units process the image, divided in the horizontal direction by the number of the arithmetic processing units, by time division in the vertical direction.
  • The plurality of arithmetic processing units may be configured by a plurality of GPUs (Graphical Processing Units), and each arithmetic processing unit may process the image divided in the horizontal direction by the number of the GPUs.
  • The process applied to the image can be a process of applying an n-stage filter.
  • The n stages of filters process the ranges obtained by vertically dividing the image, in time-division order from the uppermost range in the vertical direction downward.
  • A timing control unit may further be included that controls the timing of the arithmetic processing of the arithmetic processing units, based on the amount of processing to be performed on the image, calculated from the numbers of horizontal and vertical divisions of the image, and on the processing speed of the arithmetic processing units.
  • Of the ranges obtained by time-dividing the image in the vertical direction, the processing range of a first period can include the reference pixels required for the processing of a second period after the first period.
  • The arithmetic processing unit may include a memory for buffering processing results; in the processing of the second period, arithmetic processing can be executed using the processing results corresponding to the reference pixels from among the processing results of the first period buffered in the memory.
  • The arithmetic processing unit may include a memory for buffering processing results. Of the ranges obtained by vertically dividing the image, the uppermost processing range in the vertical direction of the filter at each stage can be a range including the lines of reference pixels required for the filter processing of the processing ranges at and below the second stage in the vertical direction. When executing the arithmetic processing of the filters, for processing that uses the reference pixels, the arithmetic processing unit can execute the arithmetic processing using the processing results corresponding to the reference pixels from among the processing results of the filter processing up to the preceding stage buffered in the memory.
  • The arithmetic processing unit can perform at least an enlargement process on the image obtained by imaging the surgical site of the patient.
  • The image obtained by imaging the surgical site of the patient can be an image taken by an endoscope.
  • The image obtained by imaging the surgical site of the patient can be an image taken by a microscope.
  • An image processing method according to one aspect of the present technology is an image processing method of an image processing apparatus including a plurality of arithmetic processing units that perform time-division processing for each range obtained by vertically dividing an image of a patient's surgical site, in which the arithmetic processing units process the image, divided in the horizontal direction by the number of the arithmetic processing units, by time division in the vertical direction.
  • The image can be an image captured by an endoscope.
  • The image can be an image taken with a microscope.
  • A surgical operation system according to one aspect of the present technology includes an imaging device that images a surgical site of a patient, and an image processing apparatus including a plurality of arithmetic processing units that perform time-division processing for each range obtained by vertically dividing the image captured by the imaging device, in which the arithmetic processing units process the image, divided in the horizontal direction by the number of the arithmetic processing units, by time division in the vertical direction.
  • In one aspect of the present technology, an image of a patient's surgical site is processed in a time-division manner by a plurality of arithmetic processing units for each range obtained by dividing the image in the vertical direction, and the image, divided in the horizontal direction by the number of the arithmetic processing units, is processed by time division in the vertical direction.
  • Display processing of captured images can be realized with low latency, and the captured images can be displayed at high speed in real time.
  • FIG. 2 is a diagram illustrating how the image processing apparatus of FIG. 1 divides an image in the horizontal direction by the number of GPU cards and performs parallel processing.
  • A diagram explaining how the image processing apparatus of FIG. 1 divides an image in the horizontal direction by the number of GPU cards for parallel processing and performs time-division processing in the vertical direction.
  • A diagram showing an example of a filter that processes an image, and a diagram explaining the relationship between a pixel of interest and reference pixels in filter processing.
  • FIG. 17 is a diagram illustrating a configuration example of a general-purpose personal computer.
  • FIG. 1 is a block diagram illustrating a configuration example of an embodiment of an image processing apparatus to which the present technology is applied.
  • The image processing apparatus 11 in FIG. 1 receives input of image data captured by an imaging apparatus such as a camera (not shown), performs various processes, and then outputs the image data to a display apparatus such as a display (not shown), where it is displayed.
  • The image processing apparatus 11 includes a CPU (Central Processing Unit) 31, a main memory 32, a bus 33, an IF (Interface) card 34, and GPU (Graphical Processing Unit) cards 35-1 and 35-2. Note that the GPU cards 35-1 and 35-2 are simply referred to as the GPU card 35 unless they need to be distinguished, and the other components are referred to in the same manner.
  • A CPU (Central Processing Unit) 31 controls the overall operation of the image processing apparatus 11.
  • The CPU 31 includes a DMA (Direct Memory Access) controller 51.
  • The DMA controller 51 controls the transfer source, the transfer destination, and the transfer timing of DMA transfer operations that are not directly managed by the CPU 31.
  • The DMA controller 51 temporarily stores, in the main memory 32, image data supplied as an input signal from a camera (not shown) via the IF card 34 and the bus 33.
  • The DMA controller 51 also divides the image data stored in the main memory 32 according to the stored image data, the processing capabilities of the processors 92-1 and 92-2 of the GPU cards 35-1 and 35-2, and the processing contents. Further, the DMA controller 51 assigns a timing for reading each divided range of the image data and a timing for storing the processed image data again. The DMA controller 51 then sequentially supplies the image data divided at the assigned timings to the GPU cards 35-1 and 35-2 and sequentially stores the processed image data in the main memory 32. Finally, the DMA controller 51 outputs the processed image data stored in the main memory 32 as an output signal to the display (not shown) via the bus 33 and the IF card 34 for display.
  • The IF (Interface) card 34 includes a camera IF 71, a display IF 72, and a PCIe (Peripheral Component Interconnect Express) bridge 73.
  • When the camera IF 71 of the IF card 34 receives image data supplied as an input signal from a camera (not shown), the image data is supplied to the main memory 32 via the PCIe bridge 73 and the bus 33 under the management of the DMA controller 51.
  • The display IF 72 of the IF card 34 outputs the processed image data supplied from the main memory 32 via the bus 33 and the PCIe bridge 73 to the display (not shown) as an output signal under the management of the DMA controller 51.
  • The GPU cards 35-1 and 35-2 include PCIe bridges 91-1 and 91-2, processors 92-1 and 92-2, and memories 93-1 and 93-2, respectively.
  • The GPU card 35 temporarily stores, in the memory 93, image data supplied from the main memory 32 via the bus 33 and the PCIe bridge 91 under the management of the DMA controller 51 of the CPU 31. The processor 92 then performs predetermined processing while sequentially reading the image data stored in the memory 93, buffers the processing results in the memory 93 as necessary, and outputs them to the CPU 31 via the PCIe bridge 91 and the bus 33.
  • In FIG. 1, an example with two GPU cards 35 is shown, but more than two GPU cards may be used.
  • More specifically, image data captured by a camera (not shown) and consisting, for example, of a Bayer-array pixel arrangement (leftmost part in the figure) is subjected to defect correction processing and RAWNR (noise reduction on the RAW data).
  • Next, an R (red) image, a G (green) image, and a B (blue) image (the RGB image in the figure) are generated by demosaic processing. The demosaiced R, G, and B images are further subjected to image quality enhancement processing and then to enlargement processing, generating enlarged R, G, and B images.
  • The R (red), G (green), and B (blue) images generated in this way are output as output signals to a display unit such as a display (not shown) and displayed.
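The stage chain just described can be pictured as a simple function composition. The following is a minimal sketch in Python with NumPy; every stage body is a stub (the function names, the trivial single-channel "demosaic", and the pixel-repetition enlargement are illustrative assumptions, not the patent's actual algorithms), and in the apparatus itself each stage would run on the GPU cards.

```python
import numpy as np

def defect_correction(raw):   # stub: repair defective pixels in the Bayer data
    return raw

def raw_nr(raw):              # stub: noise reduction on the RAW data (RAWNR)
    return raw

def demosaic(raw):            # stub: Bayer mosaic -> 3-channel RGB
    h, w = raw.shape
    rgb = np.empty((h, w, 3), raw.dtype)
    rgb[...] = raw[..., None] # placeholder: copy the mosaic into all channels
    return rgb

def enhance(rgb):             # stub: image quality enhancement
    return rgb

def enlarge(rgb, factor=2):   # upscale by pixel repetition
    return rgb.repeat(factor, axis=0).repeat(factor, axis=1)

def pipeline(raw):
    return enlarge(enhance(demosaic(raw_nr(defect_correction(raw)))))

out = pipeline(np.zeros((1080, 1920), np.uint16))  # e.g. a 1080-line Bayer frame
print(out.shape)                                   # (2160, 3840, 3)
```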
  • First, image data supplied as an input signal is written to and stored in the main memory 32 by the DMA controller 51, as indicated by "DMA INPUT #0" in the figure.
  • Next, the DMA controller 51 supplies the image data stored in the main memory 32 to the GPU card 35, and process A is executed by the processor 92 of the GPU card 35.
  • Then, the DMA controller 51 supplies the image data to the GPU card 35, process B is executed by the processor 92 of the GPU card 35, and the processing result is returned to the main memory 32.
  • Finally, the DMA controller 51 reads out and outputs the image data stored in the main memory 32 that has undergone processes A and B.
  • If one GPU card 35 processed the entire frame before displaying it, nothing could be displayed until the processing result for the whole frame had been generated; the processing time and the latency would become large, and the display could be delayed.
  • Therefore, the frame is divided into several ranges in the vertical direction, and the processing is broken into small parts to reduce the latency.
  • The lower part of FIG. 3 shows an example in which processing is performed with each frame divided into three parts, forming image data #0 to #2.
  • That is, from time t21 to t22, image data #0 supplied as an input signal is written to and stored in the main memory 32 by the DMA controller 51, as indicated by "DMA INPUT #0" in the figure.
  • Next, as indicated by "Process A #0" in the figure, the image data #0 stored in the main memory 32 is supplied to the GPU card 35 by the DMA controller 51, and process A is executed by the processor 92 of the GPU card 35.
  • Then, as indicated by "Process B #0" in the figure, the DMA controller 51 supplies the image data #0 to the GPU card 35, process B is executed by the processor 92 of the GPU card 35, and the processing result is returned to the main memory 32.
  • From time t51 to t52, as indicated by "DMA OUTPUT #0" in the figure, the DMA controller 51 outputs the image data #0 stored in the main memory 32 that has undergone processes A and B.
  • Similarly, the DMA controller 51 supplies the image data #1 stored in the main memory 32 to the GPU card 35, process B is executed by the processor 92 of the GPU card 35, and the processing result is returned to the main memory 32.
  • From time t53 to t54, as indicated by "DMA OUTPUT #1" in the figure, the DMA controller 51 outputs the image data #1 stored in the main memory 32 that has undergone processes A and B.
  • Likewise, the DMA controller 51 supplies the image data #2 stored in the main memory 32 to the GPU card 35, process B is executed by the processor 92 of the GPU card 35, and the processing result is returned to the main memory 32.
  • The DMA controller 51 then outputs the image data #2 stored in the main memory 32 that has undergone processes A and B.
  • In this manner, the image data #0 to #2 are time-division processed, and "DMA INPUT", "Process A", "Process B", and "DMA OUTPUT" are executed in parallel where possible.
  • As a result, the latency can be reduced as a whole.
  • Because the result of each divided range can be output as soon as it is ready, the display is also expedited and the latency can be reduced. A rough numerical sketch of this pipelining follows.
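The effect of this chunked pipelining can be illustrated with a toy schedule. In the sketch below, the four stages and the per-stage times are invented numbers, not values from the patent; each stage is modeled as a resource that handles one chunk at a time.

```python
STAGES = ("DMA INPUT", "Process A", "Process B", "DMA OUTPUT")
FRAME_STAGE_TIME = 4.0                 # assumed time per stage for a whole frame
N_CHUNKS = 3
CHUNK_STAGE_TIME = FRAME_STAGE_TIME / N_CHUNKS

def pipelined_finish(n_chunks, stage_time, n_stages=len(STAGES)):
    # Chunk k enters stage s once it has left stage s-1 and chunk k-1 has
    # freed stage s; free[s] is the time at which stage s becomes free.
    free = [0.0] * n_stages
    done = 0.0
    for _ in range(n_chunks):
        t = 0.0
        for s in range(n_stages):
            start = max(t, free[s])
            free[s] = start + stage_time
            t = free[s]
        done = t
    return done

whole_frame = len(STAGES) * FRAME_STAGE_TIME            # 16.0: fully serial
chunked = pipelined_finish(N_CHUNKS, CHUNK_STAGE_TIME)  # 8.0: stages overlap
print(whole_frame, chunked)
```

With these made-up numbers the last chunk leaves the pipeline at time 8.0 instead of 16.0, and the first chunk is already displayable at about 5.3.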
  • In the present technology, in addition to dividing the image in the vertical direction to reduce latency, the image processing apparatus 11 of FIG. 1 is provided with a plurality of GPU cards 35. That is, when the image data P1 shown in the left part of FIG. 4 is input, it is divided in the horizontal direction as shown in the upper right part of FIG. 4, and each part is processed in a time-sharing manner as described with reference to FIG. 3.
  • Specifically, the area Z1 shown on the left side of the image data P1 is the processing range of "GPU #0", corresponding to the GPU card 35-1, and the area Z2 shown on the right side of the image data P1 is the processing range of "GPU #1", corresponding to the GPU card 35-2.
  • For comparison, the lower right part of FIG. 4 shows an example of the vertical division method used in conventional parallel processing: the upper area Z11 is the processing range of "GPU #0", corresponding to the GPU card 35-1, and the lower area Z12 is the processing range of "GPU #1", corresponding to the GPU card 35-2.
  • That is, the lower right part of FIG. 4 shows an example in which the image is divided into two in the vertical direction between the GPU cards 35.
  • More specifically, when the areas Z1 and Z2 of the image P1 are each divided from the top into ranges C1 to C4 in the vertical direction, the image processing apparatus 11 of FIG. 1 controls the GPU card 35-1 so that it performs time-division processing on the area Z1 in the order of the ranges C1 to C4 (from top to bottom). Similarly, the image processing apparatus 11 of FIG. 1 controls the GPU card 35-2 to perform time-division processing on the area Z2 in the order of the ranges C1 to C4.
  • In this way, the GPU cards 35 process the image in parallel in the horizontal direction, and each GPU card 35 further performs time-division processing in the vertical direction, which makes it possible to speed up the image processing and to reduce the latency. The tiling can be sketched as follows.
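A minimal sketch of this tiling in Python with NumPy, assuming two GPU cards and four vertical ranges as in the figures (the frame size is arbitrary):

```python
import numpy as np

def tile_ranges(height, width, n_gpus=2, n_slices=4):
    """Per GPU, the (rows, cols) slices processed in time order: the frame is
    split into n_gpus columns (areas Z1, Z2, ...) handled in parallel, and
    each column into n_slices row bands (ranges C1..C4) handled top to
    bottom by time division."""
    col_edges = np.linspace(0, width, n_gpus + 1, dtype=int)
    row_edges = np.linspace(0, height, n_slices + 1, dtype=int)
    return [[(slice(row_edges[c], row_edges[c + 1]),
              slice(col_edges[g], col_edges[g + 1]))
             for c in range(n_slices)]
            for g in range(n_gpus)]

for gpu, bands in enumerate(tile_ranges(2160, 3840)):
    for c, (rows, cols) in enumerate(bands):
        print(f"GPU #{gpu} range C{c + 1}: rows {rows.start}-{rows.stop}, "
              f"cols {cols.start}-{cols.stop}")
```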
  • The process executed on the image by the processor 92 of the GPU card 35 is generally a filter process.
  • For example, consider a case in which a Gaussian filter as shown in FIG. 6 is applied three times to each pixel.
  • Here, a Gaussian filter of 3 pixels × 3 pixels is used: the weighting coefficient of the pixel of interest is 4/16, that of each of the four pixels above, below, to the left, and to the right of the pixel of interest is 2/16, and that of each of the four pixels diagonally adjacent to the pixel of interest is 1/16; the filter outputs the sum of the weighted pixel values as the value of the pixel of interest.
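For reference, the kernel just described and its repeated application can be written down directly. This is a sketch only; it uses SciPy's `ndimage.convolve` for the 2-D convolution and a random image as stand-in data.

```python
import numpy as np
from scipy.ndimage import convolve

# The 3x3 Gaussian of FIG. 6: 4/16 at the pixel of interest, 2/16 for the
# four edge neighbors, 1/16 for the four diagonal neighbors.
kernel = np.array([[1, 2, 1],
                   [2, 4, 2],
                   [1, 2, 1]], dtype=float) / 16.0

img = np.random.rand(64, 64)
out = img
for _ in range(3):                        # apply the filter three times
    out = convolve(out, kernel, mode="nearest")
print(out.shape)
```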
  • To apply this filter three times to a pixel of interest P, the first filtering pass must be performed on the range of 5 pixels × 5 pixels centered on the pixel of interest P. For the pixels at the edges of that range, the pixels adjacent on the side away from the pixel of interest, one pixel beyond, are also required, including one further pixel in each diagonal direction. That is, to perform the first filtering pass on the 5 × 5 range indicated by the squares marked "1", a total of 7 pixels × 7 pixels centered on the pixel of interest P in the figure is required.
  • Next, the second filtering pass is performed on the range of 3 pixels × 3 pixels centered on the pixel of interest P. Here too, the pixels adjacent on the side away from the pixel of interest, one pixel beyond the edges and including the diagonal directions, are required. That is, the second filtering pass produces the total of nine pixels in the 3 × 3 range indicated by the squares marked "2" in the figure.
  • Consequently, by using the pixels in the 7 × 7 range indicated by the hatched portion around the pixel of interest P, the filter can be applied three times to the pixel of interest P. In other words, when the pixel of interest is filtered three times, the pixels in the 3 × 3 region centered on the pixel of interest in the third pass become the reference pixels.
  • Conversely, the reference pixels necessary for the second filtering pass are the 3 × 3 neighborhoods centered on each of those nine pixels, so a 5 × 5 range becomes the reference pixels. Furthermore, in the first pass, each of the 5 × 5 pixels needs its own 3 × 3 neighborhood as reference pixels, and as a result a 7 × 7 range is required as reference pixels. This growth of the reference region can be computed directly, as sketched below.
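The growth of the reference region follows directly from the kernel radius: each 3 × 3 pass (radius 1) adds one ring of pixels, so p passes need a (2p + 1) × (2p + 1) square. A one-line helper makes the bookkeeping explicit:

```python
def reference_width(kernel_radius, passes):
    # Side length of the square of reference pixels needed to filter one
    # pixel of interest `passes` times with the given kernel radius.
    return 2 * kernel_radius * passes + 1

for p in (1, 2, 3):
    side = reference_width(1, p)
    print(f"{p} pass(es) -> {side} x {side}")   # 3x3, 5x5, 7x7
```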
  • Hereinafter, the reference pixels other than the pixel of interest that are required when processing the pixel of interest, or the area in which those reference pixels exist, is referred to as overhead. In the example above, overhead pixels are generated for the 48 pixels of the 7 × 7 range excluding the pixel of interest. Note, however, that a reference pixel that can separately become a pixel of interest itself is not an overhead pixel; only a pixel that is not itself a processing target but is required purely as a reference pixel is referred to as overhead. Hereinafter, the overhead width Dp is adopted as a way of expressing the amount of overhead generated for a pixel of interest.
  • For example, an overhead region OHZ1C2 occurs around the region Z1C2 specified by the range C2, the second from the top, in the region Z1 on the left side of the image P1.
  • Since a similar overhead region arises for each of the divided areas, when the image is divided into two in the horizontal direction and four in the vertical direction, the total overhead is estimated to be eight times that of the overhead region OHZ1C2.
  • When the series of filter processes described above is applied, overhead widths Dp of 2, 6, 8, 40, and 8 pixels arise in the successive processes.
  • In total, an overhead with a combined overhead width Dp of 64 pixels occurs; that is, overhead pixels are generated over a range of 129 pixels × 129 pixels, excluding the pixel of interest. Further, for example, when the image is divided into two in the horizontal direction and four in the vertical direction, the overhead may increase by about 30% compared with the case where no division processing is performed. A small sketch of this bookkeeping follows.
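The same accounting can be done for a chain of processes, given each stage's overhead width. The per-stage widths below are the ones quoted above; the 30% figure is not derived here, only the 64-pixel total and the 129 × 129 reference range:

```python
def total_overhead(stage_widths):
    # Total one-sided overhead width Dp, and the side of the square range of
    # reference pixels around the pixel of interest that it implies.
    dp = sum(stage_widths)
    return dp, 2 * dp + 1

dp, side = total_overhead([2, 6, 8, 40, 8])
print(dp, f"{side} x {side}")     # 64 and 129 x 129
```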
  • In view of this, in the present technology, the first range C1 is set to a number of lines wider than 1/4 of the total number of lines of one frame in the vertical direction, so that it includes the lines of all the reference pixels required in the processing described above. The ranges C2 and C3 are then set so that each is 1/4 of the total number of lines, and the remaining lines are set as the last range C4.
  • That is, the figure shows, from the left, the processing ranges applied to the entire image P1 when the first filter processing (filter #1), the second filter processing (filter #2), ..., and the n-th filter processing (filter #n) are performed while the ranges C1 to C4 are processed sequentially from the top.
  • In the first filter processing (filter #1), the processing result of the range C1 is buffered in the memory 93; when the range C2 is processed, the area where the necessary reference pixels exist has already been processed in C1, as indicated by the hatched portion at the lower right, and only needs to be referred to, so no overhead occurs. Further, since the range C1 has a number of lines wider than 1/4 of the total, the range C2 sits closer to the range C3 than it originally would, covering 1/4 of the total number of lines; the area where the reference pixels of the range C3 exist is buffered as the processing result of the range C2, so the filtering need not be performed again and the occurrence of overhead is suppressed.
  • Likewise, the range C3 is positioned closer to the range C4, at 1/4 of the total number of lines; the area where the reference pixels of the range C4 exist is buffered as the processing result of the range C3, so the filtering need not be performed again and the occurrence of overhead is suppressed.
  • In the second filter processing (filter #2), the processing area of the range C1 is again a range wider than 1/4 of the total number of lines, including the reference pixels needed thereafter, but it is narrower than the range C1 of the first filter processing (filter #1), indicated by the upward-sloping hatching at the upper left of the figure. The ranges C2 and C3 are then set so that each is about 1/4 of the total number of lines, and the remaining lines are set as the last range C4.
  • That is, as the number of subsequent filter stages decreases, the area where reference pixels must exist becomes narrower than in the first filter processing (filter #1).
  • Accordingly, the range C1 of the second filter processing is still wider than 1/4 of the total number of lines but narrower than the range C1 of the first filter processing (filter #1).
  • The ranges C2 and C3 are also shifted to positions close to the original 1/4 boundaries of the total number of lines.
  • The range C4 of the second filter processing is set wider than that of the first filter processing (filter #1), its line count increased by the number of lines by which the range C1 was reduced.
  • In other words, for the area where the reference pixels of a subsequent filter process exist, the filtering is performed in advance in the preceding filter process, and the processing result is buffered and then used in the subsequent filter process. A two-stage sketch of this buffering scheme follows.
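The following is a minimal two-stage sketch of this scheme in Python, using SciPy and a generic 3 × 3 box filter as a stand-in for the patent's filters. The first band's stage-1 output is extended downward by the reference margin and buffered, so each later band's stage-2 pass reads buffered rows instead of re-filtering them; the result matches filtering the whole frame at once.

```python
import numpy as np
from scipy.ndimage import convolve

k = np.ones((3, 3)) / 9.0          # stand-in 3x3 filter, radius 1
R = 1                              # reference lines needed per stage

def two_stage_in_bands(img, n_bands=4):
    h = img.shape[0]
    edges = np.linspace(0, h, n_bands + 1, dtype=int)
    stage1 = np.empty_like(img)    # buffer of first-stage results (memory 93)
    out = np.empty_like(img)
    for i in range(n_bands):
        top, bot = edges[i], edges[i + 1]
        # Stage 1: extend each band downward by R lines so the rows the next
        # band will reference are filtered and buffered now; rows already
        # buffered by the previous band are skipped.
        lo = top if i == 0 else top + R
        hi = min(bot + R, h)
        src_lo, src_hi = max(lo - R, 0), min(hi + R, h)
        band = convolve(img[src_lo:src_hi], k, mode="nearest")
        stage1[lo:hi] = band[lo - src_lo: (lo - src_lo) + (hi - lo)]
        # Stage 2: reads its reference rows from the buffer, never recomputes.
        s_lo, s_hi = max(top - R, 0), min(bot + R, h)
        band2 = convolve(stage1[s_lo:s_hi], k, mode="nearest")
        out[top:bot] = band2[top - s_lo: (top - s_lo) + (bot - top)]
    return out

img = np.random.rand(32, 32)
ref = convolve(convolve(img, k, mode="nearest"), k, mode="nearest")
print(np.allclose(two_stage_in_bands(img), ref))  # True
```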
  • In step S11, the camera IF 71 of the IF card 34 accepts input of image data captured by a camera (not shown) and supplies it to the CPU 31 via the PCIe bridge 73 and the bus 33. The CPU 31 stores the supplied image data in the main memory 32.
  • In step S12, based on the image data stored in the main memory 32, the DMA controller 51 divides the image in the horizontal direction according to the number of GPU cards 35 and further divides each divided area in the vertical direction.
  • The amount of processing is then calculated from the number of ranges produced by this division, the number of filters involved in the processing, and the information on the areas where the reference pixels exist.
  • The processing amount is roughly divided into two types, the amount related to the vertical processing and the amount related to the horizontal processing, and the DMA controller 51 calculates and adds them.
  • <Vertical processing amount> The image data stored in the main memory 32 is processed by the first filter processing (filter #1); the result is processed by the second filter processing (filter #2); processing by the third filter processing (filter #3) and so on is repeated; and finally the n-th filter processing is performed and the result is DMA-transferred and output (the output DMA at the upper right in the figure).
  • Here, the number of lines PY(n-1) of the (n-1)-th filter processing (filter #(n-1)) is obtained by the following equation (1):

    PY(n-1) = PY(n) + BY(n-1) × z ... (1)

  • In equation (1), PY(n-1) is the number of lines of the (n-1)-th filter processing (filter #(n-1)), PY(n) is the number of lines of the n-th filter processing (filter #n), and BY(n-1) is the number of lines representing the size of the processing unit block in the (n-1)-th filter processing (filter #(n-1)). z is the minimum value of A such that BY(n-1) × A is larger than the number of reference pixel lines.
  • That is, with respect to the number of lines (the number of processing lines) output by the n-th filter processing (filter #n), the lines constituting the reference pixels of the (n-1)-th filter processing (filter #(n-1)) are the range painted in a grid pattern in the figure.
  • For example, the number of processing lines of the n-th filter processing corresponds to four processing unit blocks, each having the predetermined number of lines indicated by the downward-sloping hatched portion at the lower right of FIG. 12, while the reference pixels of the (n-1)-th filter processing, shown as the grid-patterned range at the lower right of FIG. 12, fall short of a block boundary by several lines.
  • However, each filter processing can only be performed in units of processing unit blocks with a predetermined number of lines, so a portion of less than one block is counted as one block. Thereby, in the lower right part of FIG. 12, z given by equation (1) is determined, and the number of processing lines of the (n-1)-th filter processing (filter #(n-1)) is obtained as, in effect, the number of lines corresponding to seven blocks.
  • In this way, the numbers of processing unit blocks are calculated stage by stage back to the first filter processing (filter #1), the processing amount corresponding to each number of blocks is calculated sequentially, and the total is taken as the vertical processing amount.
  • Note that the number of lines required in each filter processing is set to include the reference pixels required by the subsequent stages, so as to reduce overhead as described above. The backward calculation of equation (1) can be sketched as follows.
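A sketch of this backward calculation under equation (1), in Python; the block sizes and reference-line counts in the example are invented for illustration:

```python
import math

def lines_for_stage(py_next, block_lines, ref_lines):
    # Equation (1): PY(n-1) = PY(n) + BY(n-1) * z, where z is the smallest
    # block count whose lines cover the reference pixels of the stage.
    z = max(1, math.ceil(ref_lines / block_lines))
    return py_next + block_lines * z

def vertical_line_counts(out_lines, stages):
    # Walk backward from the output of filter #n to filter #1; `stages`
    # lists (block_lines, ref_lines) for each earlier filter stage.
    counts = [out_lines]
    for block_lines, ref_lines in reversed(stages):
        counts.append(lines_for_stage(counts[-1], block_lines, ref_lines))
    return counts[::-1]            # counts[0] is PY(1), counts[-1] is PY(n)

# Hypothetical 4-stage chain of (processing-unit-block lines, reference lines):
print(vertical_line_counts(540, [(16, 4), (16, 2), (16, 20), (16, 4)]))
# -> [620, 604, 588, 556, 540]
```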
  • <Horizontal processing amount> The processing amount in the horizontal direction is obtained similarly: starting from the size of the output buffer output by DMA and stored in the main memory 32, the numbers of reference pixels and the processing unit blocks of each of the filter processes, from the first filter processing (filter #1) to the n-th filter processing (filter #n), are obtained sequentially in the reverse order of the processing order.
  • As before, the processing result of the first filter processing is processed by the second filter processing (filter #2); processing by the third filter processing (filter #3) and so on is repeated; and finally the n-th filter processing (filter #n) is performed and the result is DMA-transferred (the output DMA in the figure).
  • The calculation of the horizontal processing amount therefore starts from the width defined as a multiple of the processing unit block in the horizontal direction of the output DMA and works backward through the number of reference pixels and the processing unit block of each filter processing.
  • In the horizontal direction, the overhead-reducing scheme used in the vertical direction is not applied; the width of each filter processing is simply the width obtained by adding the number of processing unit blocks corresponding to the number of reference pixels of each filter processing, and the processing amount corresponds to this width.
  • That is, the horizontal width Xk required for calculating the processing amount of the k-th filter processing (filter #k) is expressed by the following equation (2):

    Xk = w + zx × xk ... (2)

  • In equation (2), Xk is the width required for calculating the processing amount of the k-th filter processing #k, w is the horizontal width set as a multiple of the processing unit block in the n-th filter processing #n, and zx is the width of the processing unit block. xk is the minimum value such that zx × xk is larger than zk, where zk is the sum of the numbers of reference pixels of the filter processes concerned (r1 + r2 + ... + r(k-1) + rk), ri being the number of reference pixels of the i-th filter processing (filter #i).
  • For example, assume that the n-th filter processing (filter #n) is the sixth filter processing (filter #6), whose width is the output buffer size, and that the number of reference pixels of the fifth filter processing (filter #5) with respect to that width is two.
  • If the number of reference pixels of the fourth filter processing (filter #4) is one, then the two reference pixels are added in the fifth filter processing (filter #5), and the cumulative number becomes three in the fourth filter processing, as indicated by the diagonally shaded area at the lower right.
  • In this manner, the widths, each a multiple of the processing unit block to be processed by the respective horizontal filters, are accumulated, and the corresponding processing amounts are obtained sequentially. A sketch of this backward width calculation follows.
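A sketch of the backward width calculation under equation (2). The direction of accumulation follows the worked example above (reference pixels accumulate from the output side back toward filter #1, which is my reading of the example); the output width, block width, and reference counts are illustrative:

```python
import math

def horizontal_width(w, zx, refs, k):
    # Equation (2): Xk = w + zx * xk, with xk the smallest block count such
    # that zx * xk covers zk, the reference pixels accumulated from filter #k
    # through the later filters. refs[i-1] is ri for filter #i.
    zk = sum(refs[k - 1:])
    xk = math.ceil(zk / zx) if zk else 0
    return w + zx * xk

# Per the example: filter #5 has 2 reference pixels, filter #4 has 1.
refs = [0, 0, 0, 1, 2]            # r1..r5; filter #6 is the output stage
for k in range(5, 0, -1):
    print(f"filter #{k}: width {horizontal_width(512, 16, refs, k)}")
```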
  • As described above, the DMA controller 51 calculates the vertical processing amount and the horizontal processing amount according to the numbers of horizontal and vertical divisions of the image, and adds the two to calculate the amount of processing required.
  • In step S13, the DMA controller 51 calculates the processing times of the various filter processes from the processing capacity of the processors 92 mounted on the GPU cards 35 and the processing amount obtained by the above calculation. From the obtained processing times, it further calculates various timings, such as the timings for reading and transferring image data. Through this process, a timing chart is constructed that indicates which image data is transferred to which GPU card 35 at which timing in the subsequent processing.
  • In step S14, the DMA controller 51 starts operating from a predetermined timing based on this timing chart, determines whether the current time is a processing timing, and repeats this determination until the next processing timing is reached.
  • If it is determined in step S14 that it is time to start the next process, the process proceeds to step S15.
  • In step S15, the DMA controller 51 reads the image data set as the next process from the main memory 32 based on the timing chart, transfers it to the GPU card 35 set as the transfer destination, and causes the processor 92 of that GPU card 35 to execute the process.
  • When the processing result is returned, the DMA controller 51 receives it and stores it in the main memory 32.
  • In step S16, the DMA controller 51 refers to the timing chart and determines whether a next process exists. If there is a next process, the process returns to step S14 and the subsequent processing is repeated.
  • That is, steps S14 to S16 are repeated until it is determined in step S16 that there is no next process. When steps S14 to S16 have been repeated and all the processes set in the timing chart have been completed, it is determined in step S16 that there is no next process, and the process proceeds to step S17.
  • In step S17, the DMA controller 51 outputs the image data stored in the main memory 32, which has undergone processing such as image quality enhancement, from the display IF 72 via the bus 33 and the PCIe bridge 73 of the IF card 34 for display.
  • In step S18, the DMA controller 51 determines whether the next image has been supplied. If the next image exists, the process returns to step S11 and the subsequent processing is repeated.
  • If it is determined in step S18 that the next image has not been supplied, the process ends.
  • With the above processing, the image is divided in the horizontal direction among the processors 92 of the plurality of GPU cards 35, and the processors 92 share the processing.
  • Each processor 92 further divides its area into a predetermined number of ranges in the vertical direction and processes the divided ranges by time division. In this time-division processing, the range in which the reference pixels of a subsequent filter process exist is processed in the preceding filter process and buffered in the memory 93.
  • The left part of FIG. 14 shows an example of the numbers of lines set for the ranges C1 to C4 when the reference pixels of each subsequent filter process are buffered during the preceding filter process, so as to reduce overhead, in the various processes of defect correction, RAWNR, demosaic, image quality enhancement, enlargement, and output DMA.
  • From the top, it shows that the numbers of lines of the ranges C1 to C4 are 604, 540, 540, and 476 lines in the defect correction process; 596, 540, 540, and 484 lines in the RAWNR process; 588, 540, 540, and 492 lines in the demosaic process; 548, 540, 540, and 532 lines in the image quality enhancement process; 540, 540, 540, and 540 lines in the enlargement process; and 540, 540, 540, and 540 lines in the output DMA process.
  • When processing is performed with these settings, the processing times of the ranges C1 to C4 are as shown in the right part of FIG. 14. The maximum difference among the total processing times of the ranges C1 to C4 is, for example, the difference between the processing times of the ranges C1 and C4, and is about 5% of the processing time of the range C1. It arises because the numbers of processing lines of the various processes are changed in order to reduce overhead in the vertical-direction filter processing, which changes the individual processing times.
  • In the right part of FIG. 14, the total processing time and the breakdown of the individual processing times are shown for the ranges C1 to C4 from the left.
  • In view of this, the number of lines to be finally output may be adjusted so as to smooth the processing times.
  • That is, as shown in the lower left part of FIG. 15, the non-uniformity of the processing times is counterbalanced by making the numbers of lines non-uniform, for example by setting the number of lines of the output DMA process to 520 lines for the range C1 and 560 lines for the range C4 (a numerical sketch follows the next paragraph).
  • As a result, as shown in the lower right part of FIG. 15, the difference between the processing times of the ranges C1 and C4 is almost eliminated, and the processing times can be smoothed and made uniform as a whole.
  • The upper left and upper right parts of FIG. 15 are the same as the left and right parts of FIG. 14, respectively.
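A numerical sketch of the smoothing: the totals and the per-line output cost are invented, chosen only so that C1 starts about 5% slower than C4 as in the figure; moving 20 output-DMA lines from C1 to C4 (540 to 520, and 540 to 560) nearly closes the gap.

```python
# Illustrative total processing times per range before smoothing.
totals = {"C1": 105.0, "C2": 102.0, "C3": 102.0, "C4": 100.0}
PER_OUTPUT_LINE = 0.125            # assumed cost of one output-DMA line

def smooth(t, moved_lines=20):
    t = dict(t)
    t["C1"] -= moved_lines * PER_OUTPUT_LINE   # C1 now outputs 520 lines
    t["C4"] += moved_lines * PER_OUTPUT_LINE   # C4 now outputs 560 lines
    return t

print(totals)          # spread between C1 and C4: 5.0
print(smooth(totals))  # C1 and C4 both 102.5: the gap is nearly gone
```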
  • Alternatively, processing whose speed does not need to be adjusted in real time may be allotted to the slack time.
  • That is, as shown in FIG. 16, the time zones indicated by the uppermost black ranges in C2 to C4 may be assigned to detection processing or the like so that the processing times become uniform as a whole.
  • As described above, the image is divided in the horizontal direction and assigned to a plurality of processors; each horizontally divided area is time-divided in the vertical direction; and each range divided in the vertical direction is set so as to include the reference pixels required by the subsequent processing. In the processing of the top range, the filtering including the processing of the reference pixels is performed in advance and the result is buffered, and the subsequent filter processes are executed by referring to the buffered results.
  • As a result, the latency of the display processing of the captured image can be reduced, and the captured image can be displayed at high speed, at a timing closer to the actual time at which it was captured.
  • The image processing apparatus 11 of FIG. 1 can be applied to an image processing apparatus that processes images of a patient's surgical site captured by an imaging device such as an endoscope used in endoscopic surgery or a microscope used in neurosurgery, and further to a surgical system that includes an endoscope or a microscope as the imaging device.
  • Further, when programming the processor 92 of the GPU card 35, less consideration needs to be given to time lags in displaying images, so programmability can be improved.
  • Furthermore, when displaying an image received via a broadcast wave or the like, the latency can likewise be reduced, so the image can be displayed with the time lag suppressed.
  • In addition, since the DMA controller 51 calculates the processing amount in advance according to the numbers of reference pixels of the filters used for the processing and the processing unit blocks, and executes the processing after optimizing the reading and writing of the image data, the latency can be reduced optimally regardless of the processing content.
  • The above-described series of processing can be executed by hardware, but can also be executed by software.
  • When the series of processing is executed by software, the program constituting the software is installed from a recording medium into a computer incorporated in dedicated hardware, or into, for example, a general-purpose personal computer capable of executing various functions when various programs are installed.
  • FIG. 17 shows a configuration example of a general-purpose personal computer.
  • This personal computer incorporates a CPU (Central Processing Unit) 1001.
  • An input/output interface 1005 is connected to the CPU 1001 via a bus 1004.
  • A ROM (Read Only Memory) 1002 and a RAM (Random Access Memory) 1003 are connected to the bus 1004.
  • To the input/output interface 1005 are connected: an input unit 1006 including input devices, such as a keyboard and a mouse, with which a user inputs operation commands; an output unit 1007 that outputs processing operation screens and images of processing results to a display device; a storage unit 1008 including a hard disk drive or the like that stores programs and various data; a communication unit 1009 including a LAN (Local Area Network) adapter or the like that executes communication processing via a network represented by the Internet; and a drive 1010 that reads and writes data from and to a removable medium 1011 such as a magnetic disk (including a flexible disk), an optical disk (including a CD-ROM (Compact Disc-Read Only Memory) and a DVD (Digital Versatile Disc)), a magneto-optical disk (including an MD (Mini Disc)), or a semiconductor memory.
  • The CPU 1001 executes various processes according to a program stored in the ROM 1002, or according to a program read from a removable medium 1011 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, installed in the storage unit 1008, and loaded from the storage unit 1008 into the RAM 1003.
  • The RAM 1003 also appropriately stores data necessary for the CPU 1001 to execute various processes.
  • In the computer configured as described above, the CPU 1001 loads, for example, the program stored in the storage unit 1008 into the RAM 1003 via the input/output interface 1005 and the bus 1004 and executes it, whereby the above-described series of processing is performed.
  • The program executed by the computer (CPU 1001) can be provided by being recorded on the removable medium 1011 as a package medium, for example.
  • The program can also be provided via a wired or wireless transmission medium, such as a local area network, the Internet, or digital satellite broadcasting.
  • The program can be installed in the storage unit 1008 via the input/output interface 1005 by attaching the removable medium 1011 to the drive 1010. The program can also be received by the communication unit 1009 via a wired or wireless transmission medium and installed in the storage unit 1008. In addition, the program can be installed in advance in the ROM 1002 or the storage unit 1008.
  • The program executed by the computer may be a program that is processed in time series in the order described in this specification, or a program that is processed in parallel or at necessary timings, such as when a call is made.
  • In this specification, a system means a set of a plurality of components (devices, modules (parts), and the like), and it does not matter whether all the components are in the same housing. Accordingly, a plurality of devices housed in separate housings and connected via a network, and a single device housing a plurality of modules in one housing, are both systems.
  • The present technology can take a cloud computing configuration in which one function is shared and jointly processed by a plurality of devices via a network.
  • Each step described in the above flowchart can be executed by one device or shared among a plurality of devices.
  • When one step includes a plurality of processes, the plurality of processes included in that one step can be executed by one device or shared among a plurality of devices.
  • In addition, the present technology may also be configured as follows. (1) An image processing apparatus including a plurality of arithmetic processing units that perform time-division processing for each range obtained by vertically dividing an image of a patient's surgical site, in which the arithmetic processing units process the image, divided in the horizontal direction by the number of the arithmetic processing units, by time division in the vertical direction. (2) The image processing apparatus according to (1), in which the plurality of arithmetic processing units are configured by a plurality of GPUs (Graphical Processing Units), and each arithmetic processing unit processes the image divided in the horizontal direction by the number of the GPUs.
  • (3) The image processing apparatus according to (1) or (2), in which the process performed on the image is a process of applying an n-stage filter.
  • (4) The image processing apparatus according to (3), in which the n stages of filters process the ranges obtained by vertically dividing the image, in time-division order from the uppermost range in the vertical direction downward.
  • (5) The image processing apparatus according to any one of (1) to (4), further including a timing control unit that controls the timing of the arithmetic processing of the arithmetic processing units, based on the amount of processing to be performed on the image, calculated from the numbers of horizontal and vertical divisions of the image, and on the processing speed of the arithmetic processing units.
  • (6) The image processing apparatus according to any one of (1) to (5), in which, of the ranges obtained by time-dividing the image in the vertical direction, the processing range of a first period includes the reference pixels required for the processing of a second period after the first period.
  • (7) The image processing apparatus according to (6), in which the arithmetic processing unit includes a memory for buffering processing results and, in the processing of the second period, executes arithmetic processing using the processing results corresponding to the reference pixels from among the processing results of the first period buffered in the memory.
  • (8) The image processing apparatus according to (3), in which the arithmetic processing unit includes a memory for buffering processing results; of the ranges obtained by vertically dividing the image, the uppermost processing range in the vertical direction of the filter at each stage is a range including the lines of reference pixels required for the filter processing of the processing ranges at and below the second stage in the vertical direction; and, when executing the arithmetic processing of the filters, for processing that uses the reference pixels, the arithmetic processing unit executes the arithmetic processing using the processing results corresponding to the reference pixels from among the processing results of the filter processing up to the preceding stage buffered in the memory.
  • (9) The image processing apparatus according to any one of (1) to (8), in which the arithmetic processing unit performs at least enlargement processing on the image of the patient's surgical site.
  • (10) The image processing apparatus according to any one of (1) to (9), in which the image of the patient's surgical site is an image captured by an endoscope.
  • (11) The image processing apparatus according to any one of (1) to (9), in which the image of the patient's surgical site is an image captured by a microscope.
  • (12) An image processing method of an image processing apparatus including a plurality of arithmetic processing units that perform time-division processing for each range obtained by vertically dividing an image of a patient's surgical site, in which the arithmetic processing units process the image, divided in the horizontal direction by the number of the arithmetic processing units, by time division in the vertical direction.
  • (13) A surgical operation system including an imaging device that images a surgical site of a patient, and an image processing apparatus including a plurality of arithmetic processing units that perform time-division processing for each range obtained by vertically dividing the image captured by the imaging device, in which the arithmetic processing units process the image, divided in the horizontal direction by the number of the arithmetic processing units, by time division in the vertical direction.

Abstract

This technique relates to an image processing apparatus, an image processing method, and a surgical operation system that reduce latency, thereby enabling a captured image to be displayed in nearly real time. A DMA controller (51) of a CPU (31) divides image data, input via an interface card (34), in the horizontal direction by the number of GPU cards (35-1, 35-2) and allocates the divided image data to the GPU cards (35-1, 35-2). In each of the GPU cards (35-1, 35-2), the image data are subjected to time-division processing in the vertical direction. In this way, multiple GPU cards (35-1, 35-2) are used to speed up the processing related to the display of image data, reducing the latency and achieving high-speed display. This technique can be applied to an endoscopic camera, a surgical microscope, and the like.

Description

Image processing apparatus and method, and surgical system

The present technology relates to an image processing apparatus and method, and a surgical system, and more particularly to an image processing apparatus and method, and a surgical system, capable of realizing image display with low latency.

In recent years, endoscopic surgery has been performed in place of conventional laparotomy in medical practice. Image processing apparatuses used in endoscopic surgery and the like are particularly required to realize image display with low latency.

Meanwhile, techniques have been proposed that enable a captured image to be displayed at high speed with a minimum time lag.

For example, a technique has been proposed that displays an image at high speed by dividing the image in the vertical direction and using a plurality of processors to process each divided area in parallel (see Patent Document 1).

Patent Document 1: JP-A-2-040688
However, with the technique described in Patent Document 1, when an image is divided into line units in a configuration in which each processor has its own independent processing memory, such as a GPU (Graphics Processing Unit), the divided areas must overlap in line units, and the overhead becomes large.

As a result, processing the overhead increases the total number of processing lines, so the amount of calculation increases and the processing speed cannot always be improved.

The present technology has been made in view of such a situation. In particular, the image is divided in the horizontal direction and assigned to a plurality of processors; each processor processes its assigned area by time division in the vertical direction; and, for the areas divided in the vertical direction, the largest overhead is set in the head area and the areas are processed sequentially, so that the captured image can be displayed at high speed.
An image processing apparatus according to one aspect of the present technology includes a plurality of arithmetic processing units that perform time-division processing for each range obtained by vertically dividing an image of a patient's surgical site, in which the arithmetic processing units process the image, divided in the horizontal direction by the number of the arithmetic processing units, by time division in the vertical direction.

The plurality of arithmetic processing units may be configured by a plurality of GPUs (Graphical Processing Units), and each arithmetic processing unit may process the image divided in the horizontal direction by the number of the GPUs.

The process applied to the image may be a process of applying an n-stage filter.

The n stages of filters may process the ranges obtained by vertically dividing the image, in time-division order from the uppermost range in the vertical direction downward.

A timing control unit may further be included that controls the timing of the arithmetic processing of the arithmetic processing units, based on the amount of processing to be performed on the image, calculated from the numbers of horizontal and vertical divisions of the image, and on the processing speed of the arithmetic processing units.

Of the ranges obtained by time-dividing the image in the vertical direction, the processing range of a first period may include the reference pixels required for the processing of a second period after the first period.

The arithmetic processing unit may include a memory for buffering processing results and, in the processing of the second period, may execute arithmetic processing using the processing results corresponding to the reference pixels from among the processing results of the first period buffered in the memory.

The arithmetic processing unit may include a memory for buffering processing results; of the ranges obtained by vertically dividing the image, the uppermost processing range in the vertical direction of the filter at each stage may be a range including the lines of reference pixels required for the filter processing of the processing ranges at and below the second stage; and, when executing the arithmetic processing of the filters, for processing that uses the reference pixels, the arithmetic processing unit may execute the arithmetic processing using the processing results corresponding to the reference pixels from among the processing results of the filter processing up to the preceding stage buffered in the memory.

The arithmetic processing unit may perform at least enlargement processing on the image of the patient's surgical site.

The image of the patient's surgical site may be an image captured by an endoscope.

The image of the patient's surgical site may be an image captured by a microscope.
An image processing method according to one aspect of the present technology is an image processing method for an image processing apparatus including a plurality of arithmetic processing units that process, in a time-division manner, each of the ranges obtained by vertically dividing an image of a patient's surgical site, in which each arithmetic processing unit processes, by time division in the vertical direction, a portion of the image divided in the horizontal direction by the number of arithmetic processing units.

The image may be an image captured by an endoscope.

The image may be an image captured by a microscope.

A surgical operation system according to one aspect of the present technology includes an imaging device that images a patient's surgical site, and an image processing apparatus including a plurality of arithmetic processing units that process, in a time-division manner, each of the ranges obtained by vertically dividing the image captured by the imaging device, in which each arithmetic processing unit processes, by time division in the vertical direction, a portion of the image divided in the horizontal direction by the number of arithmetic processing units.

In one aspect of the present technology, an image of a patient's surgical site is processed by a plurality of arithmetic processing units in a time-division manner for each range obtained by dividing the image in the vertical direction, the image being divided in the horizontal direction by the number of arithmetic processing units and processed by time division in the vertical direction.

According to one aspect of the present technology, the display of captured images can be performed with low latency, making it possible to display captured images at high speed in real time.
FIG. 1 is a block diagram illustrating the configuration of an embodiment of an image processing apparatus to which the present technology is applied.
FIG. 2 is a diagram illustrating processing by the image processing apparatus of FIG. 1.
FIG. 3 is a diagram illustrating the difference between conventional image processing and the image processing of the present technology.
FIG. 4 is a diagram illustrating how the image processing apparatus of FIG. 1 divides an image in the horizontal direction by the number of GPU cards and processes the divisions in parallel.
FIG. 5 is a diagram illustrating how the image processing apparatus of FIG. 1 divides an image in the horizontal direction by the number of GPU cards for parallel processing while also performing time-division processing in the vertical direction.
FIG. 6 is a diagram showing an example of a filter for processing an image.
FIG. 7 is a diagram illustrating the relationship between a target pixel and reference pixels in filter processing.
FIG. 8 is a diagram illustrating the overhead that arises when filter processing is performed in each processing region.
FIG. 9 is a diagram illustrating a concrete example of the overhead that arises when filter processing is performed in each processing region.
FIG. 10 is a diagram illustrating how the processing range of each filter stage is set in the image processing apparatus of FIG. 1.
FIG. 11 is a flowchart illustrating low-latency display processing by the image processing apparatus of FIG. 1.
FIG. 12 is a diagram illustrating a method of calculating the vertical processing amount.
FIG. 13 is a diagram illustrating a method of calculating the horizontal processing amount.
FIG. 14 is a diagram illustrating that the processing time varies depending on the processing range.
FIG. 15 is a diagram illustrating equalizing the processing time of each processing range by adjusting the number of lines output in each processing range.
FIG. 16 is a diagram illustrating equalizing the processing time of each processing range by executing, in the idle time that arises when processing times differ between ranges, processing for which real-time performance is not required.
FIG. 17 is a diagram illustrating a configuration example of a general-purpose personal computer.
<Configuration example of the image processing apparatus>
 FIG. 1 is a block diagram showing a configuration example of an embodiment of an image processing apparatus to which the present technology is applied.
The image processing apparatus 11 of FIG. 1 receives image data captured by an imaging device such as a camera (not shown), applies various kinds of processing to it, and then outputs it to a display device such as a display (not shown) to be displayed as an image.

More specifically, the image processing apparatus 11 includes a CPU (Central Processing Unit) 31, a main memory 32, a bus 33, an IF (Interface) card 34, and GPU (Graphics Processing Unit) cards 35-1 and 35-2. Hereinafter, the GPU cards 35-1 and 35-2 are simply referred to as the GPU cards 35 when they need not be distinguished, and the other components are referred to in the same manner.

The CPU 31 controls the overall operation of the image processing apparatus 11 and includes a DMA (Direct Memory Access) controller 51. DMA here refers to data transfers performed directly among the IF card 34, the main memory 32, and the GPU cards 35 via the bus 33, without direct management by the CPU 31. The DMA controller 51 controls the transfer source, the transfer destination, and the transfer timing of these DMA transfer operations that are not directly managed by the CPU 31.

More specifically, the DMA controller 51 temporarily stores, in the main memory 32, the image data supplied as an input signal from the camera (not shown) via the IF card 34 and the bus 33. The DMA controller 51 divides the image data stored in the main memory 32 according to that image data, the processing capabilities of the processors 92-1 and 92-2 of the GPU cards 35-1 and 35-2, and the content of the processing. It also assigns the timing at which each divided range of image data is read out and the timing at which the processed image data is stored again. Further, the DMA controller 51 sequentially supplies the divided image data to the GPU cards 35-1 and 35-2 at the assigned timings and sequentially stores the processed image data in the main memory 32. The DMA controller 51 then outputs the processed image data stored in the main memory 32, as an output signal, to a display (not shown) via the bus 33 and the IF card 34 for display.

The IF (Interface) card 34 includes a camera IF 71, a display IF 72, and a PCIe (Peripheral Component Interconnect Express) bridge 73. Under the management of the DMA controller 51, the camera IF 71 of the IF card 34 receives the image data supplied as an input signal from the camera (not shown) and supplies it to the main memory 32 via the PCIe bridge 73 and the bus 33. Also under the management of the DMA controller 51, the display IF 72 of the IF card 34 outputs the processed image data supplied from the main memory 32 via the bus 33 and the PCIe bridge 73 to the display (not shown) as an output signal.

The GPU cards 35-1 and 35-2 include PCIe bridges 91-1 and 91-2, processors 92-1 and 92-2, and memories 93-1 and 93-2, respectively. Under the management of the DMA controller 51 of the CPU 31, the GPU card 35 temporarily stores, in the memory 93, the image data supplied from the main memory 32 via the bus 33 and the PCIe bridge 91. The processor 92 then applies predetermined processing while sequentially reading the image data stored in the memory 93, buffers the processing results in the memory 93 as necessary, and outputs them to the CPU 31 via the PCIe bridge 91 and the bus 33. Although FIG. 1 shows an example with two GPU cards 35, two or more GPU cards may be provided.
<Overview of the image processing>
 Next, image processing by the image processing apparatus 11 of FIG. 1 will be described with reference to FIG. 2.

As indicated by the arrows from the upper left of FIG. 2, image data captured by the camera (not shown), for example image data in a Bayer pixel array (leftmost in the figure), is subjected to defect correction processing and RAW NR (Noise Reduction) processing, after which demosaic processing generates an R (red) image, a G (green) image, and a B (blue) image (the RGB images in the figure). The demosaiced R, G, and B images are further subjected to image quality enhancement processing and then enlargement processing to generate the R, G, and B images constituting the output image. The R, G, and B images generated in this way are output as output signals to a display unit such as a display (not shown) and displayed.
<Reducing latency>
 When the image processing described above is executed frame by frame as in the conventional approach, the processing follows the time chart shown in the upper part of FIG. 3. Here it is assumed that only two kinds of processing, A and B, are applied to the image, and that the configuration in the upper part of FIG. 3 has a single GPU card identical to the GPU card 35.
That is, from time t0 to t1, as indicated by "DMA INPUT #0" in the figure, the image data supplied as an input signal is written to and stored in the main memory 32 by the DMA controller 51.

From time t1 to t2, as indicated by "Kernel A #0" in the figure, the DMA controller 51 supplies the image data stored in the main memory 32 to the GPU card 35, and the processor 92 of the GPU card 35 executes process A.

From time t3 to t4, as indicated by "Kernel B #0" in the figure, the DMA controller 51 supplies the image data stored in the main memory 32 to the GPU card 35; the processor 92 of the GPU card 35 executes process B, and the processing result is returned to the main memory 32.

From time t5 to t6, as indicated by "DMA OUTPUT #0" in the figure, the DMA controller 51 reads out and outputs the image data, now processed by A and B, stored in the main memory 32.

In this case, if the entire frame is processed by the single GPU card 35 before being displayed, nothing is displayed as an image until the processing result for one whole frame has been generated; the processing time becomes long and the latency large, so the display may lag.
Therefore, in the image processing apparatus 11 of FIG. 1, as shown in the lower part of FIG. 3, the frame is divided into several ranges in the vertical direction and the processing is executed in small pieces to reduce latency. The lower part of FIG. 3 shows an example in which the frame is divided into three pieces of image data #0 to #2 and each of them is processed.
That is, in the lower part of FIG. 3, from time t21 to t22, as indicated by "DMA INPUT #0" in the figure, the image data #0 supplied as an input signal is written to and stored in the main memory 32 by the DMA controller 51.

From time t31 to t32, as indicated by "Process A #0" in the figure, the DMA controller 51 supplies the image data #0 stored in the main memory 32 to the GPU card 35, and the processor 92 of the GPU card 35 executes process A.

At this time, in parallel with "Process A #0", from time t22 to t23, as indicated by "DMA INPUT #1" in the figure, the DMA controller 51 stores the image data #1 supplied as an input signal in the main memory 32.

From time t33 to t34, as indicated by "Process B #0" in the figure, the DMA controller 51 supplies the image data stored in the main memory 32 to the GPU card 35; the processor 92 of the GPU card 35 executes process B, and the processing result is returned to the main memory 32.

From time t51 to t52, as indicated by "DMA OUTPUT #0" in the figure, the DMA controller 51 outputs the image data #0, processed by A and B, stored in the main memory 32.

In parallel with "DMA OUTPUT #0", from time t35 to t36, as indicated by "Process A #1" in the figure, the DMA controller 51 supplies the image data #1 stored in the main memory 32 to the GPU card 35, and the processor 92 of the GPU card 35 executes process A.

Further, in parallel with "Process A #1", from time t24 to t25, as indicated by "DMA INPUT #2" in the figure, the DMA controller 51 stores the image data #2 supplied as an input signal in the main memory 32.

From time t37 to t38, as indicated by "Process B #1" in the figure, the DMA controller 51 supplies the image data #1 stored in the main memory 32 to the GPU card 35; the processor 92 of the GPU card 35 executes process B, and the processing result is returned to the main memory 32.

From time t53 to t54, as indicated by "DMA OUTPUT #1" in the figure, the DMA controller 51 outputs the image data #1, processed by A and B, stored in the main memory 32.

In parallel with "DMA OUTPUT #1", from time t39 to t40, as indicated by "Process A #2" in the figure, the DMA controller 51 supplies the image data #2 stored in the main memory 32 to the GPU card 35, and the processor 92 of the GPU card 35 executes process A.

From time t41 to t42, as indicated by "Process B #2" in the figure, the DMA controller 51 supplies the image data #2 stored in the main memory 32 to the GPU card 35; the processor 92 of the GPU card 35 executes process B, and the processing result is returned to the main memory 32.

From time t55 to t56, as indicated by "DMA OUTPUT #2" in the figure, the DMA controller 51 outputs the image data #2, processed by A and B, stored in the main memory 32.

Through such processing, the image data #0 to #2 are processed in a time-division manner, and "DMA INPUT", "Process A", "Process B", and "DMA OUTPUT" are executed in parallel as needed, so the latency can be reduced as a whole. Moreover, since the image data #0 to #2 processed by A and B are displayed piece by piece as each finishes, the display also feels faster, achieving low latency.
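The pipelining above can be summarized with a short schedule computation. The following is a minimal sketch in Python, not part of the original disclosure: the stage names mirror the figure, the durations are hypothetical placeholders, and each chunk's stage waits both for its own previous stage and for the same stage of the previous chunk (modeling a single DMA engine and a single GPU).

```python
# Hypothetical stage durations; only the overlap structure matters here.
STAGES = ["DMA INPUT", "Process A", "Process B", "DMA OUTPUT"]
DURATION = {"DMA INPUT": 1.0, "Process A": 2.0, "Process B": 2.0, "DMA OUTPUT": 1.0}

def schedule(num_chunks):
    """Earliest-start schedule: a chunk's stage waits for its previous stage
    and for the same stage of the previous chunk."""
    end = {}  # (chunk, stage index) -> finish time
    for c in range(num_chunks):
        for s, name in enumerate(STAGES):
            ready = end.get((c, s - 1), 0.0)   # previous stage of this chunk
            free = end.get((c - 1, s), 0.0)    # same stage, previous chunk
            start = max(ready, free)
            end[(c, s)] = start + DURATION[name]
            print(f"chunk #{c} {name:10s}: {start:4.1f} -> {end[(c, s)]:4.1f}")
    return end[(num_chunks - 1, len(STAGES) - 1)]

print("pipelined total:", schedule(3))
print("sequential total:", 3 * sum(DURATION.values()))  # one chunk fully at a time
```

With three chunks and these placeholder durations, the pipelined schedule finishes at t = 10 versus t = 18 for processing each chunk's stages strictly in sequence, which is the effect illustrated in the lower part of FIG. 3.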
<Horizontal division>
 As described above, dividing the image in the vertical direction achieves low latency. In addition, since the image processing apparatus 11 of FIG. 1 is provided with a plurality of GPU cards 35, it performs the same processing in parallel. That is, when the image data P1 shown in the left part of FIG. 4 is input, it is divided in the horizontal direction as shown in the upper right part of FIG. 4, and each division is processed by time division in the vertical direction as described with reference to FIG. 3.

In the upper right part of FIG. 4, the region Z1 shown on the left side of the image data P1 is the processing range of "GPU#0", corresponding to the GPU card 35-1, and the region Z2 shown on the right side of the image data P1 is the processing range of "GPU#1", corresponding to the GPU card 35-2. The lower right part of FIG. 4 shows an example of the vertical division scheme of conventional parallel processing: the upper region Z11 is the processing range of "GPU#0", corresponding to the GPU card 35-1, and the lower region Z12 is the processing range of "GPU#1", corresponding to the GPU card 35-2. That is, the lower right part of FIG. 4 shows an example in which the image is divided into two in the vertical direction, one part per GPU card 35.
<Vertical division>
 Further, as shown in FIG. 5, when the regions Z1 and Z2 of the image P1 are each divided vertically into four ranges C1 to C4 from the top, the image processing apparatus 11 of FIG. 1 controls the GPU card 35-1 so that, in the region Z1, the ranges are processed by time division in the order C1 to C4 (from top to bottom). Similarly, the image processing apparatus 11 of FIG. 1 controls the GPU card 35-2 to process the region Z2 by time division in the order C1 to C4.

In this way, the horizontal direction is processed in parallel by a plurality of (two in FIG. 5) GPU cards 35, and the vertical direction is processed by time division on each GPU card 35, which speeds up the image processing and achieves low latency.
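As an illustration of this partitioning, the following minimal sketch, with assumed frame dimensions, computes the pixel bounds of each (GPU, range) tile: the width is split across GPU cards and the height into the ranges C1 to C4. The extra reference-pixel lines discussed below are deliberately ignored here.

```python
def partition(width, height, num_gpus, num_ranges):
    """Return (gpu, range) -> (x0, x1, y0, y1) pixel bounds for the tiles of
    FIG. 5, ignoring the reference-pixel margins treated later."""
    tiles = {}
    for g in range(num_gpus):
        x0, x1 = g * width // num_gpus, (g + 1) * width // num_gpus
        for r in range(num_ranges):
            y0, y1 = r * height // num_ranges, (r + 1) * height // num_ranges
            tiles[(g, r)] = (x0, x1, y0, y1)
    return tiles

# Assumed 3840x2160 frame, two GPU cards, four vertical ranges.
for (g, r), box in sorted(partition(3840, 2160, 2, 4).items()):
    print(f"GPU#{g} range C{r + 1}: x=[{box[0]},{box[1]}) y=[{box[2]},{box[3]})")
```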
<Overhead>
 The processing executed on an image by the processor 92 of the GPU card 35 is generally filter processing. Consider, for example, a case in which a Gaussian filter such as the one shown in FIG. 6 must be applied to each pixel three times. The filter in FIG. 6 is a 3-pixel x 3-pixel Gaussian filter in which a weight of 4/16 is set for the target pixel, 2/16 for the four pixels above, below, to the left, and to the right of it, and 1/16 for the four diagonally adjacent pixels; the weighted sum of these pixels is computed as the output pixel.
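For concreteness, the following is a minimal sketch of one pass of this 3x3 Gaussian filter in Python; the integer kernel divided by 16 reproduces the weights 4/16, 2/16, and 1/16, and only interior pixels are computed, which is precisely why the border reference pixels discussed next are needed.

```python
KERNEL = [[1, 2, 1],
          [2, 4, 2],
          [1, 2, 1]]  # weights, each divided by 16

def gaussian_once(img):
    """One pass of the FIG. 6 filter; border pixels are left untouched
    because they lack a full 3x3 neighborhood."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    acc += KERNEL[dy + 1][dx + 1] * img[y + dy][x + dx]
            out[y][x] = acc / 16
    return out

img = [[(x + y) % 8 for x in range(7)] for y in range(7)]  # toy 7x7 input
print(gaussian_once(img)[3][3])  # center pixel after one pass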
In this case, as shown in the upper left part of FIG. 7, the first filter pass is applied to a 5-pixel x 5-pixel range centered on the target pixel P to be processed. To filter the pixels at the top, bottom, left, and right edges of this 5x5 range, one additional pixel adjacent on the side away from the target pixel is needed for each edge pixel, and for the corner pixels of the 5x5 range one further diagonally adjacent pixel on the side away from the target pixel is also needed. That is, to apply the first filter pass to the 5x5 range indicated by the squares marked "1", a total of 7 pixels x 7 pixels centered on the target pixel P is required.

Next, as shown in the upper center of FIG. 7, the second filter pass is applied to a 3-pixel x 3-pixel range centered on the target pixel P. Here too, filtering the edge pixels of the 3x3 range requires the adjacent pixels beyond them, and the corner pixels additionally require the diagonally adjacent pixels. That is, the second filter pass requires a total of 9 pixels in the 3x3 range indicated by the squares marked "2" in the figure.

Then, as shown in the upper right part of FIG. 7, the 8 surrounding pixels thus processed up to the second pass are used, and the target pixel P receives its third filter pass, as indicated by the mark "3".

As a result, as shown in the lower part of FIG. 7, the target pixel P can be filtered three times by using the pixels in the hatched 7-pixel x 7-pixel range centered on P. That is, when the target pixel is filtered three times, the pixels of the 3x3 region centered on the target pixel become the reference pixels of the third pass. In the second pass, each of the 3x3 pixels serving as third-pass reference pixels in turn requires 9 reference pixels centered on itself, so a 5x5 range becomes the reference pixels. Further, in the first pass, each pixel of the 5x5 range of second-pass reference pixels requires its own neighborhood, with the result that a 7x7 range is required as reference pixels.
Here, the reference pixels other than the target pixel that are required when processing a target pixel, or their number, are referred to as overhead, and the region in which those reference pixels exist is referred to as the overhead region. Accordingly, in the case of FIG. 7, for one target pixel, a region extending an overhead width of Dp = 4 pixels above, below, to the left, and to the right of the target pixel arises as the overhead region, as shown in the lower left part of FIG. 7; that is, 48 overhead pixels arise, excluding the target pixel. Note, however, that a reference pixel that can separately become a target pixel itself is not an overhead pixel; only pixels that are never processing targets but are needed solely as reference pixels are called overhead.

Hereinafter, the overhead width Dp is used to express the amount of overhead arising for a target pixel. As shown in the lower left part of FIG. 7, the overhead width Dp is the number of pixels from the target pixel P to the top, bottom, left, or right edge of the overhead pixels. Accordingly, when the overhead width is Dp = 4, the number of overhead pixels is 48.
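The arithmetic behind these numbers can be checked directly: each pass of a 3x3 filter needs a one-pixel border, so three passes need a three-pixel border around the target pixel.

```python
# Checking the FIG. 7 count: 3 passes of a 3x3 filter need a 3-pixel border,
# i.e. a 7x7 input window; everything in it except the target is overhead.
passes, radius_per_pass = 3, 1
border = passes * radius_per_pass   # 3 pixels on each side
window = 2 * border + 1             # 7
print(window * window - 1)          # 48 overhead pixels, matching the text
```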
A region containing pixels to be processed contains many pixels. As described above, when an image for one frame is divided into two in the horizontal direction and the processors 92-1 and 92-2 of the GPU cards 35-1 and 35-2 each sequentially process, by time division, the ranges divided into four in the vertical direction, overhead such as that shown in FIG. 8 arises.

That is, in FIG. 8, for the image P1, an overhead region OHZ1C2 arises in the region Z1C2 specified by the second range C2 from the top of the left region Z1.

Accordingly, as shown in FIG. 8, when each of the regions Z1 and Z2 is divided into the four vertical ranges C1 to C4, for a total of eight regions, roughly eight times the overhead of the overhead region OHZ1C2 arises.
As for the overhead per pixel, the example of the 3-pixel x 3-pixel filter above produced an overhead of overhead width Dp = 4 (48 pixels) per pixel, but actual processing incurs more overhead than this.

For example, as shown in FIG. 9, if overheads of overhead width Dp = 2, 6, 8, 40, and 8 pixels arise in the defect correction, RAW NR, demosaic, image quality enhancement, and enlargement processing respectively, an overhead of total overhead width Dp = 64 pixels arises. That is, overhead pixels arise over a range of 129 pixels x 129 pixels, excluding the target pixel. Moreover, when the image is divided into two in the horizontal direction and four in the vertical direction, for example, the overhead can increase by about 30% compared with processing without division.
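The cumulative figure can likewise be checked: the per-stage overhead widths simply add up across the pipeline.

```python
# The cumulative overhead of FIG. 9: per-stage overhead widths accumulate.
widths = {"defect correction": 2, "RAW NR": 6, "demosaic": 8,
          "image quality enhancement": 40, "enlargement": 8}
total = sum(widths.values())
print(total)          # 64: total overhead width in pixels
print(2 * total + 1)  # 129: side of the 129x129 reference region in the text
```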
<Method of reducing overhead>
 As the overhead grows in this way, the amount of computation becomes enormous and the processing time increases, so a processor with higher computing performance would be required to achieve real-time operation. The image processing apparatus 11 of FIG. 1 therefore reduces the overhead by the following computation. That is, when the regions Z1 and Z2 of the image P1 are assigned to the GPU cards 35-1 and 35-2 as their processing areas, the filter results of the preceding stage are buffered in the memories 93-1 and 93-2 so that they can be reused in the subsequent filter stage.
That is, when n stages of filter processing are required in total, the processing region of the first filter (filter #1) in the range C1, indicated by the upward-sloping hatching in the upper left part of FIG. 10, is set to a number of lines wider than 1/4 of the total number of vertical lines of one frame, including the lines of all reference pixels required in the subsequent processing. The ranges C2 and C3 are then set so as to each cover 1/4 of the total number of lines, and the last range C4 is set to the remaining lines. FIG. 10 shows the processing ranges applied to the whole image P1 when the ranges C1 to C4 are processed sequentially from the top, for the first filter (filter #1), the second filter (filter #2), and the n-th filter (filter #n), from left to right.

With this arrangement, the processing result of the range C1 is buffered in the memory 93, so the processing of the range C2 incurs no overhead: the region containing the necessary reference pixels, indicated by the downward-sloping hatching, has already been processed within the range C1 and need only be referenced. Also, because the range C1 covers more lines than 1/4 of the total, the range C2 occupies a 1/4-of-all-lines position shifted toward the range C3 relative to its nominal position. As a result, the region containing the reference pixels for the range C3 is buffered as the processing result of the range C2 and does not need to be filtered again, so the generation of overhead is suppressed.

Similarly, the range C3 occupies a 1/4-of-all-lines position shifted toward the range C4 relative to the nominal position of the range C3, so, as indicated by the downward-sloping hatching, the region containing the reference pixels for the range C4 is buffered as the processing result of the range C3 and need not be filtered again, suppressing the generation of overhead.

Also, as indicated by the upward-sloping hatching in the upper center of FIG. 10, the processing region of the second filter (filter #2) in the range C1 is wider than 1/4 of the total number of lines, including the reference pixels for the subsequent stages, but narrower than the number of lines of the first filter (filter #1) indicated by the upward-sloping hatching in the upper left part of FIG. 10. The ranges C2 and C3 are then set so as to cover 1/4 each, and the last range C4 is set to the remainder.

That is, as for the number of lines in the range C1 of the second filter (filter #2), the region containing reference pixels is narrower because fewer filter stages follow it than follow the first filter (filter #1); as shown by the upward-sloping hatching in the upper center of FIG. 10, it is therefore wider than 1/4 of the total number of lines but narrower than the range C1 of the first filter (filter #1).

As a result, the ranges C2 and C3 are likewise set at positions shifted closer to their nominal 1/4-of-all-lines positions, and the range C4 becomes wider than in the first filter (filter #1) by the amount by which the range C1 has shrunk.

Thereafter, as the number of remaining filters decreases, the number of lines in the range C1 approaches 1/4 of the total number of lines, and the positions of the ranges C2 and C3 approach their nominal 1/4 positions. In the final n-th filter (filter #n), there is no longer any need to consider reference pixels for a subsequent filter, so, as shown in the right part of FIG. 10, the ranges C1 to C4 are each placed at exactly 1/4 of the total number of lines.

As described above, by filtering in advance, in the preceding filter stage, the lines of the range containing the reference pixels required by the subsequent filter stage, buffering the processing results, and reusing them in the subsequent filter stage, the generation of overhead can be suppressed.
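The following is a minimal sketch, under assumed reference-line counts, of how the range boundaries of FIG. 10 can be laid out per filter stage: the first range C1 is extended by the lines that later stages will still reference, the middle ranges keep 1/4 of the frame height, and the last range C4 absorbs the remainder. This is an illustration of the idea, not the patent's exact rule.

```python
def range_bounds(height, num_ranges, ref_lines, stage):
    """Return [(y0, y1)] per range for filter `stage` (0-based).
    ref_lines[i] is a hypothetical per-stage reference-line count."""
    lookahead = sum(ref_lines[stage + 1:])  # lines later stages will reference
    base = height // num_ranges
    bounds, y = [], 0
    for r in range(num_ranges):
        if r == 0:
            y1 = base + lookahead            # C1 is widest in early stages
        elif r == num_ranges - 1:
            y1 = height                      # C4 takes what is left
        else:
            y1 = y + base                    # middle ranges keep 1/4 each
        bounds.append((y, y1))
        y = y1
    return bounds

ref_lines = [8, 8, 8]  # hypothetical per-stage reference lines
for k in range(3):
    print(f"filter #{k + 1}:", range_bounds(2160, 4, ref_lines, k))
```

Run for three hypothetical stages, C1 shrinks from 556 to 540 lines while C4 grows correspondingly as the remaining look-ahead decreases, matching the progression described for FIG. 10.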
<Low-latency display processing>
 Next, low-latency display processing by the image processing apparatus 11 of FIG. 1 will be described with reference to the flowchart of FIG. 11.

In step S11, the camera IF 71 of the IF card 34 accepts input of image data captured by the camera (not shown) and supplies it to the CPU 31 via the PCIe bridge 73 and the bus 33. The CPU 31 stores the supplied image data in the main memory 32.

In step S12, based on the image data stored in the main memory 32, the DMA controller 51 calculates the processing amount from the number of horizontal divisions of the image, which corresponds to the number of GPU cards 35, the number of ranges into which each divided region is further divided for time-division processing in the vertical direction, the number of filter stages involved in the processing, and information on the regions where their reference pixels exist.

That is, the processing amount is broadly divided into two kinds, the amount for the vertical processing and the amount for the horizontal processing; the DMA controller 51 calculates each of them and then adds them together.
<Vertical processing amount>
 For the vertical direction, starting from the output buffer size that is finally stored in the main memory 32 and then output by DMA, the amounts are determined sequentially, in the reverse of the processing order, from the number of reference pixels and the processing-unit block of each filter stage, from the first filter (filter #1) through the n-th filter (filter #n).

That is, as shown in FIG. 12, in ordinary filter processing performed sequentially in the vertical direction, the result produced by the first filter (filter #1) is processed by the second filter (filter #2), then by the third filter (filter #3), and so on; finally the n-th filter is applied and the result is DMA-transferred and output (the output DMA at the upper right of the figure).

Accordingly, the vertical processing amount is calculated backward from the number of lines of the output DMA, using the number of reference pixels and the processing-unit block of each filter stage. That is, for example, if the number of lines corresponding to the output buffer size is PY(DMA), the number of lines produced by the n-th filter (filter #n) is determined in advance by the number of pixels constituting the image, for example PY(n) = PY(DMA).
In this case, the number of lines PY(n-1) of the (n-1)-th filter (filter #(n-1)) is obtained by the following equation (1):

  PY(n-1) = PY(n) + BY(n-1) × z   ... (1)

Here, PY(n-1) is the number of lines of the (n-1)-th filter (filter #(n-1)), PY(n) is the number of lines of the n-th filter (filter #n), and BY(n-1) is the number of lines representing the size of the processing-unit block of the (n-1)-th filter (filter #(n-1)). Also, z is the smallest value such that BY(n-1) × z is greater than the number of reference pixels.
That is, as shown in the lower right part of FIG. 12, consider the case where, relative to the number of lines output by the n-th filter (filter #n) (the number of processing lines), the number of lines constituting the reference pixels of the (n-1)-th filter (filter #(n-1)) is the cross-hatched range.

Here, the number of processing lines of the n-th filter (filter #n) corresponds to four processing-unit blocks, each consisting of a predetermined number of lines, indicated by the downward-sloping hatching at the lower right of FIG. 12. The reference pixels of the (n-1)-th filter (filter #(n-1)), shown as the cross-hatched range at the lower right of FIG. 12, span two blocks plus several more lines short of a full block.

Each filter stage, however, can be executed only in units of processing-unit blocks of a predetermined number of lines. Therefore, in a case such as the lower right of FIG. 12, a portion of less than one block is also treated as one full block. Thus, at the lower right of FIG. 12, the z of equation (1) is obtained as 3.

For this reason, in the case of the lower right of FIG. 12, the number of processing lines of the (n-1)-th filter (filter #(n-1)) effectively amounts to seven blocks' worth of lines.

Thereafter, the numbers of processing-unit blocks are calculated back to the first filter (filter #1), the processing amounts corresponding to the block counts are calculated in turn, and their grand total is calculated as the vertical processing amount.

Here too, as described with reference to FIG. 10, the number of lines required in each filter stage is set so as to include the reference pixels needed by the subsequent stages of each filter, so that overhead is reduced.
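A minimal sketch of this backward calculation, with hypothetical block sizes and reference-line counts, is shown below; the rounding up to whole processing-unit blocks corresponds to the z of equation (1).

```python
import math

def lines_per_stage(py_dma, block_lines, ref_lines):
    """block_lines[i], ref_lines[i]: unit-block height and reference-line count
    of filter #(i+1); returns PY(1)..PY(n) with PY(n) = py_dma."""
    n = len(block_lines)
    py = [0] * n
    py[n - 1] = py_dma
    for i in range(n - 2, -1, -1):                    # from filter #(n-1) to #1
        z = math.ceil(ref_lines[i] / block_lines[i])  # whole blocks covering refs
        py[i] = py[i + 1] + block_lines[i] * z        # equation (1)
    return py

# Hypothetical values echoing FIG. 12: 8-line unit blocks, varying references.
print(lines_per_stage(py_dma=540, block_lines=[8, 8, 8], ref_lines=[20, 12, 0]))
# -> [580, 556, 540]: each earlier stage produces extra whole blocks of lines
```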
<Horizontal processing amount>
 The horizontal processing amount is likewise determined sequentially, in the reverse of the processing order, from the number of reference pixels and the processing-unit block of each filter stage from the first filter (filter #1) through the n-th filter (filter #n), starting from the output buffer size that is finally stored in the main memory 32 and then output by DMA.

That is, as shown in FIG. 13, in ordinary filter processing performed sequentially in the horizontal direction, the result produced by the first filter (filter #1) is processed by the second filter (filter #2), then by the third filter (filter #3), and so on; finally the n-th filter (filter #n) is applied and the result is DMA-transferred and output (the output DMA in the figure).

Accordingly, the horizontal processing amount is calculated backward, from the width defined as a multiple of the horizontal processing-unit block of the output DMA, using the number of reference pixels and the processing-unit block of each filter stage. In the horizontal direction, however, the overhead-reduction processing used in the vertical direction is not performed; the processing amount simply corresponds to the horizontal width of each filter stage plus a width equal to the number of processing-unit blocks covering the number of reference pixels of that stage.
That is, for example, the horizontal width Xk required to calculate the processing amount of the k-th filter (filter #k) is expressed by the following equation (2):
  Xk = w + zk × xk   ... (2)
Here, Xk is the width required to calculate the processing amount of the k-th filter (filter #k), w is the horizontal width set as a multiple of the processing-unit block of the n-th filter (filter #n), and xk is the width of the processing-unit block.
Also, where ri is the number of reference pixels of the i-th filter (filter #i), zk is the value such that zk × xk is greater than the cumulative sum of the reference pixel counts of the filter stages so far (r1 + r2 + ... + r(k-1) + rk) and zk × xk is smallest.
That is, suppose, as indicated by the bottom row of cross-hatched squares at the lower right of FIG. 13, that relative to the width of the sixth filter (filter #6), the final stage (filter #n with n = 6) corresponding to the output buffer size, the number of reference pixels of the fifth filter (filter #5) is 2.

Then, as shown by the cross-hatched squares in the second row from the bottom at the lower right of FIG. 13, if the number of reference pixels of the fourth filter (filter #4) is 1, the 2 reference pixels of the fifth filter (filter #5), indicated by the downward-sloping hatched squares, are added to give 3.

Similarly, as shown by the cross-hatched squares in the third row from the bottom at the lower right of FIG. 13, if the number of reference pixels of the third filter (filter #3) is 3, the 3 reference pixels accumulated up to the fourth filter (filter #4), indicated by the downward-sloping hatched squares, are added to give 6.

Further, as shown in the fourth row from the bottom at the lower right of FIG. 13, if the number of reference pixels of the second filter (filter #2) is 1, the 6 reference pixels accumulated up to the third filter (filter #3), indicated by the downward-sloping hatched squares, are added to give 7.

Then, as shown in the top row at the lower right of FIG. 13, if the number of reference pixels of the first filter (filter #1) is 1, the 7 reference pixels accumulated up to the second filter (filter #2), indicated by the downward-sloping hatched squares, are added to give 8.

In this case, for example, when the processing-unit block consists of one pixel, as shown in the third row from the bottom at the lower right of FIG. 13, zk (= z3) of equation (2) above for the third filter (filter #3) is 2.

By the method described above, the processing amounts corresponding to the summed widths, each a multiple of the processing-unit block to be processed by each horizontal filter stage, are obtained in turn.
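A minimal sketch of this backward width calculation is shown below; the reference-pixel counts follow the FIG. 13 example (1, 1, 3, 1, 2 for filters #1 to #5), while the output width and one-pixel block width are assumptions for illustration.

```python
import math

def widths_per_stage(w, block_w, refs):
    """refs[i]: reference pixels of filter #(i+1); returns X1..Xn
    for n = len(refs) + 1 filter stages."""
    xs = [w]                                  # final stage outputs exactly w
    cumulative = 0
    for r in reversed(refs):                  # walk back from filter #(n-1) to #1
        cumulative += r
        zk = math.ceil(cumulative / block_w)  # whole unit blocks covering the refs
        xs.append(w + zk * block_w)           # equation (2)
    return list(reversed(xs))

print(widths_per_stage(w=16, block_w=1, refs=[1, 1, 3, 1, 2]))
# -> [24, 23, 22, 19, 18, 16]: widths grow by 8, 7, 6, 3, 2 pixels going back
#    from filter #6 to filter #1, matching the cumulative counts worked out above
```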
The DMA controller 51 calculates the vertical processing amount and the horizontal processing amount described above according to the numbers of horizontal and vertical divisions of the image, and adds the two together to calculate the processing amount required for the processing.

In step S13, the DMA controller 51 calculates the processing times of the various filter processes according to the processing capabilities of the processors 92 mounted on the GPU cards 35 and the processing amount obtained by the calculation described above, and further calculates, from the obtained processing times, various timings such as the timing of reading out or transferring the image data. Through this processing, a timing chart is constructed that indicates, for the subsequent processing, which image data is transferred to which GPU card 35 at which timing.
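As a rough illustration of step S13, the following sketch, with hypothetical throughput and chunk sizes, converts per-chunk processing amounts into the start and end times that would populate such a timing chart; the real apparatus schedules transfers and kernel launches per GPU card and per filter stage.

```python
def build_timing_chart(chunk_pixels, pixels_per_ms):
    """Convert per-chunk processing amounts (pixels) into (label, start, end)
    entries; pixels_per_ms is an assumed processor throughput."""
    t, chart = 0.0, []
    for i, px in enumerate(chunk_pixels):
        dt = px / pixels_per_ms                   # processing time of this chunk
        chart.append((f"chunk #{i}", t, t + dt))
        t += dt
    return chart

chunks = [556 * 960, 540 * 960, 540 * 960, 524 * 960]  # lines x half-frame width
for name, start, end in build_timing_chart(chunks, pixels_per_ms=200_000.0):
    print(f"{name}: {start:6.2f} ms -> {end:6.2f} ms")
```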
 ステップS14において、DMAコントローラ51は、所定のタイミングから、このタイミングチャートに基づいて処理を開始し、今現在、次の処理のタイミングになったか否かを判定し、次の処理のタイミングになるまで、同様の処理を繰り返す。 In step S14, the DMA controller 51 starts processing from a predetermined timing based on this timing chart, determines whether or not the current processing timing is now, and until the next processing timing is reached. Repeat the same process.
 ステップS14において、例えば、次の処理を開始するタイミングであると判定された場合、処理は、ステップS15に進む。 In step S14, for example, if it is determined that it is time to start the next process, the process proceeds to step S15.
 ステップS15において、DMAコントローラ51は、タイミングチャートに基づいて、次の処理として設定されている画像データをメインメモリ32より読み出して、転送先として設定されているGPUカード35に転送させ、同時に、GPUカード35のプロセッサ92に処理を実行させる。または、DMAコントローラ51は、GPUカード35のプロセッサ92による処理が実行されて、処理結果が送信されてくると、これを受け取って、メインメモリ32に格納する。 In step S15, the DMA controller 51 reads out the image data set as the next processing from the main memory 32 based on the timing chart, and transfers it to the GPU card 35 set as the transfer destination. The processor 92 of the card 35 is caused to execute processing. Alternatively, when the processing by the processor 92 of the GPU card 35 is executed and the processing result is transmitted, the DMA controller 51 receives this and stores it in the main memory 32.
 ステップS16において、DMAコントローラ51は、タイミングチャートを参照して、次の処理が存在するか否かを判定し、例えば、次の処理がある場合、処理は、ステップS14に戻り、以降の処理が繰り返される。 In step S16, the DMA controller 51 refers to the timing chart to determine whether or not the next process exists. For example, if there is a next process, the process returns to step S14, and the subsequent processes are performed. Repeated.
 すなわち、ステップS16において、次の処理がないと判定されるまで、ステップS14乃至S16の処理が繰り返される。そして、ステップS14乃至S16の処理が繰り返されて、タイミングチャートに設定された全ての処理が完了すると、ステップS16において、次の処理がないとみなされて、処理は、ステップS17に進む。 That is, the processes in steps S14 to S16 are repeated until it is determined in step S16 that there is no next process. Then, when the processes of steps S14 to S16 are repeated and all the processes set in the timing chart are completed, it is considered that there is no next process in step S16, and the process proceeds to step S17.
 In step S17, the DMA controller 51 causes the image data stored in the main memory 32, on which processing such as image quality enhancement has been performed, to be output from the display IF 72 to a display (not shown) via the bus 33 and the PCIe bridge 73 of the IF card 34.
 In step S18, the DMA controller 51 determines whether the next image has been supplied. If the next image exists, the processing returns to step S11, and the subsequent steps are repeated.
 If it is determined in step S18 that no next image is supplied, the processing ends.
 That is, as described above, the image is divided in the horizontal direction among the processors 92 of the plurality of GPU cards 35, and the processing is shared among the processors 92. In addition, each processor 92 divides its share into a predetermined number of ranges in the vertical direction and processes the divided ranges by time division. Furthermore, in this time-division processing, the range containing the reference pixels needed by a subsequent filter process is processed in the preceding filter process and buffered in the memory 93.
 This makes it possible to execute the processing in parallel on the processors 92 of the plurality of GPU cards 35, enabling, for example, the parallel processing shown in the lower part of FIG. 3. In addition, when performing time-division processing in the vertical direction, the processing efficiency of each processor 92 is improved by eliminating the overhead of repeatedly recomputing the reference pixels.
 As a result, the speed at which the image data is enhanced in quality and displayed is improved, and low latency can be realized.
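 A sketch of the vertical range layout just summarized follows: contiguous ranges in which the first range is enlarged by the reference lines that later ranges need, so that those lines are filtered once and buffered in the memory 93 instead of being recomputed. The helper and its arguments are hypothetical; with height=2160, n_ranges=4, and extra_top=64 it reproduces the 604/540/540/476 split of the defect correction stage described below for FIG. 14.

```python
# Hypothetical layout of the vertical time-division ranges.
def vertical_ranges(height, n_ranges, extra_top):
    base = height // n_ranges
    bounds = [0, base + extra_top]               # first range absorbs the margin
    for _ in range(n_ranges - 2):
        bounds.append(bounds[-1] + base)         # middle ranges keep the base size
    bounds.append(height)                        # last range shrinks to compensate
    return list(zip(bounds[:-1], bounds[1:]))    # (top, bottom) line pairs
```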
 <Making the Processing Time Uniform>
 With the above processing, the various processing times may vary as a trade-off for the reduction of overhead in the vertical filter processing.
 That is, the left part of FIG. 14 shows an example of the numbers of lines set for the ranges C1 to C4 when the reference pixels for a subsequent filter process are buffered during the preceding filter process so as to reduce overhead in the various processes, namely defect correction, RAWNR, demosaicing, image quality enhancement, enlargement, and output DMA.
 In the left part of FIG. 14, the numbers of lines of the ranges C1 to C4 are 604, 540, 540, and 476 for the defect correction process; 596, 540, 540, and 484 for the RAWNR process; and 588, 540, 540, and 492 for the demosaic process. Likewise, they are 548, 540, 540, and 532 for the image quality enhancement process; 540, 540, 540, and 540 for the enlargement process; and 540, 540, 540, and 540 for the output DMA process.
 In this case, the processing times of the ranges C1 to C4 are as shown in the right part of FIG. 14, and the maximum difference Δ in the total processing time among the ranges C1 to C4, namely the difference between the processing times of the ranges C1 and C4, amounts to around 5% of the processing time of the range C1. This results from the variation in the number of processed lines introduced to reduce the overhead in the vertical filter processing, which in turn makes the various processing times vary. In the right part of FIG. 14, the total processing time of each of the ranges C1 to C4 and its breakdown are shown from the left.
 As a countermeasure against this non-uniformity of the processing times, it is conceivable, for example, to adjust the numbers of lines finally output for the ranges C1 to C4 so as to level the processing times.
 That is, for example, as shown in the lower left part of FIG. 15, the numbers of lines in the output DMA processing are made non-uniform so as to cancel the non-uniformity of the processing times, such as 520 lines for the range C1 and 560 lines for the range C4.
 In this way, as shown in the lower right part of FIG. 15, the processing time difference Δ between the ranges C1 and C4 is reduced to almost zero, and the processing times can be leveled and made uniform as a whole. The upper left and upper right parts of FIG. 15 are identical to the left and right parts of FIG. 14, respectively.
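 The rebalancing idea of FIG. 15 can be sketched as follows: move output-DMA lines from the slowest range to the fastest one until the per-range totals level out. The per-line cost model and the greedy loop are assumptions for illustration; the patent gives only the resulting counts (520 lines for C1 and 560 lines for C4).

```python
# Hypothetical leveling of output-DMA line counts across the ranges.
def balance_output_lines(lines, cost_per_line):
    # lines: output line count per range; cost_per_line: cost of one line.
    lines = list(lines)
    for _ in range(10_000):                      # guard against oscillation
        totals = [n * c for n, c in zip(lines, cost_per_line)]
        hi = totals.index(max(totals))
        lo = totals.index(min(totals))
        if totals[hi] - totals[lo] <= cost_per_line[hi]:
            break                                # as even as one line allows
        lines[hi] -= 1                           # e.g. C1: 540 -> 520
        lines[lo] += 1                           # e.g. C4: 540 -> 560
    return lines
```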
 Alternatively, among the ranges C1 to C4, a range whose processing finishes early may be assigned processing whose speed does not need to be adjusted in real time. For example, as shown in FIG. 16, detection processing or the like may be assigned to the time slots indicated by the uppermost black areas of the ranges C2 to C4 so that the processing times become uniform overall.
 With the above processing, the image is divided in the horizontal direction and assigned to a plurality of processors, each horizontally divided area is processed by time division in the vertical direction, and, among the ranges divided in the vertical direction, the first range is set so as to include the reference pixels required for the subsequent processing. In the processing of the first range, the filter processing, including the processing of the reference pixels, is performed in advance and the results are buffered; the subsequent filter processes are then executed by referring to the buffered results. This reduces the latency of displaying the captured image and makes it possible to display the captured image at high speed, at a timing closer to the real time at which it was captured.
 For this reason, the image processing apparatus 11 of FIG. 1 can be applied to an image processing apparatus that processes an image of a patient's surgical site captured by, for example, an endoscope used in endoscopic surgery or a microscope used in neurosurgery or the like as an imaging apparatus, and further to a surgical system including such an endoscope or microscope as the imaging apparatus. In addition, when using the processors 92 of the GPU cards 35, less consideration needs to be given to time lags and the like in displaying an image, so that programmability can be improved. Furthermore, low latency can also be achieved when displaying an image received via a broadcast wave or the like, so that the image can be displayed with a suppressed time lag.
 Furthermore, in processing an image, the DMA controller 51 calculates the processing amount in advance according to the number of reference pixels for the filters used in the processing and the processing unit blocks, optimizes the read and write timings of the image data, and only then executes the processing. Low latency can therefore be achieved in an optimal state regardless of the processing content.
 Incidentally, the series of processes described above can be executed by hardware, but can also be executed by software. When the series of processes is executed by software, a program constituting the software is installed from a recording medium into a computer incorporated in dedicated hardware, or into, for example, a general-purpose personal computer capable of executing various functions by installing various programs.
 FIG. 17 shows a configuration example of a general-purpose personal computer. This personal computer incorporates a CPU (Central Processing Unit) 1001. An input/output interface 1005 is connected to the CPU 1001 via a bus 1004. A ROM (Read Only Memory) 1002 and a RAM (Random Access Memory) 1003 are connected to the bus 1004.
 Connected to the input/output interface 1005 are an input unit 1006 consisting of input devices such as a keyboard and a mouse with which the user inputs operation commands, an output unit 1007 that outputs processing operation screens and images of processing results to a display device, a storage unit 1008 consisting of a hard disk drive or the like that stores programs and various data, and a communication unit 1009 consisting of a LAN (Local Area Network) adapter or the like that executes communication processing via a network typified by the Internet. Also connected is a drive 1010 that reads and writes data to and from a removable medium 1011 such as a magnetic disk (including a flexible disk), an optical disc (including a CD-ROM (Compact Disc-Read Only Memory) and a DVD (Digital Versatile Disc)), a magneto-optical disc (including an MD (Mini Disc)), or a semiconductor memory.
 The CPU 1001 executes various processes according to a program stored in the ROM 1002, or a program read from a removable medium 1011 such as a magnetic disk, an optical disc, a magneto-optical disc, or a semiconductor memory, installed in the storage unit 1008, and loaded from the storage unit 1008 into the RAM 1003. The RAM 1003 also stores, as appropriate, data necessary for the CPU 1001 to execute the various processes.
 In the computer configured as described above, the series of processes described above is performed by the CPU 1001 loading, for example, a program stored in the storage unit 1008 into the RAM 1003 via the input/output interface 1005 and the bus 1004 and executing it.
 The program executed by the computer (CPU 1001) can be provided by being recorded on the removable medium 1011 as a packaged medium or the like. The program can also be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
 In the computer, the program can be installed in the storage unit 1008 via the input/output interface 1005 by mounting the removable medium 1011 in the drive 1010. The program can also be received by the communication unit 1009 via a wired or wireless transmission medium and installed in the storage unit 1008. Alternatively, the program can be installed in advance in the ROM 1002 or the storage unit 1008.
 Note that the program executed by the computer may be a program whose processing is performed chronologically in the order described in this specification, or a program whose processing is performed in parallel or at necessary timings, such as when a call is made.
 In this specification, a system means a set of a plurality of components (apparatuses, modules (parts), and the like), regardless of whether all the components are in the same housing. Therefore, a plurality of apparatuses housed in separate housings and connected via a network, and a single apparatus in which a plurality of modules are housed in one housing, are both systems.
 Note that embodiments of the present technology are not limited to the embodiments described above, and various modifications are possible without departing from the gist of the present technology.
 For example, the present technology can adopt a cloud computing configuration in which one function is shared and jointly processed by a plurality of apparatuses via a network.
 In addition, each step described in the above flowcharts can be executed by one apparatus or shared and executed by a plurality of apparatuses.
 Furthermore, when a plurality of processes are included in one step, the plurality of processes included in that one step can be executed by one apparatus or shared and executed by a plurality of apparatuses.
 Note that the present technology can also adopt the following configurations.
(1) An image processing apparatus including a plurality of arithmetic processing units that process, by time division, an image of an imaged surgical site of a patient for each of ranges into which the image is divided in a vertical direction, in which the arithmetic processing units process the image, divided in a horizontal direction by the number of the arithmetic processing units, by time division in the vertical direction.
(2) The image processing apparatus according to (1), in which the plurality of arithmetic processing units are constituted by a plurality of GPUs (Graphical Processing Units), and the arithmetic processing units process the image divided in the horizontal direction by the number of the GPUs.
(3) The image processing apparatus according to (1) or (2), in which the processing applied to the image is processing of applying n stages of filters.
(4) The image processing apparatus according to (3), in which each of the n stages of filters processes the ranges into which the image is divided in the vertical direction by time division, sequentially from the uppermost range in the vertical direction downward.
(5) The image processing apparatus according to any one of (1) to (4), further including a timing control unit that controls the computation timing of the arithmetic processing units based on the processing amount for the image, calculated from the number of horizontal divisions and the number of vertical divisions of the image, and the processing speed of the arithmetic processing units.
(6) The image processing apparatus according to any one of (1) to (5), in which, among the ranges into which the image is time-divided in the vertical direction, the processing range of a first period includes reference pixels required for processing in a second period after the first period.
(7) The image processing apparatus according to (6), in which the arithmetic processing units include a memory that buffers processing results, and in the processing of the second period, arithmetic processing is executed by using, from the processing results of the first period buffered in the memory, the processing results corresponding to the reference pixels.
(8) The image processing apparatus according to (3), in which the arithmetic processing units include a memory that buffers processing results, among the ranges into which the image is divided in the vertical direction, the uppermost processing range in the vertical direction for each stage of the filters is set to a range including the number of lines of reference pixels required for the filter processing of the second and subsequent processing ranges in the vertical direction, and, in executing the arithmetic processing for the filter processing, the processing that uses the reference pixels is executed by using, from the processing results of the filter processing up to the preceding stage buffered in the memory, the processing results corresponding to the reference pixels.
(9) The image processing apparatus according to any one of (1) to (8), in which the arithmetic processing units apply at least enlargement processing to the image of the imaged surgical site of the patient.
(10) The image processing apparatus according to any one of (1) to (9), in which the image of the imaged surgical site of the patient is an image captured by an endoscope.
(11) The image processing apparatus according to any one of (1) to (9), in which the image of the imaged surgical site of the patient is an image captured by a microscope.
(12) An image processing method of an image processing apparatus including a plurality of arithmetic processing units that process, by time division, an image of an imaged surgical site of a patient for each of ranges into which the image is divided in a vertical direction, in which the arithmetic processing units process the image, divided in a horizontal direction by the number of the arithmetic processing units, by time division in the vertical direction.
(13) The image processing method according to (12), in which the image is an image captured by an endoscope.
(14) The image processing method according to (12), in which the image is an image captured by a microscope.
(15) A surgical operation system including: an imaging apparatus that images a surgical site of a patient; and an image processing apparatus including a plurality of arithmetic processing units that process, by time division, an image captured by the imaging apparatus for each of ranges into which the image is divided in a vertical direction, in which the arithmetic processing units process the image, divided in a horizontal direction by the number of the arithmetic processing units, by time division in the vertical direction.
 11 Information processing unit, 31 CPU, 32 Main memory, 33 Bus, 34 IF card, 35, 35-1, 35-2 GPU card, 51 DMA controller, 71 Camera IF, 72 Display IF, 73, 91, 91-1, 91-2 PCIe bridge, 92, 92-1, 92-2 Processor, 93, 93-1, 93-2 Memory

Claims (15)

  1.  An image processing apparatus comprising a plurality of arithmetic processing units that process, by time division, an image of an imaged surgical site of a patient for each of ranges into which the image is divided in a vertical direction,
     wherein the arithmetic processing units process the image, divided in a horizontal direction by the number of the arithmetic processing units, by time division in the vertical direction.
  2.  The image processing apparatus according to claim 1, wherein the plurality of arithmetic processing units are constituted by a plurality of GPUs (Graphical Processing Units), and the arithmetic processing units process the image divided in the horizontal direction by the number of the GPUs.
  3.  The image processing apparatus according to claim 1, wherein the processing applied to the image is processing of applying n stages of filters.
  4.  The image processing apparatus according to claim 3, wherein each of the n stages of filters processes the ranges into which the image is divided in the vertical direction by time division, sequentially from the uppermost range in the vertical direction downward.
  5.  The image processing apparatus according to claim 1, further comprising a timing control unit that controls the computation timing of the arithmetic processing units based on the processing amount for the image, calculated from the number of horizontal divisions and the number of vertical divisions of the image, and the processing speed of the arithmetic processing units.
  6.  The image processing apparatus according to claim 1, wherein, among the ranges into which the image is time-divided in the vertical direction, the processing range of a first period includes reference pixels required for processing in a second period after the first period.
  7.  The image processing apparatus according to claim 6, wherein the arithmetic processing units include a memory that buffers processing results, and in the processing of the second period, arithmetic processing is executed by using, from the processing results of the first period buffered in the memory, the processing results corresponding to the reference pixels.
  8.  The image processing apparatus according to claim 3, wherein the arithmetic processing units include a memory that buffers processing results, among the ranges into which the image is divided in the vertical direction, the uppermost processing range in the vertical direction for each stage of the filters is set to a range including the number of lines of reference pixels required for the filter processing of the second and subsequent processing ranges in the vertical direction, and, in executing the arithmetic processing for the filter processing, the processing that uses the reference pixels is executed by using, from the processing results of the filter processing up to the preceding stage buffered in the memory, the processing results corresponding to the reference pixels.
  9.  The image processing apparatus according to claim 1, wherein the arithmetic processing units apply at least enlargement processing to the image of the imaged surgical site of the patient.
  10.  The image processing apparatus according to claim 1, wherein the image of the imaged surgical site of the patient is an image captured by an endoscope.
  11.  The image processing apparatus according to claim 1, wherein the image of the imaged surgical site of the patient is an image captured by a microscope.
  12.  An image processing method of an image processing apparatus including a plurality of arithmetic processing units that process, by time division, an image of an imaged surgical site of a patient for each of ranges into which the image is divided in a vertical direction,
     wherein the arithmetic processing units process the image, divided in a horizontal direction by the number of the arithmetic processing units, by time division in the vertical direction.
  13.  The image processing method according to claim 12, wherein the image is an image captured by an endoscope.
  14.  The image processing method according to claim 12, wherein the image is an image captured by a microscope.
  15.  A surgical operation system comprising:
      an imaging apparatus that images a surgical site of a patient; and
      an image processing apparatus including a plurality of arithmetic processing units that process, by time division, an image captured by the imaging apparatus for each of ranges into which the image is divided in a vertical direction,
     wherein the arithmetic processing units process the image, divided in a horizontal direction by the number of the arithmetic processing units, by time division in the vertical direction.
PCT/JP2015/061311 2014-04-24 2015-04-13 Image processing apparatus and method and surgical operation system WO2015163171A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
US15/304,559 US10440241B2 (en) 2014-04-24 2015-04-13 Image processing apparatus, image processing method, and surgical system
EP15782241.2A EP3136719A4 (en) 2014-04-24 2015-04-13 Image processing apparatus and method and surgical operation system
JP2016514864A JP6737176B2 (en) 2014-04-24 2015-04-13 Image processing apparatus and method, and surgical system
CN201580020303.2A CN106233719B (en) 2014-04-24 2015-04-13 Image processing apparatus and method, and surgical system
US16/555,236 US11245816B2 (en) 2014-04-24 2019-08-29 Image processing apparatus, image processing method, and surgical system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2014090566 2014-04-24
JP2014-090566 2014-04-24

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US15/304,559 A-371-Of-International US10440241B2 (en) 2014-04-24 2015-04-13 Image processing apparatus, image processing method, and surgical system
US16/555,236 Continuation US11245816B2 (en) 2014-04-24 2019-08-29 Image processing apparatus, image processing method, and surgical system

Publications (1)

Publication Number Publication Date
WO2015163171A1 true WO2015163171A1 (en) 2015-10-29

Family

ID=54332338

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/061311 WO2015163171A1 (en) 2014-04-24 2015-04-13 Image processing apparatus and method and surgical operation system

Country Status (5)

Country Link
US (2) US10440241B2 (en)
EP (1) EP3136719A4 (en)
JP (1) JP6737176B2 (en)
CN (1) CN106233719B (en)
WO (1) WO2015163171A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018079249A (en) * 2016-11-18 2018-05-24 ソニー・オリンパスメディカルソリューションズ株式会社 Medical signal processing apparatus and medical observation system
WO2018203473A1 (en) 2017-05-01 2018-11-08 Sony Corporation Medical image processing apparatus, medical image processing method and endoscope system
WO2019003911A1 (en) 2017-06-27 2019-01-03 Sony Corporation Medical image processing apparatus, medical image processing method, and computing device
JP2019080190A (en) * 2017-10-25 2019-05-23 日本電信電話株式会社 Communication device
US10868950B2 (en) 2018-12-12 2020-12-15 Karl Storz Imaging, Inc. Systems and methods for operating video medical scopes using a virtual camera control unit
WO2022201801A1 (en) * 2021-03-25 2022-09-29 ソニーグループ株式会社 Medical image processing system, medical image processing method, and program

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107146193A (en) * 2017-04-28 2017-09-08 南京觅踪电子科技有限公司 A kind of GPU parallel calculating methods based on double video cards applied to image procossing
US10812769B2 (en) 2017-08-21 2020-10-20 International Business Machines Corporation Visualizing focus objects from video data on electronic maps

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05324583A (en) * 1992-05-26 1993-12-07 Dainippon Screen Mfg Co Ltd Image data processor
JP2000312327A (en) * 1999-04-28 2000-11-07 Olympus Optical Co Ltd Image processor
JP2010263475A (en) * 2009-05-08 2010-11-18 Olympus Imaging Corp Image processing apparatus and imaging apparatus
JP2012098883A (en) * 2010-11-01 2012-05-24 Olympus Corp Data processor and image processor
JP2013182504A (en) * 2012-03-02 2013-09-12 Canon Inc Image processing system and control method

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0240688A (en) 1988-07-29 1990-02-09 Nec Corp System and device for real-time processing of moving image
JPH02272533A (en) * 1989-04-14 1990-11-07 Fuji Photo Film Co Ltd Method for recognizing divided pattern of radiograph
US7075541B2 (en) * 2003-08-18 2006-07-11 Nvidia Corporation Adaptive load balancing in a multi-processor graphics processing system
US7616207B1 (en) 2005-04-25 2009-11-10 Nvidia Corporation Graphics processing system including at least three bus devices
US8369632B2 (en) 2009-04-08 2013-02-05 Olympus Corporation Image processing apparatus and imaging apparatus
JP2010271365A (en) * 2009-05-19 2010-12-02 Sony Corp Display controller and method for controlling display
WO2012088320A2 (en) * 2010-12-22 2012-06-28 The Johns Hopkins University Real-time, three-dimensional optical coherence tomography system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05324583A (en) * 1992-05-26 1993-12-07 Dainippon Screen Mfg Co Ltd Image data processor
JP2000312327A (en) * 1999-04-28 2000-11-07 Olympus Optical Co Ltd Image processor
JP2010263475A (en) * 2009-05-08 2010-11-18 Olympus Imaging Corp Image processing apparatus and imaging apparatus
JP2012098883A (en) * 2010-11-01 2012-05-24 Olympus Corp Data processor and image processor
JP2013182504A (en) * 2012-03-02 2013-09-12 Canon Inc Image processing system and control method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3136719A4 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018079249A (en) * 2016-11-18 2018-05-24 ソニー・オリンパスメディカルソリューションズ株式会社 Medical signal processing apparatus and medical observation system
US11607111B2 (en) 2016-11-18 2023-03-21 Sony Olympus Medical Solutions Inc. Medical signal processing apparatus and medical observation system
WO2018203473A1 (en) 2017-05-01 2018-11-08 Sony Corporation Medical image processing apparatus, medical image processing method and endoscope system
CN110573054A (en) * 2017-05-01 2019-12-13 索尼公司 Medical image processing apparatus, medical image processing method, and endoscope system
CN110573054B (en) * 2017-05-01 2022-06-10 索尼公司 Medical image processing apparatus, medical image processing method, and endoscope system
WO2019003911A1 (en) 2017-06-27 2019-01-03 Sony Corporation Medical image processing apparatus, medical image processing method, and computing device
JP2019080190A (en) * 2017-10-25 2019-05-23 日本電信電話株式会社 Communication device
US10868950B2 (en) 2018-12-12 2020-12-15 Karl Storz Imaging, Inc. Systems and methods for operating video medical scopes using a virtual camera control unit
US11394864B2 (en) 2018-12-12 2022-07-19 Karl Storz Imaging, Inc. Systems and methods for operating video medical scopes using a virtual camera control unit
WO2022201801A1 (en) * 2021-03-25 2022-09-29 ソニーグループ株式会社 Medical image processing system, medical image processing method, and program

Also Published As

Publication number Publication date
JPWO2015163171A1 (en) 2017-04-13
US20190387135A1 (en) 2019-12-19
US10440241B2 (en) 2019-10-08
JP6737176B2 (en) 2020-08-05
EP3136719A4 (en) 2017-09-13
US20170046847A1 (en) 2017-02-16
EP3136719A1 (en) 2017-03-01
US11245816B2 (en) 2022-02-08
CN106233719B (en) 2020-03-31
CN106233719A (en) 2016-12-14

Similar Documents

Publication Publication Date Title
WO2015163171A1 (en) Image processing apparatus and method and surgical operation system
US11227566B2 (en) Method for reducing brightness of images, a data-processing apparatus, and a display apparatus
US20180365796A1 (en) Image processing device
JP6918150B2 (en) Display device and its image processing method
US9600747B2 (en) Image forming apparatus and control method that execute a plurality of rendering processing units in parallel
US11150858B2 (en) Electronic devices sharing image quality information and control method thereof
US9070201B2 (en) Image processing apparatus
US10178359B2 (en) Macropixel processing system, method and article
US9244942B1 (en) Method to transfer image data between arbitrarily overlapping areas of memory
KR20200080926A (en) Display apparatus and image processing method thereof
US20140119649A1 (en) Method and apparatus for processing image
US20210012459A1 (en) Image processing method and apparatus
EP3680827A1 (en) Information processing apparatus and memory control method
US20150370755A1 (en) Simd processor and control processor, and processing element with address calculating unit
JP2014099714A (en) Image processing apparatus, imaging device, image processing method, and program
US20140125821A1 (en) Signal processing circuit, imaging apparatus and program
US9898831B2 (en) Macropixel processing system, method and article
EP2675170A2 (en) Movie processing apparatus and control method therefor
US20180081842A1 (en) Data transfer device and data transfer method
US11494869B2 (en) Image processor having a compressing engine performing operations on each row of M*N data block
US20230222621A1 (en) Information processing apparatus, image processing method and computer readable medium
JP2013025619A (en) Image display device and image display method
US9811920B2 (en) Macropixel processing system, method and article
JP6048046B2 (en) Image composition apparatus and image composition method
US11210763B2 (en) Image processing apparatus, image processing method, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15782241

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2016514864

Country of ref document: JP

Kind code of ref document: A

REEP Request for entry into the european phase

Ref document number: 2015782241

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2015782241

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 15304559

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE