WO2017086029A1 - Image processing device, image processing method, moving body, wearable electronic device, and computer program - Google Patents

Image processing device, image processing method, moving body, wearable electronic device, and computer program

Info

Publication number
WO2017086029A1
WO2017086029A1 (PCT/JP2016/078678)
Authority
WO
WIPO (PCT)
Prior art keywords
image
image processing
data
processing apparatus
control unit
Prior art date
Application number
PCT/JP2016/078678
Other languages
French (fr)
Japanese (ja)
Inventor
小曽根 卓義
洋司 山本
Original Assignee
ソニー株式会社
Priority date
Filing date
Publication date
Application filed by ソニー株式会社 filed Critical ソニー株式会社
Publication of WO2017086029A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/40Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
    • H04N25/44Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled by partially reading an SSIS array

Definitions

  • The present disclosure relates to an image processing device, an image processing method, a moving body, a body-mounted electronic device, and a computer program.
  • Patent Document 1 describes a technique for converting an image captured using a fisheye lens into a developed image developed on a cylindrical surface.
  • In view of this, the present disclosure proposes a new and improved image processing apparatus, image processing method, moving object, and computer program capable of performing image processing on a captured image with little delay and a small memory capacity.
  • According to the present disclosure, there is provided an image processing apparatus including: an image sensor from which data can be read arbitrarily in units of a pixel or of a pixel group composed of a plurality of adjacent pixels; and a control unit that determines a data reading order for the image sensor and reads data from the image sensor based on the determined reading order.
  • According to the present disclosure, there are also provided a moving object including the image processing device and a body-mounted electronic device including the image processing device.
  • According to the present disclosure, there is also provided an image processing method including: determining a data reading order for an image sensor from which data can be read arbitrarily in units of a pixel or of a pixel group composed of a plurality of adjacent pixels; and reading data from the image sensor based on the determined reading order.
  • According to the present disclosure, there is also provided a computer program for causing a computer to determine a data reading order for an image sensor from which data can be read arbitrarily in units of a pixel or of a pixel group composed of a plurality of adjacent pixels, and to read data from the image sensor based on the determined reading order.
  • As described above, according to the present disclosure, a new and improved image processing apparatus, image processing method, moving object, and computer program capable of performing image processing with little delay on a captured image can be provided.
  • FIG. 1 is an explanatory diagram showing a functional configuration example of an image processing apparatus 10 that corrects a distorted image.
  • FIG. 2 is an explanatory diagram showing a functional configuration example of an image processing apparatus 10′ that corrects a distorted image.
  • FIG. 3 is an explanatory diagram illustrating a functional configuration example of the image processing apparatus 100 according to an embodiment of the present disclosure.
  • FIG. 4 is an explanatory diagram showing a configuration example of the first chip 120 of the CIS 101.
  • FIG. 5 is an explanatory diagram showing a configuration example of the second chip 130 of the CIS 101.
  • FIG. 6 is a flowchart illustrating an operation example of the image processing apparatus 100 according to the embodiment of the present disclosure.
  • FIGS. 7 to 9 are explanatory diagrams for explaining the operation of the image processing apparatus 100 according to the embodiment.
  • FIGS. 10 and 11 are explanatory diagrams showing other methods of reading image data from the CIS 101.
  • FIGS. 12 and 13 are explanatory diagrams explaining how the image output by the image processing apparatus 100 changes based on operation information.
  • FIG. 14 is an explanatory diagram illustrating an application example of the image processing apparatus 100 according to an embodiment of the present disclosure.
  • By using a camera equipped with a lens having a special structure, an image including the entire circumference of the camera can be obtained. Since an image captured with a heavily distorting lens such as a fisheye lens is strongly distorted, an observer finds the image unnatural as captured. It is therefore necessary to transform such a heavily distorted image into a form that does not look unnatural to the viewer.
  • To deform such a heavily distorted image, it has been common to first store the entire distorted image output from the image sensor in a frame memory. The image is then read from the frame memory, subjected to a deformation process that removes the distortion, and either output or written back to the frame memory.
  • FIG. 1 is an explanatory diagram showing a functional configuration example of an image processing apparatus 10 that corrects a distorted image.
  • Image data output from a CIS (CMOS Image sensor) 11 is temporarily stored in the DRAM 13 via the bus bridge 12 in units of frames.
  • For the image data stored in the DRAM 13, the coordinates to be read are designated by the coordinate generation unit 18, and the data is stored in an SRAM (Static Random Access Memory) 15 via the bus bridge 14.
  • For the image data stored in the SRAM 15, the coordinates to be corrected are designated by the coordinate generation unit 18, and the data is supplied to the pixel interpolation unit 16.
  • The pixel interpolation unit 16 corrects the distortion by moving pixel positions, performing interpolation processing, and so on, and outputs the corrected image data to the SRAM 17.
  • FIG. 2 is an explanatory diagram showing a functional configuration example of the image processing apparatus 10 ′ that corrects a distorted image.
  • Image data in a predetermined range within the frame output from the CIS 11 is stored in the SRAM 21.
  • For the image data stored in the SRAM 21, the coordinates to be corrected are designated by the coordinate generation unit 18, and the data is supplied to the pixel interpolation unit 16.
  • The pixel interpolation unit 16 corrects the distortion by moving pixel positions, performing interpolation processing, and so on, and outputs the corrected image data to the SRAM 17.
  • However, even when only a partial region of a heavily distorted image is to be displayed, conventional processing outputs the image of the entire area from the image sensor. Further, when the image must be deformed in the horizontal or vertical direction, a memory for holding the coordinate relationship before and after the deformation is necessary. A sketch of this conventional frame-buffered path follows.
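  • To make the cost of this conventional path concrete, the following is a minimal sketch, with assumed image sizes and helper names (none of them come from the patent itself), of the frame-buffered flow of FIG. 1: the whole distorted frame must land in the frame memory before the first corrected pixel can be produced.

```python
import numpy as np

H, W = 1080, 1920  # assumed sensor resolution, for illustration only

def conventional_correct(read_full_frame, inverse_map):
    """Frame-buffered correction in the style of FIG. 1 (hypothetical helpers).

    read_full_frame() -> full distorted frame as a 2-D array;
    inverse_map(x, y) -> valid integer source coordinate (sy, sx) for output
    pixel (x, y); nearest-neighbour is used here for brevity.
    """
    frame_dram = read_full_frame()          # DRAM 13: one whole frame buffered,
                                            # so output lags imaging by a frame
    out = np.empty((H, W), dtype=frame_dram.dtype)
    for y in range(H):
        for x in range(W):
            sy, sx = inverse_map(x, y)      # coordinate generation unit 18
            out[y, x] = frame_dram[sy, sx]  # pixel interpolation unit 16 (simplified)
    return out
```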
  • In view of this background, the present inventors intensively studied techniques for performing deformation processing with little delay and a small memory capacity on an image captured with a heavily distorting lens.
  • As a result, the inventors devised a technology that, by using an image sensor from which image data can be read from an arbitrary location, can perform deformation processing on an image captured with a heavily distorting lens with little delay and a small memory capacity.
  • FIG. 3 is an explanatory diagram illustrating a functional configuration example of the image processing apparatus 100 according to the embodiment of the present disclosure.
  • FIG. 3 shows an example of the functional configuration of the image processing apparatus 100 that performs correction processing on an image captured by a lens with large distortion.
  • Hereinafter, a functional configuration example of the image processing apparatus 100 according to the embodiment of the present disclosure will be described with reference to FIG. 3.
  • As shown in FIG. 3, the image processing apparatus 100 includes a CMOS image sensor (CIS) 101, an image processing unit 102, and an input unit 103.
  • The image processing unit 102 includes SRAMs 111 and 113, an image transformation unit 112, and a coordinate generation unit 114.
  • The CIS 101 is a solid-state image sensor that converts light collected through a lens, for example a fisheye lens, which is not a central-projection lens, into an electrical signal and outputs it.
  • The CIS 101 is provided with a color filter having a Bayer array or another array.
  • The CIS 101 is configured so that data can be output in arbitrary pixel units or arbitrary pixel-group (block) units based on an external command.
  • Here, a configuration example of the CIS 101 will be described.
  • The CIS 101 has a structure in which two semiconductor chips are stacked.
  • In the following description, the upper chip of the stacked structure is referred to as the first chip, and the lower chip as the second chip.
  • The first chip is a pixel chip on which a pixel array unit is formed, with unit pixels including photoelectric conversion elements arranged two-dimensionally in a matrix.
  • The second chip can include a driving unit that drives the pixels of the pixel array unit formed on the first chip, a signal processing unit that performs signal processing such as converting the analog signals read from the pixels of the pixel array unit into digital signals, a memory that stores the data processed by the signal processing unit, a data processing unit that outputs the data written in the memory based on an external command, and the like.
  • FIG. 4 is an explanatory diagram showing a configuration example of the first chip 120 of the CIS 101.
  • FIG. 5 is an explanatory diagram showing a configuration example of the second chip 130 of the CIS 101.
  • FIG. 4 shows the first chip 120, which is provided with a pixel array unit 121 in which unit pixels including photoelectric conversion elements are arranged two-dimensionally in a matrix, pad units 122a and 122b for electrical connection to the outside, and vias (VIA) 123 for electrical connection to the second chip 130.
  • FIG. 5 shows the second chip 130, which is provided with a signal processing unit 131 that performs signal processing such as converting the analog signals read from the pixels of the pixel array unit into digital signals and writes them to memory, peripheral circuits 132a and 132b that drive the pixels formed on the first chip 120, an output circuit 133 that outputs the data written in the signal processing unit 131 based on an external command, and column recorders 134a and 134b that designate the column and row addresses used when pixel data are written to or read from the memory.
  • In the CIS 101, pixel units each consisting of a predetermined number of pixels are arranged two-dimensionally in a matrix, and a via 123 is formed for each pixel unit.
  • The number of pixels constituting a pixel unit can be arbitrary.
  • The CIS 101 is configured so that pixel data can be output to the outside in units of these pixel units based on an external command.
  • The order in which the pixel data of the pixel units are output can be set arbitrarily based on an external command.
  • In the present embodiment, the coordinate generation unit 114 designates the order in which pixel data are read from the CIS 101, and the CIS 101 outputs the pixel data of the designated pixel units to the image processing unit 102; this processing is described in detail later, and a sketch of such a block-readout interface follows.
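  • The following is a minimal software model of such a sensor, under assumptions of our own: the block size and every class and method name here are hypothetical, not the device's actual command set.

```python
import numpy as np

BLOCK = 16  # assumed pixel-unit edge length; the patent leaves the unit size arbitrary

class BlockReadoutSensor:
    """Hypothetical model of a CIS-101-like sensor: pixel data is served
    per pixel unit (block), in whatever order it is commanded."""

    def __init__(self, frame: np.ndarray):
        # Stand-in for the digitized pixel data held on the second chip.
        self.frame = frame

    def read_unit(self, bx: int, by: int) -> np.ndarray:
        # Return the pixel unit at block column bx, block row by. A real
        # device would address the unit through its per-unit via 123 and
        # the column/row addressing of the second chip.
        y, x = by * BLOCK, bx * BLOCK
        return self.frame[y:y + BLOCK, x:x + BLOCK]

# The coordinate generation unit 114 then issues an arbitrary (bx, by) sequence,
# e.g.: for bx, by in read_order: buffer.append(sensor.read_unit(bx, by))
```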
  • the image processing unit 102 may be configured by, for example, a CPU (Central Processing Unit), various ROMs (Read Only Memory), a RAM (Random Access Memory), and the like.
  • the image processing unit 102 can function as an example of the image processing unit of the present disclosure.
  • the SRAM 111 is a memory that stores pixel data output from the CIS 101.
  • the pixel data stored in the SRAM 111 is output to the image transformation unit 112 with the output target specified by the coordinate generation unit 114.
  • The SRAM 111 only needs a capacity sufficient to store the pixel data for the lines to be output after distortion correction by the image transformation unit 112 in the subsequent stage. That is, the SRAM 111 need not have the capacity to store one frame of pixel data; a capacity for less than one frame suffices, as the rough comparison below illustrates.
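  • The following back-of-the-envelope comparison uses assumed figures; the resolution, bytes per pixel, block size, and the number of block rows a warped line straddles are all illustrative, since the patent fixes none of them.

```python
# Assumed figures, for illustration only.
H, W, BPP = 1080, 1920, 2      # frame height, width, bytes per pixel
BLOCK = 16                      # pixel-unit (block) edge length
BANDS = 3                       # block rows straddled by one warped output line

frame_buffer = H * W * BPP              # what a FIG. 1 style frame memory holds
line_buffer = BANDS * BLOCK * W * BPP   # what SRAM 111 needs under these assumptions

print(frame_buffer // 1024, "KiB vs", line_buffer // 1024, "KiB")
# -> 4050 KiB vs 180 KiB: well under one frame, as the text states.
```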
  • The image transformation unit 112 performs image deformation processing using the pixel data stored in the SRAM 111.
  • the image transformation unit 112 can perform rotation, enlargement, reduction, or the like of at least a part of the range of the image captured by the CIS 101 in addition to processing for removing distortion of the image captured by the CIS 101 as image deformation processing.
  • the image transformation unit 112 performs an interpolation process using pixel data stored in the SRAM 111, for example, when generating an image from which distortion is removed.
  • When an output image free of distortion is generated from a distorted image, the output image is produced by moving the pixel data of the distorted image; however, simply moving pixel data leaves some locations without pixel data. It is therefore necessary to generate pixel data for those locations by interpolation using the moved pixel data.
  • The image transformation unit 112 performs this interpolation processing.
  • the image transformation unit 112 may use the data of the coordinates specified by the coordinate generation unit 114 when performing the interpolation process.
  • the image transformation unit 112 outputs the pixel data generated by performing the interpolation process to the SRAM 113.
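  • One plausible form of the interpolation performed by the image transformation unit 112 is a standard bilinear kernel, sketched below; this is an assumption for illustration, since the patent does not commit to a particular kernel.

```python
import numpy as np

def bilinear_sample(buf: np.ndarray, sx: float, sy: float) -> float:
    """Sample buf at the fractional source coordinate (sx, sy) by blending
    the four surrounding pixels (sx, sy assumed to lie inside the buffer)."""
    x0, y0 = int(np.floor(sx)), int(np.floor(sy))
    x1 = min(x0 + 1, buf.shape[1] - 1)
    y1 = min(y0 + 1, buf.shape[0] - 1)
    fx, fy = sx - x0, sy - y0
    top = (1 - fx) * buf[y0, x0] + fx * buf[y0, x1]
    bot = (1 - fx) * buf[y1, x0] + fx * buf[y1, x1]
    return (1 - fy) * top + fy * bot
```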
  • the SRAM 113 is a memory that stores pixel data output from the image transformation unit 112.
  • the pixel data stored in the SRAM 113 is output to the subsequent processing block at a predetermined timing and becomes the original data of the output image.
  • the coordinate generation unit 114 outputs information related to the coordinates to the CIS 101, the SRAM 111, and the image transformation unit 112.
  • the coordinate generation unit 114 specifies a block for outputting pixel data to the CIS 101.
  • For the SRAM 111, the coordinate generation unit 114 designates the coordinates of the pixel data to be output to the image transformation unit 112.
  • the coordinate generation unit 114 specifies the coordinates to be interpolated for the image deformation unit 112.
  • When the coordinate generation unit 114 designates the blocks from which the CIS 101 outputs pixel data, it designates them starting from the upper-left part of the output image. For example, when a part of the image captured by the CIS 101 is cut out, rotated, and output, the coordinate generation unit 114 instructs the CIS 101 to output pixel data starting from the block corresponding to the upper-left location of the rotated image.
  • the input unit 103 includes, for example, a lever, a button, a touch panel, and the like, and outputs operation information generated based on the operation of these devices to the coordinate generation unit 114.
  • the input unit 103 may allow an operator to specify an arbitrary range in an image captured by the CIS 101, to enlarge or reduce the range, and to rotate the range.
  • The coordinate generation unit 114 generates the information about the coordinates to be output to the CIS 101, the SRAM 111, and the image transformation unit 112 based on the operation information sent from the input unit 103.
  • the coordinate generation unit 114 can dynamically change the distortion correction target by generating information about coordinates based on the operation of the input unit 103.
  • the image processing unit 102 may include components other than those described above.
  • the SRAM 111 is included in the image processing unit 102.
  • the SRAM 111 may be provided outside the image processing unit 102.
  • Since the image processing apparatus 100 according to the embodiment of the present disclosure has such a configuration, it can perform deformation processing on an image captured with a heavily distorting lens with little delay and a small memory capacity.
  • FIG. 6 is a flowchart illustrating an operation example of the image processing apparatus 100 according to the embodiment of the present disclosure.
  • FIG. 6 shows an operation example of the image processing apparatus 100 when a target image is generated by correcting the distortion of an image captured by the CIS 101, using the data output from the CIS 101.
  • First, a block to be subjected to the distortion correction processing is designated (step S101).
  • the coordinate generation unit 114 can execute the process of step S101.
  • The coordinate calculation for distortion correction computes the correspondence between the coordinates of the image before distortion correction and those of the image after distortion correction; that is, it computes which coordinate (x, y) of the image before correction corresponds to each coordinate (x′, y′) of the image after correction. Once this correspondence is known, it can be determined which blocks of image data should be read from the CIS 101 to generate the target image. One common mapping is sketched below.
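  • The patent does not name a projection model, so as an assumed but common choice, an equidistant fisheye (where a ray at angle θ from the optical axis lands at radius r = fθ on the sensor) gives the following backward correspondence from an output pixel (x′, y′) of an ideal central-projection view with focal length f′ to a source coordinate (x, y):

```latex
\varphi = \operatorname{atan2}(y', x'), \qquad
\theta = \arctan\!\left(\frac{\sqrt{x'^2 + y'^2}}{f'}\right), \qquad
r = f\,\theta, \qquad
(x, y) = (r\cos\varphi,\; r\sin\varphi)
```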
  • As described above, when the image processing apparatus 100 according to the present embodiment designates the blocks from which the CIS 101 outputs pixel data, it designates them starting from the upper-left part of the output image. For example, when a part of the image captured by the CIS 101 is cut out, rotated, and output, the image processing apparatus 100 instructs the CIS 101 to output pixel data starting from the block corresponding to the upper-left location of the rotated image.
  • Next, based on the designation in step S101, the image processing apparatus 100 reads the image data of the designated blocks from the CIS 101 (step S102).
  • the coordinate generation unit 114 can execute the process in step S102.
  • The image processing apparatus 100 stores the image data read from the CIS 101 in the SRAM 111 (step S103). At this point, the SRAM 111 only needs to hold as much image data as is required to generate one line of the output target image. That is, the SRAM 111 need not have the capacity to store one frame of pixel data; a capacity for less than one frame suffices.
  • When the image data read from the CIS 101 has been stored in the SRAM 111, the image processing apparatus 100 then reads image data from the SRAM 111 by designating coordinates (step S104).
  • The coordinate generation unit 114 can execute the process in step S104. Since the image data is stored in the SRAM 111 in units of blocks of the CIS 101, the image processing apparatus 100 reads from the SRAM 111 the image data necessary for the subsequent deformation processing.
  • When the image data has been read from the SRAM 111 by designating coordinates, the image processing apparatus 100 then performs image deformation processing using the image data read from the SRAM 111 (step S105).
  • the process in step S105 can be executed by the image transformation unit 112.
  • the image deformation processing in step S105 may include processing for correcting image distortion, image rotation processing, image enlargement or reduction processing, and the like.
  • When the image processing apparatus 100 has performed the image deformation processing in step S105, it stores the deformed image data in the SRAM 113.
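  • The sketch below strings steps S101 to S105 together for one output line; the helper names and the on-demand caching policy are assumptions of ours, and a nearest-neighbour pick stands in for the interpolation of step S105.

```python
BLOCK = 16  # assumed pixel-unit edge length

def produce_output_line(y_out, width, read_unit, inverse_map):
    """Generate one corrected output line without a frame buffer.

    read_unit(bx, by) -> 2-D pixel block (hypothetical sensor command);
    inverse_map(x, y) -> fractional source coordinate of output pixel (x, y).
    """
    sram111 = {}  # per-line block cache standing in for SRAM 111
    line = []
    for x_out in range(width):
        sx, sy = inverse_map(x_out, y_out)            # S101: coordinate correspondence
        bx, by = int(sx) // BLOCK, int(sy) // BLOCK   # block holding the source pixel
        if (bx, by) not in sram111:                   # S102/S103: fetch only needed blocks
            sram111[(bx, by)] = read_unit(bx, by)
        block = sram111[(bx, by)]
        # S104/S105: nearest-neighbour pick for brevity; the image transformation
        # unit 112 would interpolate (see the bilinear sketch above).
        line.append(block[int(sy) % BLOCK, int(sx) % BLOCK])
    return line  # handed to SRAM 113 in the text
```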
  • FIG. 7 shows an example of an image output from the CIS 101: a distorted image 140 formed on the CIS 101 through a fisheye lens, which is not a central-projection lens.
  • To produce an image that does not look unnatural to an observer, the image processing apparatus 100 according to the embodiment of the present disclosure first deforms the range indicated by reference numeral 141 in the image 140, which contains the uppermost line of the deformed image. In doing so, the image processing apparatus 100 reads the image data from the CIS 101 by designating the blocks from which the image data are to be read. Thereafter, the image processing apparatus 100 reads image data from the CIS 101 by sequentially designating the blocks corresponding to successive positions downward from the uppermost line of the deformed image.
  • FIG. 8 is an explanatory diagram for explaining that the image processing apparatus 100 reads the image data from the CIS 101 by designating a block from which the image data is read.
  • FIG. 9 shows an example of an image 150 generated by the image processing apparatus 100 according to the embodiment of the present disclosure by deforming the image output from the CIS 101 so that it does not look unnatural to the observer.
  • FIG. 8 shows the blocks 144 containing the line 143 that corresponds to the top line 151 of the image 150 in FIG. 9.
  • the image processing apparatus 100 reads the image data of each block 144 shown in FIG. 8 from the CIS 101 in order to generate the highest line 143 of the converted image.
  • the image processing apparatus 100 reads out image data from the CIS 101 by sequentially designating blocks corresponding to the downward direction from the uppermost line of the deformed image.
  • the designation of this block is performed by the coordinate generation unit 114 based on the generated coordinate information.
  • The image processing apparatus 100 can determine the order of reading image data from the CIS 101 arbitrarily, according to the image to be generated. If the image to be generated involves no rotation, the image processing apparatus 100 may select the blocks from which image data are read from the CIS 101 so that they proceed from the top to the bottom of the generated image.
  • the image processing apparatus 100 may delete unnecessary image data from the SRAM 111 when the image data is read from the CIS 101 in units of blocks.
  • the operation example of the image processing apparatus 100 according to the embodiment of the present disclosure has been described above.
  • By executing the above-described operation, the image processing apparatus 100 according to the embodiment of the present disclosure can perform deformation processing on an image captured with a heavily distorting lens with little delay and a small memory capacity.
  • the image processing apparatus 100 can read image data from the CIS 101 by various methods by applying the reading of the image data from the CIS 101 in units of blocks described above.
  • FIGS. 10 and 11 are explanatory diagrams illustrating another method of reading image data from the CIS 101 by the image processing apparatus 100 according to the embodiment of the present disclosure.
  • FIGS. 10 and 11 show examples in which, when reading image data from the CIS 101, the image processing apparatus 100 reads regions divided in the horizontal direction sequentially from the nearest location in the vertical direction, rather than in units of blocks.
  • FIG. 10 shows an example in which an image is divided into three regions (A phase, B phase, and C phase), each read sequentially from the nearest location in the vertical direction.
  • FIG. 11 shows an example in which the image is first divided in two at its center, then divided into three regions (A phase, B phase, and C phase) arranged line-symmetrically, and each read sequentially from the nearest location in the vertical direction.
  • By reading out in this way, the system configuration of the image processing apparatus 100 can be simplified compared with reading out in units of pixels or blocks; one way to generate such an order is sketched below.
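  • The exact scheduling is not spelled out in the text, so the following is one assumed realization of the FIG. 10 style order: three horizontal bands scanned top-down in lockstep, so the three phase outputs advance together.

```python
def phased_row_order(height: int, phases: int = 3) -> list[int]:
    """Row read order for an image split into `phases` horizontal bands
    (A, B, C in FIG. 10), taking the i-th row of each band in turn.
    The lockstep scheduling is an assumption for illustration."""
    band = height // phases  # assumes height divides evenly, for brevity
    return [p * band + i for i in range(band) for p in range(phases)]

# e.g. phased_row_order(9) -> [0, 3, 6, 1, 4, 7, 2, 5, 8]
```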
  • As described above, the image processing apparatus 100 may generate the information about the coordinates to be output to the CIS 101, the SRAM 111, and the image transformation unit 112 based on the operation information sent from the input unit 103. In the following, an image processing apparatus 100 that changes its output image based on operation information is described.
  • FIGS. 12 and 13 are explanatory diagrams for explaining how the image processing apparatus 100 changes the image it outputs based on operation information.
  • FIG. 12 shows the image 160a, obtained by applying deformation processing to the image output from the CIS 101, and the manner in which the display target is moved diagonally down and to the left based on the operation information.
  • the reference numeral 161 shown in FIG. 12 is an example of a distorted image that is imaged on the CIS 101 through the fisheye lens and output from the CIS 101.
  • Reference numerals 162a to 162d shown in FIG. 12 indicate the range of the image to be output, which is cut out from the image output by the CIS 101 and deformed.
  • the image processing apparatus 100 changes the block to be read from the CIS 101 in accordance with the movement of the display object.
  • FIG. 13 is an explanatory diagram showing an example in which, as the display target moves diagonally down and to the left based on the operation information, the blocks to be read from the CIS 101 change, and the image finally output by the image processing apparatus 100 changes from the image 160a to the image 160d.
  • The upper side of each of the deformed output images 160a to 160d corresponds to the upper right of the respective range 162a to 162d in FIG. 12. Accordingly, as the display target moves diagonally down and to the left based on the operation information, the image processing apparatus 100 changes the block to be read first from the block 163a to the block 163d, as shown in FIG. 13.
  • That is, when the image processing apparatus 100 changes the image to be output based on the operation information sent from the input unit 103, it changes the blocks to be read from the CIS 101 based on that operation information, and the CIS 101 outputs the image data of the designated blocks.
  • The block reading order is determined based on the coordinate information generated by the coordinate generation unit 114 according to the contents of the operation information sent from the input unit 103, in the same way as in the processing for removing the distortion of an image captured through a fisheye lens described above; a sketch follows.
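  • As a minimal sketch under assumed names (the linear pan model and every identifier here are illustrative, not from the patent), this is how a reported pan could be turned into a new range of blocks to command from the CIS 101.

```python
BLOCK = 16  # assumed pixel-unit edge length

def blocks_after_pan(origin, pan_dx, pan_dy, crop_w, crop_h):
    """Recompute the block range to read after a pan operation.

    When the input unit 103 reports a move of (pan_dx, pan_dy) in source-image
    coordinates, the cut-out range 162 shifts, so the block holding its
    upper-left corner (163a -> 163d in FIG. 13) changes accordingly."""
    ox, oy = origin[0] + pan_dx, origin[1] + pan_dy   # new upper-left of range 162
    first_block = (ox // BLOCK, oy // BLOCK)           # first unit to command
    last_block = ((ox + crop_w - 1) // BLOCK, (oy + crop_h - 1) // BLOCK)
    return first_block, last_block
```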
  • By performing deformation processing on the image data output from the CIS 101 in this way, the image processing apparatus 100 can output an image that follows the operation information sent from the input unit 103.
  • In the examples described so far, the image output by the CIS 101 is an image formed through a special lens such as a fisheye lens, but the present disclosure is not limited to such examples.
  • The image processing apparatus 100 can likewise change the blocks to be read from the CIS 101 when a part of an ordinary image with little distortion is cut out and output based on the operation of the input unit 103.
  • the image processing apparatus 100 according to the embodiment of the present disclosure can be applied to, for example, automobiles, flying objects, and other mobile objects, and body-mounted electronic devices (wearable devices).
  • FIG. 14 is an explanatory diagram illustrating an application example of the image processing apparatus 100 according to the embodiment of the present disclosure, and illustrates an example in a case where the image processing apparatus 100 is provided to an automobile.
  • FIG. 14 shows imaging units 2910, 2912, 2914, 2916, and 2918 and outside information detection units 2920, 2922, 2924, 2926, 2928, and 2930.
  • the imaging units 2910, 2912, 2914, 2916, and 2918 are provided at, for example, at least one of the front nose, the side mirror, the rear bumper, the back door, and the upper part of the windshield in the vehicle interior of the vehicle 2900.
  • An imaging unit 2910 provided in the front nose and an imaging unit 2918 provided in the upper part of the windshield in the vehicle interior mainly acquire an image in front of the vehicle 2900.
  • the imaging units 2912 and 2914 provided in the side mirror mainly acquire an image on the side of the vehicle 2900.
  • An imaging unit 2916 provided in the rear bumper or the back door mainly acquires an image behind the vehicle 2900.
  • An imaging unit 2918 provided on the upper part of the windshield in the passenger compartment is mainly used for detecting a preceding vehicle or a pedestrian, an obstacle, a traffic light, a traffic sign, a lane, or the like.
  • FIG. 14 shows an example of shooting ranges of the respective imaging units 2910, 2912, 2914, and 2916.
  • the imaging range a indicates the imaging range of the imaging unit 2910 provided in the front nose
  • the imaging ranges b and c indicate the imaging ranges of the imaging units 2912 and 2914 provided in the side mirrors, respectively
  • The imaging range d indicates the imaging range of the imaging unit 2916 provided on the rear bumper or the back door. For example, by superimposing the image data captured by the imaging units 2910, 2912, 2914, and 2916, an overhead image of the vehicle 2900 viewed from above is obtained.
  • the vehicle outside information detection units 2920, 2922, 2924, 2926, 2928, 2930 provided on the front, rear, side, corner, and upper windshield of the vehicle 2900 may be, for example, an ultrasonic sensor or a radar device.
  • the vehicle outside information detection units 2920, 2926, and 2930 provided on the front nose, the rear bumper, the back door, and the windshield in the vehicle interior of the vehicle 2900 may be, for example, LIDAR devices.
  • These vehicle outside information detection units 2920 to 2930 are mainly used for detecting a preceding vehicle, a pedestrian, an obstacle, and the like.
  • Each of the imaging units 2910, 2912, 2914, 2916, and 2918 in FIG. 14 can be provided with the CIS 101, from which image data can be read in arbitrary block units, with the above-described image processing unit 102 provided inside the vehicle 2900.
  • With this arrangement, since the SRAM 111 only needs the capacity to store less than one frame of image data, it can contribute to reducing the cost of the hardware for presenting an image of the vehicle's surroundings to the driver of the vehicle 2900.
  • an apparatus to which the image processing apparatus 100 according to the embodiment of the present disclosure can be applied is not limited to an automobile.
  • the image processing apparatus 100 according to the embodiment of the present disclosure can also be applied to a body-mounted electronic device (wearable device) intended to be used by being worn on a human body.
  • For example, when a wearable device is provided with the CIS 101 and image blur caused by the movement of the body is to be corrected, providing the above-described image processing unit 102 in the wearable device makes it possible to correct image blur and distortion with low delay and to present the corrected image captured by the wearable device to the observer. Further, since the SRAM 111 only needs the capacity to store less than one frame of image data, this can contribute to reducing the cost of the hardware for presenting images captured by the wearable device.
  • the image processing apparatus 100 can also be applied to a case where a camera is mounted on an autonomous flying object (drone) and an image from above is captured by the camera.
  • In such a case, the above-described image processing unit 102 is provided inside the flying object.
  • Again, since the SRAM 111 only needs the capacity to store less than one frame of image data, this can contribute to reducing the cost of the hardware for presenting images captured by the autonomously flying object.
  • As described above, according to the embodiment of the present disclosure, an image processing apparatus 100 is provided that can deform an image and output the deformed image without storing image data in units of frames.
  • The image processing apparatus 100 according to the embodiment of the present disclosure reads data from the CIS 101, which can output image data in units of blocks, each consisting of a pixel or a plurality of adjacent pixels, in an arbitrary order.
  • the image processing apparatus 100 reads data from the CIS 101 by an amount necessary for image processing, for example, distortion correction processing for each line.
  • The image processing apparatus 100 according to the embodiment of the present disclosure does not need to read a full frame of data from the CIS 101 to perform image processing, and can therefore perform image processing with little delay. Further, the image processing apparatus 100 can keep the capacity of the SRAM 111, which temporarily stores image data before image processing, below one frame of image data, so hardware costs can be reduced.
  • Each step in the processing executed by each device in this specification need not be processed chronologically in the order described in the sequence diagrams or flowcharts.
  • each step in the processing executed by each device may be processed in an order different from the order described as the flowchart, or may be processed in parallel.
  • each functional block shown in the functional block diagram used in the above description may be realized by a server device connected via a network such as the Internet.
  • the configuration of each functional block shown in the functional block diagram used in the above description may be realized by a single device or a system in which a plurality of devices cooperate.
  • a system in which a plurality of devices are linked may include, for example, a combination of a plurality of server devices, a combination of a server device and a terminal device, or the like.
  • (1) An image processing apparatus including: an image sensor from which data can be read arbitrarily in units of a pixel or of a pixel group composed of a plurality of adjacent pixels; and a control unit that determines a data reading order for the image sensor and reads data from the image sensor based on the determined reading order.
  • the image processing apparatus according to (1) wherein the control unit performs image processing on an image formed on the imaging element using data read from the imaging element based on the determined reading order.
  • the image processing apparatus according to (2) further including a storage unit that stores data read by the control unit.
  • a body-mounted electronic device comprising the image processing device according to any one of (1) to (12).
  • An image processing method including: determining a data reading order for an image sensor from which data can be read arbitrarily in units of a pixel or of a pixel group composed of a plurality of adjacent pixels; and reading data from the image sensor based on the determined reading order.
  • A computer program for causing a computer to execute: determining a data reading order for an image sensor from which data can be read arbitrarily in units of a pixel or of a pixel group composed of a plurality of adjacent pixels; and reading data from the image sensor based on the determined reading order.
  • 100: Image processing apparatus, 102: Image processing unit, 103: Input unit, 111: SRAM, 112: Image transformation unit, 113: SRAM, 114: Coordinate generation unit, 120: First chip, 121: Pixel array unit, 122a: Pad unit, 122b: Pad unit, 123: Via, 130: Second chip, 131: Signal processing unit, 132a: Peripheral circuit, 132b: Peripheral circuit, 133: Output circuit, 134a: Column recorder, 134b: Column recorder

Abstract

[Problem] To provide an image processing device capable of performing image processing on captured images with little delay. [Solution] Provided is an image processing device comprising: an imaging element from which data can be read arbitrarily in units of a pixel or of a pixel group composed of a plurality of adjacent pixels; and a control unit that determines a data reading order for the imaging element and reads data from the imaging element on the basis of the determined reading order. The image processing device can process captured images with little delay as a result of reading data from the imaging element on the basis of the determined reading order.

Description

Image processing apparatus, image processing method, moving object, body-mounted electronic device, and computer program

The present disclosure relates to an image processing apparatus, an image processing method, a moving object, a body-mounted electronic device, and a computer program.

By using a camera equipped with a lens having a special structure, an image including the entire circumference of the camera can be obtained. Various techniques for processing and using such images have been proposed. For example, Patent Document 1 describes a technique for converting an image captured with a fisheye lens into a developed image projected onto a cylindrical surface.

JP 2012-226645 A

In order to transform an image captured with a heavily distorting lens that is not a central-projection lens, such as a fisheye lens, into a form that does not look unnatural to an observer, the image has conventionally been accumulated in a memory before an arbitrary deformation process is applied. Depending on the deformation process, however, the required memory capacity grows, and writing the image to and reading it from the memory takes time, causing a delay from imaging to output. A similar delay from imaging to output also occurs when a part of an image is cut out and output.

The present disclosure therefore proposes a new and improved image processing apparatus, image processing method, moving object, and computer program capable of performing image processing on a captured image with little delay and a small memory capacity.

According to the present disclosure, there is provided an image processing apparatus including: an image sensor from which data can be read arbitrarily in units of a pixel or of a pixel group composed of a plurality of adjacent pixels; and a control unit that determines a data reading order for the image sensor and reads data from the image sensor based on the determined reading order.

According to the present disclosure, there are also provided a moving object including the image processing apparatus, and a body-mounted electronic device including the image processing apparatus.

According to the present disclosure, there is also provided an image processing method including: determining a data reading order for an image sensor from which data can be read arbitrarily in units of a pixel or of a pixel group composed of a plurality of adjacent pixels; and reading data from the image sensor based on the determined reading order.

According to the present disclosure, there is also provided a computer program for causing a computer to determine a data reading order for an image sensor from which data can be read arbitrarily in units of a pixel or of a pixel group composed of a plurality of adjacent pixels, and to read data from the image sensor based on the determined reading order.

As described above, according to the present disclosure, a new and improved image processing apparatus, image processing method, moving object, and computer program capable of performing image processing with little delay on a captured image can be provided.

Note that the above effects are not necessarily limiting; together with or instead of the above effects, any of the effects shown in this specification, or other effects that can be grasped from this specification, may be achieved.

FIG. 1 is an explanatory diagram showing a functional configuration example of an image processing apparatus 10 that corrects a distorted image. FIG. 2 is an explanatory diagram showing a functional configuration example of an image processing apparatus 10′ that corrects a distorted image. FIG. 3 is an explanatory diagram showing a functional configuration example of the image processing apparatus 100 according to an embodiment of the present disclosure. FIG. 4 is an explanatory diagram showing a configuration example of the first chip 120 of the CIS 101. FIG. 5 is an explanatory diagram showing a configuration example of the second chip 130 of the CIS 101. FIG. 6 is a flowchart showing an operation example of the image processing apparatus 100 according to the embodiment of the present disclosure. FIGS. 7 to 9 are explanatory diagrams for explaining the operation of the image processing apparatus 100 according to the embodiment. FIGS. 10 and 11 are explanatory diagrams showing other methods of reading image data from the CIS 101. FIGS. 12 and 13 are explanatory diagrams explaining how the image output by the image processing apparatus 100 changes based on operation information. FIG. 14 is an explanatory diagram showing an application example of the image processing apparatus 100 according to the embodiment of the present disclosure.
Hereinafter, preferred embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. In this specification and the drawings, components having substantially the same functional configuration are denoted by the same reference numerals, and redundant description is omitted.

The description proceeds in the following order.
1. Embodiment of the present disclosure
 1.1. Background
 1.2. Functional configuration example
 1.3. Operation example
 1.4. Application examples
2. Summary

<1. Embodiment of the present disclosure>
[1.1. Background]
First, before describing the embodiment of the present disclosure in detail, the background of the embodiment will be described.
By using a camera equipped with a lens having a special structure, such as a fisheye lens, an image including the entire circumference of the camera can be obtained. Since an image captured with a heavily distorting lens such as a fisheye lens is strongly distorted, an observer finds the image unnatural as captured. It is therefore necessary to transform such a heavily distorted image into a form that does not look unnatural to the viewer.

To deform a heavily distorted image, it has generally been necessary to first accumulate the entire distorted image output from the image sensor in a frame memory. The image is then read from the frame memory, subjected to a deformation process that removes the distortion, and either output or written back to the frame memory.

FIG. 1 is an explanatory diagram showing a functional configuration example of an image processing apparatus 10 that corrects a distorted image. Image data output from a CIS (CMOS image sensor) 11 is temporarily stored, frame by frame, in the DRAM 13 via the bus bridge 12. For the image data stored in the DRAM 13, the coordinates to be read are designated by the coordinate generation unit 18, and the data is stored in an SRAM (Static Random Access Memory) 15 via the bus bridge 14.

For the image data stored in the SRAM 15, the coordinates to be corrected are designated by the coordinate generation unit 18, and the data is supplied to the pixel interpolation unit 16. The pixel interpolation unit 16 corrects the distortion by moving pixel positions, performing interpolation processing, and so on, and outputs the corrected image data to the SRAM 17.

FIG. 2 is an explanatory diagram showing a functional configuration example of an image processing apparatus 10′ that corrects a distorted image. Image data in a predetermined range within the frame output from the CIS 11 is stored in the SRAM 21. For the image data stored in the SRAM 21, the coordinates to be corrected are designated by the coordinate generation unit 18, and the data is supplied to the pixel interpolation unit 16. The pixel interpolation unit 16 corrects the distortion by moving pixel positions, performing interpolation processing, and so on, and outputs the corrected image data to the SRAM 17.

However, even when only a partial region of a heavily distorted image is to be displayed, conventional processing outputs the image of the entire area from the image sensor. Further, when the image must be deformed in the horizontal or vertical direction, a memory for holding the coordinate relationship before and after the deformation is necessary.

In addition, because the entire distorted image is first accumulated in the frame memory, it is difficult to reduce the frame memory capacity. If the deformation is limited, the memory capacity can be small, but line-based processing requires a memory corresponding to the amount of vertical deformation (a line memory for vertical deformation). Furthermore, because the image output from the image sensor is first accumulated in the frame memory, a delay occurs from imaging to output.

In view of the background described above, the present inventors intensively studied techniques for performing deformation processing with little delay and a small memory capacity on an image captured with a heavily distorting lens. As a result, as described below, the inventors devised a technology that, by using an image sensor from which image data can be read from an arbitrary location, can perform deformation processing on an image captured with a heavily distorting lens with little delay and a small memory capacity.

The background of the embodiment of the present disclosure has been described above. Next, the embodiment of the present disclosure will be described in detail.
[1.2. Functional configuration example]
FIG. 3 is an explanatory diagram showing a functional configuration example of the image processing apparatus 100 according to the embodiment of the present disclosure, which performs correction processing on an image captured with a heavily distorting lens. Hereinafter, the functional configuration of the image processing apparatus 100 will be described with reference to FIG. 3.

As shown in FIG. 3, the image processing apparatus 100 according to the embodiment of the present disclosure includes a CMOS image sensor (CIS) 101, an image processing unit 102, and an input unit 103. The image processing unit 102 includes SRAMs 111 and 113, an image transformation unit 112, and a coordinate generation unit 114.

The CIS 101 is a solid-state image sensor that converts light collected through a lens, for example a fisheye lens, which is not a central-projection lens, into an electrical signal and outputs it. The CIS 101 is provided with a color filter having a Bayer array or another array.

In the present embodiment, the CIS 101 is configured so that data can be output in arbitrary pixel units or arbitrary pixel-group (block) units based on an external command. A configuration example of the CIS 101 is described here.

The CIS 101 according to the present embodiment has a structure in which two semiconductor chips are stacked. In the following description, the upper chip of the stacked structure is referred to as the first chip and the lower chip as the second chip.

The first chip is a pixel chip on which a pixel array unit is formed, with unit pixels including photoelectric conversion elements arranged two-dimensionally in a matrix.

The second chip can include a driving unit that drives the pixels of the pixel array unit formed on the first chip, a signal processing unit that performs signal processing such as converting the analog signals read from the pixels of the pixel array unit into digital signals, a memory that stores the data processed by the signal processing unit, a data processing unit that outputs the data written in the memory based on an external command, and the like.

FIG. 4 is an explanatory diagram showing a configuration example of the first chip 120 of the CIS 101, and FIG. 5 is an explanatory diagram showing a configuration example of the second chip 130 of the CIS 101.

FIG. 4 shows the first chip 120, which is provided with a pixel array unit 121 in which unit pixels including photoelectric conversion elements are arranged two-dimensionally in a matrix, pad units 122a and 122b for electrical connection to the outside, and vias (VIA) 123 for electrical connection to the second chip 130.

FIG. 5 shows the second chip 130, which is provided with a signal processing unit 131 that performs signal processing such as converting the analog signals read from the pixels of the pixel array unit into digital signals and writes them to memory, peripheral circuits 132a and 132b that drive the pixels formed on the first chip 120, an output circuit 133 that outputs the data written in the signal processing unit 131 based on an external command, and column recorders 134a and 134b that designate the column and row addresses used when pixel data from the pixels of the pixel array unit 121 are written to or read from the memory.

In the present embodiment, pixel units each consisting of a predetermined number of pixels are arranged two-dimensionally in a matrix, and a via 123 is formed for each pixel unit. The number of pixels constituting a pixel unit can be arbitrary.

The CIS 101 according to the present embodiment is configured so that pixel data can be output to the outside in units of these pixel units based on an external command, and the order in which the pixel data of the pixel units are output can be set arbitrarily by an external command. In the present embodiment, the coordinate generation unit 114 designates the order in which pixel data are read from the CIS 101, and the CIS 101 outputs the pixel data of the designated pixel units to the image processing unit 102; this processing is described in detail later.
Next, the configuration of the image processing unit 102 will be described. The image processing unit 102 may be configured with, for example, a CPU (Central Processing Unit), various ROMs (Read Only Memory), and RAM (Random Access Memory), and can function as an example of the image processing unit of the present disclosure.

The SRAM 111 is a memory that stores the pixel data output from the CIS 101. The pixel data stored in the SRAM 111 is output to the image deformation unit 112, with the data to be output designated by the coordinate generation unit 114.

Here, the SRAM 111 only needs enough capacity to store the pixel data for the lines to be output after the image deformation unit 112 in the subsequent stage has corrected the distortion. That is, the SRAM 111 does not need the capacity to store a full frame of pixel data; capacity for less than one frame is sufficient.
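As a back-of-the-envelope illustration of the memory saving (the frame size, band height, and 2 bytes per pixel are assumed figures, not from the patent):

```python
width, height, bytes_per_px = 1920, 1080, 2        # assumed sensor format
full_frame = width * height * bytes_per_px         # buffer for one whole frame
band_rows = 16                                     # assumed height of one band of blocks
band_buffer = width * band_rows * bytes_per_px     # buffer for one band only
print(f"full frame: {full_frame / 1e6:.1f} MB")    # ~4.1 MB
print(f"band buffer: {band_buffer / 1e3:.0f} kB")  # ~61 kB, far below one frame
```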
The image deformation unit 112 performs image deformation processing using the pixel data stored in the SRAM 111. Besides removing the distortion of the image captured by the CIS 101, the deformation processing may include rotating, enlarging, or reducing at least part of the captured image.

When generating a distortion-free image, the image deformation unit 112 performs interpolation using the pixel data stored in the SRAM 111. To produce a corrected output image from a distorted one, the pixel data of the distorted image is relocated; however, merely moving pixel data leaves some locations in the output image without any pixel data. Pixel data therefore has to be generated for those empty locations by interpolating from the relocated pixel data, and the image deformation unit 112 performs this interpolation.
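The patent does not fix the interpolation method; bilinear interpolation is one typical choice, and a minimal sketch of it, assuming the relevant pixel data is already available from the SRAM 111, could look like this:

```python
import numpy as np

def bilinear_sample(img: np.ndarray, x: float, y: float) -> float:
    """Sample img at a non-integer position (x, y) by weighting
    the four surrounding pixels (edges are clamped)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, img.shape[1] - 1)
    y1 = min(y0 + 1, img.shape[0] - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
    bottom = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
    return float((1 - fy) * top + fy * bottom)
```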
When performing the interpolation, the image deformation unit 112 may use the coordinate data designated by the coordinate generation unit 114. The image deformation unit 112 outputs the pixel data generated by the interpolation to the SRAM 113.

The SRAM 113 is a memory that stores the pixel data output from the image deformation unit 112. The pixel data stored in the SRAM 113 is output to the subsequent processing block at a predetermined timing and becomes the source data of the output image.

The coordinate generation unit 114 outputs coordinate-related information to the CIS 101, the SRAM 111, and the image deformation unit 112. For the CIS 101 it designates the blocks that should output pixel data; for the SRAM 111 it designates the coordinates of the pixel data to be output to the image deformation unit 112; and for the image deformation unit 112 it designates the coordinates to be interpolated.

When designating the blocks whose pixel data the CIS 101 should output, the coordinate generation unit 114 starts from the upper-left of the output image. For example, when part of the image captured by the CIS 101 is to be cut out, rotated, and output, the coordinate generation unit 114 instructs the CIS 101 to output pixel data starting from the block corresponding to the upper-left of the rotated image.
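How such a block order can be derived from the coordinate correspondence is sketched below; `inverse_map`, which maps output coordinates back into sensor coordinates, is an assumed interface standing in for the coordinate calculation described later:

```python
def blocks_for_output_row(row: int, out_width: int, inverse_map, unit: int = 16):
    """List, from left to right, the sensor blocks needed to produce one
    output row; starting with row 0 yields the upper-left-first order."""
    seen, order = set(), []
    for x in range(out_width):
        xs, ys = inverse_map(x, row)            # output pixel -> sensor position
        block = (int(xs) // unit, int(ys) // unit)
        if block not in seen:
            seen.add(block)
            order.append(block)
    return order
```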
The input unit 103 includes, for example, a lever, buttons, or a touch panel, and outputs operation information generated by operating these devices to the coordinate generation unit 114. The input unit 103 may, for example, let an operator designate an arbitrary range within the image captured by the CIS 101 and enlarge, reduce, or rotate that range.

Based on the operation information sent from the input unit 103, the coordinate generation unit 114 generates the coordinate-related information it outputs to the CIS 101, the SRAM 111, and the image deformation unit 112. By generating this information from the operation of the input unit 103, the coordinate generation unit 114 can dynamically change the target of distortion correction.

It goes without saying that the image processing unit 102 may include components other than those described above. Also, although the SRAM 111 is included in the image processing unit 102 in the example above, the SRAM 111 may instead be provided outside the image processing unit 102.

With this configuration, the image processing apparatus 100 according to the embodiment of the present disclosure can apply deformation processing to an image captured through a highly distorting lens with little delay and with a small memory capacity.

The functional configuration example of the image processing apparatus 100 according to the embodiment of the present disclosure has been described above. Next, an operation example of the image processing apparatus 100 will be described.
[1.3. Operation Example]

FIG. 6 is a flowchart showing an operation example of the image processing apparatus 100 according to the embodiment of the present disclosure. FIG. 6 shows the operation of the image processing apparatus 100 when it corrects the distortion of an image captured by the CIS 101, using the data output from the CIS 101, to generate a target image. The operation example is described below with reference to FIG. 6.
When correcting the distortion of an image captured by the CIS 101, the image processing apparatus 100 according to this embodiment first executes the coordinate calculation for distortion correction and designates the blocks, within the image captured by the CIS 101, that are to be subjected to the distortion correction processing (step S101). The processing of step S101 can be executed by the coordinate generation unit 114.

The coordinate calculation for distortion correction computes the correspondence between the coordinates of the image before distortion correction and those of the image after correction; that is, it determines which coordinates (x', y') in the corrected image correspond to coordinates (x, y) in the image before correction. Once this correspondence is known, it can be determined which blocks of image data should be read from the CIS 101 to generate the target image.
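The patent does not commit to a particular lens model, but as one concrete example of this correspondence, assume an equidistant-projection fisheye with focal length $f$ and optical center $(c_x, c_y)$. A ray at incident angle $\theta$ lands at radius $r' = f\tan\theta$ in the corrected (central-projection) image and at radius $r = f\theta$ on the sensor, so

$$r = f \arctan\frac{r'}{f}, \qquad (x, y) = \left(c_x + \frac{r}{r'}(x' - c_x),\; c_y + \frac{r}{r'}(y' - c_y)\right),$$

where $(x', y')$ is a pixel of the corrected image, $r' = \sqrt{(x' - c_x)^2 + (y' - c_y)^2}$, and $(x, y)$ is the corresponding position in the distorted source image.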
When designating the blocks whose pixel data the CIS 101 should output, the image processing apparatus 100 according to this embodiment starts from the upper-left of the output image. For example, when part of the image captured by the CIS 101 is to be cut out, rotated, and output, the image processing apparatus 100 instructs the CIS 101 to output pixel data starting from the block corresponding to the upper-left of the rotated image.

After the coordinate calculation and block designation in step S101, the image processing apparatus 100 reads the image data of the designated blocks from the CIS 101 based on that designation (step S102). The processing of step S102 can be executed by the coordinate generation unit 114.

Having read the image data of the designated blocks from the CIS 101, the image processing apparatus 100 stores it in the SRAM 111 (step S103). At this point, the SRAM 111 only needs to hold enough image data to generate one line of the output image; as noted above, capacity for less than one frame of pixel data is sufficient.

Next, the image processing apparatus 100 reads image data from the SRAM 111 by designating coordinates (step S104). The processing of step S104 can be executed by the coordinate generation unit 114. Since the image data is stored in the SRAM 111 in block units of the CIS 101, the image processing apparatus 100 reads from the SRAM 111 only as much image data as the subsequent deformation processing requires.

The image processing apparatus 100 then performs the image deformation processing using the image data read from the SRAM 111 (step S105). The processing of step S105 can be executed by the image deformation unit 112, and may include distortion correction, rotation, and enlargement or reduction of the image. After the deformation processing in step S105, the image processing apparatus 100 stores the deformed image data in the SRAM 113.
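Putting steps S101 to S105 together, a minimal per-line sketch of the loop, reusing `BlockReadableSensor`, `blocks_for_output_row`, and `bilinear_sample` from the sketches above, could look like this (cross-block edge handling and other details are omitted):

```python
import numpy as np

def generate_output_image(sensor, inverse_map, out_w, out_h, unit=16):
    out = np.zeros((out_h, out_w), dtype=np.float32)
    for row in range(out_h):                              # one output line at a time
        # S101/S102: decide which blocks this line needs and read only those
        sram = {b: sensor.read_block(*b)
                for b in blocks_for_output_row(row, out_w, inverse_map, unit)}
        # S103 is the small `sram` buffer; S104/S105: inverse-map and interpolate
        for x in range(out_w):
            xs, ys = inverse_map(x, row)
            bx, by = int(xs) // unit, int(ys) // unit
            # sampling right at a block border would need the neighboring
            # block too; that bookkeeping is omitted here for brevity
            out[row, x] = bilinear_sample(sram[(bx, by)],
                                          xs - bx * unit, ys - by * unit)
    return out
```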
The operation example of the image processing apparatus 100 according to the embodiment of the present disclosure will now be described in more detail with reference to other drawings.
FIGS. 7, 8, and 9 are explanatory diagrams for describing the operation of the image processing apparatus 100 according to the embodiment of the present disclosure. FIG. 7 shows an example of an image output from the CIS 101: a distorted image 140 formed on the CIS 101 through a fisheye lens, which is not a central-projection lens.

An image 140 distorted in this way looks very unnatural to an observer. The image processing apparatus 100 according to the embodiment of the present disclosure therefore first deforms the range indicated by reference numeral 141 in the image 140, which contains the topmost line of the deformed image, converting it into an image that looks natural to the observer. In doing so, the image processing apparatus 100 reads image data from the CIS 101 by designating the blocks to be read. It then reads further image data from the CIS 101 by sequentially designating the blocks corresponding to the lines below the topmost line of the deformed image.

FIG. 8 is an explanatory diagram showing how the image processing apparatus 100 reads image data from the CIS 101 by designating the blocks to be read, and FIG. 9 shows an example of the image 150 generated by the image processing apparatus 100 by deforming the image output from the CIS 101 so that it looks natural to the observer.

FIG. 8 shows the blocks 144 containing the line 143 that corresponds to the topmost line 151 of the image 150 in FIG. 9. To generate the topmost line of the converted image, the image processing apparatus 100 reads from the CIS 101 the image data of each of the blocks 144 shown in FIG. 8.

Thereafter, the image processing apparatus 100 reads image data from the CIS 101 by sequentially designating the blocks corresponding to the lines below the topmost line of the deformed image. The coordinate generation unit 114 designates these blocks based on the coordinate information it has generated. Reading image data from the CIS 101 block by block in this way makes it possible to reduce the capacity of the SRAM 111 that stores the image data read from the CIS 101.

The image processing apparatus 100 can determine the order in which image data is read from the CIS 101 arbitrarily, according to the image to be generated. If the image to be generated involves no rotation, the image processing apparatus 100 simply selects the blocks to read from the CIS 101 so that they follow the top-to-bottom order of the generated image.

Depending on how the image is being generated, some blocks become unnecessary after they have been read from the CIS 101. The image processing apparatus 100 may therefore erase such no-longer-needed image data from the SRAM 111 after reading image data from the CIS 101 in block units.
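One illustrative way to decide when a block's data may be erased, again building on `blocks_for_output_row` above (the patent states only that unnecessary data may be erased, not how to track it), is to precompute each block's last use:

```python
def plan_block_lifetimes(out_w, out_h, inverse_map, unit=16):
    """Map each sensor block to the last output row that touches it;
    once that row has been produced, the block can be evicted from SRAM."""
    last_row = {}
    for row in range(out_h):
        for block in blocks_for_output_row(row, out_w, inverse_map, unit):
            last_row[block] = row
    return last_row
```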
The operation example of the image processing apparatus 100 according to the embodiment of the present disclosure has been described above. By performing this operation, the image processing apparatus 100 can apply deformation processing to an image captured through a highly distorting lens with little delay and with a small memory capacity.

By applying the block-unit readout from the CIS 101 described above, the image processing apparatus 100 according to the embodiment of the present disclosure can read image data from the CIS 101 in various ways.

FIGS. 10 and 11 are explanatory diagrams showing other ways in which the image processing apparatus 100 according to the embodiment of the present disclosure can read image data from the CIS 101. They show cases in which, instead of reading in block units, the image is divided into horizontally split regions that are read out sequentially, starting from the vertically nearest location.

FIG. 10 shows an example in which the image is divided into three regions (phase A, phase B, and phase C) and read out sequentially from the vertically nearest location. FIG. 11 shows an example in which the image is first split in two at its center and then divided into three regions (phase A, phase B, and phase C) arranged line-symmetrically, again read out sequentially from the vertically nearest location.
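One plausible reading of the schedule in FIG. 10, sketched here with assumed parameters (the figure itself is not reproduced, so the details may differ), is to split the frame into three horizontal bands and advance all three in step:

```python
def phased_row_order(height: int, phases: int = 3):
    """Illustrative readout schedule: divide the frame into `phases`
    horizontal bands (A, B, C) and emit one row from each band per step,
    each band progressing top to bottom."""
    band = height // phases          # assumes height divisible by phases
    order = []
    for offset in range(band):
        for p in range(phases):
            order.append(p * band + offset)
    return order

print(phased_row_order(9))  # [0, 3, 6, 1, 4, 7, 2, 5, 8]
```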
Reading image data from the CIS 101 in such region units makes it possible to simplify the system of the image processing apparatus 100 compared with reading in pixel or block units.

As described above, the image processing apparatus 100 according to the embodiment of the present disclosure may generate the coordinate-related information it outputs to the CIS 101, the SRAM 111, and the image deformation unit 112 based on the operation information sent from the input unit 103. The following describes how the image processing apparatus 100 changes the output image based on this operation information.

FIGS. 12 and 13 are explanatory diagrams showing how the image processing apparatus 100 changes the output image based on the operation information. FIG. 12 shows an image 160a, output after the image processing apparatus 100 has applied deformation processing to the image output by the CIS 101, and the display target being moved diagonally down and to the left based on the operation information.

Reference numeral 161 in FIG. 12 denotes an example of the distorted image that is formed on the CIS 101 through the fisheye lens and output by the CIS 101. Reference numerals 162a to 162d denote the ranges of the image, cut out of the image output by the CIS 101 and deformed, that are to be output.

As the display target moves diagonally down and to the left based on the operation information, the image processing apparatus 100 changes the blocks to be read from the CIS 101 in accordance with that movement. FIG. 13 is an explanatory diagram showing how, as the display target moves, the blocks read from the CIS 101 change and the image finally output by the image processing apparatus 100 changes from image 160a to image 160d.

In the example shown in FIGS. 12 and 13, the upper side of each of the deformed output images 160a to 160d corresponds to the upper right of the ranges 162a to 162d in FIG. 12. Accordingly, as the display target moves diagonally down and to the left based on the operation information, the image processing apparatus 100 changes the block it reads first from block 163a to block 163d, as shown in FIG. 13.

In this way, when changing the output image based on the operation information sent from the input unit 103, the image processing apparatus 100 according to the embodiment of the present disclosure changes the blocks to be read from the CIS 101 based on that operation information, and the CIS 101 outputs the image data of the designated blocks. The block readout order is determined, just as in the processing that removes the distortion of an image captured through the fisheye lens described above, based on the coordinate information that the coordinate generation unit 114 generates according to the content of the operation information sent from the input unit 103.
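As a rough sketch of this behavior (the window state and the `inverse_map` interface are assumptions, not the patent's), the block read first can simply be recomputed from the current pan position:

```python
def first_block_for_window(win_x: float, win_y: float, inverse_map, unit: int = 16):
    """Map the output corner that is read first (cf. the upper right of
    ranges 162a-162d in FIG. 12) back into sensor coordinates and return
    its block (cf. blocks 163a-163d in FIG. 13)."""
    xs, ys = inverse_map(win_x, win_y)
    return int(xs) // unit, int(ys) // unit

identity = lambda x, y: (x, y)                 # placeholder mapping for illustration
pans = [(100.0, 80.0), (90.0, 95.0), (80.0, 110.0)]
print([first_block_for_window(x, y, identity) for x, y in pans])
```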
By applying deformation processing to the image data output from the CIS 101, the image processing apparatus 100 according to the embodiment of the present disclosure can output an image that reflects the operation information sent from the input unit 103.

In the examples above, the image output by the CIS 101 was an image formed through a special lens such as a fisheye lens, but the present disclosure is not limited to such examples. For example, the image processing apparatus 100 can also change the blocks to be read from the CIS 101 when cutting out and outputting part of an ordinary, low-distortion image while changing the location based on the operation of the input unit 103.

Next, application examples of the image processing apparatus 100 according to the embodiment of the present disclosure will be described. The image processing apparatus 100 can be applied to, for example, moving bodies such as automobiles and flying objects, and to body-mounted electronic devices (wearable devices).
FIG. 14 is an explanatory diagram showing an application example of the image processing apparatus 100 according to the embodiment of the present disclosure, in which the image processing apparatus 100 is applied to an automobile.

FIG. 14 shows imaging units 2910, 2912, 2914, 2916, and 2918 and vehicle-exterior information detection units 2920, 2922, 2924, 2926, 2928, and 2930.

The imaging units 2910, 2912, 2914, 2916, and 2918 are provided, for example, at least at one of the following positions of a vehicle 2900: the front nose, the side mirrors, the rear bumper, the back door, and the top of the windshield inside the cabin. The imaging unit 2910 on the front nose and the imaging unit 2918 at the top of the windshield mainly acquire images ahead of the vehicle 2900; the imaging units 2912 and 2914 on the side mirrors mainly acquire images of the sides of the vehicle; the imaging unit 2916 on the rear bumper or back door mainly acquires images behind the vehicle; and the imaging unit 2918 at the top of the windshield is also mainly used to detect preceding vehicles, pedestrians, obstacles, traffic lights, traffic signs, lanes, and the like.

FIG. 14 also shows an example of the imaging ranges of the imaging units 2910, 2912, 2914, and 2916. Imaging range a is that of the imaging unit 2910 on the front nose; imaging ranges b and c are those of the imaging units 2912 and 2914 on the side mirrors; and imaging range d is that of the imaging unit 2916 on the rear bumper or back door. For example, by superimposing the image data captured by the imaging units 2910, 2912, 2914, and 2916, an overhead image of the vehicle 2900 viewed from above can be obtained.
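A naive sketch of such a composition (the 3x3 ground-plane homographies are assumed to come from calibration; blending, lens undistortion, and division by zero at the horizon are ignored here) might look like this:

```python
import numpy as np

def compose_overhead(images, homographies, out_shape):
    """Inverse-warp each camera image onto a common top-view canvas
    using a 3x3 homography per camera, overwriting where ranges overlap."""
    canvas = np.zeros(out_shape, dtype=np.float32)
    ys, xs = np.mgrid[0:out_shape[0], 0:out_shape[1]]
    pts = np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1).astype(np.float64)
    for img, H in zip(images, homographies):
        src = np.linalg.inv(H) @ pts                 # canvas pixel -> camera pixel
        sx = (src[0] / src[2]).round().astype(int)
        sy = (src[1] / src[2]).round().astype(int)
        ok = (0 <= sx) & (sx < img.shape[1]) & (0 <= sy) & (sy < img.shape[0])
        canvas.reshape(-1)[ok] = img[sy[ok], sx[ok]]
    return canvas
```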
The vehicle-exterior information detection units 2920, 2922, 2924, 2926, 2928, and 2930 provided at the front, rear, sides, and corners of the vehicle 2900 and at the top of the windshield inside the cabin may be, for example, ultrasonic sensors or radar devices. The detection units 2920, 2926, and 2930 provided on the front nose, the rear bumper, the back door, and the top of the windshield may be, for example, LIDAR devices. These detection units 2920 to 2930 are mainly used to detect preceding vehicles, pedestrians, obstacles, and the like.

By providing each of the imaging units 2910, 2912, 2914, 2916, and 2918 in FIG. 14 with a CIS 101 capable of reading image data from arbitrary blocks, and equipping the vehicle 2900 with the image processing unit 102 described above, images of the vehicle's surroundings can be presented to the driver of the vehicle 2900 with low delay. Moreover, since the SRAM 111 only needs the capacity to store less than one frame of image data, this contributes to reducing the hardware cost of presenting the surrounding images to the driver.

Of course, the devices to which the image processing apparatus 100 according to the embodiment of the present disclosure can be applied are not limited to automobiles. For example, it can also be applied to body-mounted electronic devices (wearable devices) intended to be worn on the human body.

For example, by providing a wearable device with a CIS 101 and equipping the device with the image processing unit 102 described above, image blur caused by the movement of the body, as well as distortion, can be corrected with low delay, and the corrected image can be presented to an observer of the images captured by the wearable device. Since the SRAM 111 only needs the capacity to store less than one frame of image data, this also contributes to reducing the hardware cost of presenting images captured by the wearable device.

The image processing apparatus 100 according to the embodiment of the present disclosure can also be applied, for example, to a camera mounted on an autonomously flying object (drone) to capture video from the air.

For example, by providing the camera mounted on an autonomously flying object with a CIS 101 and equipping the flying object with the image processing unit 102 described above, image blur and distortion caused by the movement of the flying object can be corrected with low delay, and the corrected image can be presented to an observer of the images captured by the flying object. Since the SRAM 111 only needs the capacity to store less than one frame of image data, this also contributes to reducing the hardware cost of presenting images captured by the autonomously flying object.
<2. Summary>

As described above, according to the embodiment of the present disclosure, an image processing apparatus 100 is provided that can deform an image captured through a lens and output the deformed image without storing image data in frame units. For example, when deforming an image captured through a special lens such as a fisheye lens, the image can be deformed and the deformed image output without a frame's worth of image data ever being stored.
The image processing apparatus 100 according to the embodiment of the present disclosure reads data from the CIS 101, which can output image data in units of blocks, each consisting of a pixel or a plurality of adjacent pixels, and in an arbitrary order. When reading data from the CIS 101, the image processing apparatus 100 reads only as much data as is required for the image processing, for example for the distortion correction processing of each line.

Accordingly, since the image processing apparatus 100 does not need to read a full frame of data from the CIS 101 when performing image processing, it can perform image processing with little delay. Furthermore, the capacity of the SRAM 111, which temporarily stores image data before the image processing, can be kept below one frame of image data, which makes it possible to reduce hardware costs.

The steps of the processing executed by each apparatus in this specification do not necessarily have to be processed chronologically in the order described in the sequence diagrams or flowcharts. For example, the steps of the processing executed by each apparatus may be processed in an order different from that described in the flowcharts, or may be processed in parallel.

It is also possible to create a computer program that causes hardware such as the CPU, ROM, and RAM built into each apparatus to exhibit functions equivalent to the configuration of each apparatus described above, and a storage medium storing that computer program can also be provided. In addition, by implementing each of the functional blocks shown in the functional block diagrams as hardware or hardware circuits, the series of processes can also be realized by hardware or hardware circuits.

Some or all of the functional blocks shown in the functional block diagrams used in the above description may be realized by a server apparatus connected via a network such as the Internet. The configuration of each functional block may be realized by a single apparatus or by a system in which a plurality of apparatuses cooperate. A system in which a plurality of apparatuses cooperate can include, for example, a combination of a plurality of server apparatuses, or a combination of a server apparatus and a terminal apparatus.

The preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings, but the technical scope of the present disclosure is not limited to these examples. It is obvious that a person having ordinary knowledge in the technical field of the present disclosure can conceive of various changes and modifications within the scope of the technical ideas described in the claims, and it is understood that these naturally belong to the technical scope of the present disclosure.

The effects described in this specification are merely explanatory or illustrative, and are not limiting. That is, the technology according to the present disclosure can exhibit other effects that are apparent to those skilled in the art from the description of this specification, in addition to or instead of the above effects.
The following configurations also belong to the technical scope of the present disclosure.
(1)
An image processing apparatus including a control unit that determines a data readout order for an imaging element capable of arbitrarily reading out data in units of pixels or pixel groups each consisting of a plurality of adjacent pixels, and reads data from the imaging element based on the determined readout order.
(2)
The image processing apparatus according to (1), wherein the control unit performs image processing on an image formed on the imaging element, using the data read from the imaging element based on the determined readout order.
(3)
The image processing apparatus according to (2), further including a storage unit that stores the data read by the control unit.
(4)
The image processing apparatus according to (3), wherein the amount of data stored in the storage unit is less than one frame, as required per unit of the image processing in the control unit.
(5)
The image processing apparatus according to any one of (2) to (4), wherein the control unit performs deformation processing on the image formed on the imaging element using the data.
(6)
The image processing apparatus according to (5), wherein the control unit performs, as the deformation processing, processing to remove distortion of the image.
(7)
The image processing apparatus according to (6), wherein the control unit performs processing to remove distortion of the image formed on the imaging element through a lens that is not of the central projection type.
(8)
The image processing apparatus according to (7), wherein the control unit performs processing to remove distortion of the image formed on the imaging element through a fisheye lens as the lens that is not of the central projection type.
(9)
The image processing apparatus according to any one of (5) to (8), wherein the control unit performs rotation of the image as the deformation processing.
(10)
The image processing apparatus according to any one of (5) to (9), wherein the control unit performs, as the deformation processing, processing to change the enlargement ratio of the image.
(11)
The image processing apparatus according to any one of (2) to (10), wherein the control unit determines the readout order so that reading starts from the data at the position corresponding to the upper left of the image after the image processing.
(12)
The image processing apparatus according to any one of (1) to (11), wherein the control unit changes the readout order according to operation information from the outside.
(13)
A moving body including the image processing apparatus according to any one of (1) to (12).
(14)
The moving body according to (13), wherein the moving body is an automobile.
(15)
The moving body according to (13), wherein the moving body is an autonomously flying object.
(16)
A body-mounted electronic device including the image processing apparatus according to any one of (1) to (12).
(17)
An image processing method including determining a data readout order for an imaging element capable of arbitrarily reading out data in units of pixels or pixel groups each consisting of a plurality of adjacent pixels, and reading data from the imaging element based on the determined readout order.
(18)
A computer program causing a computer to determine a data readout order for an imaging element capable of arbitrarily reading out data in units of pixels or pixel groups each consisting of a plurality of adjacent pixels, and to read data from the imaging element based on the determined readout order.
100: Image processing apparatus
102: Image processing unit
103: Input unit
111: SRAM
112: Image deformation unit
113: SRAM
114: Coordinate generation unit
120: First chip
121: Pixel array unit
122a: Pad unit
122b: Pad unit
123: Via
130: Second chip
131: Signal processing unit
132a: Peripheral circuit
132b: Peripheral circuit
133: Output circuit
134a: Column decoder
134b: Column decoder

Claims (18)

1. An image processing apparatus comprising a control unit that determines a data readout order for an imaging element capable of arbitrarily reading out data in units of pixels or pixel groups each consisting of a plurality of adjacent pixels, and reads data from the imaging element based on the determined readout order.
2. The image processing apparatus according to claim 1, wherein the control unit performs image processing on an image formed on the imaging element, using the data read from the imaging element based on the determined readout order.
3. The image processing apparatus according to claim 2, further comprising a storage unit that stores the data read by the control unit.
4. The image processing apparatus according to claim 3, wherein the amount of data stored in the storage unit is less than one frame, as required per unit of the image processing in the control unit.
5. The image processing apparatus according to claim 2, wherein the control unit performs deformation processing on the image formed on the imaging element using the data.
6. The image processing apparatus according to claim 5, wherein the control unit performs, as the deformation processing, processing to remove distortion of the image.
7. The image processing apparatus according to claim 6, wherein the control unit performs processing to remove distortion of the image formed on the imaging element through a lens that is not of the central projection type.
8. The image processing apparatus according to claim 7, wherein the control unit performs processing to remove distortion of the image formed on the imaging element through a fisheye lens as the lens that is not of the central projection type.
9. The image processing apparatus according to claim 5, wherein the control unit performs rotation of the image as the deformation processing.
10. The image processing apparatus according to claim 5, wherein the control unit performs, as the deformation processing, processing to change the enlargement ratio of the image.
11. The image processing apparatus according to claim 2, wherein the control unit determines the readout order so that reading starts from the data at the position corresponding to the upper left of the image after the image processing.
12. The image processing apparatus according to claim 1, wherein the control unit changes the readout order according to operation information from the outside.
13. A moving body comprising the image processing apparatus according to claim 1.
14. The moving body according to claim 13, wherein the moving body is an automobile.
15. The moving body according to claim 13, wherein the moving body is an autonomously flying object.
16. A body-mounted electronic device comprising the image processing apparatus according to claim 1.
17. An image processing method comprising determining a data readout order for an imaging element capable of arbitrarily reading out data in units of pixels or pixel groups each consisting of a plurality of adjacent pixels, and reading data from the imaging element based on the determined readout order.
18. A computer program causing a computer to determine a data readout order for an imaging element capable of arbitrarily reading out data in units of pixels or pixel groups each consisting of a plurality of adjacent pixels, and to read data from the imaging element based on the determined readout order.
PCT/JP2016/078678 2015-11-20 2016-09-28 Image processing device, image processing method, moving body, wearable electronic device, and computer program WO2017086029A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015-227330 2015-11-20
JP2015227330 2015-11-20

Publications (1)

Publication Number Publication Date
WO2017086029A1 (en) 2017-05-26

Family

ID=58718681

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/078678 WO2017086029A1 (en) 2015-11-20 2016-09-28 Image processing device, image processing method, moving body, wearable electronic device, and computer program

Country Status (1)

Country Link
WO (1) WO2017086029A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019145185A (en) * 2018-02-20 2019-08-29 キヤノン株式会社 Imaging apparatus and inspection method thereof, and imaging system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007108081A1 (en) * 2006-03-20 2007-09-27 Fujitsu Limited Photography instrument, photography method and program, table making device and method of photography instrument, video processor and processing method
JP2012226645A (en) * 2011-04-21 2012-11-15 Sony Corp Image processing apparatus, image processing method, recording medium, and program
JP2013084124A (en) * 2011-10-11 2013-05-09 Panasonic Corp Imaging system, imaging device, and image processing method


Legal Events

121: Ep - the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 16866031; Country of ref document: EP; Kind code of ref document: A1)
NENP: Non-entry into the national phase (Ref country code: DE)
122: Ep - PCT application non-entry in European phase (Ref document number: 16866031; Country of ref document: EP; Kind code of ref document: A1)
NENP: Non-entry into the national phase (Ref country code: JP)