WO2012029658A1 - Imaging device, image-processing device, image-processing method, and image-processing program - Google Patents


Info

Publication number: WO2012029658A1
Application number: PCT/JP2011/069316
Authority: WIPO (PCT)
Prior art keywords: image, coordinates, subject, distance, distance information
Other languages: French (fr), Japanese (ja)
Inventors: 寿之 猪子 (Toshiyuki Inoko), 康毅 斎藤
Original assignee / applicant: チームラボ株式会社 (teamLab Inc.)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformation in the plane of the image
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20: Image signal generators
    • H04N 13/204: Image signal generators using stereoscopic image cameras
    • H04N 13/25: Image signal generators using stereoscopic image cameras using two or more image sensors with different characteristics other than in their location or field of view, e.g. having different resolutions or colour pickup characteristics; using image signals from one sensor to control the characteristics of another sensor

Definitions

  • The present invention relates to an imaging device, an image-processing device, an image-processing method, and an image-processing program, and more particularly to a process for separating a subject from the background in an image generated by imaging.
  • One way to use an image obtained by photographing a subject is to cut the subject out along its contour so that it can be composited with a different background. Such subject clipping may be performed manually by an operator, or it may be realized by image processing applied to image information digitized by photoelectric conversion.
  • As one image-processing approach, a method has been proposed in which the operator designates a region of the image containing the contour, and the contour line of the subject is detected by comparing the image density within the designated region against a separately specified reference density (see, for example, Patent Document 1).
  • To reduce the influence of the background image and achieve more accurate clipping, another proposed method classifies groups of pixels with similar feature values within the designated region into clusters, and detects the contour line of the subject by sorting those clusters into the inside and the outside of the contour (see, for example, Patent Document 2).
  • Separately, a method has been proposed for detecting the background portion and the subject portion of an image based on the image information of two images of the same scene captured under different focus conditions (see, for example, Patent Document 3).
  • Patent Document 1: JP 63-5745 A; Patent Document 2: JP 9-83776 A; Patent Document 3: JP 10-233919 A
  • In the techniques disclosed in Patent Documents 1 and 2, the user must designate one or two regions. A user who understands the characteristics of image clipping can specify an appropriate region, but it is difficult for a general user to understand those characteristics well enough to specify a region. Moreover, in all of Patent Documents 1 to 3, the determination is made from the digitized image information, so depending on its state, such as the relationship between the subject color and the background color, the clipping process may not be carried out properly.
  • The present invention has been made in view of the above circumstances, and its purpose is to enable a more accurate separation of the subject from the background in image information obtained by imaging the subject, regardless of the user's skill level.
  • To solve the above problem, one aspect of the present invention is an imaging device comprising: an image capturing unit that generates, by imaging, a subject image in which a subject and a background are displayed; a distance information generation unit that measures the distance to the object displayed at each part of an image covering the visual range containing the subject and the background, and generates distance information in which coordinates on that image are associated with distances; a coordinate conversion unit that converts the coordinates of the acquired distance information into coordinates on the subject image to generate converted distance information; a coordinate extraction unit that extracts, from the converted coordinates included in the generated converted distance information, the coordinates whose associated distance satisfies a predetermined condition; and an image separation unit that separates the region of the subject image specified by the extracted coordinates from the other regions and outputs the result.
  • Another aspect of the present invention is an image processing apparatus comprising: a distance information acquisition unit that acquires distance information in which the distance to the object displayed at each part of an image covering the visual range containing a subject and a background is associated with coordinates on that image; a subject image acquisition unit that acquires a subject image in which the subject and the background are displayed; a coordinate conversion unit that converts the coordinates of the acquired distance information into coordinates on the subject image to generate converted distance information; a coordinate extraction unit that extracts, from the converted coordinates included in the generated converted distance information, the coordinates whose associated distance satisfies a predetermined condition; and an image separation unit that separates the region of the acquired subject image specified by the extracted coordinates from the other regions.
  • Still another aspect of the present invention is an image processing method in which: distance information, in which the distance to the object displayed at each part of an image covering the visual range containing a subject and a background is associated with coordinates on that image, is acquired and stored in a storage medium; a subject image in which the subject and the background are displayed is acquired and stored in the storage medium; the coordinates of the stored distance information are converted into coordinates on the subject image to generate converted distance information, which is stored in the storage medium; and, from the converted coordinates included in the generated converted distance information, the coordinates whose associated distance satisfies a predetermined condition are extracted, and the region of the acquired subject image specified by the extracted coordinates is separated from the other regions and stored in the storage medium.
  • Still another aspect of the present invention is an image processing program that causes an information processing device to execute the steps of: acquiring distance information, in which the distance to the object displayed at each part of an image covering the visual range containing a subject and a background is associated with coordinates on that image, and storing it in a storage medium; acquiring a subject image in which the subject and the background are displayed, and storing it in the storage medium; converting the coordinates of the stored distance information into coordinates on the subject image to generate converted distance information, and storing it in the storage medium; and extracting, from the converted coordinates included in the generated converted distance information, the coordinates whose associated distance satisfies a predetermined condition, and separating the region of the acquired subject image specified by the extracted coordinates from the other regions and storing the result in the storage medium.
  • According to the present invention, in separating the subject from the background in image information obtained by imaging the subject, a more accurate separation process becomes possible regardless of the user's skill level.
  • The embodiment described below is an imaging device that includes an image camera, which captures the image itself, and a distance camera, which captures a color-reduced (for example, grayscale) image and acquires the distance to the objects, such as the subject and the background, displayed at each position on that image (hereinafter, distance information), and that automatically executes the process of cutting out the contour of the subject displayed in the image captured by the image camera.
  • FIG. 1 is a block diagram showing a hardware configuration of the imaging apparatus 1 according to the present embodiment.
  • As shown in FIG. 1, the imaging apparatus 1 according to the present embodiment includes the above-described distance camera and image camera in addition to the same configuration as a general information processing terminal such as a server or a PC (Personal Computer). That is, in the imaging apparatus 1, a CPU (Central Processing Unit) 10, a RAM (Random Access Memory) 11, a ROM (Read Only Memory) 12, an HDD (Hard Disk Drive) 13, and an I/F 14 are connected via a bus 19. An image camera 15, a distance camera 16, an LCD (Liquid Crystal Display) 17, and an operation unit 18 are further connected to the I/F 14.
  • The CPU 10 is an arithmetic means that controls the operation of the entire imaging apparatus 1.
  • The RAM 11 is a volatile storage medium capable of reading and writing information at high speed, and is used as a work area when the CPU 10 processes information.
  • The ROM 12 is a read-only nonvolatile storage medium that stores programs such as firmware.
  • The HDD 13 is a nonvolatile storage medium that can read and write information, and stores the OS (Operating System), various control programs, application programs, and the like.
  • The I/F 14 connects the bus 19 to various hardware and networks and controls them.
  • The image camera 15 is an image capturing unit that includes a photoelectric conversion element and converts received light into electronic information to generate image information. The distance camera 16, like the image camera 15, generates a grayscale image by photoelectric conversion and additionally serves as a distance information generation unit: it measures the distance to each object based on the time it takes projected light to be reflected by the object and return, thereby generating information on the distance to the object displayed at each position on the grayscale image. As the distance camera 16, for example, the three-dimensional image distance camera “ZC-1000” series manufactured by Optex Co., Ltd. can be used.
  • The LCD 17 is a visual user interface that allows the user to check the state of the imaging apparatus 1.
  • The operation unit 18 is a user interface, such as a keyboard and mouse, with which the user inputs information to the imaging apparatus 1.
  • In this hardware configuration, a program stored in a recording medium such as the ROM 12, the HDD 13, or an optical disk (not shown) is read into the RAM 11, and the CPU 10 performs operations according to that program, thereby constituting a software control unit.
  • The functions of the imaging apparatus 1 according to the present embodiment are realized by functional blocks formed by the combination of the software control unit configured in this way and the hardware.
  • FIG. 2 is a block diagram illustrating a functional configuration of the imaging apparatus 1 according to the present embodiment.
  • As shown in FIG. 2, the imaging apparatus 1 according to the present embodiment includes functions realized by the image processing unit 100 and functions realized by the display control unit 110. As described above, the image processing unit 100 and the display control unit 110 function through the cooperation of the hardware and the software control unit realized by the CPU 10 performing calculations according to the program read into the RAM 11.
  • The image processing unit 100 executes image processing that cuts out the contour of the subject Q displayed in the image information generated by the image camera 15, based on the distance information acquired by the distance camera 16. As shown in FIG. 2, the image processing unit 100 includes a coordinate conversion unit 101 and an image clipping unit 102.
  • As shown in FIG. 3, the distance information includes the horizontal coordinate “u” (pixels) and the vertical coordinate “v” (pixels) on the image generated by the distance camera 16, together with the distance “Z” (mm) from the light-receiving surface of the distance camera 16 to the subject or background displayed at the image position specified by “u” and “v”.
  • In other words, the distance information associates each coordinate on the grayscale image captured by the distance camera 16 with the distance to the object displayed at that coordinate. With this information, the distances in real space of the subject Q and the background displayed in the image captured and generated by the distance camera 16 can be recognized.
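  • For illustration, the distance information can be held as a simple table of (u, v, Z) records. The sketch below is a minimal Python representation with hypothetical field names and example values; it is not taken from the patent itself.

```python
import numpy as np

# Hypothetical container for the distance information of FIG. 3:
# one record per pixel of the distance camera's grayscale image.
# u and v are pixel coordinates; z_mm is the measured distance in mm.
distance_info = np.array(
    [(0, 0, 2300.0), (1, 0, 2295.0), (2, 0, 1104.0)],  # example values
    dtype=[("u", "i4"), ("v", "i4"), ("z_mm", "f8")],
)

# Distance associated with the coordinate (u, v) = (2, 0):
mask = (distance_info["u"] == 2) & (distance_info["v"] == 0)
print(distance_info["z_mm"][mask])  # -> [1104.]
```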
  • The coordinate conversion unit 101 converts the coordinate system of the distance information acquired by the distance camera 16 from the coordinate system of the image captured by the distance camera 16 into the coordinate system of the image information captured by the image camera 15. As shown in FIG. 2, the coordinate conversion unit 101 includes the coordinate conversion functions “image plane/three-dimensional space”, “rotation/translation”, “three-dimensional space/image plane”, and “distortion correction”.
  • The image clipping unit 102 cuts out the contour of the subject Q by applying a predetermined threshold to the distance information converted by the coordinate conversion unit 101 and extracting, from the image captured by the image camera 15, the pixels whose distance from the camera is within a predetermined range.
  • The display control unit 110 causes the LCD 17 to display the subject image clipped by the image clipping unit 102.
  • Converting the coordinate system so that the distance information acquired by the distance camera 16 can be applied to the image captured by the image camera 15 is one of the key points of the present embodiment.
  • As a result, even when the resolution of the image captured by the distance camera 16 is low and an image of the desired quality level cannot be obtained from it, or when the distance camera 16 does not support full color, the image itself is captured by the image camera 15, so an image of the desired quality level can be obtained.
  • FIG. 4 shows, in a perspective projection model, the position of the subject and the coordinates of the captured image in a three-dimensional space whose origin is the light receiving unit 16a of the distance camera 16, with the optical axis of the distance camera 16 as the Z axis, the horizontal direction as the Y axis, and the vertical direction as the X axis. Note that “Z” in FIG. 3 corresponds to the value in the Z-axis direction in FIG. 4. As shown in FIG. 4, the image captured by the camera is the scenery that falls within a virtual frame (the bold broken line in FIG. 4) placed at the focal length f of the camera when looking along the optical axis from the camera.
  • The coordinates within this frame are the coordinates “u” and “v” on the image captured by the distance camera 16.
  • When the optical axis of the distance camera 16 coincides with the Z axis and the aspect ratio of the image is 1:1, a point p(u_i, v_i) on the image captured by the distance camera 16 can be expressed, using the corresponding point P(X_i, Y_i, Z_i) of the actual subject Q and the focal length f, by equation (1): u_i = f · X_i / Z_i, v_i = f · Y_i / Z_i.
  • Based on equation (1), the coordinate conversion unit 101 converts the coordinates p(u_i, v_i) on the image captured by the distance camera 16 into the coordinates P(X_i, Y_i, Z_i) in three-dimensional space by calculating equation (2): X_i = (u_i - c_1x) · Z_i / f_1x, Y_i = (v_i - c_1y) · Z_i / f_1y.
  • The 3-row, 3-column matrix in equation (2) containing “f_1x”, “f_1y”, “c_1x”, and “c_1y” is the internal parameter matrix of the distance camera 16, representing its focal length and optical-axis offset; in matrix form, Z_i · (u_i, v_i, 1)^T = K_1 · (X_i, Y_i, Z_i)^T with K_1 = [[f_1x, 0, c_1x], [0, f_1y, c_1y], [0, 0, 1]]. “f_1x” and “f_1y” are the focal lengths of the distance camera 16 in the horizontal and vertical directions, and are identical when the aspect ratio is 1:1 as described above. “c_1x” and “c_1y” are the horizontal and vertical offsets of the optical axis of the distance camera 16.
  • The internal parameters of the distance camera 16 can be obtained, as in Zhang's method, by photographing a checkerboard from various angles with the focal length of the distance camera 16 fixed and computing the positions of the grid points of the photographed board. The coordinate conversion unit 101 stores the internal parameters obtained in this way and, by evaluating equation (2) with them, converts the coordinates (u_i, v_i) on the image captured by the distance camera 16 into the coordinates (X_i, Y_i, Z_i) in three-dimensional space.
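  • A minimal sketch of this back-projection, assuming the pinhole relation of equation (2); the internal parameter values are placeholders, not values from the patent.

```python
import numpy as np

# Internal parameters of the distance camera (placeholder values;
# obtained in practice by checkerboard calibration, e.g. Zhang's method).
f1x, f1y = 580.0, 580.0   # focal lengths in pixels
c1x, c1y = 320.0, 240.0   # optical-axis offset (principal point)

def image_to_3d(u, v, z_mm):
    """Back-project a distance-camera pixel (u, v) with measured
    distance z_mm into the camera's 3D coordinate system (equation (2))."""
    x = (u - c1x) * z_mm / f1x
    y = (v - c1y) * z_mm / f1y
    return np.array([x, y, z_mm])

print(image_to_3d(400.0, 300.0, 1104.0))
```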
  • The “rotation/translation” coordinate conversion function converts the coordinate system of the three-dimensional space of the distance camera 16 into the coordinate system of the three-dimensional space of the image camera 15. For this conversion, the coordinate conversion unit 101 uses the external parameter “R|t”, consisting of a 3-row, 3-column rotation matrix “R” and a 3-row, 1-column translation vector “t”. The coordinate conversion unit 101 also performs the “three-dimensional space/image plane” coordinate conversion at the same time as the “rotation/translation” conversion.
  • The “three-dimensional space/image plane” coordinate conversion is the reverse of the “image plane/three-dimensional space” conversion realized by equation (2): it converts coordinates in three-dimensional space into coordinates on an image. In this embodiment, since the purpose is to convert coordinates on the image captured by the distance camera 16 into coordinates on the image captured by the image camera 15, the coordinates transferred into the image camera's three-dimensional coordinate system by the “rotation/translation” conversion are converted into coordinates on the image captured by the image camera 15 using the internal parameters of the image camera 15. This conversion is realized by equation (3): Z'_i · (u'_i, v'_i, 1)^T = K_2 · [R|t] · (X_i, Y_i, Z_i, 1)^T.
  • The 3-row, 3-column matrix in equation (3) containing “f_2x”, “f_2y”, “c_2x”, and “c_2y” is the internal parameter matrix K_2 of the image camera 15, representing its focal length and optical-axis offset. “f_2x” and “f_2y” are the focal lengths of the image camera 15 in the horizontal and vertical directions, and are identical when the aspect ratio is 1:1. “c_2x” and “c_2y” are the horizontal and vertical offsets of the optical axis of the image camera 15. These internal parameters can be obtained, for example, by Zhang's method, in the same manner as those of the distance camera 16.
  • The matrix in equation (3) containing “r_11” through “r_33” and “t_1” through “t_3” is the above-described external parameter “R|t”, which can likewise be obtained by Zhang's method. “R|t” is the parameter that converts the coordinate system of the distance camera 16 into the coordinate system of the image camera 15; it can therefore be determined as the transformation by which the positions of lattice points obtained with one camera are converted into the positions of the same lattice points obtained with the other.
  • The coordinate conversion unit 101 stores the internal parameters and the external parameter “R|t” obtained in this way and uses them to convert the distance information, acquired as coordinates on the image captured by the distance camera 16, into coordinates on the image captured by the image camera 15.
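  • Under the same assumptions as the previous sketch (pinhole intrinsics for both cameras and a calibrated “R|t”), the full chain of equation (3), from a distance-camera pixel to image-camera pixel coordinates, can be sketched as follows; all parameter values are placeholders.

```python
import numpy as np

# Placeholder calibration results.
f1x, f1y, c1x, c1y = 580.0, 580.0, 320.0, 240.0   # distance camera
K2 = np.array([[1200.0, 0.0, 960.0],
               [0.0, 1200.0, 540.0],
               [0.0, 0.0, 1.0]])                   # image camera
R = np.eye(3)                                      # rotation matrix (3x3)
t = np.array([50.0, 0.0, 0.0])                     # translation (mm)

def distance_pixel_to_image_pixel(u, v, z_mm):
    """Equation (3): back-project a distance-camera pixel into 3D,
    move it into the image camera's frame with [R|t], and project it
    with the image camera's internal parameters."""
    P1 = np.array([(u - c1x) * z_mm / f1x,
                   (v - c1y) * z_mm / f1y,
                   z_mm])
    uvw = K2 @ (R @ P1 + t)
    return uvw[:2] / uvw[2]

print(distance_pixel_to_image_pixel(400.0, 300.0, 1104.0))
```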
  • The coordinate conversion unit 101 inputs the distance information thus mapped onto the image captured by the image camera 15 (hereinafter, converted distance information) to the image clipping unit 102.
  • FIG. 6(a) shows the coordinate points whose Z-axis distances are specified by the converted distance information, superimposed on the image containing the subject captured by the image camera 15 (hereinafter, the subject image). Since the resolution at which the distance camera 16 acquires the Z-axis distances is lower than the resolution of the image generated by the image camera 15, the coordinates of the converted distance information appear as discrete points when superimposed on the subject image, as shown in FIG. 6(a).
  • The image clipping unit 102 applies a threshold to the Z-axis distance in the converted distance information and extracts only the points at which the target is within a predetermined distance from the camera; in this respect, the image clipping unit 102 functions as a coordinate extraction unit.
  • FIG. 6(b) shows the points extracted in this way superimposed on the subject image: the points that overlap the subject are extracted. These points are hereinafter referred to as extraction points.
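  • The thresholding itself is a one-line filter. The sketch below assumes the converted distance information is held in the same record layout as the earlier distance_info example, with coordinates already on the subject image, and an assumed cutoff value.

```python
import numpy as np

# Converted distance information (placeholder records): coordinates on
# the subject image, each with its associated distance in mm.
converted = np.array(
    [(410, 305, 1104.0), (412, 305, 1101.0), (900, 40, 4210.0)],
    dtype=[("u", "i4"), ("v", "i4"), ("z_mm", "f8")],
)

# Extraction points of FIG. 6(b): targets within the assumed threshold.
z_threshold_mm = 1500.0
extraction_points = converted[converted["z_mm"] < z_threshold_mm]
```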
  • Next, the image clipping unit 102 extracts the subject by erasing from the subject image the portions that do not overlap the extraction points. However, as noted above, the points of the converted distance information are discrete in the subject image, so the extraction points cannot be applied as they are. The image clipping unit 102 therefore treats each discrete point as a white pixel and the remaining region as black pixels, and repeats an image dilation process until the discrete points are connected into a single region.
  • The image dilation process replaces a target pixel with a white pixel if at least one white pixel exists around it. The image clipping unit 102 repeats this dilation until points adjacent vertically, horizontally, and diagonally are connected.
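  • A sketch of this step using OpenCV morphology, continuing from the thresholding sketch above; OpenCV and the iteration count are assumed implementation choices, since the patent describes the operation only generically.

```python
import cv2
import numpy as np

h, w = 1080, 1920  # assumed size of the subject image
mask = np.zeros((h, w), dtype=np.uint8)
mask[extraction_points["v"], extraction_points["u"]] = 255  # white points

# A 3x3 kernel includes the diagonal neighbours, so each pass connects
# points adjacent vertically, horizontally, and diagonally.
kernel = np.ones((3, 3), dtype=np.uint8)
n_dilations = 8  # assumed; depends on the spacing of the points
dilated = cv2.dilate(mask, kernel, iterations=n_dilations)
```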
  • FIG. 7(a) shows the extraction points after the dilation process, with adjacent points connected.
  • In addition to repeating the dilation, the image clipping unit 102 smooths the contour roughened by the dilation process.
  • The image clipping unit 102 also performs a labeling process and a noise-cut process, leaving only the widest region, or the regions at or above a predetermined threshold, and discarding the remaining regions as noise.
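  • One way to realize the labeling and noise cut, again assuming OpenCV and keeping only the largest connected region; the sketch continues from the dilation step above.

```python
# Label the connected regions of the dilated mask and keep only the
# largest one; all smaller regions are discarded as noise.
n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(dilated)
areas = stats[1:, cv2.CC_STAT_AREA]          # label 0 is the background
largest = 1 + int(np.argmax(areas))
extraction_region = np.where(labels == largest, 255, 0).astype(np.uint8)
```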
  • The image clipping unit 102 then deletes from the subject image the portions other than the region generated in this way (hereinafter, the extraction target region), thereby separating the subject from the background so that the subject can be extracted, as shown in FIG. 7(b). Here, as a result of the dilation process, the extraction target region is wider than the actual contour of the subject; in FIG. 7(b), the portion of the extraction target region protruding beyond the actual subject is shown in black.
  • The image clipping unit 102 erases this extra area outside the contour of the subject by applying, for example, a conventional edge detection process to the extracted image shown in FIG. 7(b). Since the image has already been cut out substantially along the contour of the subject, the density between the subject's contour and the contour of the cut-out image is nearly constant, so edge detection can be performed with higher accuracy than when detecting the subject's contour in the image captured and generated by the image camera 15, as in the prior art.
  • The image clipping unit 102 may also apply a contraction (erosion) process to the extraction target region after generating it as shown in FIG. 7(a) and before clipping the subject image.
  • The image erosion process replaces a target pixel with a black pixel if at least one black pixel exists around it. This shrinks the contour expanded by the dilation process and reduces the protrusion beyond the subject shown in FIG. 7(b).
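  • A corresponding erosion sketch under the same OpenCV assumption, continuing from the labeling step above; the iteration count is again an assumed value.

```python
# Pull the dilated contour back toward the subject before the final
# clipping by eroding the extraction target region.
n_erosions = 4  # assumed; typically fewer passes than the dilation
extraction_region = cv2.erode(extraction_region, kernel,
                              iterations=n_erosions)
```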
  • FIG. 8(a) illustrates the problem addressed by the “distortion correction” coordinate conversion function.
  • As shown in FIG. 8(a), the extraction points obtained by applying a threshold to the converted distance information may deviate from the subject in the subject image. This can be caused by radial and circumferential distortion of the camera lens. Therefore, when the coordinate conversion unit 101 converts the distance information acquired by the distance camera 16 to generate the converted distance information, it performs the conversion while correcting the distortion. In the present embodiment, the correction is performed on the assumption that the lens of the distance camera 16 is distorted.
  • To this end, the coordinate conversion unit 101 also performs the “distortion correction” process within the calculation of equation (3) described above.
  • The calculation of equation (3) above is equivalent to the following equations (4) to (6).
  • Equation (7) is the expression that corrects the distortion caused by the lens.
  • Here, as an example, the coefficients expanded up to the second order are considered, but coefficients of the third order or higher may also be considered.
  • These distortion coefficients can also be obtained by calibration. That is, based on the images of the multiple checkerboards captured when the internal parameters of the distance camera 16 described above were obtained, the positions of the grid points are fitted to the above equations, and the distortion coefficients “k_1”, “k_2”, “p_1”, and “p_2” can be determined.
  • The order up to which the above coefficients are considered is preferably determined according to the distance between the camera and the subject.
  • The shorter the distance between the camera and the subject, the greater the distortion; therefore, the shorter that distance, the more suitably the distortion can be corrected by calculating with higher-order coefficients.
  • The coordinate conversion unit 101 stores the distortion coefficients obtained in this way. Taking the three-dimensional coordinates (X_i, Y_i, Z_i) in the distance camera 16 as input, it corrects the lens distortion by using equations (4), (5), (7), and (8) in place of equation (3), and thereby obtains the coordinates on the image captured by the image camera 15. As shown in FIG. 8(b), this resolves the deviation between the extraction points and the subject.
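  • A sketch of this distortion-aware projection, assuming the standard radial/circumferential model that uses the named coefficients k_1, k_2, p_1, p_2; the equation references in the comments and all coefficient values are assumptions, not values from the patent.

```python
import numpy as np

# Placeholder calibration values.
f2x, f2y, c2x, c2y = 1200.0, 1200.0, 960.0, 540.0  # image camera
R = np.eye(3)                                      # rotation matrix
t = np.array([50.0, 0.0, 0.0])                     # translation (mm)
k1, k2 = -0.28, 0.07                               # radial coefficients
p1, p2 = 0.001, -0.0005                            # circumferential terms

def project_with_distortion(P1):
    """Project a 3D point from the distance camera's frame onto the
    image camera's image, inserting the distortion correction between
    the normalized projection and the intrinsic mapping."""
    x, y, z = R @ P1 + t               # rotation/translation, cf. eq. (4)
    xn, yn = x / z, y / z              # normalized image plane, cf. eq. (5)
    r2 = xn * xn + yn * yn
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = xn * radial + 2 * p1 * xn * yn + p2 * (r2 + 2 * xn * xn)
    yd = yn * radial + p1 * (r2 + 2 * yn * yn) + 2 * p2 * xn * yn
    return f2x * xd + c2x, f2y * yd + c2y  # cf. eq. (8)

print(project_with_distortion(np.array([150.0, 110.0, 1104.0])))
```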
  • As described above, the imaging apparatus 1 according to the present embodiment clips the portion of the subject image in which the subject is displayed based, in principle, on the distance information acquired by the distance camera 16, without using information about image density. Furthermore, the image processing unit 100 automatically executes the process from the given information, without requiring any operation from the user. Therefore, the image clipping process can be performed more accurately regardless of the user's skill level.
  • In the above description, the imaging apparatus 1 including both the image camera 15 and the distance camera 16 was taken as an example, but it is also possible to provide the image processing unit 100 alone, or a program that realizes it, as an image processing apparatus. In that case, the internal parameters of the first camera that captured the subject image, the internal parameters of the second camera that acquired the distance information, and the external parameters relating the two cameras must be acquired separately.
  • When the image camera 15 and the distance camera 16 are each equipped with a high-accuracy positioning system such as GPS (Global Positioning System), that information can also be used. Specifically, when the image camera 15 and the distance camera 16 acquire the subject image and the distance information, respectively, the position and orientation at the time of acquisition are simultaneously obtained by the mounted positioning system and input to the coordinate conversion unit 101.
  • Based on the input position and orientation information, the coordinate conversion unit 101 obtains the external parameters that convert the coordinate system of the three-dimensional space of the distance camera 16 into that of the image camera 15: the rotation matrix R can be obtained from the difference in orientation, and the translation vector t from the difference in position.
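  • A sketch of deriving “R|t” from two measured poses, assuming each pose is reported as a camera-to-world rotation matrix plus a world position; the pose values are placeholders.

```python
import numpy as np

# Poses reported by the positioning systems (placeholder values):
# R_d, p_d for the distance camera; R_c, p_c for the image camera.
R_d, p_d = np.eye(3), np.array([0.0, 0.0, 0.0])
R_c, p_c = np.eye(3), np.array([50.0, 0.0, 0.0])

# A point P in the distance camera's frame sits at R_d @ P + p_d in
# world coordinates; re-expressing it in the image camera's frame
# gives the external parameters directly.
R_ext = R_c.T @ R_d            # rotation from the orientation difference
t_ext = R_c.T @ (p_d - p_c)    # translation from the position difference
```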
  • The image clipping unit 102 may also function as an image separation unit that separates at least the region of the subject image specified by the extraction target region from the other regions and stores the separated regions in a storage medium; in this way, the background image can be obtained as well. As a result, the user can choose in subsequent operations whether or not to use the background image, which improves convenience for the user.
  • In the examples of FIGS. 6(a) and 6(b), the points specified by the coordinates of the converted distance information correspond to pixels on the subject image at intervals determined by the ratio between the resolution of the distance camera 16 and the resolution of the subject image. That is, because the coordinates of the converted distance information are superimposed on the higher-resolution subject image while keeping their original resolution, they appear in the discrete state shown in FIGS. 6(a) and 6(b).
  • In contrast, by treating each point specified by the coordinates of the converted distance information as a pixel and dividing each such pixel, the resolution of the converted distance information can be matched to the resolution of the subject image. Such an embodiment is described below.
  • FIG. 9(a) shows the distance information of FIG. 3 after conversion into converted distance information by the coordinate conversion unit 101: the coordinates specified as (u_1, v_1), (u_2, v_2), and so on in FIG. 3 have been converted into (u'_1, v'_1), (u'_2, v'_2), and so on.
  • FIG. 9(b) shows each converted coordinate of FIG. 9(a) treated as a pixel and divided into four. The point specified as (u'_1, v'_1) in FIG. 9(a) corresponds to the four points (u'_11, v'_11), (u'_12, v'_12), (u'_13, v'_13), and (u'_14, v'_14) shown in FIG. 9(b), that is, to the four pixels obtained by dividing the pixel specified by (u'_1, v'_1) at the original resolution in two both vertically and horizontally.
  • In this way, distance information of the same resolution as the subject image can be generated, covering every pixel of the subject image one-to-one instead of forming the discrete points shown in FIGS. 6(a) and 6(b). The distance “Z_1” from before the division is associated with each of the four points after the division.
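  • A sketch of this subdivision, assuming the converted distances are held in a two-dimensional depth map and the resolution ratio is exactly two in each direction; np.kron repeats each value into a 2x2 block, so every sub-pixel inherits the pre-division distance.

```python
import numpy as np

# Converted distances at the distance camera's resolution, already
# aligned with the subject image's coordinate grid (placeholder values).
depth_lowres = np.array([[2300.0, 2295.0],
                         [1104.0, 1101.0]])

# Divide each pixel in two vertically and horizontally; all four
# sub-pixels keep the original distance Z.
depth_fullres = np.kron(depth_lowres, np.ones((2, 2)))
```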
  • Thereafter, as in the embodiment described above, the image clipping unit 102 applies a threshold to the distance Z and extracts only the points at which the target is within the predetermined distance. The extraction result now substantially matches the contour of the subject, with adjacent dots filled in, so the extraction target region can be obtained suitably.
  • In the embodiment of FIGS. 9(a), 9(b), 10(a), and 10(b) as well, it is preferable to perform a process for smoothing the contour roughened by the pixel division, a labeling process for noise cutting, and the like.
  • Also in this embodiment, the extraction target region does not completely match the actual contour of the subject and may protrude beyond it, so, as in the embodiment above, a conventional edge detection process or the like may be performed to erase the extra area outside the subject's contour. In this case too, since the image has been cut out substantially along the contour of the subject, edge detection is possible with higher accuracy than when detecting the subject's contour in the image captured and generated by the image camera 15 as in the prior art.
  • 1 Imaging device, 10 CPU, 11 RAM, 12 ROM, 13 HDD, 14 I/F, 15 Image camera, 16 Distance camera, 17 LCD, 18 Operation unit, 19 Bus, 100 Image processing unit, 101 Coordinate conversion unit, 102 Image clipping unit, 110 Display control unit

Abstract

[Problem] To enable more accurate clipping, without relying on the user's skill, when a subject is cropped out of image information in which the subject has been captured. [Solution] The invention comprises: an image camera (15) that generates an image of the subject by imaging; a distance camera (16) that measures the distance to the object displayed at each part of an image covering the visual range containing the subject and a background, and generates distance information in which the coordinates on that image are associated with the distances; a coordinate conversion unit (101) that converts the coordinates of the distance information into coordinates on the subject image and generates converted distance information; and an image clipping unit (102) that extracts, from the converted coordinates included in the converted distance information, the coordinates whose associated distance satisfies a predetermined condition, separates the region specified by the extracted coordinates from the other regions of the subject image, and outputs the result.

Description

Imaging device, image processing device, image processing method, and image processing program
The present invention relates to an imaging device, an image processing device, an image processing method, and an image processing program, and more particularly to a process for separating a subject from the background in an image generated by imaging.
One way to use an image obtained by photographing a subject is to cut the subject out along its contour so that it can be composited with a different background. Such subject clipping may be performed manually by an operator, or it may be realized by image processing applied to image information digitized by photoelectric conversion.
As an image-processing approach, a method has been proposed in which the operator designates a region of the image containing the contour, and the contour line of the subject is detected by comparing the image density within the designated region against a separately specified reference density (see, for example, Patent Document 1).
In addition, to reduce the influence of the background image and achieve more accurate clipping, a method has been proposed in which groups of pixels with similar feature values within the designated region are classified into clusters, and the contour line of the subject is detected by sorting those clusters into the inside and the outside of the contour (see, for example, Patent Document 2).
On the other hand, a method has been proposed for detecting the background portion and the subject portion of an image based on the image information of two images of the same scene captured under different focus conditions (see, for example, Patent Document 3).
Patent Document 1: JP 63-5745 A; Patent Document 2: JP 9-83776 A; Patent Document 3: JP 10-233919 A
In the techniques disclosed in Patent Documents 1 and 2, the user must designate one or two regions. A user who understands the characteristics of image clipping can specify an appropriate region, but it is difficult for a general user to understand those characteristics well enough to specify a region.
Moreover, in all of Patent Documents 1 to 3, the determination is made from the digitized image information, so depending on its state, such as the relationship between the subject color and the background color, the subject clipping process may not be carried out properly.
The present invention has been made in view of the above circumstances, and its purpose is to enable a more accurate separation of the subject from the background in image information obtained by imaging the subject, regardless of the user's skill level.
To solve the above problem, one aspect of the present invention is an imaging device comprising: an image capturing unit that generates, by imaging, a subject image in which a subject and a background are displayed; a distance information generation unit that measures the distance to the object displayed at each part of an image covering the visual range containing the subject and the background, and generates distance information in which coordinates on that image are associated with distances; a coordinate conversion unit that converts the coordinates of the acquired distance information into coordinates on the subject image to generate converted distance information; a coordinate extraction unit that extracts, from the converted coordinates included in the generated converted distance information, the coordinates whose associated distance satisfies a predetermined condition; and an image separation unit that separates the region of the subject image specified by the extracted coordinates from the other regions and outputs the result.
Another aspect of the present invention is an image processing apparatus comprising: a distance information acquisition unit that acquires distance information in which the distance to the object displayed at each part of an image covering the visual range containing a subject and a background is associated with coordinates on that image; a subject image acquisition unit that acquires a subject image in which the subject and the background are displayed; a coordinate conversion unit that converts the coordinates of the acquired distance information into coordinates on the subject image to generate converted distance information; a coordinate extraction unit that extracts, from the converted coordinates included in the generated converted distance information, the coordinates whose associated distance satisfies a predetermined condition; and an image separation unit that separates the region of the acquired subject image specified by the extracted coordinates from the other regions.
Still another aspect of the present invention is an image processing method in which: distance information, in which the distance to the object displayed at each part of an image covering the visual range containing a subject and a background is associated with coordinates on that image, is acquired and stored in a storage medium; a subject image in which the subject and the background are displayed is acquired and stored in the storage medium; the coordinates of the stored distance information are converted into coordinates on the subject image to generate converted distance information, which is stored in the storage medium; and, from the converted coordinates included in the generated converted distance information, the coordinates whose associated distance satisfies a predetermined condition are extracted, and the region of the acquired subject image specified by the extracted coordinates is separated from the other regions and stored in the storage medium.
Still another aspect of the present invention is an image processing program that causes an information processing device to execute the steps of: acquiring distance information, in which the distance to the object displayed at each part of an image covering the visual range containing a subject and a background is associated with coordinates on that image, and storing it in a storage medium; acquiring a subject image in which the subject and the background are displayed, and storing it in the storage medium; converting the coordinates of the stored distance information into coordinates on the subject image to generate converted distance information, and storing it in the storage medium; and extracting, from the converted coordinates included in the generated converted distance information, the coordinates whose associated distance satisfies a predetermined condition, and separating the region of the acquired subject image specified by the extracted coordinates from the other regions and storing the result in the storage medium.
According to the present invention, in separating the subject from the background in image information obtained by imaging the subject, a more accurate separation process becomes possible regardless of the user's skill level.
Brief description of the drawings:
FIG. 1 is a block diagram showing the hardware configuration of the imaging device according to an embodiment of the present invention.
FIG. 2 shows the functional configuration of the imaging device according to the embodiment.
FIG. 3 shows an example of the distance information acquired by the distance camera according to the embodiment.
FIG. 4 illustrates the principle of the image-plane/three-dimensional-space coordinate conversion function according to the embodiment.
FIG. 5 illustrates the principle of the rotation/translation coordinate conversion function according to the embodiment.
FIG. 6 shows the distance information superimposed on the subject image according to the embodiment.
FIG. 7 shows the extraction target region according to the embodiment.
FIG. 8 shows how lens distortion is corrected according to the embodiment.
FIG. 9 shows an example of converted distance information according to another embodiment.
FIG. 10 shows how the points specified by the converted distance information are divided according to another embodiment.
Hereinafter, an embodiment of the present invention will be described in detail with reference to the drawings. The embodiment described here is an imaging device that includes an image camera, which captures the image itself, and a distance camera, which captures a color-reduced (for example, grayscale) image and acquires the distance to the objects, such as the subject and the background, displayed at each position on that image (hereinafter, distance information), and that automatically executes the process of cutting out the contour of the subject displayed in the image captured by the image camera.
FIG. 1 is a block diagram showing the hardware configuration of the imaging apparatus 1 according to the present embodiment. As shown in FIG. 1, the imaging apparatus 1 includes the above-described distance camera and image camera in addition to the same configuration as a general information processing terminal such as a server or a PC (Personal Computer). That is, in the imaging apparatus 1, a CPU (Central Processing Unit) 10, a RAM (Random Access Memory) 11, a ROM (Read Only Memory) 12, an HDD (Hard Disk Drive) 13, and an I/F 14 are connected via a bus 19. An image camera 15, a distance camera 16, an LCD (Liquid Crystal Display) 17, and an operation unit 18 are further connected to the I/F 14.
The CPU 10 is an arithmetic means that controls the operation of the entire imaging apparatus 1. The RAM 11 is a volatile storage medium capable of reading and writing information at high speed, and is used as a work area when the CPU 10 processes information. The ROM 12 is a read-only nonvolatile storage medium that stores programs such as firmware. The HDD 13 is a nonvolatile storage medium that can read and write information, and stores the OS (Operating System), various control programs, application programs, and the like. The I/F 14 connects the bus 19 to various hardware and networks and controls them.
The image camera 15 is an image capturing unit that includes a photoelectric conversion element and converts received light into electronic information to generate image information. The distance camera 16, like the image camera 15, generates a grayscale image by photoelectric conversion and additionally serves as a distance information generation unit: it measures the distance to each object based on the time it takes projected light to be reflected by the object and return, thereby generating information on the distance to the object displayed at each position on the grayscale image. As the distance camera 16, for example, the three-dimensional image distance camera “ZC-1000” series manufactured by Optex Co., Ltd. can be used. The LCD 17 is a visual user interface that allows the user to check the state of the imaging apparatus 1. The operation unit 18 is a user interface, such as a keyboard and mouse, with which the user inputs information to the imaging apparatus 1.
In this hardware configuration, a program stored in a recording medium such as the ROM 12, the HDD 13, or an optical disk (not shown) is read into the RAM 11, and the CPU 10 performs operations according to that program, thereby constituting a software control unit. The functions of the imaging apparatus 1 according to the present embodiment are realized by functional blocks formed by the combination of this software control unit and the hardware.
Next, the functional configuration of the imaging apparatus 1 according to the present embodiment will be described with reference to FIG. 2. FIG. 2 is a block diagram showing that functional configuration. As shown in FIG. 2, the imaging apparatus 1 includes functions realized by the image processing unit 100 and functions realized by the display control unit 110. As described above, the image processing unit 100 and the display control unit 110 function through the cooperation of the hardware and the software control unit realized by the CPU 10 performing calculations according to the program read into the RAM 11.
The image processing unit 100 executes image processing that cuts out the contour of the subject Q displayed in the image information generated by the image camera 15, based on the distance information acquired by the distance camera 16. As shown in FIG. 2, the image processing unit 100 includes a coordinate conversion unit 101 and an image clipping unit 102.
Here, an example of the distance information acquired by the distance camera 16 will be described with reference to FIG. 3. As shown in FIG. 3, the distance information according to the present embodiment includes the horizontal coordinate “u” (pixels) and the vertical coordinate “v” (pixels) on the image generated by the distance camera 16, together with the distance “Z” (mm) from the light-receiving surface of the distance camera 16 to the subject or background displayed at the image position specified by “u” and “v”. In other words, the distance information associates each coordinate on the grayscale image captured by the distance camera 16 with the distance to the object displayed at that coordinate. With this information, the distances in real space of the subject Q and the background displayed in the image captured and generated by the distance camera 16 can be recognized.
The coordinate conversion unit 101 converts the coordinate system of the distance information acquired by the distance camera 16 from the coordinate system of the image captured by the distance camera 16 into the coordinate system of the image information captured by the image camera 15. As shown in FIG. 2, the coordinate conversion unit 101 includes the coordinate conversion functions “image plane/three-dimensional space”, “rotation/translation”, “three-dimensional space/image plane”, and “distortion correction”.
The image clipping unit 102 cuts out the contour of the subject Q by applying a predetermined threshold to the distance information converted by the coordinate conversion unit 101 and extracting, from the image captured by the image camera 15, the pixels whose distance from the camera is within a predetermined range. The display control unit 110 causes the LCD 17 to display the subject image clipped by the image clipping unit 102.
Converting the coordinate system in this way, so that the distance information acquired by the distance camera 16 can be applied to the image captured by the image camera 15, is one of the key points of the present embodiment. As a result, even when the resolution of the image captured by the distance camera 16 is low and an image of the desired quality level cannot be obtained from it, or when the distance camera 16 does not support full color, the image itself is captured by the image camera 15, so an image of the desired quality level can be obtained.
Next, the individual functions of the coordinate conversion unit 101 will be described. The “image plane/three-dimensional space”, “rotation/translation”, and “three-dimensional space/image plane” coordinate conversion functions are described first; the “distortion correction” function is described later. To explain the “image plane/three-dimensional space” conversion, the relationship between coordinates on an image captured by a camera and coordinates in three-dimensional space is first described with reference to FIG. 4.
FIG. 4 shows, in a perspective projection model, the position of the subject and the coordinates of the captured image in a three-dimensional space whose origin is the light receiving unit 16a of the distance camera 16, with the optical axis of the distance camera 16 as the Z axis, the horizontal direction as the Y axis, and the vertical direction as the X axis. Note that “Z” in FIG. 3 corresponds to the value in the Z-axis direction in FIG. 4. As shown in FIG. 4, the image captured by the camera is the scenery that falls within a virtual frame (the bold broken line in FIG. 4) placed at the focal length f of the camera when looking along the optical axis from the camera. The coordinates within this frame are the coordinates “u” and “v” on the image captured by the distance camera 16.
At this time, the light reflected from the imaging targets within the frame, including the subject Q, is condensed toward the light receiving unit 16a of the distance camera 16, as shown in FIG. 4. Accordingly, when the optical axis of the distance camera 16 coincides with the Z axis and the aspect ratio of the image, i.e. of the frame in FIG. 4, is 1:1, a point p(u_i, v_i) on the image captured by the distance camera 16 can be expressed by the following equation (1), using the corresponding point P(X_i, Y_i, Z_i) on the actual subject Q and the focal length f.

$$u_i = f\,\frac{X_i}{Z_i}, \qquad v_i = f\,\frac{Y_i}{Z_i} \tag{1}$$
Based on equation (1), the coordinate conversion unit 101 converts the coordinates p(u_i, v_i) on the image captured by the distance camera 16 into the coordinates P(X_i, Y_i, Z_i) in three-dimensional space by calculating the following equation (2).

$$\begin{pmatrix} X_i \\ Y_i \\ Z_i \end{pmatrix} = Z_i \begin{pmatrix} f_{1x} & 0 & c_{1x} \\ 0 & f_{1y} & c_{1y} \\ 0 & 0 & 1 \end{pmatrix}^{-1} \begin{pmatrix} u_i \\ v_i \\ 1 \end{pmatrix} \tag{2}$$
Here, the 3-row, 3-column matrix in equation (2) containing f_1x, f_1y, c_1x and c_1y consists of the internal parameters of the distance camera 16, which describe its focal length and the offset of its optical axis. f_1x and f_1y are the horizontal and vertical focal lengths of the distance camera 16 and, as noted above, are identical when the aspect ratio is 1:1. c_1x and c_1y are the horizontal and vertical offsets of the optical axis of the distance camera 16.
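For illustration only, the "image plane / three-dimensional space" conversion of equation (2) can be sketched in Python as follows. This is a minimal sketch, not the embodiment itself; the function name `backproject` and the 3×3 intrinsic matrix `K1` (holding f_1x, f_1y, c_1x, c_1y in the usual positions) are names introduced here as assumptions.

```python
import numpy as np

def backproject(u, v, Z, K1):
    # u, v: pixel coordinates on the range image (same-shape arrays)
    # Z: measured Z-axis distance per point; K1: 3x3 intrinsic matrix
    f1x, f1y = K1[0, 0], K1[1, 1]
    c1x, c1y = K1[0, 2], K1[1, 2]
    X = (u - c1x) * Z / f1x   # inverts u = f1x * X/Z + c1x
    Y = (v - c1y) * Z / f1y   # inverts v = f1y * Y/Z + c1y
    return np.stack([X, Y, Z], axis=-1)
```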
The internal parameters of the distance camera 16 can be obtained, for example by Zhang's method, by photographing a checkerboard from various angles with the focal length of the distance camera 16 fixed and computing the positions of the grid points of the photographed checkerboard. The coordinate conversion unit 101 stores the internal parameters of the distance camera 16 obtained in this way and, by calculating equation (2) with them, converts the coordinates (u_i, v_i) on the image captured by the distance camera 16 into the coordinates (X_i, Y_i, Z_i) in three-dimensional space.
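A hedged sketch of such a calibration, using OpenCV's implementation of Zhang-style checkerboard calibration, might look as follows; the board size, file pattern, and variable names are illustrative assumptions, not values from the embodiment.

```python
import glob
import cv2
import numpy as np

PATTERN = (9, 6)  # assumed number of inner corners on the checkerboard
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("checkerboard_*.png"):  # grayscale shots, fixed focal length
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(img, PATTERN)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# K1 holds f_1x, f_1y, c_1x, c_1y; dist1 holds the distortion coefficients
rms, K1, dist1, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, img.shape[::-1], None, None)
```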
Next, the "rotation / translation" and "three-dimensional space / image plane" coordinate conversion functions will be described. As described above, the coordinate axes in three-dimensional space are defined per camera. The distance camera 16 and the image camera 15 therefore have different coordinate axes, as shown in FIG. 5. The "rotation / translation" function is a process that converts the coordinate system of the three-dimensional space of the distance camera 16 into the coordinate system of the three-dimensional space of the image camera 15.
When converting coordinates in the three-dimensional space of the distance camera 16 into coordinates in the three-dimensional space of the image camera 15, the coordinate conversion unit 101 uses the external parameter "R|t", which consists of a 3-row, 3-column rotation matrix "R" and a 3-row, 1-column translation vector "t". Furthermore, the coordinate conversion unit 101 performs the "three-dimensional space / image plane" conversion simultaneously with the "rotation / translation" conversion.
The "three-dimensional space / image plane" conversion is the inverse of the "image plane / three-dimensional space" conversion realized by equation (2): it converts coordinates in three-dimensional space into coordinates on an image. In the present embodiment, however, the purpose is to convert coordinates on the image captured by the distance camera 16 into coordinates on the image captured by the image camera 15. In the "three-dimensional space / image plane" conversion, therefore, the coordinates that the "rotation / translation" conversion has mapped into the three-dimensional space of the image camera 15 are converted, using the internal parameters of the image camera 15, into coordinates on the image captured by the image camera 15. This conversion is realized by the following equation (3).

$$s \begin{pmatrix} u_j \\ v_j \\ 1 \end{pmatrix} = \begin{pmatrix} f_{2x} & 0 & c_{2x} \\ 0 & f_{2y} & c_{2y} \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{pmatrix} \begin{pmatrix} X_i \\ Y_i \\ Z_i \\ 1 \end{pmatrix} \tag{3}$$
Here, the 3-row, 3-column matrix in equation (3) containing f_2x, f_2y, c_2x and c_2y consists of the internal parameters of the image camera 15, which describe its focal length and the offset of its optical axis. f_2x and f_2y are the horizontal and vertical focal lengths of the image camera 15 and, as noted above, are identical when the aspect ratio is 1:1. c_2x and c_2y are the horizontal and vertical offsets of the optical axis of the image camera 15. Like the internal parameters of the distance camera 16, these internal parameters can be obtained, for example, by Zhang's method.
The matrix in equation (3) containing r_11 through r_33 and t_1 through t_3 is the external parameter "R|t" described above, which can likewise be obtained by Zhang's method. As described above, "R|t" is the parameter for converting the coordinate system of the distance camera 16 into the coordinate system of the image camera 15. To obtain "R|t", therefore, a checkerboard in a given orientation is photographed by both the image camera 15 and the distance camera 16 while the two cameras are fixed to the imaging device 1 just as in actual operation, yielding a pair of checkerboard images, one captured by each of the image camera 15 and the distance camera 16.
Because the two images in each pair show the same checkerboard, the positions of its grid points are related by the external parameter "R|t". Accordingly, by generating multiple pairs of images with the checkerboard in various positions, "R|t" can be obtained by solving the resulting simultaneous equations. Note that, as described above, the distance camera 16 can generate a grayscale image when capturing; this grayscale image can be used when obtaining the internal parameters and the external parameter "R|t".
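By way of illustration, solving for "R|t" from such image pairs can be sketched with OpenCV's stereo calibration; `img_points_range` and `img_points_image` are assumed to be the checkerboard corners detected in the distance-camera and image-camera shots, and `K1`/`dist1`, `K2`/`dist2` the previously calibrated intrinsics of the two cameras.

```python
import cv2

# intrinsics are already known, so only the relative pose R, t is solved for
rms, _, _, _, _, R, t, E, F = cv2.stereoCalibrate(
    obj_points, img_points_range, img_points_image,
    K1, dist1, K2, dist2, image_size,
    flags=cv2.CALIB_FIX_INTRINSIC)
```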
The coordinate conversion unit 101 stores the internal parameters of the image camera 15 and the external parameter "R|t" obtained in this way, and realizes the "rotation / translation" and "three-dimensional space / image plane" conversion functions simultaneously by computing equation (3) with this information.
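A minimal sketch of this computation in Python, assuming `R`, `t` and the 3×3 intrinsic matrix `K2` obtained as above; the stepwise form used here is equivalent to equation (3) (cf. equations (4) to (6) below).

```python
import numpy as np

def project_to_image(P, R, t, K2):
    # P: (N, 3) points in the distance camera's 3D frame -> (N, 2) pixel coordinates
    Pc = P @ R.T + t.reshape(1, 3)    # rotation / translation into the image camera's frame
    x = Pc[:, 0] / Pc[:, 2]           # perspective division, requires z != 0
    y = Pc[:, 1] / Pc[:, 2]
    u = K2[0, 0] * x + K2[0, 2]       # apply the image camera's intrinsics
    v = K2[1, 1] * y + K2[1, 2]
    return np.stack([u, v], axis=-1)
```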
Through this processing, the distance information acquired as coordinates on the image captured by the distance camera 16 is converted into coordinates on the image captured by the image camera 15, as shown in FIG. 3. The coordinate conversion unit 101 inputs the distance information thus generated, which corresponds to the image captured by the image camera 15 (hereinafter, converted distance information), to the image clipping unit 102.
Next, the clipping process performed by the image clipping unit 102 will be described. FIG. 6(a) shows the points whose Z-axis distances are specified by the converted distance information, superimposed on the image containing the subject captured by the image camera 15 (hereinafter, the subject image). Because the resolution at which the distance camera 16 acquires Z-axis distances is lower than the resolution of the image generated by the image camera 15, the coordinates of the converted distance information appear as discrete points when superimposed on the subject image, as shown in FIG. 6(a).
The image clipping unit 102 applies a threshold to the Z-axis distance of the converted distance information and extracts only the points whose targets lie within a predetermined distance of the camera; in this respect the image clipping unit 102 functions as a coordinate extraction unit. FIG. 6(b) shows the points extracted in this way superimposed on the subject image; as shown there, points overlapping the subject are extracted. The points extracted as shown in FIG. 6(b) are hereinafter referred to as extraction points.
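As a minimal sketch of this thresholding, assuming `pts_uv` (N×2 pixel coordinates) and `pts_z` (N Z-axis distances) hold the converted distance information; the threshold value is an illustrative assumption, not a value from the embodiment.

```python
import numpy as np

Z_MAX = 1.5                       # metres; illustrative threshold
keep = pts_z <= Z_MAX             # targets within the predetermined distance
extraction_points = pts_uv[keep]  # the points overlapping the subject
```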
The image clipping unit 102 extracts the subject by erasing from the subject image the portions that do not overlap the extraction points. As described above, however, the points of the converted distance information are discrete within the subject image, so the extraction points cannot be applied as they are. The image clipping unit 102 therefore treats each discrete point as a white pixel and the remaining area as black pixels, and repeats image dilation so that the discrete points are connected into a single region. Image dilation is a process that replaces a pixel of interest with a white pixel if even one white pixel exists around it. The image clipping unit 102 repeats this dilation until extraction points adjacent vertically, horizontally and diagonally are connected.
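A hedged sketch of this repeated dilation with OpenCV, assuming `mask` is a binary image that is white at the extraction points and `gap` is the pixel spacing between neighbouring points (both names introduced here):

```python
import cv2
import numpy as np

kernel = np.ones((3, 3), np.uint8)   # 8-connected: also grows diagonally
n_iter = int(np.ceil(gap / 2))       # enough iterations to bridge neighbouring points
region = cv2.dilate(mask, kernel, iterations=n_iter)
```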
FIG. 7(a) shows the state in which adjacent extraction points have been connected as a result of the dilation. In the transition from the state of FIG. 6(b) to that of FIG. 7(a), the image clipping unit 102 not only repeats the dilation but also smooths the contour roughened by it. Furthermore, because noise in the distance camera 16 can cause extraction points to appear at positions unrelated to the subject, the image clipping unit 102 performs a noise-cut process by labeling, keeping only the largest region or regions whose area exceeds a predetermined threshold.
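The labeling-based noise cut can be sketched, for example, with connected-component analysis; keeping only the largest component is shown here, under the assumption that the subject forms the widest region.

```python
import cv2
import numpy as np

n, labels, stats, _ = cv2.connectedComponentsWithStats(region, connectivity=8)
areas = stats[1:, cv2.CC_STAT_AREA]      # label 0 is the background
largest = 1 + int(np.argmax(areas))      # assumed to be the subject's region
region_clean = np.where(labels == largest, 255, 0).astype(np.uint8)
```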
The image clipping unit 102 erases from the subject image the portions other than those corresponding to the region generated as shown in FIG. 7(a) (hereinafter, the extraction target region), so that the subject and the background are separated and the subject can be extracted, as shown in FIG. 7(b). Owing to the dilation, the extraction target region is wider than the actual contour of the subject, as shown in FIG. 7(b); in FIG. 7(b), the portions of the extraction target region that protrude beyond the actual subject are shown in black.
It is preferable that the image clipping unit 102 erase the excess area outside the contour of the subject in the image extracted as shown in FIG. 7(b), for example by conventional edge detection. Because the image has been clipped roughly along the contour of the subject as shown in FIG. 7(b), the density between the contour of the subject and the contour of the clipped image can be regarded as nearly constant. Edge detection can therefore be performed with higher accuracy than detecting the contour of the subject directly in the image captured and generated by the image camera 15, as in the prior art.
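One hedged possibility for such conventional edge detection is a Canny pass over the clipped result; `clipped` is assumed to be the cut-out of FIG. 7(b) as a BGR image (see the sketch after the next paragraph), and the thresholds are illustrative assumptions.

```python
import cv2

gray = cv2.cvtColor(clipped, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)  # the strongest edge in the near-constant band marks the true contour
```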
The image clipping unit 102 may also perform erosion on the extraction target region after generating it as shown in FIG. 7(a) and before clipping the subject image. Image erosion is the opposite of dilation: it replaces a pixel of interest with a black pixel if even one black pixel exists around it. This contracts the contour expanded by the dilation and reduces the protrusion beyond the subject shown in FIG. 7(b).
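A minimal sketch of this contraction and of the final cut-out, assuming `subject_img` is the subject image and reusing `region_clean`, `kernel` and `n_iter` from the sketches above:

```python
import cv2

region_tight = cv2.erode(region_clean, kernel, iterations=n_iter)
clipped = cv2.bitwise_and(subject_img, subject_img,
                          mask=region_tight)  # erase everything outside the region
```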
Next, the "distortion correction" coordinate conversion function of the coordinate conversion unit 101 will be described. FIG. 8(a) illustrates the problem this function addresses: as shown there, the extraction points obtained by applying the threshold to the converted distance information may be displaced from the subject in the subject image. This can be caused by radial and circumferential distortion in the camera lenses. The coordinate conversion unit 101 therefore corrects this distortion when converting the distance information acquired by the distance camera 16 into converted distance information. In the present embodiment, correction is performed on the assumption that the lens of the distance camera 16 is distorted.
The coordinate conversion unit 101 according to the present embodiment also performs the "distortion correction" process within the calculation of equation (3) described above. Here, the calculation of equation (3) is equivalent to the following equations (4) to (6), where z ≠ 0.

$$\begin{pmatrix} x \\ y \\ z \end{pmatrix} = R \begin{pmatrix} X_i \\ Y_i \\ Z_i \end{pmatrix} + t \tag{4}$$

$$x' = \frac{x}{z}, \qquad y' = \frac{y}{z} \tag{5}$$

$$u_j = f_{2x}\,x' + c_{2x}, \qquad v_j = f_{2y}\,y' + c_{2y} \tag{6}$$
When the distortion of the lens is taken into account, equation (6) above is replaced by the following equations (7) and (8), where r² = x'² + y'².

$$\begin{aligned} x'' &= x'\,(1 + k_1 r^2 + k_2 r^4) + 2 p_1 x' y' + p_2\,(r^2 + 2 x'^2) \\ y'' &= y'\,(1 + k_1 r^2 + k_2 r^4) + p_1\,(r^2 + 2 y'^2) + 2 p_2 x' y' \end{aligned} \tag{7}$$

$$u_j = f_{2x}\,x'' + c_{2x}, \qquad v_j = f_{2y}\,y'' + c_{2y} \tag{8}$$
Here, k_1 and k_2 in equation (7) are the radial distortion coefficients and p_1 and p_2 the circumferential distortion coefficients; that is, equation (7) corrects the distortion caused by the lens. The present embodiment takes as an example the case where coefficients up to the second order are considered, but coefficients of the third order and above may also be considered. These distortion coefficients can likewise be obtained by calibration: by applying the positions of the grid points to the above equations, based on the checkerboard images generated when obtaining the internal parameters of the distance camera 16, the distortion coefficients k_1, k_2, p_1 and p_2 can be determined.
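For illustration, equation (7) can be written directly in Python; `distort` is a name introduced here, and the inputs are the normalized coordinates x', y' of equation (5).

```python
def distort(xp, yp, k1, k2, p1, p2):
    # second-order radial + circumferential model, matching equation (7)
    r2 = xp * xp + yp * yp
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xpp = xp * radial + 2.0 * p1 * xp * yp + p2 * (r2 + 2.0 * xp * xp)
    ypp = yp * radial + p1 * (r2 + 2.0 * yp * yp) + 2.0 * p2 * xp * yp
    return xpp, ypp
```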
The order up to which these coefficients are considered is preferably determined according to the distance between the camera and the subject. In general, the shorter that distance, the greater the distortion; the closer the subject, therefore, the better the distortion correction achieved by including higher-order coefficients in the calculation.
The coordinate conversion unit 101 stores the distortion coefficients obtained in this way. When computing the coordinates (u_j, v_j) on the image captured by the image camera 15 from the three-dimensional coordinates (X_i, Y_i, Z_i) of the distance camera 16 according to equation (3), it uses equations (4), (5), (7) and (8), so that the coordinates on the image captured by the image camera 15 are obtained with the lens distortion corrected. As shown in FIG. 8(b), this eliminates the displacement between the extraction points and the subject.
As described above, when clipping the portion of the subject image in which the subject is displayed, the imaging device 1 according to the present embodiment performs the processing, as a rule, on the basis of the distance information acquired by the distance camera 16 rather than the density information of the image. Moreover, in the imaging device 1 according to the present embodiment, the image processing unit 100 executes the processing automatically on the basis of the given information, without requiring any operation from the user. Highly accurate clipping is therefore possible regardless of the skill level of the user.
Although the above embodiment has been described taking as an example the imaging device 1 including the image camera 15 and the distance camera 16 as shown in FIG. 2, the image processing unit 100 may also be provided on its own, or as a program that realizes the image processing unit 100. In that case, the internal parameters of the first camera that captured the subject image, the internal parameters of the second camera that acquired the distance information, and the external parameters relating the first camera to the second camera must be acquired separately.
Besides the stereo calibration described above, the external parameters can be obtained, if the image camera 15 and the distance camera 16 are equipped with a sufficiently accurate positioning system such as GPS (Global Positioning System), from the information such a system provides. Specifically, when the image camera 15 and the distance camera 16 acquire the subject image and the distance information respectively, the on-board positioning system simultaneously acquires the position and orientation at the time of acquisition and inputs them to the coordinate conversion unit 101.
From the input position and orientation information, the coordinate conversion unit 101 can then determine the external parameters for converting the coordinate system of the three-dimensional space of the distance camera 16 into that of the image camera 15: the rotation matrix R is obtained from the difference in orientation, and the translation vector t from the difference in position.
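A hedged sketch of this derivation, assuming (hypothetically) that each camera's orientation is available as a world-frame rotation matrix (`R_w_range`, `R_w_image`) and its position as a world-frame vector (`p_range`, `p_image`):

```python
import numpy as np

# relative rotation: distance-camera frame -> image-camera frame
R = R_w_image.T @ R_w_range
# relative translation, expressed in the image camera's frame
t = R_w_image.T @ (p_range - p_image)
```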
In the above embodiment, the case where the portions of the subject image other than the region corresponding to the extraction target region are erased has been described as an example, as shown in FIGS. 7(a) and 7(b). Alternatively, the region corresponding to the extraction target region and the other regions may be saved as separate layers. That is, the effects of the present embodiment can be obtained when the image clipping unit 102 functions as an image separation unit that separates at least the region of the subject image specified by the extraction target region from the other regions and stores them on a storage medium. This lets the user choose in subsequent operations whether or not to use the background portion of the image, improving the user's convenience.
In the above embodiment, when superimposing the coordinates of the converted distance information on the subject image, the points specified by those coordinates are placed on the subject image at intervals determined by the resolution of the converted distance information, i.e. by the ratio between the resolution of the distance camera 16 and that of the subject image, as shown in FIGS. 6(a) and 6(b). In other words, in the examples of FIGS. 6(a) and 6(b), each point specified by the coordinates of the converted distance information is associated with a pixel of the subject image at intervals determined by that resolution ratio. Because the coordinates of the converted distance information are superimposed, at their original resolution, on the higher-resolution subject image, they appear in the discrete state shown in FIGS. 6(a) and 6(b).
Alternatively, the superposition may be performed after first matching the resolution of the converted distance information to that of the subject image. For example, by treating each point specified by the coordinates of the converted distance information as a pixel and dividing each such pixel, the resolution of the converted distance information can be made to match the resolution of the subject image. Such an embodiment is described below.
FIG. 9(a) shows the state in which the distance information of FIG. 3 has been converted into converted distance information by the coordinate conversion unit 101. As shown in FIG. 9(a), the coordinates specified in FIG. 3 as (u_1, v_1), (u_2, v_2), ... are specified after conversion as (u'_1, v'_1), (u'_2, v'_2), .... FIG. 9(b) shows the state in which each converted coordinate of FIG. 9(a) is treated as a pixel and each pixel is divided into four.
The point specified as (u'_1, v'_1) in FIG. 9(a) corresponds to the four points (u'_11, v'_11), (u'_12, v'_12), (u'_13, v'_13) and (u'_14, v'_14) shown in FIG. 9(b). As shown in FIGS. 10(a) and 10(b), these correspond to the four pixels obtained by dividing the pixel specified by (u'_1, v'_1) at the original resolution in two both vertically and horizontally.
By this processing, distance information of the same resolution can be generated so that every pixel of the subject image is covered 1:1, rather than by discrete points as in FIGS. 6(a) and 6(b). Moreover, as shown in FIG. 9(b), the pre-division distance "Z_1" is associated with each of the four points after division. Consequently, when the image clipping unit 102 applies a threshold to the distance Z and extracts only the points whose targets lie within the predetermined distance, the extraction result substantially matches the contour of the subject as in FIG. 6(b) while the gaps between adjacent dots are filled, so that the extraction target region can be obtained favorably.
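This pixel splitting amounts to nearest-neighbour upsampling of the converted distance map, which can be sketched as follows; `depth_lo` is assumed to be the low-resolution converted map and `scale` the integer resolution ratio (2 for the four-way split in the figures).

```python
import numpy as np

depth_hi = np.repeat(np.repeat(depth_lo, scale, axis=0), scale, axis=1)
# each divided pixel inherits the pre-division distance, e.g. Z_1
```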
In the embodiments of FIGS. 9(a), 9(b), 10(a) and 10(b) as well, it is preferable to smooth the contour roughened by the pixel division and to perform labeling for noise cutting. Also in these embodiments, the extraction target region may not coincide exactly with the actual contour of the subject and may protrude beyond it; as in the embodiment above, conventional edge detection or the like may therefore be performed to erase the excess area outside the contour of the subject. In this case too, because the image is clipped roughly along the contour of the subject, edge detection is possible with higher accuracy than detecting the contour of the subject in the image captured and generated by the image camera 15, as in the prior art.
1 Imaging device
10 CPU
11 RAM
12 ROM
13 HDD
14 I/F
15 Image camera
16 Distance camera
17 LCD
18 Operation unit
19 Bus
100 Image processing unit
101 Coordinate conversion unit
102 Image clipping unit
110 Display control unit

Claims (9)

1. An imaging device comprising:
an image capturing unit that generates, by capturing, a subject image in which a subject and a background are displayed;
a distance information generation unit that measures the distance to the object displayed in each part of an image of the visual range containing the subject and the background, and generates distance information in which coordinates on the image of the visual range are associated with distances;
a coordinate conversion unit that converts the coordinates of the acquired distance information into coordinates on the subject image to generate converted distance information;
a coordinate extraction unit that extracts, from the converted coordinates contained in the generated converted distance information, the coordinates whose associated distances satisfy a predetermined condition; and
an image separation unit that separates the region of the subject image specified by the extracted coordinates from the other regions and outputs them.
2. The imaging device according to claim 1, wherein the coordinate conversion unit includes:
a first coordinate conversion function that converts the coordinates of the distance information, which are coordinates on the image of the visual range, into coordinates in a three-dimensional space referenced to the distance information generation unit, based on a first parameter including the focal length and optical axis information of the distance information generation unit;
a second coordinate conversion function that converts the coordinates of the distance information thus converted into the three-dimensional space referenced to the distance information generation unit into coordinates in a three-dimensional space referenced to the image capturing unit, based on a second parameter based on the deviation between the coordinate axes of the three-dimensional space referenced to the distance information generation unit and those of the three-dimensional space referenced to the image capturing unit; and
a third coordinate conversion function that converts the coordinates of the distance information thus converted into the three-dimensional space referenced to the image capturing unit into coordinates on the subject image, based on a third parameter including the focal length and optical axis information of the image capturing unit.
3. The imaging device according to claim 2, wherein the coordinate conversion unit includes a fourth coordinate conversion function that, based on a fourth parameter including information on at least one of the radial distortion and the circumferential distortion of a lens included in the distance information generation unit or the image capturing unit, converts the coordinates of the distance information converted into the three-dimensional space referenced to the distance information generation unit, or into the three-dimensional space referenced to the image capturing unit, into coordinates in which the radial distortion or the circumferential distortion of the lens has been corrected.
4. The imaging device according to any one of claims 1 to 3, wherein the coordinate extraction unit extracts, from the converted coordinates contained in the generated converted distance information, the coordinates whose associated distances are equal to or less than a predetermined threshold.
5. The imaging device according to claim 4, wherein the image separation unit extracts the region of the subject image in which the subject is displayed by erasing the image information of the regions of the subject image other than the region specified by the extracted coordinates.
6. The imaging device according to any one of claims 1 to 5, wherein the resolution of the coordinates in the distance information is lower than the resolution of the subject image, and
the image separation unit makes the resolution of the extracted coordinates correspond to the resolution of the subject image by dividing the pixels in an image drawn with the extracted coordinates as pixels, and specifies the region by the image drawn with the divided pixels.
7. An image processing device comprising:
a distance information acquisition unit that measures the distance to the object displayed in each part of an image of the visual range containing a subject and a background, and acquires distance information in which coordinates on the image of the visual range are associated with distances;
a subject image acquisition unit that acquires a subject image in which the subject and the background are displayed;
a coordinate conversion unit that converts the coordinates of the acquired distance information into coordinates on the subject image to generate converted distance information;
a coordinate extraction unit that extracts, from the converted coordinates contained in the generated converted distance information, the coordinates whose associated distances satisfy a predetermined condition; and
an image separation unit that separates the region of the acquired subject image specified by the extracted coordinates from the other regions and outputs them.
8. An image processing method comprising:
measuring the distance to the object displayed in each part of an image of the visual range containing a subject and a background, acquiring distance information in which coordinates on the image of the visual range are associated with distances, and storing it on a storage medium;
acquiring a subject image in which the subject and the background are displayed and storing it on a storage medium;
converting the coordinates of the stored distance information into coordinates on the subject image to generate converted distance information and storing it on a storage medium; and
extracting, from the converted coordinates contained in the generated converted distance information, the coordinates whose associated distances satisfy a predetermined condition, and separating the region of the acquired subject image specified by the extracted coordinates from the other regions and storing them on a storage medium.
9. An image processing program causing an information processing device to execute:
a step of measuring the distance to the object displayed in each part of an image of the visual range containing a subject and a background, acquiring distance information in which coordinates on the image of the visual range are associated with distances, and storing it on a storage medium;
a step of acquiring a subject image in which the subject and the background are displayed and storing it on a storage medium;
a step of converting the coordinates of the stored distance information into coordinates on the subject image to generate converted distance information and storing it on a storage medium; and
a step of extracting, from the converted coordinates contained in the generated converted distance information, the coordinates whose associated distances satisfy a predetermined condition, and separating the region of the acquired subject image specified by the extracted coordinates from the other regions and storing them on a storage medium.
PCT/JP2011/069316 2010-08-30 2011-08-26 Imaging device, image-processing device, image-processing method, and image-processing program WO2012029658A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2010-192717 2010-08-30
JP2010192717A JP2012050013A (en) 2010-08-30 2010-08-30 Imaging apparatus, image processing device, image processing method, and image processing program

Publications (1)

Publication Number Publication Date
WO2012029658A1 true WO2012029658A1 (en) 2012-03-08

Family

ID=45772748

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2011/069316 WO2012029658A1 (en) 2010-08-30 2011-08-26 Imaging device, image-processing device, image-processing method, and image-processing program

Country Status (3)

Country Link
JP (1) JP2012050013A (en)
TW (1) TW201225658A (en)
WO (1) WO2012029658A1 (en)


Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016181672A1 (en) * 2015-05-11 2016-11-17 ノーリツプレシジョン株式会社 Image analysis device, image analysis method, and image analysis program
JP6574461B2 (en) * 2016-08-04 2019-09-11 株式会社フォーディーアイズ Point cloud data conversion system and method
WO2018025842A1 (en) * 2016-08-04 2018-02-08 株式会社Hielero Point group data conversion system, method, and program
JP7369588B2 (en) 2019-10-17 2023-10-26 Fcnt株式会社 Imaging equipment and imaging method
CN116401484B (en) * 2023-04-18 2023-11-21 河北长风信息技术有限公司 Method, device, terminal and storage medium for processing paper material in electronization mode


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005012307A (en) * 2003-06-17 2005-01-13 Minolta Co Ltd Imaging apparatus
JP2005300179A (en) * 2004-04-06 2005-10-27 Constec Engi Co Infrared structure diagnosis system
JP2006329628A (en) * 2005-05-23 2006-12-07 Hitachi Zosen Corp Measuring method of deformation amount in structure
JP2008112259A (en) * 2006-10-30 2008-05-15 Central Res Inst Of Electric Power Ind Image verification method and image verification program
JP2010109923A (en) * 2008-10-31 2010-05-13 Nikon Corp Imaging apparatus

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113469872A (en) * 2020-03-31 2021-10-01 广东博智林机器人有限公司 Region display method, device, equipment and storage medium
CN113469872B (en) * 2020-03-31 2024-01-19 广东博智林机器人有限公司 Region display method, device, equipment and storage medium
CN112669382A (en) * 2020-12-30 2021-04-16 联想未来通信科技(重庆)有限公司 Image-based distance determination method and device
CN113645378A (en) * 2021-06-21 2021-11-12 福建睿思特科技股份有限公司 Safe management and control portable video distribution and control terminal based on edge calculation

Also Published As

Publication number Publication date
TW201225658A (en) 2012-06-16
JP2012050013A (en) 2012-03-08

Similar Documents

Publication Publication Date Title
WO2012029658A1 (en) Imaging device, image-processing device, image-processing method, and image-processing program
JP6394005B2 (en) Projection image correction apparatus, method and program for correcting original image to be projected
JP5580164B2 (en) Optical information processing apparatus, optical information processing method, optical information processing system, and optical information processing program
JP6363863B2 (en) Information processing apparatus and information processing method
JP6577703B2 (en) Image processing apparatus, image processing method, program, and storage medium
US20120147224A1 (en) Imaging apparatus
JP7123736B2 (en) Image processing device, image processing method, and program
WO2013190862A1 (en) Image processing device and image processing method
JP2013005258A (en) Blur correction apparatus, blur correction method, and business form
US20150178595A1 (en) Image processing apparatus, imaging apparatus, image processing method and program
JP2010187348A (en) Apparatus, method and program for image processing
DK3189493T3 (en) PERSPECTIVE CORRECTION OF DIGITAL PHOTOS USING DEPTH MAP
JP7156624B2 (en) Depth map filtering device, depth map filtering method and program
JP5857712B2 (en) Stereo image generation apparatus, stereo image generation method, and computer program for stereo image generation
US10621694B2 (en) Image processing apparatus, system, image processing method, calibration method, and computer-readable recording medium
JP6395429B2 (en) Image processing apparatus, control method thereof, and storage medium
JP2014142832A (en) Image processing apparatus, control method of image processing apparatus, and program
JP2008298589A (en) Device and method for detecting positions
JP2006113832A (en) Stereoscopic image processor and program
JP6153318B2 (en) Image processing apparatus, image processing method, image processing program, and storage medium
JP2016156702A (en) Imaging device and imaging method
CN110832851A (en) Image processing apparatus, image conversion method, and program
KR102195762B1 (en) Acquisition method for high quality 3-dimension spatial information using photogrammetry
JP2018160024A (en) Image processing device, image processing method and program
JP2022024688A (en) Depth map generation device and program thereof, and depth map generation system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11821675

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11821675

Country of ref document: EP

Kind code of ref document: A1