WO2015026002A1 - Image matching apparatus and image matching method using the same - Google Patents

Image matching apparatus and image matching method using the same

Info

Publication number
WO2015026002A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
transform function
feature points
image sensor
estimating
Prior art date
Application number
PCT/KR2013/008936
Other languages
English (en)
Korean (ko)
Inventor
이준성
오재윤
김곤수
Original Assignee
삼성테크윈 주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 삼성테크윈 주식회사
Publication of WO2015026002A1

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 - Geometric image transformations in the plane of the image
    • G06T 3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 - General purpose image data processing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/757 - Matching configurations of points or features

Definitions

  • An embodiment of the present invention relates to an image matching device and an image matching method using the same.
  • An embodiment of the present invention provides an image matching device for real-time image registration and an image matching method using the same.
  • An image matching device may include: a first transform function estimator configured to estimate a first transform function based on feature point information extracted from a first image photographed by a first image sensor and a second image photographed by a second image sensor; and a zoom transform function estimator configured to estimate a third transform function by adjusting the first transform function based on zoom information of the first image sensor and the second image sensor.
  • the zoom conversion function estimator may estimate the third conversion function when the zoom state of at least one of the first image sensor and the second image sensor is changed.
  • the first transform function estimator may set a region of interest in the first image and the second image, and estimate the first transform function based on feature point information extracted from the set region of interest.
  • the apparatus may further include a transform function selector configured to select the first transform function or the third transform function as a final transform function according to whether the first transform function is estimated.
  • the transform function selector may select the third transform function as a final transform function until a new first transform function is estimated.
  • The first transform function estimator may include a feature point detector configured to detect feature points of the first image and the second image; a feature point selector configured to select corresponding feature points between the detected feature points of the first image and the second image; and a first estimator configured to estimate the first transform function based on the selected corresponding feature points.
  • The feature point selector may include: a patch image acquisition unit configured to acquire patch images centered on the feature points of the first image and the second image; a candidate selector configured to select, from the remaining image, candidate feature points corresponding to each feature point of a reference image, the reference image being one of the first image and the second image; a similarity determination unit configured to determine similarity between the patch images of the feature points of the reference image and the patch images of the candidate feature points of the remaining image; and a corresponding feature point selector configured to select, from among the candidate feature points, a corresponding feature point for each feature point of the reference image based on the similarity determination result.
  • The zoom transform function estimator may include: a scale determiner configured to determine a scale transform coefficient corresponding to the zoom information; a second transform function estimator configured to estimate a second transform function by adjusting the first transform function based on the scale transform coefficient; and a third transform function estimator configured to estimate the third transform function from the second transform function based on a center offset value between the first image and the second image matched by the second transform function.
  • The scale determiner may determine the scale conversion coefficient from a previously stored relationship, for each image sensor, between zoom information and the scale conversion coefficient.
  • the apparatus may further include a matching unit that matches the first image and the second image using the selected first transform function or third transform function.
  • the estimating of the third transform function may include estimating the third transform function when the zoom state of at least one of the first image sensor and the second image sensor is changed.
  • The estimating of the first transform function may include setting a region of interest in the first image and the second image and estimating the first transform function based on feature point information extracted from the set region of interest.
  • the method may further include selecting the first transform function or the third transform function as a final transform function according to whether the first transform function is estimated.
  • The selecting may include selecting the third transform function as the final transform function until a new first transform function is estimated.
  • The estimating of the first transform function may include detecting feature points of the first image and the second image; selecting corresponding feature points between the detected feature points of the first image and the second image; and estimating the first transform function based on the selected corresponding feature points.
  • The feature point selecting step may include obtaining patch images centered on the feature points of the first image and the second image; selecting, from the remaining image, candidate feature points corresponding to each feature point of a reference image, the reference image being one of the first image and the second image; determining similarity between the patch images of the feature points of the reference image and the patch images of the candidate feature points of the remaining image; and selecting, from among the candidate feature points, a corresponding feature point for each feature point of the reference image based on the similarity determination result.
  • The estimating of the third transform function may include: determining a scale transform coefficient corresponding to the zoom information; estimating a second transform function by adjusting the first transform function based on the scale transform coefficient; and estimating the third transform function from the second transform function based on the center offset value between the first image and the second image matched by the second transform function.
  • the determining of the scale conversion coefficient may include determining the scale conversion coefficient from a relationship between previously stored zoom information for each image sensor and the scale conversion coefficient.
  • the method may further include registering the first image and the second image by using the selected first transform function or third transform function.
  • the image registration device enables real-time image registration when the zoom information is changed.
  • FIG. 1 is a block diagram schematically illustrating an image fusion system according to an embodiment of the present invention.
  • FIG. 2 is a block diagram schematically illustrating an image matching device according to an embodiment of the present invention.
  • FIG. 3 is a block diagram schematically illustrating a first transform function estimator according to an embodiment of the present invention.
  • FIG. 4 is an exemplary view illustrating feature point selection according to an embodiment of the present invention.
  • FIG. 5 is a block diagram schematically illustrating a zoom transform function estimating unit according to an embodiment of the present invention.
  • FIG. 8 is a flowchart illustrating an image registration method according to an embodiment of the present invention.
  • FIG. 9 is a flowchart for explaining a method of selecting corresponding feature points of FIG. 8.
  • An image matching device may include: a first transform function estimator configured to estimate a first transform function based on feature point information extracted from a first image photographed by a first image sensor and a second image photographed by a second image sensor; and a zoom transform function estimator configured to estimate a third transform function by adjusting the first transform function based on zoom information of the first image sensor and the second image sensor.
  • Terms such as "first" and "second" may be used to describe various components, but the components should not be limited by these terms; the terms are used only to distinguish one component from another.
  • Embodiments of the present invention can be represented by functional blocks and various processing steps. Such functional blocks may be implemented by any number of hardware and/or software configurations that perform particular functions. For example, embodiments of the invention may employ integrated circuit configurations, such as memory, processing, logic, and look-up tables, capable of executing various functions under the control of one or more microprocessors or other control devices. Similarly to the way the components of an embodiment can be implemented in software programming or software elements, embodiments of the present invention may include various algorithms implemented as combinations of data structures, processes, routines, or other programming constructs, and may be implemented in a programming or scripting language such as C, C++, Java, or assembler.
  • Functional aspects may be implemented as algorithms running on one or more processors.
  • Embodiments of the present invention may employ conventional techniques for electronics configuration, signal processing, and/or data processing.
  • Terms such as mechanism, element, means, and configuration may be used broadly and are not limited to mechanical or physical configurations; they may include a series of software routines executed in conjunction with a processor or the like.
  • FIG. 1 is a block diagram schematically illustrating an image fusion system according to an embodiment of the present invention.
  • The image fusion system 1 of the present invention includes a first image sensor 10, a second image sensor 20, an image matching device 30, an image fusion device 40, and a display device 50.
  • the first image sensor 10 and the second image sensor 20 may be cameras having different characteristics of photographing the same scene and providing image information.
  • the first image sensor 10 and the second image sensor 20 may have a pan tilt zoom (PTZ) function, and may be panned and tilted together to acquire an image of the same point at each zoom magnification.
  • The first image sensor 10 and the second image sensor 20 may be installed together inside and outside offices, houses, hospitals, banks, and public buildings requiring security, and may be used for access control or crime prevention. Depending on the installation location and purpose of use, they can have various shapes, such as a straight type or a dome type.
  • the first image sensor 10 is a visible light camera, and acquires image information by detecting light to generate a first image, which is a visible image according to a luminance distribution of an object.
  • the visible camera may be a camera using a CCD or a CMOS as an image pickup device.
  • The second image sensor 20 is an infrared light camera (or thermal camera), which detects the radiant (thermal) energy emitted by an object as electromagnetic waves in the infrared wavelength range and measures the intensity of the thermal energy.
  • the second image may be a thermal image having different colors according to the intensity.
  • The image matching device 30 performs image registration, which expresses the positional relationship of two or more images of the same scene, obtained from different sensors, in a single coordinate system. In surveillance systems and in medical imaging, image registration must be performed before images from two or more sensors can be combined into a single fused image.
  • The image matching device 30 registers the first image photographed by the first image sensor 10 and the second image photographed by the second image sensor 20. To this end, the image matching device 30 estimates a transform function based on feature point information extracted from the first image and the second image and on zoom information (for example, a zoom magnification) of the first image sensor 10 and the second image sensor 20.
  • the transform function is a matrix representing a correspondence relationship between feature points of each of the first and second images.
  • the image matching device 30 matches the first image and the second image by applying the estimated conversion function.
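  • As an illustration only (not part of the original disclosure), the following minimal sketch shows how an estimated 3x3 transform function could be applied to register one image onto the other, assuming OpenCV and NumPy arrays; the function and variable names are hypothetical.

```python
import cv2


def register_images(first_img, second_img, H):
    """Warp the second image into the coordinate frame of the first image
    using an estimated 3x3 transform function. Sketch assuming OpenCV;
    the patent does not prescribe a particular library."""
    h, w = first_img.shape[:2]
    warped_second = cv2.warpPerspective(second_img, H, (w, h))
    return warped_second  # pixel-aligned with first_img, ready for fusion
```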
  • the image fusion device 40 performs signal processing to output the received image signal as a signal conforming to the display standard.
  • the image fusion device 40 fuses the matched first image and the second image.
  • an infrared camera may well indicate the thermal distribution of an object, but the shape of the measured object is not clear, and a visible light camera may clearly indicate the shape of the object but not the thermal distribution of the object.
  • The image fusion device 40 may display an image of an object by exploiting the complementary strengths and weaknesses of the visible light camera and the infrared camera, clearly showing both the shape and the thermal distribution of the object.
  • The image fusion device 40 may perform image signal processing to improve image quality on the first image and the second image, such as noise reduction, gamma correction, color filter array interpolation, color matrix correction, color correction, and color enhancement.
  • The image fusion device 40 may compress the image-quality-processed data of the fusion image to generate an image file, or may restore the image data from the image file.
  • the compressed format of the image may include a reversible format or an irreversible format.
  • the image fusion device 40 can also perform color processing, blur processing, edge enhancement processing, image analysis processing, image recognition processing, image effect processing, and the like. Face recognition, scene recognition processing, and the like can be performed by the image recognition processing.
  • the display device 50 provides the user with a fusion image output from the image fusion device 40, so that the user can monitor the displayed image.
  • the display apparatus 50 may display a fusion image in which the first image and the second image are overlapped.
  • the display device 50 may be formed of a liquid crystal display panel (LCD), an organic light emitting display panel (OLED), an electrophoretic display panel (EPD), and the like.
  • the display device 50 may be provided in the form of a touch screen to receive an input through a user's touch and operate as a user input interface.
  • FIG. 2 is a block diagram schematically illustrating an image matching device according to an embodiment of the present invention.
  • The image matching device 30 may include a first transform function estimator 301, a zoom transform function estimator 303, a transform function selector 305, and a matcher 307.
  • The first transform function estimator 301 may estimate the first transform function H1 based on feature point information extracted from the first image photographed by the first image sensor 10 and the second image photographed by the second image sensor 20.
  • the first transform function estimator 301 may newly estimate the first transform function H1 whenever the zoom state of the first image sensor 10 and the second image sensor 20 changes.
  • The first transform function estimator 301 may set a region of interest rather than performing feature point detection on the entire first image and second image, and may perform feature point detection only on the set region of interest, thereby reducing the amount of computation and shortening the estimation time of the first transform function H1.
  • the ROI may be a region where the photographing region overlaps between the first image and the second image.
  • Because the first transform function estimator 301 estimates the first transform function H1 by extracting feature points from the first image and the second image and selecting corresponding feature point pairs, the matching error rate is small. However, because a new first transform function H1 must be estimated each time the zoom state changes, it is difficult to perform real-time registration with this estimator alone.
  • Therefore, in the present embodiment, when the zoom state of at least one of the first image sensor 10 and the second image sensor 20 changes, registration is performed with a transform function that can be estimated quickly, with a small amount of computation and a simpler estimation process, until the first transform function estimator 301 completes the estimation of a new first transform function H1 for the changed zoom state. This makes real-time registration possible even when the zoom state changes.
  • The zoom transform function estimator 303 may estimate a third transform function H3 by adjusting the first transform function H1 based on the zoom information of the first image sensor 10 and the second image sensor 20. When the zoom state of at least one of the first image sensor 10 and the second image sensor 20 changes while images are being captured with the sensors otherwise fixed, the zoom transform function estimator 303 can quickly estimate the third transform function H3 by adjusting the first transform function H1 that was estimated in the previous, fixed zoom state. Because the zoom transform function estimator 303 estimates the third transform function H3 by adjusting the existing first transform function H1 using only the zoom information, without a feature point extraction process, real-time registration remains possible while the first transform function estimator 301 estimates the new first transform function H1.
  • The first transform function estimator 301 may set, as the reference image, whichever of the first image and the second image has its photographing area reduced by the zoom change, set the photographing region of the reference image as the region of interest, and estimate the first transform function H1 by extracting feature points only from the region of interest of each of the first image and the second image. Accordingly, feature points do not need to be extracted from areas that are not common to the first image and the second image, which reduces the computation time.
  • the transform function selector 305 may select the first transform function H1 or the third transform function H3 as the final transform function H, depending on whether the first transform function H1 is estimated.
  • Until a new first transform function H1 is estimated after the zoom state changes, the transform function selector 305 selects the third transform function H3 estimated by the zoom transform function estimator 303 as the final transform function H. When the first transform function estimator 301 completes the estimation, the transform function selector 305 selects the newly estimated first transform function H1 as the final transform function H.
  • The selection method of the transform function selector 305 is shown in Equation 1 below.
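  • The equation itself is not reproduced in this text; the following is a minimal sketch of the selection rule as described above (hypothetical code, assuming H1 is None while a new estimate is still being computed after a zoom change).

```python
def select_final_transform(H1, H3):
    """Transform function selection sketch: use the newly estimated first
    transform function H1 when it is available for the current zoom state;
    otherwise fall back to the zoom-adjusted third transform function H3."""
    return H1 if H1 is not None else H3
```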
  • the matching unit 307 matches the first image and the second image by using the transform function H selected from the first transform function H1 or the third transform function H3.
  • FIG. 3 is a block diagram schematically illustrating a first transform function estimator according to an embodiment of the present invention.
  • FIG. 4 is an exemplary view illustrating feature point selection according to an embodiment of the present invention.
  • the first transform function estimator 301 may include a feature point detector 311, a feature point selector 341, and an estimator 391.
  • The feature point detector 311 may include a first feature point detector 321 configured to detect feature points F1 of the first image captured by the first image sensor 10, and a second feature point detector 331 configured to detect feature points F2 of the second image captured by the second image sensor 20.
  • the first feature point detector 321 and the second feature point detector 331 may be separately or integrally implemented to perform feature point detection sequentially or in parallel.
  • The feature point detector 311 may use the SIFT algorithm, the Harris corner detector, the SUSAN algorithm, or the like to extract corners, edges, contours, and line intersections as feature points from each of the first image and the second image.
  • the feature point detection algorithm is not particularly limited, and various feature point extraction algorithms may be used.
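  • A minimal sketch of feature point detection on a region of interest, assuming OpenCV's SIFT implementation (the patent allows SIFT, Harris corner, SUSAN, or other detectors); the function name and return format are illustrative only.

```python
import cv2


def detect_feature_points(image_roi):
    """Detect feature points inside a region of interest and return their
    (x, y) coordinates. Sketch assuming an OpenCV BGR input image."""
    gray = cv2.cvtColor(image_roi, cv2.COLOR_BGR2GRAY)
    detector = cv2.SIFT_create()
    keypoints = detector.detect(gray, None)
    return [kp.pt for kp in keypoints]
```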
  • The feature point selector 341 may select corresponding feature point pairs between the feature points of the first image and the second image.
  • the feature point selector 341 may include a patch image acquirer 351, a candidate selector 361, a similarity determiner 371, and a corresponding feature point selector 381.
  • the patch image acquisition unit 351 may acquire patch images of the feature points of the first image and the second image.
  • The patch image may have an N × N size centered on the feature point.
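  • For illustration, an N × N patch centered on a feature point could be extracted as follows (a sketch assuming NumPy-style array indexing and an odd N; border handling is omitted, and the default patch size is an arbitrary assumption).

```python
def extract_patch(image, point, n=21):
    """Return the n x n patch centered on a feature point (x, y).
    Assumes the point lies at least n // 2 pixels away from the image border."""
    x, y = int(round(point[0])), int(round(point[1]))
    half = n // 2
    return image[y - half:y + half + 1, x - half:x + half + 1]
```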
  • The candidate selector 361 may use one of the first image and the second image as a reference image, and select, from the remaining image, candidate feature points corresponding to each feature point of the reference image.
  • Because the two images are acquired of the same scene, corresponding feature points exhibit locality, that is, they appear at nearby positions in the two images.
  • the candidate selector 361 may select, as candidate feature points, feature points within a block having a predetermined size based on feature points of the reference image in the remaining images.
  • The block size can be flexibly optimized according to the field of view (FOV) and viewing direction of the two image sensors. For example, the closer the viewing angles and viewing directions of the two image sensors are, the smaller the block size can be; the more they differ, the larger the block size can be.
  • the candidate selector 361 may select, as candidate feature points, feature points of the remaining image having a distance from the feature point of the reference image within a threshold based on the block size.
  • the similarity determiner 371 may determine similarity between the patch image of the feature point of the reference image and the patch images of the candidate feature points of the remaining images.
  • the similarity determination may use normalized mutual information and gradient direction information as parameters.
  • Normalized mutual information is a normalization of mutual information, which represents the statistical correlation between two random variables.
  • Methods of calculating normalized mutual information and gradient direction information are well known, and a detailed description thereof is omitted in the detailed description of this embodiment.
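  • As a sketch of one common normalization (the patent does not specify the exact formula), normalized mutual information between two patches can be computed from a joint intensity histogram; the gradient direction term mentioned above is omitted here for brevity, and the bin count is an arbitrary assumption.

```python
import numpy as np


def normalized_mutual_information(patch_a, patch_b, bins=32):
    """NMI = (H(A) + H(B)) / H(A, B), computed from a joint intensity
    histogram of the two patches (one common normalization; illustrative)."""
    joint, _, _ = np.histogram2d(patch_a.ravel(), patch_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())
```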
  • the corresponding feature point selector 381 may select a corresponding feature point among candidate feature points based on the similarity determination result of the similarity determiner 371.
  • The corresponding feature point selector 381 may form a corresponding feature point pair by selecting, as the corresponding feature point for a feature point of the reference image, the candidate feature point of the remaining image having the greatest similarity.
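  • The following sketch combines the locality constraint (a search block around the reference feature point) with the similarity score, reusing the NMI helper sketched above; the function name and the max_dist parameter are hypothetical.

```python
import numpy as np


def select_corresponding_point(ref_pt, ref_patch, cand_pts, cand_patches, max_dist):
    """Among candidate feature points lying within max_dist of the reference
    point's position, return the one whose patch is most similar."""
    best_pt, best_score = None, -np.inf
    for pt, patch in zip(cand_pts, cand_patches):
        if np.hypot(pt[0] - ref_pt[0], pt[1] - ref_pt[1]) > max_dist:
            continue  # outside the search block around the reference feature point
        score = normalized_mutual_information(ref_patch, patch)
        if score > best_score:
            best_pt, best_score = pt, score
    return best_pt
```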
  • FIG. 4 illustrates an example in which feature points are respectively detected from a first image I1 and a second image I2, and a pair of corresponding feature points is selected based on the first image I1 as a reference image.
  • the candidate feature points f21, f22, and f23 of the second image I2 are selected with respect to the feature point f1, which is one of the plurality of feature points of the first image I1.
  • the candidate feature points f21, f22, and f23 are feature points positioned in an area CFA within a predetermined distance from a position corresponding to the feature point f1 of the second image I2.
  • the estimator 391 may estimate the first transform function H1 based on the selected corresponding feature points.
  • the estimator 391 may estimate the first transform function H1 using a random sample consensus (RANSAC) or a locally optimized RANSAC (LO-RANSAC) algorithm.
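  • A minimal sketch of the RANSAC-based estimation, assuming OpenCV's findHomography (the patent also mentions LO-RANSAC, which is not shown here); the reprojection threshold value is an arbitrary illustration.

```python
import cv2
import numpy as np


def estimate_first_transform(ref_pts, corr_pts):
    """Fit the first transform function H1 (a 3x3 matrix) that maps the
    corresponding feature points of the remaining image onto the reference image."""
    src = np.float32(corr_pts).reshape(-1, 1, 2)
    dst = np.float32(ref_pts).reshape(-1, 1, 2)
    H1, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H1
```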
  • the first transform function H1 may be expressed as Equation 2 below.
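  • The equation is not reproduced in this text; a standard 3 × 3 form consistent with the component names h11 to h33 used in the next paragraph would be:

$$H_1 = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix}$$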
  • The components h11 to h33 of the first transform function H1 encode rotation information indicating the angle of rotation, translation information indicating how far to move in the x, y, and z directions, and scaling information indicating how much to change the size in the x, y, and z directions.
  • FIG. 5 is a block diagram schematically illustrating a zoom transform function estimating unit according to an embodiment of the present invention.
  • The zoom transform function estimator 303 may estimate the third transform function H3 by adjusting the first transform function H1 based on the zoom information Z1 and Z2 of the first image sensor 10 and the second image sensor 20.
  • the zoom information Z1 and Z2 may be parameters representing a zoom magnification.
  • the zoom transform function estimator 303 may include a scale determiner 313, a second transform function estimator 353, and a third transform function estimator 373.
  • the scale determiner 313 may determine the scale conversion factor S corresponding to the zoom information Z1 and Z2.
  • The scale conversion coefficient S is a coefficient representing the degree of conversion of the image size. Because the size and focal length of each zoom section of the zoom lens differ for each image sensor, the image size conversion ratio corresponding to a given zoom magnification may differ for each image sensor. Accordingly, the scale determiner 313 may determine the scale conversion coefficient S corresponding to the zoom magnification of each image sensor by using a graph or a look-up table, previously stored in a memory or the like, that represents the relationship between zoom magnification and scale conversion coefficient for that image sensor.
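  • For illustration, such a per-sensor look-up table could be used as below; the table values are entirely hypothetical and would in practice come from calibration of each zoom lens.

```python
import numpy as np

# Hypothetical zoom-magnification -> scale-conversion-coefficient table for one
# image sensor (real values are calibrated and stored per sensor in memory).
ZOOM_MAGNIFICATIONS = [1.0, 2.0, 4.0, 8.0]
SCALE_COEFFICIENTS = [1.00, 2.05, 4.20, 8.60]


def scale_conversion_coefficient(zoom):
    """Interpolate the scale conversion coefficient S for a given zoom magnification."""
    return float(np.interp(zoom, ZOOM_MAGNIFICATIONS, SCALE_COEFFICIENTS))
```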
  • the second transform function estimator 353 may estimate the second transform function H2 by adjusting the first transform function H1 based on the scale transform coefficient S.
  • When either the first image or the second image is used as the reference image and the remaining image is matched to the coordinates of the reference image, the h11 and h22 components of the first transform function H1 contain the size conversion information in the x and y directions, respectively. Accordingly, the second transform function H2 may be expressed as in Equation 3 by dividing the h11 and h22 components of the first transform function H1 by the scale conversion coefficient S.
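  • A sketch of the adjustment described for Equation 3 (the equation itself is not reproduced here): divide the x- and y-scaling components of H1 by the scale conversion coefficient S; H1 is assumed to be a NumPy 3x3 array.

```python
def estimate_second_transform(H1, S):
    """Adjust the first transform function by the scale conversion coefficient S."""
    H2 = H1.copy()
    H2[0, 0] /= S  # h11: size conversion in the x direction
    H2[1, 1] /= S  # h22: size conversion in the y direction
    return H2
```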
  • The third transform function estimator 373 may extract an offset value O between the first image and the second image matched by the second transform function H2, and may estimate the third transform function H3 from the second transform function H2 based on the offset value O.
  • The offset value O represents the degree of center misalignment between the first image and the second image after they have been matched and aligned by the second transform function H2.
  • When either the first image or the second image is used as the reference image and the remaining image is aligned to the coordinates of the reference image, the h13 and h23 components of the first transform function H1 contain the translation information that shifts the x and y coordinates.
  • An offset value O = (tx, ty) may be extracted from the center coordinates (x1, y1) of the first image and the center coordinates (x2, y2) of the second image, as in Equation 4 below. Accordingly, the third transform function H3 may be expressed as in Equation 5 by adding the offset values tx and ty to the h13 and h23 components of the second transform function H2, respectively.
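  • A sketch of Equations 4 and 5 as described above; the exact equations and the sign convention are not reproduced in this text, so the subtraction order below is an assumption. center1 is the center of the first (reference) image and center2_aligned is the center of the second image after alignment by H2.

```python
def estimate_third_transform(H2, center1, center2_aligned):
    """Re-center the scaled transform: add the center offset (tx, ty) to the
    translation components h13 and h23 of the second transform function H2."""
    tx = center1[0] - center2_aligned[0]
    ty = center1[1] - center2_aligned[1]
    H3 = H2.copy()
    H3[0, 2] += tx  # h13: translation in the x direction
    H3[1, 2] += ty  # h23: translation in the y direction
    return H3
```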
  • FIG. 6 illustrates a first image (a) photographed by the first image sensor 10 at 1x zoom, a second image (b) photographed by the second image sensor 20 whose zoom state has changed from 1x to 2x, and the resulting aligned image (c).
  • Because the second transform function H2 includes only image size conversion information, the center (x1, y1) of the first image (a), which is the reference image, and the center (x2, y2) of the size-converted second image (b') are misaligned.
  • FIG. 7 illustrates a first image (a) photographed by the first image sensor 10 at 1x zoom, a second image (b) photographed by the second image sensor 20 whose zoom state has changed from 1x to 2x, and the resulting aligned image (d).
  • Because the third transform function H3 includes both image size conversion information and translation information, the center of the first image (a), which is the reference image, and the center of the size-converted second image (b') are aligned.
  • FIG. 8 is a flowchart illustrating an image registration method according to an embodiment of the present invention.
  • FIG. 9 is a flowchart for explaining a method of selecting corresponding feature points of FIG. 8.
  • The image matching device may estimate a first transform function H1 based on feature point information extracted from a first image photographed by the first image sensor and a second image photographed by the second image sensor (S80A).
  • the image matching device may detect feature points F1 and F2 of the first image and the second image (S81).
  • Feature points may include corners, edges, contours, line intersections, and the like.
  • the image matching apparatus may select corresponding feature points between the detected feature points F1 and F2 of the first image and the second image (S82).
  • the image registration device may acquire a patch image centering on the feature points F1 and F2 of each of the first and second images (S821).
  • the image matching apparatus may select candidate feature points of the remaining images that may correspond to the feature points of the reference image which is one of the first image and the second image. For example, when the first image is a reference image, candidate feature points of the second image that can correspond to the feature points of the first image may be selected.
  • Candidate feature points may be selected based on locality, eg, distance between feature points.
  • the image matching apparatus may determine the similarity between the patch image of the feature point of the reference image and the patch images of the candidate feature points of the remaining image.
  • The degree of similarity may be determined using normalized mutual information and gradient direction information between the patch images.
  • the image matching apparatus may select a corresponding feature point corresponding to the feature point of the reference image from among the candidate feature points based on the similarity determination result (S827). For example, the image matching apparatus may select candidate feature points having the greatest similarity with the reference image feature points as corresponding feature points of the reference image feature points.
  • the image matching apparatus may estimate the first transform function H1 based on the selected corresponding feature points (S83).
  • the image matching device may select the first transform function H1 as the final transform function H (S87).
  • the image matching device may match the first image and the second image with the final transform function H (S88).
  • When the zoom state changes, the image matching device may estimate the third transform function H3 by adjusting the first transform function H1, which was estimated before the zoom state change, based on the zoom information Z1 and Z2 of the first image sensor and the second image sensor (S80B).
  • the image matching device may determine the scale conversion factor S corresponding to the change of the zoom state.
  • the image matching device may previously store a graph or a look-up table indicating a relationship between the zoom magnification and the scale conversion coefficient for each image sensor, and determine the scale conversion factor S corresponding to the zoom magnification of the corresponding image sensor by using the same.
  • the image matching device may estimate the second transform function H2 by adjusting the first transform function H1 based on the scale transform coefficient S (S85).
  • the image matching device may estimate the second transform function H2 by applying the scale transform coefficient S to a component including the size transform information in the x and y directions among the components of the first transform function H1.
  • The image matching device may estimate the third transform function H3 by adjusting the second transform function H2 based on the offset value between the first image and the second image matched and aligned by the second transform function H2 (S86).
  • The image matching device may estimate the third transform function H3 from the second transform function H2 by applying the offset value to the components (h13, h23) that contain the translation information in the x and y directions.
  • the image matching device may select the third transform function H3 as the final transform function H until a new first transform function H1 is estimated (S87).
  • the image matching device may select the newly estimated first transform function H1 as the final transform function H.
  • the image matching device may match the first image and the second image with the final transform function H (S88).
  • The image matching device may estimate the first transform function H1 by detecting feature points in a region of interest rather than in the entire first image and second image.
  • The region of interest may be a region where the photographing regions of the first image and the second image overlap. As a result, the image matching device can reduce the amount of computation and the computation time required for transform function estimation.
  • In the above description, the first image is a visible image and the second image is a thermal image, by way of example.
  • However, embodiments of the present invention are not limited thereto; the first image and the second image may differ in other respects.
  • Embodiments of the present invention may be applied equally to images obtained from sensors having different characteristics other than a visible light camera and an infrared (thermal) camera.
  • the image matching method according to the present invention can be embodied as computer readable codes on a computer readable recording medium.
  • Computer-readable recording media include all kinds of recording devices that store data that can be read by a computer system. Examples of computer-readable recording media include ROM, RAM, CD-ROM, magnetic tape, floppy disks, optical data storage devices, and the like.
  • the computer readable recording medium can also be distributed over network coupled computer systems so that the computer readable code is stored and executed in a distributed fashion.
  • functional programs, codes, and code segments for implementing the present invention can be easily inferred by programmers in the art to which the present invention belongs.
  • The above-described embodiments are applicable to boundary area surveillance such as the GOP, surveillance requiring 24-hour real-time monitoring such as forest fire monitoring, detection of building and residential intrusion in no-light or low-light environments, tracking of missing persons and criminals in places such as mountains, medical imaging, and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to an image matching apparatus and an image matching method using the same. An image matching apparatus according to an embodiment of the invention may comprise a first transform function estimation unit for estimating a first transform function on the basis of feature point information extracted from a first image photographed by a first image sensor and a second image photographed by a second image sensor, and a zoom transform function estimation unit for estimating a third transform function adjusted from the first transform function on the basis of zoom information of the first image sensor and the second image sensor.
PCT/KR2013/008936 2013-08-20 2013-10-08 Image matching apparatus and image matching method using the same WO2015026002A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2013-0098608 2013-08-20
KR1020130098608A KR102050418B1 (ko) 2013-08-20 2013-08-20 Image matching device and image matching method using the same

Publications (1)

Publication Number Publication Date
WO2015026002A1 true WO2015026002A1 (fr) 2015-02-26

Family

ID=52483768

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2013/008936 WO2015026002A1 (fr) 2013-08-20 2013-10-08 Image matching apparatus and image matching method using the same

Country Status (2)

Country Link
KR (1) KR102050418B1 (fr)
WO (1) WO2015026002A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110246100A (zh) * 2019-06-11 2019-09-17 山东师范大学 一种基于角度感知块匹配的图像修复方法和系统

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101885562B1 (ko) * 2017-12-18 2018-08-06 주식회사 뷰노 제1 의료 영상의 관심 영역을 제2 의료 영상 위에 맵핑하는 방법 및 이를 이용한 장치
KR102667740B1 (ko) * 2018-02-12 2024-05-22 삼성전자주식회사 영상 정합 방법 및 장치

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20100138202A (ko) * 2009-06-24 2010-12-31 전자부품연구원 이종 카메라를 이용한 객체 추적 시스템 및 방법
KR20110116777A (ko) * 2010-04-20 2011-10-26 국방과학연구소 가시광선 및 적외선 영상신호 융합장치 및 그 방법
KR101109695B1 (ko) * 2010-10-20 2012-01-31 주식회사 아이닉스 고속 오토 포커스 제어 장치 및 그 방법
JP2012165333A (ja) * 2011-02-09 2012-08-30 Sony Corp 撮像装置、および撮像装置制御方法、並びにプログラム
KR101279484B1 (ko) * 2012-04-18 2013-06-27 한국항공우주연구원 영상 처리 장치 및 방법

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20100138202A (ko) * 2009-06-24 2010-12-31 전자부품연구원 이종 카메라를 이용한 객체 추적 시스템 및 방법
KR20110116777A (ko) * 2010-04-20 2011-10-26 국방과학연구소 가시광선 및 적외선 영상신호 융합장치 및 그 방법
KR101109695B1 (ko) * 2010-10-20 2012-01-31 주식회사 아이닉스 고속 오토 포커스 제어 장치 및 그 방법
JP2012165333A (ja) * 2011-02-09 2012-08-30 Sony Corp 撮像装置、および撮像装置制御方法、並びにプログラム
KR101279484B1 (ko) * 2012-04-18 2013-06-27 한국항공우주연구원 영상 처리 장치 및 방법

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110246100A (zh) * 2019-06-11 2019-09-17 山东师范大学 一种基于角度感知块匹配的图像修复方法和系统
CN110246100B (zh) * 2019-06-11 2021-06-25 山东师范大学 一种基于角度感知块匹配的图像修复方法和系统

Also Published As

Publication number Publication date
KR102050418B1 (ko) 2019-12-02
KR20150021352A (ko) 2015-03-02

Similar Documents

Publication Publication Date Title
KR102144394B1 (ko) 영상 정합 장치 및 이를 이용한 영상 정합 방법
WO2018128355A1 (fr) Robot et dispositif électronique servant à effectuer un étalonnage œil-main
US7733404B2 (en) Fast imaging system calibration
WO2021112406A1 (fr) Appareil électronique et procédé de commande associé
WO2012124852A1 (fr) Dispositif de caméra stéréo capable de suivre le trajet d'un objet dans une zone surveillée, et système de surveillance et procédé l'utilisant
WO2019164379A1 (fr) Procédé et système de reconnaissance faciale
WO2015005577A1 (fr) Appareil et procédé d'estimation de pose d'appareil photo
WO2013151270A1 (fr) Appareil et procédé de reconstruction d'image tridimensionnelle à haute densité
WO2018135906A1 (fr) Caméra et procédé de traitement d'image d'une caméra
WO2016072625A1 (fr) Système de contrôle d'emplacement de véhicule pour parc de stationnement utilisant une technique d'imagerie, et son procédé de commande
JP2016187162A (ja) 情報処理装置、情報処理方法、及びプログラム
JP2019159739A (ja) 画像処理装置、画像処理方法およびプログラム
WO2015026002A1 (fr) Appareil d'appariement d'images et procédé d'appariement d'images au moyen de cet appareil
JP2010217984A (ja) 像検出装置及び像検出方法
CN111696143A (zh) 一种事件数据的配准方法与系统
WO2023149603A1 (fr) Système de surveillance par images thermiques utilisant une pluralité de caméras
WO2019194561A1 (fr) Procédé et système de reconnaissance d'emplacement pour fournir une réalité augmentée dans un terminal mobile
WO2019083068A1 (fr) Système d'acquisition d'informations tridimensionnelles à l'aide d'une pratique de lancement, et procédé de calcul de paramètres de caméra
WO2012074174A1 (fr) Système utilisant des données d'identification originales pour mettre en oeuvre une réalité augmentée
KR20220115223A (ko) 다중 카메라 캘리브레이션 방법 및 장치
KR20220114820A (ko) 영상 내의 카메라 움직임 제거 시스템 및 방법
WO2017007047A1 (fr) Procédé et dispositif de compensation de la non-uniformité de la profondeur spatiale en utilisant une comparaison avec gigue
JP6664078B2 (ja) 3次元侵入検知システムおよび3次元侵入検知方法
WO2020111353A1 (fr) Procédé et appareil pour détecter un équipement d'invasion de confidentialité et système associé
WO2019168280A1 (fr) Procédé et dispositif permettant de déchiffrer une lésion à partir d'une image d'endoscope à capsule à l'aide d'un réseau neuronal

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13891942

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13891942

Country of ref document: EP

Kind code of ref document: A1