KR101636481B1 - Method And Apparatus for Generating Compound View Image - Google Patents

Method And Apparatus for Generating Compound View Image

Info

Publication number
KR101636481B1
KR101636481B1 (application KR1020150108786A)
Authority
KR
South Korea
Prior art keywords
image
images
camera
sorting
unit
Prior art date
Application number
KR1020150108786A
Other languages
Korean (ko)
Inventor
김황남
김현순
이동규
Original Assignee
고려대학교 산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 고려대학교 산학협력단 filed Critical 고려대학교 산학협력단
Priority to KR1020150108786A
Application granted
Publication of KR101636481B1

Classifications

    • H04N5/23238
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation

Abstract

Disclosed is a compound view image generation method. The purpose of the present invention is to provide a method and an apparatus for producing an image or moving picture with a wider angle of view than the camera's own angle of view, by fusing images in their overlapping areas while a plurality of images is captured over a predetermined rotation range using a single camera. The compound view image generation method according to an embodiment of the present invention comprises: obtaining a plurality of captured images taken while one camera, or a lens attached to the camera, rotates over the predetermined rotation range; selecting a plurality of selected images from among the captured images so that the entire shooting area corresponding to the predetermined rotation range is included; detecting, in each of the selected images, an area that overlaps another selected image; and generating a result image or result moving picture by fusing each selected image with the other selected images based on the detected overlapping areas.

Description

METHOD AND APPARATUS FOR GENERATING COMPOUND VIEW IMAGE

Field of the Invention [0002] The present invention relates to a method and apparatus for generating an image or moving image with a wider angle of view than the camera's original viewing angle using a single camera, and more particularly, to a method and apparatus for generating such a wide-angle image or moving image by fusing, through image processing, multiple images taken by one camera.

Most digital cameras have a panorama shooting function that combines multiple consecutive pictures into a single wide-angle photograph. To obtain a smooth, natural panorama, the photographer must ensure that each picture overlaps the previously taken one as much as possible. However, since the photographer has to move his or her body for every shot, it is not easy to repeatedly acquire a high-quality panorama of exactly the same shooting area.

As described above, it is difficult to repeatedly or automatically capture high-quality images and moving images wider than a camera's angle of view using ordinary shooting techniques. Therefore, to capture such content, several cameras are typically calibrated in advance for position and settings, and the frames of the videos taken by each camera are joined into a single wide-angle picture; image processing is then applied to turn the wide-angle frames into a moving image. This technique is image stitching: attaching photographs to one another to create a new, larger photograph. To perform image stitching, the Scale Invariant Feature Transform (SIFT) algorithm and the Speeded Up Robust Features (SURF) algorithm have mainly been used.

However, installing a plurality of cameras is costly, and the conventional algorithms require a large number of operations to select feature points and extract descriptors, so they are not suited to fusing a plurality of images at high speed or to generating a fused video. Therefore, there is a growing demand for an algorithm that can reduce the amount of computation, perform image stitching more quickly, and generate a compound view image using a single camera.

A related prior art is Korean Registration No. 10-1454803, "Homography estimation apparatus and method" (publication date: 2014.07.01).

The present invention provides a method and apparatus for generating an image or moving image with a wider angle of view than the camera's original angle of view, by fusing images that have overlapping areas while a plurality of images is captured over a predetermined rotation range using a single camera.

In addition, the present invention provides a new algorithm capable of fusing images having overlapping regions while significantly reducing the amount of computation compared to existing algorithms, so that images can be fused in real time or at high speed.

The problems to be solved by the present invention are not limited to those mentioned above, and other unmentioned problems will be clearly understood by those skilled in the art from the following description.

In order to achieve the above object, the present invention provides a compound view image generating method comprising: obtaining a plurality of captured images taken while one camera, or a lens attached to one camera, rotates over a predetermined rotation range; selecting a plurality of selected images from among the plurality of captured images so that the entire shooting area corresponding to the predetermined rotation range is included; detecting, in each of the plurality of selected images, an area that overlaps another selected image; and generating a result image or result moving image by fusing each selected image with the other selected images based on the detected overlapping area.

Advantageously, the step of detecting the overlapping region may detect the overlapping area based on the number of pixels in the horizontal direction of the selected image, the rotational speed of the camera or of the lens attached to the camera, the frame serial numbers of the selected images to be fused, and the number of frames per second of the captured image.

Preferably, the overlapping area can be calculated by Equation (1).

[Equation 1]

D = S − ω(k_n − k_m) / v

Here, D is the number of pixels in the horizontal direction of the overlapping region, S is the number of pixels in the horizontal direction of the selected image, ω is the rotational speed (in pixels per second) of the camera or of the lens attached to the camera, k_m is the frame serial number of the m-th selected image, k_n is the frame serial number of the n-th selected image, and v is the number of frames per second of the image captured by the camera.

Advantageously, the step of generating the result image or result moving image comprises: selecting feature points within the overlapping region of each selected image to be fused; extracting a descriptor based on the brightness of a plurality of pixels adjacent to each feature point; and fusing the plurality of selected images using the selected feature points and the extracted descriptors.

Preferably, the step of selecting feature points within the overlapping region includes: selecting, for each of the selected images to be fused, candidate corner points, that is, candidates for corner points at which the curvature of an edge is equal to or greater than a predetermined threshold, within the overlapping region; determining a candidate corner point to be a corner point if, among the candidate peripheral points surrounding the candidate corner point at a predetermined distance, a predetermined number or more are darker than a first reference brightness or brighter than a second reference brightness; and selecting the determined corner points as the feature points.

Preferably, the method further includes: classifying the plurality of candidate peripheral points into four sets of pixels, one per direction; calculating the average brightness of the pixels in each of the four classified sets; and determining whether the candidate corner point is a corner point based on whether each of the four calculated average values is darker than the first reference brightness or brighter than the second reference brightness.

Preferably, the step of fusing the plurality of selected images includes: generating, based on the selected feature points and the extracted descriptors, homography information indicating the transformation relationship between the selected images to be fused; transforming each of the selected images to be fused using the generated homography information; and fusing the selected images to be fused into one selected image based on the similarity of their feature points and descriptors.

Preferably, the method further comprises searching a DB, for each of the plurality of selected images, for a stored same selected image together with its feature point and descriptor information; the step of fusing the plurality of selected images may then reuse the feature point and descriptor information of the same selected image when, according to the search result, such an image exists.

In order to achieve the above object, the compound view image generating apparatus provided by the present invention comprises: an acquisition unit that acquires a plurality of captured images taken while one camera, or a lens attached to one camera, rotates over a predetermined rotation range; a selection unit that selects a plurality of selected images so that the entire shooting area corresponding to the predetermined rotation range is included among the plurality of captured images; a detection unit that detects, in each of the plurality of selected images, an area overlapping another selected image; and a generation unit that generates a result image or result moving image by fusing each selected image with at least one other selected image based on the detected overlapping area.

Preferably, the detection unit can detect the overlapping area based on the number of pixels in the horizontal direction of the selected image, the rotational speed of the camera or of the lens attached to the camera, the frame serial numbers of the selected images to be fused, and the number of frames per second of the captured image.

Preferably, the generation unit includes: a feature point selection unit for selecting feature points within the overlapping area of each selected image to be fused; a descriptor extraction unit for extracting descriptors based on the brightness of a plurality of pixels adjacent to the feature points; and a fusion unit for fusing the plurality of selected images using the selected feature points and the extracted descriptors.

Preferably, the feature point selection unit includes: a candidate selection unit for selecting, within the overlapping area of each selected image to be fused, candidate corner points at which the curvature of an edge is equal to or greater than a predetermined threshold; a determination unit that determines a candidate corner point to be a corner point if, among the candidate peripheral points surrounding it at a predetermined distance, a predetermined number or more are darker than a first reference brightness or brighter than a second reference brightness; and a feature point determination unit for selecting the determined corner points as the feature points.

Preferably, the feature point selection unit further includes a group classification unit that classifies the plurality of candidate peripheral points into four sets of pixels, one per direction, and the determination unit may further determine whether the candidate corner point is a corner point based on whether each of the four calculated average values is darker than the first reference brightness or brighter than the second reference brightness.

Preferably, the fusion unit includes: an information generating unit that generates, based on the selected feature points and the extracted descriptors, homography information indicating the transformation relationship between the selected images to be fused; an image transform unit for transforming each of the selected images to be fused using the generated homography information; and an image fusion unit for fusing the selected images to be fused into a single selected image based on the similarity of their feature points and descriptors.

Preferably, the apparatus further includes an image search unit that searches, for each of the plurality of selected images, whether a same selected image and its feature point and descriptor information are stored; the image fusion unit can then reuse the feature point and descriptor information of the same selected image when such an image exists according to the search result.

The present invention has the effect of generating, in real time or very quickly, an image or moving image with a wider angle of view than the camera's own shooting angle, using a single camera.

1 is a flowchart illustrating a method of generating a compound view image according to an exemplary embodiment of the present invention.
2 is a flowchart of a method for generating a resultant image or a resulting moving image according to an embodiment of the present invention.
3 is a flowchart illustrating a method of selecting feature points according to an exemplary embodiment of the present invention.
4 is a flowchart of a method of fusing a plurality of sorted images according to an embodiment of the present invention.
5 is a view for explaining an apparatus for generating a compound view image according to an embodiment of the present invention.
6 is a view for explaining a generator according to an embodiment of the present invention.
FIG. 7 is a diagram illustrating a feature point selection unit according to an embodiment of the present invention.
8 is a view for explaining a rotation speed of a camera according to an embodiment of the present invention.
FIG. 9 is a view for explaining a candidate peripheral point according to an embodiment of the present invention.

While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that the invention is not intended to be limited to the particular embodiments, but includes all modifications, equivalents, and alternatives falling within the spirit and scope of the invention. Like reference numerals are used for like elements in describing each drawing.

The terms first, second, A, B, etc. may be used to describe various elements, but the elements should not be limited by these terms. The terms are used only to distinguish one component from another. For example, without departing from the scope of the present invention, a first component may be referred to as a second component, and similarly, a second component may be referred to as a first component. The term "and/or" includes any combination of, or any one of, a plurality of related listed items.

It is to be understood that when an element is referred to as being "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or intervening elements may be present. On the other hand, when an element is referred to as being "directly connected" or "directly coupled" to another element, it should be understood that there are no intervening elements.

The terminology used in this application is used only to describe specific embodiments and is not intended to limit the invention. Singular expressions include plural expressions unless the context clearly dictates otherwise. In this application, terms such as "comprises" or "having" specify the presence of a feature, number, step, operation, element, component, or combination thereof described in the specification, but do not preclude the presence or addition of one or more other features, numbers, steps, operations, elements, components, or combinations thereof.

Unless defined otherwise, all terms used herein, including technical and scientific terms, have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Terms such as those defined in commonly used dictionaries are to be interpreted as having a meaning consistent with their meaning in the context of the related art, and are not to be interpreted in an idealized or overly formal sense unless expressly so defined in this application.

In addition, the camera referred to in the present invention should be interpreted as including all kinds of cameras and image pickup devices applicable to the present invention, and should not be construed to be limited to a specific type of camera.

Hereinafter, preferred embodiments according to the present invention will be described in detail with reference to the accompanying drawings.

1 is a flowchart illustrating a method of generating a compound view image according to an exemplary embodiment of the present invention.

In step S110, the compound view image generating apparatus acquires a plurality of shot images taken by one camera or a lens attached to one camera rotated by a predetermined rotation range.

Here, the 'compound view image' refers to an image created by fusing a plurality of images captured by a camera.

Further, a camera or a lens attached to the camera may be manufactured in a form applicable to the present invention, or may be used in conjunction with an additional mechanism or apparatus which is applicable to the present invention.

For example, the camera may be a camera (e.g., a PTZ camera) manufactured to rotate the lens unit itself, or may be used with a pedestal that rotates the camera while the camera is mounted. In addition, a lens separately attached to the camera may be manufactured so that the photographing direction can be adjusted.

In addition, the predetermined rotation range may be a rotation range of the camera set so as to correspond to the photographing area in which the user of the compound view image generating apparatus is interested. At this time, the direction of rotation is not limited to one of the horizontal direction, vertical direction, and other directions.

Further, the camera continuously captures images at a predetermined shooting speed (e.g., 30 fps) while rotating over the predetermined rotation range, and the compound view image generating device can acquire a plurality of captured images from the camera.

On the other hand, the compound view image generating device may be located inside the camera and acquire the plurality of captured images directly, or it may be located outside the camera and receive the plurality of captured images from it.

In step S120, a plurality of selection images are selected so that all the photographing areas corresponding to the predetermined rotation range are included among the plurality of photographing images acquired by the compound view image generating device.

Here, the 'selected image' means one of a subset of the captured images chosen by the compound view image generating device so that, together, the subset covers the entire shooting area.

For example, a plurality of images can be selected, from among the plurality of images captured at 30 fps while rotating over the predetermined rotation range, so that the entire shooting area corresponding to the rotation range is included.

In this case, any captured images may in principle serve as selected images as long as the entire corresponding shooting area is covered. However, in consideration of the efficiency of the subsequent image fusion process, it may be desirable to select them so that no region ends up covered by three or more selected images at once.
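The selection policy above can be sketched as follows. The function name, the even stepping, and the end-frame padding are illustrative assumptions; the patent only fixes the coverage constraint and the preference against triple overlap:

```python
def select_frames(total_frames, s, omega, v):
    """Choose frame serial numbers from the captured sequence so that the
    whole rotation range is covered: step just under one frame width
    (S pixels of pan) so adjacent selections still overlap, while no
    region falls inside three or more selections.  A sketch, not the
    patent's mandated policy."""
    step = max(1, int(s * v / omega) - 1)   # frames per ~S pixels of pan
    frames = list(range(0, total_frames, step))
    if frames[-1] != total_frames - 1:      # make sure the range end is covered
        frames.append(total_frames - 1)
    return frames

# 150 Full-HD frames at 30 fps, camera panning at 960 pixel/s:
print(select_frames(150, 1920, 960, 30))  # [0, 59, 118, 149]
```

Adjacent picks here are 59 frames apart, just under the 60 frames it takes the pan to traverse one full frame width, so each pair shares a thin overlap band for stitching.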

In step S130, the compound view image generating apparatus detects, in each of the plurality of selected images, an area that overlaps another selected image.

For example, when the camera rotates in a plane horizontal to the ground, the apparatus can detect, for each of the plurality of selected images chosen to cover the entire shooting area corresponding to the rotation range, whether there is a region overlapping another selected image.

In another embodiment, the step of detecting the overlapping area may detect it based on the number of pixels in the horizontal direction of the selected image, the rotational speed of the camera or of the lens attached to the camera, the frame serial numbers of the selected images to be fused, and the number of frames per second of the captured image.

Here, the number of pixels in the horizontal direction of the selected image may mean the horizontal size (in pixels) of the image captured by the camera.

Further, the rotational speed of the camera or of the lens attached to the camera means the number of pixels that a specific pixel (or object) moves per unit time across consecutive captured images, and its unit may be pixel/s.

In addition, the frame serial number of the selected image may be a serial number attached to all shot images taken by the camera, and may be used to identify the shooting order between shot images.

In addition, the number of frames per second of the image to be photographed may mean the photographing speed of the camera (e.g., 30 fps).

On the other hand, it can easily be understood that a time stamp, i.e., information on the generation time of each selected image, can be used in place of the frame serial number and the number of frames per second: dividing the difference between the frame serial numbers of two selected images by the number of frames per second yields the difference between their time stamps.

A method of detecting a redundant area will be described in detail later in the description of another embodiment.

On the other hand, when the camera rotates in a plane perpendicular to the ground, it is obvious that the overlapping region can be detected by the same principle using the number of pixels in the vertical direction of the selected image.

In another embodiment, the overlap region (the number of pixels in the horizontal direction of the overlap region) can be calculated by Equation (1).

[Equation 1]

D = S − ω(k_n − k_m) / v

Here, D is the number of pixels in the horizontal direction of the overlapping area, S is the number of pixels in the horizontal direction of the selected image, ω is the rotational speed of the camera or of the lens attached to the camera, k_m is the frame serial number of the m-th selected image, k_n is the frame serial number of the n-th selected image, and v is the number of frames per second of the image captured by the camera.

For example, referring to FIG. 8, suppose a camera rotating horizontally with respect to the ground shoots at Full-HD resolution (1920x1080), so that the number of pixels in the horizontal direction of the selected image is 1920; the rotational speed (i.e., the number of pixels the captured image shifts per second due to the rotation of the camera or lens) is 960 pixel/s; the difference between the frame serial numbers of the selected images to be fused is 15; and the number of frames per second is 30 fps. Then the horizontal width D of the overlapping region is 1920 − 960 × (15/30) = 1440 pixels. That is, 1440 of the 1920 horizontal pixels of the two selected images overlap each other.
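Equation (1) and the worked example translate directly into code. The function name is illustrative; the clamp to zero matches the text's treatment of non-positive results as "no overlap":

```python
def overlap_width(s, omega, k_m, k_n, v):
    """Equation (1): D = S - omega * (k_n - k_m) / v, the number of
    horizontal pixels shared by the m-th and n-th selected images.
    Non-positive values mean the two images do not overlap."""
    d = s - omega * (k_n - k_m) / v
    return int(d) if d > 0 else 0

# Full-HD frame, 960 pixel/s rotation, frame-number difference 15, 30 fps:
print(overlap_width(1920, 960, 0, 15, 30))  # 1440
```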

On the other hand, when the calculated overlapping area is 0 or smaller than 0, it can be determined that there is no overlapping area.

Finally, in step S140, the compound view image generating apparatus generates a result image or result moving image by fusing each selected image with the other selected images based on the detected overlapping regions.

For example, the compound view image generating apparatus repeatedly generates, at predetermined time intervals, result images by fusing the selected images, and can then combine the plurality of generated result images into a single moving image file to produce the result moving image.

On the other hand, a method of fusing the selected images based on the overlapping area will be described in detail below with reference to FIG. 2.

2 is a flowchart of a method for generating a resultant image or a resulting moving image according to an embodiment of the present invention.

In step S210, the compound view image generating apparatus selects feature points in the overlapping area of each selected image to be fused.

Here, a 'feature point' is a pixel having features that can also be found in other images, and the compound view image generating apparatus can select feature points within the overlapping region of each selected image.

Meanwhile, existing algorithms for selecting feature points include the Scale Invariant Feature Transform (SIFT) algorithm and the Speeded Up Robust Features (SURF) algorithm. However, since these conventional algorithms require a large amount of computation to select feature points, they are not suitable for image fusion that must be performed in real time or in a short time, as in the present invention.

Accordingly, in the present invention, a method of selecting feature points in the overlapping area by itself is proposed, which will be described in detail below with reference to FIG. 3.

In step S220, the compound view image generating apparatus extracts a descriptor based on the brightness of a plurality of pixels adjacent to the feature point.

Here, 'descriptor' refers to brightness information about surrounding pixels adjacent to the feature point. By using the descriptor, similarity can be judged on a region-by-region basis by using not only the feature points but also information on the surrounding regions of the feature points.

For example, to build a descriptor, pairs of pixels (p1, p2) around the feature point are selected, n pairs in total, and the brightness of the pixels in each pair is compared: the comparison outputs 1 if p1 is brighter than p2 and 0 otherwise, producing a binary result vector. Then, to compare with the descriptor of another selected image, the result vector extracted in the same manner from a feature point of the other selected image is compared, and the similarity of the two descriptors can be determined.
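A minimal sketch of this binary descriptor and its comparison follows. The pixel-pair offsets and the similarity measure (the fraction of agreeing bits, equivalent to Hamming distance) are illustrative choices, not fixed by the text:

```python
def binary_descriptor(img, x, y, pairs):
    """For each pair of offsets around feature (x, y), emit 1 if the first
    pixel is brighter than the second, else 0 (a BRIEF-style bit vector).
    `img` is a 2-D list of brightness values."""
    return [1 if img[y + dy1][x + dx1] > img[y + dy2][x + dx2] else 0
            for (dx1, dy1), (dx2, dy2) in pairs]

def descriptor_similarity(d1, d2):
    """Fraction of agreeing bits between two descriptors (1.0 = identical)."""
    return sum(a == b for a, b in zip(d1, d2)) / len(d1)

# Toy 5x5 brightness ramp and two comparison pairs around the centre pixel:
img = [[x + y for x in range(5)] for y in range(5)]
pairs = [((1, 0), (-1, 0)), ((0, 1), (0, -1))]
d = binary_descriptor(img, 2, 2, pairs)
print(d, descriptor_similarity(d, d))  # [1, 1] 1.0
```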

Finally, in step S230, the compound view image generating apparatus fuses a plurality of selected images using the selected feature points and the extracted descriptors.

A method of fusing a plurality of selected images using feature points and descriptors will be described in detail later with reference to FIG.

In another embodiment of the present invention, the method further comprises searching, for each of the plurality of selected images referred to by the compound view image generating apparatus, a DB in which same selected images and their feature point and descriptor information are stored; the step of fusing the plurality of selected images may reuse the feature point and descriptor information of the same selected image when such an image exists according to the search result.

Here, the 'same selected image' means a previously used selected image judged to be identical to a given selected image.

For example, if the compound view image generating apparatus selects feature points and extracts descriptors every time, the apparatus itself may become overloaded, and the same scene may be photographed repeatedly depending on the camera's shooting scene. Therefore, for this case, a step of searching for a previously fused selected image identical to the new selected image to be fused may be added.

That is, the image comparison may directly compare the pixel values of the respective images, or it may be possible to judge identity by comparing the images based on the camera's shooting area and the brightness of the pixels.

At this time, if the same selected image exists, it is pointless to find the feature points and descriptors of that selected image again, so the already stored feature point and descriptor information of the same selected image can be reused.
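The reuse step can be sketched as a cache keyed on image content. The SHA-1 digest as the identity test is an assumption made for illustration; the text also allows looser comparisons based on shooting area and brightness:

```python
import hashlib

class FeatureCache:
    """Stand-in for the DB of already-processed selected images: maps a
    digest of the image bytes to its previously computed feature points
    and descriptors, so a same selected image skips re-extraction."""
    def __init__(self):
        self._store = {}

    def get_or_compute(self, pixels, extract):
        key = hashlib.sha1(bytes(pixels)).hexdigest()
        if key not in self._store:          # extract only on a cache miss
            self._store[key] = extract(pixels)
        return self._store[key]

calls = []
def extract(pixels):                        # dummy feature/descriptor extractor
    calls.append(1)
    return ("features", "descriptors")

cache = FeatureCache()
cache.get_or_compute([1, 2, 3], extract)
cache.get_or_compute([1, 2, 3], extract)    # same image: reused, not recomputed
print(len(calls))  # 1
```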

3 is a flowchart illustrating a method of selecting feature points according to an exemplary embodiment of the present invention. On the other hand, steps S320 to S340 are optional steps that can be added to speed up the selection speed of the feature points.

In step S310, the compound view image generating apparatus selects, within the overlapping area of each selected image to be fused, candidate corner points, that is, candidates for corner points at which the curvature of an edge is equal to or greater than a predetermined threshold value.

Here, a 'corner point' means a pixel included in an edge of the selected image at which the curvature, i.e., the degree to which the edge bends around the pixel, is equal to or greater than a predetermined threshold value.

For example, since the candidate corner point is a candidate for becoming a corner point, the compound view image generating apparatus can arbitrarily select one pixel as the candidate corner point in the overlapping region.

In step S320, among a plurality of candidate peripheral points surrounding the candidate corner point at a predetermined pixel distance, the compound view image generating apparatus classifies the candidate peripheral points into four sets of pixels, one per direction.

For example, a total of 16 candidate peripheral points (1, 2, ..., 16) can be selected as shown in FIG. 9, surrounding the candidate corner point at a distance of one or two pixels from it.

At this time, among the 16 candidate peripheral points, the three pixels in each of the upward, rightward, downward, and leftward directions of the candidate corner point can be classified into four pixel sets: the upward pixels 16, 1, and 2 into the first set, the rightward pixels 4, 5, and 6 into the second set, the downward pixels 8, 9, and 10 into the third set, and the leftward pixels 12, 13, and 14 into the fourth set.
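Grouping by index, the classification in step S320 is a direct lookup; the index layout follows FIG. 9 as described above:

```python
# Three peripheral points per direction, indexed 1..16 around the candidate
# corner point as in FIG. 9.
DIRECTION_SETS = {
    "up":    (16, 1, 2),
    "right": (4, 5, 6),
    "down":  (8, 9, 10),
    "left":  (12, 13, 14),
}

def classify_peripheral(brightness):
    """Group peripheral-point brightness into the four directional sets.
    `brightness` maps point index (1..16) to a brightness value."""
    return {d: [brightness[i] for i in idxs]
            for d, idxs in DIRECTION_SETS.items()}

b = {i: 10 * i for i in range(1, 17)}
print(classify_peripheral(b)["up"])  # [160, 10, 20]
```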

In step S330, the compound view image generating apparatus calculates the average brightness of the pixels in each of the four classified pixel sets.

For example, the average brightness can be calculated for each of the four pixel sets classified above using Equation (2).

[Equation 2]

I_{p→16,1,2} = (I_{p→16} + I_{p→1} + I_{p→2}) / 3

I_{p→4,5,6} = (I_{p→4} + I_{p→5} + I_{p→6}) / 3

I_{p→8,9,10} = (I_{p→8} + I_{p→9} + I_{p→10}) / 3

I_{p→12,13,14} = (I_{p→12} + I_{p→13} + I_{p→14}) / 3

Here, I_{p→x} is the brightness of the x-th pixel in FIG. 9, and I_{p→a,b,c} is the average brightness of pixels a, b, and c in FIG. 9 (where x, a, b, and c are natural numbers from 1 to 16).
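As a rough illustration, the per-direction averaging of Equation (2) can be sketched as follows; the dictionary-based ring representation is an assumption for illustration, with the index-to-direction grouping taken from the sets described above:

```python
# Sketch of the Equation (2) directional averages, assuming the 16-point
# ring layout of FIG. 9: indices 1..16 around the corner candidate point,
# with (16, 1, 2) on top, (4, 5, 6) right, (8, 9, 10) bottom, (12, 13, 14) left.

def set_averages(ring_brightness):
    """ring_brightness: dict mapping ring index (1..16) -> pixel brightness.
    Returns the four per-direction averages I_{p->16,1,2}, I_{p->4,5,6},
    I_{p->8,9,10}, I_{p->12,13,14}."""
    sets = [(16, 1, 2), (4, 5, 6), (8, 9, 10), (12, 13, 14)]
    return [sum(ring_brightness[i] for i in s) / 3.0 for s in sets]
```

A uniform ring yields four identical averages, as expected.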

In step S340, the compound view image generating apparatus determines whether the corner candidate point can be a corner point based on whether each of the four calculated average values is darker than a first reference brightness or brighter than a second reference brightness. For example, unless three of the four set averages are darker than the first reference brightness and the remaining one is brighter than the second reference brightness, or conversely three are brighter than the second reference brightness and the remaining one is darker than the first reference brightness, the compound view image generating apparatus judges that the corner candidate point cannot be a corner point.
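The step-S340 decision can be sketched as below; the specific reference-brightness values used in the test are assumptions for illustration, not values from the specification:

```python
# Hedged sketch of the step-S340 pre-test: the candidate survives only if
# three of the four directional averages lie on one side (darker than the
# first reference brightness) and the remaining one on the other side
# (brighter than the second reference brightness), or vice versa.

def pretest_passes(averages, ref_dark, ref_bright):
    darker = sum(1 for a in averages if a < ref_dark)
    brighter = sum(1 for a in averages if a > ref_bright)
    return (darker == 3 and brighter == 1) or (brighter == 3 and darker == 1)
```

Candidates failing this cheap check are discarded without running the full corner test.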

If it is determined that the corner candidate point cannot be a corner point, the compound view image generating apparatus ends the feature point selection operation.

In this way, candidate points that cannot be corner points are excluded in advance by a simpler calculation, so that the amount of computation is reduced and operation time is saved.

Meanwhile, steps S320 to S340 are optional processes and may be skipped depending on the conditions or environment of the compound view image generating apparatus.

In step S350, when it is determined that the corner candidate point can be a corner point, the compound view image generating apparatus determines the corner candidate point to be a corner point if the brightness of a predetermined number or more of connected pixels among the candidate peripheral points is darker than a first threshold value or brighter than a second threshold value.

For example, if the brightness of nine connected pixels among the sixteen candidate peripheral points in FIG. 9 is darker than the first threshold value or brighter than the second threshold value, the candidate corner point can be determined as the corner point.

At this time, the first threshold indicates a pixel brightness darker than the second threshold. When nine connected pixels out of the total of sixteen are all darker than the first threshold or all brighter than the second threshold, this means that an edge whose curvature is equal to or greater than the predetermined threshold has been found around the candidate point.

Meanwhile, 'connected pixels' means not only pixels that are in direct contact with each other, such as pixels 1 and 2 or pixels 4 and 5 in FIG. 9, but also pixels that are in diagonal contact, such as pixels 2 and 3.
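The segment test of steps S350 and S360 (nine or more connected ring pixels all darker than the first threshold or all brighter than the second) can be sketched as follows, assuming the sixteen candidate peripheral points are given in ring order so that neighbors in the list, including the wrap-around, are 'connected':

```python
def is_corner(ring, t1, t2, n=9):
    """ring: list of 16 brightness values in ring order (adjacent entries,
    including the wrap-around, are connected). The candidate is a corner if
    n or more consecutive ring pixels are all darker than t1 or all brighter
    than t2. The thresholds t1 < t2 are assumptions for illustration."""
    for predicate in (lambda v: v < t1, lambda v: v > t2):
        run = 0
        for v in ring + ring[:n - 1]:  # extend the list to test wrap-around runs
            run = run + 1 if predicate(v) else 0
            if run >= n:
                return True
    return False
```

This mirrors the well-known FAST segment test, which the patent's procedure closely resembles.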

Finally, in step S360, the compound view image generating apparatus selects the determined corner point as a feature point.

That is, the compound view image generating apparatus can select a pixel detected as a corner point as a feature point.

FIG. 4 is a flowchart of a method of fusing a plurality of selected images according to an embodiment of the present invention.

In step S410, the compound view image generating apparatus generates homography information indicating the conversion relationship between the selected images to be fused, based on the selected feature points and the extracted descriptors.

For example, homography information is information that relates two images on the same plane, and may include information for making the two images coplanar, placing them on the same coordinate system, and applying other transformations (rotation, translation, and the like).

In step S420, the compound view image generating device converts each of the sorted images to be fused using the generated homography information.

For example, the homography information generated above can be used to transform the selected images, including making them coplanar, placing them on the same coordinate system, and applying other transformations (rotation, translation, and the like).

Finally, in step S430, the compound view image generating apparatus fuses the selected images to be fused into one image based on the similarity of their feature points and descriptors.

For example, since the two selected images to be fused, after transformation according to the homography information, now lie on the same coordinate system of the same plane, the two images can be fused into one image.
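The warp of step S420 amounts to mapping pixel coordinates through a 3×3 homography matrix. A minimal sketch is below; estimating the homography itself (typically from descriptor-matched feature points) is outside this fragment:

```python
import numpy as np

def apply_homography(H, points):
    """Map Nx2 pixel coordinates through a 3x3 homography H, as when each
    selected image is warped onto the common plane in step S420."""
    pts = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]                 # perspective divide
```

With H equal to the identity, points are unchanged; a pure-translation H shifts every point by the same offset.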

FIG. 5 is a view for explaining an apparatus for generating a compound view image according to an embodiment of the present invention.

Referring to FIG. 5, the compound view image generating apparatus 500 includes an acquiring unit 510, a selecting unit 520, a detecting unit 530, and a generating unit 540.

The acquiring unit 510 acquires a plurality of photographed images taken while one camera, or a lens attached to the camera, is rotated over a predetermined rotation range.

The selection unit 520 selects a plurality of selection images so that all the photographing regions corresponding to the predetermined rotation range are included in the plurality of photographing images.

The detecting unit 530 detects an area overlapping another sorting image in each of the plurality of sorting images.

In another embodiment, the detecting unit 530 may detect the overlapping area using the number of pixels in the horizontal direction of the selected image, the rotational speed of the camera or the lens attached to the camera, the frame serial numbers of the selected images to be fused, and the number of frames per second of the image taken by the camera.

In another embodiment, the overlapping area can be calculated by Equation (1).

Finally, the generating unit 540 generates a resultant image or a resultant moving image by fusing each selected image with at least one other selected image based on the detected overlapping region.

The generating unit 540 will be described later in detail with reference to FIG. 6.

FIG. 6 is a view for explaining the generating unit according to an embodiment of the present invention.

Referring to FIG. 6, the generating unit 540 of the compound view image generating apparatus 500 includes a feature point selecting unit 542, a descriptor extracting unit 544, and a fusion unit 546.

The feature point selecting unit 542 selects feature points within the overlapping region of the selected images to be fused.

The feature point selecting unit 542 will be described later in detail with reference to FIG. 7.

The descriptor extracting unit 544 extracts descriptors based on the brightness of a plurality of pixels adjacent to the feature points.
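A brightness-based descriptor in the spirit described here can be sketched as a BRIEF-like bit string of pairwise brightness comparisons; the random pixel-pair pattern below is an assumption for illustration, not the patent's exact scheme:

```python
import random

def brightness_descriptor(patch, n_bits=32, seed=0):
    """patch: 2-D list of brightness values around a feature point.
    Returns an n_bits-long bit string where each bit records whether one
    sampled pixel is darker than another (a BRIEF-like comparison)."""
    rng = random.Random(seed)           # fixed seed -> reproducible pair pattern
    h, w = len(patch), len(patch[0])
    bits = []
    for _ in range(n_bits):
        y1, x1 = rng.randrange(h), rng.randrange(w)
        y2, x2 = rng.randrange(h), rng.randrange(w)
        bits.append('1' if patch[y1][x1] < patch[y2][x2] else '0')
    return ''.join(bits)
```

Because the pair pattern is fixed by the seed, the same patch always yields the same descriptor, which is what makes descriptors comparable across images.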

The fusion unit 546 fuses a plurality of selected images using the selected feature points and the extracted descriptors.

In another embodiment, the fusion unit 546 may include an information generation unit (not shown), an image conversion unit (not shown), and an image fusion unit (not shown).

The information generation unit generates homography information indicating the conversion relationship between the selected images to be fused, based on the selected feature points and the extracted descriptors.

The image transform unit transforms each of the sorted images to be fused using the generated homography information.

The image fusion unit fuses the sorting image to be fused into one sorting image based on the similarity of each feature point and descriptor.

In yet another embodiment, the generator 540 may further include an image retrieval unit (not shown).

The image searching unit searches, for the plurality of selected images, whether an identical selected image exists and whether the feature point and descriptor information of that identical image is stored. According to the search result, when an identical selected image exists, the image fusion unit may reuse its feature point and descriptor information.
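A minimal sketch of this reuse idea follows, assuming image identity is decided by hashing the raw image bytes; the patent does not specify how sameness is judged, so the hashing key is an illustrative assumption:

```python
import hashlib

class FeatureCache:
    """Caches (keypoints, descriptors) per image so that identical selected
    images skip feature re-detection, as the image searching unit suggests."""

    def __init__(self, detect_fn):
        self._detect = detect_fn   # computes (keypoints, descriptors) for an image
        self._store = {}

    def get(self, image_bytes):
        key = hashlib.sha256(image_bytes).hexdigest()  # assumed identity test
        if key not in self._store:                     # detect only on first sight
            self._store[key] = self._detect(image_bytes)
        return self._store[key]
```

Feeding the same image twice runs detection once and returns the cached result the second time.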

FIG. 7 is a diagram illustrating the feature point selecting unit according to an embodiment of the present invention.

Referring to FIG. 7, the feature point selecting unit 542 includes a candidate selection unit 542a, a determination unit 542b, and a feature point determination unit 542c. It may optionally further include a set classification unit (not shown) and an operation unit (not shown).

The candidate selection unit 542a selects a corner candidate point that is a candidate of a corner point whose curvature of the edge is equal to or larger than a predetermined threshold value in the overlapping region for each sorting image to be fused.

The determination unit 542b determines the corner candidate point to be a corner point if the brightness of a predetermined number or more of connected pixels, among the plurality of candidate peripheral points surrounding the corner candidate point at a predetermined pixel distance, is darker than the first threshold value or brighter than the second threshold value.

The feature point determination unit 542c selects the determined corner point as a feature point.

The set classification unit classifies a plurality of candidate peripheral points into four pixel sets per direction.

The arithmetic unit calculates an average of the brightnesses of the pixels in each of the four sets of the classified pixels.

Meanwhile, when the feature point selecting unit 542 includes the set classification unit and the operation unit, the determination unit 542b may further determine whether the corner candidate point is a corner point based on whether each of the four calculated average values is darker than the first reference brightness or brighter than the second reference brightness.

The above-described embodiments of the present invention can be written as a program executable on a computer and implemented on a general-purpose digital computer that runs the program using a computer-readable recording medium.

Computer-readable recording media include magnetic storage media (e.g., ROM, floppy disks, hard disks, and the like) and optical reading media (e.g., CD-ROMs, DVDs, and the like).

The present invention has been described with reference to the preferred embodiments. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the disclosed embodiments should be considered in an illustrative rather than a restrictive sense. The scope of the present invention is defined by the appended claims rather than by the foregoing description, and all differences within the scope of equivalents thereof should be construed as being included in the present invention.

Claims (16)

Obtaining a plurality of photographed images photographed by rotating one camera or a lens attached to the one camera by a predetermined rotation range;
Selecting a plurality of selection images so that all the photographing areas corresponding to the predetermined rotation range are included in the plurality of photographing images;
Detecting an area overlapping another sorting image in each of the plurality of selected sorting images; And
Generating a resultant image or a resultant moving image by fusing the selected sorting image on the basis of the detected overlapping area;
The step of detecting the overlapping region
detects the overlapping area using the number of pixels in the horizontal direction of the selected image, the rotational speed of the camera or the lens attached to the camera, the frame serial number of the selected image to be fused, and the number of frames per second of the image taken from the camera.
delete

Obtaining a plurality of photographed images photographed by rotating one camera or a lens attached to the one camera by a predetermined rotation range;
Selecting a plurality of selection images so that all the photographing areas corresponding to the predetermined rotation range are included in the plurality of photographing images;
Detecting an area overlapping another sorting image in each of the plurality of selected sorting images; And
Generating a resultant image or a resultant moving image by fusing the selected sorting image on the basis of the detected overlapping area;
Wherein the overlap region is calculated according to Equation (1).
[Equation 1]
Figure 112016056597586-pat00007

Here, D is the number of pixels in the horizontal direction of the overlapping region, S is the number of pixels in the horizontal direction of the selected image, ω is the rotational speed of the camera or the lens attached to the camera, k_m is the frame serial number of the m-th selected image, k_n is the frame serial number of the n-th selected image, and v is the number of frames per second of the image photographed by the camera.
The method according to claim 1 or 3,
The step of generating the resultant image or the resulting moving image
Selecting keypoints within the overlapping region of the sorting image to be fused;
Extracting a descriptor based on brightness of a plurality of pixels adjacent to the feature point; And
Fusing the plurality of selected images using the selected feature points and the extracted descriptors;
And generating the compound view image.
5. The method of claim 4,
The step of selecting feature points in the overlapping region
Selecting a candidate corner point that is a candidate for a corner point at which the curvature of an edge is equal to or greater than a predetermined threshold value, within the overlapping region of each image to be fused;
Determining the candidate corner point as the corner point if the brightness of a predetermined number or more of connected pixels, among a plurality of candidate peripheral points surrounding the candidate corner point at a predetermined pixel distance, is darker than a first threshold value or brighter than a second threshold value; And
Selecting the feature point with the determined corner point;
And generating the compound view image.
6. The method of claim 5,
The step of selecting feature points in the overlapping region
Classifying the plurality of candidate peripheral points into a set of four pixels for each direction;
Calculating an average of the brightness of the pixels in each of the four sets of classified pixels; And
Determining whether the candidate corner point is the corner point based on whether each of the calculated four average values is darker than the first reference brightness or brighter than the second reference brightness, to generate the compound view image.
5. The method of claim 4,
The step of fusing the plurality of sorted images
Generating homography information indicating a conversion relationship between the selection images to be fused, based on the selected minutiae and the extracted descriptors;
Transforming each of the sorting images to be fused using the generated homography information; And
Fusing the sorting image to be fused into the single sorting image based on similarity between each of the minutiae points and the descriptor;
And generating the compound view image.
8. The method of claim 7,
Wherein the step of fusing the plurality of sorted images further comprises searching whether an identical sorting image exists among the plurality of sorted images and whether the feature point and descriptor information of the identical sorting image is stored, and
The step of fusing to the one sort image
reuses the feature point and descriptor information of the identical sorting image when the identical sorting image exists, according to the search result.
An acquiring unit acquiring a plurality of photographed images taken by one camera or a lens attached to the one camera rotated by a predetermined rotation range;
A selection unit that selects a plurality of selection images such that all the photographing regions corresponding to the predetermined rotation range are included in the plurality of photographing images;
A detecting unit detecting an overlapping area of each of the plurality of selected images with another sorting image; And
A generating unit for generating a resultant image or a resultant moving image by fusing the at least one different sorting image for each of the sorting images based on the detected overlapping region;
The detection unit
detects the overlapping area using the number of pixels in the horizontal direction of the selected image, the rotational speed of the camera or the lens attached to the camera, the frame serial number of the selected image to be fused, and the number of frames per second of the image taken from the camera.
delete

An acquiring unit acquiring a plurality of photographed images taken by one camera or a lens attached to the one camera rotated by a predetermined rotation range;
A selection unit that selects a plurality of selection images such that all the photographing regions corresponding to the predetermined rotation range are included in the plurality of photographing images;
A detecting unit detecting an overlapping area of each of the plurality of selected images with another sorting image; And
A generating unit for generating a resultant image or a resultant moving image by fusing the at least one different sorting image for each of the sorting images based on the detected overlapping region;
Wherein the overlap region is calculated according to Equation (1).
[Equation 1]
Figure 112016056597586-pat00008

Here, D is the number of pixels in the horizontal direction of the overlapping region, S is the number of pixels in the horizontal direction of the selected image, ω is the rotational speed of the camera or the lens attached to the camera, k_m is the frame serial number of the m-th selected image, k_n is the frame serial number of the n-th selected image, and v is the number of frames per second of the image photographed by the camera.
The apparatus according to claim 9 or 11,
The generating unit
A feature point selecting unit for selecting the minutiae in the overlapping area of the sorting image to be fused;
A descriptor extracting unit for extracting descriptors based on brightness of a plurality of pixels adjacent to the feature points; And
A fusion unit for fusing the plurality of selected images using the selected feature points and the extracted descriptors;
Wherein the compound view image generating device comprises:
13. The apparatus of claim 12,
The feature point selecting unit
A candidate selecting unit for selecting a candidate corner point that is a candidate for a corner point at which the curvature of an edge is equal to or greater than a predetermined threshold value, within the overlapping region of each image to be fused;
A determination unit that determines the candidate corner point as the corner point if the brightness of a predetermined number or more of connected pixels, among a plurality of candidate peripheral points surrounding the candidate corner point at a predetermined pixel distance, is darker than the first threshold value or brighter than the second threshold value; And
A minutia point determining unit for selecting the minutiae to the determined corner point;
Wherein the compound view image generating device comprises:
14. The apparatus of claim 13,
An aggregation classifier for classifying the plurality of candidate peripheral points into a set of four pixels for each direction; And
An operation unit for calculating an average of brightness of pixels in each of the four sets of pixels classified;
Further comprising:
The determination unit
Further determines whether the candidate corner point is the corner point based on whether each of the calculated four average values is darker than the first reference brightness or brighter than the second reference brightness.
15. The apparatus of claim 12,
The fusion unit
An information generating unit for generating homography information indicating a conversion relationship between the sorting images to be fused, based on the selected feature points and the extracted descriptors;
An image transform unit for transforming each of the sorting images to be fused by using the generated homography information; And
An image fusion unit for fusing the sorting image to be fused into the single sorting image based on similarity between each of the feature points and the descriptor;
Wherein the compound view image generating device comprises:
16. The apparatus of claim 15,
An image retrieving unit for retrieving whether an identical sorting image exists among the plurality of sorting images and whether the feature point and descriptor information of the identical sorting image is stored;
Further comprising:
The image fusion unit
Wherein the feature point and descriptor information of the same sorting image are reused when the same sorting image exists according to the search result.
KR1020150108786A 2015-07-31 2015-07-31 Method And Apparatus for Generating Compound View Image KR101636481B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150108786A KR101636481B1 (en) 2015-07-31 2015-07-31 Method And Apparatus for Generating Compound View Image


Publications (1)

Publication Number Publication Date
KR101636481B1 true KR101636481B1 (en) 2016-07-06

Family

ID=56502634

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150108786A KR101636481B1 (en) 2015-07-31 2015-07-31 Method And Apparatus for Generating Compound View Image

Country Status (1)

Country Link
KR (1) KR101636481B1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101915024B1 (en) * 2017-06-13 2018-11-06 삼성중공업(주) Apparatus and method for providing scanning information

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20080035163A (en) * 2006-10-18 2008-04-23 주식회사 메디슨 Ultrasound diagnostic apparatus and method for measuring size of target object
KR20140081639A (en) * 2012-12-20 2014-07-01 중앙대학교 산학협력단 Homography estimation apparatus and method
JP2015019397A (en) * 2014-09-03 2015-01-29 カシオ計算機株式会社 Imaging synthesizing apparatus, image synthesizing method and program
KR20150072090A (en) * 2013-12-19 2015-06-29 삼성전자주식회사 Apparatus for detecting region of interest and the method thereof



Similar Documents

Publication Publication Date Title
US9325899B1 (en) Image capturing device and digital zooming method thereof
JP4772839B2 (en) Image identification method and imaging apparatus
JP6371553B2 (en) Video display device and video display system
US8861806B2 (en) Real-time face tracking with reference images
US9373034B2 (en) Apparatus and method for tracking object
US9251439B2 (en) Image sharpness classification system
US8170368B2 (en) Correcting device and method for perspective transformed document images
JP4642128B2 (en) Image processing method, image processing apparatus and system
US8532337B2 (en) Object tracking method
JP2012088787A (en) Image processing device, image processing method
JP2008234653A (en) Object image detection method and image detection device
US10079974B2 (en) Image processing apparatus, method, and medium for extracting feature amount of image
KR102199094B1 (en) Method and Apparatus for Learning Region of Interest for Detecting Object of Interest
US9031355B2 (en) Method of system for image stabilization through image processing, and zoom camera including image stabilization function
US9020269B2 (en) Image processing device, image processing method, and recording medium
WO2016031573A1 (en) Image-processing device, image-processing method, program, and recording medium
KR101861245B1 (en) Movement detection system and method for multi sensor cctv panorama video
US20110085026A1 (en) Detection method and detection system of moving object
US9094617B2 (en) Methods and systems for real-time image-capture feedback
CN110120012B (en) Video stitching method for synchronous key frame extraction based on binocular camera
JP6511950B2 (en) Image processing apparatus, image processing method and program
US9392146B2 (en) Apparatus and method for extracting object
KR101636481B1 (en) Method And Apparatus for Generating Compound View Image
JP2018137667A (en) Camera calibration method, program and device
KR101718309B1 (en) The method of auto stitching and panoramic image genertation using color histogram

Legal Events

Date Code Title Description
E701 Decision to grant or registration of patent right
GRNT Written decision to grant