US20210174469A1 - Image stitching method and related monitoring camera apparatus - Google Patents
Image stitching method and related monitoring camera apparatus
- Publication number
- US20210174469A1 (application US16/703,885)
- Authority
- US
- United States
- Prior art keywords
- image
- group
- features
- stitching method
- matched
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
- G06T7/337—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30204—Marker
Definitions
- the present invention relates to an image stitching method and a monitoring camera apparatus, and more particularly, to an image stitching method utilizing a feature without an identification pattern to increase a detecting distance and system adaptability, and to a related monitoring camera apparatus.
- a monitoring camera is applied for capturing a large-range monitor image
- several camera units are arranged in different angles to face a monitoring region of the monitoring camera.
- a field of view of one camera unit is different from a field of view of any other camera unit.
- An edge of the field of view of one camera unit can be partly overlapped with an edge of the field of view of the adjacent camera unit.
- Conventional image stitching technology can set some marking features in an overlapped region of the monitoring images, and the marking features in the overlapped region can be used to stitch small-range images for generating the large-range image.
- the marking feature has a special identification pattern, so that the monitoring camera can determine the stitching direction and stitching sequence of the monitoring images via the identification pattern.
- a drawback of the conventional image stitching technology is a limited installation height of the camera unit.
- if the camera unit is disposed at a location higher than the allowable height, the camera unit cannot identify whether the marking features in the plurality of monitoring images have the same identification pattern. Designing an image stitching method that uses marking features without an identification pattern and increases the detectable distance is an important issue in the monitoring industry.
- the present invention provides an image stitching method utilizing a feature without an identification pattern to increase a detecting distance and system adaptability, and a related monitoring camera apparatus, for solving the above drawbacks.
- an image stitching method is applied to a monitoring camera apparatus with a first image receiver and a second image receiver for respectively acquiring a first image and a second image.
- the image stitching method includes detecting a plurality of first features in the first image and a plurality of second features in the second image, dividing the plurality of first features at least into a first group and a second group and further dividing the plurality of second features at least into a third group, analyzing the plurality of first features and the plurality of second features via an identification condition to determine whether one of the first group and the second group is matched with the third group, and utilizing two matched groups to stitch the first image and the second image.
- a monitoring camera apparatus with an image stitching function includes a first image receiver, a second image receiver and an operational processor.
- the first image receiver is adapted to acquire a first image.
- the second image receiver is adapted to acquire a second image.
- the operational processor is electrically connected to the first image receiver and the second image receiver.
- the operational processor is adapted to detect a plurality of first features in the first image and a plurality of second features in the second image, divide the plurality of first features at least into a first group and a second group and further divide the plurality of second features at least into a third group, analyze the plurality of first features and the plurality of second features via an identification condition to determine whether one of the first group and the second group is matched with the third group, and utilize two matched groups to stitch the first image and the second image.
- the first feature and the second feature used in the image stitching method of the present invention do not have a special identification pattern, so that the image stitching method and the related monitoring camera apparatus can increase their detectable distance and detectable range.
- the image can be stitched with one image or a plurality of images, and the features detected in the image can be used for stitching with one image or be divided for stitching several images.
- the image stitching method of the present invention can divide the features in each image into one or more groups, and then match the groups between different images for finding out the groups that are useful in image stitching. After the group matching, the image stitching method can pair the features between the matched groups and compute the related transformation parameter via the paired features.
- the images can be stitched via the paired features and the transformation parameter.
- FIG. 1 is a functional block diagram of a monitoring camera apparatus according to an embodiment of the present invention.
- FIG. 2 is a diagram of images acquired by the monitoring camera apparatus according to the embodiment of the present invention.
- FIG. 3 is a flow chart of the image stitching method according to the embodiment of the present invention.
- FIG. 4 to FIG. 8 are diagrams of image stitching recording according to the embodiment of the present invention.
- FIG. 9 is a diagram of the image stitching recording according to another embodiment of the present invention.
- FIG. 1 is a functional block diagram of a monitoring camera apparatus 10 according to an embodiment of the present invention.
- FIG. 2 is a diagram of images acquired by the monitoring camera apparatus 10 according to the embodiment of the present invention.
- the monitoring camera apparatus 10 can include some image receivers and an operational processor 12 ; the present invention gives an example with a first image receiver 14 and a second image receiver 16 , but an actual application is not limited to the foresaid example.
- the monitoring camera apparatus 10 may have three or more image receivers.
- a field of view of the first image receiver 14 is partly overlapped with a field of view of the second image receiver 16 .
- the first image receiver 14 and the second image receiver 16 can respectively acquire a first image I 1 and a second image I 2 .
- the operational processor 12 can be electrically connected with the first image receiver 14 and the second image receiver 16 in a wire manner or in a wireless manner.
- the operational processor 12 can execute an image stitching method of the present invention to stitch the first image I 1 and the second image I 2 .
- the operational processor 12 can be a built-in unit of the monitoring camera apparatus 10 or an external unit, which depends on actual demand.
- FIG. 3 is a flow chart of the image stitching method according to the embodiment of the present invention.
- FIG. 4 to FIG. 8 are diagrams of image stitching recording according to the embodiment of the present invention.
- the image stitching method illustrated in FIG. 3 can be suitable for the monitoring camera apparatus 10 shown in FIG. 1 .
- step S 300 can be executed to transform the first image I 1 and the second image I 2 into binary forms for detecting a plurality of first features F 1 in the binary first image I 1 and a plurality of second features F 2 in the binary second image I 2 , as shown in FIG. 4 .
- the first feature F 1 and the second feature F 2 are human-made features, and can be a three-dimensional object with a specific shape or a two-dimensional printed pattern with a specific appearance, which depends on the design demand. If the first image I 1 and the second image I 2 are arranged horizontally, the first feature F 1 and the second feature F 2 can be mainly disposed on a right side and a left side of each image. If the first image I 1 and the second image I 2 are arranged vertically, the first feature F 1 and the second feature F 2 can be disposed on an upper side and a lower side of each image.
- An application with the images arranged side-by-side in a horizontal direction is illustrated as follows.
- the first feature F 1 and the second feature F 2 can be a geometric pattern of any shape, such as a circular form, or a polygonal form such as a triangular form or a rectangular form.
- the image stitching method may detect the full geometric pattern for identification.
- the first feature F 1 and the second feature F 2 can also be a specific pattern defined by a user, such as an animal pattern, or an object pattern such as a vehicle or a building.
- the image stitching method may detect the full specific pattern for identification; the image stitching method may further detect a partial region of the specific pattern, such as a face of the animal pattern or a top or a bottom of the object pattern, for identification.
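Step S 300 above (binarize the images, then detect features) can be sketched as follows. This is a minimal, hypothetical detection stage assuming the features appear as bright connected regions after binarization; the threshold value, the 4-connectivity labeling and the function name `detect_features` are illustrative assumptions, not details taken from the patent.

```python
def detect_features(image, thresh=128):
    """Binarize a grayscale image (2D list of pixel values) and return the
    centroid (x, y) of each bright connected region as a candidate feature."""
    h, w = len(image), len(image[0])
    binary = [[1 if image[y][x] >= thresh else 0 for x in range(w)] for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    features = []
    for y0 in range(h):
        for x0 in range(w):
            if binary[y0][x0] and not seen[y0][x0]:
                # Flood-fill one connected region (4-connectivity).
                stack, pixels = [(y0, x0)], []
                seen[y0][x0] = True
                while stack:
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                # Centroid of the region serves as the feature position.
                features.append((sum(p[1] for p in pixels) / len(pixels),
                                 sum(p[0] for p in pixels) / len(pixels)))
    return features
```

An image containing two separated bright blobs yields two centroids, which then play the role of the first features F 1 or second features F 2 in the later steps.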
- step S 302 can be executed to divide the plurality of first features F 1 and the plurality of second features F 2 into several groups.
- the image stitching method may choose one of the plurality of first features F 1 , such as a first feature F 1 a shown in FIG. 5 , and then compute an interval D 1 between the first feature F 1 a and a first feature F 1 b, an interval D 2 between the first feature F 1 a and a first feature F 1 c , and an interval D 3 between the first feature F 1 a and a first feature F 1 d .
- the image stitching method can set or acquire the threshold from a memory (not shown in the figures) of the monitoring camera apparatus 10 .
- the intervals D 1 , D 2 and D 3 can be respectively compared with the threshold.
- the threshold is used to classify the plurality of features into different groups, and can be manually set by the user or automatically set by a system.
- the threshold can be set according to a dimension of the image or the interval between the features.
- the minimal interval D 1 can be selected from the intervals D 1 , D 2 and D 3 .
- the minimal interval D 1 can be weighted to define the threshold, so that the threshold can be dynamically decided according to the minimal interval between any two features in the images, conforming to an automatic design trend.
- a weighting value mentioned above can be, but not limited to, greater than 1.0.
- the threshold need not be preset by the user; once the weighting value is set, the monitoring camera apparatus 10 can automatically generate a threshold that conforms to the actual situation in accordance with the detected intervals between the features. The above-mentioned design provides preferred tolerance and convenience for disposing the features, and further advances operation of the image stitching method.
- the minimal interval D 1 not only can be a base of the threshold, but also can be a counting unit of the intervals D 2 and D 3 .
- the interval D 1 between the first feature F 1 a and the first feature F 1 b is defined as one unit length
- the interval D 2 between the first feature F 1 a and the first feature F 1 c may be represented as four times the interval D 1 (which means four unit lengths)
- the interval D 3 between the first feature F 1 a and the first feature F 1 d may be represented as five times the interval D 1 (which means five unit lengths).
- the ratio of the intervals D 2 and D 3 to the unit-length interval D 1 depends on the actual demand.
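The unit-length representation above can be sketched as follows; the helper name `to_unit_lengths` is an illustrative assumption, not a term from the patent.

```python
def to_unit_lengths(intervals):
    """Express every interval as a multiple of the minimal interval, which
    serves as one unit length, making the arrangement scale-independent."""
    unit = min(intervals)
    return [d / unit for d in intervals]
```

With intervals of 2, 8 and 10 pixels, the representation becomes 1, 4 and 5 unit lengths, matching the D 1 , D 2 and D 3 example above.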
- the first feature F 1 a may be defined as belonging to the first group G 1 , and the intervals D 1 , D 2 and D 3 can be respectively compared with the threshold. If the interval D 1 is smaller than or equal to the threshold, the first feature F 1 b can belong to the first group G 1 with the first feature F 1 a ; if the intervals D 2 and D 3 are greater than the threshold, the first features F 1 c and F 1 d can be different from the first feature F 1 a and belong to the second group G 2 (another group opposite to the first group G 1 ), as shown in FIG. 6 .
- a right side and a left side of the first image I 1 can be respectively stitched with the second image I 2 and another image (not shown in the figures), so that the first features F 1 can be divided into at least two groups. If three sides of the first image I 1 are respectively stitched with three images, the first features F 1 can be divided into three or more groups.
- the second features F 2 can be divided at least into a third group G 3 and a fourth group G 4 , which is similar to a dividing method about the first features F 1 , and a detailed description is omitted herein for simplicity.
- conversely, if the first feature F 1 a is defined as belonging to the second group G 2 , the first feature F 1 b having the interval D 1 smaller than or equal to the threshold can belong to the second group G 2 with the first feature F 1 a .
- the first features F 1 c and F 1 d have the intervals D 2 and D 3 greater than the threshold, and can be different from the first feature F 1 a and belong to the first group G 1 .
- a serial number of the group whereto the features belong may follow in proper order or be decided by the user, which depends on the applicable demand.
- group dividing is used to separate some first features F 1 (such as the first features in the second group G 2 ) matched with the second image I 2 from other first features F 1 (such as the first features in the first group G 1 ) matched with another image for stitching, so that the first group G 1 and the second group G 2 of the first image I 1 can respectively be located at different regions in the first image I 1 .
- the different regions may be the right side and the left side, or the upper side and the lower side of the first image I 1 , which depend on a source and an aim of the stitching image.
- the third group G 3 and the fourth group G 4 are respectively located at different regions in the second image I 2 , and used to match with the first image I 1 and another image (not shown in the figures) for stitching.
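The group dividing of step S 302 can be sketched as follows, assuming the threshold is the minimal pairwise interval multiplied by a weighting value greater than 1.0 as described above; the weighting value 1.5 and the function name `divide_into_groups` are illustrative assumptions.

```python
import math

def divide_into_groups(features, weight=1.5):
    """Cluster (x, y) features: a feature joins an existing group when its
    interval to any member is smaller than or equal to the threshold."""
    if len(features) < 2:
        return [list(features)]
    # Threshold derived dynamically from the minimal pairwise interval.
    min_interval = min(math.dist(a, b)
                       for i, a in enumerate(features) for b in features[i + 1:])
    threshold = weight * min_interval
    groups = []
    for f in features:
        for g in groups:
            if any(math.dist(f, m) <= threshold for m in g):
                g.append(f)
                break
        else:
            groups.append([f])
    return groups
```

For the example above (intervals of one, four and five unit lengths between the first features), the four features split into two groups of two, mirroring the first group G 1 and the second group G 2 .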
- step S 304 can be executed to analyze the plurality of first features F 1 and the plurality of second features F 2 via an identification condition for determining whether one of the first group G 1 and the second group G 2 can be matched with the third group G 3 or the fourth group G 4 .
- the identification condition can be selected from a group consisting of color, a dimension, a shape, an amount, an arrangement and a combination of the plurality of first features F 1 and the plurality of second features F 2 .
- the image stitching method can rapidly determine that the second group G 2 is matched with the third group G 3 via analysis of the color features.
- the image stitching method can rapidly determine that the second group G 2 is matched with the third group G 3 via geometric analysis of the features.
- the image stitching method can rapidly determine that the second group G 2 is matched with the third group G 3 via arrangement analysis of the features.
- the image stitching method can determine the second group G 2 is matched with the third group G 3 .
- intervals between the plurality of features still can be used to determine matching of those groups. If some first features F 1 and some second features F 2 are arranged transversely, two groups may be considered as not matching because first intervals between these first features F 1 are different from second intervals between these second features F 2 , or because a difference between the first interval and the second interval exceeds a predefined threshold.
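As a sketch of the group matching in step S 304 , the following compares two groups by amount and by interval arrangement only; the tolerance value and function names are illustrative assumptions, and color, dimension or shape checks could be added as further identification conditions.

```python
import math

def _intervals(group):
    """Intervals between consecutive features, ordered top-to-bottom."""
    pts = sorted(group, key=lambda p: (p[1], p[0]))
    return [math.dist(a, b) for a, b in zip(pts, pts[1:])]

def groups_match(g1, g2, tolerance=0.5):
    """Two groups match when their feature amounts agree and corresponding
    intervals differ by no more than the tolerance."""
    if len(g1) != len(g2):
        return False
    return all(abs(a - b) <= tolerance
               for a, b in zip(_intervals(g1), _intervals(g2)))
```

A group on the right side of the first image and a group on the left side of the second image covering the same physical markers would then report a match, while groups with differing interval patterns would not.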
- if none of the groups can be matched, step S 306 can be executed, in which the image stitching method determines that the first image I 1 is not stitched with the second image I 2 . If one of the first group G 1 and the second group G 2 can be matched with the third group G 3 or the fourth group G 4 , such as the second group G 2 being matched with the third group G 3 , a region of the second group G 2 in the first image I 1 and a region of the third group G 3 in the second image I 2 can belong to an overlapping region of the first image I 1 and the second image I 2 , so that step S 308 can be executed to search at least two first features F 1 and at least two second features F 2 within the matched groups G 2 and G 3 for pairing via the foresaid identification condition.
- the first feature F 1 c can be paired with the second feature F 2 in an upper region of the third group G 3
- the first feature F 1 d can be paired with the second feature F 2 in a lower region of the third group G 3 .
- the image stitching method can search the first features F 1 and the second features F 2 for pairing within the matched second group G 2 and the matched third group G 3 according to the group consisting of the color, the dimension, the shape, the amount, the arrangement and the combination thereof.
- the first features F 1 and the second features F 2 which cannot be paired are not applied for the image stitching method.
- steps S 310 and S 312 can be executed to analyze the differences between the at least two first features F 1 and the at least two second features F 2 paired with each other for acquiring a transformation parameter, and to utilize the transformation parameter to stitch the first image I 1 and the second image I 2 for generating a combined image I 3 , as shown in FIG. 8 .
- the image stitching method can compute the transformation parameter via a mean-square error (MSE) algorithm or any other mathematical model.
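For a pure translation between the overlapping regions, the transformation parameter minimizing the mean-square error over the paired features is simply the mean offset. This translation-only model and the function name `translation_mse` are illustrative assumptions; an affine or perspective model could be fitted from the same pairs instead.

```python
def translation_mse(pairs):
    """Given pairs of ((x1, y1), (x2, y2)) matched features from the first and
    second images, return the (dx, dy) translation minimizing the MSE."""
    n = len(pairs)
    dx = sum(p1[0] - p2[0] for p1, p2 in pairs) / n
    dy = sum(p1[1] - p2[1] for p1, p2 in pairs) / n
    return dx, dy
```

Shifting the second image by (dx, dy) brings its paired features onto the first image's features before the overlap is blended into the combined image I 3 .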
- the image stitching method can divide the plurality of first features F 1 and the plurality of second features F 2 at least into two groups, so the first image I 1 and the second image I 2 can be stitched with a left-side image and/or a right-side image.
- the image stitching method of the present invention can further be applied to a situation in which one image is stitched with another image via only one side. Please refer to FIG. 9 .
- FIG. 9 is a diagram of the image stitching recording according to another embodiment of the present invention.
- the second image receiver 16 can face the edge of the field of view of the monitoring camera apparatus 10 to acquire the second image I 2 ′, and step S 302 in the image stitching method can be executed by setting one group on a side of the second image I 2 ′ close to the first image I 1 , which means the third group G 3 can be set from a left cluster of the plurality of second features F 2 .
- a right side of the second image I 2 ′ is not stitched with any other image, so that a right cluster of the plurality of second features F 2 does not set a group.
- the image stitching method can determine whether the first group G 1 or the second group G 2 in the first image I 1 is matched with the third group G 3 in the second image I 2 ′. If the first group G 1 is not matched with the third group G 3 , the left side of the first image I 1 can be stitched with another image instead of the second image I 2 ′; if the second group G 2 is matched with the third group G 3 , the right side of the first image I 1 can be stitched with the left side of the second image I 2 ′.
- a monitoring area of the monitoring camera apparatus 10 may have several features, and the image receiver cannot capture the image containing all the features due to an angle of view of the image receiver.
- the right side of the first image I 1 captured by the first image receiver 14 only contains two first features F 1
- the left side of the second image I 2 captured by the second image receiver 16 contains three second features F 2 .
- the right-side second feature F 2 is distant from the two left-side second features F 2 , so the field of view of the first image receiver 14 cannot contain all three second features F 2 .
- the image stitching method can execute step S 302 to divide the second features F 2 in the second image I 2 into two groups.
- the amount of the first features F 1 in the second group G 2 is different from the amount of the second features F 2 in the third group G 3 , so the color, the dimension and the shape can be used as the identification condition for executing the group matching in step S 304 and the feature pairing in step S 308 . That is to say, the selection of the color, the dimension, the shape, the amount and the arrangement of the features can vary between different procedures (such as the group matching and the feature pairing), which depends on the design demand and the actual application.
- the first feature and the second feature used in the image stitching method of the present invention do not have a special identification pattern, so that the image stitching method and the related monitoring camera apparatus can increase their detectable distance and detectable range.
- the image can be stitched with one image or a plurality of images, and the features detected in the image can be used for stitching with one image or be divided for stitching several images.
- the image stitching method of the present invention can divide the features in each image into one or more groups, and then match the groups between different images for finding out the groups that are useful in image stitching. After the group matching, the image stitching method can pair the features between the matched groups and compute the related transformation parameter via the paired features.
- the images can be stitched via the paired features and the transformation parameter.
- the image stitching method and the monitoring camera apparatus of the present invention execute the group matching first, and then execute the feature pairing in accordance with a result of the group matching, so as to effectively increase the diversity of usable features, and further to provide preferred stitching speed and accuracy.
Description
- These and other objectives of the present invention will no doubt become obvious to those of ordinary skill in the art after reading the following detailed description of the preferred embodiment that is illustrated in the various figures and drawings.
-
FIG. 1 is a functional block diagram of a monitoring camera apparatus according to an embodiment of the present invention. -
FIG. 2 is a diagram of images acquired by the monitoring camera apparatus according to the embodiment of the present invention. -
FIG. 3 is a flow chart of the image stitching method according to the embodiment of the present invention. -
FIG. 4 toFIG. 8 are diagrams of image stitching recording according to the embodiment of the present invention. -
FIG. 9 is a diagram of the image stitching recording according to another embodiment of the present invention. - Please refer to
FIG. 1 andFIG. 2 .FIG. 1 is a functional block diagram of amonitoring camera apparatus 10 according to an embodiment of the present invention.FIG. 2 is a diagram of images acquired by themonitoring camera apparatus 10 according to the embodiment of the present invention. Themonitoring camera apparatus 10 can include some image receivers and anoperational processor 12; the present invention gives an example of afirst image receiver 14 and asecond image receiver 16, and an actual application is not limited to the foresaid application. Themonitoring camera apparatus 10 may have three or more image receivers. A field of view of thefirst image receiver 14 is partly overlapped with a field of view of thesecond image receiver 16. Thefirst image receiver 14 and thesecond image receiver 16 can respectively acquire a first image I1 and a second image I2. Theoperational processor 12 can be electrically connected with thefirst image receiver 14 and thesecond image receiver 16 in a wire manner or in a wireless manner. Theoperational processor 12 can execute an image stitching method of the present invention to stitch the first image I1 and the second image I2. Theoperational processor 12 can be a built-in unit of themonitoring camera apparatus 10 or an external unit, which depends on actual demand. - Please refer to
FIG. 1 to FIG. 8. FIG. 3 is a flow chart of the image stitching method according to the embodiment of the present invention. FIG. 4 to FIG. 8 are diagrams of image stitching recording according to the embodiment of the present invention. The image stitching method illustrated in FIG. 3 is suitable for the monitoring camera apparatus 10 shown in FIG. 1. For the image stitching method, step S300 can be executed to transform the first image I1 and the second image I2 into binary form for detecting a plurality of first features F1 in the binary first image I1 and a plurality of second features F2 in the binary second image I2, as shown in FIG. 4. Generally, the first feature F1 and the second feature F2 are human-made features, and can be a three-dimensional object with a specific shape or a two-dimensional printed pattern with a specific appearance, depending on design demand. If the first image I1 and the second image I2 are arranged horizontally, the first feature F1 and the second feature F2 can be mainly disposed on a right side and a left side of each image. If the first image I1 and the second image I2 are arranged vertically, the first feature F1 and the second feature F2 can be disposed on an upper side and a lower side of each image. An application with the images arranged side-by-side in a horizontal direction is illustrated as follows. - The first feature F1 and the second feature F2 can be a geometric pattern of any shape, such as a circular form, or a polygonal form such as a triangular form or a rectangular form. The image stitching method may detect the full geometric pattern for identification. Besides, the first feature F1 and the second feature F2 can be a specific pattern defined by a user, such as an animal pattern, or an object pattern such as a vehicle or a building.
The image stitching method may detect the full specific pattern for identification; it may further detect a partial region of the specific pattern, such as a face of the animal pattern or a top or bottom of the object pattern, for identification.
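Step S300 above can be sketched in plain Python. This is a minimal illustration under assumed conventions: the grayscale frame is a list of pixel rows, the binarization threshold is fixed at 128, and a simple 4-connected centroid finder stands in for the feature detection — the patent does not mandate any particular binarization or detector.

```python
def binarize(image, threshold=128):
    """Transform a grayscale image (rows of pixel values) into binary form."""
    return [[1 if px >= threshold else 0 for px in row] for row in image]

def detect_features(binary):
    """Return centroids (x, y) of 4-connected white regions as candidate features."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    centroids = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                # flood-fill one connected component
                stack, pixels = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                centroids.append((sum(p[1] for p in pixels) / len(pixels),
                                  sum(p[0] for p in pixels) / len(pixels)))
    return centroids
```

In a real apparatus this stage would typically be handled by a library such as OpenCV (thresholding plus connected components or contour detection), but the simple version shows the data flow: binary image in, feature centroids out.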
- Then, step S302 can be executed to divide the plurality of first features F1 and the plurality of second features F2 into several groups. As an example of the first image I1, the image stitching method may choose one of the plurality of first features F1, such as a first feature F1 a shown in
FIG. 5, and then compute an interval D1 between the first feature F1 a and a first feature F1 b, an interval D2 between the first feature F1 a and a first feature F1 c, and an interval D3 between the first feature F1 a and a first feature F1 d. The image stitching method can set the threshold or acquire it from a memory (not shown in the figures) of the monitoring camera apparatus 10. The intervals D1, D2 and D3 can be respectively compared with the threshold. The threshold is used to classify the plurality of features into different groups, and can be manually set by the user or automatically set by the system. The threshold can be set according to a dimension of the image or the intervals between the features. For example, the minimal interval D1 can be selected from the intervals D1, D2 and D3, and weighted to define the threshold, so that the threshold can be dynamically decided according to the minimal interval between any two features in the images, conforming to an automatic design trend. The weighting value mentioned above can be, but is not limited to, greater than 1.0. According to the foresaid embodiment, the threshold need not be preset by the user; once the weighting value is set, the monitoring camera apparatus 10 can automatically generate a threshold that conforms to the actual situation in accordance with the detected intervals between the features. The above-mentioned design provides preferred tolerance and convenience for disposing the features, and further facilitates operation of the image stitching method. - The minimal interval D1 not only can be a base of the threshold, but also can serve as a counting unit for the intervals D2 and D3.
For example, if the interval D1 between the first feature F1 a and the first feature F1 b is defined as one unit length, the interval D2 between the first feature F1 a and the first feature F1 c may be represented as four times the interval D1 (which means four unit lengths), and the interval D3 between the first feature F1 a and the first feature F1 d may be represented as five times the interval D1 (which means five unit lengths). The ratio of the intervals D2 and D3 to the unit-length interval D1 depends on the actual demand.
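The dynamic-threshold computation described above can be sketched as follows. The weighting value of 1.5 is an assumed example — the text only requires it to be greater than 1.0 — and the features are assumed to be 2-D centroid coordinates.

```python
import math

def intervals_from(anchor, others):
    """Euclidean intervals from a chosen feature to each remaining feature."""
    return [math.dist(anchor, f) for f in others]

def dynamic_threshold(intervals, weight=1.5):
    """Weight the minimal interval to derive the grouping threshold.

    Because the threshold scales with the smallest detected interval,
    it adapts to however densely the features happen to be placed,
    without the user presetting an absolute distance."""
    return min(intervals) * weight

def unit_lengths(intervals):
    """Express every interval as a multiple of the minimal interval D1."""
    base = min(intervals)
    return [d / base for d in intervals]
```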
- In step S302, the first feature F1 a may be defined as belonging to the first group G1, and the intervals D1, D2 and D3 can be respectively compared with the threshold. If the interval D1 is smaller than or equal to the threshold, the first feature F1 b can belong to the first group G1 with the first feature F1 a; if the intervals D2 and D3 are greater than the threshold, the first features F1 c and F1 d can be different from the first feature F1 a and belong to the second group G2 (another group opposite to the first group G1), as shown in
FIG. 6. In the embodiment, a right side and a left side of the first image I1 can be respectively stitched with the second image I2 and another image (not shown in the figures), so that the first features F1 can be divided into at least two groups. If three sides of the first image I1 are respectively stitched with three images, the first features F1 can be divided into three or more groups. The second features F2 can be divided at least into a third group G3 and a fourth group G4 in a manner similar to the dividing method for the first features F1, and a detailed description is omitted herein for simplicity. - In the embodiment shown in
FIG. 6, if the first feature F1 a is defined as belonging to the second group G2, the first feature F1 b having the interval D1 smaller than or equal to the threshold can belong to the second group G2 with the first feature F1 a. The first features F1 c and F1 d have the intervals D2 and D3 greater than the threshold, so they can be different from the first feature F1 a and belong to the first group G1. A serial number of the group to which the features belong may follow in proper order or be decided by the user, which depends on the applicable demand. - As an example of the first image I1, group dividing is used to classify some first features F1 (such as the first features in the second group G2) matched with the second image I2 and other first features F1 (such as the first features in the first group G1) matched with another image for stitching, so that the first group G1 and the second group G2 of the first image I1 can respectively be located at different regions in the first image I1. The different regions may be the right side and the left side, or the upper side and the lower side of the first image I1, which depend on the source and the aim of the stitched image. The third group G3 and the fourth group G4 are respectively located at different regions in the second image I2, and are used to match with the first image I1 and another image (not shown in the figures) for stitching.
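Step S302's interval comparison can be sketched as below. This is a simplified one-seed version under the same assumptions as before (features as 2-D centroids): the chosen feature's group absorbs every feature within the threshold, and all remaining features fall into the opposite group.

```python
import math

def divide_into_groups(features, threshold):
    """Divide feature centroids into two groups around the first feature.

    Features whose interval to the seed is <= threshold join the seed's
    group; features farther away form the opposite group."""
    seed, near, far = features[0], [], []
    for f in features[1:]:
        (near if math.dist(seed, f) <= threshold else far).append(f)
    return [seed] + near, far
```

For example, with features at x = 0, 1, 4 and 5 and the threshold of 1.5 derived earlier, the first two features and the last two features land in separate groups, mirroring the G1/G2 split in FIG. 6.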
- Then, step S304 can be executed to analyze the plurality of first features F1 and the plurality of second features F2 via an identification condition for determining whether one of the first group G1 and the second group G2 can be matched with the third group G3 or the fourth group G4. The identification condition can be selected from a group consisting of color, a dimension, a shape, an amount, an arrangement and a combination of the plurality of first features F1 and the plurality of second features F2. In an example of color features, if the first features F1 a and F1 b in the first group G1 are red, and the first features F1 c and F1 d in the second group G2 are blue, and the second features F2 in the third group G3 are blue, and the second features F2 in the fourth group G4 are yellow, the image stitching method can rapidly determine that the second group G2 is matched with the third group G3 via analysis of the color features.
- In an example of dimension and shape features, if the first features F1 a and F1 b in the first group G1 are small circular spots, the first features F1 c and F1 d in the second group G2 are middle square blocks, the second features F2 in the third group G3 are middle square blocks, and the second features F2 in the fourth group G4 are large triangular forms, the image stitching method can rapidly determine that the second group G2 is matched with the third group G3 via geometric analysis of the features. In an example of arrangement features, if the first features F1 a and F1 b in the first group G1 are in a vertical arrangement, the first features F1 c and F1 d in the second group G2 are in a transverse arrangement, the second features F2 in the third group G3 are in the transverse arrangement, and the second features F2 in the fourth group G4 are in an oblique arrangement, the image stitching method can rapidly determine that the second group G2 is matched with the third group G3 via arrangement analysis of the features. In an example of amount features, if the amount of the first features F1 in the second group G2 is identical with the amount of the second features F2 in the third group G3, but different from the amount of the second features F2 in the fourth group G4, the image stitching method can determine that the second group G2 is matched with the third group G3.
- It should be mentioned that even though two groups of features conform to the same arrangement, the intervals between the features still can be used to determine the matching of those groups. If some first features F1 and some second features F2 are both in the transverse arrangement, the two groups may still be considered as not matching because the first intervals between these first features F1 differ from the second intervals between these second features F2, or because a difference between the first interval and the second interval exceeds a predefined threshold.
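Step S304's group matching via the identification condition can be sketched as follows. The feature descriptors — dicts with `color`, `shape` and `size` keys — are hypothetical stand-ins for whatever attributes the detector actually extracts; the signature combines them with the feature amount, two of the identification conditions named above.

```python
def group_signature(group):
    """Identification condition: sorted (color, shape, size) tuples plus amount."""
    return (sorted((f["color"], f["shape"], f["size"]) for f in group), len(group))

def match_groups(groups_a, groups_b):
    """Return the first pair of groups (one from each image) whose
    signatures agree, or None when no groups can be matched."""
    for ga in groups_a:
        for gb in groups_b:
            if group_signature(ga) == group_signature(gb):
                return ga, gb
    return None
```

With the color example from the text — G1 red, G2 blue, G3 blue, G4 yellow — the only agreeing signatures are G2 and G3, so the overlap region is found without trying to pair individual features first.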
- If the first group G1 and the second group G2 cannot be matched with the third group G3 or the fourth group G4, step S306 can be executed, in which the image stitching method determines that the first image I1 is not stitched with the second image I2. If one of the first group G1 and the second group G2 can be matched with the third group G3 or the fourth group G4, such as the second group G2 being matched with the third group G3, a region of the second group G2 in the first image I1 and a region of the third group G3 in the second image I2 can belong to an overlapping region of the first image I1 and the second image I2, so that step S308 can be executed to search at least two first features F1 and at least two second features F2 within the matched groups G2 and G3 for pairing via the foresaid identification condition. As shown in
FIG. 7, the first feature F1 c can be paired with the second feature F2 in an upper region of the third group G3, and the first feature F1 d can be paired with the second feature F2 in a lower region of the third group G3. - When the group matching is completed, the image stitching method can search the first features F1 and the second features F2 for pairing within the matched second group G2 and the matched third group G3 according to the identification condition consisting of the color, the dimension, the shape, the amount, the arrangement and a combination thereof. The first features F1 and the second features F2 which cannot be paired are not applied in the image stitching method. Finally, steps S310 and S312 can be executed to analyze the difference between the at least two first features F1 and the at least two second features F2 paired with each other for acquiring a transformation parameter, and to utilize the transformation parameter to stitch the first image I1 and the second image I2 for generating a combined image I3, as shown in
FIG. 8. The image stitching method can compute the transformation parameter via a mean-square error (MSE) algorithm or any other mathematical model. - In the above-mentioned embodiment, when the
monitoring camera apparatus 10 has three or more image receivers, the image stitching method can divide the plurality of first features F1 and the plurality of second features F2 into at least two groups, so the first image I1 and the second image I2 can be stitched with a left-side image and/or a right-side image. The image stitching method of the present invention can further be applied to a situation in which one image is stitched with another image via only one side. Please refer to FIG. 9. FIG. 9 is a diagram of the image stitching recording according to another embodiment of the present invention. In this embodiment, the second image receiver 16 can face the edge of the field of view of the monitoring camera apparatus 10 to acquire the second image I2′, and step S302 in the image stitching method can be executed by setting one group on the side of the second image I2′ close to the first image I1, which means the third group G3 can be set from a left cluster of the plurality of second features F2. The right side of the second image I2′ is not stitched with any other image, so that a right cluster of the plurality of second features F2 does not form a group. - Then, the following steps can be similar to the above-mentioned embodiment. The image stitching method can determine whether the first group G1 or the second group G2 in the first image I1 is matched with the third group G3 in the second image I2′. If the first group G1 is not matched with the third group G3, the left side of the first image I1 can be stitched with another image instead of the second image I2′; if the second group G2 is matched with the third group G3, the right side of the first image I1 can be stitched with the left side of the second image I2′.
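Steps S308 through S312 can be sketched together. This sketch assumes the paired features within the matched groups can be ordered vertically (upper with upper, lower with lower, as in FIG. 7), and uses the mean offset of the pairs — the closed-form minimizer of the mean-square error for a pure translation — as the transformation parameter; the patent allows any other mathematical model, so a full implementation might instead fit an affine or homography transform.

```python
def pair_features(group_a, group_b):
    """Step S308 sketch: pair features across two matched groups by
    vertical order (upper region with upper region, lower with lower)."""
    return list(zip(sorted(group_a, key=lambda f: f[1]),
                    sorted(group_b, key=lambda f: f[1])))

def translation_parameter(pairs):
    """Steps S310/S312 sketch: estimate the (dx, dy) translation that
    minimizes the mean-square error over the paired features.
    For a pure translation, the MSE minimizer is simply the mean offset."""
    n = len(pairs)
    dx = sum(a[0] - b[0] for a, b in pairs) / n
    dy = sum(a[1] - b[1] for a, b in pairs) / n
    return dx, dy
```

Applying the resulting (dx, dy) to every pixel of the second image and compositing it onto the first image would then produce the combined image I3.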
- In one specific embodiment, a monitoring area of the
monitoring camera apparatus 10 may have several features, and an image receiver cannot capture an image containing all the features due to its angle of view. As shown in FIG. 9, the right side of the first image I1 captured by the first image receiver 14 contains only two first features F1, and the left side of the second image I2 captured by the second image receiver 16 contains three second features F2. The right-side second feature F2 is distant from the two left-side second features F2, so the field of view of the first image receiver 14 cannot contain all three second features F2. The image stitching method can execute step S302 to divide the second features F2 in the second image I2 into two groups. The amount of the first features F1 in the second group G2 is different from the amount of the second features F2 in the third group G3, so the color, the dimension and the shape can be used as the identification condition for executing the group matching in step S304 and the feature pairing in step S308; that is to say, the selection among the color, the dimension, the shape, the amount and the arrangement of the features can vary between procedures (such as the group matching and the feature pairing), depending on design demand and actual application. - In conclusion, the first feature and the second feature used in the image stitching method of the present invention do not require a special identification pattern, so the image stitching method and the related monitoring camera apparatus can increase their detectable distance and detectable range. An image can be stitched with one image or a plurality of images, and the features detected in an image can be used for stitching with one image or be divided for stitching with several images.
Thus, the image stitching method of the present invention can divide the features in each image into one or more groups, and then match the groups between different images to find the groups that are useful for image stitching. After the group matching, the image stitching method can pair the features between the matched groups and compute the related transformation parameter via the paired features. The images can be stitched via the paired features and the transformation parameter. Compared to the prior art, the image stitching method and the monitoring camera apparatus of the present invention execute the group matching first, and then execute the feature pairing in accordance with the result of the group matching, so as to effectively increase the diversity of usable features, and further to provide preferred stitching speed and accuracy.
- Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/703,885 US11030718B1 (en) | 2019-12-05 | 2019-12-05 | Image stitching method and related monitoring camera apparatus |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/703,885 US11030718B1 (en) | 2019-12-05 | 2019-12-05 | Image stitching method and related monitoring camera apparatus |
Publications (2)
Publication Number | Publication Date |
---|---|
US11030718B1 US11030718B1 (en) | 2021-06-08 |
US20210174469A1 true US20210174469A1 (en) | 2021-06-10 |
Family
ID=76210613
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/703,885 Active US11030718B1 (en) | 2019-12-05 | 2019-12-05 | Image stitching method and related monitoring camera apparatus |
Country Status (1)
Country | Link |
---|---|
US (1) | US11030718B1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11488371B2 (en) * | 2020-12-17 | 2022-11-01 | Concat Systems, Inc. | Machine learning artificial intelligence system for producing 360 virtual representation of an object |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114418861B (en) * | 2022-03-31 | 2022-07-01 | 南京云创大数据科技股份有限公司 | Camera image splicing processing method and system |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7460730B2 (en) * | 2005-08-04 | 2008-12-02 | Microsoft Corporation | Video registration and image sequence stitching |
US7894689B2 (en) * | 2007-05-31 | 2011-02-22 | Seiko Epson Corporation | Image stitching |
US8824833B2 (en) * | 2008-02-01 | 2014-09-02 | Omnivision Technologies, Inc. | Image data fusion systems and methods |
US8385689B2 (en) * | 2009-10-21 | 2013-02-26 | MindTree Limited | Image alignment using translation invariant feature matching |
WO2013113373A1 (en) * | 2012-01-31 | 2013-08-08 | Sony Ericsson Mobile Communications Ab | Method and electronic device for creating a combined image |
US20160012594A1 (en) * | 2014-07-10 | 2016-01-14 | Ditto Labs, Inc. | Systems, Methods, And Devices For Image Matching And Object Recognition In Images Using Textures |
WO2016165016A1 (en) * | 2015-04-14 | 2016-10-20 | Magor Communications Corporation | View synthesis-panorama |
TWI543611B (en) * | 2015-11-20 | 2016-07-21 | 晶睿通訊股份有限公司 | Image stitching method and camera system with an image stitching function |
WO2018056355A1 (en) * | 2016-09-23 | 2018-03-29 | 株式会社日立国際電気 | Monitoring device |
US10373360B2 (en) * | 2017-03-02 | 2019-08-06 | Qualcomm Incorporated | Systems and methods for content-adaptive image stitching |
US10373327B2 (en) * | 2017-10-18 | 2019-08-06 | Adobe Inc. | Reassembling and repairing torn image pieces |
- 2019-12-05 US US16/703,885 patent/US11030718B1/en active Active
Also Published As
Publication number | Publication date |
---|---|
US11030718B1 (en) | 2021-06-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10445887B2 (en) | Tracking processing device and tracking processing system provided with same, and tracking processing method | |
US10776953B2 (en) | Method for identification of candidate points as possible characteristic points of a calibration pattern within an image of the calibration pattern | |
US9832455B2 (en) | Stereo camera and automatic range finding method for measuring a distance between stereo camera and reference plane | |
US9799117B2 (en) | Method for processing data and apparatus thereof | |
US11030718B1 (en) | Image stitching method and related monitoring camera apparatus | |
US10515459B2 (en) | Image processing apparatus for processing images captured by a plurality of imaging units, image processing method, and storage medium storing program therefor | |
US10063843B2 (en) | Image processing apparatus and image processing method for estimating three-dimensional position of object in image | |
US20160055645A1 (en) | People counting device and people counting method | |
KR101146119B1 (en) | Method and apparatus for determining positions of robot | |
CN102236784A (en) | Screen area detection method and system | |
US9286669B2 (en) | Image processing apparatus, image processing method and program | |
CN110458858A (en) | A kind of detection method of cross drone, system and storage medium | |
CN107315935B (en) | Multi-fingerprint identification method and device | |
JP2017032335A (en) | Information processing device, information processing method, and program | |
CN106934792B (en) | 3D effect detection method, device and system of display module | |
CN110909617B (en) | Living body face detection method and device based on binocular vision | |
CN103607558A (en) | Video monitoring system, target matching method and apparatus thereof | |
US10915988B2 (en) | Image stitching method and related monitoring camera device | |
JP6244960B2 (en) | Object recognition apparatus, object recognition method, and object recognition program | |
US20240046497A1 (en) | Image analysis method and camera apparatus | |
US10719707B2 (en) | Pedestrian detection method and related monitoring camera | |
US10909713B2 (en) | System and method for item location, delineation, and measurement | |
US11393190B2 (en) | Object identification method and related monitoring camera apparatus | |
KR101669850B1 (en) | Sensor Calibration Method and Electronic Device and Marker Board for the same | |
CN112927128B (en) | Image stitching method and related monitoring camera equipment thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: VIVOTEK INC., TAIWAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LIU, CHENG-CHIEH;WANG, HUI-CHIH;CHEN, CHANG-LI;AND OTHERS;REEL/FRAME:051183/0526 Effective date: 20191125 |
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |