US9031355B2 - Method of system for image stabilization through image processing, and zoom camera including image stabilization function - Google Patents


Info

Publication number
US9031355B2
Authority
US
United States
Prior art keywords
image
input image
feature portion
enlarged
representative feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US13/729,302
Other versions
US20130170770A1 (en)
Inventor
Je-Youl CHON
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hanwha Vision Co Ltd
Original Assignee
Samsung Techwin Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Techwin Co Ltd filed Critical Samsung Techwin Co Ltd
Assigned to SAMSUNG TECHWIN CO., LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CHON, JE-YOUL
Publication of US20130170770A1
Application granted
Publication of US9031355B2
Assigned to HANWHA TECHWIN CO., LTD. CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: SAMSUNG TECHWIN CO., LTD.
Assigned to HANWHA TECHWIN CO., LTD. CORRECTIVE ASSIGNMENT TO CORRECT THE RECEIVING PARTY ADDRESS PREVIOUSLY RECORDED AT REEL: 036714 FRAME: 0757. ASSIGNOR(S) HEREBY CONFIRMS THE CHANGE OF NAME. Assignors: SAMSUNG TECHWIN CO., LTD.
Assigned to HANWHA AEROSPACE CO., LTD. CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: HANWHA TECHWIN CO., LTD.
Assigned to HANWHA AEROSPACE CO., LTD. CORRECTIVE ASSIGNMENT TO CORRECT THE APPLICATION NUMBER 10/853,669. IN ADDITION PLEASE SEE EXHIBIT A PREVIOUSLY RECORDED ON REEL 046927 FRAME 0019. ASSIGNOR(S) HEREBY CONFIRMS THE CHANGE OF NAME. Assignors: HANWHA TECHWIN CO., LTD.
Assigned to HANWHA TECHWIN CO., LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HANWHA AEROSPACE CO., LTD.
Assigned to HANWHA VISION CO., LTD. CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: HANWHA TECHWIN CO., LTD.

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/74: Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G06T 7/0044
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30232: Surveillance

Definitions

  • Methods and apparatuses consistent with exemplary embodiments relate to image stabilization through image processing, and a zoom camera including an image stabilization function.
  • An image captured by a surveillance camera shows the current state of the place where the camera is installed, and is displayed on a monitor in a police station or a management office of a building.
  • In general, a surveillance camera has a zoom function that enlarges or reduces an object, providing considerable user convenience.
  • However, in a surveillance camera having the zoom function, an optical axis error occurs when an imaging device, such as a plurality of lens groups and a charge-coupled device (CCD), is assembled in the surveillance camera, or due to various factors such as a tolerance of the optical system; as a result, the optical axis of the surveillance camera changes as the zoom changes.
  • FIGS. 1A to 1C are images showing when an optical axis error occurs in an image captured by a zoom camera.
  • A portion marked with ‘+’ in FIG. 1A represents the central area of the image.
  • Comparing the image of FIG. 1A with the image of FIG. 1B, captured by zooming in the zoom camera, the central area marked with ‘+’ in FIG. 1A has moved, as shown in FIG. 1B, which shows that an optical axis error has occurred.
  • Accordingly, as shown in FIG. 1C, there is a need for a method of compensating for the optical axis error so that the central area of the image is maintained at its original position even after the image is enlarged.
  • One or more exemplary embodiments provide a method and system that determine whether an input image includes a representative feature portion through image processing, stabilize the image by an image matching method when the input image includes the representative feature portion, and compensate for the optical axis error by using optical flow information when the input image includes no representative feature portion, as well as a zoom camera including an image stabilization function.
  • A method of image stabilization includes: determining whether an input image comprises a representative feature portion; sampling the representative feature portion to generate a sampled image if it is determined that the input image comprises the representative feature portion; enlarging the sampled image, matching the enlarged sampled image with the sampled image, and obtaining a central coordinate of the matched image; and aligning an optical axis by calculating a difference between the central coordinate of the matched image and a central coordinate of the sampled image.
  • The representative feature portion may have an illuminance level higher than a predetermined illuminance level or comprise a predetermined shape, and may be selected from among a plurality of feature portions in the input image.
  • The determining of whether the input image includes a representative feature portion may include: extracting the plurality of feature portions from the input image; selecting a candidate feature portion having a predetermined feature from among the extracted feature portions; and determining that the input image includes the representative feature portion if the enlarged sampled image includes the candidate feature portion, and determining that the input image does not include the representative feature portion if it does not.
  • The matching of the enlarged sampled image with the sampled image may include performing scale invariant feature transform (SIFT) matching if the sampled image is enlarged at a magnification equal to or less than 2.5.
  • The matching of the enlarged sampled image with the sampled image may include performing template matching by resizing the sampled image by the magnification at which the sampled image is enlarged, if the sampled image is enlarged at a magnification equal to or greater than 2.5.
  • A method of image stabilization includes: determining whether an input image comprises a representative feature portion; calculating optical flows of the input image during enlargement of the input image, if it is determined that the input image does not comprise the representative feature portion; obtaining optical axis error information by using optical flow information obtained from the calculated optical flows; and aligning an optical axis by using the obtained optical axis error information.
  • The calculating of the optical flows may include obtaining the input image according to a predetermined number of frames during the enlargement of the input image.
  • The obtaining of the optical axis error information may include forming an optical flow histogram based on at least one distance from a center of the input image and at least one direction at each of the at least one distance.
  • The optical axis error information may be obtained only if directions of minimum and maximum values of the optical flows at each of the at least one distance are symmetrical to each other.
  • The optical axis error information may be obtained based on a difference between an average value and a minimum value of the optical flows at each of the at least one distance.
  • An image stabilization system includes: a representative feature portion presence determining unit which determines whether an input image comprises a representative feature portion; a sampling unit which samples the representative feature portion to generate a sampled image if it is determined that the input image comprises the representative feature portion; a matching unit which enlarges the sampled image, matches the enlarged sampled image with the sampled image, and obtains a central coordinate of the matched image; and an optical axis aligning unit which aligns an optical axis by calculating a difference between the central coordinate of the matched image and a central coordinate of the sampled image.
  • The representative feature portion may have an illuminance level higher than a predetermined illuminance level or comprise a predetermined shape, and may be selected from among a plurality of feature portions in the input image.
  • The representative feature portion presence determining unit may extract the plurality of feature portions from the input image, select a candidate feature portion having a predetermined feature from among the extracted feature portions, determine that the input image includes the representative feature portion if the enlarged sampled image includes the candidate feature portion, and determine that the input image does not include the representative feature portion if it does not.
  • The matching unit may match the enlarged sampled image with the sampled image by scale invariant feature transform (SIFT) matching if the sampled image is enlarged at a magnification equal to or less than 2.5.
  • The matching unit may match the enlarged sampled image with the sampled image by template matching, by resizing the sampled image by the magnification at which the sampled image is enlarged, if the sampled image is enlarged at a magnification equal to or greater than 2.5.
  • An image stabilization system includes: a representative feature portion presence determining unit which determines whether an input image comprises a representative feature portion; an optical flow calculating unit which calculates optical flows of the input image during enlargement of the input image, if it is determined that the input image does not comprise the representative feature portion; an error information extracting unit which obtains optical axis error information by using optical flow information obtained from the calculated optical flows; and an optical axis aligning unit which aligns an optical axis by using the obtained optical axis error information.
  • The optical flow calculating unit may calculate the optical flows by obtaining the input image according to a predetermined number of frames during the enlargement of the input image.
  • The error information extracting unit may obtain the optical axis error information by forming an optical flow histogram based on at least one distance from a center of the input image and at least one direction at each of the at least one distance.
  • The error information extracting unit may obtain the optical axis error information only if directions of minimum and maximum values of the optical flows at each of the at least one distance are symmetrical to each other.
  • The error information extracting unit may obtain the optical axis error information based on a difference between an average value and a minimum value of the optical flows at each of the at least one distance.
  • A zoom camera includes: a representative feature portion presence determining unit which determines whether an input image comprises a representative feature portion; an optical axis error calculating unit which obtains optical axis error information by matching a sampled image including the representative feature portion with an enlarged sample image if the representative feature portion presence determining unit determines that the input image comprises a representative feature portion, and by calculating optical flow information between at least two images corresponding to the input image if the representative feature portion presence determining unit determines that the input image does not comprise the representative feature portion; and an optical axis aligning unit which compensates for an optical axis error by using the obtained optical axis error information.
  • FIGS. 1A to 1C are images showing when an optical axis error occurs in an image captured by a zoom camera;
  • FIG. 2 is a block diagram of an image stabilization system according to an exemplary embodiment;
  • FIGS. 3A to 3D are views showing examples for detecting a representative feature portion according to an exemplary embodiment;
  • FIG. 4 is a view showing that a direction in which an optical axis error may occur is approximated to eight directions according to an exemplary embodiment;
  • FIGS. 5A to 5C are views showing an image obtained by sampling the image of FIGS. 1A to 1C, an image obtained by enlarging the sampled image, and an image obtained by matching the sampled image with an enlarged or resized image of the sampled image, respectively, according to an exemplary embodiment;
  • FIG. 6 is a view showing size information of an optical flow according to distances from a center of an image when an optical axis is moved to the right upper side according to an exemplary embodiment;
  • FIG. 7 is a flowchart for describing an image stabilization system according to an exemplary embodiment;
  • FIG. 8 is a flowchart for describing a method of determining whether an image includes a representative feature portion, according to an exemplary embodiment;
  • FIG. 9 is a flowchart for describing a process of calculating an optical flow by an optical flow calculating unit, according to an exemplary embodiment; and
  • FIG. 10 is a flowchart for describing a process of extracting an optical axis error by using optical flow information by an error information extracting unit, according to an exemplary embodiment.
  • FIG. 2 is a block diagram of an image stabilization system according to an exemplary embodiment.
  • The image stabilization system includes a zoom camera 1, an image preprocessor 10, a representative feature portion presence determining unit 20, a sampling unit 31, a matching unit 32, an optical flow calculating unit 33, an error information extracting unit 34, and an optical axis aligning unit 40.
  • The zoom camera 1 is installed in an area where image capture is required and provides an image obtained by capturing the area.
  • The zoom camera 1 is an apparatus capable of capturing an image, for example, a camera, a camcorder, or a closed-circuit television (CCTV).
  • The zoom camera 1 may provide a function to enlarge an input image.
  • The image preprocessor 10 converts an analog signal of image data input from the zoom camera 1 into a digital signal. Although the image preprocessor 10 is disposed outside of the zoom camera 1 in FIG. 2, it may instead be disposed inside the zoom camera 1.
  • The representative feature portion presence determining unit 20 determines whether the input image obtained from the zoom camera 1 has a representative feature portion.
  • A representative feature portion of an image refers to a portion of the image that serves as a criterion for determining an optical axis error in an input image.
  • In other words, the representative feature portion refers to a portion of the image having a particularly high illuminance and a distinctive shape, so that even when an enlarged image is compared with the image before enlargement during image processing, a user may clearly recognize that both images show the same portion.
  • The input image may have one or more representative feature portions.
  • FIGS. 3A to 3D are views showing examples for detecting a representative feature portion according to an exemplary embodiment.
  • When a portion marked with a star in an image is referred to as a feature portion, the image of FIG. 3A has a plurality of feature portions.
  • In this case, the representative feature portion presence determining unit 20 may determine a portion where some of the feature portions overlap with one another, or a strong feature portion, to be a candidate feature portion.
  • In FIGS. 3A to 3C, the candidate feature portion determined to be a representative feature portion by the representative feature portion presence determining unit 20 is marked with a black star.
  • When the image is enlarged by zooming in the zoom camera 1, an enlarged image of an area Z1 may not include a candidate feature portion.
  • In this case, the representative feature portion presence determining unit 20 may redetermine the candidate feature portion in the area Z1.
  • By repeating this process, if a finally enlarged image, such as the image of an area Z2 in FIG. 3C, includes a candidate feature portion, the representative feature portion presence determining unit 20 may determine the candidate feature portion to be a representative feature portion.
  • If a finally enlarged image, such as the image of an area Z3 in FIG. 3D, does not include any feature portion that can be regarded as a representative feature portion, the representative feature portion presence determining unit 20 may determine that the image does not include a representative feature portion. In that case, optical axis error information may be obtained by using optical flow information, as described later.
  • In addition, the representative feature portion presence determining unit 20 may determine whether an input image includes a representative feature portion only in a specific direction. Since an optical axis error of the zoom camera 1 generally occurs only in one direction, the unit may check for a feature portion only in the direction in which the optical axis error occurs, to reduce the amount of operations.
  • FIG. 4 is a view showing that a direction in which an optical axis error may occur is approximated to eight directions according to an exemplary embodiment.
  • The representative feature portion presence determining unit 20 may determine the presence of a representative feature portion only in the eight directions shown in FIG. 4, which reduces the amount of operations compared to detecting the representative feature portion in all directions.
  • The sampling unit 31 samples a portion of an image including a feature portion.
  • An area where the sampling is performed by the sampling unit 31 includes a feature portion and may have a size suitable for matching between images.
  • If an input image includes a plurality of feature portions, the input image may include a plurality of sampled areas as well.
  • The sampling unit 31 may also obtain a central coordinate of the sampled area.
  • The central coordinate obtained from the sampled area may be represented by (x_s, y_s).
  • FIGS. 5A to 5C are views showing an image obtained by sampling the image of FIGS. 1A to 1C , an image obtained by enlarging the sampled image, and an image obtained by matching the sampled image with an enlarged or resized image of the sampled image, respectively, according to an exemplary embodiment.
  • The portion marked with ‘+’ in the image of FIG. 1A, that is, a central area and a peripheral area of the image, are sampled.
  • If the representative feature portion presence determining unit 20 determines that a light in the center of the image is a representative feature portion, a predetermined area including the light may be obtained as a sampled image, as shown in FIG. 5A.
  • The sampling unit 31 may also resize the sampled image, as shown in FIG. 5B, to perform the template matching described below.
  • The matching unit 32 matches a sampled image with an enlarged image of the input image. If the input image is enlarged at a magnification equal to or less than 2.5, the matching unit 32 performs matching by scale invariant feature transform (SIFT) matching.
  • The SIFT matching smoothly matches images by extracting feature points generated at edges or vertices of an object as vectors, even when the image changes due to, for example, scaling, rotation, or a change in lighting.
  • In view of these characteristics, the SIFT matching may match the enlarged input image with the sampled image regardless of the size and direction of the image.
  • If the input image is enlarged at a magnification equal to or greater than 2.5, the matching unit 32 may perform matching first and then enlarge the input image by the remaining magnification, repeating the matching and compensation for the sampled image whenever the optical axis moves. Accordingly, in this case the matching unit 32 may perform template matching by resizing the sampled image by the magnification at which the sampled portion is enlarged (see the template-matching sketch following this list).
  • The matching unit 32 obtains a central coordinate of the matched area after the matching.
  • For example, the matching unit 32 may obtain a central coordinate of a matched area in the image of FIG. 5A, that is, a central coordinate of the brightest area.
  • The obtained central coordinate may be represented by (x_m, y_m).
  • The optical axis aligning unit 40 may move the zoom camera 1 by (x_s - x_m) in a pan direction and by (y_s - y_m) in a tilt direction by using the obtained central coordinates.
  • If the representative feature portion presence determining unit 20 determines that an image does not include a representative feature portion and a plurality of feature points are scattered throughout the image, a general SIFT matching method or template matching method may not be used. In this case, the optical axis error of the zoom camera 1 is extracted and compensated for by calculating an optical flow between images. According to an exemplary embodiment, the representative feature portion presence determining unit 20 may determine that an image does not include a representative feature portion if the image does not have any portion with an illuminance level higher than a predetermined level.
  • The optical flow calculating unit 33 calculates an optical flow between images while the zoom camera 1 is enlarging the image.
  • The optical flow calculating unit 33 may set an image before its enlargement as an image A_0 and may set an image after a predetermined period of time elapses (after N frames) as an image A_1, where N is an integer greater than or equal to 1.
  • Calculating an optical flow of an image in every frame increases the amount of operations and thus decreases process efficiency. Accordingly, in the current embodiment, the amount of operations may be reduced, without missing a feature portion, by calculating an optical flow of an image only at predetermined frames (see the optical-flow sketch following this list).
  • The optical flow calculating unit 33 may store the optical flow information obtained at the time of enlargement using the images A_0 and A_1 as OF_1. By using this method, the optical flow calculating unit 33 may obtain image information A_0, A_1, A_2, ..., A_k at every (N+1)-th frame and obtain optical flow information OF_0, OF_1, OF_2, ..., OF_(k-1) between the images, where k is an integer greater than or equal to 1.
  • The error information extracting unit 34 obtains the distances of the optical flows from the centers of the images by using the obtained optical flow information, and then forms a two-dimensional (2D) histogram with respect to L distances and M directions, where L and M are each an integer greater than or equal to 1.
  • The error information extracting unit 34 extracts a minimum value and a maximum value of the optical flow with respect to each direction at each distance with reference to the formed 2D histogram.
  • Since the size of an optical flow is proportional to its distance from the center of the image, in view of the characteristics of the camera curvature, reliable information may be extracted by obtaining the direction and the size of the optical flow for each distance; thus, the error information extracting unit 34 obtains the distances of optical flow feature portions from the center of the image.
  • FIG. 6 is a view showing size information of an optical flow according to distances from a center of an image when an optical axis is moved to the right upper side according to an exemplary embodiment.
  • FIG. 6 shows a case where the optical axis of the zoom camera 1 is moved to the upper-right side, in a direction (a). If the optical axis of the zoom camera 1 is not moved, the optical flows of an area C1, spaced apart from a center C of the image by a distance d, will point radially outward from the circle with equal sizes.
  • However, in the optical flow of the area C1 spaced apart from the center C of the image by the distance d, the flow in the direction (a) has a smaller size than that in a direction (b).
  • In other words, an optical flow in the direction (a) in the area C1 and an optical flow in the direction (b) have different sizes, unlike the case where no optical axis error occurs.
  • The optical flow in the direction (a) is smaller than that in the direction (b) because the center C of the image has moved in the direction (a) compared to the image before its enlargement.
  • Also, as the distance from the center C increases, the size of the optical flow increases.
  • The size of the optical flow increases in the order of the areas C1, C2, and C3 because an area farther from the center C of the image moves further when the image is enlarged by the zoom camera 1.
  • The error information extracting unit 34 may form a histogram in M directions with respect to L distances by using the optical flow information obtained by the optical flow calculating unit 33.
  • The error information extracting unit 34 may set the L distances at an appropriate level within a range in which the amount of operations is not excessively increased.
  • For example, L may be three, as in FIG. 6.
  • The M directions may be set to eight. If the entire angle range is divided into eight, as shown in FIG. 4, approximation with respect to the angle is performed, and thus an approximate value with respect to an adjacent direction may be considered to decrease an error.
  • The error information extracting unit 34 extracts a minimum value and a maximum value of the optical flow with respect to each direction at each distance based on the formed histogram.
  • In view of the characteristics of an optical axis error of the zoom camera 1, the size of the optical flow is proportional to the distance from the center of the image, and the directions and sizes of the optical flows differ from one another; the error information extracting unit 34 therefore extracts the minimum value and the maximum value.
  • In other words, the sizes of optical flows within an area at the same distance from the center C are different from each other, and the sizes of the optical flows are also different from one another in the areas C1, C2, and C3, which are spaced apart from the center C of the image at different distances.
  • Furthermore, as the distance from the center increases, the size of the optical flow increases, and the optical flow in the direction (a), which is the same as the direction of the optical axis error, is smaller than the optical flow in the direction (b). Accordingly, reliable error compensation may be performed by obtaining optical axis error data with respect to both the distance and the direction.
  • The error information extracting unit 34 determines whether the directions of the minimum and maximum values are symmetrical to each other and, if they are, obtains the difference between the minimum value and the average value of the optical flows at each distance. It then extracts the error information by summing the differences over the L distances and dividing the sum by L (see the histogram sketch following this list).
  • Since the size of the optical flow is small in the direction (a), in which the optical axis has moved, and large in the direction (b) opposite to the direction (a), the directions of the minimum and maximum values are symmetrical to each other. If the directions of the minimum and maximum values are not symmetrical to each other, it is determined that the optical axis has not moved in any one direction, which makes it difficult to extract optical axis error information by using a change in the optical flow.
  • The error information extracting unit 34 obtains the difference between the minimum value and the average value of the optical flows at each of the L distances, adds up the differences obtained at the L distances, and divides the sum of the differences by L to extract the error information of the optical axis. That is, the error information extracting unit 34 may obtain the optical axis error information by using the calculated average value. Referring back to FIG. 6, the optical axis error information at the distance of the area C1 may be obtained by calculating the difference between the minimum value and the average value of the optical flows at that distance.
  • The optical axis aligning unit 40 compensates for an optical axis error by using the optical axis error information obtained by the matching unit 32 or the error information extracting unit 34.
  • The compensation for the optical axis error may be performed through image processing, by converting the degree to which the optical axis has moved into a coordinate, instead of using a physical method.
  • FIG. 7 is a flowchart for describing an image stabilization system according to an exemplary embodiment.
  • The image stabilization system obtains an input image having a specific zoom value by using the zoom camera 1 (operation S1).
  • The representative feature portion presence determining unit 20 determines whether the obtained input image includes a representative feature portion (operation S2).
  • The representative feature portion may refer to a portion of the input image set as a representative value, from among a plurality of distinctive feature portions, to compensate for the optical axis error during image processing.
  • If the representative feature portion presence determining unit 20 determines that the input image includes the representative feature portion, the representative feature portion of the input image is sampled to generate a sampled image, and a central coordinate of the sampled area is obtained and set as (x_s, y_s) (operation F1).
  • The input image is enlarged by using the zoom camera 1, and image information of an enlarged area of the input image is obtained (operation F2).
  • If the input image is enlarged at a magnification equal to or less than 2.5, the sampled image and the enlarged input image are matched with each other by SIFT matching (operation F4).
  • Otherwise, the sampled image may be resized by the magnification at which the input image is enlarged, and then the sampled image and the enlarged input image are matched with each other by template matching (operation F5).
  • The optical axis aligning unit 40 then aligns the moved optical axis by moving the central coordinate by (x_s - x_m) in a pan direction and by (y_s - y_m) in a tilt direction to compensate for the optical axis error.
  • If the input image does not include a representative feature portion, the optical flow calculating unit 33 obtains an image right before the input image is enlarged (operation O1).
  • The optical flow calculating unit 33 then enlarges the image by using the zoom camera 1 and calculates an optical flow of the image during the enlargement (operation O2).
  • The error information extracting unit 34 extracts optical axis error information by using the optical flow information (operation O3).
  • The error information extracting unit 34 may obtain the optical axis error information based on the optical flow information in the movement direction of the optical axis and the direction opposite thereto, by using the fact that an optical axis moves only in one direction.
  • The optical axis aligning unit 40 compensates for the optical axis error by moving the pan and tilt values by the size of the optical axis error information, in the direction opposite to the direction of the optical axis error information (operation O4).
  • FIG. 8 is a flowchart for describing a method of determining whether an image includes a representative feature portion, according to an exemplary embodiment.
  • The representative feature portion presence determining unit 20 extracts all feature portions from an image before its enlargement (operation S21).
  • A feature portion that may serve as a representative feature portion of the image is set as a candidate feature portion (operation S22), and the image is enlarged (operation S23).
  • If the enlarged image includes a candidate feature portion, it is determined whether the image needs to be additionally enlarged (operation S26). If the image needs to be additionally enlarged, the method returns to operation S23, and the operations are repeated.
  • If no additional enlargement is needed, the representative feature portion presence determining unit 20 sets the candidate feature portion as a representative feature portion (operation S27).
  • Otherwise, the representative feature portion presence determining unit 20 may determine that the input image does not include a representative feature portion.
  • FIG. 9 is a flowchart for describing a process of calculating an optical flow by the optical flow calculating unit 33, according to an exemplary embodiment.
  • The optical flow calculating unit 33 obtains an image right before its enlargement and stores the image as an image A_0 (operation O21).
  • The optical flow calculating unit 33 calculates optical flows between the image A_0 and an image A_1 after N frames, and stores the optical flows as OF_1 (operation O22).
  • The optical flow calculating unit 33 then calculates optical flows between the image A_1 and an image A_2 after N frames from the frame of the image A_1, and stores the optical flows as OF_2 (operation O23).
  • In the same manner, the optical flow calculating unit 33 calculates optical flows between an image A_(k-1) and an image A_k at an interval of N frames, and stores the optical flows as OF_(k-1) (operation O24). For efficiency of operation, the optical flow calculating unit 33 calculates an optical flow every (N+1)-th frame instead of calculating an optical flow of the image in every frame.
  • FIG. 10 is a flowchart for describing a process of extracting an optical axis error by using optical flow information by the error information extracting unit 34, according to an exemplary embodiment.
  • The error information extracting unit 34 calculates the distances of the stored optical flow feature portions from the center of the image (operation O31).
  • The error information extracting unit 34 forms an optical flow histogram in M directions with respect to the L distances from the center of the image (operation O32). As described above with reference to FIGS. 4 and 6, the error information extracting unit 34 may form the optical flow histogram in eight directions with respect to three distances from the center of the image.
  • The error information extracting unit 34 extracts a maximum value and a minimum value in every direction with respect to the L distances (operation O33). As described above, when the optical axis moves in any one direction, the optical flow in the direction in which the optical axis moves has a minimum value, and the optical flow in the opposite direction has a maximum value.
  • The error information extracting unit 34 may calculate the optical axis error information by using the difference between the maximum value and the minimum value.
  • The error information extracting unit 34 determines whether the directions of the minimum and maximum values are symmetrical to each other at each of the L distances (operation O34). If the optical axis has moved in any one direction, the directions of the minimum and maximum values are symmetrical to each other.
  • If they are symmetrical, the error information extracting unit 34 extracts the average value of the optical flows at each of the L distances and stores the average values as (Avg_1, ..., Avg_L) (operation O35).
  • If they are not symmetrical, the error information extracting unit 34 may not use the feature portion information of the corresponding distances (operation O36).
  • The error information extracting unit 34 may extract the optical axis error information by obtaining the difference between the minimum value and the average value of the optical flows at each of the L distances, adding up the differences obtained at the L distances, and dividing the sum of the differences by L.
  • The optical axis error information may thus be obtained by the equation Σ(Avg_a - Min_a)/L, where a ranges from 1 to L (operation O37).
  • As described above, an image stabilization system according to the exemplary embodiments may effectively compensate for an optical axis error through image processing.
  • That is, the optical axis error may be compensated for through image processing, without physically changing the settings of the zoom camera.
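
The following is a minimal sketch of the template-matching path referenced above (the case where the magnification is 2.5 or more), assuming OpenCV; the function name template_match_center and its inputs are illustrative, not part of the patent. The sampled image is resized by the zoom magnification and located inside the enlarged input image, yielding the matched central coordinate (x_m, y_m).

```python
import cv2

def template_match_center(sampled, enlarged, magnification):
    """Resize the sampled patch by the zoom magnification, locate it in the
    enlarged input image, and return the matched center (x_m, y_m)."""
    templ = cv2.resize(sampled, None, fx=magnification, fy=magnification,
                       interpolation=cv2.INTER_LINEAR)
    # Normalized cross-correlation; the peak marks the best match.
    result = cv2.matchTemplate(enlarged, templ, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(result)  # top-left corner of best match
    th, tw = templ.shape[:2]
    return (max_loc[0] + tw / 2.0, max_loc[1] + th / 2.0)
```

The optical axis aligning unit would then pan by (x_s - x_m) and tilt by (y_s - y_m), as described above.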
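Below is a sketch of the frame-sampled optical flow computation (images A_0, A_1, ..., A_k taken every (N+1)-th frame), assuming OpenCV's dense Farneback estimator; the patent does not prescribe a particular optical flow algorithm, so this choice is illustrative.

```python
import cv2

def collect_optical_flows(frames, n=5):
    """Given grayscale frames captured during the zoom, keep every (N+1)-th
    frame (A_0, A_1, ..., A_k) and return the flows between consecutive pairs."""
    sampled = frames[::n + 1]  # skip N frames between kept images
    flows = []
    for prev, nxt in zip(sampled, sampled[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                            pyr_scale=0.5, levels=3,
                                            winsize=15, iterations=3,
                                            poly_n=5, poly_sigma=1.2, flags=0)
        flows.append(flow)  # flow[y, x] = (dx, dy) per pixel
    return flows
```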
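Finally, a sketch of the histogram-based error extraction (operations O31 to O37), under the same assumptions: L = 3 distance bands, M = 8 directions, and the error Σ(Avg_a - Min_a)/L computed only when the minimum and maximum directions are opposite (symmetrical) at every distance. The binning details are illustrative choices.

```python
import numpy as np

def extract_axis_error(flow, num_dist=3, num_dir=8):
    """Form the L-distance x M-direction histogram of flow sizes; return
    (error size, error direction bin) or None if the symmetry check fails."""
    h, w = flow.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - w / 2.0, ys - h / 2.0)           # distance from center
    mag = np.hypot(flow[..., 0], flow[..., 1])            # flow size
    ang = (np.degrees(np.arctan2(flow[..., 1], flow[..., 0])) + 22.5) % 360.0
    dir_bin = (ang // 45.0).astype(int) % num_dir         # M = 8 directions
    dist_bin = np.minimum((dist / (dist.max() / num_dist)).astype(int),
                          num_dist - 1)                   # L = 3 distance bands
    diffs, min_dirs = [], []
    for a in range(num_dist):
        avg = [mag[(dist_bin == a) & (dir_bin == d)].mean()
               for d in range(num_dir)]
        d_min, d_max = int(np.argmin(avg)), int(np.argmax(avg))
        # Min and max directions must be opposite (symmetry check, O34/O36).
        if (d_min + num_dir // 2) % num_dir != d_max:
            return None  # axis did not move in a single direction
        diffs.append(float(np.mean(avg) - avg[d_min]))    # Avg_a - Min_a
        min_dirs.append(d_min)                            # error direction
    return sum(diffs) / num_dist, min_dirs[0]             # sum/L (O37)
```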

Abstract

A method and system for image stabilization through image processing, and a zoom camera including an image stabilization function. The method includes: determining whether an input image comprises a representative feature portion; sampling the representative feature portion to generate a sampled image if it is determined that the input image comprises the representative feature portion; enlarging the input image by optical zooming, matching the enlarged input image with the sampled image, and obtaining a central coordinate of the enlarged input image and a central coordinate of the sampled image; and aligning an optical axis by calculating a difference between the central coordinate of the enlarged input image and the central coordinate of the sampled image.

Description

CROSS-REFERENCE TO RELATED PATENT APPLICATION
This application claims priority from Korean Patent Application No. 10-2011-0146112, filed on Dec. 29, 2011, in the Korean Intellectual Property Office, the disclosure of which is incorporated herein in its entirety by reference.
BACKGROUND
1. Field
Methods and apparatuses consistent with exemplary embodiments relate to image stabilization through image processing, and a zoom camera including an image stabilization function.
2. Description of the Related Art
Recently, many surveillance cameras have been installed in buildings and on streets to prevent crime and theft. An image captured by a surveillance camera shows the current state of the place where the camera is installed and is displayed on a monitor in a police station or a management office of a building. In general, a surveillance camera has a zoom function that enlarges or reduces an object, providing considerable user convenience.
However, in the surveillance camera having the zoom function, an optical axis error occurs when an imaging device, such as a plurality of lens groups and a charge-coupled device (CCD), is assembled in the surveillance camera, or due to various factors such as a tolerance of the optical system; as a result, the optical axis of the surveillance camera changes as the zoom changes.
FIGS. 1A to 1C are images showing when an optical axis error occurs in an image captured by a zoom camera.
Referring to FIG. 1A, a portion marked with ‘+’ represents the central area of the image. However, comparing the image of FIG. 1A with the image of FIG. 1B, captured by zooming in the zoom camera, the central area marked with ‘+’ in FIG. 1A has moved, as shown in FIG. 1B, which shows that an optical axis error has occurred. Accordingly, as shown in FIG. 1C, there is a need for a method of compensating for the optical axis error so that the central area of the image is maintained at its original position even after the image is enlarged.
In general, a method of mechanical image stabilization has been provided. However, this method has a problem in that, if a physical impact is applied to the camera after the optical axis error is compensated for, or if a long time elapses, physical compensation must be applied to the camera again.
SUMMARY
One or more exemplary embodiments provide a method and system that determine whether an input image includes a representative feature portion through image processing, stabilize the image by an image matching method when the input image includes the representative feature portion, and compensate for the optical axis error by using optical flow information when the input image includes no representative feature portion, as well as a zoom camera including an image stabilization function.
According to an aspect of an exemplary embodiment, there is provided a method of image stabilization, the method including: determining whether an input image comprises a representative feature portion; sampling the representative feature portion to generate a sampled image if it is determined that the input image comprises the representative feature portion; enlarging the sampled image, matching the enlarged sampled image with the sampled image, and obtaining a central coordinate of the matched image; and aligning an optical axis by calculating a difference between the central coordinate of the matched image and a central coordinate of the sampled image.
The representative feature portion may have an illuminance level higher than a predetermined illuminance level or comprise a predetermined shape, and may be selected from among a plurality of feature portions in the input image.
The determining of whether the input image includes a representative feature portion may include: extracting the plurality of feature portions from the input image; selecting a candidate feature portion having a predetermined feature from among the extracted feature portions; and determining that the input image includes the representative feature portion if the enlarged sampled image includes the candidate feature portion, and determining that the input image does not include the representative feature portion if the enlarged sampled image does not include the candidate feature portion.
The matching of the enlarged sampled image with the sampled image may include performing scale invariant feature transform (SIFT) matching if the sampled image is enlarged at a magnification equal to or less than 2.5.
The matching of the enlarged sampled image with the sampled image may include performing template matching by resizing the sampled image by a magnification at which the sampled image is enlarged if the sampled image is enlarged at a magnification equal to or greater than 2.5.
According to an aspect of another exemplary embodiment, there is provided a method of image stabilization, the method including: determining whether an input image comprises a representative feature portion; calculating optical flows of the input image during enlargement of the input image, if it is determined that the input image does not comprise the representative feature portion; obtaining optical axis error information by using optical flow information obtained from the calculated optical flows; and aligning an optical axis by using the obtained optical axis error information.
The calculating of the optical flows may include obtaining the input image according to a predetermined number of frames during the enlargement of the input image.
The obtaining of the optical axis error information may include forming an optical flow histogram based on at least one distance from a center of the input image and at least one direction at each of the at least one distance.
The optical axis error information may be obtained only if directions of minimum and maximum values of the optical flows at each of the at least one distance are symmetrical to each other.
The optical axis error information may be obtained based on a difference between an average value and a minimum value of the optical flows at each of the at least one distance.
According to an aspect of another exemplary embodiment, there is provided an image stabilization system including: a representative feature portion presence determining unit which determines whether an input image comprises a representative feature portion; a sampling unit which samples the representative feature portion to generate a sample image if it is determined that the input image comprises the representative feature portion; a matching unit which enlarges the sampled image, matches the enlarged sampled image with the sampled image, and obtains a central coordinate of the matched image; and an optical axis aligning unit which aligns an optical axis by calculating a difference between the central coordinate of the matched image and a central coordinate of the sampled image.
The representative feature portion may have an illuminance level higher than a predetermined illuminance level or comprise a predetermined shape, and may be selected from among a plurality of feature portions in the input image.
The representative feature portion presence determining unit may extract the plurality of feature portions from the input image, select a candidate feature portion having a predetermined feature from among the extracted feature portions, determine that the input image includes the representative feature portion if the enlarged sampled image includes the candidate feature portion, and determine that the input image does not include the representative feature portion if the enlarged sampled image does not include the candidate feature portion.
The matching unit may match the enlarged sampled image with the sampled image by scale invariant feature transform (SIFT) matching if the sampled image is enlarged at a magnification equal to or less than 2.5.
The matching unit may match the enlarged sampled image with the sampled image by template matching by resizing the sampled image by the magnification at which the sampled image is enlarged if the sampled image is enlarged at a magnification equal to or greater than 2.5.
According to an aspect of another exemplary embodiment, there is provided an image stabilization system including: a representative feature portion presence determining unit which determines whether an input image comprises a representative feature portion; an optical flow calculating unit which calculates optical flows of the input image during enlargement of the input image, if it is determined that the input image does not comprise the representative feature portion; an error information extracting unit which obtains optical axis error information by using optical flow information obtained from the calculated optical flows; and an optical axis aligning unit which aligns an optical axis by using the obtained optical axis error information.
The optical flow calculating unit may calculate the optical flows by obtaining the input image according to a predetermined number of frames during the enlargement of the input image.
The error information extracting unit may obtain the optical axis error information by forming an optical flow histogram based on at least one distance from a center of the input image and at least one direction at each of the at least one distance.
The error information extracting unit may obtain the optical axis error information only if directions of minimum and maximum values of the optical flows at each of the at least one distance are symmetrical to each other.
The error information extracting unit may obtain the optical axis error information based on a difference between an average value and a minimum value of the optical flows at each of the at least one distance.
According to an aspect of another exemplary embodiment, there is provided a zoom camera including: a representative feature portion presence determining unit which determines whether an input image comprises a representative feature portion; an optical axis error calculating unit which obtains optical axis error information by matching a sample image including the representative feature portion and an enlarged sample image if the representative feature portion presence determining unit determines that the input image comprises a representative feature portion, and obtaining the optical axis error information by calculating optical flow information between at least two images corresponding to the input image if the representative feature portion presence determining unit determines that the input image does not comprise the representative feature portion; and an optical axis aligning unit which compensates for an optical axis error by using the obtained optical axis error information.
BRIEF DESCRIPTION OF THE DRAWINGS
The above and other aspects will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings, in which:
FIGS. 1A to 1C are images showing when an optical axis error occurs in an image captured by a zoom camera;
FIG. 2 is a block diagram of an image stabilization system according to an exemplary embodiment;
FIGS. 3A to 3D are views showing examples for detecting a representative feature portion according to an exemplary embodiment;
FIG. 4 is a view showing that a direction in which an optical axis error may occur is approximated to eight directions according to an exemplary embodiment;
FIGS. 5A to 5C are views showing an image obtained by sampling the image of FIGS. 1A to 1C, an image obtained by enlarging the sampled image, and an image obtained by matching the sampled image with an enlarged or resized image of the sampled image, respectively, according to an exemplary embodiment;
FIG. 6 is a view showing size information of an optical flow according to distances from a center of an image when an optical axis is moved to the right upper side according to an exemplary embodiment;
FIG. 7 is a flowchart for describing an image stabilization system according to an exemplary embodiment;
FIG. 8 is a flowchart for describing a method of determining whether an image includes a representative feature portion, according to an exemplary embodiment;
FIG. 9 is a flowchart for describing a process of calculating an optical flow by an optical flow calculating unit, according to an exemplary embodiment; and
FIG. 10 is a flowchart for describing a process of extracting an optical axis error by using optical flow information by an error information extracting unit, according to an exemplary embodiment.
DETAILED DESCRIPTION OF THE EXEMPLARY EMBODIMENTS
The inventive concept will now be described more fully with reference to the accompanying drawings, in which exemplary embodiments of the inventive concept are shown. The exemplary embodiments will be described in detail such that one of ordinary skill in the art may easily work the inventive concept. It should be understood that the exemplary embodiments of the inventive concept may vary but do not have to be mutually exclusive. For example, particular shapes, structures, and properties according to a predetermined embodiment described in this specification may be modified in other embodiments without departing from the spirit and scope of the inventive concept. In addition, positions or arrangement of individual components of each of the exemplary embodiments may also be modified without departing from the spirit and scope of the inventive concept. Accordingly, the detailed description below should not be construed as having limited meanings but construed to encompass the scope of the claims and any equivalent ranges thereto. In the drawings, like reference numerals denote like elements in various aspects.
FIG. 2 is a block diagram of an image stabilization system according to an exemplary embodiment.
Referring to FIG. 2, the image stabilization system includes a zoom camera 1, an image preprocessor 10, a representative feature portion presence determining unit 20, a sampling unit 31, a matching unit 32, an optical flow calculating unit 33, an error information extracting unit 34, and an optical axis aligning unit 40.
The zoom camera 1 is installed in an area where image capture is required and provides an image obtained by capturing the area. The zoom camera 1 is an apparatus capable of capturing an image, for example, a camera, a camcorder, or a closed-circuit television (CCTV). In the current embodiment, the zoom camera 1 may provide a function to enlarge an input image.
The image preprocessor 10 converts an analog signal of image data input by the zoom camera 1 into a digital signal. Although the image preprocessor 10 is disposed outside of the zoom camera 1 in FIG. 2, the image preprocessor 10 may be disposed inside the zoom camera 1.
The representative feature portion presence determining unit 20 determines whether the input image obtained from the zoom camera 1 has a feature portion.
In the current embodiment, a representative feature portion of an image refers to a portion of the image that serves as a criterion for determining an optical axis error in an input image. In other words, the representative feature portion refers to a portion of the image having a particularly high illuminance and a distinctive shape, so that even when an enlarged image is compared with the image before enlargement during image processing, a user may clearly recognize that both images show the same portion. In the current embodiment, the input image may have one or more representative feature portions.
FIGS. 3A to 3D are views showing examples for detecting a representative feature portion according to an exemplary embodiment.
Referring to FIG. 3A, when a portion marked with a star in an image is referred to as a feature portion, the image of FIG. 3A has a plurality of feature portions. In this case, the representative feature portion presence determining unit 20 may determine a portion where some of a plurality of feature portions overlap with one another or a strong feature portion to be a candidate feature portion. In FIGS. 3A to 3C, the candidate feature portion determined to be a representative feature portion by the representative feature portion presence determining unit 20 is marked with a black star.
Referring to FIG. 3B, when the image is enlarged by zooming in the zoom camera 1, an enlarged image of an area Z1 may not include a candidate feature portion. In this case, the representative feature portion presence determining unit 20 may redetermine the candidate feature portion in the area Z1. By repeating the above-described process, assuming that an image of an area Z2 in FIG. 3C is a finally enlarged image by zooming in the zoom camera 1, if the image of the enlarged area Z2 includes a candidate feature portion, the representative feature portion presence determining unit 20 may determine the candidate feature portion to be a representative feature portion.
Alternatively, assuming that an image of an area Z3 in FIG. 3D is a finally enlarged image, if the finally enlarged image does not include any feature portion or a feature portion that can be regarded as a representative feature portion, the representative feature portion presence determining unit 20 may determine that the image does not include a representative feature portion. If the image does not include a representative feature portion, optical axis error information may be obtained by using optical flow information, as described later.
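As a concrete illustration of this candidate-selection loop, the following is a minimal sketch assuming OpenCV; the corner detector stands in for the patent's unspecified feature extractor, and find_representative_feature, along with the zoom levels, is a hypothetical name chosen here for illustration.

```python
import cv2

def find_representative_feature(image, zoom_levels=(1.5, 2.0, 2.5)):
    """Return (x, y) of a representative feature portion, or None if the
    zoomed view loses every candidate (optical flow is then used instead)."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Stand-in feature extractor: strong corners, ordered strongest first.
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=50,
                                      qualityLevel=0.05, minDistance=10)
    if corners is None:
        return None
    candidates = [tuple(pt.ravel()) for pt in corners]
    h, w = gray.shape
    cx, cy = w / 2.0, h / 2.0
    for zoom in zoom_levels:
        # Area still visible when zooming about the image center (Z1, Z2, ...).
        half_w, half_h = w / (2.0 * zoom), h / (2.0 * zoom)
        # Redetermine: keep only candidates inside the zoomed area.
        candidates = [(x, y) for (x, y) in candidates
                      if abs(x - cx) <= half_w and abs(y - cy) <= half_h]
        if not candidates:
            return None
    return candidates[0]  # strongest candidate surviving the final zoom
```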
In addition, the representative feature portion presence determining unit 20 may determine whether an input image includes a representative feature portion only in a specific direction. Since an optical axis error of the zoom camera 1 generally occurs in only one direction, the representative feature portion presence determining unit 20 may determine the presence of a feature portion only in the direction in which the optical axis error occurs, thereby reducing the amount of operations.
FIG. 4 is a view showing that a direction in which an optical axis error may occur is approximated to eight directions according to an exemplary embodiment.
Referring to FIG. 4, it is assumed that an optical axis error occurs only in the eight directions No. 1 to No. 8. In detail, an optical axis error between −22.5 degrees and +22.5 degrees is approximated as occurring in the 0-degree direction. The representative feature portion presence determining unit 20 may determine the presence of a representative feature portion only in the eight directions shown in FIG. 4, which reduces the amount of operations compared to detecting the presence of the representative feature portion in all directions.
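For illustration only (this helper is not part of the disclosure), a minimal Python sketch of the eight-direction approximation might map an arbitrary vector to its nearest 45-degree bin; the function name quantize_direction is a hypothetical choice:

```python
import math

def quantize_direction(dx, dy, m=8):
    """Map a flow or error vector (dx, dy) to the nearest of m equally
    spaced directions; with m = 8 each bin is 45 degrees wide, so any
    angle between -22.5 and +22.5 degrees falls into the 0-degree bin."""
    angle = math.degrees(math.atan2(dy, dx)) % 360.0
    bin_width = 360.0 / m
    return int((angle + bin_width / 2.0) // bin_width) % m

# A vector at +20 degrees falls into bin 0 (the 0-degree direction).
assert quantize_direction(math.cos(math.radians(20.0)),
                          math.sin(math.radians(20.0))) == 0
```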
Hereinafter, a method of image stabilization if the representative feature portion presence determining unit 20 determines that an image includes a feature portion will be described. A method of image stabilization if the representative feature portion presence determining unit 20 determines that an image does not include a feature portion will be described later.
The sampling unit 31 samples a portion of an image including a feature portion. An area where the sampling is performed by the sampling unit 31 includes a feature portion and may have a size suitable for matching between images. In the current embodiment, if an input image includes a plurality of feature portions, the input image may include a plurality of sampled areas as well.
Also, the sampling unit 31 may obtain a central coordinate of the sampled area. The central coordinate obtained from the sampled area may be represented by (x_s, y_s).
FIGS. 5A to 5C are views showing an image obtained by sampling the image of FIGS. 1A to 1C, an image obtained by enlarging the sampled image, and an image obtained by matching the sampled image with an enlarged or resized image of the sampled image, respectively, according to an exemplary embodiment.
Referring to FIG. 5A, the portion marked with ‘+’ in the image of FIG. 1A, that is, a central area and a peripheral area of the image, are sampled. In other words, if the representative feature portion presence determining unit 20 determines that a light in the center of the image is a representative feature portion, a predetermined area including the light may be obtained as a sampled image as shown in FIG. 5A. Also, the sampling unit 31 may resize the sampled image, as shown in FIG. 5B, to perform template matching to be described below.
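As a rough illustration of this sampling step, assuming the input image is a NumPy array and the representative feature coordinate is already known, a hypothetical helper such as sample_patch below crops a fixed-size area around the feature and records its central coordinate (x_s, y_s); the 64-pixel half-size is an arbitrary illustrative choice, not taken from the patent:

```python
import numpy as np

def sample_patch(image, feature_xy, half=64):
    """Crop a patch around the representative feature and return the
    patch together with its central coordinate (x_s, y_s)."""
    x, y = feature_xy
    h, w = image.shape[:2]
    x0, x1 = max(0, x - half), min(w, x + half)  # clamp to image bounds
    y0, y1 = max(0, y - half), min(h, y + half)
    patch = image[y0:y1, x0:x1]
    center = ((x0 + x1) / 2.0, (y0 + y1) / 2.0)  # (x_s, y_s)
    return patch, center
```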
Referring back to FIG. 2, the matching unit 32 matches the sampled image with an enlarged image of the input image. If the input image is enlarged at a magnification equal to or less than 2.5, the matching unit 32 performs matching by scale invariant feature transform (SIFT) matching. SIFT matching extracts feature points generated at edges or vertices of an object as vectors, and thus matches images reliably even when an image changes in, for example, size, rotation, or lighting. In other words, owing to these characteristics, SIFT matching may match the enlarged input image with the sampled image regardless of the size and direction of the image.
Conversely, if the input image is enlarged at a magnification greater than 2.5, the matching unit 32 may match at an intermediate magnification and then enlarge the input image by the remaining magnification, repeating this matching and compensating whenever the optical axis moves. Accordingly, in this case the matching unit 32 may perform template matching after resizing the sampled image by the magnification at which the input image is enlarged.
Also, the matching unit 32 obtains a central coordinate of the matched area after the matching. Referring back to FIGS. 5A to 5C, the matching unit 32 may obtain, for example, the central coordinate of the matched area, that is, the central coordinate of the brightest area. The obtained central coordinate may be represented by (x_m, y_m). The optical axis aligning unit 40 may then move the zoom camera 1 by (x_s − x_m) in a pan direction and by (y_s − y_m) in a tilt direction by using the obtained central coordinates.
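The two matching branches can be sketched with OpenCV as follows; the 2.5 threshold follows the text, while the function name match_center, the keypoint count, and the use of a keypoint centroid are illustrative assumptions rather than the patented method:

```python
import cv2
import numpy as np

def match_center(sampled, enlarged, magnification):
    """Locate the sampled patch inside the enlarged frame and return the
    central coordinate (x_m, y_m) of the matched area."""
    if magnification <= 2.5:
        # SIFT branch: keypoints are matched regardless of the scale change.
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(sampled, None)
        k2, d2 = sift.detectAndCompute(enlarged, None)
        matches = cv2.BFMatcher(cv2.NORM_L2).match(d1, d2)
        best = sorted(matches, key=lambda m: m.distance)[:20]
        pts = np.float32([k2[m.trainIdx].pt for m in best])
        return pts.mean(axis=0)          # centroid of the matched keypoints
    # Template branch: resize the patch by the magnification, then slide it.
    resized = cv2.resize(sampled, None, fx=magnification, fy=magnification)
    result = cv2.matchTemplate(enlarged, resized, cv2.TM_CCOEFF_NORMED)
    _, _, _, top_left = cv2.minMaxLoc(result)
    h, w = resized.shape[:2]
    return np.float32([top_left[0] + w / 2.0, top_left[1] + h / 2.0])

# The aligning step would then pan by (x_s - x_m) and tilt by (y_s - y_m).
```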
Hereinafter, a method of image stabilization if the representative feature portion presence determining unit 20 determines that an image does not include a feature portion will be described.
If the representative feature portion presence determining unit 20 determines that an image does not include a representative feature portion and a plurality of feature points are instead scattered throughout the image, a general SIFT matching method or template matching method may not be used. In this case, the optical axis error of the zoom camera 1 is extracted and compensated for by calculating an optical flow between images. According to an exemplary embodiment, the representative feature portion presence determining unit 20 may determine that an image does not include a representative feature portion if the image does not have any portion with an illuminance level higher than a predetermined level.
In detail, the optical flow calculating unit 33 calculates an optical flow between images while the zoom camera 1 enlarges the image. The optical flow calculating unit 33 may set the image before enlargement as an image A_0 and the image after a predetermined period of time elapses (after N frames) as an image A_1, where N is an integer greater than or equal to 1. Calculating an optical flow for every frame increases the amount of operations and thus decreases processing efficiency. Accordingly, in the current embodiment, the amount of operations may be reduced, without missing a feature portion, by calculating an optical flow only at intervals of a predetermined number of frames.
The optical flow calculating unit 33 may store the optical flow between the image A_0 and the image A_1 at the time of enlargement as OF_1. By repeating this method, the optical flow calculating unit 33 may obtain images A_0, A_1, A_2, . . . , A_k at intervals of N frames and obtain the optical flows OF_1, OF_2, . . . , OF_k between consecutive images, where k is an integer greater than or equal to 1.
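A minimal sketch of this sampled flow calculation, assuming frames is a list of grayscale frames captured while zooming and that OpenCV's pyramidal Lucas-Kanade tracker stands in for whatever flow algorithm an implementation would use (the patent does not fix one):

```python
import cv2

def flows_during_zoom(frames, n=5):
    """Track sparse features between images N frames apart during a zoom
    (A_0, A_1, ..., A_k) and return the flows OF_1 .. OF_k as pairs of
    (feature positions, flow vectors)."""
    keyframes = frames[::n]                        # A_0, A_1, ..., A_k
    flows = []
    for a_prev, a_next in zip(keyframes, keyframes[1:]):
        p0 = cv2.goodFeaturesToTrack(a_prev, maxCorners=200,
                                     qualityLevel=0.01, minDistance=7)
        p1, status, _ = cv2.calcOpticalFlowPyrLK(a_prev, a_next, p0, None)
        ok = status.ravel() == 1                   # keep well-tracked features
        flows.append((p0.reshape(-1, 2)[ok], (p1 - p0).reshape(-1, 2)[ok]))
    return flows
```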
The error information extracting unit 34 computes, from the obtained optical flow information, the distance of each optical flow feature portion from the center of the image, and then forms a two-dimensional (2D) histogram over L distances and M directions, where L and M are each integers greater than or equal to 1. From the formed 2D histogram, the error information extracting unit 34 extracts a minimum value and a maximum value of the optical flow for each direction at each distance. Since the size of an optical flow is proportional to its distance from the center of the image, owing to the lens curvature of the camera, obtaining the direction and size of the optical flow separately at each distance yields reliable information; this is why the distances of the optical flow feature portions from the center of the image are obtained first.
FIG. 6 is a view showing size information of an optical flow according to distances from a center of an image when an optical axis is moved to the right upper side according to an exemplary embodiment.
FIG. 6 shows a case where the optical axis of the zoom camera 1 has moved toward the upper-right side, in a direction (a). If the optical axis of the zoom camera 1 were not moved, the optical flows on a circle C1 spaced apart from the center C of the image by a distance d would all have the same size and point radially outward from the circle.
However, referring to FIG. 6, among the optical flows on the circle C1 spaced apart from the center C of the image by the distance d, the optical flow in the direction (a) is smaller than the optical flow in the direction (b). In other words, the optical flow in the direction (a) on the circle C1 and the optical flow in the direction (b) have different sizes, unlike the case where no optical axis error occurs. Similarly, on circles C2 and C3 spaced apart from the center C of the image at different distances, the optical flow in the direction (a) is smaller than that in the direction (b), because the center C of the image has moved in the direction (a) relative to the image before its enlargement.
Also, referring to FIG. 6, the size of the optical flow increases with the distance from the center C of the image, in the order of the circles C1, C2, and C3, because an area farther from the center C moves farther when the image is enlarged by the zoom camera 1.
Referring back to FIG. 2, the error information extracting unit 34 may form a histogram in M directions with respect to L distances by using the optical flow information obtained by the optical flow calculating unit 33. The error information extracting unit 34 may set the L distances at an appropriate level within a range in which the amount of operations is not excessively increased; for example, L may be three, as in FIG. 6. Also, as described above with reference to FIG. 4, the M directions may be set to eight. Dividing the full angular range into eight bins, as shown in FIG. 4, approximates each angle, so the value of an adjacent direction may also be considered to reduce the approximation error.
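Assuming per-feature positions and flow vectors such as those returned by the flows_during_zoom sketch above, the L x M histogram might be accumulated as follows; binning distances into equal bands up to the largest observed radius is an illustrative choice, not specified in the patent:

```python
import numpy as np

def flow_histogram(positions, vectors, center, L=3, M=8):
    """Bucket flow magnitudes into an L x M table indexed by distance
    band from the image center (L = 3 bands) and quantized direction
    (M = 8 directions); each cell keeps the raw magnitudes so that the
    minimum, maximum, and average can be read off later."""
    positions = np.asarray(positions, dtype=float)
    vectors = np.asarray(vectors, dtype=float)
    dist = np.linalg.norm(positions - np.asarray(center, dtype=float), axis=1)
    edges = np.linspace(0.0, dist.max() + 1e-6, L + 1)     # equal distance bands
    band = np.clip(np.digitize(dist, edges) - 1, 0, L - 1)
    angle = np.degrees(np.arctan2(vectors[:, 1], vectors[:, 0])) % 360.0
    direction = ((angle + 180.0 / M) // (360.0 / M)).astype(int) % M
    magnitude = np.linalg.norm(vectors, axis=1)
    table = [[[] for _ in range(M)] for _ in range(L)]
    for b, d, m in zip(band, direction, magnitude):
        table[b][d].append(float(m))
    return table
```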
The error information extracting unit 34 extracts a minimum value and a maximum value of the optical flow for each direction at each distance based on the formed histogram. Since the size of the optical flow is proportional to the distance from the center of the image, owing to the characteristics of an optical axis error of the zoom camera 1, the directions and sizes of the optical flows differ from one another, which is why the minimum and maximum values are extracted.
Referring back to FIG. 6, the sizes of the optical flows at the same distance from the center C (for example, on the circle C1) differ from each other, and the sizes also differ among the circles C1, C2, and C3 at different distances from the center C. That is, the size of the optical flow increases with the distance from the center C of the image, and the optical flow in the direction (a), which coincides with the direction of the optical axis error, is smaller than the optical flow in the direction (b). Accordingly, reliable error compensation may be performed by obtaining optical axis error data with respect to both distance and direction.
The error information extracting unit 34 then determines whether the directions of the minimum and maximum values are symmetrical to each other. As shown in FIG. 6, the optical flow is small in the direction (a), in which the optical axis has moved, and large in the opposite direction (b), so when the optical axis has moved in one direction the minimum and maximum values lie in symmetrical directions. If the directions of the minimum and maximum values are not symmetrical to each other, the optical axis has not moved in any single direction, which makes it difficult to extract optical axis error information from the change in the optical flow.
If the directions of the minimum and maximum values pass this symmetry test, the error information extracting unit 34 obtains the difference between the minimum value and the average value of the optical flows at each of the L distances, adds up the differences obtained at the L distances, and divides the sum by L to extract the error information of the optical axis. That is, the error information extracting unit 34 may obtain the optical axis error information by using the calculated average values. Referring back to FIG. 6, for example, the optical axis error information at the distance of C1 may be obtained by calculating the difference between the minimum value and the average value of the optical flows on the circle C1.
Finally, the optical axis aligning unit 40 compensates for an optical axis error by using the optical axis error information obtained by the matching unit 32 and the error information extracting unit 34. Here, the compensation for the optical axis error may be performed through image processing by converting a degree to which an optical axis is moved into a coordinate instead of using a physical method.
FIG. 7 is a flowchart for describing the operation of an image stabilization system according to an exemplary embodiment.
Referring to FIG. 7, the image stabilization system obtains an input image having a specific zoom value by using the zoom camera 1 (operation S1).
Next, the representative feature portion presence determining unit 20 determines whether the obtained input image includes a representative feature portion (operation S2). As described above, the representative feature portion may refer to a portion of the input image set as a representative value to compensate for the optical axis error from among a plurality of distinctive feature portions during image processing.
Next, if the representative feature portion presence determining unit 20 determines that the input image includes the representative feature portion, the representative feature portion of the input image is sampled to generate a sampled image, and a central coordinate of a sampled area is obtained and set as (x_s, y_s) (operation F1).
Next, the input image is enlarged by using the zoom camera 1, and image information of an enlarged area of the input image is obtained (operation F2).
Then, it is determined whether the input image is enlarged at a magnification equal to or less than 2.5 (operation F3).
If the input image is enlarged at a magnification equal to or less than 2.5, the sampled image and the enlarged input image are matched with each other by SIFT matching (operation F4). On the contrary, if the input image is enlarged at a magnification greater than 2.5, the sampled image may be resized by the magnification at which the input image is enlarged, and the sampled image and the enlarged input image are then matched with each other by template matching (operation F5).
Next, a central coordinate (x_m, y_m) of a matched area in the enlarged image is obtained (operation F6).
Finally, the optical axis aligning unit 40 aligns the moved optical axis by moving it by (x_s − x_m) in a pan direction and by (y_s − y_m) in a tilt direction to compensate for the optical axis error.
On the other hand, if the representative feature portion presence determining unit 20 determines that the input image does not include a representative feature portion, the optical flow calculating unit 33 obtains an image right before enlarging the input image (operation O1).
Next, the optical flow calculating unit 33 enlarges an image by using the zoom camera 1 and calculates an optical flow of the image during the enlargement of the image (operation O2).
Next, the error information extracting unit 34 extracts optical axis error information by using optical flow information (operation O3). As described above, the error information extracting unit 34 may obtain the optical axis error information based on the optical flow information in a movement direction of the optical flow and an opposite direction thereto by using a fact that an optical axis is moved only in one direction.
Finally, the optical axis aligning unit 40 compensates for the optical axis error by moving the pan and tilt values by the size of the optical axis error information, in the direction opposite to the direction of the optical axis error information (operation O4).
FIG. 8 is a flowchart for describing a method of determining whether an image includes a representative feature portion, according to an exemplary embodiment.
Referring to FIG. 8, the representative feature portion presence determining unit 20 extracts all feature portions from an image before its enlargement (operation S21).
Next, a feature portion that may be a representative feature portion in the image is set as a candidate feature portion (operation S22), and the image is enlarged (operation S23).
Next, it is determined if the enlarged image includes a candidate feature portion (operation S24). If the enlarged image does not include a candidate feature portion, the candidate feature portion is reset in the enlarged image (operation S25).
Otherwise, if the enlarged image includes a candidate feature portion, it is determined whether the image needs to be additionally enlarged (operation S26). If the image needs to be additionally enlarged, the method returns to operation S23, and the operation is repeated.
Otherwise, if the image does not need to be additionally enlarged, the representative feature portion presence determining unit 20 sets the current candidate feature portion as the representative feature portion (operation S27). Although not shown in FIG. 8, if the representative feature portion presence determining unit 20 cannot set a representative feature portion through the process shown in FIG. 8, it may determine that the input image does not include a representative feature portion.
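A sketch of this FIG. 8 loop, with capture_at_zoom and extract_features as placeholder callables for camera and vision routines the patent leaves unspecified, and with a feature "strength" attribute as a hypothetical ranking criterion:

```python
def find_representative_feature(capture_at_zoom, extract_features,
                                max_zoom, step=1.0):
    """Pick a candidate among the features of the wide view, re-pick
    whenever zooming pushes it out of frame, and promote the surviving
    candidate to the representative feature (mirrors FIG. 8)."""
    frame = capture_at_zoom(1.0)                   # image before enlargement
    features = extract_features(frame)             # operation S21
    if not features:
        return None
    candidate = max(features, key=lambda f: f.strength)        # S22
    zoom = 1.0
    while zoom < max_zoom:                         # S26: more zoom needed?
        zoom += step                               # S23: enlarge the image
        visible = extract_features(capture_at_zoom(zoom))
        if candidate not in visible:               # S24: candidate in frame?
            if not visible:
                return None                        # no representative feature
            candidate = max(visible, key=lambda f: f.strength)  # S25: reset
    return candidate                               # S27: promote candidate
```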
FIG. 9 is a flowchart for describing a process of calculating an optical flow by the optical flow calculating unit 33, according to an exemplary embodiment.
Referring to FIG. 9, the optical flow calculating unit 33 obtains an image right before its enlargement and stores the image as an image A_0 (operation O21).
Next, the optical flow calculating unit 33 calculates optical flows between the image A_0 and an image A_1 after N frames, and stores the optical flows as OF_1 (operation O22).
Next, the optical flow calculating unit 33 calculates optical flows between the image A_1 and an image A_2 captured N frames after the frame of the image A_1, and stores the optical flows as OF_2 (operation O23).
Using the above-described method, the optical flow calculating unit 33 calculates optical flows between an image A_k−1 and an image A_k at an interval of N frames, and stores the optical flows as OF_k (operation O24). For efficiency of operation, the optical flow calculating unit 33 calculates an optical flow at every (N+1)-th frame instead of calculating an optical flow for every frame.
FIG. 10 is a flowchart for describing a process of extracting an optical axis error by using optical flow information by the error information extracting unit 34, according to an exemplary embodiment.
Referring to FIG. 10, the error information extracting unit 34 calculates distances of stored optical flow feature portions from a center of an image (operation O31).
Next, the error information extracting unit 34 forms an optical flow histogram in M directions with respect to the L distances from the center of the image (operation O32). As described above, referring to FIGS. 4 and 6, the error information extracting unit 34 may form the optical flow histogram in eight directions with respect to three distances from the center of the image.
Next, the error information extracting unit 34 extracts a maximum value and a minimum value in every direction with respect to the L distances (operation O33). As described above, when an optical axis is moved in any one direction, an optical flow in the direction in which the optical axis is moved has a minimum value and an optical flow in an opposite direction to the direction in which the optical axis is moved has a maximum value. The error information extracting unit 34 may calculate optical axis error information by using a difference between the maximum value and the minimum value.
Next, the error information extracting unit 34 determines whether directions of the minimum and maximum values are symmetrical to each other at each of the L distances (operation O34). If the optical axis is moved in any one direction, the directions of the minimum and maximum values are symmetrical to each other.
If the directions of the minimum and maximum values are symmetrical to each other at each of the L distances, the error information extracting unit 34 extracts an average value of optical flows at each of the L distances and stores the average values as (Avg_1, . . . , Avg_L) (operation O35).
On the other hand, if the directions of the minimum and maximum values are not symmetrical to each other, the optical axis is moved in various directions, and thus, the error information extracting unit 34 may not use feature portion information of the corresponding distances (operation O36).
Finally, the error information extracting unit 34 may extract the optical axis error information by obtaining the difference between the minimum value and the average value of the optical flows at each of the L distances, adding up the differences obtained at the L distances, and dividing the sum of the differences by L. In other words, the optical axis error information may be obtained by the equation Σ_{a=1..L} (Avg_a − Min_a) / L (operation O37).
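Continuing the flow_histogram sketch above, the symmetry test and the averaging formula might be applied as follows; treating Avg_a as the mean flow magnitude over all directions at distance a is one reading of the text, and skipping non-symmetrical distances follows operation O36:

```python
import numpy as np

def extract_axis_error(table, M=8):
    """Apply the symmetry test (O34) and the averaging formula
    sum_a(Avg_a - Min_a) / L (O37) to the L x M histogram built above.
    Returns (direction index of the error, error magnitude) or None."""
    diffs, directions = [], []
    for row in table:                              # one row per distance band
        means = [np.mean(v) if v else 0.0 for v in row]
        d_min = int(np.argmin(means))              # smallest flow: error direction
        d_max = int(np.argmax(means))              # largest flow: opposite direction
        if (d_min + M // 2) % M != d_max:          # O34/O36: min and max must be
            continue                               # symmetrical, else skip distance
        diffs.append(float(np.mean(means)) - means[d_min])  # Avg_a - Min_a
        directions.append(d_min)
    if not diffs:
        return None                                # axis not moved in one direction
    # The patent divides by L; when every distance passes, len(diffs) == L.
    return directions[0], sum(diffs) / len(diffs)
```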
According to the above exemplary embodiments, an image stabilization system may effectively compensate for an optical axis error through image processing.
Also, the optical axis error may be compensated for through image processing, without physically changing the settings of the zoom camera.
While the inventive concept has been particularly shown and described with reference to exemplary embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the inventive concept as defined by the following claims.

Claims (10)

What is claimed is:
1. A method of image stabilization, the method comprising:
determining whether an input image comprises a representative feature portion;
sampling the representative feature portion to generate a sampled image if it is determined that the input image comprises the representative feature portion;
enlarging the input image optically by a magnification factor and matching the enlarged input image with the sampled image or a resized sample image to obtain central coordinates of the enlarged input image, wherein the resized sample image is obtained by resizing the sampled image by the magnification factor; and
aligning an optical axis by calculating a difference between the central coordinates of the enlarged input image and the central coordinates of the sampled image.
2. The method of claim 1, wherein the representative feature portion has a luminance level higher than a predetermined level or comprises a predetermined shape, and is selected from among a plurality of feature portions in the input image.
3. The method of claim 1, wherein the determining whether the input image comprises a representative feature portion comprises:
extracting a plurality of feature portions from the input image;
selecting a candidate feature portion having a predetermined feature from among the extracted plurality of feature portions; and
determining that the input image comprises the representative feature portion if the enlarged input image comprises the candidate feature portion, and determining that the input image does not comprise the representative feature portion if the enlarged input image does not comprise the candidate feature portion.
4. The method of claim 1, wherein the matching the enlarged input image with the sample image comprises performing scale invariant feature transform (SIFT) matching if the magnification factor is equal to or less than 2.5.
5. The method of claim 1, wherein the matching the enlarged input image with the resized sample image comprises performing template matching if the magnification factor is greater than 2.5.
6. An image stabilization system comprising:
a zoom camera;
an image preprocessor that converts an analog input signal input from the zoom camera into a digital signal;
wherein the system is further configured to:
determine whether an input image comprises a representative feature portion;
sample the representative feature portion to generate a sample image if it is determined that the input image comprises the representative feature portion;
enlarge the input image optically by a magnification factor and match the enlarged input image with the sample image or a resized sample image to obtain central coordinates of the enlarged input image, where said resized sample image is obtained by resizing the sample image by said magnification factor; and
align an optical axis by calculating a difference between the central coordinates of the enlarged input image and the central coordinates of the sample image.
7. The image stabilization system of claim 6, wherein the representative feature portion has a luminance level higher than a predetermined level or comprises a predetermined shape, and is selected from among a plurality of feature portions in the input image.
8. The image stabilization system of claim 6, wherein the system is further configured to:
determine that the input image comprises the representative feature portion if the enlarged input image comprises a candidate feature portion, and determine that the input image does not comprise the representative feature portion if the enlarged input image does not comprise the candidate feature portion, wherein the candidate feature portion has a predetermined feature and is selected from a plurality of feature portions extracted from the input image.
9. The image stabilization system of claim 6, wherein the enlarged input image is matched with the sample image by scale invariant feature transform (SIFT) matching if the magnification factor is equal to or less than 2.5.
10. The image stabilization system of claim 6, wherein the enlarged input image is matched with the resized sample image by template matching if the magnification factor is greater than 2.5.
US13/729,302 2011-12-29 2012-12-28 Method of system for image stabilization through image processing, and zoom camera including image stabilization function Active 2033-05-31 US9031355B2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2011-0146112 2011-12-29
KR1020110146112A KR101324250B1 (en) 2011-12-29 2011-12-29 optical axis error compensation method using image processing, the method of the same, and the zoom camera provided for the compensation function of the optical axis error

Publications (2)

Publication Number Publication Date
US20130170770A1 US20130170770A1 (en) 2013-07-04
US9031355B2 true US9031355B2 (en) 2015-05-12

Family

ID=48679390

Family Applications (1)

Application Number Title Priority Date Filing Date
US13/729,302 Active 2033-05-31 US9031355B2 (en) 2011-12-29 2012-12-28 Method of system for image stabilization through image processing, and zoom camera including image stabilization function

Country Status (3)

Country Link
US (1) US9031355B2 (en)
KR (1) KR101324250B1 (en)
CN (1) CN103188442B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101323609B1 (en) * 2012-08-21 2013-11-01 주식회사 엠씨넥스 Apparatus for aligning optical axis of a camera module
KR101932547B1 (en) * 2014-10-23 2018-12-27 한화테크윈 주식회사 Camera system and Method of image registration thereof
KR102352681B1 (en) 2015-07-27 2022-01-18 삼성전자주식회사 Method and electronic apparatus for stabilizing video
KR102516175B1 (en) 2017-12-21 2023-03-30 한화비전 주식회사 Method and apparatus for correcting optical axis of zoom lens, and computer program for executing the method
KR20210118622A (en) 2020-03-23 2021-10-01 삼성전자주식회사 Method for Stabilization at high magnification and Electronic Device thereof
KR102171740B1 (en) * 2020-07-28 2020-10-29 주식회사 원우이엔지 Video surveillance device with AF zoom lens module capable of optical axis compensation
CN113792708B (en) * 2021-11-10 2022-03-18 湖南高至科技有限公司 ARM-based remote target clear imaging system and method

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020136459A1 (en) * 2001-02-01 2002-09-26 Kazuyuki Imagawa Image processing method and apparatus
US6720997B1 (en) * 1997-12-26 2004-04-13 Minolta Co., Ltd. Image generating apparatus
KR100541618B1 (en) 2003-12-29 2006-01-10 전자부품연구원 Apparatus and method for controlling a monitoring camera
US20060133785A1 (en) 2004-12-21 2006-06-22 Byoung-Chul Ko Apparatus and method for distinguishing between camera movement and object movement and extracting object in a video surveillance system
US20080310730A1 (en) * 2007-06-06 2008-12-18 Makoto Hayasaki Image processing apparatus, image forming apparatus, image processing system, and image processing method
US20090225174A1 (en) * 2008-03-04 2009-09-10 Sony Corporation Image processing apparatus, image processing method, hand shake blur area estimation device, hand shake blur area estimation method, and program
US20100229452A1 (en) 2009-03-12 2010-09-16 Samsung Techwin Co., Ltd. Firearm system having camera unit with adjustable optical axis
US20110169977A1 (en) * 2010-01-12 2011-07-14 Nec Casio Mobile Communications, Ltd. Image quality evaluation device, terminal device, image quality evaluation system, image quality evaluation method and computer-readable recording medium for storing programs
JP4767052B2 (en) 2006-03-22 2011-09-07 ダイハツ工業株式会社 Optical axis deviation detector
KR101076487B1 (en) 2009-06-29 2011-10-24 중앙대학교 산학협력단 Apparatus and method for automatic area enlargement control in ptz camera using sift

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4112819B2 (en) 2000-05-11 2008-07-02 株式会社東芝 Object area information generation apparatus and object area information description program
JP4196256B2 (en) 2002-06-07 2008-12-17 ソニー株式会社 Object contour extraction device
KR100588170B1 (en) * 2003-11-20 2006-06-08 엘지전자 주식회사 Method for setting a privacy masking block
JP2006113738A (en) 2004-10-13 2006-04-27 Matsushita Electric Ind Co Ltd Device and method for detecting object
JP3941822B2 (en) 2005-07-22 2007-07-04 横河電機株式会社 Displacement measuring device
JP2008269396A (en) * 2007-04-23 2008-11-06 Sony Corp Image processor, image processing method, and program
CN101334267B (en) * 2008-07-25 2010-11-24 西安交通大学 Digital image feeler vector coordinate transform calibration and error correction method and its device
KR101341632B1 (en) * 2008-11-05 2013-12-16 삼성테크윈 주식회사 Optical axis error compensation system of the zoom camera, the method of the same
KR20100082147A (en) * 2009-01-08 2010-07-16 삼성전자주식회사 Method for enlarging and changing captured image, and phographed apparatus using the same
JP4752918B2 (en) * 2009-01-16 2011-08-17 カシオ計算機株式会社 Image processing apparatus, image collation method, and program

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6720997B1 (en) * 1997-12-26 2004-04-13 Minolta Co., Ltd. Image generating apparatus
US20020136459A1 (en) * 2001-02-01 2002-09-26 Kazuyuki Imagawa Image processing method and apparatus
KR100541618B1 (en) 2003-12-29 2006-01-10 전자부품연구원 Apparatus and method for controlling a monitoring camera
US20060133785A1 (en) 2004-12-21 2006-06-22 Byoung-Chul Ko Apparatus and method for distinguishing between camera movement and object movement and extracting object in a video surveillance system
JP2006180479A (en) 2004-12-21 2006-07-06 Samsung Electronics Co Ltd Apparatus and method for distinguishing between camera movement and object movement and extracting object in video monitoring system
JP4767052B2 (en) 2006-03-22 2011-09-07 ダイハツ工業株式会社 Optical axis deviation detector
US20080310730A1 (en) * 2007-06-06 2008-12-18 Makoto Hayasaki Image processing apparatus, image forming apparatus, image processing system, and image processing method
US20090225174A1 (en) * 2008-03-04 2009-09-10 Sony Corporation Image processing apparatus, image processing method, hand shake blur area estimation device, hand shake blur area estimation method, and program
US20100229452A1 (en) 2009-03-12 2010-09-16 Samsung Techwin Co., Ltd. Firearm system having camera unit with adjustable optical axis
KR20100102959A (en) 2009-03-12 2010-09-27 삼성테크윈 주식회사 Firearm system having camera unit with adjustable optical axis
KR101076487B1 (en) 2009-06-29 2011-10-24 중앙대학교 산학협력단 Apparatus and method for automatic area enlargement control in ptz camera using sift
US20110169977A1 (en) * 2010-01-12 2011-07-14 Nec Casio Mobile Communications, Ltd. Image quality evaluation device, terminal device, image quality evaluation system, image quality evaluation method and computer-readable recording medium for storing programs

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Amanatiadis et al. ("An integrated architecture for adaptive image stabilization in zooming operation," IEEE Trans. Consumer Electronics, vol. 54, No. 2, 2008, pp. 600-608). *
Kim et al. ("Automatic radial distortion correction in zoom lens video camera," J. Electronic Imaging, 19(4), 2010). *
Yoneyama et al. ("Lens distortion correction for digital image correlation by measuring rigid body displacement," Opt. Eng. 45(2), 2006). *

Also Published As

Publication number Publication date
KR20130077414A (en) 2013-07-09
CN103188442B (en) 2018-02-06
US20130170770A1 (en) 2013-07-04
CN103188442A (en) 2013-07-03
KR101324250B1 (en) 2013-11-01

Similar Documents

Publication Publication Date Title
US9031355B2 (en) Method of system for image stabilization through image processing, and zoom camera including image stabilization function
US9019426B2 (en) Method of generating image data by an image device including a plurality of lenses and apparatus for generating image data
US9769443B2 (en) Camera-assisted two dimensional keystone correction
US9325899B1 (en) Image capturing device and digital zooming method thereof
US9466119B2 (en) Method and apparatus for detecting posture of surveillance camera
KR102225617B1 (en) Method of setting algorithm for image registration
JP2010041417A (en) Image processing unit, image processing method, image processing program, and imaging apparatus
EP3093822B1 (en) Displaying a target object imaged in a moving picture
JP6494418B2 (en) Image analysis apparatus, image analysis method, and program
Wang et al. Panoramic image mosaic based on SURF algorithm using OpenCV
JP4198536B2 (en) Object photographing apparatus, object photographing method and object photographing program
US10783646B2 (en) Method for detecting motion in a video sequence
KR101705330B1 (en) Keypoints Selection method to Find the Viewing Angle of Objects in a Stereo Camera Image
JP2019021189A (en) Object detection device
KR101076487B1 (en) Apparatus and method for automatic area enlargement control in ptz camera using sift
KR101020921B1 (en) Controlling Method For Rotate Type Camera
RU2647645C1 (en) Method of eliminating seams when creating panoramic images from video stream of frames in real-time
CN106709942B (en) Panorama image mismatching elimination method based on characteristic azimuth angle
JP5132705B2 (en) Image processing device
De Villiers Real-time photogrammetric stitching of high resolution video on COTS hardware
US10430971B2 (en) Parallax calculating apparatus
JP6579764B2 (en) Image processing apparatus, image processing method, and program
KR101756274B1 (en) Monitoring camera and operating method thereof
KR101636481B1 (en) Method And Apparatus for Generating Compound View Image
KR101873257B1 (en) Apparatus for Controlling Camera and Driving Method Thereof

Legal Events

Date Code Title Description
AS Assignment

Owner name: SAMSUNG TECHWIN CO., LTD., KOREA, REPUBLIC OF

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CHON, JE-YOUL;REEL/FRAME:029539/0116

Effective date: 20121221

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: HANWHA TECHWIN CO., LTD., KOREA, DEMOCRATIC PEOPLE'S REPUBLIC OF

Free format text: CHANGE OF NAME;ASSIGNOR:SAMSUNG TECHWIN CO., LTD.;REEL/FRAME:036714/0757

Effective date: 20150629

AS Assignment

Owner name: HANWHA TECHWIN CO., LTD., KOREA, REPUBLIC OF

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE RECEIVING PARTY ADDRESS PREVIOUSLY RECORDED AT REEL: 036714 FRAME: 0757. ASSIGNOR(S) HEREBY CONFIRMS THE CHANGE OF NAME;ASSIGNOR:SAMSUNG TECHWIN CO., LTD.;REEL/FRAME:037072/0008

Effective date: 20150629

AS Assignment

Owner name: HANWHA AEROSPACE CO., LTD., KOREA, REPUBLIC OF

Free format text: CHANGE OF NAME;ASSIGNOR:HANWHA TECHWIN CO., LTD;REEL/FRAME:046927/0019

Effective date: 20180401

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

AS Assignment

Owner name: HANWHA AEROSPACE CO., LTD., KOREA, REPUBLIC OF

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE APPLICATION NUMBER 10/853,669. IN ADDITION PLEASE SEE EXHIBIT A PREVIOUSLY RECORDED ON REEL 046927 FRAME 0019. ASSIGNOR(S) HEREBY CONFIRMS THE CHANGE OF NAME;ASSIGNOR:HANWHA TECHWIN CO., LTD.;REEL/FRAME:048496/0596

Effective date: 20180401

AS Assignment

Owner name: HANWHA TECHWIN CO., LTD., KOREA, REPUBLIC OF

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HANWHA AEROSPACE CO., LTD.;REEL/FRAME:049013/0723

Effective date: 20190417

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8

AS Assignment

Owner name: HANWHA VISION CO., LTD., KOREA, REPUBLIC OF

Free format text: CHANGE OF NAME;ASSIGNOR:HANWHA TECHWIN CO., LTD.;REEL/FRAME:064549/0075

Effective date: 20230228