EP2696573A2 - Method for generating a panoramic image, user terminal device, and computer-readable recording medium - Google Patents


Info

Publication number
EP2696573A2
Authority
EP
European Patent Office
Prior art keywords
images
adjusted
image
resolutions
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP11859264.1A
Other languages
German (de)
French (fr)
Other versions
EP2696573A4 (en)
Inventor
Bong Cheol Park
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Olaworks Inc
Original Assignee
Olaworks Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Olaworks Inc filed Critical Olaworks Inc
Publication of EP2696573A2 (en)
Publication of EP2696573A4 (en)

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4038Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/698Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture

Definitions

  • the pre-processing part (120) may perform a function of generating an image for which pre-processing has been performed (hereinafter, 'pre-processed image'), which expresses information on edges (i.e., contours) in the input image whose resolution has been adjusted by the resolution adjusting part (110), wherein the edges are acquired by referring to the tangent vector perpendicular to the gradient vector representing the changes in intensity or color in the adjusted image.
  • the pre-processing part (120) in accordance with one example embodiment of the present invention may calculate, for each pixel of the two-dimensional adjusted image, the gradient vector of intensity or color, which is a scalar value at each pixel.
  • the direction of the gradient vector may be determined in the direction of maximum changes in intensity or color and the magnitude of the gradient vector may be decided to be the rate of change in the direction of the maximum changes in intensity or color.
  • the magnitude of the gradient vector is large in parts, such as the contours of an object, where the changes in intensity or color are great, and small in parts where the changes in intensity or color are slight.
  • the edges included in the adjusted image may be detected by referring to the gradient vector.
  • the Sobel operator may be used to calculate the gradient vector in the adjusted image. However, the invention is not limited to this; other operators for computing the gradient vector to detect edges in the adjusted image may also be applied.
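As a sketch of the gradient computation described above, the following applies Sobel kernels to a 2-D intensity image with NumPy. The function name `sobel_gradient` and the zero-padded border handling are illustrative choices, not taken from the patent.

```python
import numpy as np

def sobel_gradient(img):
    """Compute the per-pixel gradient (gx, gy) of a 2-D intensity
    image with the Sobel operator. Border pixels are left at zero
    for simplicity; this is an illustrative sketch."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            gx[y, x] = np.sum(patch * kx)
            gy[y, x] = np.sum(patch * ky)
    # Magnitude is large near edges (big intensity change), small in
    # flat regions, matching the behavior described in the text.
    mag = np.hypot(gx, gy)
    return gx, gy, mag
```

On a vertical step edge the horizontal component responds strongly at the boundary and vanishes in flat regions, which is exactly the property the matching stage relies on.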
  • Figure 2 is a drawing that visually illustrates a result of calculating gradient vector components in an image in accordance with one example embodiment of the present invention.
  • the direction and the magnitude of the gradient vector are expressed by blue lines. A blue line appears long in a part where the change in intensity or color is great, while it is short or absent in a part where the change is small.
  • the pre-processing part (120) in accordance with one example embodiment of the present invention may perform a function of calculating a tangent vector by rotating the gradient vector calculated for the respective pixels of the two-dimensional adjusted image, for example, by 90 degrees counterclockwise. Since the calculated tangent vector is parallel to virtual contour lines drawn based on the scalar value of intensity or color, the visually expressed tangent vector may present the same shapes as the edges (contours, etc.) of the objects included in the adjusted image. Accordingly, the pre-processed image which visually illustrates the tangent vector in the adjusted image may itself play the role of an edge image by emphasizing and presenting only the edges included in the adjusted image.
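The 90-degree counterclockwise rotation above can be sketched as follows: each gradient vector (gx, gy) becomes the tangent vector (-gy, gx), and visualizing the tangent magnitude yields the edge (pre-processed) image. The function names and the normalization to [0, 1] are assumptions for illustration.

```python
import numpy as np

def tangent_field(gx, gy):
    """Rotate each gradient vector (gx, gy) by 90 degrees
    counterclockwise, giving the tangent vector (-gy, gx), which is
    perpendicular to the gradient and parallel to the local contour."""
    return -gy, gx

def edge_image(gx, gy):
    """Pre-processed (edge) image: pixel brightness proportional to
    the tangent-vector magnitude, normalized to [0, 1]. Rotation
    preserves length, so this equals the gradient magnitude."""
    tx, ty = tangent_field(gx, gy)
    mag = np.hypot(tx, ty)
    peak = mag.max()
    return mag / peak if peak > 0 else mag
```

Because rotation preserves magnitude, bright pixels in the result mark exactly the strong-edge locations, while the vector directions now follow the contours rather than cross them.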
  • Figure 3 is a diagram that visually shows a result of calculating the tangent vector in an image in accordance with one example embodiment of the present invention.
  • Figure 4 and Figure 5 are drawings that illustratively present an original image and its pre-processed image respectively in accordance with one example embodiment of the present invention.
  • the pre-processed images in Figure 4(b) and Figure 5(b) are the images whose pixels are expressed brightly if the magnitude of the tangent vector is large.
  • using, as the input to the matching process explained below, a pre-processed image acquired by first reducing the resolution of the original input image to a reasonable level and then visually expressing the edges of the adjusted image with the tangent vector makes it possible to improve the accuracy of image matching and to increase its operational speed at the same time.
  • the matching part (130) may perform image matching operations between adjacent pre-processed images by using the pre-processed images which are generated by the pre-processing part (120) and perform a function of determining an optimal overlapped position between the original input images corresponding to the pre-processed images by referring to the results of the matching.
  • the matching part (130) in accordance with one example embodiment of the present invention may perform the image matching operations between the pre-processed images at the aforementioned preset overlapped region first.
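A minimal sketch of matching restricted to the preset overlapped region might look like the following: the left strip of one edge image is slid across the other and the offset with the highest similarity is kept. The function name `match_in_overlap`, the 1-minus-mean-absolute-difference similarity score, and the horizontal-only search are all assumptions; the patent does not fix a particular matching metric.

```python
import numpy as np

def match_in_overlap(edge_a, edge_b, overlap_frac=0.10):
    """Slide the left strip of edge_b (the preset overlapped region,
    e.g. 10% of the width) across edge_a and return the horizontal
    offset with the highest similarity score."""
    h, w = edge_a.shape
    strip_w = max(1, int(w * overlap_frac))   # preset overlapped region
    strip_b = edge_b[:, :strip_w]
    best_offset, best_score = 0, -1.0
    for offset in range(w - strip_w + 1):
        strip_a = edge_a[:, offset:offset + strip_w]
        # Similarity: 1 - mean absolute difference (an assumed metric).
        score = 1.0 - np.mean(np.abs(strip_a - strip_b))
        if score > best_score:
            best_offset, best_score = offset, score
    return best_offset, best_score
```

Because the search is confined to a narrow strip of the already-simplified edge images, far fewer comparisons are needed than matching the full-resolution originals.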
  • the synthesizing and blending part (140) may synthesize the adjacent input images by referring to the synthesis position determined by the matching part (130) and perform a blending process to make the connected portion in the synthesized input images look natural.
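One common way to realize the blending step above is linear (alpha) blending across the seam; the patent does not prescribe a specific blending method, so the following `linear_blend` sketch is an assumption for illustration.

```python
import numpy as np

def linear_blend(img_a, img_b, overlap_w):
    """Blend two horizontally adjacent grayscale images whose last /
    first `overlap_w` columns overlap. Weights ramp linearly from
    image A to image B across the seam so the join looks natural."""
    h, wa = img_a.shape
    _, wb = img_b.shape
    out = np.zeros((h, wa + wb - overlap_w))
    out[:, :wa - overlap_w] = img_a[:, :wa - overlap_w]   # pure A
    out[:, wa:] = img_b[:, overlap_w:]                    # pure B
    # Overlapped columns: alpha goes 0 -> 1 (pure A -> pure B).
    alpha = np.linspace(0.0, 1.0, overlap_w)
    seam_a = img_a[:, wa - overlap_w:]
    seam_b = img_b[:, :overlap_w]
    out[:, wa - overlap_w:wa] = (1 - alpha) * seam_a + alpha * seam_b
    return out
```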
  • Figure 6 is a drawing that illustratively presents results of generating respective panoramic images by synthesizing two adjacent input images in accordance with one example embodiment of the present invention.
  • the panoramic images illustrated in Figure 6 were acquired as a result of synthesizing two input images of a traditionally styled building taken from different angles.
  • Figure 6(a) is a drawing representing a result of generating a panoramic image without going through the process of adjusting the resolution and the process of pre-processing.
  • Figure 6(b) is a drawing showing a result of generating a panoramic image through the process of adjusting the resolution and then the process of pre-processing in accordance with one example embodiment of the present invention.
  • the panoramic image in Figure 6(b) in accordance with the present invention may be confirmed to be generated more accurately and more naturally than the existing panoramic image in Figure 6(a); in particular, noticeable differences may be confirmed in the part of the stairs and the part of the pillars located to the right of the signboard.
  • the communication part (150) in accordance with one example embodiment of the present invention performs the function of allowing the user terminal (100) to communicate with an external device (not illustrated).
  • the control part (160) in accordance with one example embodiment of the present invention performs the function of controlling data flow among the resolution adjusting part (110), the pre-processing part (120), the matching part (130), the synthesizing and blending part (140) and the communication part (150).
  • the control part (160) controls the flow of data from outside or among the components of the user terminal (100) to thereby force the resolution adjusting part (110), the pre-processing part (120), the matching part (130), the synthesizing and blending part (140) and the communication part (150) to perform their unique functions.
  • the examples described above according to the present invention can be implemented in a form of program command that may be executed through a variety of computer components and recorded on computer-readable recording media.
  • the computer-readable media may include program commands, data files, and data structures, solely or in combination.
  • the program commands recorded on the computer-readable recording medium may be specially designed and configured for the present invention or may be known and be usable by a person skilled in the field of computer software.
  • Examples of the computer-readable recording medium include magnetic media such as a hard disk, a floppy disk, and magnetic tape; optical media such as a CD-ROM and a DVD; magneto-optical media such as a floptical disk; and hardware devices, such as a ROM, a RAM, and flash memory, specially configured to store and execute program commands.
  • Program commands include not only machine language code produced by a compiler but also high-level language code that can be executed by a computer through an interpreter, etc.
  • the hardware device may be configured to work as one or more software modules to perform the operations of the present invention, and vice versa.

Abstract

According to one aspect of the present invention, a method for generating a panoramic image includes: (a) generating first and second adjusted images by adjusting the resolutions of first and second input images; (b) generating first and second pre-processed images representing edge information by referring to a tangent vector perpendicular to a gradient vector representing a change in intensity or color with respect to each of the first and second adjusted images; and (c) determining a position at which the first and second input images are combined by referring to a matching result between the first and second pre-processed images.

Description

    Technical Field
  • The present invention relates to a method, a terminal and a computer-readable recording medium for generating a panoramic image. More specifically, the present invention relates to the method, the user terminal and the computer-readable recording medium for performing a resolution adjusting process in which a resolution of an image that is a subject for image matching is reduced step by step by using a pyramidal structure and a pre-processing process in which edges in the image are visually expressed with a tangent vector vertical to the gradient vector representing changes in intensity or color to thereby improve accuracy and operation speed of generating the panoramic image.
  • Background Technology
  • Recently, as digital cameras have become popular and digital processing technologies have developed, a variety of services using an image including complete views viewed from a random point, so-called panoramic image, are being introduced.
  • As an example of a service using panoramic images, a service for supporting users to acquire panoramic images by automatically synthesizing multiple images taken consecutively in the use of portable terminals which have photographic equipment with a relatively narrow angle of view was also introduced.
  • Generally, while panoramic images are created by putting boundaries of multiple consecutive images together and synthesizing them, the quality of the panoramic images may depend on how accurately the boundaries of adjacent images are put together. According to conventional technology for generating panoramic image, a panoramic image is created by synthesizing the original copies of photographed images as they are or synthesizing the original copies of photographed images from which just noise is removed.
  • According to the conventional technology, the contours of important objects such as buildings included in the original image and those of meaningless objects such as dirt, however, may not be divided clearly, which may cause the problem of the synthesis of images becoming less accurate. In addition, since the original image contains many features to be considered when the boundaries of the adjacent images are matched, it may cause a great number of operations to be required to generate the panoramic image. These problems may be more serious in a mobile environment where portable user terminals with relatively poor operational capabilities are used.
  • Therefore, the inventor of the present invention came to invent a technology for effectively generating panoramic images even in a mobile environment by applying a method for adjusting the resolution of an image step by step and a method for characterizing images for simplifying the image by emphasizing only important part(s) of the image.
  • Detailed Description of the Invention
  • Technical Task
  • It is an objective of the present invention to solve all the problems mentioned above.
  • It is another objective of the present invention to reduce the resolution of an image which is a subject of image matching and diminish operations required for image matching by using image pyramid technology, to thereby generate a panoramic image.
  • In addition, it is still another objective of the present invention to emphasize important parts of an image and simplify the image by performing a pre-processing process in which edges in the image are visually expressed with a tangent vector vertical to the gradient vector representing changes in intensity or color to thereby generate a panoramic image.
  • Means for Task Resolution
  • A representative configuration of the present invention for achieving the above objectives is described below:
  • In accordance with one aspect of the present invention, there is provided a method for generating a panoramic image comprising: (a) a step in which resolutions of first and second input images are respectively adjusted to thereby generate first and second adjusted images, wherein the resolutions of the first and the second adjusted images are determined by referring to preset relationship data with respect to the resolutions of the adjusted images versus the input images; (b) a step in which first and second pre-processed images are respectively generated which represent information on edges of the first and the second adjusted images by referring to a tangent vector perpendicular to a gradient vector showing changes in intensity or color of the first and the second adjusted images, respectively; and (c) a step in which image matching operations between the first and the second pre-processed images are performed and then a position is determined where the first and the second input images are synthesized by referring to results of the image matching operations.
  • In accordance with another aspect of the present invention, there is provided a user terminal for generating a panoramic image comprising: a resolution adjusting part that adjusts resolutions of first and second input images, respectively, to thereby generate first and second adjusted images, wherein the resolutions of the first and the second adjusted images are determined by referring to preset relationship data with respect to the resolutions of the adjusted images versus the input images; a pre-processing part that generates first and second pre-processed images which represent information on edges of the first and the second adjusted images by referring to a tangent vector perpendicular to a gradient vector showing changes in intensity or color of the first and the second adjusted images, respectively; and a matching part that performs image matching operations between the first and the second pre-processed images and then determines a position where the first and the second input images should be synthesized by referring to the results of the image matching operations.
  • In addition, other methods, systems, and computer-readable recording media for recording a computer program to execute the methods that are intended to implement the present invention are further provided.
  • Effects of the Invention
  • In accordance with the present invention, since the resolution of an image may be reduced to reduce operations required for image matching, the time required for generating a panoramic image can be reduced.
  • In addition, in accordance with the present invention, since an image may be characterized and simplified by using the image which expresses edges in the image with a tangent vector vertical to a gradient vector representing the change in intensity or color which is the subject of image matching, the effect is that the accuracy of synthesizing panoramic image is guaranteed and its operation speed is improved.
  • Brief Description of the Drawings
    • Figure 1 is a drawing that illustratively presents an internal configuration of a user terminal (100) in accordance with one example embodiment of the present invention.
    • Figure 2 is a drawing that visually illustrates a result of calculating a gradient vector in an image in accordance with one example embodiment of the present invention.
    • Figure 3 is a diagram that visually shows a result of calculating a tangent vector in an image in accordance with one example embodiment of the present invention.
    • Figures 4 and 5 are drawings that illustratively present an original image and its pre-processed image respectively in accordance with one example embodiment of the present invention.
    • Figure 6 is a drawing that illustratively presents results of generating respective panoramic images by synthesizing two adjacent input images in accordance with one example embodiment of the present invention.
    <Description of Reference Numerals>
    • 100: user terminal
    • 110: resolution adjusting part
    • 120: pre-processing part
    • 130: matching part
    • 140: synthesizing and blending part
    • 150: communication part
    • 160: control part
    Forms for Embodiment of the Invention
  • The attached figures are examples that will help explain the invention in detail. These examples are described in sufficient detail to enable those skilled in the art to practice the invention. It is to be understood that the various examples of the invention, although different, are not necessarily mutually exclusive.
  • For example, a particular feature, structure, or characteristic described herein in connection with one example may be implemented within other examples without departing from the spirit and scope of the invention. In addition, it is to be understood that the location or arrangement of individual elements within each disclosed example may be modified without departing from the spirit and scope of the invention. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims, appropriately interpreted, along with the full range of equivalents to which the claims are entitled. In the drawings, like numerals refer to the same or similar functionality throughout the several views.
  • In the following, the present invention is described in detail with reference to preferred examples so that those having ordinary knowledge in the technical field to which the present invention belongs may easily practice it.
  • [Preferred Embodiments of the Present Invention]
  • In the present specification, a panoramic image means an image acquired as a result of photographing a complete view viewed from a point and more specifically, a type of the image capable of offering visual information in all directions actually shown at a shooting point three-dimensionally and realistically by expressing pixels constructing the image in a virtual celestial sphere whose center is the shooting point according to spherical coordinates. Further, while it is not directly illustrated in the present specification, the panoramic image may be an image expressing the pixels constructing the image according to cylindrical coordinates.
  • Configuration of User Terminal
  • Figure 1 is a drawing that illustratively presents an internal configuration of a user terminal (100) in accordance with one example embodiment of the present invention.
  • By referring to Figure 1, the user terminal (100) in accordance with one example embodiment of the present invention may include a resolution adjusting part (110), a pre-processing part (120), a matching part (130), a synthesizing and blending part (140), a communication part (150) and a control part (160). In accordance with one example embodiment of the present invention, at least some of the resolution adjusting part (110), the pre-processing part (120), the matching part (130), the synthesizing and blending part (140), the communication part (150) and the control part (160) may be program modules communicating with the user terminal (100). Such program modules may be included in the user terminal (100) in the form of an operating system, an application program module and other program modules, and they may be physically stored in various storage devices well known to those skilled in the art. Alternatively, such program modules may be stored in a remote storage device capable of communicating with the user terminal (100). On the other hand, such program modules include, but are not limited to, a routine, a subroutine, a program, an object, a component, and a data structure for executing a specific operation or a type of specific abstract data that will be described in accordance with the present invention.
  • First, in accordance with one example embodiment of the present invention, the resolution adjusting part (110) may perform the function of generating an image with adjusted resolution (hereinafter, 'adjusted image') by adjusting the resolutions of input images which are subjects of synthesis for generating a panoramic image. Herein, the resolution of the adjusted image may also be determined by referring to preset relationship data regarding the resolution of the adjusted image versus that of the input image.
  • More specifically, the resolution adjusting part (110) in accordance with one example embodiment of the present invention may determine the resolution of the adjusted image by diminishing the resolution step by step by using a pyramid structure, within the scope where the matching rate between adjacent adjusted images in the preset overlapped region, where the adjacent adjusted images overlap, satisfies a preset level. Herein, the preset overlapped region means a region where adjacent images overlap when they are placed close enough to overlap, as expected statistically or empirically, before image matching is performed to put multiple images together into a panoramic image. For example, the overlapped region, as a region corresponding to the boundaries (top, bottom, left and right) of the image, may be set to account for 10% of the whole area of the image. The process of determining the resolution of the adjusted image in accordance with one example embodiment of the present invention is examined more specifically below.
  • For example, it may be assumed that the adjacent input images A and B are 1920 x 1080 pixels, that the preset matching rate is 80%, and that the resolutions of the input images A and B are reduced step by step to 1/4 by using the pyramid structure. In such an instance, in accordance with one example embodiment of the present invention, assuming that the matching rate of the first adjusted images A and B (whose resolutions become 960 x 540 pixels each after the reduction to 1/4) in the preset overlapped region reaches 84%, since this satisfies the preset matching rate (80%), the resolutions of the first adjusted images A and B may be temporarily determined as 960 x 540 pixels and then reduced to one fourth again at the next step. If, at the second reduction step, the matching rate of the second adjusted images A and B (whose resolutions become 480 x 270 pixels after the further reduction to one fourth) in the preset overlapped region is 65%, it fails to satisfy the preset matching rate (80%). Therefore, the process of reducing the resolutions is suspended, and the resolutions of the adjusted images A and B may be finally determined as 960 x 540 pixels, i.e., the resolutions of the first adjusted images. However, the process of acquiring the relationship data in the present invention is not limited to the method mentioned above and may obviously be changed within the scope of achieving the objectives of the present invention.
  • Next, in accordance with one example embodiment of the present invention, the pre-processing part (120) may perform a function of generating an image on which pre-processing has been performed (hereinafter, 'pre-processed image'), which expresses information on edges (i.e., contours) in the input image(s) whose resolution has been adjusted by the resolution adjusting part (110), wherein the edges are acquired by referring to the tangent vector perpendicular to the gradient vector representing the changes in intensity or color in the adjusted image. A more detailed explanation of the pre-processing process in accordance with one example embodiment of the present invention follows.
  • First, the pre-processing part (120) in accordance with one example embodiment of the present invention may calculate the gradient vector representing the changes in intensity or color, which are scalar values, with respect to the respective pixels of the two-dimensional adjusted image. Herein, the direction of the gradient vector may be determined as the direction of maximum change in intensity or color, and the magnitude of the gradient vector as the rate of change in that direction. In general, because the magnitude of the gradient vector is large in parts, such as the contours of an object, where the changes in intensity or color are great, and small in parts where the changes are small, the edges included in the adjusted image may be detected by referring to the gradient vector. In accordance with one example embodiment of the present invention, the Sobel operator may be used to calculate the gradient vector in the adjusted image. However, it is not limited only to this, and other operators for computing the gradient vector to detect edges in the adjusted image may also be applied.
  • Figure 2 is a drawing that visually illustrates a result of calculating gradient vector components in an image in accordance with one example embodiment of the present invention.
  • By referring to Figure 2, the direction and the magnitude of the gradient vector are expressed by blue lines. It may be found that the blue lines appear long in parts where the change in intensity or color is great, while they are short or do not appear at all in parts where the change is small.
  • Herein, the pre-processing part (120) in accordance with one example embodiment of the present invention may perform a function of calculating a tangent vector by rotating the gradient vector calculated for the respective pixels of the two-dimensional adjusted image, for example, by 90 degrees counterclockwise. Since the calculated tangent vector is parallel to the virtual contour lines drawn based on the scalar value of intensity or color, the visually expressed tangent vector follows the same shapes as the edges, such as the contours of the object included in the adjusted image. Accordingly, the pre-processed image, which visually illustrates the tangent vector of the adjusted image, may itself play the role of an edge image by emphasizing and presenting only the edges included in the adjusted image.
  • Figure 3 is a diagram that visually shows a result of calculating the tangent vector in an image in accordance with one example embodiment of the present invention.
  • By referring to Figure 3, it may be found that the tangent vectors, whose directions and magnitudes are expressed by blue lines, run parallel along the parts where the changes in intensity or color in the image are great, i.e., the edges.
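Numerically, the 90-degree counterclockwise rotation described above amounts to a sign swap of the gradient components. A minimal sketch follows; the (-gy, gx) convention is an assumption, since the exact signs depend on the coordinate convention chosen for the image plane:

```python
import numpy as np


def tangent_field(gx, gy):
    """Rotate each gradient vector (gx, gy) by 90 degrees counterclockwise:
    (tx, ty) = (-gy, gx).  The result is perpendicular to the gradient,
    i.e. parallel to the local iso-intensity contour (the edge direction),
    and it preserves the gradient magnitude."""
    return -gy, gx
```

Because the rotation is a pure sign swap, the tangent vector is exactly perpendicular to the gradient at every pixel and carries the same magnitude, which is why the tangent field traces the edges of the adjusted image.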
  • On the other hand, as an example of a technology available to compute the tangent vector in an image, it is possible to refer to an article entitled "Coherent Line Drawing" co-authored by H. KANG and two others and published in 2007 in "ACM Symposium on Non-Photorealistic Animation and Rendering" (the whole content of the article is considered incorporated in the present specification). The article describes a method for calculating an edge tangent flow (ETF) in an image as a step of a method for automatically drawing lines corresponding to the contours included in the image. Of course, the technology for calculating the tangent vector applicable to the present invention is not limited only to the method described in the aforementioned article, and various modified examples may be applied to implement the present invention.
  • In Figures 2 and 3, the lines are long in parts where the changes in intensity or color are great and short in parts where the changes are small, but the expression is not limited only to this. As shown in Figures 4 and 5, a modified example may be introduced in which pixels are expressed more brightly as the magnitude of the tangent vector becomes larger and more darkly as it becomes smaller.
  • Figure 4 and Figure 5 are drawings that illustratively present an original image and its pre-processed image, respectively, in accordance with one example embodiment of the present invention. For reference, the pre-processed images in Figure 4(b) and Figure 5(b) are images whose pixels are expressed brightly where the magnitude of the tangent vector is large.
  • By referring to Figures 4 and 5, it may be confirmed that, compared with the original input images (Figures 4(a) and 5(a)), the pre-processed images (Figures 4(b) and 5(b)) feature and simplify the originals by emphasizing important parts, including the contours of the object, and boldly omitting unimportant parts.
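The brightness-based rendering shown in Figures 4(b) and 5(b) can be sketched as a simple normalization of the tangent magnitude to the 0..255 grayscale range. This is an illustrative fragment; the patent does not prescribe this exact mapping:

```python
import numpy as np


def tangent_magnitude_image(tx, ty):
    """Render a tangent field as a grayscale pre-processed image:
    bright where the tangent magnitude (edge strength) is large,
    dark where it is small, linearly normalized to 0..255."""
    mag = np.hypot(tx, ty)
    peak = mag.max()
    if peak == 0:
        return np.zeros_like(mag, dtype=np.uint8)
    return (mag / peak * 255.0).astype(np.uint8)
```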
  • As examined above, the matching process explained below uses the pre-processed image, which is acquired by first reducing the resolution of the original input image to a reasonable level and then pre-processing the adjusted image so that its edges are expressed visually with the tangent vector. This improves the accuracy of image matching and increases its operational speed at the same time.
  • Next, in accordance with one example embodiment of the present invention, the matching part (130) may perform image matching operations between adjacent pre-processed images generated by the pre-processing part (120) and perform a function of determining an optimal overlapped position between the original input images corresponding to those pre-processed images by referring to the results of the matching. For example, the matching part (130) in accordance with one example embodiment of the present invention may perform the image matching operations between the pre-processed images within the aforementioned preset overlapped region first.
  • Next, in accordance with one example embodiment of the present invention, the synthesizing and blending part (140) may additionally synthesize the adjacent input images by referring to the synthesis position determined by the matching part (130) and perform a blending process so that the connected portion of the synthesized input images looks natural.
  • On the other hand, as an example of a technology available for matching, synthesizing and blending images, an article entitled "Panoramic Imaging System for Camera Phones" co-authored by Karl Pulli and four others and published in 2010 in the "International Conference on Consumer Electronics" may be referred to (the whole content of the article is considered incorporated in the present specification). The article describes a method for performing image matching between adjacent images by using a feature-based matching technology combined with RANSAC (RANdom SAmple Consensus) and a method for softening the connected portions of the adjacent images by using an alpha blending technology. Of course, the synthesis and blending technologies applicable to the present invention are not limited only to the methods described in the aforementioned article, and various modified examples may be applied to implement the present invention.
  • Figure 6 is a drawing that illustratively presents results of generating respective panoramic images by synthesizing two adjacent input images in accordance with one example embodiment of the present invention. For reference, the panoramic images illustrated in Figure 6 were acquired by synthesizing two input images of a traditional-style building taken from different angles. Figure 6(a) represents a result of generating a panoramic image without going through the process of adjusting the resolution and the subsequent pre-processing, while Figure 6(b) shows a result of generating a panoramic image through the process of adjusting the resolution and the subsequent pre-processing in accordance with one example embodiment of the present invention.
  • By referring to Figure 6, it may be confirmed that the panoramic image in Figure 6(b), generated in accordance with the present invention, is more accurate and more natural than the existing panoramic image in Figure 6(a); in particular, big differences may be confirmed in the part of the stairs and the part of the pillars located to the right of the signboard.
  • The communication part (150) in accordance with one example embodiment of the present invention performs the function of allowing the user terminal (100) to communicate with an external device (not illustrated).
  • The control part (160) in accordance with one example embodiment of the present invention performs the function of controlling data flow among the resolution adjusting part (110), the pre-processing part (120), the matching part (130), the synthesizing and blending part (140) and the communication part (150). In other words, the control part (160) controls the flow of data from outside or among the components of the user terminal (100) to thereby force the resolution adjusting part (110), the pre-processing part (120), the matching part (130), the synthesizing and blending part (140) and the communication part (150) to perform their unique functions.
  • The examples described above according to the present invention can be implemented in the form of program commands that may be executed through a variety of computer components and recorded on computer-readable recording media. The computer-readable media may include, solely or in combination, program commands, data files and data structures. The program commands recorded on the computer-readable recording medium may be specially designed and configured for the present invention or may be known and usable by a person skilled in the field of computer software.
  • Examples of the computer-readable recording medium include magnetic media such as a hard disk, floppy disk and magnetic tape; optical media such as a CD-ROM and DVD; magneto-optical media such as a floptical disk; and hardware devices, such as a ROM, RAM and flash memory, specially configured to store and execute program commands. Program commands include not only machine language code produced by a compiler but also high-level code that can be executed by a computer through an interpreter, etc. The hardware device may be configured to work as one or more software modules to perform the actions according to the present invention, and the reverse is also true.
  • While the present invention has been described so far with certain details, such as specific components, limited examples and drawings, they were merely provided to promote an overall understanding of the present invention, and the present invention is not limited to the examples above. A person with common knowledge of the field to which the present invention belongs may attempt various modifications and changes based on such descriptions.
  • Therefore, the idea of the present invention must not be confined to the explained examples, and the claims to be described later as well as everything including variations equal or equivalent to the claims would belong to the category of the idea of the present invention.

Claims (15)

  1. A method for generating a panoramic image comprising,
    (a) a step in which resolutions of first and second input images are respectively adjusted to thereby generate first and second adjusted images, wherein the resolutions of the first and the second adjusted images are determined by referring to preset relationship data with respect to the resolutions of the adjusted images versus the input images,
    (b) a step in which first and second pre-processed images are respectively generated, which represent information on edges of the first and the second adjusted images, by referring to a tangent vector perpendicular to a gradient vector showing changes in intensity or color of the first and the second adjusted images, respectively, and
    (c) a step in which image matching operations between the first and the second pre-processed images are performed and then a position is determined where the first and the second input images are synthesized by referring to results of the image matching operations.
  2. The method recited in Claim 1, further comprising (d) a step in which a panoramic image is generated by synthesizing the first and the second input images in accordance with the determined synthesis position and then blending the synthesized first and second input images.
  3. The method recited in Claim 1 wherein, in step (a), the resolutions of the first and the second adjusted images are determined within a scope of a matching rate between the first and the second adjusted images satisfying the preset level in a region where the first and the second adjusted images are overlapped.
  4. The method recited in Claim 1 wherein the gradient vector is calculated by a Sobel operator.
  5. The method recited in Claim 1 wherein the tangent vector is a vector acquired after rotating the gradient vector by 90 degrees counterclockwise.
  6. The method recited in Claim 1 wherein the image matching between the first and the second pre-processed images is performed by using a feature-based matching technology combined with RANSAC (RANdom SAmple Consensus).
  7. The method recited in Claim 2 wherein the blending is performed by using an alpha blending technology.
  8. A user terminal for generating a panoramic image comprising,
    a resolution adjusting part that adjusts resolutions for first and second input images, respectively, to thereby generate first and second adjusted images, wherein the resolutions of the first and the second adjusted images are determined by referring to preset relationship data with respect to the resolutions of the adjusted images versus the input images,
    a pre-processing part that generates first and second pre-processed images which represent information on edges of the first and the second adjusted images by referring to a tangent vector perpendicular to a gradient vector showing changes in intensity or color of the first and the second adjusted images, respectively, and
    a matching part that performs image matching operations between the first and the second pre-processed images and then determines a position where the first and the second input images are synthesized by referring to results of the image matching operations.
  9. The user terminal recited in Claim 8 wherein it further comprises a synthesizing and blending part that generates a panoramic image by synthesizing the first and the second input images in accordance with the synthesis position determined and then blending the synthesized first and second input images.
  10. The user terminal recited in Claim 8 wherein the resolutions of the first and the second adjusted images are determined within a scope of a matching rate between the first and the second adjusted images satisfying the preset level in a region where the first and the second adjusted images are overlapped.
  11. The user terminal recited in Claim 8 wherein the gradient vector is calculated by a Sobel operator.
  12. The user terminal recited in Claim 8 wherein the tangent vector is a vector acquired after rotating the gradient vector by 90 degrees counterclockwise.
  13. The user terminal recited in Claim 8 wherein the matching part performs the image matching between the first and the second pre-processed images by using a feature-based matching technology combined with RANSAC (RANdom SAmple Consensus).
  14. The user terminal recited in Claim 9 wherein the synthesizing and blending part performs the blending process by using an alpha blending technology.
  15. A computer-readable recording medium in which a computer program is recorded to execute the method according to any one of Claims 1 to 7.
EP11859264.1A 2011-02-21 2011-12-29 Method for generating a panoramic image, user terminal device, and computer-readable recording medium Withdrawn EP2696573A4 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020110015125A KR101049928B1 (en) 2011-02-21 2011-02-21 Method, terminal and computer-readable recording medium for generating panoramic images
PCT/KR2011/010349 WO2012115347A2 (en) 2011-02-21 2011-12-29 Method for generating a panoramic image, user terminal device, and computer-readable recording medium

Publications (2)

Publication Number Publication Date
EP2696573A2 true EP2696573A2 (en) 2014-02-12
EP2696573A4 EP2696573A4 (en) 2015-12-16

Family

ID=44923728

Family Applications (1)

Application Number Title Priority Date Filing Date
EP11859264.1A Withdrawn EP2696573A4 (en) 2011-02-21 2011-12-29 Method for generating a panoramic image, user terminal device, and computer-readable recording medium

Country Status (5)

Country Link
US (1) US20120212573A1 (en)
EP (1) EP2696573A4 (en)
KR (1) KR101049928B1 (en)
CN (1) CN103718540B (en)
WO (1) WO2012115347A2 (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2675173A1 (en) 2012-06-15 2013-12-18 Thomson Licensing Method and apparatus for fusion of images
US8369595B1 (en) * 2012-08-10 2013-02-05 EyeVerify LLC Texture features for biometric authentication
US9349188B2 (en) 2013-03-15 2016-05-24 Samsung Electronics Co., Ltd. Creating details in an image with adaptive frequency strength controlled transform
US9305332B2 (en) 2013-03-15 2016-04-05 Samsung Electronics Company, Ltd. Creating details in an image with frequency lifting
US9066025B2 (en) 2013-03-15 2015-06-23 Samsung Electronics Co., Ltd. Control of frequency lifting super-resolution with image features
US9536288B2 (en) 2013-03-15 2017-01-03 Samsung Electronics Co., Ltd. Creating details in an image with adaptive frequency lifting
KR101528556B1 (en) * 2013-12-12 2015-06-17 (주)씨프로 Panorama camera device for closed circuit television
KR101530163B1 (en) * 2013-12-12 2015-06-17 (주)씨프로 Panorama camera device for closed circuit television
GB2516995B (en) * 2013-12-18 2015-08-19 Imagination Tech Ltd Task execution in a SIMD processing unit
KR101554421B1 (en) 2014-04-16 2015-09-18 한국과학기술원 Method and apparatus for image expansion using image structure
US9652829B2 (en) 2015-01-22 2017-05-16 Samsung Electronics Co., Ltd. Video super-resolution by fast video segmentation for boundary accuracy control
KR101576130B1 (en) 2015-07-22 2015-12-09 (주)씨프로 Panorama camera device of closed circuit television for high resolution
CN108156386B (en) * 2018-01-11 2020-09-29 维沃移动通信有限公司 Panoramic photographing method and mobile terminal
CN108447107B (en) * 2018-03-15 2022-06-07 百度在线网络技术(北京)有限公司 Method and apparatus for generating video
CN108848354B (en) * 2018-08-06 2021-02-09 四川省广播电视科研所 VR content camera system and working method thereof
CN110097086B (en) * 2019-04-03 2023-07-18 平安科技(深圳)有限公司 Image generation model training method, image generation method, device, equipment and storage medium
CN110536066B (en) * 2019-08-09 2021-06-29 润博全景文旅科技有限公司 Panoramic camera shooting method and device, electronic equipment and storage medium

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5706416A (en) * 1995-11-13 1998-01-06 Massachusetts Institute Of Technology Method and apparatus for relating and combining multiple images of the same scene or object(s)
US6434265B1 (en) * 1998-09-25 2002-08-13 Apple Computers, Inc. Aligning rectilinear images in 3D through projective registration and calibration
US6785427B1 (en) * 2000-09-20 2004-08-31 Arcsoft, Inc. Image matching using resolution pyramids with geometric constraints
KR20020078663A (en) * 2001-04-07 2002-10-19 휴먼드림 주식회사 Patched Image Alignment Method and Apparatus In Digital Mosaic Image Construction
JP2004334843A (en) * 2003-04-15 2004-11-25 Seiko Epson Corp Method of composting image from two or more images
US7409105B2 (en) * 2003-10-22 2008-08-05 Arcsoft, Inc. Panoramic maker engine for a low profile system
KR100724134B1 (en) * 2006-01-09 2007-06-04 삼성전자주식회사 Method and apparatus for providing panoramic view with high speed image matching and mild mixed color blending
KR100866278B1 (en) * 2007-04-26 2008-10-31 주식회사 코아로직 Apparatus and method for making a panorama image and Computer readable medium stored thereon computer executable instruction for performing the method
KR101354899B1 (en) * 2007-08-29 2014-01-27 삼성전자주식회사 Method for photographing panorama picture
KR100934211B1 (en) * 2008-04-11 2009-12-29 주식회사 디오텍 How to create a panoramic image on a mobile device
CN101853524A (en) * 2010-05-13 2010-10-06 北京农业信息技术研究中心 Method for generating corn ear panoramic image by using image sequence
US8640020B2 (en) * 2010-06-02 2014-01-28 Microsoft Corporation Adjustable and progressive mobile device street view

Also Published As

Publication number Publication date
CN103718540B (en) 2017-11-07
KR101049928B1 (en) 2011-07-15
EP2696573A4 (en) 2015-12-16
US20120212573A1 (en) 2012-08-23
WO2012115347A2 (en) 2012-08-30
CN103718540A (en) 2014-04-09
WO2012115347A3 (en) 2012-10-18

Similar Documents

Publication Publication Date Title
EP2696573A2 (en) Method for generating a panoramic image, user terminal device, and computer-readable recording medium
KR101956149B1 (en) Efficient Determination of Optical Flow Between Images
US11004208B2 (en) Interactive image matting using neural networks
US9516214B2 (en) Information processing device and information processing method
EP2080170B1 (en) Combined intensity projection
US8547378B2 (en) Time-based degradation of images using a GPU
US10535147B2 (en) Electronic apparatus and method for processing image thereof
CN107451976B (en) A kind of image processing method and device
Fung et al. Mediated reality using computer graphics hardware for computer vision
JP2019527355A (en) Computer system and method for improved gloss rendering in digital images
US20030146922A1 (en) System and method for diminished reality
JP6345345B2 (en) Image processing apparatus, image processing method, and image processing program
KR20100122381A (en) Apparatus and method for painterly rendering
US10212406B2 (en) Image generation of a three-dimensional scene using multiple focal lengths
US11100617B2 (en) Deep learning method and apparatus for automatic upright rectification of virtual reality content
US11410398B2 (en) Augmenting live images of a scene for occlusion
AU2012268887A1 (en) Saliency prediction method
WO2021056501A1 (en) Feature point extraction method, movable platform and storage medium
KR102587298B1 (en) Real-time omnidirectional stereo matching method using multi-view fisheye lenses and system therefore
AU2015258346A1 (en) Method and system of transitioning between images
Sung et al. Selective anti-aliasing for virtual reality based on saliency map
AU2015271981A1 (en) Method, system and apparatus for modifying a perceptual attribute for at least a part of an image
Herbon et al. Adaptive planar and rotational image stitching for mobile devices
KR20200103527A (en) Image processing method and apparatus therefor
Burger et al. Corner detection

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

PUAB Information related to the publication of an a document modified or deleted

Free format text: ORIGINAL CODE: 0009199EPPU


17P Request for examination filed

Effective date: 20130821

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAX Request for extension of the european patent (deleted)
A4 Supplementary search report drawn up and despatched

Effective date: 20151113

RIC1 Information provided on ipc code assigned before grant

Ipc: H04N 5/232 20060101AFI20151109BHEP

Ipc: G06T 3/40 20060101ALI20151109BHEP

17Q First examination report despatched

Effective date: 20180509

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20181120