US20080166069A1 - Image processing apparatus using the difference among scaled images as a layered image and method thereof


Info

Publication number
US20080166069A1
US20080166069A1 (application US11/797,526)
Authority
US
United States
Prior art keywords: image, layered, editing, layered image, layer
Legal status: Abandoned
Application number
US11/797,526
Inventor
Tsung-Wei Lin
Current Assignee
Cascade Parent Ltd
Original Assignee
Corel TW Corp
Priority date: 2007-01-08
Filing date: 2007-05-04
Publication date: 2008-07-10
Application filed by Corel TW Corp filed Critical Corel TW Corp
Assigned to INTERVIDEO DIGITAL TECHNOLOGY CORPORATION (assignment of assignor's interest; assignor: LIN, TSUNG-WEI)
Assigned to COREL TW CORP. (change of name from INTERVIDEO DIGITAL TECHNOLOGY CORPORATION)
Publication of US20080166069A1
Assigned to COREL CORPORATION (assignment of assignor's interest; assignor: COREL TW CORPORATION)
Priority claimed by continuation US13/082,115 (published as US8280179B2)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting


Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Editing Of Facsimile Originals (AREA)
  • Image Processing (AREA)

Abstract

The present invention provides an image processing apparatus that uses the differences among scaled images as layered images, and a method thereof. The Gaussian and Laplacian pyramid theory is used to convert an original image into a plurality of scaled images of different scales, and the difference between scaled images of two adjacent scales is used as the layered image of the corresponding layer, so that the edge and line characteristics of the scene in the original image are displayed in each layered image at levels ranging sequentially from clear to vague. A layered image display interface and an image characteristic editing interface are also provided, so that users can examine each layered image through the layered image display interface and edit or apply special effects to each layered image, thereby simulating different visual effects based on different vision models.

Description

    FIELD OF THE INVENTION
  • The present invention relates to an image processing apparatus and method, and more particularly to an image processing apparatus and method that use the differences among scaled images as layered images, enabling a user to edit or apply special effects to the edge and line characteristics of each layered image of a scene in an original image, so as to simulate different visual effects based on different vision models.
  • BACKGROUND OF THE INVENTION
  • In general, image processing software usually comes with a layer processing function that keeps each image on its own layer, as shown in FIG. 1, so users usually need to create a layer 111, 121 or add a layer before processing (editing or composing) an image 110, 120 in a file; a canvas 130 sits below all the layers 111, 121, although the canvas 130 is not itself a layer. The layers 111, 121 behave like a stack of transparent films and help a user organize the images 110, 120 in a file, so that the user can edit the image 110 on the layer 111 without affecting the image 120 on the other layer 121. If the layer 111 contains no image 110, the user can see the image 120 on the layer 121 through the layer 111. In addition, a user can view the stacking sequence of the layers and images on a layer display panel provided by the image processing software, and this stacking sequence is also the sequence in which the images appear in the document. In general, image processing software stacks the layers in this sequence, placing the most recently created layer at the top of the stack; the stacking sequence determines how the image of one layer is stacked on the image of another, and users can rearrange the sequence of layers and images through a control interface of the software to change the content of the images in a file.
  • Referring to FIG. 2 for a commercially available image processing software called "Photoshop", a user who wants to compose two images 210, 220 can create the two images 210, 220 on two separate layers from the layer display panel provided by the software. After the size and position of each image 210, 220 are adjusted according to the user's requirements, a composed image 310 is created as shown in FIG. 3. Referring to FIG. 4, a user who wants to edit the first image 410 on the layer display panel 400 can click the [Add Mask] button 450 to create a mask 440 for the first image 410 on the first layer 411. In FIG. 5, a brush tool 471 is selected from a tool menu 470. In FIGS. 5 and 6, the selected brush tool 471 is used to paint the mask 440, so that the positions of the mask 440 corresponding to the dark portions of a second image 420 on a second layer 421 are set to black. If the user chooses to apply a gentle pressure with the brush tool 471, the user needs to adjust the transparency of the black color, so that the image 410 (the foreground) on the first layer 411 is merged with the second image 420 (the background) on the second layer 421 to form a composed image 430 with the best composition effect; a minimal sketch of this kind of mask-based composition follows this paragraph.
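  • As an illustration of the mask-based composition described above, the following is a minimal sketch assuming OpenCV/NumPy, same-sized images, and a grayscale mask in which painted (black) areas let the background show through; the file names are hypothetical and this is not how Photoshop itself is implemented.

```python
# Mask-based composition sketch: where the mask is dark the background shows
# through, where it is light the foreground remains.  Library choice and file
# names are illustrative assumptions.
import cv2
import numpy as np

foreground = cv2.imread("foreground.jpg").astype(np.float32)  # image on the first layer
background = cv2.imread("background.jpg").astype(np.float32)  # image on the second layer
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE) / 255.0   # 0 = painted black, 1 = untouched

alpha = mask[..., None]                                        # broadcast over the color channels
composed = alpha * foreground + (1.0 - alpha) * background
cv2.imwrite("composed.jpg", composed.astype(np.uint8))
```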
  • From the description above, commercially available image processing software can use the concept and technique of layers to provide tools for editing the image on each layer or rearranging the image sequence of the layers, and can use image composition technology to simulate a digital darkroom for editing and composing images; however, such software cannot show the effect of different vision models for an image.
  • LMS is a color space used to express the responses of the three kinds of cones of the human eye, which are sensitive to light of long, medium and short wavelengths respectively, and the cross-section of the human retinal-cortical system includes complicated neural links. Only the LMS cones are generally known, but the deeper structure of the retinal-cortical system also includes tree-structured constructions of cell strands, and the roots of these constructions link several cones together to form so-called "ganglion cells" that make up several receptive fields. Although neurophysiologists already have a relatively good understanding of the visual imaging mechanism of the human retinal-cortical system, this understanding is limited to an edge enhancement effect only; since ganglion cells of different sizes are distributed with decreasing density from the fovea toward the peripheral areas of the retina, the basic visual imaging principle is that acuity decreases from the center of the line of sight toward the periphery. While human eyes view a scene, the image sensed at a fixation point or in a perceptual field of the eye is not like that of a camera or camcorder, which, as shown in FIG. 7, treats every position of the image 510 the same way. In fact, as shown in FIG. 8, human eyes dwell for a longer time only on an area of interest (such as the cross area), and the fixation point or perceptive field varies and the level of fixation differs. As a result, an image 530 with the effect of a different vision model is produced in an imaging area of the human brain, as shown in FIG. 9.
  • In recent years, unsharp masking (USM) has been used extensively in traditional image processing software to let users apply special effects to an image. Regardless of the computation program used by such software, the core technology uses the Laplacian-of-Gaussian edge enhancement technique and concept to simulate the effect of a receptive field in human vision; however, the computation performs only single-level processing of the image and cannot show the effect of different vision models for the image. A minimal sketch of such single-level sharpening follows this paragraph.
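  • The sketch below illustrates the kind of single-level unsharp masking referred to above; it is not taken from the patent, and the OpenCV/NumPy calls, sigma and amount values, and file names are assumptions for demonstration only.

```python
# Unsharp-masking sketch: a single-level enhancement that adds back the
# difference between the image and a Gaussian-blurred copy of it.
import cv2
import numpy as np

def unsharp_mask(image, sigma=2.0, amount=1.5):
    blurred = cv2.GaussianBlur(image.astype(np.float32), (0, 0), sigma)
    detail = image.astype(np.float32) - blurred            # high-frequency residue
    sharpened = image.astype(np.float32) + amount * detail
    return np.clip(sharpened, 0, 255).astype(np.uint8)

img = cv2.imread("photo.jpg")                              # hypothetical input file
cv2.imwrite("photo_usm.jpg", unsharp_mask(img))
```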
  • SUMMARY OF THE INVENTION
  • In view of the foregoing shortcomings of the prior art, the inventor of the present invention conducted extensive research and experiments based on years of experience in the related industry, and finally developed an image processing apparatus using the difference among scaled images as a layered image, and a method thereof, in accordance with the present invention.
  • It is a primary objective of the present invention to provide an image processing apparatus using the difference among scaled images as a layered image, and a method thereof, to simulate, while a user is editing an image, the image produced in an imaging area of the human brain with the effect of different vision models of the same scene, according to the fixation point, perceptive field or level of fixation of a human eye viewing the scene.
  • Another objective of the present invention is to use the Gaussian and Laplacian pyramid theory to convert an original image into a plurality of scaled images of different scales, to use the difference between scaled images of two adjacent scales as the layered image of the corresponding layer, so that the edge and line characteristics of the scene in the original image are displayed in each layered image at levels ranging sequentially from clear to vague, and to provide a layered image display interface and an image characteristic editing interface so that users can examine each layered image through the layered image display interface. According to actual requirements, the image characteristic editing interface is used to edit or apply another special effect to each layered image, so as to simulate different visual effects based on different vision models.
  • A further objective of the present invention is to provide a layered image editing interface that includes a layered image editing window for displaying each layered image being edited, or a composed image composed of the layered images, and whose periphery carries: at least one layered image characteristic adjusting button that users click to adjust the contrast or Gaussian variance of the layered image; at least one size adjusting button that users click to zoom the layered image or composed image in or out; and at least one image composition switch button that users click to switch the layered image editing window and select each layered image or composed image, so that a user can click the image composition switch button to browse each layered image being edited or the composed image.
  • A still further objective of the present invention is to provide an image output interface for integrating the edited layered images into a new image and converting the new image into an image file of a specific format for saving, displaying or printing.
  • To make it easier for the examiner to understand the objectives, technical characteristics and effects of the present invention, a preferred embodiment is described below with the accompanying drawings:
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a schematic view of images and layers in various image files of the prior art;
  • FIG. 2 is a schematic view of two images;
  • FIG. 3 is a schematic view of using Photoshop image processing software to compose two images as depicted in FIG. 2 into one image;
  • FIG. 4 is a schematic view of using Photoshop image processing software to add a mask operation to the composed image as depicted in FIG. 3;
  • FIG. 5 is a schematic view of a tool menu of Photoshop image processing software;
  • FIG. 6 is a schematic view of using Photoshop image processing software to add a new mask to the image as depicted in FIG. 3 for a composition;
  • FIG. 7 is a schematic view of a scene captured by a camera or a video camera;
  • FIG. 8 is a schematic view of an area (such as a cross area) of a scene that attracts the human eye;
  • FIG. 9 is a schematic view simulating an image in which the area of interest shown in FIG. 8 is viewed by the human eye for a longer time;
  • FIG. 10 is a schematic view of simulating five visual model spaces according to human vision sensing experience;
  • FIG. 11 is a schematic view of five levels of visual model spaces as shown in FIG. 10;
  • FIG. 12 is a schematic view of converting an original image into a plurality of scaled images of different scales by using the Gaussian and Laplacian pyramid theory;
  • FIG. 13 is a schematic view of using an image difference among scaled images of two adjacent different scales to compute a layered image corresponding to a layer;
  • FIG. 14 is a schematic view of a structure of an image processing apparatus in accordance with a preferred embodiment of the present invention;
  • FIG. 15 is a schematic view of a layered image display interface of an image processing apparatus as depicted in FIG. 14;
  • FIG. 16 is a schematic view of an image characteristic editing interface of an image processing apparatus as depicted in FIG. 14;
  • FIG. 17 is a schematic view of a layered image editing interface of an image processing apparatus as depicted in FIG. 14;
  • FIG. 18 is a schematic view of a layered image display interface in accordance with a preferred embodiment of the present invention;
  • FIG. 19 is a schematic view of a layer control panel in accordance with another preferred embodiment of the present invention; and
  • FIG. 20 is a schematic view of a layer control panel in accordance with a further preferred embodiment of the present invention.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • In general, the human visual system identifies the edges and lines of an image, a task that is easy for humans on most occasions but not for a camera system. Even though many relatively complicated theories and algorithms have been adopted, it is not easy to simulate the recognition capability of the human visual system, due to the following factors:
  • (1) Errors caused by quantization of the original image and by detected noise can cause an edge or line area of the image to disappear as the detected brightness varies;
  • (2) The precise positions of an edge or a line of an image may be shifted by quantization errors and noise; and
  • (3) The edges and lines of an object in an image should theoretically show a sharp change of brightness because of their high-frequency characteristic, so any smoothing filter applied to reduce image noise blurs the signals in the edge areas. Since the vast majority of edge detection methods adopt derivative processing, which amplifies not only the high-frequency signals but also the noise, a smoothing filter must be applied to the signals, and the smoothing effect depends on the size of the filter and the scale of the filtering. The larger the filter scale, the broader the range of brightness covered, but the lower the precision of the detected edge positions; the smaller the filter scale, the more precise the edge positions, but the easier it is to generate errors at the edge points. Furthermore, the influence differs for differently scaled images, so it is not easy to find the best scale for all edges of a single image, let alone an appropriate scale for all images.
  • In the present image processing field, the generally acknowledged best edge detection method is called multi-scale edge detection. The main concept of the method is to apply smoothing filters (such as Gaussian filters) of different scales, convolving each with the original image to obtain a filtered image at each scale, then take the edges of the filtered image at every scale, and finally stack the edges of all scales to form an edge image; a minimal sketch of this procedure follows this paragraph.
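  • The following sketch of multi-scale edge detection assumes OpenCV/NumPy, uses Canny edge detection as a stand-in for whatever edge operator an implementation actually uses, and picks arbitrary sigma values and thresholds; none of these choices come from the patent itself.

```python
# Multi-scale edge detection sketch: smooth at several scales, take the edges
# at each scale, and stack (union) the per-scale edge maps.
import cv2
import numpy as np

def multiscale_edges(gray, sigmas=(1.0, 2.0, 4.0, 8.0)):
    stacked = np.zeros_like(gray, dtype=np.uint8)
    for sigma in sigmas:
        smoothed = cv2.GaussianBlur(gray, (0, 0), sigma)    # smoothing filter at this scale
        edges = cv2.Canny(smoothed, 50, 150)                # edge map for this scale
        stacked = cv2.bitwise_or(stacked, edges)            # stack edges of all scales
    return stacked

gray = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)        # hypothetical input
cv2.imwrite("edges.jpg", multiscale_edges(gray))
```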
  • An object can be expressed at different scales when displayed in an image. For instance, if a camera photographs a person walking toward it and captures images continuously, the scale of the human face in the captured images ranges from 3×3 pixels to 300×300 pixels or even larger. As the scale of the human face varies, a change in the characteristics of the face shows up in the captured images, which is equivalent to the characteristics of the face changing continuously on the photographer's retina, so that the photographer's visual cortex, in a high-level perception area, senses the quantum jumps and forms visual models of different scales and of different generations.
  • Referring to FIGS. 10 and 11, the complexity of the graph structure increases when the size of an image 600 is enlarged, so the characteristics of the image 600 become more significant. If generation models of different scales can be created, the series of models can be used to define perceptual model spaces. From the experience of human perceptual sensing, the perceptual model spaces can be divided into five major regimes:
  • (1) A texture regime 610: When a viewer views a person from a distance, the viewer cannot easily identify an image of that person's face, so the color of the skin is generally used to segment the human face in the image;
  • (2) A PCA regime 620: This regime has been proven best at showing the characteristics of a scene in a mid-scale image;
  • (3) A parts regime 630: This regime has a higher resolution, allowing the five facial features (including the eyes, nose and mouth) to be clearly identified in the image, so it can also identify the movements of the five facial features (such as the closing or opening of an eye);
  • (4) A sketch regime 640: This regime has a much higher resolution and displays the characteristics in more detail, so that details of the five facial features such as eyelids, eyebrows and crow's feet can be identified; and
  • (5) A super-resolution regime 650: This regime has an even higher resolution for displaying the characteristics in still more detail.
  • From the description above, any image can generate a different visual model according to the scale of its perceptual space. The present invention adopts this concept together with the Gaussian and Laplacian pyramid theory, as illustrated in FIG. 12, to convert an original image into a plurality of scaled images 700, 710, 720, 730, 740, 750 of visual models of different scales (as shown in FIG. 13). The image difference between scaled images 700, 710, 720, 730 of two adjacent scales is computed, and the computed difference is used as the layered image 701, 711, 721, 731 of the corresponding layer, such that each layered image 701, 711, 721, 731 can be shown at levels ranging sequentially from clear to vague for displaying the edge and line characteristics of the original image. The present invention also provides a layered image display interface and an image characteristic editing interface, so that a user can examine each layered image 701, 711, 721, 731 of a visual model of a different scale of the original image through the layered image display interface, and edit or apply a special effect to each layered image 701, 711, 721, 731 through the image characteristic editing interface, so as to simulate different visual effects based on different vision models and the user's requirements; a minimal sketch of such a pyramid decomposition follows this paragraph.
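  • The sketch below shows one common way such a decomposition could be computed; OpenCV's pyrDown/pyrUp, the level count, and the file name are assumptions for illustration, not the specific algorithm claimed by the patent.

```python
# Gaussian/Laplacian pyramid sketch: each "layered image" is the difference
# between two scaled images of adjacent scales.
import cv2
import numpy as np

def decompose(original, levels=5):
    """Return (gaussian_pyramid, layered_images)."""
    gaussian = [original.astype(np.float32)]
    for _ in range(levels):
        gaussian.append(cv2.pyrDown(gaussian[-1]))          # next, coarser scaled image
    layers = []
    for fine, coarse in zip(gaussian[:-1], gaussian[1:]):
        upsampled = cv2.pyrUp(coarse, dstsize=(fine.shape[1], fine.shape[0]))
        layers.append(fine - upsampled)                      # difference of adjacent scales
    return gaussian, layers

img = cv2.imread("original.jpg")                             # hypothetical input
gaussian, layers = decompose(img)                            # layers[0] is the clearest level
```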
  • It is noteworthy that mathematical algorithms for computing the image difference between scaled images of two adjacent scales have been disclosed in many technical papers and journals. These algorithms vary according to actual needs and objectives, but the basic algorithm generally adopts the visual model of the Gaussian and Laplacian pyramid theory to obtain the user's expected edge and line characteristics of an original image in visual models of different scales. Since these algorithms and mathematical models are not intended to be covered by the claims of the present invention, they are not described here.
  • In a preferred embodiment of the present invention as shown in FIG. 14, the image processing apparatus includes the following interfaces:
  • (1) A scaled image conversion interface 800: This interface, as shown in FIG. 12, reads an original image and converts it into a plurality of scaled images 700, 710, 720, 730, 740, 750 of visual models of different scales. In FIG. 13, the image difference between scaled images 700, 710, 720, 730 of two adjacent scales is computed and used as the layered image 701, 711, 721, 731 of the corresponding layer, such that each layered image 701, 711, 721, 731 can be shown at levels ranging sequentially from clear to vague for displaying the characteristics, including the edges and lines, of the original image;
  • (2) A layered image display interface 810: The layered image display interface 810, as shown in FIG. 15, comprises a plurality of layer control panels 811, each having a layer display window 812 and an editing start button 813. The layer display window 812 displays the layered image of the corresponding layer, so that a user can examine a layered image of a visual model of a different scale of the original image through the layer display window 812 and determine whether or not to edit or apply a special effect to each layered image; the editing start button 813 starts an editing program that allows users to edit the layered image displayed on the layer display window 812;
  • (3) An image characteristic editing interface 820: The image characteristic editing interface 820, as shown in FIG. 16, comprises a plurality of image characteristic menus 821 that users click to edit and adjust characteristics, including the contrast, highlight, midtone, shadow and white balance, of the layered image whose editing program has been started;
  • (4) A layered image editing interface 830: The layered image editing interface 830, as shown in FIG. 17, comprises a layered image editing window 831 for displaying each layered image being edited or a composed image composed of the layered images; at least one layered image characteristic adjusting button 832, disposed at the periphery of the layered image editing interface 830, that users click to adjust the contrast or Gaussian variance of the layered image; at least one size adjusting button 833 that users click to zoom the layered image or composed image in or out; and at least one image composition switch button 834 that users click to switch the layered image editing window 831 to select and show each layered image or composed image, so that a user can click the image composition switch button 834 to browse each layered image being edited or the composed image; and
  • (5) An image output interface 840: The image output interface 840 integrates each edited layered image into a new image, as shown in FIG. 14, and converts the new image into an image file of a specific format for saving, displaying and printing. A minimal sketch of this edit-and-recompose flow follows this list.
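  • The sketch below illustrates, under stated assumptions, how edited layered images could be recomposed into a new image for output: each layer is scaled by a per-layer gain (a simple stand-in for whatever adjustment the characteristic buttons apply) and the layers are summed back onto the coarsest scaled image. It reuses the hypothetical decompose() helper from the earlier pyramid sketch and is not the patent's own implementation.

```python
# Edit-and-recompose sketch: apply a per-layer gain to each layered image,
# then add the layers back onto the coarsest scaled image and save the result.
import cv2
import numpy as np

def recompose(gaussian, layers, gains):
    """Rebuild the image from edited layers; gains[i] scales layer i."""
    image = gaussian[-1]                                     # coarsest scaled image
    for layer, gain in zip(reversed(layers), reversed(gains)):
        upsampled = cv2.pyrUp(image, dstsize=(layer.shape[1], layer.shape[0]))
        image = upsampled + gain * layer                     # add the edited layer back
    return np.clip(image, 0, 255).astype(np.uint8)

img = cv2.imread("original.jpg")                             # hypothetical input
gaussian, layers = decompose(img)                            # from the earlier sketch
edited = recompose(gaussian, layers, gains=[1.0, 1.2, 1.5, 1.0, 0.8])
cv2.imwrite("edited.jpg", edited)                            # image output step
```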
  • Referring to FIGS. 15 and 17 for another preferred embodiment of the present invention, each layer control panel 811 further includes a layer display button 814 that users click to control whether or not the corresponding layered image is displayed in the layered image editing window 831. Users can therefore click the layer display button 814 of a layer control panel 811 according to actual requirements to open or close the corresponding layered image, and click the editing start button 813 of the layer control panel 811 to decide whether to edit or apply a special effect to the layered image.
  • Referring to FIGS. 16 and 18 for another preferred embodiment of the present invention, each layer control panel 861 further adds a mask display window 862 and a mask menu 863, wherein the mask menu 863 lets users click a desired mask and display it on the mask display window 862 as the mask of the corresponding layer, so that users can click the image characteristic menus 821 of the image characteristic editing interface 820 to edit the mask.
  • Referring to FIGS. 17 and 19 for another preferred embodiment of the present invention, each layer control panel 871 adds a mask display button 872 that users click to control whether or not the mask is displayed in the layered image editing window 831.
  • Referring to FIG. 16 for another preferred embodiment of the present invention, the image characteristic editing interface 820 further adds a profile menu 822 that users click to determine whether or not to access an existing profile of the layered image.
  • In another preferred embodiment of the present invention as shown in FIG. 16, the image characteristic editing interface 820 adds an LMS channel menu 823 that users click to select an LMS channel; a sketch of one way an image could be mapped into LMS channels follows this paragraph.
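  • The patent does not specify how the LMS channels are computed; the following is a minimal sketch assuming linear sRGB input, the standard sRGB-to-XYZ (D65) matrix and the Hunt-Pointer-Estevez XYZ-to-LMS matrix, which are common but by no means the only possible choices.

```python
# RGB-to-LMS sketch for selecting an LMS channel.  The matrices are
# conventional assumptions; the patent does not state which transform is used.
import numpy as np

RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                       [0.2126, 0.7152, 0.0722],
                       [0.0193, 0.1192, 0.9505]])
XYZ_TO_LMS = np.array([[ 0.38971, 0.68898, -0.07868],
                       [-0.22981, 1.18340,  0.04641],
                       [ 0.00000, 0.00000,  1.00000]])

def rgb_to_lms(rgb):
    """rgb: float array in [0, 1] with shape (H, W, 3), assumed already linear."""
    flat = rgb.reshape(-1, 3)
    lms = flat @ (XYZ_TO_LMS @ RGB_TO_XYZ).T
    return lms.reshape(rgb.shape)

# Selecting the "L" channel of a hypothetical image array img_float:
# l_channel = rgb_to_lms(img_float)[..., 0]
```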
  • In another preferred embodiment of the present invention as shown in FIG. 20, the layer control panel 881 adds a blurring/fine-tuning switch button 882 that users click to adjust the blurring or fine-tuning of the pixels of the layered image.
  • In another preferred embodiment of the present invention as shown in FIG. 20, the layer control panel 881 adds a noise reduction button 883 that users click to effectively prevent noise from interfering with the layered image.
  • It is noteworthy that the editing and composition described in the preferred embodiments are for illustration only. Persons skilled in the art should be able to use the Gaussian and Laplacian pyramid theory and the concept of the present invention to convert an original image into a plurality of scaled images, display each layered image of the visual models of different scales of the original image through a layered image display interface, and let users edit or apply a special effect to each layered image through an image characteristic editing interface, so that different visual effects can be simulated based on different vision models. All mathematical conversion programs or editing and composition programs of this sort are covered by the scope of the claims of the present invention.

Claims (13)

1. A method of processing an image by using the difference among scaled images as a layered image, being applied in a computer, and said method comprising the steps of:
converting an original image into a plurality of scaled images with different scales in a visual model, and using the difference among said scaled images of two adjacent different scales as a layered image of a corresponding layer, such that said each layered image can be displayed in a sequence of different levels from a clear level to a vague level for showing the characteristics of an edge and a line of a scene in said original image;
providing a layered image display interface, for displaying said each layered image in a visual model of a different scale in said original image; and
providing an image characteristic editing interface, for editing or performing a special effect for the characteristics of said each layered image.
2. The method of claim 1, further comprising the step of providing a layered image editing interface for displaying an editing composed image comprised of said each layered image or said layered images.
3. The method of claim 2, further comprising the steps of providing an image output interface for integrating said edited layered images into a new image, and converting said new image into an image file having a specific format.
4. An image processing apparatus using the difference among scaled images as a layered image, comprising:
a scaled image conversion interface, for reading an original image, and converting said original image into a plurality of scaled images in a visual model with a different scale, and using an image difference among said scaled images of two adjacent different scales for a computation to find said image difference as a layered image of a corresponding layer, such that said each layered image can be shown in a sequence of levels from a clear level to a vague level to show the characteristics of an edge and a line of said original image;
a layered image display interface, including a plurality of layer control panels, each having a layer display window and an editing start button, wherein said layer display window is provided for displaying a corresponding layered image on said layer to examine a layered image in said visual model with a different scale of said original image through said layer display window; and
said editing start button is provided for starting an editing program for editing a layered image displayed on said layer display window; and
an image characteristic editing interface, including a plurality of image characteristic menus, for editing and adjusting a characteristic value of said layered image of a started editing program.
5. The apparatus of claim 4, further comprising a layered image editing interface, and said layered image editing interface comprising:
a layered image editing window, for displaying said each editing layered image or a composed image composed of said layered images;
at least one layered image characteristic adjusting button, installed at the periphery of said layered image editing window, for adjusting the contrast or Gaussian of said layered image;
at least one size adjusting button, for adjusting a process of zooming in or zooming out of said layered image or said composed image; and
at least one image composition switch button, for switching said layered image editing window to selectively show said each layered image or composed image.
6. The apparatus of claim 5, further comprising an image output interface for integrating said each edited layered image into a new image, and converting said new image into an image file with a specific format for saving, displaying or printing the output of said image file.
7. The apparatus of claim 4, wherein said each layer control panel further comprises a layer display button, for controlling whether or not to display a corresponding layered image on said layered image editing window.
8. The apparatus of claim 4, wherein said each layer control panel further comprises a mask display window and a mask menu, and said mask menu is provided for users to click and use said mask, and display said mask on said mask display window, for displaying a mask of said corresponding layer.
9. The apparatus of claim 8, wherein said each layer control panel further comprises a mask display button, for controlling whether or not to display said mask in said layer editing window.
10. The apparatus of claim 4, wherein said each layer control panel further comprises a blurring or fine-tune switching button, for adjusting a blurring or a fine-tune of pixels of said layered image.
11. The apparatus of claim 4, wherein said each layer control panel further comprises a noise reduction button, for effectively preventing said layered image from being interfered by a noise.
12. The apparatus of claim 4, wherein said each image characteristic editing interface further comprises a profile menu used for determining whether or not to access said existing profile of said layered image.
13. The apparatus of claim 4, wherein said each image characteristic editing interface further comprises a LMS channel menu, for selecting a LMS channel.
US11/797,526 2007-01-08 2007-05-04 Image processing apparatus using the difference among scaled images as a layered image and method thereof Abandoned US20080166069A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/082,115 US8280179B2 (en) 2007-01-08 2011-04-07 Image processing apparatus using the difference among scaled images as a layered image and method thereof

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
TW096100638A TW200830217A (en) 2007-01-08 2007-01-08 Image processing device and method by using differences of different scaled images as layered images
TW096100638 2007-01-08

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/082,115 Continuation US8280179B2 (en) 2007-01-08 2011-04-07 Image processing apparatus using the difference among scaled images as a layered image and method thereof

Publications (1)

Publication Number Publication Date
US20080166069A1 true US20080166069A1 (en) 2008-07-10

Family

ID=39594360

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/797,526 Abandoned US20080166069A1 (en) 2007-01-08 2007-05-04 Image processing apparatus using the difference among scaled images as a layered image and method thereof
US13/082,115 Expired - Fee Related US8280179B2 (en) 2007-01-08 2011-04-07 Image processing apparatus using the difference among scaled images as a layered image and method thereof

Family Applications After (1)

Application Number Title Priority Date Filing Date
US13/082,115 Expired - Fee Related US8280179B2 (en) 2007-01-08 2011-04-07 Image processing apparatus using the difference among scaled images as a layered image and method thereof

Country Status (2)

Country Link
US (2) US20080166069A1 (en)
TW (1) TW200830217A (en)


Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7391929B2 (en) 2000-02-11 2008-06-24 Sony Corporation Masking tool
JP4844664B2 (en) * 2009-09-30 2011-12-28 カシオ計算機株式会社 Image processing apparatus, image processing method, and program
TWI405148B (en) * 2009-12-09 2013-08-11 Univ Nat Taiwan Method of realism assessment of an image composite
US8943442B1 (en) * 2009-12-21 2015-01-27 Lucasfilm Entertainment Company Ltd. Controlling operations for execution
US9143659B2 (en) 2012-01-08 2015-09-22 Gary Shuster Clothing and body covering pattern creation machine and method
EP2765495B1 (en) * 2013-02-07 2016-06-22 Advanced Digital Broadcast S.A. A method and a system for generating a graphical user interface
KR20140115836A (en) * 2013-03-22 2014-10-01 삼성전자주식회사 Mobile terminal for providing haptic effect and method therefor
CN106586135B (en) * 2016-12-28 2018-09-18 天津普达软件技术有限公司 A kind of product packing box date of manufacture spray printing defective products elimination method


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7149358B2 (en) * 2002-11-27 2006-12-12 General Electric Company Method and system for improving contrast using multi-resolution contrast based dynamic range management

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4692806A (en) * 1985-07-25 1987-09-08 Rca Corporation Image-data reduction technique
US5469536A (en) * 1992-02-25 1995-11-21 Imageware Software, Inc. Image editing system including masking capability
US5666475A (en) * 1995-01-03 1997-09-09 University Of Washington Method and system for editing multiresolution images at fractional-levels of resolution using a wavelet representation
US5835086A (en) * 1997-11-26 1998-11-10 Microsoft Corporation Method and apparatus for digital painting
US6434265B1 (en) * 1998-09-25 2002-08-13 Apple Computers, Inc. Aligning rectilinear images in 3D through projective registration and calibration
US6606105B1 (en) * 1999-12-22 2003-08-12 Adobe Systems Incorporated Layer enhancements in digital illustration system
US20040062420A1 (en) * 2002-09-16 2004-04-01 Janos Rohaly Method of multi-resolution adaptive correlation processing
US7075535B2 (en) * 2003-03-05 2006-07-11 Sand Codex System and method for exact rendering in a zooming user interface
US7554543B2 (en) * 2003-03-05 2009-06-30 Microsoft Corporation System and method for exact rendering in a zooming user interface
US20040264799A1 (en) * 2003-06-26 2004-12-30 Eastman Kodak Company Method of processing an image to form an image pyramid
US20060038827A1 (en) * 2004-08-23 2006-02-23 Hu Shane C Simple and robust color saturation adjustment for digital images
US20070003152A1 (en) * 2005-06-30 2007-01-04 Microsoft Corporation Multi-level image stack of filtered images

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120008011A1 (en) * 2008-08-08 2012-01-12 Crambo, S.A. Digital Camera and Associated Method
US20110298822A1 (en) * 2009-12-17 2011-12-08 Dassault Systemes Method, apparatus, and program for displaying an object on a computer screen
US8791958B2 (en) * 2009-12-17 2014-07-29 Dassault Systemes Method, apparatus, and program for displaying an object on a computer screen
KR101793017B1 (en) * 2009-12-17 2017-11-02 다솔 시스템므 Method, apparatus, and program for displaying an object on a computer screen
US20120159376A1 (en) * 2010-12-15 2012-06-21 Microsoft Corporation Editing data records associated with static images
CN102542011A (en) * 2010-12-15 2012-07-04 微软公司 Editing data records associated with static images
US20150348284A1 (en) * 2014-05-27 2015-12-03 Disney Enterprises, Inc. Example based editing of virtual terrain maps
US9805496B2 (en) * 2014-05-27 2017-10-31 Disney Enterprises, Inc. Example based editing of virtual terrain maps
US9779529B2 (en) * 2015-02-20 2017-10-03 Adobe Systems Incorporated Generating multi-image content for online services using a single image
CN105125228A (en) * 2015-10-10 2015-12-09 四川大学 Image processing method for chest X-ray DR (digital radiography) image rib inhibition
US20170140512A1 (en) * 2015-11-18 2017-05-18 Adobe Systems Incorporated Color-based geometric feature enhancement for 3d models
US10347052B2 (en) * 2015-11-18 2019-07-09 Adobe Inc. Color-based geometric feature enhancement for 3D models

Also Published As

Publication number Publication date
TW200830217A (en) 2008-07-16
US20110194757A1 (en) 2011-08-11
US8280179B2 (en) 2012-10-02
TWI377519B (en) 2012-11-21


Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERVIDEO, DIGITAL TECHNOLOGY CORPORATION, TAIWAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LIN, TSUNG-WEI;REEL/FRAME:019338/0243

Effective date: 20070420

AS Assignment

Owner name: COREL TW CORP., TAIWAN

Free format text: CHANGE OF NAME;ASSIGNOR:INTERVIDEO DIGITAL TECHNOLOGY CORPORATION;REEL/FRAME:021019/0417

Effective date: 20080421

AS Assignment

Owner name: COREL CORPORATION, CANADA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:COREL TW CORPORATION;REEL/FRAME:025075/0673

Effective date: 20100929

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION