US20170103559A1 - Image Processing Method And Electronic Apparatus With Image Processing Mechanism - Google Patents

Info

Publication number
US20170103559A1
Authority
US
United States
Prior art keywords
image
depth value
image processing
processing device
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/384,310
Inventor
Cheng-Che Chan
Cheng-Che Chen
Ding-Yun Chen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Taiwan Semiconductor Manufacturing Co., Ltd. (TSMC)
Original Assignee
MediaTek Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by MediaTek Inc filed Critical MediaTek Inc
Priority to US15/384,310
Publication of US20170103559A1
Assigned to XUESHAN TECHNOLOGIES INC. reassignment XUESHAN TECHNOLOGIES INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MEDIATEK INC.
Assigned to TAIWAN SEMICONDUCTOR MANUFACTURING COMPANY, LTD. reassignment TAIWAN SEMICONDUCTOR MANUFACTURING COMPANY, LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: XUESHAN TECHNOLOGIES INC.
Legal status: Abandoned (current)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00: 2D [Two Dimensional] image generation
    • G06T 11/60: Editing figures and text; Combining figures or text
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/10: Geometric effects
    • G06T 15/20: Perspective computation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/50: Depth or shape recovery
    • G06T 7/55: Depth or shape recovery from multiple images


Abstract

An image processing method comprising: (a) acquiring a first depth value for a first object in a first image; and (b) altering an image effect for the first object according to the first depth value when the first object is pasted onto a second image.

Description

  • CROSS REFERENCE TO RELATED PATENT APPLICATION(S)
  • The present disclosure is a continuation of U.S. patent application Ser. No. 14/791,273, filed on 3 Jul. 2015, which is incorporated by reference in its entirety.
  • TECHNICAL FIELD
  • The present disclosure relates to an image processing method and an electronic apparatus with an image processing mechanism, and particularly relates to an image processing method and an electronic apparatus which can automatically process an image according to depth values.
  • BACKGROUND
  • Sometimes, a user may desire to alter an image after it is captured. For example, the user may want to paste an image of himself onto an image of a place he has never been. Alternatively, the user may want to paste an image of a piece of furniture onto an image of a room to see whether the furniture matches that room.
  • However, several steps are needed to complete this process. First, the user must copy his image and paste it onto the desired image. Second, the user must manually alter the location and the size of the pasted image. Since the user may not remember the real distances and sizes of the objects in the desired image, the altered image may look unnatural.
  • Take FIG. 1 for example, which is a schematic diagram illustrating a conventional image altering method. The user pastes a first object O1 (e.g., the user's image) in a first image I1 onto a second image I2, which comprises an image of a house (a second object O21) and an image of a tree (a second object O22), to generate a third image I3. However, the first object O1 is large because the camera was near the person when the photo was shot, while the second objects O21 and O22 are small because the camera was far from the house and the tree. Therefore, if the user does not alter the size of the first object O1 after pasting it onto the second image I2, the resultant third image I3 will look unnatural. Moreover, the user may not know the most suitable size and location for the first object O1 in the second image I2.
  • SUMMARY
  • One objective of the present disclosure is to provide an image processing method that automatically alters the object to be pasted.
  • Another objective of the present disclosure is to provide an electronic apparatus that automatically alters the object to be pasted.
  • One implementation of the present disclosure discloses an image processing method comprising: (a) acquiring a first depth value for a first object in a first image; and (b) altering an image effect for the first object according to the first depth value when the first object is pasted onto a second image.
  • Another implementation of the present disclosure discloses an electronic apparatus with an image processing mechanism, comprising: a depth detecting device, configured to acquire a first depth value for a first object in a first image; and an image processing device, configured to alter an image effect for the first object according to the first depth value when the first object is pasted onto a second image.
  • In view of the above-mentioned implementations, the image effect of the object to be pasted can be automatically altered based on its depth value. Accordingly, the images can be merged with less disharmony, and an optimized location and size for the object can be acquired.
  • These and other objectives of the present disclosure will no doubt become obvious to those of ordinary skill in the art after reading the following detailed description of the preferred implementation that is illustrated in the various figures and drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a schematic diagram illustrating a conventional image altering method.
  • FIG. 2 is a schematic diagram illustrating relations between real sizes for targets, distances for the targets, sizes for images of targets, and distances for images of targets.
  • FIG. 3-FIG. 8 are schematic diagrams illustrating image processing methods according to different implementations of the present disclosure.
  • FIG. 9 is a flow chart illustrating image processing methods corresponding to the implementations depicted in FIG. 3-FIG. 7.
  • FIG. 10 is a schematic diagram illustrating an image processing method according to another implementation of the present disclosure.
  • FIG. 11 is a flow chart illustrating an image processing method corresponding to the implementation depicted in FIG. 10.
  • FIG. 12 is a schematic diagram illustrating an image processing method according to other implementations of the present disclosure.
  • FIG. 13 is a block diagram illustrating an electronic apparatus with an image processing mechanism according to one implementation of the present disclosure.
  • DETAILED DESCRIPTION
  • FIG. 2 is a schematic diagram illustrating relations between real sizes for targets, distances for the targets, sizes for images of targets, and distances for images of targets. Based on FIG. 2, Equations (1)-(3) can be derived:
  • $\dfrac{z}{x} = \dfrac{y_1}{w}$ (Equation 1);  $\dfrac{z}{y} = \dfrac{y_2}{w}$ (Equation 2);  $y_2 = \dfrac{z}{y}\,w = \dfrac{x}{y}\,y_1$ (Equation 3)
  • Here, z is the real size of the target T, and x and y are the distances between the camera lens L and the target T when T is at two different locations. Further, w is the distance between the image sensor and the camera lens L, and y1 and y2 are the image sizes of the target T (i.e., the objects in the following description) at the two respective locations. Based on the above equations, once x and y are acquired, the ratio between y1 and y2 can be acquired as well. The implementations depicted in FIG. 3-FIG. 6 can be implemented based on Equations (1)-(3), but are not limited thereto.
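  • As a minimal illustration of Equation (3), the sketch below (in Python) computes the new image size from the two distances; it assumes a simple pinhole-camera model, and the function name and the example pixel values are ours, not part of the disclosure.

```python
def image_size_at_new_distance(y1: float, x: float, y: float) -> float:
    """Equation (3): y2 = (x / y) * y1.

    y1 is the image size of target T at distance x; the return value is
    its image size y2 when T is at distance y. The real size z and the
    sensor distance w cancel out, so only the distance ratio matters.
    """
    return (x / y) * y1

# Example: an object imaged 200 px tall at 100 cm appears 500 px tall
# when relocated to 40 cm (the depth values later used in FIG. 6).
print(image_size_at_new_distance(y1=200.0, x=100.0, y=40.0))  # 500.0
```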
  • Please refer to FIG. 3 and FIG. 4, which are schematic diagrams illustrating an image processing method according to one implementation of the present disclosure. As shown in FIG. 3, the first image I1 comprises a first object O1 (in this implementation, a person's image) and the second image I2 comprises a second object O2 (in this implementation, a desk's image). In the implementations of FIG. 3 and FIG. 4, the first object O1 is copied (or cut out) and pasted onto the second image I2. As shown in FIG. 4, if the first object O1 is pasted following a conventional method, its size remains the same, as shown in the third image I3 in FIG. 4.
  • However, in one implementation of the present disclosure, the depth values (e.g., depth maps) for the first image I1 and the second image I2 are acquired, and the size and the location of the first object O1 are altered according to the depth value of the first object O1. Take FIG. 4 for example: the first object O1 has a depth value of 100 cm and the second object O2 has a depth value of 50 cm. Therefore, the first object O1 in the second image I2 is supposed to fall behind the second object O2. Also, the size of the first object O1 can be acquired according to the above-mentioned Equations (1)-(3). Accordingly, the location and the size of the first object O1 in the third image I3 are automatically altered, whereby an altered third image I3′ is generated. Please note that the third image I3 illustrated in FIG. 4 is shown only for convenience of explanation; the third image I3 may never appear, and the user may directly acquire the altered third image I3′ after pasting the first object O1 from the first image I1 onto the second image I2.
  • Please note, the first object O1 in the altered third image I3′ of FIG. 4 is not covered by the second object O2 since it locates far behind the second object O2. However, if the first object O1 falls behind the second object O2 but is close to the second object O2, the first object O1 may be partially covered by the second object O2, as depicted in FIG. 5.
  • In one implementation, after the size and the location of the first object O1 are automatically altered according to its depth value, the user can further move the first object to another location. In such an implementation, the first object O1 is enlarged when it is moved from the location in the altered third image I3′ of FIG. 4 to a second location with a depth value smaller than its depth value in the first image I1. Likewise, the first object is shrunk when it is moved from that location to a second location with a depth value larger than its depth value in the first image I1.
  • The steps of the implementations of FIG. 3-FIG. 5 can be summarized as follows: the depth value of the first object O1 is merged into the second image I2 when the first object O1 is pasted from the first image I1 onto the second image I2. After that, the location with the first depth value in the second image I2 is acquired, and the first object O1 is provided at that location in the second image I2. Further, the first object O1 is resized according to its depth value.
  • In another implementation, the maximum depth value of the second image I2 is smaller than the depth value of the first object O1. In such an implementation, the first object O1 is enlarged according to a relation between the depth value of the first object O1 and the maximum depth value of the second image I2. Referring to FIG. 6, the first image I1 comprises a first object O1 with a depth value of 100 cm, and the second image I2 comprises a second object O2 with a depth value of 50 cm. Further, the maximum depth value of the second image I2 is 60 cm. In such a case, the first object O1 is located at a location having a depth value equal to or smaller than the maximum depth value of the second image I2. As shown in FIG. 6, the first object O1 in the altered third image I3′ is located at a location with a depth value of 40 cm, and its size is enlarged according to the depth values of the first object O1 in the first image I1 and in the altered third image I3′ (i.e., 100 cm and 40 cm).
  • In another similar implementation, the maximum depth value of the second image I2 is also smaller than the depth value of the first object O1. In such an implementation, a target second object in the second image is shrunk according to a relation between the depth value of the target second object and the depth value of the first object O1. Referring to FIG. 7, the first image I1 comprises a first object O1 with a depth value of 100 cm, and the second image I2 comprises a second object O2 with a depth value of 50 cm. Further, the maximum depth value of the second image I2 is 60 cm. In such a case, the second object O2 (the target second object) is shrunk according to a relation between its depth value and the depth value of the first object O1 (e.g., 50 cm and 100 cm). In one implementation, the first object O1 falls behind the shrunk second object O2 in the altered third image I3′, since it has a larger depth value than the shrunk second object O2. Also, the size of the first object O1 is shrunk based on a relation between the depth value of the second object O2 and the depth value of the first object O1, and based on the maximum depth value. In this way, the ratio between the size of the first object O1 and the size of the second object O2 can be optimized.
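  • The FIG. 6 handling can be sketched as a clamp-then-scale rule. In the sketch below, the choice of target depth (the scene maximum by default, or a caller-supplied value such as 40 cm) is an assumption for illustration; the disclosure states the relation but not how the target depth is picked.

```python
def placement_for_deep_object(obj_depth_cm, scene_max_depth_cm, target_depth_cm=None):
    """When the object's depth exceeds the second image's maximum depth,
    clamp it to a depth inside the scene and return the Equation (3)
    scale factor relative to its original depth."""
    if target_depth_cm is None:
        target_depth_cm = scene_max_depth_cm
    target_depth_cm = min(target_depth_cm, scene_max_depth_cm)
    scale = obj_depth_cm / target_depth_cm  # the x / y ratio of Equation (3)
    return target_depth_cm, scale

# FIG. 6 numbers: object at 100 cm, scene maximum 60 cm, placed at 40 cm,
# so the object is enlarged 2.5x.
print(placement_for_deep_object(100.0, 60.0, target_depth_cm=40.0))  # (40.0, 2.5)
```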
  • In one implementation, a lock mode and an unlock mode are provided and can be switched via a trigger operation. In the lock mode, the relation between the first object O1 and the second image I2 is fixed; thus, the first object O1 is resized according to the relation between the first object O1 and the second image I2 (or an object in the second image I2). For example, the first object O1 is resized according to its depth value and the depth value of the second object O2 in FIG. 4. In the unlock mode, the location and the size of the first object O1 can be altered without restriction. For example, in part (a) of FIG. 8, the size of the first object O1 can be manually altered by the user; in part (b) of FIG. 8, the location of the first object O1 can be manually altered by the user. In the implementation of FIG. 8, the location and the size of the first object are not automatically altered.
  • FIG. 9 and FIG. 11 show flow charts illustrating image processing methods; FIG. 9 corresponds to the implementations depicted in FIG. 3-FIG. 7. Please note that these flow charts are only examples for explanation and are not meant to limit the scope of the present disclosure.
  • FIG. 9 is a flow chart illustrating the image processing method in the lock mode, which comprises the following steps (a minimal code sketch follows the step list):
  • Step 901
  • Start.
  • Step 903
  • Select images having depth values. For example, the first image I1 and the second image I2 in FIG. 3.
  • Step 905
  • Paste an object. For example, as depicted in FIG. 3 and FIG. 4, paste the first object O1 onto the second image I2.
  • Step 907
  • Alter the location of the first object O1 automatically according to the depth value of the first object O1 after the first object O1 is pasted to the second image I2.
  • Step 909
  • Alter the size of the first object O1 automatically according to the depth value of the first object O1 after the first object O1 is pasted to the second image I2.
  • Step 911
  • Move the first object O1 to a desired location. As stated above, the first object O1 is enlarged when the first object O1 is moved to a location with a smaller depth value. Also, the first object is shrunk when the first object is moved to a location with a larger depth value.
  • Please note, the step 911 can be omitted in another implementation.
  • Step 913
  • Save the altered image. For example, save the altered third image I3′ in FIG. 4-FIG. 7.
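  • Below is the compact, runnable sketch promised above, covering steps 901-913. It assumes the second image is summarized by a depth map in centimeters and the first object by a single depth value and pixel height; loading and saving (steps 903 and 913) are reduced to plain data, and the 5 cm matching tolerance is an illustrative choice.

```python
import numpy as np

def lock_mode_paste(obj_depth_cm, obj_height_px, scene_depth_cm, moved_to_cm=None):
    # Step 907: place the object where the scene depth matches its own depth.
    candidates = np.argwhere(np.isclose(scene_depth_cm, obj_depth_cm, atol=5.0))
    location = tuple(candidates[0]) if len(candidates) else None

    # Step 911 (optional): the user moves the object to another depth;
    # nearer means enlarge, farther means shrink.
    placed_depth_cm = moved_to_cm if moved_to_cm is not None else obj_depth_cm

    # Step 909: resize by the distance ratio of Equation (3).
    scale = obj_depth_cm / placed_depth_cm
    return location, obj_height_px * scale

scene = np.tile(np.linspace(40.0, 120.0, 9), (4, 1))  # toy depth map in cm
# A 200 px object at 100 cm, moved to 50 cm, ends up 400 px tall.
print(lock_mode_paste(100.0, 200, scene, moved_to_cm=50.0))
```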
  • As mentioned above, an unlock mode is provided in another implementation, as depicted in FIG. 8. The unlock mode can be applied to an image without depth values, as well as to an image with depth values.
  • FIG. 10 is a schematic diagram illustrating an image processing method according to another implementation of the present disclosure. As depicted in FIG. 10, the first object O11 and the first object O12 in the first image I1 have a disparity value of 30 cm. In more detail, the first object O11 is 30 cm behind the first object O12. Also, the second object O2 in the second image I2 has a depth value of 50 cm. Therefore, the location of the first object O11 in the altered third image I3′ is determined by the disparity value between the first object O11 and the first object O12. In one implementation, the first object O11 in the altered third image I3′ has a depth value of 80 cm, since the second object O2 in the second image I2 has a depth value of 50 cm and the first object O11 has a disparity value of 30 cm. Also, the size of the object O11 in the altered third image I3′ can be altered based on a relation between the depth value of the object O11 in the first image I1 and its depth value in the altered third image I3′.
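  • A minimal sketch of the FIG. 10 rule follows, assuming the disparity between the two first objects is expressed directly as a depth offset in centimeters (as in the text, O11 sits 30 cm behind O12) and that O12 aligns with the second object O2; the 100 cm source depth for O11 in the example is hypothetical, since the disclosure does not state it.

```python
def pasted_depth_from_disparity(anchor_depth_cm: float, disparity_cm: float) -> float:
    """Depth of O11 in the altered third image: the depth of O2 (which
    anchors O12) plus O11's offset behind O12."""
    return anchor_depth_cm + disparity_cm

def resize_ratio(src_depth_cm: float, dst_depth_cm: float) -> float:
    """Equation (3) again: scale by the ratio of source depth to pasted depth."""
    return src_depth_cm / dst_depth_cm

depth = pasted_depth_from_disparity(50.0, 30.0)  # 80 cm, as in the text
print(depth, resize_ratio(100.0, depth))         # 80.0 1.25
```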
  • FIG. 11 is a flow chart illustrating the image processing method in the lock mode, corresponding to the implementation depicted in FIG. 10, and comprises the following steps:
  • Step 1201
  • Start.
  • Step 1203
  • Acquire an image with disparity values. For example, the first image I1 in FIG. 10.
  • Step 1205
  • Acquire another image. For example, the second image I2 in FIG. 10.
  • Step 1207
  • Paste the first object to another image. For example, paste the first object O11 to the second image I2.
  • Step 1209
  • Automatically move the first object O11 to a location determined by the disparity information.
  • Step 1211
  • Automatically alter the size of the first object according to its depth values in the original image (e.g., the first image I1) and in the altered image (e.g., the altered third image I3′).
  • Step 1213
  • Move the first object O11 to a desired location. As stated above, the first object O11 is enlarged when it is moved from the location stated in step 1209 to a second location with a depth value smaller than the first depth value. Also, the first object O11 is shrunk when it is moved from the location stated in step 1209 to a third location with a depth value larger than the first depth value.
  • Please note, the step 1213 can be omitted in another implementation.
  • Step 1215
  • Save the altered image. For example, save the altered third image I3′ in FIG. 10.
  • The image processing method provided by the present disclosure can further comprise altering image effects other than the position and size of the object. FIG. 12 is a schematic diagram illustrating an image processing method according to other implementations of the present disclosure. As depicted in FIG. 12, the first object O1 in the first image I1 has a depth value of 100 cm. Also, the second image I2 comprises a second object O21 with a depth value of 50 cm, and second objects O22 and O23 both with a depth value of 100 cm. As shown in FIG. 12, the defocus level in the second image I2 is higher for objects with larger depth values. Accordingly, if the first object O1 is pasted from the first image I1 onto the second image I2, its size and distance are altered based on its depth value, as mentioned above. Also, the defocus level of the first object O1 is altered based on its depth value. In more detail, the defocus level of the first object O1 is altered to be the same as that of the second objects O22 and O23, since the depth value of the first object O1 is the same as that of the second objects O22 and O23. Please note that this implementation is not limited to the case in which the depth value of the first object O1 is the same as that of the second objects O22 and O23. For example, if the second objects O22 and O23 both have a depth value of 100 cm but the first object O1 has a depth value of 80 cm, the defocus level of the first object O1 is still altered to be the same as that of the second objects O22 and O23, since the first object O1 and the second objects O22 and O23 are all located outside a focus range.
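  • The defocus matching just described can be sketched as follows: objects inside an assumed focus range stay sharp, and everything outside it receives one common defocus level, which is why an object at 80 cm is blurred like the 100 cm background objects. The 40-60 cm focus range, the blur strength, and the use of SciPy's Gaussian filter are illustrative choices, not part of the disclosure.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def defocus_sigma(depth_cm, focus_near_cm=40.0, focus_far_cm=60.0, blur_sigma=2.0):
    """Zero blur inside the focus range; one shared defocus level outside it."""
    return 0.0 if focus_near_cm <= depth_cm <= focus_far_cm else blur_sigma

def match_defocus(obj_rgb, depth_cm):
    sigma = defocus_sigma(depth_cm)
    if sigma == 0.0:
        return obj_rgb
    # Blur spatially only; leave the color-channel axis untouched.
    return gaussian_filter(obj_rgb, sigma=(sigma, sigma, 0.0))

obj = np.random.rand(32, 32, 3)
# 80 cm and 100 cm both fall outside the 40-60 cm focus range, so the
# pasted object gets the same defocus level as objects O22 and O23.
print(match_defocus(obj, 80.0).shape)  # (32, 32, 3)
```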
  • Based on the implementation of FIG. 12, other image effects, for example the sharpness, the color, and the brightness, can also be altered according to the depth value of the first object O1. Please note that the image processing method provided by the present disclosure may alter only an image effect other than the location and the size of the first object, leaving the location and the size unchanged. Accordingly, the image processing method provided by the present disclosure can be summarized as follows: an image processing method comprising: (a) acquiring a first depth value for a first object in a first image; and (b) altering image effect(s) (e.g., location, size, defocus level, sharpness, color, and brightness) for the first object according to the first depth value when the first object is pasted onto a second image.
  • FIG. 13 is a block diagram illustrating an electronic apparatus with an image processing mechanism according to one implementation of the present disclosure. As shown in FIG. 13, the electronic apparatus 1400 comprises a depth detecting device 1401 and an image processing device 1403. The depth detecting device 1401 is configured to acquire a first depth value for a first object in a first image I1. The image processing device 1403 is configured to alter an image effect for the first object according to the first depth value when the first object is pasted onto a second image I2. Please note that the first image I1 and the second image I2 can come from any image source: they can be images captured by the image sensor 1405, images stored in the memory device 1407, or images from a website. Accordingly, the image sensor 1405 and the memory device 1407 are not necessarily included in the electronic apparatus 1400.
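  • A structural sketch of FIG. 13 in Python follows, with classes standing in for blocks 1400, 1401, and 1403. The dictionary image format and the one-line scaling rule inside paste() are assumptions for illustration; a real implementation would alter location, size, and defocus as described above.

```python
class DepthDetectingDevice:           # block 1401
    def first_depth_value(self, image, obj_id):
        # e.g., read from a depth map captured alongside the image
        return image["depths"][obj_id]

class ImageProcessingDevice:          # block 1403
    def paste(self, obj_id, second_image, depth_value_cm):
        # Alter the image effect (here, only the size) per the depth value:
        # an object deeper than the scene reference is shrunk.
        reference_cm = min(second_image["depths"].values())
        return {"object": obj_id, "scale": reference_cm / depth_value_cm}

class ElectronicApparatus:            # block 1400
    def __init__(self):
        self.depth_detector = DepthDetectingDevice()
        self.image_processor = ImageProcessingDevice()

    def paste_object(self, obj_id, first_image, second_image):
        depth = self.depth_detector.first_depth_value(first_image, obj_id)
        return self.image_processor.paste(obj_id, second_image, depth)

apparatus = ElectronicApparatus()
first = {"depths": {"person": 100.0}}   # images may come from a sensor,
second = {"depths": {"desk": 50.0}}     # a memory device, or a website
print(apparatus.paste_object("person", first, second))  # scale 0.5
```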
  • In view of the above-mentioned implementations, the image effect of the object to be pasted can be automatically altered based on its depth value. Accordingly, the images can be merged with less disharmony, and an optimized location and size for the object can be acquired.
  • Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.

Claims (20)

What is claimed is:
1. An image processing method, comprising:
altering, by an image processing device of an electronic apparatus, an image effect for a first object according to a first depth value of the first object in a first image when the first object is pasted onto a second image to generate an altered third image; and
saving, by the image processing device, the altered third image in a memory device.
2. The image processing method of claim 1, further comprising:
merging, by the image processing device, the first depth value for the first object into the second image.
3. The image processing method of claim 1, wherein the altering of the image effect for the first object comprises:
acquiring, by the image processing device, a location with the first depth value in the second image; and
locating, by the image processing device, the first object at the location in the second image.
4. The image processing method of claim 3, further comprising:
resizing, by the image processing device, the first object according to the first depth value.
5. The image processing method of claim 3, further comprising:
enlarging, by the image processing device, the first object when the first object is moved from the location to a second location with a depth value smaller than the first depth value; and
shrinking, by the image processing device, the first object when the first object is moved from the location to a third location with a depth value larger than the first depth value.
6. The image processing method of claim 1, wherein the altering of the image effect for the first object comprises:
partially covering, by the image processing device, the first object with a second object having a second depth value smaller than the first depth value in the second image.
7. The image processing method of claim 1, further comprising:
acquiring, by a depth detecting device of the electronic apparatus, the first depth value for the first image;
acquiring, by the depth detecting device, a second depth value for the second image; and
enlarging, by the image processing device, a size of the first object according to a relation between the first depth value of the first object and a maximum depth value of the second image in an event that the maximum depth value of the second image is smaller than the first depth value of the first object.
8. The image processing method of claim 1, further comprising:
acquiring, by a depth detecting device of the electronic apparatus, the first depth value for the first image;
acquiring, by the depth detecting device, a second depth value for the second image; and
shrinking, by the image processing device, a size for a target second object in the second image, according to a relation between a depth value of the target second object and the first depth value of the first object, in an event that a maximum depth value of the second image is smaller than the first depth value of the first object.
9. The image processing method of claim 1, wherein the altering comprises altering, by the image processing device, the image effect for the first object according to a disparity value between the first object and another object in the first image when the first object is pasted onto the second image.
10. The image processing method of claim 1, wherein the image effect comprises one or more of a defocus level, a brightness, a sharpness, and a color.
11. An electronic apparatus with an image processing mechanism, comprising:
a memory device; and
an image processing device capable of altering an image effect for a first object according to a first depth value of the first object in a first image when the first object is pasted onto a second image to generate an altered third image, the image processing device further capable of saving the altered third image in the memory device.
12. The electronic apparatus of claim 11, wherein the image processing device is further capable of merging the first depth value for the first object into the second image.
13. The electronic apparatus of claim 11, further comprising:
a depth detecting device capable of acquiring the first depth value for the first object in the first image, the depth detecting device further capable of acquiring a second depth value for the second object in the second image,
wherein the image processing device is further capable of altering the image effect for the first object by performing operations comprising:
acquiring a location with the first depth value in the second image; and
locating the first object at the location in the second image.
14. The electronic apparatus of claim 13, wherein the image processing device is further capable of resizing the first object according to the first depth value when the first object is pasted onto the second image.
15. The electronic apparatus of claim 13, wherein, when the first object is pasted onto the second image, the image processing device is further capable of performing operations comprising:
enlarging the first object when the first object is moved from the location to a second location with a depth value smaller than the first depth value; and
shrinking the first object when the first object is moved from the location to a third location with a depth value larger than the first depth value.
16. The electronic apparatus of claim 11, wherein the image processing device is further capable of partially covering the first object with a second object having a second depth value smaller than the first depth value in the second image when the first object is pasted onto the second image.
17. The electronic apparatus of claim 11, further comprising:
a depth detecting device capable of acquiring the first depth value for the first object in the first image, the depth detecting device further capable of acquiring a second depth value for the second object in the second image,
wherein the image processing device is further capable of enlarging the size of the first object, according to a relation between the first depth value of the first object and a maximum depth value of the second image, in an event that the maximum depth value of the second image is smaller than the first depth value of the first object.
18. The electronic apparatus of claim 11, further comprising:
a depth detecting device capable of acquiring the first depth value for the first object in the first image, the depth detecting device further capable of acquiring a second depth value for the second object in the second image,
wherein the image processing device is further capable of shrinking a size for a target second object in the second image, according to a relation between a depth value of the target second object and the first depth value of the first object, in an event that a maximum depth value of the second image is smaller than the first depth value of the first object.
19. The electronic apparatus of claim 11, wherein the image processing device is further capable of altering the image effect for the first object according to a disparity value between the first object and another object in the first image when the first object is pasted onto the second image.
20. The electronic apparatus of claim 11, wherein the image effect comprises one or more of a defocus level, a brightness, a sharpness, and a color.
US15/384,310 2015-07-03 2016-12-19 Image Processing Method And Electronic Apparatus With Image Processing Mechanism Abandoned US20170103559A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/384,310 US20170103559A1 (en) 2015-07-03 2016-12-19 Image Processing Method And Electronic Apparatus With Image Processing Mechanism

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US14/791,273 US9569830B2 (en) 2015-07-03 2015-07-03 Image processing method and electronic apparatus with image processing mechanism
US15/384,310 US20170103559A1 (en) 2015-07-03 2016-12-19 Image Processing Method And Electronic Apparatus With Image Processing Mechanism

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US14/791,273 Continuation US9569830B2 (en) 2015-07-03 2015-07-03 Image processing method and electronic apparatus with image processing mechanism

Publications (1)

Publication Number Publication Date
US20170103559A1 true US20170103559A1 (en) 2017-04-13

Family

ID=57684353

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/791,273 Active US9569830B2 (en) 2015-07-03 2015-07-03 Image processing method and electronic apparatus with image processing mechanism
US15/384,310 Abandoned US20170103559A1 (en) 2015-07-03 2016-12-19 Image Processing Method And Electronic Apparatus With Image Processing Mechanism

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US14/791,273 Active US9569830B2 (en) 2015-07-03 2015-07-03 Image processing method and electronic apparatus with image processing mechanism

Country Status (2)

Country Link
US (2) US9569830B2 (en)
CN (1) CN106331472A (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9569830B2 (en) * 2015-07-03 2017-02-14 Mediatek Inc. Image processing method and electronic apparatus with image processing mechanism
CN109542307B (en) * 2018-11-27 2021-12-03 维沃移动通信(杭州)有限公司 Image processing method, device and computer readable storage medium


Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5339386A (en) * 1991-08-08 1994-08-16 Bolt Beranek And Newman Inc. Volumetric effects pixel processing
US7986916B2 (en) * 2005-05-20 2011-07-26 Innovision Research & Technology Plc Demodulation communication signals in a near field radio frequency (RF) communicator
US7797621B1 (en) * 2006-10-26 2010-09-14 Bank Of America Corporation Method and system for altering data during navigation between data cells
US7814407B1 (en) * 2006-10-26 2010-10-12 Bank Of America Corporation Method and system for treating data
US8213711B2 (en) * 2007-04-03 2012-07-03 Her Majesty The Queen In Right Of Canada As Represented By The Minister Of Industry, Through The Communications Research Centre Canada Method and graphical user interface for modifying depth maps
CN101610421B (en) * 2008-06-17 2011-12-21 华为终端有限公司 Video communication method, video communication device and video communication system
CN102938825B (en) * 2012-11-12 2016-03-23 小米科技有限责任公司 A kind ofly to take pictures and the method for video and device
CN103442181B (en) * 2013-09-06 2017-10-13 努比亚技术有限公司 A kind of image processing method and image processing equipment

Patent Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6069713A (en) * 1997-02-14 2000-05-30 Canon Kabushiki Kaisha Image editing method and apparatus and storage medium
US7476796B2 (en) * 2002-02-19 2009-01-13 Yamaha Corporation Image controlling apparatus capable of controlling reproduction of image data in accordance with event
US8417023B2 (en) * 2006-06-22 2013-04-09 Nikon Corporation Image playback device
US8351713B2 (en) * 2007-02-20 2013-01-08 Microsoft Corporation Drag-and-drop pasting for seamless image composition
US20110169825A1 (en) * 2008-09-30 2011-07-14 Fujifilm Corporation Three-dimensional display apparatus, method, and program
US8783554B2 (en) * 2010-08-31 2014-07-22 Toshiba Tec Kabushiki Kaisha Information reading apparatus, commodity sales information processing apparatus, and pasted object
US9189672B2 (en) * 2010-08-31 2015-11-17 Toshiba Tec Kabushiki Kaisha Information reading apparatus, commodity sales information processing apparatus, and pasted object
US20120314077A1 (en) * 2011-06-07 2012-12-13 Verizon Patent And Licensing Inc. Network synchronized camera settings
US20130142452A1 (en) * 2011-12-02 2013-06-06 Sony Corporation Image processing device and image processing method
US20140132725A1 (en) * 2012-11-13 2014-05-15 Institute For Information Industry Electronic device and method for determining depth of 3d object image in a 3d environment image
US20150062370A1 (en) * 2013-08-30 2015-03-05 Qualcomm Incorporated Method and apparatus for generating an all-in-focus image
US20160180575A1 (en) * 2013-09-13 2016-06-23 Square Enix Holdings Co., Ltd. Rendering apparatus
US20150215602A1 (en) * 2014-01-29 2015-07-30 Htc Corporation Method for ajdusting stereo image and image processing device using the same
US20160014400A1 (en) * 2014-07-09 2016-01-14 Samsung Electronics Co., Ltd. Multiview image display apparatus and multiview image display method thereof
US20160191908A1 (en) * 2014-12-30 2016-06-30 Au Optronics Corporation Three-dimensional image display system and display method
US20160198097A1 (en) * 2015-01-05 2016-07-07 GenMe, Inc. System and method for inserting objects into an image or sequence of images
US20160261781A1 (en) * 2015-03-08 2016-09-08 Mediatek Inc. Electronic device having dynamically controlled flashlight for image capturing and related control method
US20160321515A1 (en) * 2015-04-30 2016-11-03 Samsung Electronics Co., Ltd. System and method for insertion of photograph taker into a photograph
US20170004608A1 (en) * 2015-07-03 2017-01-05 Mediatek Inc. Image processing method and electronic apparatus with image processing mechanism
US9569830B2 (en) * 2015-07-03 2017-02-14 Mediatek Inc. Image processing method and electronic apparatus with image processing mechanism
US20170061677A1 (en) * 2015-08-25 2017-03-02 Samsung Electronics Co., Ltd. Disparate scaling based image processing device, method of image processing, and electronic system including the same
US20170127038A1 (en) * 2015-11-04 2017-05-04 Mediatek Inc. Method for performing depth information management in an electronic device, and associated apparatus and associated computer program product

Also Published As

Publication number Publication date
US20170004608A1 (en) 2017-01-05
CN106331472A (en) 2017-01-11
US9569830B2 (en) 2017-02-14


Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: XUESHAN TECHNOLOGIES INC., CANADA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MEDIATEK INC.;REEL/FRAME:056593/0167

Effective date: 20201223

AS Assignment

Owner name: TAIWAN SEMICONDUCTOR MANUFACTURING COMPANY, LTD., TAIWAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:XUESHAN TECHNOLOGIES INC.;REEL/FRAME:061789/0686

Effective date: 20211228