CN101938604A - Image processing method and camera - Google Patents

Image processing method and camera

Info

Publication number: CN101938604A
Authority: CN (China)
Prior art keywords: image, area, angle, interfering object, camera
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Application number: CN200910088475XA
Other languages: Chinese (zh)
Inventors: 赵磊, 张睿
Current Assignee: Lenovo Beijing Ltd
Original Assignee: Lenovo Beijing Ltd
Application filed by Lenovo Beijing Ltd
Priority to CN200910088475XA priority Critical patent/CN101938604A/en
Publication of CN101938604A publication Critical patent/CN101938604A/en
Pending legal-status Critical Current

Abstract

The invention provides an image processing method and a camera. The method comprises the following steps: detecting a first trigger event and generating a first detection result; acquiring at least one first image at a first angle and a first position according to the first detection result; detecting a second trigger event and generating a second detection result; acquiring, according to the second detection result, a second image comprising a target object at a second angle and a second position, wherein the similarity between the at least one first image and the second image is greater than a threshold; processing the at least one first image and the second image to generate a final image, wherein the number of interfering objects in the final image is less than or equal to the number of interfering objects in whichever of the at least one first image and the second image contains the fewest interfering objects; and displaying the final image. With this scheme, users can obtain the desired image in time and the user experience is enhanced.

Description

Image processing method and camera
Technical field
The present invention relates to the technical field of image processing, and in particular to an image processing method and a camera.
Background art
Image processing devices such as digital cameras have become typical digital products in daily life. When people take photos with a digital camera, they usually want the photo to contain only the subject scene framed in the lens, and do not want interfering elements in the frame to affect that scene.
For example, a photographer uses a digital camera to photograph a subject at a scenic spot, but numerous pedestrians keep passing in front of or behind the subject and interfere with the intended scene, so the final photo often contains one or more pedestrians in addition to the subject and the scenery. To deal with this, on the one hand the photographer usually has to spend time waiting for the pedestrians around the subject to clear and then seize that moment, which wastes time. On the other hand, the subject may have to hold a pose and stay ready to be photographed until the right moment arrives, which is inconvenient.
Another solution to the above problem is for the user to load the captured photos into a computer and edit them with image processing software (for example, Photoshop), for instance deleting the other pedestrians beside the subject from the photo, but this requires the user to spend time on manual editing.
In the course of implementing the present invention, the inventors found at least the following technical problem in the prior art:
Neither of the above solutions allows the user to obtain the desired image in time (that is, a photo whose subject scene contains no interfering object) when interfering objects are present in the subject scene framed by the user's camera.
Summary of the invention
The technical problem to be solved by the present invention is to provide an image processing method and a camera, so as to solve the problem that it is difficult for a user of an existing camera to obtain a desired image in time.
To solve the above technical problem, embodiments of the present invention provide the following technical solutions:
An embodiment of the present invention provides an image processing method, applied in an electronic device having a camera function, comprising:
detecting a first trigger event and generating a first detection result;
acquiring at least one first image at a first angle and a first position according to the first detection result;
detecting a second trigger event and generating a second detection result;
acquiring, according to the second detection result, a second image comprising a target object at a second angle and a second position, wherein the similarity between the at least one first image and the second image is greater than a threshold;
processing the at least one first image and the second image to generate a final image, the number of interfering objects in the final image being less than or equal to the number of interfering objects in whichever of the at least one first image and the second image has the fewest interfering objects;
displaying the final image.
Preferably, the step of processing the at least one first image and the second image to generate the final image comprises:
determining, in the second image, a first area occupied by the interfering object;
obtaining, in the at least one first image, a second area that corresponds in position to the first area, has the same size, and contains no interfering object, and covering the first area with the second area to generate the final image.
Preferably, the step of determining, in the second image, the first area occupied by the interfering object is specifically:
dividing the at least one first image and the second image into grids, comparing the pixel values of each small grid cell, and taking the area with the larger pixel difference as the first area.
Preferably, the step of determining, in the second image, the first area occupied by the interfering object is specifically:
determining a subject area in the second image;
determining, according to the selected target object, a third area occupied by the target object within the subject area, the area of the subject area other than the third area being the first area.
Preferably, the step of processing the at least one first image and the second image to generate the final image comprises:
determining, in the second image, a third area occupied by the target object;
selecting, from the at least one first image, a background image without the interfering object;
covering, with the image of the third area, a fourth area in the background image that corresponds in position to the third area and has the same size, to generate the final image.
Preferably, the step of processing the at least one first image and the second image to generate the final image comprises:
if there is no interfering object in the second image, directly taking the second image as the final image.
Preferably, the step of acquiring at least one first image at the first angle and the first position comprises:
obtaining information on the first angle and the first position by a positioning device;
acquiring at least one first image from a network according to the information on the first angle and the first position and preset focus information.
An embodiment of the present invention also provides a camera, comprising:
a first button unit, configured to detect a first trigger event and generate a first detection result;
a first acquiring unit, configured to acquire at least one first image at a first angle and a first position according to the first detection result;
a second button unit, configured to detect a second trigger event and generate a second detection result;
a second acquiring unit, configured to acquire, according to the second detection result, a second image comprising a target object at a second angle and a second position;
a processing unit, configured to process the at least one first image and the second image to generate a final image, the number of interfering objects in the final image being less than or equal to the number of interfering objects in whichever of the at least one first image and the second image has the fewest interfering objects;
a display unit, configured to display the final image.
Preferably, the first acquiring unit comprises:
a positioning module, configured to obtain information on the first angle and the first position;
a setting module, configured to set focus information;
an acquiring submodule, configured to acquire at least one first image from a network according to the information on the first angle and the first position and the preset focus information.
Preferably, the positioning module is a GPS positioning module, and the setting module is a lens focal-length adjustment module.
Embodiments of the present invention have the following beneficial effect:
By processing the first image and the second image with the above scheme, the user can obtain the expected image in time when taking a picture.
Description of drawings
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of a process of processing the first image and the second image in the method shown in Fig. 1;
Fig. 3 is a further schematic diagram of the process of processing the first image and the second image in the method shown in Fig. 1;
Fig. 4 is a schematic diagram of another process of processing the first image and the second image in the method shown in Fig. 1;
Fig. 5 is a further schematic diagram of this other process of processing the first image and the second image in the method shown in Fig. 1;
Fig. 6 is a schematic structural diagram of a camera according to another embodiment of the present invention.
Embodiments
To make the technical problems to be solved, the technical solutions and the advantages of the embodiments of the present invention clearer, the embodiments are described in detail below with reference to the accompanying drawings and specific implementations.
Aiming at the prior-art problem that the user cannot obtain the desired image in time, embodiments of the present invention provide an image processing method and a camera that can supply the desired image to the user in time.
As shown in Fig. 1, an embodiment of the present invention provides an image processing method, comprising:
Step 11: detecting a first trigger event and generating a first detection result;
Step 12: acquiring at least one first image at a first angle and a first position according to the first detection result;
Step 13: detecting a second trigger event and generating a second detection result;
Step 14: acquiring, according to the second detection result, a second image comprising a target object at a second angle and a second position, wherein the similarity between the at least one first image and the second image is greater than a threshold. The threshold may be 100%, meaning that the second angle is the same angle as the first angle and the second position is the same position as the first position. The threshold may also be less than 100%, for example 95%, meaning that the second angle differs slightly from the first angle, or the second position differs slightly from the first position, or both the angle and the position differ;
Step 15: processing the at least one first image and the second image to generate a final image, the number of interfering objects in the final image being less than or equal to the number of interfering objects in whichever of the at least one first image and the second image has the fewest interfering objects;
Step 16: displaying the final image.
The method may be executed by any image processing device, such as a dedicated camera, a mobile phone with a camera function, or another video camera that can take pictures and process the captured photos.
When the method is applied to a camera, the first trigger event in step 11 may be the event of touching the camera shutter for the first time, and the first detection result produced by this trigger event is: start the continuous-shooting function. In step 12, the camera processor, according to this first detection result, continuously takes several pictures at the photographer's current first angle and first position, i.e. the at least one first image; the at least one first image may also be frames obtained by decomposing a captured picture sequence. These images can be stored in the camera's own memory, which may be a flash memory.
The second trigger event in step 13 may be the event of pressing the camera shutter a second time, and the second detection result produced by this trigger event is: start the camera's actual shooting function. In step 14, the camera processor, according to this second detection result, takes one photo containing the target object at the photographer's current second angle and second position, i.e. the second image. The second image may of course also contain interfering objects, but the number of interfering objects in the final image is less than or equal to the number of interfering objects in whichever of the at least one first image and the second image has the fewest interfering objects, so that the final image is exactly the image the user expects to obtain. It will be understood that the number of interfering objects in the image with the fewest interfering objects may be zero, that is, that image contains no interfering object at all. The second angle is preferably identical to the first angle and the second position identical to the first position; they may also differ slightly, in which case the similarity between the second image and the first image is less than 100%. The second image can likewise be stored in the camera's flash memory.
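The patent does not specify how the similarity between the first and second images is measured. As a minimal illustrative sketch only (the function names and the NumPy-based pixel metric are assumptions, not part of the patent), the camera could compare aligned grayscale frames and treat the fraction of near-identical pixels as the similarity score:

```python
import numpy as np

def image_similarity(img_a: np.ndarray, img_b: np.ndarray, pixel_tol: int = 10) -> float:
    """Fraction of pixels whose grayscale values differ by at most pixel_tol.

    img_a, img_b: grayscale frames of identical shape (H, W), dtype uint8.
    Returns a value in [0.0, 1.0]; 1.0 means the frames are (nearly) identical.
    """
    diff = np.abs(img_a.astype(np.int16) - img_b.astype(np.int16))
    return float(np.mean(diff <= pixel_tol))

def frames_match(first_image: np.ndarray, second_image: np.ndarray,
                 threshold: float = 0.95) -> bool:
    """True if the two frames were plausibly taken at the same angle and position."""
    return image_similarity(first_image, second_image) > threshold
```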
In step 15, the camera processor processes the at least one first image and the second image to generate the final image, and the final image is displayed on the camera screen.
All of the above steps are performed automatically by the camera, so that the user can obtain the expected image in time when taking a picture.
In a specific implementation of the method, step 15 may comprise:
Step 151: determining, in the second image, a first area occupied by an interfering object;
Step 152: obtaining, in the at least one first image, a second area that corresponds in position to the first area, has the same size, and contains no interfering object, and covering the first area with the second area to generate the final image.
As shown in Figs. 2 and 3, taking a single first image as an example, image M is the first image and image N is the second image; image M contains the target object A, the interfering object B and the background image C. In image N, since neither the photographer's nor the subject's (target object A's) position and angle has changed, the background image C naturally does not change either, but the position of the interfering object B differs between image M and image N. Therefore, the image area occupied by the interfering object B, i.e. the first area B, can be determined in image N (the second image); the first area B is then covered with the corresponding area image D (the second area) in image M (the first image), which yields the image desired by the user, containing the target object A and the background image C but not the interfering object B.
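A minimal sketch of this covering operation, assuming the first and second images are aligned NumPy arrays of the same size and that the interfering region has already been located as a rectangle (the helper name and the rectangular region are illustrative simplifications, not taken from the patent):

```python
import numpy as np

def cover_region(second_image: np.ndarray, first_image: np.ndarray, region) -> np.ndarray:
    """Replace the interfering region of the second image with the same-position,
    same-size patch (the 'second area') taken from a first image.

    region: (top, left, height, width) of the first area occupied by the interferer.
    """
    top, left, h, w = region
    final_image = second_image.copy()
    final_image[top:top + h, left:left + w] = first_image[top:top + h, left:left + w]
    return final_image
```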
In step 151 above, the step of determining the first area occupied by the interfering object may specifically be:
Step 1511: dividing the at least one first image and the second image into grids, comparing the pixel values of each small grid cell, and taking the area with the larger pixel difference as the first area.
In the scene shown in Figs. 2 and 3, the interfering object B moves while the user is taking pictures, so between the several pictures the user takes continuously and the final picture containing the target object A, the pixel difference of the image areas that the interfering object B passes through is relatively large. The area occupied by the interfering object B in the second image can therefore be determined by comparing the pixel differences of these images.
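An illustrative sketch of the grid-based comparison follows. The grid size and the per-cell mean-difference test are assumptions; the patent only states that cells with a larger pixel difference are taken as the first area.

```python
import numpy as np

def find_interference_cells(first_image: np.ndarray, second_image: np.ndarray,
                            cell: int = 16, diff_thresh: float = 20.0):
    """Split both grayscale frames into cell x cell tiles and return the tiles
    whose mean absolute pixel difference exceeds diff_thresh, i.e. the tiles
    most likely covered by a moving interfering object."""
    h, w = second_image.shape
    cells = []
    for top in range(0, h - h % cell, cell):
        for left in range(0, w - w % cell, cell):
            a = first_image[top:top + cell, left:left + cell].astype(np.float32)
            b = second_image[top:top + cell, left:left + cell].astype(np.float32)
            if np.mean(np.abs(a - b)) > diff_thresh:
                cells.append((top, left, cell, cell))
    return cells
```

Each returned cell can then be covered with the corresponding patch from a first image, for example using the cover_region sketch above.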
In step 151 above, the step of determining the first area occupied by the interfering object may also specifically comprise:
determining a subject area in the second image, the subject area containing both the area occupied by the target object and the area occupied by the interfering object;
determining, according to the selected target object, a third area occupied by the target object within the subject area, the area of the subject area other than the third area being the first area (i.e. the area occupied by the interfering object).
Here, the process of processing the at least one first image and the second image to generate the final image may also be as shown in Figs. 2 and 3. If the interfering object is stationary during shooting, step 15 may instead specifically comprise:
Step 153: determining, in the second image, a third area occupied by the target object;
Step 154: selecting, from the at least one first image, a background image without the interfering object;
Step 155: covering, with the image of the third area, a fourth area in the background image that corresponds in position to the third area and has the same size, to generate the final image.
As shown in Figs. 4 and 5, image P is the second image and image Y is an interference-free image selected from the at least one first image. The area occupied by the target object A in the second image is determined, and the image of the third area occupied by the target object A (the area A shown by the dashed line) is used to cover the corresponding area in image Y (the area A shown by the solid line), i.e. the fourth area, thereby obtaining the final image.
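A minimal sketch of this second strategy, assuming the third region occupied by the target object is available as a binary mask aligned with the interference-free background image (the mask-based formulation is an assumption; the patent describes the operation only as covering the fourth region):

```python
import numpy as np

def paste_target(second_image: np.ndarray, background_image: np.ndarray,
                 target_mask: np.ndarray) -> np.ndarray:
    """Copy the pixels of the target object (third area) from the second image
    onto the interference-free background at the same position (fourth area).

    target_mask: boolean array, True where the target object is located.
    """
    final_image = background_image.copy()
    final_image[target_mask] = second_image[target_mask]
    return final_image
```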
Of course, in a specific implementation of the method, if there is no interfering object in the second image in step 15, the second image is directly taken as the final image.
When the background of the photo the user intends to take contains many interfering objects, step 12 may further comprise:
obtaining information on the first angle and the first position from the camera's positioning device;
acquiring at least one first image from a network according to the information on the first angle and the first position and preset focus information, the similarity between this first image and the second image still being greater than the predetermined threshold.
The camera's positioning device may specifically be a GPS positioning module, which can obtain the photographer's current location, including the photographer's geographical position and shooting angle. In addition, the camera also has a wireless processing module, such as a WLAN wireless Internet access module, and obtains from the network, via WiFi, a first image taken at the current first angle and first position that does not contain the interfering objects. This first image can be stored in the camera's own memory, which may be a flash memory.
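The patent does not define the protocol used to fetch such a pre-existing, interference-free first image. As a loose sketch only, with the service URL, parameter names, and response format entirely hypothetical, a WLAN-equipped camera could issue an HTTP request keyed by the GPS position, the shooting angle, and the preset focal length:

```python
import requests  # assumes the camera firmware exposes an HTTP client

def fetch_reference_image(lat: float, lon: float, angle_deg: float, focal_mm: float) -> bytes:
    """Request a first image taken at roughly the same position and angle from a
    hypothetical image service; returns the raw JPEG bytes."""
    resp = requests.get(
        "https://example.com/reference-images",   # hypothetical endpoint
        params={"lat": lat, "lon": lon, "angle": angle_deg, "focal": focal_mm},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.content
```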
In the above embodiments of the present invention, the target object may be a target person and the interfering object may be a non-target person, a non-target person being a person different from the target person. The target person is determined by using a face-finding technique to locate faces (for example, after the photographer triggers the shutter, the camera can find the faces contained in the current image by the face-finding technique), so that the target person's face can be further confirmed; the faces of the non-target persons are then the interfering objects. After the interfering objects are determined, the first area occupied by the interfering objects is further determined in the manner described in step 1511 above. Of course, the target object may also be any other object desired by the user, and the interfering object may be any object the user does not want.
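The patent only says that a face-finding technique is used; it does not name a specific algorithm. One common realization (OpenCV's Haar-cascade detector; this choice is an assumption, not part of the patent) looks like this:

```python
import cv2

def detect_faces(image_bgr):
    """Return bounding boxes (x, y, w, h) of all faces found in the frame.
    Faces other than the confirmed target person's face can then be treated
    as interfering objects."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```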
As shown in Fig. 6, an embodiment of the present invention also provides a camera 60, comprising:
a first button unit 61, configured to detect a first trigger event and generate a first detection result;
a first acquiring unit 62, configured to acquire at least one first image at a first angle and a first position according to the first detection result;
a second button unit 63, configured to detect a second trigger event and generate a second detection result;
a second acquiring unit 64, configured to acquire, according to the second detection result, a second image comprising a target object at a second angle and a second position;
a processing unit 65, configured to process the at least one first image and the second image to generate a final image, the number of interfering objects in the final image being less than or equal to the number of interfering objects in whichever of the at least one first image and the second image has the fewest interfering objects;
a display unit 66, configured to display the final image.
The camera processes the images it captures in the same way as the process shown in Fig. 1 and described above. Specifically, the first button unit 61, the first acquiring unit 62, the second button unit 63 and the second acquiring unit 64 can all be implemented by the camera shutter, the shutter supporting both a first light-touch trigger and a second actual (full-press) trigger; the processing unit 65 may be the camera processor, and the display unit 66 may be the camera display screen.
In processing the first image and the second image, the processing unit 65 may specifically comprise:
a first determining module, configured to determine, in the second image, a first area occupied by an interfering object;
a first image synthesizing unit, configured to obtain, in the at least one first image, a second area that corresponds in position to the first area, has the same size, and contains no interfering object, and to cover the first area with the second area to generate the final image.
The specific process by which the first determining module and the first image synthesizing unit implement this function is the process shown in Figs. 2 and 3 and described above, and is not repeated here.
When some interfering objects are moving, the first determining module, in determining the first area occupied by the interfering object, may divide the at least one first image and the second image into grids, compare the pixel values of each small grid cell, and take the area with the larger pixel difference as the first area.
Alternatively, a subject area may be determined in the second image;
according to the target object selected by the user, a third area occupied by the target object is determined in the second image, and the area of the subject area other than the third area is the first area.
If the interfering object is stationary during shooting, the processing unit 65, in processing the first image and the second image, may instead specifically comprise:
a second determining module, configured to determine, in the second image, a third area occupied by the target object;
a second image synthesizing unit, configured to select, from the at least one first image, a background image without the interfering object, and to cover, with the image of the third area, a fourth area in the background image corresponding in position to the third area, to generate the final image.
Of course, after obtaining the second image containing the target object, the processing unit 65 may first judge whether the second image contains an interfering object; if not, the second image is directly output as the final image and displayed on the camera display screen.
The camera automatically processes the at least one first image and the second image containing the target object, so that the user can obtain the expected image in time when taking a picture.
In particular, when the background of the photo the user intends to take contains many interfering objects, the first acquiring unit 62 may further comprise:
a positioning module, configured to obtain information on the first angle and the first position;
a setting module, configured to set focus information;
an acquiring submodule, configured to acquire at least one first image from the network according to the information on the first angle and the first position and the preset focus information, the similarity between this first image and the second image still being greater than the predetermined threshold.
The camera's positioning device may specifically be a GPS (Global Positioning System) positioning module, which can obtain the photographer's current location, including the photographer's geographical position and shooting angle. In addition, the camera also has a wireless processing module, such as a WLAN wireless Internet access module, and obtains from the network, via WiFi, a first image taken at the current first angle and first position that does not contain the interfering objects. This first image can be stored in the camera's own memory, which may be a flash memory.
In the above embodiments of the present invention, the target object may be a target person and the interfering object may be a non-target person, a non-target person being a person different from the target person. The target person is determined by using a face-finding technique to locate faces, so that the target person's face can be further confirmed; the faces of the non-target persons are then the interfering objects. After the interfering objects are determined, the first area occupied by the interfering objects, or the third area occupied by the target object in the second image, is further determined in the manner described in step 1511 above. Face-finding is a known technique in the current camera field and is not described here. Of course, the target object may also be any other object desired by the user, and the interfering object may be any object the user does not want.
In summary, the camera embodiment of the present invention acquires the first image and the second image and processes them automatically, without requiring the user's involvement, so that the user can obtain the desired image in time, which improves the user experience.
The above are preferred embodiments of the present invention. It should be noted that those skilled in the art may make further improvements and modifications without departing from the principle of the present invention, and such improvements and modifications shall also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. An image processing method, applied in an electronic device having a camera function, characterized by comprising:
detecting a first trigger event and generating a first detection result;
acquiring at least one first image at a first angle and a first position according to the first detection result;
detecting a second trigger event and generating a second detection result;
acquiring, according to the second detection result, a second image comprising a target object at a second angle and a second position, wherein the similarity between the at least one first image and the second image is greater than a threshold;
processing the at least one first image and the second image to generate a final image, the number of interfering objects in the final image being less than or equal to the number of interfering objects in whichever of the at least one first image and the second image has the fewest interfering objects;
displaying the final image.
2. The image processing method according to claim 1, characterized in that the step of processing the at least one first image and the second image to generate the final image comprises:
determining, in the second image, a first area occupied by the interfering object;
obtaining, in the at least one first image, a second area that corresponds in position to the first area, has the same size, and contains no interfering object, and covering the first area with the second area to generate the final image.
3. The image processing method according to claim 2, characterized in that the step of determining, in the second image, the first area occupied by the interfering object is specifically:
dividing the at least one first image and the second image into grids, comparing the pixel values of each small grid cell, and taking the area with the larger pixel difference as the first area.
4. The image processing method according to claim 2, characterized in that the step of determining, in the second image, the first area occupied by the interfering object is specifically:
determining a subject area in the second image;
determining, according to the selected target object, a third area occupied by the target object within the subject area, the area of the subject area other than the third area being the first area.
5. The image processing method according to claim 1, characterized in that the step of processing the at least one first image and the second image to generate the final image comprises:
determining, in the second image, a third area occupied by the target object;
selecting, from the at least one first image, a background image without the interfering object;
covering, with the image of the third area, a fourth area in the background image that corresponds in position to the third area and has the same size, to generate the final image.
6. The image processing method according to claim 1, characterized in that the step of processing the at least one first image and the second image to generate the final image comprises:
if there is no interfering object in the second image, directly taking the second image as the final image.
7. The image processing method according to claim 1, characterized in that the step of acquiring at least one first image at the first angle and the first position comprises:
obtaining information on the first angle and the first position by a positioning device;
acquiring at least one first image from a network according to the information on the first angle and the first position and preset focus information.
8. A camera, characterized by comprising:
a first button unit, configured to detect a first trigger event and generate a first detection result;
a first acquiring unit, configured to acquire at least one first image at a first angle and a first position according to the first detection result;
a second button unit, configured to detect a second trigger event and generate a second detection result;
a second acquiring unit, configured to acquire, according to the second detection result, a second image comprising a target object at a second angle and a second position;
a processing unit, configured to process the at least one first image and the second image to generate a final image, the number of interfering objects in the final image being less than or equal to the number of interfering objects in whichever of the at least one first image and the second image has the fewest interfering objects;
a display unit, configured to display the final image.
9. The camera according to claim 8, characterized in that the first acquiring unit comprises:
a positioning module, configured to obtain information on the first angle and the first position;
a setting module, configured to set focus information;
an acquiring submodule, configured to acquire at least one first image from a network according to the information on the first angle and the first position and the preset focus information.
10. The camera according to claim 9, characterized in that the positioning module is a Global Positioning System (GPS) positioning module, and the setting module is a lens focal-length adjustment module.
CN200910088475XA 2009-07-02 2009-07-02 Image processing method and camera Pending CN101938604A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200910088475XA CN101938604A (en) 2009-07-02 2009-07-02 Image processing method and camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN200910088475XA CN101938604A (en) 2009-07-02 2009-07-02 Image processing method and camera

Publications (1)

Publication Number Publication Date
CN101938604A true CN101938604A (en) 2011-01-05

Family

ID=43391711

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200910088475XA Pending CN101938604A (en) 2009-07-02 2009-07-02 Image processing method and camera

Country Status (1)

Country Link
CN (1) CN101938604A (en)

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103188434A (en) * 2011-12-31 2013-07-03 联想(北京)有限公司 Method and device of image collection
CN103079033B (en) * 2012-12-28 2016-05-04 东莞宇龙通信科技有限公司 Photographic method and device
CN103079033A (en) * 2012-12-28 2013-05-01 东莞宇龙通信科技有限公司 Photographing method and device
CN104751500A (en) * 2013-12-31 2015-07-01 厦门美图网科技有限公司 Quick image inpainting method
CN104751500B (en) * 2013-12-31 2018-10-19 厦门美图网科技有限公司 A kind of method of quick reparation image
CN104954656A (en) * 2014-03-24 2015-09-30 联想(北京)有限公司 Method and device for information processing
CN104954656B (en) * 2014-03-24 2018-08-31 联想(北京)有限公司 A kind of information processing method and device
CN105472241A (en) * 2015-11-20 2016-04-06 努比亚技术有限公司 Image splicing method and mobile terminals
CN105472241B (en) * 2015-11-20 2019-03-22 努比亚技术有限公司 Image split-joint method and mobile terminal
WO2017166726A1 (en) * 2016-03-31 2017-10-05 北京小米移动软件有限公司 Intelligent image capturing method and device
CN106331486A (en) * 2016-08-25 2017-01-11 珠海市魅族科技有限公司 Image processing method and system
CN106375662A (en) * 2016-09-22 2017-02-01 宇龙计算机通信科技(深圳)有限公司 Photographing method and device based on double cameras, and mobile terminal
WO2018053906A1 (en) * 2016-09-22 2018-03-29 宇龙计算机通信科技(深圳)有限公司 Dual camera-based shooting method and device, and mobile terminal
CN106375662B (en) * 2016-09-22 2019-04-12 宇龙计算机通信科技(深圳)有限公司 A kind of image pickup method based on dual camera, device and mobile terminal
CN106778901A (en) * 2016-12-30 2017-05-31 广州视源电子科技股份有限公司 Indoor article loses reminding method and device
CN106791449B (en) * 2017-02-27 2020-02-11 努比亚技术有限公司 Photo shooting method and device
CN106791449A (en) * 2017-02-27 2017-05-31 努比亚技术有限公司 Method, photo taking and device
WO2019015120A1 (en) * 2017-07-17 2019-01-24 华为技术有限公司 Image processing method and terminal
CN109952758A (en) * 2017-07-17 2019-06-28 华为技术有限公司 A kind of method and terminal of image procossing
US11350043B2 (en) 2017-07-17 2022-05-31 Huawei Technologies Co., Ltd. Image processing method and terminal
CN107844765A (en) * 2017-10-31 2018-03-27 广东欧珀移动通信有限公司 Photographic method, device, terminal and storage medium
CN108447105A (en) * 2018-02-02 2018-08-24 微幻科技(北京)有限公司 A kind of processing method and processing device of panoramic picture
CN108924423A (en) * 2018-07-18 2018-11-30 曾文斌 A method of eliminating interfering object in the picture photo of fixed camera position
CN111568199A (en) * 2020-02-28 2020-08-25 佛山市云米电器科技有限公司 Method and system for identifying water receiving container and storage medium
CN111568199B (en) * 2020-02-28 2023-11-07 佛山市云米电器科技有限公司 Water receiving container identification method, system and storage medium
CN112399079A (en) * 2020-10-30 2021-02-23 维沃移动通信有限公司 Shooting method, shooting device, electronic equipment and readable storage medium

Similar Documents

Publication Publication Date Title
CN101938604A (en) Image processing method and camera
CN104967803B (en) A kind of video recording method and device
US9674395B2 (en) Methods and apparatuses for generating photograph
EP3063730B1 (en) Automated image cropping and sharing
KR101612727B1 (en) Method and electronic device for implementing refocusing
US11158027B2 (en) Image capturing method and apparatus, and terminal
US9973677B2 (en) Refocusable images
WO2018040180A1 (en) Photographing method and apparatus
WO2016115805A1 (en) Shooting method, shooting device, mobile terminal and computer storage medium
CN112822412A (en) Exposure method and electronic apparatus
CN113014798A (en) Image display method and device and electronic equipment
CN112637500A (en) Image processing method and device
CN114125268A (en) Focusing method and device
CN112911059B (en) Photographing method and device, electronic equipment and readable storage medium
WO2016011881A1 (en) Photographing process remaining time reminder method and system
CN113866782A (en) Image processing method and device and electronic equipment
CN113709368A (en) Image display method, device and equipment
KR101094648B1 (en) Auto Photograph Robot for Taking a Composed Picture and Method Thereof
CN105467741B (en) A kind of panorama photographic method and terminal
CN112219218A (en) Method and electronic device for recommending image capture mode
CN112653841B (en) Shooting method and device and electronic equipment
CN106488128B (en) Automatic photographing method and device
CN112153291B (en) Photographing method and electronic equipment
CN111917989B (en) Video shooting method and device
KR20060130647A (en) Methods and apparatuses for formatting and displaying content

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20110105