CN104506768A - Method and device for image selection as well as terminal - Google Patents


Publication number
CN104506768A
CN104506768A (application CN201410719702.5A)
Authority
CN
China
Prior art keywords
depth information
image
pixel
user
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410719702.5A
Other languages
Chinese (zh)
Inventor
蓝和
孙剑波
张弓
张学勇
韦怡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201410719702.5A priority Critical patent/CN104506768A/en
Publication of CN104506768A publication Critical patent/CN104506768A/en
Pending legal-status Critical Current

Abstract

The invention relates to the field of image processing and provides a method and a device for image selection. The method comprises the following steps: obtaining depth information for each pixel in an image to be selected; receiving depth information input by a user and searching the image for the pixels corresponding to that depth information; and forming an image selection area corresponding to the input depth information from the pixels found. The method and device solve the prior-art problems that an image had to be transferred to another image-processing terminal and the required region selected manually, which made the operation cumbersome and left the selection accuracy unguaranteed, thereby achieving the aims of simple operation and high selection accuracy.

Description

Image selection method, device and terminal
Technical field
The invention belongs to the field of image processing, and in particular relates to an image selection method, device and terminal.
Background
With the development of electronic technology, mobile phones have derived a range of new functions on top of their traditional communication role, such as recording, calculating, playing music, listening to the radio, and taking photos or video, greatly improving convenience for users.
When taking photos with a mobile phone, it is often necessary to splice together parts of different images. For example, to present a portrait from image A against a background image B, the current common practice is to load image A and background image B into image-editing software capable of image selection: the images are stored on a computer, the portrait in image A is selected with a selection tool in a conventional image editor such as Photoshop, the selection is pasted over background image B, and the layers are flattened to complete the fusion of the images.
In the course of making the present invention, the inventors found that the prior art has at least the following problems: because the images must be transferred to another image-processing terminal and the required region selected manually, the operation is cumbersome and the selection accuracy cannot be guaranteed.
Summary of the invention
In view of this, embodiments of the present invention provide an image selection method, device and terminal, to solve the prior-art problems that an image must be transferred to another image-processing terminal and the required region selected manually, making the operation cumbersome and the selection accuracy unguaranteed.
In a first aspect, an embodiment of the invention provides an image selection method, the method comprising:
obtaining depth information for each pixel in an image to be selected;
receiving depth information input by a user, and searching the image for the pixels corresponding to the input depth information;
forming an image selection area corresponding to the input depth information from the pixels found.
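The three claimed steps reduce to a mask operation over a per-pixel depth map. The sketch below is illustrative only: the function and parameter names are invented here, and the band-around-the-input rule is just one of the depth-to-pixel correspondences the description sets out later.

```python
import numpy as np

def select_by_depth(depth_map, user_depth, radius):
    """Return a boolean selection mask: True where a pixel's depth
    lies within `radius` of the depth the user supplied."""
    depth_map = np.asarray(depth_map, dtype=float)
    return np.abs(depth_map - user_depth) <= radius

# Toy depth map: a subject about 2 m away against a background near 8 m.
depth = np.array([[2.0, 2.1, 8.0],
                  [1.9, 2.0, 8.2]])
mask = select_by_depth(depth, user_depth=2.0, radius=0.5)
# mask keeps the four foreground pixels and rejects the background column
```

The mask is then the "image selection area": it can be drawn as an outline, or used to hide or copy the selected pixels.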
With reference to the first aspect, in a first possible implementation of the first aspect, the step of obtaining the depth information of each pixel in the image to be selected specifically comprises:
obtaining a first image and a second image from a dual camera;
obtaining the depth information of the pixels in the image from the disparity of the same object between the first and second images and the positions of the two cameras.
With reference to the first aspect, in a second possible implementation of the first aspect, the step of receiving the depth information input by the user and searching the image for the corresponding pixels specifically comprises:
receiving a touch area input by the user and taking the mean depth of the pixels in the touch area as the input depth, or receiving a depth value entered directly by the user as the input depth;
determining a depth information range from the input depth;
searching the image to be selected, according to the depth information range, for the pixels whose depth falls within that range.
With reference to the second possible implementation of the first aspect, in a third possible implementation of the first aspect, the step of determining the depth information range from the input depth specifically comprises:
taking the input depth as the centre and determining the range by a set depth radius; or
taking the input depth as a threshold and defining the range as all depths greater than it; or
taking the input depth as a threshold and defining the range as all depths less than or equal to it.
With reference to the third possible implementation of the first aspect, in a fourth possible implementation of the first aspect, the size of the depth radius can be adjusted according to the object to be selected in the image.
With reference to the first aspect, in a fifth possible implementation of the first aspect, the step of forming the image selection area corresponding to the input depth information from the pixels found comprises:
judging whether the region formed by the pixels found comprises multiple regions;
if it does, receiving a confirmation instruction from the user selecting one or more of those regions.
With reference to the first aspect or any of its first to fifth possible implementations, in a sixth possible implementation of the first aspect, after the step of forming the image selection area, the method further comprises:
fusing the selection area of the image with another image.
In a second aspect, an embodiment of the invention provides an image selection device, the device comprising:
a depth information acquisition unit, for obtaining depth information for each pixel in an image to be selected;
a pixel search unit, for receiving depth information input by a user and, according to the per-pixel depth information obtained by the depth information acquisition unit, searching the image for the corresponding pixels;
a selection area determination unit, for forming, from the pixels found by the pixel search unit, the image selection area corresponding to the input depth information.
With reference to the second aspect, in a first possible implementation of the second aspect, the depth information acquisition unit comprises:
an image acquisition subunit, for obtaining a first image and a second image from a dual camera;
a depth information acquisition subunit, for obtaining the per-pixel depth information from the disparity of the same object between the first and second images obtained by the image acquisition subunit and the positions of the two cameras.
With reference to the second aspect, in a second possible implementation of the second aspect, the pixel search unit comprises:
a computation subunit, for receiving a touch area input by the user and taking the mean depth of its pixels as the input depth, or receiving a depth value entered directly by the user as the input depth;
a depth information range determination subunit, for determining the depth information range from the input depth computed by the computation subunit;
a pixel search subunit, for searching the image to be selected, according to the range determined by the depth information range determination subunit, for the pixels whose depth falls within that range.
With reference to the second possible implementation of the second aspect, in a third possible implementation of the second aspect, the depth information range determination subunit comprises:
a first determination module, for taking the input depth as the centre and determining the range by a set depth radius; or
a second determination module, for taking the input depth as a threshold and defining the range as all depths greater than it; or
a third determination module, for taking the input depth as a threshold and defining the range as all depths less than or equal to it.
With reference to the third possible implementation of the second aspect, in a fourth possible implementation of the second aspect, the size of the depth radius can be adjusted according to the object to be selected in the image.
With reference to the second aspect, in a fifth possible implementation of the second aspect, the selection area determination unit comprises:
a region judgment subunit, for judging whether the region formed by the pixels found comprises multiple regions;
a confirmation instruction receiving subunit, for receiving, if it does, a confirmation instruction from the user selecting one or more of those regions.
With reference to the second aspect or any of its first to fifth possible implementations, in a sixth possible implementation of the second aspect, the device further comprises:
a fusion unit, for fusing the selection area of the image with another image.
In a third aspect, an embodiment of the invention provides a terminal comprising the image selection device of the second aspect.
With reference to the third aspect, in a first possible implementation of the third aspect, the terminal is a smartphone or a tablet computer.
Because embodiments of the invention receive depth information input by the user, search the image for the corresponding pixels according to it, and determine the required selected region from the pixels found, an image region corresponding to the input depth can be selected quickly and effectively. This overcomes the prior-art problems of having to transfer the image to another image-processing terminal and selecting the required content manually, with cumbersome operation and unguaranteed accuracy, and thereby achieves the aims of simple operation and high selection accuracy.
Brief description of the drawings
Fig. 1 is a flowchart of the implementation of the image selection method provided by the first embodiment of the invention;
Fig. 2 is a flowchart of the implementation of the image selection method provided by the second embodiment of the invention;
Fig. 3 is a flowchart of the implementation of the image selection method provided by the third embodiment of the invention;
Fig. 4 is a flowchart of the implementation of the image selection method provided by the fourth embodiment of the invention;
Fig. 5 is a structural block diagram of the image selection device provided by the fifth embodiment of the invention.
Detailed description
To make the objects, technical solutions and advantages of the embodiments of the invention clearer, the embodiments are elaborated below with reference to the drawings and examples. It should be understood that the specific embodiments described here serve only to explain the invention and are not intended to limit it.
In the embodiments of the invention, to overcome the prior-art problems that selecting an object in an image is cumbersome and the selection accuracy cannot be guaranteed (for example, selection within an image is hard to operate on a smartphone), an image selection method is proposed. The method comprises: obtaining depth information for each pixel in an image to be selected; receiving depth information input by a user and searching the image for the pixels corresponding to it; and forming the image selection area corresponding to the input depth information from the pixels found. The aims of simple operation and high selection accuracy are thereby achieved. The method is described below with reference to specific embodiments.
Embodiment one:
Fig. 1 shows the implementation flow of the image selection method provided by the first embodiment of the invention, detailed as follows:
In step S101, depth information is obtained for each pixel in the image to be selected.
Specifically, the image to be selected is the object on which the image selection method of this embodiment operates. The embodiment selects, within that image, the information required by the user. The required information may be the main subject, for example a person in the image; it may equally be background information, for example the scenery extracted from a landscape picture.
The number of pixels in the image to be selected depends on its resolution: of two images of the same size, the one with the higher resolution has more pixels for which depth must be computed.
The depth information of the image is, for each pixel of an object in the scene, the distance of that object point from the imaging terminal; the objects may be people, trees, cars, houses or any other visible thing that reflects light into the camera. The depth information can be obtained by any of the current methods for computing image depth: for example, the depth of each object point can be obtained by laser ranging, by zoom-based ranging, or by multi-baseline stereo imaging. It can also be obtained by the dual-camera ranging of the second embodiment of the invention, which is introduced there in detail.
In step S102, depth information input by the user is received, and the image is searched for the pixels corresponding to it.
Specifically, the depth information input by the user can be obtained by recording which object the user picks in the image. For example, the user can indicate the position of the required content by touch: if the user touches the position of a portrait in the image, the depth corresponding to the touched point is looked up, and the pixels with that depth are then found throughout the image.
Alternatively, the user may enter a depth value directly, for example a distance of 5 m, and the corresponding pixels are found from the entered value.
The correspondence between depth information and pixels can be a depth range centred on the input value, or a region bounded by the input value: for example, all pixels whose depth is less than the input value, or all pixels whose depth is greater than it. The embodiments are not limited to the specific modes exemplified here; different correspondences can be chosen flexibly according to the selection requirements, as explained in the third embodiment of the invention.
In step S103, the image selection area corresponding to the depth information input by the user is formed from the pixels found.
Specifically, the pixels found are those meeting the search criterion of step S102, in other words the set of pixels corresponding to the input depth information. By selecting these pixels with a tool, the desired object in the image to be selected is obtained.
The selection area can be outlined with a highlighted dashed frame, marked with any other indicator that clearly distinguishes the currently selected area, or shown by hiding the unselected part of the image, so that the user can view the current selection state in time and correct it promptly.
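One simple way to realise the "hide or mark the unselected part" preview is to darken every pixel outside the selection mask; the darkening factor below is an arbitrary choice for illustration, not a value from the patent.

```python
import numpy as np

def dim_unselected(image, mask, factor=0.25):
    """Keep selected pixels at full brightness and dim the rest."""
    out = image.astype(float)
    out[~mask] *= factor          # attenuate everything outside the mask
    return out.astype(image.dtype)

img = np.array([[200, 100],
                [80,  40]])
sel = np.array([[True, False],
                [True, False]])
preview = dim_unselected(img, sel)   # left column kept, right column dimmed
```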
In this embodiment, depth information input by the user is received, the corresponding pixels are found in the image according to it, and the required selected region is determined from the pixels found. An image region corresponding to the input depth can thus be selected quickly and effectively, achieving the aims of simple operation and high selection accuracy.
Embodiment two:
Fig. 2 shows the implementation flow of the image selection method provided by the second embodiment of the invention, detailed as follows:
In step S201, a first image and a second image are obtained from a dual camera.
Specifically, the dual camera can consist of two cameras with parallel optical axes, the distance between them being preset. The cameras may be analogue or digital; to ease the comparison of the images the two cameras produce, cameras of the same kind are generally chosen, so that the subsequent matching computation is simpler.
If the two cameras differ in resolution, for example the first image has the higher resolution, the higher-resolution image can be downscaled so that after adjustment the two images have the same resolution.
Content included in the first image but absent from the second cannot have its depth computed directly, so that part of the image can be excluded from the image to be selected.
In step S202, the depth of each pixel is obtained from the disparity of the same object between the first and second images and the positions of the two cameras.
Because the two cameras are at different positions, much like a person's two eyes, the first and second images they capture determine the disparity between the two views. From the obtained disparity and the distance between the two cameras, and using the principle that objects near the cameras show large disparity while distant objects show small disparity, the depth of each pixel can be computed by matching the first image against the second.
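Under the usual pinhole stereo model, the matching principle above reduces to triangulation: Z = f·B/d, where f is the focal length in pixels, B the camera baseline and d the disparity. This sketch assumes a disparity map has already been produced by matching the two images; the focal length and baseline values are made-up illustrative numbers, not values from the patent.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth from stereo disparity: Z = f * B / d.
    Near objects give large disparity, far objects small disparity."""
    d = np.asarray(disparity_px, dtype=float)
    depth = np.full(d.shape, np.inf)   # no match -> depth unknown, excluded
    valid = d > 0
    depth[valid] = focal_px * baseline_m / d[valid]
    return depth

disp = np.array([[64.0, 8.0],
                 [32.0, 0.0]])         # 0: content seen by only one camera
z = depth_from_disparity(disp, focal_px=800.0, baseline_m=0.1)
```

A disparity of zero marks content visible to only one camera, which, as the text notes, is excluded from the image to be selected.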
In step S203, depth information input by the user is received, and the image is searched for the pixels corresponding to it.
In step S204, the image selection area corresponding to the input depth information is formed from the pixels found.
Steps S203-S204 are identical to steps S102-S103 of the first embodiment and are not repeated here.
This embodiment differs from the first in that the first and second images are obtained by a dual camera and the depth computation is completed by matching the two images. Compared with other ways of obtaining depth information, this costs less and is easier to compute, which markedly improves the convenience of applying the image selection of the invention on a terminal.
Embodiment three:
Fig. 3 shows the implementation flow of the image selection method provided by the third embodiment of the invention, detailed as follows:
In step S301, depth information is obtained for each pixel in the image to be selected. This can be identical to step S101 of the first embodiment.
In step S302, a touch area input by the user is received and the mean depth of its pixels is taken as the user's input depth, or a depth value entered directly by the user is taken as the input depth.
Specifically, touch terminals are convenient to use and increasingly widespread. When the user taps the object to be selected, the area covered by the touch is generally much larger than a single image pixel, so to obtain the depth corresponding to the touch accurately, two approaches can be used:
1. Obtain the touch area corresponding to the touch instruction, look up the depth of every pixel within it, compute the mean of those depths, and use that mean as the user's input depth;
2. Find the touch area corresponding to the touch instruction, compute its central pixel, and use the depth of that central pixel as the user's input depth.
Of course, these are only two preferred ways of obtaining the user's input depth; it should be understood that the invention is not limited to them.
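The two modes can be sketched as follows. The square touch patch and its half-width in mode 1, and the rectangular touch area in mode 2, are illustrative assumptions; the patent does not fix the shape of the touch area.

```python
import numpy as np

def touch_depth_mean(depth_map, cy, cx, r):
    """Mode 1: mean depth over a (2r+1)-square touch area centred at (cy, cx)."""
    patch = depth_map[max(cy - r, 0):cy + r + 1, max(cx - r, 0):cx + r + 1]
    return float(patch.mean())

def touch_depth_center(depth_map, y0, x0, y1, x1):
    """Mode 2: depth of the central pixel of a rectangular touch area."""
    return float(depth_map[(y0 + y1) // 2, (x0 + x1) // 2])

depth = np.arange(25, dtype=float).reshape(5, 5)
mean_d = touch_depth_mean(depth, cy=2, cx=2, r=1)   # average over the 3x3 patch
center_d = touch_depth_center(depth, 1, 1, 3, 3)    # depth at pixel (2, 2)
```

On a smooth depth map the two modes agree; they diverge when the touch straddles a depth edge, where the mean blends foreground and background while the centre pixel commits to one of them.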
The input depth can also be entered as a value directly, for example via a depth slider: moving the slider adjusts the depth, and the selection area corresponding to the current depth can be observed in real time, making adjustment of the selection area easier.
In step S303, a depth information range is determined from the depth information input by the user.
Several correspondences can be used to determine the range from the input depth, for example:
taking the input depth as the centre and determining the range by a set depth radius; or
taking the input depth as a threshold and defining the range as all depths greater than it; or
taking the input depth as a threshold and defining the range as all depths less than or equal to it.
A typical scenario for the first rule is an image containing several objects at different distances from the camera: a range centred on the input depth is selected, and by adjusting the size of the range, the precision with which the object is selected can be tuned further.
For images in which the subject lies relatively far in front of the background, the range of depths less than or equal to the input depth can be used. To avoid the required content being only partially selected, the boundary depth between subject and background can additionally be computed, the pixels below that boundary selected, or the unselected pixels hidden.
By the same reasoning, when the background of such an image is to be extracted, the opposite rule is applied: the pixels whose depth exceeds the threshold are selected.
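The three range rules of step S303 can be collected into one sketch; the mode names are labels invented here for readability, not terms from the patent.

```python
import numpy as np

def depth_range_mask(depth_map, user_depth, mode, radius=None):
    """Map the input depth to a selection mask under the three rules in the text."""
    d = np.asarray(depth_map, dtype=float)
    if mode == "band":        # centred on the input depth, +/- a set radius
        return np.abs(d - user_depth) <= radius
    if mode == "farther":     # strictly beyond the input depth (background pick)
        return d > user_depth
    if mode == "nearer":      # at or closer than the input depth (subject pick)
        return d <= user_depth
    raise ValueError("unknown mode: " + mode)

d = np.array([1.0, 2.0, 3.0, 8.0])   # toy per-pixel depths, in metres
```

For instance, `depth_range_mask(d, 3.0, "nearer")` keeps the near subject at 1-3 m, while `"farther"` keeps only the 8 m background pixel.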
In step S304, the image to be selected is searched, according to the depth information range, for the pixels whose depth falls within that range.
In step S305, the image selection area corresponding to the input depth information is formed from the pixels found.
Steps S304-S305 correspond to steps S102-S103 of the first embodiment and are not repeated here.
This embodiment differs from the first in describing specifically how the corresponding pixels are obtained: the depth range derived from the user's input makes obtaining the required pixels more convenient, and improves the efficiency and accuracy of pixel acquisition. It will be appreciated that the points distinguishing this embodiment from the first can equally be applied to the second embodiment to obtain the corresponding technical effects.
Embodiment four:
Fig. 4 shows the implementation flow of the image selection method provided by the fourth embodiment of the invention, detailed as follows:
In step S401, depth information is obtained for each pixel in the image to be selected. This is identical to step S101 of the first embodiment.
In step S402, depth information input by the user is received, and the image is searched for the pixels corresponding to it. This is identical to step S102 of the first embodiment.
In step S403, it is judged whether the region formed by the pixels found comprises multiple regions.
Specifically, multiple regions in this embodiment are distinguished by whether the pixels found are contiguous: when one part of the found pixels is not connected to the remaining found pixels, that part is treated as an independent region.
The presence of multiple regions in the image can be understood as the presence of multiple independent objects at the selected depth, for example several people or other objects lying within the same depth range; extraneous content may of course also be included.
In step S404, if the region formed by the pixels found comprises multiple regions, a confirmation instruction from the user selecting one or more of them is received.
When the image contains multiple regions of found pixels, receiving the user's confirmation instruction avoids mis-selection: the confirmed regions are kept, while the unconfirmed regions are taken not to be required by the user and their selection is cancelled.
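The contiguity test of step S403 is a connected-components question. A minimal 4-connected labelling, written here with a plain breadth-first search rather than any particular library, is enough to count the separate areas and decide whether to ask the user for confirmation.

```python
import numpy as np
from collections import deque

def count_regions(mask):
    """Count 4-connected regions of True pixels in a selection mask."""
    mask = np.asarray(mask, dtype=bool)
    seen = np.zeros_like(mask)
    regions = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if seen[sy, sx]:
            continue
        regions += 1                      # new, unvisited region found
        q = deque([(sy, sx)])
        seen[sy, sx] = True
        while q:                          # flood-fill the whole region
            y, x = q.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    q.append((ny, nx))
    return regions

m = np.array([[1, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 1]], dtype=bool)
# two disconnected areas -> the user is asked which to keep
```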
Of course, as a further refinement of this embodiment, the method can also comprise step S405, in which the selection area of the image is fused with another image.
After the selection area of the image is obtained, it can be copied to a clipboard with a copy instruction and, when another image is opened, pasted into it to fuse the currently selected region with the other image. To improve convenience further, other adjustments can be added, such as feathering the selection area, or rotating or stretching it.
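In its simplest form, the copy-and-paste fusion of step S405 reduces to overwriting the background pixels under the selection mask. The feathering, rotation and stretching mentioned above are omitted from this sketch, and the two images are assumed to have the same size and alignment.

```python
import numpy as np

def fuse(source, mask, background):
    """Paste the selected pixels of `source` over `background` (same shape)."""
    out = background.copy()       # leave the original background untouched
    out[mask] = source[mask]
    return out

src = np.array([[10, 20],
                [30, 40]])
sel = np.array([[True, False],
                [False, True]])
bg = np.zeros((2, 2), dtype=int)
fused = fuse(src, sel, bg)        # masked pixels copied, the rest kept from bg
```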
It should be understood that in the first to fourth embodiments the sequence numbers of the steps do not imply an order of execution; the order should be determined by the functions and internal logic of the steps, and places no restriction on the implementation of the embodiments.
Compared with the first embodiment, this embodiment additionally confirms the selection area, improving the accuracy of the selection, and can fuse the selected region with another image, enriching the image effects available on the camera. It will be appreciated that the points distinguishing this embodiment from the first can equally be applied to the second and third embodiments to obtain the corresponding technical effects.
Embodiment five:
Fig. 5 shows the structure of the image selection device provided by the fifth embodiment of the invention, detailed as follows:
The image selection device of this embodiment comprises:
a depth information acquisition unit 501, for obtaining depth information for each pixel in the image to be selected;
a pixel search unit 502, for receiving depth information input by the user and, according to the per-pixel depth information obtained by the depth information acquisition unit, searching the image for the pixels corresponding to it;
a selection area determination unit 503, for forming, from the pixels found by the pixel search unit, the image selection area corresponding to the input depth information.
Preferably, the depth information acquisition unit comprises:
an image acquisition subunit, configured to obtain a first image and a second image from dual cameras; and
a depth information acquisition subunit, configured to obtain the depth information of the pixels in the image according to the disparity information of the same object in the first and second images obtained by the image acquisition subunit and the positional information of the dual cameras.
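By way of illustration only (this sketch is not part of the original disclosure), the disparity-to-depth conversion performed with dual cameras can be modelled, assuming a rectified stereo pair with known focal length and baseline, by the standard pinhole relation Z = f·B/d:

```python
import numpy as np

def depth_from_disparity(disparity, focal_length_px, baseline_m):
    """Convert a disparity map (in pixels) to per-pixel depth.

    For a rectified stereo pair, depth Z = f * B / d, where f is the
    focal length in pixels, B the camera baseline, and d the disparity.
    """
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.full_like(disparity, np.inf)
    valid = disparity > 0  # zero disparity corresponds to a point at infinity
    depth[valid] = focal_length_px * baseline_m / disparity[valid]
    return depth
```

Larger disparities map to smaller depths, which is why nearby objects can be separated from the background by thresholding the resulting depth map.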
Optionally, the pixel search unit comprises:
a computation subunit, configured to receive a touch area input by the user and take the mean depth information of the pixels in the touch area as the user-input depth information, or to receive a depth information value input by the user as the user-input depth information;
a depth information range determining subunit, configured to determine a depth information range according to the user-input depth information obtained by the computation subunit; and
a pixel search subunit, configured to search the image to be selected for the pixels belonging to the depth information range determined by the depth information range determining subunit.
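Not part of the original disclosure, but as a minimal sketch: once a depth information range has been determined, the pixel search subunit's task amounts to a per-pixel range test over the depth map, producing a boolean selection mask:

```python
import numpy as np

def select_by_depth_range(depth_map, depth_min, depth_max):
    """Return a boolean mask of the pixels whose depth falls in
    [depth_min, depth_max] — the candidate selection region."""
    depth_map = np.asarray(depth_map, dtype=np.float64)
    return (depth_map >= depth_min) & (depth_map <= depth_max)
```

The mask can then be used directly to extract or composite the selected pixels.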
Preferably, the depth information range determining subunit comprises:
a first determination module, configured to determine the depth information range as centred on the depth information input by the user, with a set depth radius; or
a second determination module, configured to take the depth information input by the user as a critical point and take depth information greater than it as the depth information range; or
a third determination module, configured to take the depth information input by the user as a critical point and take depth information less than or equal to it as the depth information range.
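The three determination modules above can be sketched as a single hypothetical helper (illustrative only; the mode names are my assumptions, not the patent's terminology):

```python
import math

def depth_range(user_depth, mode, radius=None):
    """Determine a depth information range from the user-input depth.

    mode "radius":     range centred on user_depth with the set depth radius;
    mode "greater":    depths greater than user_depth;
    mode "less_equal": depths less than or equal to user_depth.
    """
    if mode == "radius":
        return (user_depth - radius, user_depth + radius)
    if mode == "greater":
        return (user_depth, math.inf)
    if mode == "less_equal":
        return (0.0, user_depth)
    raise ValueError(f"unknown mode: {mode}")
```

The "greater" and "less_equal" modes correspond to selecting everything behind or in front of the touched point, while the "radius" mode isolates a slab of the scene around it.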
Preferably, the size of the depth radius can be adjusted according to the object to be selected in the image.
Preferably, the selected-region determining unit comprises:
a region judgment subunit, configured to judge whether the region formed by the pixels found comprises multiple regions; and
a confirmation instruction receiving subunit, configured to receive, when the region formed by the pixels found comprises multiple regions, a confirmation instruction input by the user selecting one or more of those regions.
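Judging whether the found pixels form multiple regions is, in effect, connected-component labelling of the selection mask. A minimal pure-NumPy sketch (illustrative only; in practice a library routine such as `scipy.ndimage.label` would normally be used):

```python
import numpy as np

def split_regions(mask):
    """Label 4-connected regions in a boolean mask, so that each
    candidate region can be confirmed or cancelled individually."""
    mask = np.asarray(mask, dtype=bool)
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue  # already assigned to a region
        count += 1
        stack = [start]  # iterative flood fill from this seed pixel
        while stack:
            r, c = stack.pop()
            if not (0 <= r < mask.shape[0] and 0 <= c < mask.shape[1]):
                continue
            if not mask[r, c] or labels[r, c]:
                continue
            labels[r, c] = count
            stack += [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
    return labels, count
```

When `count` is greater than one, the user can be asked to confirm which labelled region(s) to keep, matching the confirmation step described above.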
Preferably, the device further comprises:
a fusion unit, configured to fuse the selected region of the image with another image.
The image-selecting device described in this embodiment of the present invention corresponds to the image-selecting methods described in Embodiments 1, 2, 3, and 4, and is not described again here.
In addition, the present invention further provides a terminal comprising the above image-selecting device; in a preferred embodiment, the terminal is a smartphone or a tablet computer.
Those of ordinary skill in the art will recognize that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functions differently for each particular application, but such implementations should not be considered beyond the scope of the present invention.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for instance, the division of the units is only a logical functional division, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between devices or units may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist physically on its own, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention — in essence, or the part that contributes to the prior art, or part of the technical solution — may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a terminal (which may be a personal computer, a server, a network terminal, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a portable hard drive, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any changes or substitutions that can readily be conceived by those skilled in the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (16)

1. An image-selecting method, characterized in that the method comprises:
obtaining the depth information of each pixel in an image to be selected;
receiving depth information input by a user, and searching the image for the pixels corresponding to the depth information input by the user; and
forming, from the pixels found, an image selection region corresponding to the depth information input by the user.
2. The method according to claim 1, characterized in that the step of obtaining the depth information of each pixel in the image to be selected specifically comprises:
obtaining a first image and a second image from dual cameras; and
obtaining the depth information of the pixels in the image according to the disparity information of the same object in the first and second images and the positional information of the dual cameras.
3. The method according to claim 1, characterized in that the step of receiving depth information input by the user and searching the image for the pixels corresponding to the depth information input by the user specifically comprises:
receiving a touch area input by the user and taking the mean depth information of the pixels in the touch area as the user-input depth information, or receiving a depth information value input by the user as the user-input depth information;
determining a depth information range according to the depth information input by the user; and
searching the image to be selected for the pixels belonging to the depth information range.
4. The method according to claim 3, characterized in that the step of determining a depth information range according to the depth information input by the user specifically comprises:
determining the depth information range as centred on the depth information input by the user, with a set depth radius; or
taking the depth information input by the user as a critical point and taking depth information greater than it as the depth information range; or
taking the depth information input by the user as a critical point and taking depth information less than or equal to it as the depth information range.
5. The method according to claim 4, characterized in that the size of the depth radius can be adjusted according to the object to be selected in the image.
6. The method according to claim 1, characterized in that the step of forming, from the pixels found, an image selection region corresponding to the depth information input by the user comprises:
judging whether the region formed by the pixels found comprises multiple regions; and
if so, receiving a confirmation instruction input by the user selecting one or more of those regions.
7. The method according to any one of claims 1-6, characterized in that after the step of forming, from the pixels found, an image selection region corresponding to the depth information input by the user, the method further comprises:
fusing the selected region of the image with another image.
8. An image-selecting device, characterized in that the device comprises:
a depth information acquisition unit, configured to obtain the depth information of each pixel in an image to be selected;
a pixel search unit, configured to receive depth information input by a user and, according to the depth information of each pixel obtained by the depth information acquisition unit, search the image for the pixels corresponding to the depth information input by the user; and
a selected-region determining unit, configured to form, from the pixels found by the pixel search unit, an image selection region corresponding to the depth information input by the user.
9. The device according to claim 8, characterized in that the depth information acquisition unit comprises:
an image acquisition subunit, configured to obtain a first image and a second image from dual cameras; and
a depth information acquisition subunit, configured to obtain the depth information of the pixels in the image according to the disparity information of the same object in the first and second images obtained by the image acquisition subunit and the positional information of the dual cameras.
10. The device according to claim 8, characterized in that the pixel search unit comprises:
a computation subunit, configured to receive a touch area input by the user and take the mean depth information of the pixels in the touch area as the user-input depth information, or to receive a depth information value input by the user as the user-input depth information;
a depth information range determining subunit, configured to determine a depth information range according to the user-input depth information obtained by the computation subunit; and
a pixel search subunit, configured to search the image to be selected for the pixels belonging to the depth information range determined by the depth information range determining subunit.
11. The device according to claim 10, characterized in that the depth information range determining subunit comprises:
a first determination module, configured to determine the depth information range as centred on the depth information input by the user, with a set depth radius; or
a second determination module, configured to take the depth information input by the user as a critical point and take depth information greater than it as the depth information range; or
a third determination module, configured to take the depth information input by the user as a critical point and take depth information less than or equal to it as the depth information range.
12. The device according to claim 11, characterized in that the size of the depth radius can be adjusted according to the object to be selected in the image.
13. The device according to claim 8, characterized in that the selected-region determining unit comprises:
a region judgment subunit, configured to judge whether the region formed by the pixels found comprises multiple regions; and
a confirmation instruction receiving subunit, configured to receive, when the region formed by the pixels found comprises multiple regions, a confirmation instruction input by the user selecting one or more of those regions.
14. The device according to any one of claims 8-13, characterized in that the device further comprises:
a fusion unit, configured to fuse the selected region of the image with another image.
15. A terminal, characterized in that the terminal comprises the image-selecting device according to any one of claims 8-14.
16. The terminal according to claim 15, characterized in that the terminal is a smartphone or a tablet computer.

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201410719702.5A | 2014-11-28 | 2014-11-28 | Method and device for image selection as well as terminal


Publications (1)

Publication Number | Publication Date
CN104506768A | 2015-04-08

Family

ID=52948482


Country Status (1)

Country Link
CN (1) CN104506768A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090129679A1 (en) * 2007-11-16 2009-05-21 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and computer-readable medium
CN102257511A (en) * 2008-10-30 2011-11-23 诺基亚公司 Method, apparatus and computer program product for providing adaptive gesture analysis
CN102479220A (en) * 2010-11-30 2012-05-30 财团法人资讯工业策进会 Image retrieval system and method thereof
US20130101169A1 (en) * 2011-10-20 2013-04-25 Lg Innotek Co., Ltd. Image processing method and apparatus for detecting target


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017067523A1 (en) * 2015-10-22 2017-04-27 努比亚技术有限公司 Image processing method, device and mobile terminal
CN106612393A (en) * 2015-10-22 2017-05-03 努比亚技术有限公司 Image processing method, image processing device and mobile terminal
CN106612393B (en) * 2015-10-22 2019-10-15 努比亚技术有限公司 A kind of image processing method and device and mobile terminal
CN107888833A (en) * 2017-11-28 2018-04-06 维沃移动通信有限公司 A kind of image capturing method and mobile terminal
CN111369612A (en) * 2018-12-25 2020-07-03 北京欣奕华科技有限公司 Three-dimensional point cloud image generation method and equipment
CN111369612B (en) * 2018-12-25 2023-11-24 北京欣奕华科技有限公司 Three-dimensional point cloud image generation method and device


Legal Events

C06 / PB01: Publication
C10 / SE01: Entry into substantive examination; entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 2015-04-08)