CN104363377A - Method and apparatus for displaying focus frame as well as terminal - Google Patents
- Publication number
- CN104363377A (application CN201410715279.1A)
- Authority
- CN
- China
- Legal status: Granted (an assumption, not a legal conclusion; Google has not performed a legal analysis)
Abstract
The invention provides a method, an apparatus, and a terminal for displaying a focus frame. The method comprises the following steps: acquiring a focus area selected on a camera image; determining, according to depth information of the camera image, the region in which the subject object corresponding to the focus area is located; calculating the outline of that region; and displaying the outline as the focus frame. The method and apparatus overcome the defect that existing focus frames have a fixed, uniform shape, and display the focus area to users more clearly.
Description
Technical field
The invention belongs to the field of imaging, and in particular relates to a method, an apparatus, and a terminal for displaying a focus frame.
Background art
When a camera takes a picture, it must first be focused in order to obtain a clear image, for example by changing the focal length of the lens or the distance between the optical center of the lens and the sensor plane, so that the camera focuses on the object in the scene that the user wishes to shoot.
In existing focus-frame display methods, the region to be focused is generally selected with a rectangular focus frame, so that operations such as focusing or exposure metering can be carried out.
In the course of making the present invention, the inventor found at least the following problem in the prior art: because the shape of the rectangular focus frame is fixed while the shape and size of the target object often vary, a fixed rectangular focus frame cannot accurately and effectively indicate the selected target object, so the focus frame display is not sufficiently intuitive and clear.
Summary of the invention
In view of this, embodiments of the present invention provide a method, an apparatus, and a terminal for displaying a camera focus frame, to solve the prior-art problem that, because the shape of the rectangular focus frame is fixed while the shape and size of the target object often vary, a fixed rectangular focus frame cannot accurately and effectively indicate the selected target object, making the focus frame display insufficiently intuitive and clear.
In a first aspect, an embodiment of the present invention provides a method for displaying a focus frame, the method comprising:
acquiring a focus area selected on a camera image;
determining, according to depth information of the camera image, the region in which the subject object corresponding to the focus area is located;
calculating the outline of the region in which the subject object is located, and displaying the outline as the focus frame.
With reference to the first aspect, in a first possible implementation of the first aspect, determining, according to the depth information of the camera image, the region in which the subject object corresponding to the focus area is located specifically comprises:
acquiring the depth information of each pixel in the camera image;
according to a preset threshold range and the focus area, finding the pixels in the image whose depth information falls within the threshold range;
forming the region in which the subject object is located from the found pixels.
With reference to the first possible implementation of the first aspect, in a second possible implementation of the first aspect, the focus area is a focus area found automatically by the camera, and finding, according to the preset threshold range and the focus area, the pixels in the image whose depth information falls within the threshold range specifically comprises:
calculating the mean depth of the camera image;
setting the threshold range to the range of depths smaller than the mean depth, and finding the pixels whose depth is smaller than the mean depth.
With reference to the first possible implementation of the first aspect, in a third possible implementation of the first aspect, the focus area is a focus area specified by the user, and finding, according to the preset threshold range and the focus area, the pixels in the image whose depth information falls within the threshold range specifically comprises:
receiving a depth value and a depth radius input by the user, determining the threshold range from the depth radius centered on the user-specified depth value, and finding the pixels in the image whose depth information falls within the threshold range.
With reference to the first possible implementation of the first aspect, in a fourth possible implementation of the first aspect, the step of acquiring the depth information of each pixel in the camera image comprises:
obtaining a first image and a second image from dual cameras;
obtaining the depth information of the pixels in the image from the disparity of the same object between the first image and the second image and from the positions of the dual cameras.
With reference to the first possible implementation of the first aspect, in a fifth possible implementation of the first aspect, the step of forming the region in which the subject object is located from the found pixels comprises:
judging whether the area of the region formed by the found pixels is greater than a preset area value;
if the area of the region formed by the found pixels is greater than the preset area value, forming the region in which the subject object is located from the found pixels.
With reference to the first possible implementation of the first aspect, in a sixth possible implementation of the first aspect, the step of forming the region in which the subject object is located from the found pixels comprises:
judging whether the region formed by the found pixels is a continuous region;
if the region formed by the found pixels is a continuous region, forming the region in which the subject object is located from the found pixels.
In a second aspect, an embodiment of the present invention provides an apparatus for displaying a focus frame, the apparatus comprising:
a focus area acquiring unit, configured to acquire a focus area selected on a camera image;
a subject region determining unit, configured to determine, according to the depth information of the camera image, the region in which the subject object corresponding to the focus area is located;
an outline calculating and displaying unit, configured to calculate the outline of the region in which the subject object is located and to display the outline as the focus frame.
With reference to the second aspect, in a first possible implementation of the second aspect, the focus area is a focus area found automatically by the camera, and the subject region determining unit specifically comprises:
a depth information acquiring subunit, configured to acquire the depth information of each pixel in the camera image;
a pixel finding subunit, configured to find, according to the depth information of each pixel acquired by the depth information acquiring subunit and a set threshold range, the pixels in the image whose depth information falls within the threshold range;
a subject object obtaining subunit, configured to form the region in which the subject object is located from the found pixels.
With reference to the first possible implementation of the second aspect, in a second possible implementation of the second aspect, the focus area is a focus area specified by the user, and the pixel finding subunit comprises:
a calculating module, configured to calculate the mean depth of the camera image;
a pixel finding module, configured to set the threshold range to the range of depths smaller than the mean depth calculated by the calculating module, and to find the pixels whose depth is smaller than the mean depth.
With reference to the first possible implementation of the second aspect, in a third possible implementation of the second aspect, the pixel finding subunit is configured to:
receive a depth value and a depth radius input by the user, determine the threshold range from the depth radius centered on the user-specified depth value, and find the pixels in the image whose depth information falls within the threshold range.
With reference to the first possible implementation of the second aspect, in a fourth possible implementation of the second aspect, the depth information acquiring subunit comprises:
an image obtaining module, configured to obtain a first image and a second image from dual cameras;
a depth information acquiring module, configured to obtain the depth information of the pixels in the image from the disparity of the same object between the first image and the second image obtained by the image obtaining module and from the positions of the dual cameras.
With reference to the first possible implementation of the second aspect, in a fifth possible implementation of the second aspect, the subject object obtaining subunit comprises:
a first judging module, configured to judge whether the area of the region formed by the found pixels is greater than a preset area value;
a first subject region determining module, configured to form the region in which the subject object is located from the found pixels if the first judging module judges that the area of the region formed by the found pixels is greater than the preset area value.
With reference to the first possible implementation of the second aspect, in a sixth possible implementation of the second aspect, the subject object obtaining subunit comprises:
a second judging module, configured to judge whether the region formed by the found pixels is a continuous region;
a second subject region determining module, configured to form the region in which the subject object is located from the found pixels if the second judging module judges that the region formed by the found pixels is a continuous region.
In a third aspect, an embodiment of the present invention provides a terminal, the terminal comprising the camera focus-frame display apparatus of any possible implementation of the second aspect.
With reference to the third aspect, in a first possible implementation of the third aspect, the terminal is a smartphone or a tablet computer.
In the embodiments of the present invention, the region of the subject object in the camera image is found, and the outline of the subject object is determined from that region and displayed. This overcomes the prior-art problem that, because the shape of the rectangular focus frame is fixed while the shape and size of the target object often vary, a fixed rectangular focus frame cannot accurately and effectively indicate the selected target object, making the focus frame display insufficiently intuitive and clear. The selection of the target object thus becomes more accurate, and the efficiency of selecting the target object is effectively improved.
Brief description of the drawings
Fig. 1 is a flowchart of the focus-frame display method provided by the first embodiment of the present invention;
Fig. 2 is a flowchart of the focus-frame display method provided by the second embodiment of the present invention;
Fig. 3 is a flowchart, provided by the third embodiment of the present invention, of finding the pixels in the image whose depth information falls within a preset threshold range;
Fig. 4 is a flowchart of acquiring the depth information of each pixel in the camera image, provided by the fifth embodiment of the present invention;
Fig. 5 is a schematic structural diagram of the focus-frame display apparatus provided by the sixth embodiment of the present invention.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further described below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here only explain the present invention and are not intended to limit it.
The embodiments of the present invention can be applied to the auto-focusing of a camera or video camera when capturing images. Their main purpose is to overcome the prior-art problem that, because the shape of the rectangular focus frame is fixed while the shape and size of the target object often vary, a fixed rectangular focus frame cannot accurately and effectively indicate the selected target object, making the focus frame display insufficiently intuitive and clear. To overcome this problem, the focus-frame display method of the present invention comprises:
acquiring a focus area selected on a camera image; determining, according to the depth information of the camera image, the region in which the subject object corresponding to the focus area is located; and calculating the outline of that region and displaying the outline as the focus frame. This is described further below with reference to the embodiments.
Embodiment one:
Fig. 1 shows the flowchart of the focus-frame display method provided by the first embodiment of the present invention, detailed as follows:
In step S101, a focus area selected on a camera image is acquired.
Specifically, the camera includes, but is not limited to, a smartphone camera, a notebook camera, a dedicated still camera, or a video camera. The way the camera captures an image includes, but is not limited to, exposure by a CCD (charge-coupled device) sensor or by a CMOS (complementary metal-oxide semiconductor) sensor.
The camera image refers to the image obtained when the camera takes a photograph or captures video.
The focus area selected on the camera image may be a focus area determined by receiving a touch instruction input by the user, or a focus area supplied through some other input method.
In step S102, the region in which the subject object corresponding to the focus area is located is determined according to the depth information of the camera image.
Specifically, the ways of acquiring the depth information of each pixel in the camera image include determining the depth of each pixel from images obtained by dual cameras; other depth acquisition methods may of course also be used, such as infrared ranging.
To determine the region of the subject object from the focus area, the depth information of the focus area can be looked up, and the region of the subject object can be determined from that depth information. This is explained further in embodiments two to four.
The subject object in the camera image refers to the most salient object in the image. For example, features of certain subject objects can be defined, such as salient features like human faces or human limb movements. Besides defining the features of some common subject objects, conventional subject-extraction methods can also be used; for an image in a web page, for example, the subject object can be determined from the image's markup information by finding and matching the corresponding annotations.
Of course, the method is not limited to this; the embodiments of the present invention also determine the subject object of the image from the depth information of its pixels, as explained in subsequent embodiments.
The region in which the subject object is located, as used in the embodiments of the present invention, means that after the subject object is determined by any of the above methods, its position in the camera image is recorded; the position can be expressed with coordinates. The shape of the subject object can be recorded via the key points of its shape: for a triangle, only the coordinates of the three vertices need to be recorded; for a polygon, the coordinates of the several boundary vertices must be recorded together with how the vertices are connected, that is, which vertex connects to which, and whether the connecting line is straight or an arc; for an arc, information such as its radius of curvature and its arc angle is also recorded.
In step S103, the outline of the region in which the subject object is located is calculated from that region, and the outline is displayed as the focus frame.
Specifically, the outline of the subject object refers to the boundary line between the subject object and the background of the image, i.e. the non-subject part.
The outline of the subject object can be calculated from the status of the pixels adjacent to each pixel of the subject object.
For example, suppose each pixel in the image is square, so each pixel has four adjacent pixels touching it. To determine whether a pixel lies on the outline of the subject object, a search can be made centered on that pixel, checking whether its four neighbors are pixels of the subject object. If one or more neighboring pixels do not belong to the subject object, the center pixel is an outline pixel of the subject object. Conversely, if all four neighbors of the center pixel belong to the subject object, the center pixel is not an outline pixel.
By the same reasoning, when the pixels have some other shape, it is still only necessary to search around each center pixel and check whether the other pixels adjacent to it belong to the subject object; if one or more of them do not, the center pixel lies on the outline of the subject object.
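The four-neighbor test described above can be sketched as follows. This is an illustrative implementation, not code from the patent; the function name and the use of NumPy are assumptions.

```python
import numpy as np

def subject_outline(mask):
    """Return a boolean mask of outline pixels: subject pixels with at
    least one 4-neighbor outside the subject region."""
    mask = np.asarray(mask, dtype=bool)
    # Pad with False so image-border pixels count as having outside neighbors.
    padded = np.pad(mask, 1, constant_values=False)
    up    = padded[:-2, 1:-1]
    down  = padded[2:, 1:-1]
    left  = padded[1:-1, :-2]
    right = padded[1:-1, 2:]
    # A subject pixel is interior only if all four neighbors are subject pixels.
    interior = up & down & left & right
    return mask & ~interior

mask = np.zeros((5, 5), dtype=bool)
mask[1:4, 1:4] = True            # a 3x3 subject block
outline = subject_outline(mask)  # its 8 border pixels; the center is interior
```

For the 3×3 block, the eight ring pixels each have a background neighbor and are kept; only the center pixel is discarded, matching the rule in the text.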
Because the embodiments of the present invention find the region of the subject object in the camera image and determine and display the outline of the subject object from that region, the selection of the target object is more accurate, and the efficiency of selecting the target object is effectively improved.
Embodiment two:
Fig. 2 shows the flowchart of the focus-frame display method provided by the second embodiment of the present invention, detailed as follows:
In step S201, a focus area selected on a camera image is acquired.
In step S202, the depth information of each pixel in the camera image is acquired.
Specifically, the depth information of the camera image is the distance of each pixel of an object in the scene from the imaging terminal; the objects are people, trees, cars, houses, and other visible objects that form an image in the camera by reflecting light. The depth information can be obtained by any of the current methods for computing image depth: for example, the depth of each point on an object can be obtained by laser radar ranging, by zoom-based ranging, or by multi-baseline stereo imaging. Of course, the depth of the image can also be obtained by the dual-camera ranging described in the fifth embodiment of the present invention.
In step S203, according to a preset threshold range and the focus area, the pixels in the image whose depth information falls within the threshold range are found.
The threshold range can be set in several ways: a center point specified by the user can be received together with a depth threshold radius, and the range obtained from the center and the radius; or two threshold boundary values can be set, with the range defined between them; or a single threshold boundary can be set, with the range defined as everything above or below that boundary. The specific ways of setting the threshold range are described in subsequent embodiments.
In step S204, the region in which the subject object is located is formed from the found pixels.
When the depth information of a pixel in the camera image falls within the threshold range set in step S203, the pixel is considered to meet the threshold requirement.
Of course, in further optimized implementations of the embodiments of the present invention, a filtering step can also be applied to the found region of the subject object, specifically in one of the following two ways, or in both at once.
Mode one:
judging whether the area of the region formed by the found pixels is greater than a preset area value;
if the area of the region formed by the found pixels is greater than the preset area value, forming the region in which the subject object is located from the found pixels.
The preset area value can be expressed in pixels, for example as N pixels. The preset area value is related to the resolution of the camera image; when the resolution is higher, the preset area value can be raised accordingly.
In mode one, the purpose of filtering by region area is to reduce the interference of non-subject noise with the subject information, for example interference from objects whose depth is identical or close to that of the subject, or interference caused by errors in the depth computation, thereby further improving the accuracy of subject extraction.
Mode two:
judging whether the region formed by the found pixels is a continuous region;
if the region formed by the found pixels is a continuous region, forming the region in which the subject object is located from the found pixels.
Likewise, for noise present in the camera image, the continuity of the image region can be used to exclude the interference of noise with the subject object, for example small patches produced by objects whose depth is identical or close to that of the subject, or by errors in the depth computation.
By judging the regions formed by the pixels, the qualifying region with the largest area can be selected as the region in which the subject object is located.
In a further optimized implementation of the present invention, mode one and mode two can also be combined, to filter noise better and obtain the subject object more accurately.
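The two filters above, minimum area and continuity, can be combined in one pass over the connected regions of the candidate mask. This is a sketch under the assumption that a "continuous region" means a 4-connected component; the function name, the BFS labeling, and the use of NumPy are illustrative choices, not the patent's.

```python
import numpy as np
from collections import deque

def largest_region(mask, min_area):
    """Among the 4-connected regions of True pixels, return the mask of the
    largest one whose area exceeds min_area, or an all-False mask."""
    mask = np.asarray(mask, dtype=bool)
    seen = np.zeros_like(mask)
    best = np.zeros_like(mask)
    best_area = 0
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if not mask[sy, sx] or seen[sy, sx]:
                continue
            # BFS flood fill collects one continuous (4-connected) region.
            comp = []
            q = deque([(sy, sx)])
            seen[sy, sx] = True
            while q:
                y, x = q.popleft()
                comp.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                        seen[ny, nx] = True
                        q.append((ny, nx))
            # Keep only regions above the preset area, preferring the largest.
            if len(comp) > min_area and len(comp) > best_area:
                best_area = len(comp)
                best[:] = False
                for y, x in comp:
                    best[y, x] = True
    return best

mask = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 1],
                 [0, 0, 0, 1],
                 [0, 0, 0, 1]], dtype=bool)
region = largest_region(mask, min_area=2)  # keeps the 4-pixel block, drops the 3-pixel strip
```

The 3-pixel strip also passes the area test here, but the rule in the text of picking the largest qualifying region keeps only the block.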
In step S205, the outline of the region in which the subject object is located is calculated from that region, and the outline is displayed as the focus frame.
Steps S201 and S205 are identical to steps S101 and S103 in embodiment one and are not repeated here.
The embodiment of the present invention acquires the depth information of each pixel in the camera image, compares the depth information with a preset threshold range, forms the region of the subject object from the qualifying pixels, calculates the outline of the subject object from that region, and displays the outline as the focus frame. Two further ways of refining the subject object are also described, further improving the accuracy of subject extraction.
Embodiment three:
On the basis of embodiment two, Fig. 3 shows the flow, provided by the third embodiment of the present invention, of finding the pixels in the image whose depth information falls within the preset threshold range when the focus area is one found automatically by the camera, detailed as follows:
In step S301, the mean depth of the camera image is calculated.
Specifically, the mean depth of the camera image can be calculated by averaging the depth information of each pixel in the camera image.
In step S302, the threshold range is set to the range of depths smaller than the mean depth, and the pixels whose depth is smaller than the mean depth are found.
Comparing each pixel in the camera image with the mean depth calculated in step S301 divides the image into two parts: a region formed by pixels whose depth is greater than the mean, and a region formed by pixels whose depth is smaller than or equal to the mean. Considering that the subject object is generally near the camera terminal, the region formed by the pixels whose depth is smaller than or equal to the mean can be selected as the subject object. This filtering is especially suitable for scenes where the subject object is far from the background.
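The mean-depth split of steps S301 and S302 can be sketched as follows, assuming that smaller depth values mean closer to the camera; the function name and the use of NumPy are illustrative.

```python
import numpy as np

def subject_mask_by_mean_depth(depth):
    """Split the depth map at its mean: pixels at or closer than the mean
    depth are taken as subject, the rest as background."""
    depth = np.asarray(depth, dtype=float)
    mean_depth = depth.mean()       # step S301: average over every pixel
    return depth <= mean_depth      # step S302: subject assumed nearer than average

depth = np.array([[1.0, 1.0, 9.0],
                  [1.0, 1.0, 9.0],
                  [9.0, 9.0, 9.0]])  # near 2x2 block against a far background
mask = subject_mask_by_mean_depth(depth)
```

With the sample map the mean is about 5.4, so exactly the four near pixels are selected, illustrating why the split works best when subject and background depths are well separated.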
Of course, the region formed by the pixels whose depth is greater than the mean can be taken as the background of the image.
When multiple subject objects are obtained, one or several of them can be selected as the subject object according to a selection instruction from the user.
On the basis of embodiment two, this embodiment describes calculating the mean depth of the camera image and screening for the subject object by comparing each pixel's depth with the mean. The operation is simple, and the accuracy of subject extraction is high.
Embodiment four:
As an alternative to embodiment three, embodiment four provides, on the basis of embodiment two, an implementation in which the focus area is one specified by the user, and finding, according to the preset threshold range and the focus area, the pixels in the image whose depth information falls within the threshold range is specifically:
receiving a depth value and a depth radius input by the user, determining the threshold range from the depth radius centered on the user-specified depth value, and finding the pixels in the image whose depth information falls within the threshold range.
The user-specified depth value may be the depth corresponding to a touched area obtained through a touch instruction, a concrete numerical value input by the user, or a depth value adjusted with a slider.
For the depth radius, one or more accepted values can be set for different scenes, with the scene detected automatically and the corresponding depth radius applied; the user may also adjust it according to the actual situation.
In addition, in a further optimized implementation of the present invention, the depth radius can be refined to extend different distances forward and backward from the center, so as to better suit the selection requirements of different subject objects.
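The center-plus-radius window, including the asymmetric forward/backward refinement just mentioned, can be sketched as below. The function and parameter names are illustrative, not from the patent; passing the same value for both extents gives the plain symmetric radius.

```python
import numpy as np

def subject_mask_by_depth_window(depth, center, near_extent, far_extent):
    """Select pixels whose depth lies in a window around a user-chosen
    center depth, with separate near and far extents."""
    depth = np.asarray(depth, dtype=float)
    return (depth >= center - near_extent) & (depth <= center + far_extent)

depth = np.array([[2.0, 3.0, 4.0],
                  [5.0, 6.0, 7.0]])
# Center depth 4.0, reaching 1.0 toward the camera and 2.0 away from it,
# i.e. the window [3.0, 6.0].
mask = subject_mask_by_depth_window(depth, center=4.0, near_extent=1.0, far_extent=2.0)
```

In the example the window [3.0, 6.0] keeps four of the six pixels, showing how a thicker far extent can capture a subject that extends away from the touched point.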
Similarly to embodiment three, this embodiment describes, on the basis of embodiment two, obtaining the subject object in the camera image by setting a center value and a depth radius; this selection method is more flexible.
Embodiment five:
On the basis of embodiment two, Fig. 4 shows the flow of acquiring the depth information of each pixel in the camera image provided by the fifth embodiment of the present invention, detailed as follows:
In step S401, a first image and a second image are obtained from dual cameras.
Specifically, the dual cameras can be two cameras whose center lines are parallel, with a preset distance between them. The cameras may be analog or digital; to ease the comparison of the images obtained by the two cameras, cameras of the same kind are generally chosen so that the subsequent comparison is easier to compute.
When the resolutions of the two cameras differ, for example when the first image has the higher resolution, the resolution of the first image can be reduced so that, after adjustment, it matches the resolution of the second image.
Image content that appears in the first image but not in the second cannot have its depth computed directly, so that part of the image can be excluded from the candidate image.
In step S402, the depth information of the pixels in the image is obtained from the disparity of the same object between the first image and the second image and from the positions of the dual cameras.
Because the positions of the two cameras differ, the first and second images they capture, much like a person's two eyes, determine the disparity between the two views. From the obtained disparity and the distance between the two cameras, and using the principle that objects near the camera have large disparity while distant objects have small disparity, the depth of each pixel can be calculated by matching the first image against the second.
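The patent states only the qualitative principle (near means large disparity, far means small); the standard rectified-stereo relation depth = focal × baseline / disparity realizes it. The patent does not give this formula explicitly, and the parameter names are illustrative.

```python
def depth_from_disparity(disparity_px, baseline_m, focal_px):
    """Classic rectified-stereo relation: depth = focal * baseline / disparity.
    disparity_px: pixel offset of the same point between the two images;
    baseline_m:   distance between the two camera centers in meters;
    focal_px:     focal length expressed in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# A point offset 40 px between the two views, with cameras 0.1 m apart
# and an 800 px focal length, lies 2 m away; doubling the disparity
# halves the depth, matching the near-large / far-small principle.
z = depth_from_disparity(40.0, 0.1, 800.0)
```

Applying this per matched pixel over the whole image pair yields the per-pixel depth map used in step S202.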
The embodiment of the present invention obtains the depth information of the camera image with dual cameras; compared with other ways of obtaining depth, this approach costs less and is easier to implement.
It should be understood that in embodiments one to five, the numbering of the steps does not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention.
Embodiment 6:
Fig. 5 shows a schematic structural diagram of the camera focus-frame display apparatus provided by the sixth embodiment of the present invention, described in detail as follows.
The camera focus-frame display apparatus described in the embodiment of the present invention comprises:
a focus area acquiring unit 501, configured to acquire the focus area selected on the camera image;
a subject-object area determination unit 502, configured to determine, according to the depth information of the camera image, the area where the subject object corresponding to the focus area is located;
a contour calculation and display unit 503, configured to calculate the contour of the area where the subject object is located according to that area, and to display the contour as the focus frame.
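The contour computed by unit 503 is the boundary of the subject-object region. One simple reading, offered as an illustrative sketch rather than the patent's prescribed algorithm, marks a pixel as a contour pixel when it lies inside the region but at least one of its 4-neighbours does not:

```python
def region_contour(mask):
    """Return the set of (row, col) boundary pixels of a binary region:
    in-region pixels with at least one out-of-region 4-neighbour."""
    rows, cols = len(mask), len(mask[0])
    contour = set()
    for r in range(rows):
        for c in range(cols):
            if not mask[r][c]:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                # Image border or a background neighbour makes this a boundary pixel.
                if not (0 <= nr < rows and 0 <= nc < cols) or not mask[nr][nc]:
                    contour.add((r, c))
                    break
    return contour

# A 3x3 block inside a 5x5 mask: the centre pixel is interior,
# the eight surrounding region pixels form the contour.
mask = [[1 if 1 <= r <= 3 and 1 <= c <= 3 else 0 for c in range(5)]
        for r in range(5)]
```

The resulting pixel set can then be drawn over the preview image as the focus frame.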
Preferably, the focus area is a focus area searched for automatically by the camera, and the subject-object area determination unit specifically comprises:
a depth information acquisition subunit, configured to acquire the depth information of each pixel in the camera image;
a pixel search subunit, configured to search the image for pixels whose depth information falls within a set threshold range, according to the per-pixel depth information acquired by the depth information acquisition subunit;
a subject object acquisition subunit, configured to form the area where the subject object is located from the found pixels.
Preferably, the focus area is a focus area specified by the user, and the pixel search subunit comprises:
a computing module, configured to calculate the mean depth information of the camera image;
a pixel search module, configured to set the threshold range to the range below the mean depth information calculated by the computing module, and to search for pixels whose depth information is below the mean depth information.
Preferably, the pixel search subunit is configured to:
receive the depth information and depth radius input by the user, determine the threshold range from the depth radius centered on the user-specified depth information, and search the image for pixels whose depth information falls within the threshold range.
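Both pixel-selection strategies reduce to choosing a depth interval and keeping the pixels whose depth falls inside it. A minimal sketch of the two variants (a mean-depth cutoff, and a user-specified centre depth with a radius); the function names are illustrative and not taken from the patent:

```python
def pixels_below_mean(depth_map):
    """Automatic variant: keep pixels nearer than the image's mean depth."""
    flat = [d for row in depth_map for d in row]
    mean = sum(flat) / len(flat)
    return {(r, c)
            for r, row in enumerate(depth_map)
            for c, d in enumerate(row) if d < mean}

def pixels_in_radius(depth_map, center_depth, radius):
    """User-specified variant: keep pixels within [centre - r, centre + r]."""
    return {(r, c)
            for r, row in enumerate(depth_map)
            for c, d in enumerate(row)
            if center_depth - radius <= d <= center_depth + radius}

depth_map = [[1.0, 1.2],   # near subject in the top row
             [5.0, 6.0]]   # far background below
near = pixels_below_mean(depth_map)  # mean = 3.3, selects the top row
```

Either selection feeds the same downstream step: the found pixels are assembled into the candidate subject-object region.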
Preferably, the depth information acquisition subunit comprises:
an image acquisition module, configured to obtain the first image and the second image from the dual cameras;
a depth information acquisition module, configured to obtain the depth information of each pixel in the image according to the disparity of the same object between the first and second images obtained by the image acquisition module, together with the positional information of the dual cameras.
Preferably, the subject object acquisition subunit comprises:
a first judging module, configured to judge whether the area of the region formed by the found pixels is larger than a preset area threshold;
a first subject-object area determination module, configured to form the area where the subject object is located from the found pixels when the first judging module determines that the area of the region they form is larger than the preset area threshold.
Preferably, the subject object acquisition subunit comprises:
a second judging module, configured to judge whether the region formed by the found pixels is a continuous region;
a second subject-object area determination module, configured to form the area where the subject object is located from the found pixels when the second judging module determines that the region they form is continuous.
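The two checks carried out by the judging modules, a minimum-area test and a continuity test, can both be sketched with a flood fill over the found pixels. The function below is an illustrative reading under that assumption; its name and the choice of 4-connectivity are not taken from the patent:

```python
from collections import deque

def is_valid_subject_region(pixels, min_area):
    """Accept a candidate region only if its pixel count exceeds a preset
    area threshold and it is a single 4-connected region."""
    if len(pixels) <= min_area:          # first judging module: area test
        return False
    pixels = set(pixels)
    start = next(iter(pixels))
    seen, queue = {start}, deque([start])
    while queue:                         # flood fill from an arbitrary pixel
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (nr, nc) in pixels and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return seen == pixels                # second judging module: continuity test

blob = {(0, 0), (0, 1), (1, 0), (1, 1)}   # one 4-connected block: passes
split = {(0, 0), (0, 1), (5, 5)}          # two disconnected pieces: fails
```

A region that fails either test would not be used as the subject-object region.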
The camera focus-frame display apparatus described in the embodiment of the present invention corresponds to the camera focus-frame display methods described in Embodiments 1, 2, 3, 4, and 5, and its details are not repeated here.
In addition, an embodiment of the present invention provides a terminal comprising the camera focus-frame display apparatus described in any of the above embodiments. In a preferred embodiment, the terminal is a smartphone or a tablet computer.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are executed in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled persons may use different methods to implement the described functions for each particular application, but such implementations should not be considered to go beyond the scope of the present invention.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the apparatus embodiments described above are merely schematic: the division into units is only a division by logical function, and other divisions are possible in actual implementation; multiple units or components may be combined or integrated into another system, and some features may be omitted or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through interfaces, and the indirect couplings or communication connections between apparatuses or units may be electrical, mechanical, or of other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, may exist physically as separate units, or two or more of them may be integrated into one unit.
If the functions are implemented as software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the part of the technical solution of the present invention that contributes over the prior art, or the technical solution as a whole, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a portable hard drive, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above are merely specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that those familiar with the art can readily conceive within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (16)
1. A method for displaying a focus frame, characterized in that the method comprises:
acquiring a focus area selected on a camera image;
determining, according to depth information of the camera image, an area where a subject object corresponding to the focus area is located;
calculating a contour of the area where the subject object is located according to that area, and displaying the contour as the focus frame.
2. The method according to claim 1, characterized in that determining, according to the depth information of the camera image, the area where the subject object corresponding to the focus area is located specifically comprises:
acquiring the depth information of each pixel in the camera image;
searching the image, according to a preset threshold range and the focus area, for pixels whose depth information falls within the threshold range;
forming the area where the subject object is located from the found pixels.
3. The method according to claim 2, characterized in that the focus area is a focus area searched for automatically by the camera, and searching the image according to the preset threshold range and the focus area for pixels whose depth information falls within the threshold range specifically comprises:
calculating mean depth information of the camera image;
setting the threshold range to the range below the mean depth information, and searching for pixels whose depth information is below the mean depth information.
4. The method according to claim 2, characterized in that the focus area is a focus area specified by the user, and searching the image according to the preset threshold range and the focus area for pixels whose depth information falls within the threshold range specifically comprises:
receiving the depth information and depth radius input by the user, determining the threshold range from the depth radius centered on the user-specified depth information, and searching the image for pixels whose depth information falls within the threshold range.
5. The method according to claim 2, characterized in that the step of acquiring the depth information of each pixel in the camera image comprises:
obtaining a first image and a second image from dual cameras;
obtaining the depth information of each pixel in the image according to the disparity of the same object between the first image and the second image and the positional information of the dual cameras.
6. The method according to claim 2, characterized in that the step of forming the area where the subject object is located from the found pixels comprises:
judging whether the area of the region formed by the found pixels is larger than a preset area threshold;
if the area of the region formed by the found pixels is larger than the preset area threshold, forming the area where the subject object is located from the found pixels.
7. The method according to claim 2, characterized in that the step of forming the area where the subject object is located from the found pixels comprises:
judging whether the region formed by the found pixels is a continuous region;
if the region formed by the found pixels is a continuous region, forming the area where the subject object is located from the found pixels.
8. A device for displaying a focus frame, characterized in that the device comprises:
a focus area acquiring unit, configured to acquire the focus area selected on a camera image;
a subject-object area determination unit, configured to determine, according to the depth information of the camera image, the area where the subject object corresponding to the focus area is located;
a contour calculation and display unit, configured to calculate the contour of the area where the subject object is located according to that area, and to display the contour as the focus frame.
9. The device according to claim 8, characterized in that the focus area is a focus area searched for automatically by the camera, and the subject-object area determination unit specifically comprises:
a depth information acquisition subunit, configured to acquire the depth information of each pixel in the camera image;
a pixel search subunit, configured to search the image for pixels whose depth information falls within a set threshold range, according to the per-pixel depth information acquired by the depth information acquisition subunit;
a subject object acquisition subunit, configured to form the area where the subject object is located from the found pixels.
10. The device according to claim 9, characterized in that the focus area is a focus area specified by the user, and the pixel search subunit comprises:
a computing module, configured to calculate the mean depth information of the camera image;
a pixel search module, configured to set the threshold range to the range below the mean depth information calculated by the computing module, and to search for pixels whose depth information is below the mean depth information.
11. The device according to claim 9, characterized in that the pixel search subunit is configured to:
receive the depth information and depth radius input by the user, determine the threshold range from the depth radius centered on the user-specified depth information, and search the image for pixels whose depth information falls within the threshold range.
12. The device according to claim 9, characterized in that the depth information acquisition subunit comprises:
an image acquisition module, configured to obtain the first image and the second image from the dual cameras;
a depth information acquisition module, configured to obtain the depth information of each pixel in the image according to the disparity of the same object between the first and second images obtained by the image acquisition module and the positional information of the dual cameras.
13. The device according to claim 9, characterized in that the subject object acquisition subunit comprises:
a first judging module, configured to judge whether the area of the region formed by the found pixels is larger than a preset area threshold;
a first subject-object area determination module, configured to form the area where the subject object is located from the found pixels when the first judging module determines that the area of the region they form is larger than the preset area threshold.
14. The device according to claim 9, characterized in that the subject object acquisition subunit comprises:
a second judging module, configured to judge whether the region formed by the found pixels is a continuous region;
a second subject-object area determination module, configured to form the area where the subject object is located from the found pixels when the second judging module determines that the region they form is continuous.
15. A terminal, characterized in that the terminal comprises the focus-frame display device according to any one of claims 9-14.
16. The terminal according to claim 15, characterized in that the terminal is a smartphone or a tablet computer.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410715279.1A CN104363377B (en) | 2014-11-28 | 2014-11-28 | Display methods, device and the terminal of focus frame |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410715279.1A CN104363377B (en) | 2014-11-28 | 2014-11-28 | Display methods, device and the terminal of focus frame |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104363377A true CN104363377A (en) | 2015-02-18 |
CN104363377B CN104363377B (en) | 2017-08-29 |
Family
ID=52530600
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410715279.1A Active CN104363377B (en) | 2014-11-28 | 2014-11-28 | Display methods, device and the terminal of focus frame |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104363377B (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105100620A (en) * | 2015-07-30 | 2015-11-25 | 深圳市永兴元科技有限公司 | Shooting method and apparatus |
CN105827952A (en) * | 2016-02-01 | 2016-08-03 | 维沃移动通信有限公司 | Photographing method for removing specified object and mobile terminal |
CN106973227A (en) * | 2017-03-31 | 2017-07-21 | 努比亚技术有限公司 | Intelligent photographing method and device based on dual camera |
CN106991696A (en) * | 2017-03-09 | 2017-07-28 | 广东欧珀移动通信有限公司 | Backlight image processing method, backlight image processing apparatus and electronic installation |
CN107566731A (en) * | 2017-09-28 | 2018-01-09 | 努比亚技术有限公司 | A kind of focusing method and terminal, computer-readable storage medium |
CN107613208A (en) * | 2017-09-29 | 2018-01-19 | 努比亚技术有限公司 | Adjusting method and terminal, the computer-readable storage medium of a kind of focusing area |
CN108322726A (en) * | 2018-05-04 | 2018-07-24 | 浙江大学 | A kind of Atomatic focusing method based on dual camera |
CN110996003A (en) * | 2019-12-16 | 2020-04-10 | Tcl移动通信科技(宁波)有限公司 | Photographing positioning method and device and mobile terminal |
CN111669492A (en) * | 2019-03-06 | 2020-09-15 | 青岛海信移动通信技术股份有限公司 | Method for processing shot digital image by terminal and terminal |
CN112204945A (en) * | 2019-08-14 | 2021-01-08 | 深圳市大疆创新科技有限公司 | Image processing method, image processing apparatus, image capturing device, movable platform, and storage medium |
CN115037880A (en) * | 2022-07-13 | 2022-09-09 | 山西工程职业学院 | Quick focusing method for airborne camera |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20070117338A (en) * | 2006-06-08 | 2007-12-12 | 엘지전자 주식회사 | Mobile communication device having function for setting up focus area and method thereby |
CN104092946A (en) * | 2014-07-24 | 2014-10-08 | 北京智谷睿拓技术服务有限公司 | Image collection method and device |
CN104102068A (en) * | 2013-04-11 | 2014-10-15 | 聚晶半导体股份有限公司 | Automatic focusing method and automatic focusing device |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20070117338A (en) * | 2006-06-08 | 2007-12-12 | 엘지전자 주식회사 | Mobile communication device having function for setting up focus area and method thereby |
CN104102068A (en) * | 2013-04-11 | 2014-10-15 | 聚晶半导体股份有限公司 | Automatic focusing method and automatic focusing device |
CN104092946A (en) * | 2014-07-24 | 2014-10-08 | 北京智谷睿拓技术服务有限公司 | Image collection method and device |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105100620A (en) * | 2015-07-30 | 2015-11-25 | 深圳市永兴元科技有限公司 | Shooting method and apparatus |
CN105100620B (en) * | 2015-07-30 | 2018-05-25 | 深圳市永兴元科技股份有限公司 | Image pickup method and device |
CN105827952B (en) * | 2016-02-01 | 2019-05-17 | 维沃移动通信有限公司 | A kind of photographic method and mobile terminal removing specified object |
CN105827952A (en) * | 2016-02-01 | 2016-08-03 | 维沃移动通信有限公司 | Photographing method for removing specified object and mobile terminal |
CN106991696A (en) * | 2017-03-09 | 2017-07-28 | 广东欧珀移动通信有限公司 | Backlight image processing method, backlight image processing apparatus and electronic installation |
US11295421B2 (en) | 2017-03-09 | 2022-04-05 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Image processing method, image processing device and electronic device |
CN106973227A (en) * | 2017-03-31 | 2017-07-21 | 努比亚技术有限公司 | Intelligent photographing method and device based on dual camera |
CN107566731A (en) * | 2017-09-28 | 2018-01-09 | 努比亚技术有限公司 | A kind of focusing method and terminal, computer-readable storage medium |
CN107613208B (en) * | 2017-09-29 | 2020-11-06 | 努比亚技术有限公司 | Focusing area adjusting method, terminal and computer storage medium |
CN107613208A (en) * | 2017-09-29 | 2018-01-19 | 努比亚技术有限公司 | Adjusting method and terminal, the computer-readable storage medium of a kind of focusing area |
CN108322726A (en) * | 2018-05-04 | 2018-07-24 | 浙江大学 | A kind of Atomatic focusing method based on dual camera |
CN111669492A (en) * | 2019-03-06 | 2020-09-15 | 青岛海信移动通信技术股份有限公司 | Method for processing shot digital image by terminal and terminal |
CN112204945A (en) * | 2019-08-14 | 2021-01-08 | 深圳市大疆创新科技有限公司 | Image processing method, image processing apparatus, image capturing device, movable platform, and storage medium |
CN110996003A (en) * | 2019-12-16 | 2020-04-10 | Tcl移动通信科技(宁波)有限公司 | Photographing positioning method and device and mobile terminal |
CN110996003B (en) * | 2019-12-16 | 2022-03-25 | Tcl移动通信科技(宁波)有限公司 | Photographing positioning method and device and mobile terminal |
CN115037880A (en) * | 2022-07-13 | 2022-09-09 | 山西工程职业学院 | Quick focusing method for airborne camera |
Also Published As
Publication number | Publication date |
---|---|
CN104363377B (en) | 2017-08-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104363378A (en) | Camera focusing method, camera focusing device and terminal | |
CN104363377A (en) | Method and apparatus for displaying focus frame as well as terminal | |
CN104333748A (en) | Method, device and terminal for obtaining image main object | |
US10769470B2 (en) | Method and system for optimizing an image capturing boundary in a proposed image | |
WO2021052487A1 (en) | Method and apparatus for obtaining extended depth of field image, and electronic device | |
US20190197735A1 (en) | Method and apparatus for image processing, and robot using the same | |
US10269130B2 (en) | Methods and apparatus for control of light field capture object distance adjustment range via adjusting bending degree of sensor imaging zone | |
WO2019105214A1 (en) | Image blurring method and apparatus, mobile terminal and storage medium | |
US10915998B2 (en) | Image processing method and device | |
KR101612727B1 (en) | Method and electronic device for implementing refocusing | |
CN104333710A (en) | Camera exposure method, camera exposure device and camera exposure equipment | |
EP3036901B1 (en) | Method, apparatus and computer program product for object detection and segmentation | |
CN111726521B (en) | Photographing method and photographing device of terminal and terminal | |
US10237491B2 (en) | Electronic apparatus, method of controlling the same, for capturing, storing, and reproducing multifocal images | |
US20170054897A1 (en) | Method of automatically focusing on region of interest by an electronic device | |
CN103837129B (en) | Distance-finding method in a kind of terminal, device and terminal | |
CN105144710A (en) | Technologies for increasing the accuracy of depth camera images | |
CN108702457B (en) | Method, apparatus and computer-readable storage medium for automatic image correction | |
CN111314689B (en) | Method and electronic device for identifying foreground object in image | |
US20220343520A1 (en) | Image Processing Method and Image Processing Apparatus, and Electronic Device Using Same | |
CN104754234A (en) | Photographing method and device | |
CN104184935A (en) | Image shooting device and method | |
CN112017137A (en) | Image processing method, image processing device, electronic equipment and computer readable storage medium | |
CN106778689A (en) | A kind of iris capturing recognition methods of dual camera and device | |
CN105467741A (en) | Panoramic shooting method and terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
CP03 | Change of name, title or address |
Address after: No. 18 Haibin Road, Wusha, Chang'an Town, Dongguan, Guangdong 523860 Patentee after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., Ltd. Address before: No. 18 Haibin Road, Wusha, Chang'an Town, Dongguan, Guangdong 523841 Patentee before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS CORP., Ltd. |
|
CP03 | Change of name, title or address |