CN104793910A - Method and electronic equipment for processing information - Google Patents


Info

Publication number
CN104793910A
Authority
CN
China
Prior art keywords
depth
preview
area
information
view information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410025858.3A
Other languages
Chinese (zh)
Other versions
CN104793910B (en)
Inventor
黄茂林
郑启忠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201410025858.3A priority Critical patent/CN104793910B/en
Publication of CN104793910A publication Critical patent/CN104793910A/en
Application granted granted Critical
Publication of CN104793910B publication Critical patent/CN104793910B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Studio Devices (AREA)
  • Indication In Cameras, And Counting Of Exposures (AREA)

Abstract

The invention discloses a method and electronic equipment for processing information. The electronic equipment comprises an image collecting unit. When a first preview image of a first space area is obtained through the image collecting unit and displayed on a display unit of the electronic equipment, the method comprises the steps of: obtaining depth information of the first preview image, wherein the depth information is used for representing the distance between the image collecting unit and feature points in the first space area; determining first depth-of-field information of the first preview image; and determining, according to the depth information and the first depth-of-field information, a first preview area and a second preview area in the first preview image, wherein the first preview area corresponds to a first depth-of-field area where the first depth-of-field information is located, the second preview area corresponds to the first space area other than the first depth-of-field area, and the first preview area and the second preview area have different display modes.

Description

Information processing method and electronic device
Technical field
The present application relates to the field of electronic technology, and in particular to an information processing method and an electronic device.
Background
With the rapid development of electronic technology, the functions of electronic devices have become ever richer and more user-friendly, giving users a better experience. Taking mobile phones as an example, today's smartphones have effectively evolved into small computers: besides the basic call function, they offer large storage capacity and support for a wide variety of software, and their functions are made increasingly refined.
Most existing cameras can capture images with a depth-of-field effect. The depth of field refers to the range before and after the focus plane within which the image appears acceptably sharp; objects within this range are rendered clearly, while objects in front of or behind it appear blurred. The depth of field of a captured picture is related to the camera's sensor size, lens focal length, focusing distance, and f-number. In general, a smaller sensor gives a larger depth of field, a shorter focal length gives a larger depth of field, a longer focusing distance gives a larger depth of field, and a larger f-number (smaller aperture) gives a larger depth of field; conversely, the depth of field is smaller.
At present, to shoot an image with a desired depth-of-field effect, a professional camera lens can be used and the front and rear field depths read from a depth-of-field scale. However, this approach of setting parameters directly on the camera has no depth-of-field preview function, is not intuitive, requires considerable photographic knowledge and experience, and cannot be realized on devices such as mobile phones and compact cameras.
Some cameras do provide a depth-of-field preview function. However, because such cameras generally preview at maximum aperture, which is not necessarily the aperture used for the actual shot, the preview only roughly shows, under the current settings and scene, which regions before and after the focus point are blurred and which are sharp — that is, the degree of background blur — to help obtain the desired shooting effect. In practice, the screen or viewfinder is small, the depth-of-field range is hard to determine, and the judgment is subjective and varies from person to person, so the depth-of-field effect of the captured image may be inconsistent with what was anticipated.
It can be seen that existing image capture devices suffer from the technical problem of being unable to provide an intuitive depth-of-field preview.
Summary of the invention
By providing an information processing method and an electronic device, the embodiments of the present application solve the technical problem that existing image capture devices cannot provide an intuitive depth-of-field preview.
In one aspect, an embodiment of the present application provides an information processing method applied to an electronic device comprising an image acquisition unit. When a first preview image of a first spatial area is obtained through the image acquisition unit and displayed on a display unit of the electronic device, the method comprises: obtaining depth information of the first preview image, wherein the depth information characterizes the distances between the image acquisition unit and feature points in the first spatial area; determining first depth-of-field information of the first preview image; and determining, according to the depth information and the first depth-of-field information, a first preview area and a second preview area in the first preview image, wherein the first preview area corresponds to a first depth-of-field region in which the first depth-of-field information is located, the second preview area corresponds to the part of the first spatial area outside the first depth-of-field region, and the first preview area and the second preview area have different display modes.
Optionally, obtaining the depth information of the first preview image specifically comprises: obtaining, by means of a camera array or structured light, M distance values between M feature points in the first spatial area and the image acquisition unit, the M distance values constituting the depth information of the first preview image, M being an integer greater than or equal to 1.
Optionally, determining the first depth-of-field information of the first preview image specifically comprises: calculating the first depth-of-field information of the first preview image according to the focal length, f-number, and focusing distance of the image acquisition unit.
Optionally, determining the first preview area and the second preview area in the first preview image according to the depth information and the first depth-of-field information, wherein the first preview area corresponds to the first depth-of-field region in which the first depth-of-field information is located, the second preview area corresponds to the part of the first spatial area outside the first depth-of-field region, and the first preview area and the second preview area have different display modes, specifically comprises: determining, based on the depth information and the first depth-of-field information, the N feature points corresponding to the N distance values falling within the first depth-of-field information from among the M distance values, N being an integer greater than or equal to 1 and less than or equal to M; and displaying the N feature points in the first preview area in a first display mode and the remaining M−N feature points in the second preview area in a second display mode, the first display mode being different from the second display mode.
Optionally, determining, based on the depth information and the first depth-of-field information, the N feature points corresponding to the N distance values falling within the first depth-of-field information from among the M distance values specifically comprises: when the first depth-of-field region in which the first depth-of-field information is located is the distance range between distance D1 and distance D2, determining from the M distance values the N distance values that are greater than D1 and less than D2, thereby determining the N feature points corresponding to the N distance values falling within the first depth-of-field information.
Optionally, displaying the N feature points in the first preview area in the first display mode and the remaining M−N feature points in the second preview area in the second display mode, the first display mode being different from the second display mode, specifically comprises: in the first preview image, displaying the first preview area corresponding to the N feature points with a first brightness and the second preview area corresponding to the M−N feature points with a second brightness, the first brightness being different from the second brightness.
In another aspect, an embodiment of the present application further provides an electronic device comprising an image acquisition unit. When a first preview image of a first spatial area is obtained through the image acquisition unit and displayed on a display unit of the electronic device, the electronic device further comprises: a depth acquiring unit, configured to obtain depth information of the first preview image, wherein the depth information characterizes the distances between the image acquisition unit and feature points in the first spatial area; a depth-of-field determining unit, configured to determine first depth-of-field information of the first preview image; and an area determining unit, configured to determine, according to the depth information and the first depth-of-field information, a first preview area and a second preview area in the first preview image, wherein the first preview area corresponds to the first depth-of-field region in which the first depth-of-field information is located, the second preview area corresponds to the part of the first spatial area outside the first depth-of-field region, and the first preview area and the second preview area have different display modes.
Optionally, the depth acquiring unit is specifically configured to: obtain, by means of a camera array or structured light, M distance values between M feature points in the first spatial area and the image acquisition unit, the M distance values constituting the depth information of the first preview image, M being an integer greater than or equal to 1.
Optionally, the depth-of-field determining unit is specifically configured to: calculate the first depth-of-field information of the first preview image according to the focal length, f-number, and focusing distance of the image acquisition unit.
Optionally, the area determining unit specifically comprises: a determining subunit, configured to determine, based on the depth information and the first depth-of-field information, the N feature points corresponding to the N distance values falling within the first depth-of-field information from among the M distance values, N being an integer greater than or equal to 1 and less than or equal to M; and a display unit, configured to display the N feature points in the first preview area in a first display mode and the remaining M−N feature points in the second preview area in a second display mode, the first display mode being different from the second display mode.
Optionally, the determining subunit is specifically configured to: when the first depth-of-field region in which the first depth-of-field information is located is the distance range between distance D1 and distance D2, determine from the M distance values the N distance values that are greater than D1 and less than D2, thereby determining the N feature points corresponding to the N distance values falling within the first depth-of-field information.
Optionally, the display unit is specifically configured to: in the first preview image, display the first preview area corresponding to the N feature points with a first brightness and the second preview area corresponding to the M−N feature points with a second brightness, the first brightness being different from the second brightness.
The one or more technical solutions provided in the embodiments of the present application have at least the following technical effects or advantages:
(1) In the embodiments of the present application, when an image gathered by the image acquisition unit is to be previewed, the depth information and depth-of-field information of the preview image are obtained, and a first preview area and a second preview area with two different display modes are determined in the preview image according to that information, so that the content within the depth of field is displayed in the first preview area and the content outside the depth of field in the second preview area. This solves the technical problem that existing image capture devices cannot provide an intuitive depth-of-field preview, and achieves the technical effect of presenting the in-focus and out-of-focus parts of the preview image to the user through different display modes.
(2) In the embodiments of the present application, displaying the first preview area and the second preview area of the first preview image in different display modes distinguishes the depth-of-field region from the non-depth-of-field region more intuitively, achieving the technical effect of making it convenient for the user to adjust the picture before shooting.
Brief description of the drawings
Fig. 1 is a flowchart of the information processing method provided by an embodiment of the present application;
Fig. 2 is a schematic diagram of collecting depth information, provided by an embodiment of the present application;
Fig. 3 is a schematic diagram of determining depth-of-field information, provided by an embodiment of the present application;
Fig. 4 is a schematic diagram of dividing the first preview image into the first preview area and the second preview area, provided by an embodiment of the present application;
Fig. 5 is a schematic structural diagram of the electronic device provided by an embodiment of the present application.
Detailed description of the embodiments
By providing an information processing method and an electronic device, the embodiments of the present application solve the technical problem that existing image capture devices cannot provide an intuitive depth-of-field preview.
The general idea of the technical solutions in the embodiments of the present application for solving the above problem is as follows:
An information processing method is provided, applied to an electronic device comprising an image acquisition unit. When a first preview image of a first spatial area is obtained through the image acquisition unit and displayed on a display unit of the electronic device, the method comprises: obtaining depth information of the first preview image, wherein the depth information characterizes the distances between the image acquisition unit and feature points in the first spatial area; determining first depth-of-field information of the first preview image; and determining, according to the depth information and the first depth-of-field information, a first preview area and a second preview area in the first preview image, wherein the first preview area corresponds to the first depth-of-field region in which the first depth-of-field information is located, the second preview area corresponds to the part of the first spatial area outside the first depth-of-field region, and the first preview area and the second preview area have different display modes.
It can be seen that, when an image gathered by the image acquisition unit is to be previewed, the embodiments of the present application obtain the depth information and depth-of-field information of the preview image and use them to determine a first preview area and a second preview area with two different display modes, so that the content within the depth of field is displayed in the first preview area and the content outside the depth of field in the second preview area. This solves the technical problem that existing image capture devices cannot provide an intuitive depth-of-field preview, and achieves the technical effect of presenting the in-focus and out-of-focus parts of the preview image to the user through different display modes.
For a better understanding of the above technical solutions, they are described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific features in the embodiments of the present application are a detailed explanation of the technical solutions rather than a limitation on them, and that, where no conflict arises, the technical features in the embodiments of the present application may be combined with one another.
The electronic device to which the information processing method provided in the present application is applied mainly refers to an electronic device comprising an image acquisition unit, the image acquisition unit specifically being a camera. The electronic device mentioned in the present application is therefore any device provided with a camera — for example a camera, tablet computer, notebook computer, smartphone, or PDA (Personal Digital Assistant) — as long as it can obtain images through the camera.
When the electronic device captures an image of the first spatial area through the image acquisition unit, it first obtains the first preview image of the first spatial area and displays it on the display unit of the electronic device, so that the user can preview the image to be taken; for example, whether shooting with a professional camera or with an electronic device such as a mobile phone, the user can preview the scenery to be captured on the display screen before pressing the shutter. However, because the image capture device cannot intuitively reflect the depth-of-field effect in the preview image, there is a certain difference between the picture the user previews and the image actually captured, and multiple shots may be needed to obtain a satisfactory image. Therefore, to solve the technical problem that existing image capture devices cannot provide an intuitive depth-of-field preview, the embodiments of the present application provide the following method.
As shown in Fig. 1, the information processing method provided by the embodiment of the present application specifically comprises the following steps:
S1: obtaining depth information of the first preview image, wherein the depth information characterizes the distances between the image acquisition unit and feature points in the first spatial area.
In a specific implementation, when the first preview image of the first spatial area has been obtained through the image acquisition unit and is displayed on the display unit, step S1 is performed to obtain the depth information of the first preview image — that is, the depth information of the scenery contained in the first preview image. The depth information characterizes the distances between the image acquisition unit and the feature points in the first spatial area. Feature points are relatively prominent points that the system selects from the scenery in the first spatial area when acquiring the depth information. For example, if there is a desk in the first spatial area, the four table legs and the corners of the desktop can be chosen as feature points; by obtaining the distance between each feature point and the image acquisition unit, the approximate position of the desk in the first spatial area, i.e. the depth information, can be derived.
Further, step S1 specifically comprises:
obtaining, by means of a camera array or structured light, M distance values between M feature points in the first spatial area and the image acquisition unit, the M distance values constituting the depth information of the first preview image, M being an integer greater than or equal to 1.
In a specific implementation, the embodiment of the present application places no particular restriction on how the depth information is obtained, but the depth information of the first preview image is obtained mainly by acquiring the M distance values between the M feature points in the first spatial area and the image acquisition unit, the M distance values constituting the depth information of the first preview image. The depth information can be obtained by a camera-array method or a structured-light method. In fact, when the image acquisition unit captures the image of the first spatial area, it has an acquisition range; that is, the M feature points in the first spatial area mentioned in the embodiment of the present application specifically refer to M feature points within the acquisition range. As shown in Fig. 2, when the image acquisition unit 200 in the electronic device 100 captures an image of the first spatial area, an acquisition area 800 is formed. The system selects M feature points in the acquisition area 800 and obtains the distance value from each feature point to the image acquisition unit 200 by the camera-array or structured-light method; for example, the distance from the feature point 501 in Fig. 2 to the image acquisition unit 200 is 502, that is, the depth value of the feature point 501 is 502. The camera-array method mainly uses multiple cameras and determines the depth value of a feature point from line-of-sight equations; the structured-light method projects light into the spatial area so that each feature point forms a light spot, and the distance between the spot and the image acquisition unit is the depth information of that feature point.
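As a rough illustration of the camera-array approach described above, the distance of a feature point can be recovered from its disparity between two cameras a known baseline apart, using the standard stereo relation Z = f·B/d. This is only a hypothetical sketch; the function name and all sample values below are assumed, not taken from the patent.

```python
# Hypothetical sketch of camera-array (stereo) depth estimation:
# with two cameras a baseline B apart, a feature point's distance is
# Z = f * B / d, where d is its disparity between the two views.
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Distance (metres) of one feature point from the camera pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# M feature points -> M distance values (the "depth information");
# the disparities below are made-up sample values.
disparities_px = [120.0, 60.0, 30.0]
depths_m = [depth_from_disparity(700.0, 0.1, d) for d in disparities_px]
```

Note how a smaller disparity yields a larger depth value, which matches the intuition that distant feature points shift less between the two views.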
S2: determining the first depth-of-field information of the first preview image.
Further, step S2 specifically comprises:
calculating the first depth-of-field information of the first preview image according to the focal length, f-number, and focusing distance of the image acquisition unit.
In a specific implementation, the depth-of-field information can be obtained from the depth-of-field formulas, as shown in Fig. 3, where δ is the permissible circle-of-confusion diameter, f is the lens focal length, F is the shooting f-number of the lens, L is the focusing distance, ΔL1 is the front depth of field, ΔL2 is the rear depth of field, and ΔL is the depth of field:
front depth of field: ΔL1 = FδL² / (f² + FδL);
rear depth of field: ΔL2 = FδL² / (f² − FδL);
depth of field: ΔL = ΔL1 + ΔL2 = 2f²FδL² / (f⁴ − F²δ²L²);
δ = 0.125·B·V / (300·C), where B is the sensor diagonal length, C is the picture diagonal length, and V is the viewing distance.
As these formulas show, the depth of field ΔL is related only to the aperture F, the lens focal length f, the shooting distance L, and the sharpness requirement on the lens (expressed as the permissible circle-of-confusion diameter δ). Since B, C, and V are all constants, δ is also constant, so ΔL depends only on F, f, and L; once the aperture F, focal length f, and shooting distance L of the lens are fixed, the depth of field ΔL is fixed. The aperture F and focal length f can be obtained from the camera's settings or the adjusted lens position, and the shooting distance L — the distance from the focus point to the camera — can be obtained from the depth information acquired in step S1.
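The front and rear depth-of-field formulas above translate directly into code. The following is an illustrative sketch only — units (millimetres) and the sample lens parameters are assumptions, not values from the patent:

```python
# Illustrative translation of the depth-of-field formulas:
# ΔL1 = FδL² / (f² + FδL), ΔL2 = FδL² / (f² − FδL).
# All lengths in millimetres; sample values below are hypothetical.
def depth_of_field(f, F, L, delta):
    """Return (front DOF, rear DOF, total DOF) for focal length f,
    f-number F, focusing distance L, circle-of-confusion delta."""
    front = F * delta * L**2 / (f**2 + F * delta * L)  # ΔL1
    rear = F * delta * L**2 / (f**2 - F * delta * L)   # ΔL2
    return front, rear, front + rear                   # ΔL = ΔL1 + ΔL2

# e.g. a 50 mm lens at f/2.8 focused at 2 m with δ = 0.03 mm
front, rear, total = depth_of_field(50.0, 2.8, 2000.0, 0.03)
```

Consistent with the formulas, the rear depth of field (smaller denominator) is always larger than the front depth of field, and the sum equals the combined expression 2f²FδL² / (f⁴ − F²δ²L²).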
S3: determining the first preview area and the second preview area in the first preview image according to the depth information and the first depth-of-field information, wherein the first preview area corresponds to the first depth-of-field region in which the first depth-of-field information is located, the second preview area corresponds to the part of the first spatial area outside the first depth-of-field region, and the first preview area and the second preview area have different display modes.
In a specific implementation, after the depth information and depth-of-field information of the first preview image have been obtained through steps S1 and S2, step S3 mainly combines the depth information with the depth-of-field information; that is, the depth-of-field information is displayed intuitively in the image by means of the depth information. First, according to the depth information over the whole acquisition area, the depth-of-field region in which the depth-of-field information is located is determined within the acquisition range. Since the first preview image shows the scenery of the whole acquisition range, this amounts to dividing the first preview image into the first preview area and the second preview area, where the first preview area corresponds to the depth-of-field region and the second preview area corresponds to the non-depth-of-field region. The first preview area and the second preview area are then displayed in different display modes, so that the user can intuitively identify the depth-of-field region and the non-depth-of-field region in the first preview image.
Further, step S3 specifically comprises:
determining, based on the depth information and the first depth-of-field information, the N feature points corresponding to the N distance values falling within the first depth-of-field information from among the M distance values, N being an integer greater than or equal to 1 and less than or equal to M; and
displaying the N feature points in the first preview area in a first display mode, and displaying the remaining M−N feature points in the second preview area in a second display mode, the first display mode being different from the second display mode.
In a specific implementation, the main way in which step S3 determines the first preview area and the second preview area in the first preview image (that is, the depth-of-field region and the non-depth-of-field region) is as follows: according to the M distance values of the M feature points in the depth information and the depth of field ΔL of the first depth-of-field information, the N distance values lying within ΔL are singled out from the M distance values, so that the N feature points corresponding to those N distance values are determined to belong to the depth-of-field region. The N feature points can therefore be displayed in the first preview area in the first display mode, while the remaining M−N feature points are displayed in the second preview area in the second display mode.
Further, determining, based on the depth information and the first depth-of-field information, the N feature points corresponding to the N distance values falling within the first depth-of-field information from among the M distance values specifically comprises:
when the first depth-of-field region in which the first depth-of-field information is located is the distance range between distance D1 and distance D2, determining from the M distance values the N distance values that are greater than D1 and less than D2, thereby determining the N feature points corresponding to the N distance values falling within the first depth-of-field information.
In a specific implementation, the way of determining the N feature points mentioned in the above embodiment is specifically as shown in Fig. 4: within the acquisition range 800, the first depth-of-field region 801 corresponding to the depth of field ΔL is the range whose distance from the image acquisition unit 200 lies between D1 and D2. If the N distance values corresponding to N of the M feature points in the depth information are greater than D1 and less than D2, those N feature points belong to the first depth-of-field region 801; the N feature points in the first depth-of-field region 801 belong to the first preview area, while the regions before and after the first depth-of-field region 801 are the non-depth-of-field region 802 and belong to the second preview area. By displaying the first preview area and the second preview area in different display modes, the depth-of-field region and the non-depth-of-field region can be shown very intuitively.
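The D1-to-D2 selection described above can be sketched as a simple partition of the M feature points by their distance values. The helper name and the sample points below are hypothetical, used only to illustrate the rule "greater than D1 and less than D2":

```python
# Illustrative sketch of step S3: split the M feature points into the
# N in-focus points (distance strictly between D1 and D2) and the
# M-N remaining points. Sample data is made up.
def split_preview_points(points, d1, d2):
    """points: list of (point_id, distance). Returns (points of the
    first preview area, points of the second preview area)."""
    first = [p for p in points if d1 < p[1] < d2]
    second = [p for p in points if not (d1 < p[1] < d2)]
    return first, second

sample = [("p1", 1.0), ("p2", 2.5), ("p3", 2.9), ("p4", 4.0)]
in_dof, out_dof = split_preview_points(sample, 2.0, 3.0)  # D1=2 m, D2=3 m
```

With D1 = 2 and D2 = 3, only the points at 2.5 and 2.9 fall inside the depth-of-field region; the others belong to the second preview area.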
Further, described described N number of unique point is presented at described first preview area with the first display mode, and a described M-N unique point second display mode is presented at described second preview area, described first display mode is different display modes from described second display mode, is specially:
In described first preview image, the first preview area of described N number of Feature point correspondence is shown with the first brightness, and the second preview area of a described M-N Feature point correspondence is shown with the second brightness, described first brightness is different display brightness from described second brightness.
In a specific implementation, the embodiments of the present application do not limit the specific forms of the first and second display modes, other than that they must be two different display modes. In a preferred embodiment, the display modes differ in brightness: the first preview area containing the N feature points is displayed at a first brightness, and the second preview area containing the M-N feature points is displayed at a second brightness, so that the two preview areas are distinguished by two different brightness levels and the depth-of-field region is shown intuitively. The feature points mentioned in the embodiments of the present application are in fact pixels in the picture; displaying the first and second preview areas at different brightness therefore amounts to displaying the pixels of the depth-of-field region and the pixels of the non-depth-of-field region at different brightness. Besides different display brightness, a blinking mode may also be used. For example, when previewing a frame during actual shooting, the region outside the depth of field is highlighted by blinking, such as blinking for 1 second and pausing for 3 seconds, while the non-blinking region indicates to the user the sharper, in-focus depth-of-field region of the picture. This makes it convenient for the user to adjust the preview frame by changing the camera parameters or the shooting position.
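The brightness-based display mode can be sketched as a per-pixel operation: pixels outside the depth of field are dimmed so the sharp region stands out. This is a minimal illustrative sketch (plain nested lists standing in for an image buffer; the names and the 0.4 dimming factor are assumptions):

```python
def render_preview(pixels, in_dof_mask, dim_factor=0.4):
    """Return a preview frame in which pixels outside the depth of field
    (mask value False) are dimmed, while in-focus pixels keep full brightness."""
    return [
        [p if inside else int(p * dim_factor)
         for p, inside in zip(row, mask_row)]
        for row, mask_row in zip(pixels, in_dof_mask)
    ]

# 2x2 grayscale frame; True marks pixels inside the depth-of-field region
frame = render_preview([[200, 200], [100, 100]],
                       [[True, False], [False, True]])
# frame == [[200, 80], [40, 100]]
```

A blinking prompt such as the 1 s on / 3 s off cycle described above could be implemented by toggling `dim_factor` between 1.0 and 0.4 on a timer; that scheduling is omitted here.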
Based on the same inventive concept, an embodiment of the present application further provides an electronic device. The electronic device comprises an image acquisition unit. When a first preview image of a first spatial area is obtained through the image acquisition unit and displayed on a display unit of the electronic device, the electronic device comprises, as shown in Figure 5:
a depth acquiring unit 10, configured to obtain depth information of the first preview image, wherein the depth information characterizes the distances between the image acquisition unit and feature points in the first spatial area;
a depth-of-field determining unit 20, configured to determine first depth-of-field information in the first preview image; and
an area determining unit 30, configured to determine a first preview area and a second preview area in the first preview image according to the depth information and the first depth-of-field information, wherein the first preview area corresponds to a first depth-of-field region in which the first depth-of-field information lies, the second preview area corresponds to the part of the first spatial area outside the first depth-of-field region, and the first preview area and the second preview area have different display modes.
Further, the depth acquiring unit 10 is specifically configured to:
obtain, by means of a camera array or structured light, M distance values between M feature points in the first spatial area and the image acquisition unit, the M distance values constituting the depth information of the first preview image, M being an integer greater than or equal to 1.
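The patent does not give formulas for the camera-array case, but depth from a two-camera array is conventionally triangulated from disparity (depth = focal length x baseline / disparity). The sketch below is illustrative only; the function and parameter names are assumptions:

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Triangulate the distance to a feature point from a two-camera array:
    depth = focal length (pixels) * baseline (metres) / disparity (pixels)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# 1000 px focal length, 5 cm baseline, 20 px disparity -> 2.5 m
d = stereo_depth(1000, 0.05, 20)
```

Repeating this per matched feature point yields the M distance values that constitute the depth information; a structured-light unit would obtain the same values by decoding a projected pattern instead.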
Further, the depth-of-field determining unit 20 is specifically configured to:
calculate the first depth-of-field information of the first preview image according to the focal length, f-number, and focusing distance of the image acquisition unit.
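The patent does not spell out this calculation, but a common thin-lens approximation derives the near and far depth-of-field limits from exactly these three inputs plus a circle-of-confusion constant. The following Python sketch is illustrative, not the patented method:

```python
def depth_of_field(focal_mm, f_number, focus_mm, coc_mm=0.03):
    """Approximate near/far depth-of-field limits (thin-lens model).
    coc_mm is the circle of confusion; 0.03 mm is a common full-frame value."""
    # Hyperfocal distance: H = f^2 / (N * c) + f
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = focus_mm * (hyperfocal - focal_mm) / (hyperfocal + focus_mm - 2 * focal_mm)
    if focus_mm >= hyperfocal:
        far = float("inf")  # beyond the hyperfocal distance, the far limit is infinite
    else:
        far = focus_mm * (hyperfocal - focal_mm) / (hyperfocal - focus_mm)
    return near, far

# 50 mm lens at f/8 focused at 3 m: depth of field roughly 2.3 m to 4.2 m
d1, d2 = depth_of_field(50, 8, 3000)
```

The resulting (D1, D2) pair is precisely the distance range that the area determining unit compares the feature-point distances against.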
Further, the area determining unit 30 specifically comprises:
a determining subunit, configured to determine, based on the depth information and the first depth-of-field information, the N feature points corresponding to the N distance values that lie within the first depth-of-field information from the M distance values, N being an integer greater than or equal to 1 and less than or equal to M; and
a display subunit, configured to display the N feature points in the first preview area in a first display mode and display the M-N feature points in the second preview area in a second display mode, the first display mode being different from the second display mode.
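Putting the two subunits together, the area determining unit's role can be sketched as a small class that maps feature-point distances to the two preview areas. This is an illustrative sketch; the class and method names are assumptions:

```python
class AreaDeterminationUnit:
    """Combines depth information (per-feature-point distances) with the
    depth-of-field limits (d1, d2) to split a preview into the first
    (in-focus) and second (out-of-focus) preview areas."""

    def __init__(self, d1, d2):
        self.d1, self.d2 = d1, d2

    def determine(self, depth_info):
        """depth_info maps a feature-point label to its distance in metres."""
        first = {p for p, d in depth_info.items() if self.d1 < d < self.d2}
        second = set(depth_info) - first
        return first, second

unit = AreaDeterminationUnit(2.0, 5.0)
first_area, second_area = unit.determine({"tree": 3.1, "wall": 8.0, "face": 2.4})
# first_area == {"tree", "face"}, second_area == {"wall"}
```

A real implementation would operate on per-pixel depth maps rather than labelled points, but the partitioning logic is the same.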
Further, the determining subunit is specifically configured to:
when the first depth-of-field region in which the first depth-of-field information lies is the distance range between distance D1 and distance D2, determine from the M distance values the N distance values that are greater than D1 and less than D2, thereby determining the N feature points corresponding to the N distance values that lie within the first depth-of-field information.
Further, the display subunit is specifically configured to:
display, in the first preview image, the first preview area corresponding to the N feature points at a first brightness, and display the second preview area corresponding to the M-N feature points at a second brightness, the first brightness being a different display brightness from the second brightness.
According to the above description of the information processing method provided in the present application, the above electronic device is used to implement that method, so the working process of the electronic device is consistent with the one or more embodiments of the method and is not repeated here.
The one or more technical solutions provided in the embodiments of the present application have at least the following technical effects or advantages:
(1) In the embodiments of the present application, when an image gathered by the image acquisition unit is to be previewed, the depth information and depth-of-field information of the preview image are obtained, and a first preview area and a second preview area with two different display modes are determined in the preview image according to that depth information and depth-of-field information, so that the in-focus content is presented in the first preview area and the out-of-focus content in the second preview area. This solves the technical problem that existing image capture devices cannot provide an intuitive depth-of-field preview, and achieves the technical effect of presenting the in-focus and out-of-focus parts of the preview image to the user through different display modes.
(2) In the embodiments of the present application, by displaying the first preview area and the second preview area of the first preview image in different display modes, the depth-of-field region and the non-depth-of-field region are distinguished and displayed more intuitively, achieving the technical effect of making it convenient for the user to adjust the picture to be shot.
Those skilled in the art should understand that embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce a device for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a specific way, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a sequence of operational steps is performed on the computer or other programmable device to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Specifically, the computer program instructions corresponding to the information processing method in the embodiments of the present application may be stored on storage media such as optical discs, hard disks, or USB flash drives. When the computer program instructions corresponding to the information processing method are read or executed from the storage medium by an electronic device, the following steps are included:
obtaining depth information of the first preview image, wherein the depth information characterizes the distances between the image acquisition unit and feature points in the first spatial area; determining first depth-of-field information in the first preview image; and determining a first preview area and a second preview area in the first preview image according to the depth information and the first depth-of-field information, wherein the first preview area corresponds to a first depth-of-field region in which the first depth-of-field information lies, the second preview area corresponds to the part of the first spatial area outside the first depth-of-field region, and the first preview area and the second preview area have different display modes.
Optionally, obtaining the depth information of the first preview image specifically comprises: obtaining, by means of a camera array or structured light, M distance values between M feature points in the first spatial area and the image acquisition unit, the M distance values constituting the depth information of the first preview image, M being an integer greater than or equal to 1.
Optionally, determining the first depth-of-field information in the first preview image specifically comprises: calculating the first depth-of-field information of the first preview image according to the focal length, f-number, and focusing distance of the image acquisition unit.
Optionally, determining the first preview area and the second preview area in the first preview image according to the depth information and the first depth-of-field information, wherein the first preview area corresponds to the first depth-of-field region in which the first depth-of-field information lies, the second preview area corresponds to the part of the first spatial area outside the first depth-of-field region, and the first preview area and the second preview area have different display modes, specifically comprises: determining, based on the depth information and the first depth-of-field information, the N feature points corresponding to the N distance values that lie within the first depth-of-field information from the M distance values, N being an integer greater than or equal to 1 and less than or equal to M; and displaying the N feature points in the first preview area in a first display mode and the M-N feature points in the second preview area in a second display mode, the first display mode being different from the second display mode.
Optionally, determining, based on the depth information and the first depth-of-field information, the N feature points corresponding to the N distance values that lie within the first depth-of-field information from the M distance values specifically comprises: when the first depth-of-field region in which the first depth-of-field information lies is the distance range between distance D1 and distance D2, determining from the M distance values the N distance values that are greater than D1 and less than D2, thereby determining the N feature points corresponding to the N distance values that lie within the first depth-of-field information.
Optionally, displaying the N feature points in the first preview area in a first display mode and the M-N feature points in the second preview area in a second display mode, the first display mode being different from the second display mode, specifically comprises: displaying, in the first preview image, the first preview area corresponding to the N feature points at a first brightness, and displaying the second preview area corresponding to the M-N feature points at a second brightness, the first brightness being a different display brightness from the second brightness.
Although preferred embodiments of the present invention have been described, those skilled in the art, once they grasp the basic inventive concept, may make further changes and modifications to these embodiments. The appended claims are therefore intended to be interpreted as covering the preferred embodiments and all changes and modifications falling within the scope of the present invention.
Obviously, those skilled in the art can make various changes and variations to the present invention without departing from its spirit and scope. If these modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to encompass them.

Claims (12)

1. An information processing method, applied to an electronic device, the electronic device comprising an image acquisition unit, wherein, when a first preview image of a first spatial area is obtained through the image acquisition unit and displayed on a display unit of the electronic device, the method comprises:
obtaining depth information of the first preview image, wherein the depth information characterizes the distances between the image acquisition unit and feature points in the first spatial area;
determining first depth-of-field information in the first preview image; and
determining a first preview area and a second preview area in the first preview image according to the depth information and the first depth-of-field information, wherein the first preview area corresponds to a first depth-of-field region in which the first depth-of-field information lies, the second preview area corresponds to the part of the first spatial area outside the first depth-of-field region, and the first preview area and the second preview area have different display modes.
2. the method for claim 1, is characterized in that, the depth information of described first preview image of described acquisition, is specially:
By camera array mode or structured light mode, obtain M distance value between M unique point in described first area of space and described image acquisition units, be made up of the depth information of described first preview image a described M distance value, M be more than or equal to 1 integer.
3. The method of claim 1 or 2, wherein determining the first depth-of-field information in the first preview image specifically comprises:
calculating the first depth-of-field information of the first preview image according to the focal length, f-number, and focusing distance of the image acquisition unit.
4. The method of claim 3, wherein determining the first preview area and the second preview area in the first preview image according to the depth information and the first depth-of-field information, the first preview area corresponding to the first depth-of-field region in which the first depth-of-field information lies, the second preview area corresponding to the part of the first spatial area outside the first depth-of-field region, and the first preview area and the second preview area having different display modes, specifically comprises:
determining, based on the depth information and the first depth-of-field information, the N feature points corresponding to the N distance values that lie within the first depth-of-field information from the M distance values, N being an integer greater than or equal to 1 and less than or equal to M; and
displaying the N feature points in the first preview area in a first display mode and the M-N feature points in the second preview area in a second display mode, the first display mode being different from the second display mode.
5. The method of claim 4, wherein determining, based on the depth information and the first depth-of-field information, the N feature points corresponding to the N distance values that lie within the first depth-of-field information from the M distance values specifically comprises:
when the first depth-of-field region in which the first depth-of-field information lies is the distance range between distance D1 and distance D2, determining from the M distance values the N distance values that are greater than D1 and less than D2, thereby determining the N feature points corresponding to the N distance values that lie within the first depth-of-field information.
6. The method of claim 4, wherein displaying the N feature points in the first preview area in a first display mode and the M-N feature points in the second preview area in a second display mode, the first display mode being different from the second display mode, specifically comprises:
displaying, in the first preview image, the first preview area corresponding to the N feature points at a first brightness, and displaying the second preview area corresponding to the M-N feature points at a second brightness, the first brightness being a different display brightness from the second brightness.
7. An electronic device comprising an image acquisition unit, wherein, when a first preview image of a first spatial area is obtained through the image acquisition unit and displayed on a display unit of the electronic device, the electronic device further comprises:
a depth acquiring unit, configured to obtain depth information of the first preview image, wherein the depth information characterizes the distances between the image acquisition unit and feature points in the first spatial area;
a depth-of-field determining unit, configured to determine first depth-of-field information in the first preview image; and
an area determining unit, configured to determine a first preview area and a second preview area in the first preview image according to the depth information and the first depth-of-field information, wherein the first preview area corresponds to a first depth-of-field region in which the first depth-of-field information lies, the second preview area corresponds to the part of the first spatial area outside the first depth-of-field region, and the first preview area and the second preview area have different display modes.
8. The electronic device of claim 7, wherein the depth acquiring unit is specifically configured to:
obtain, by means of a camera array or structured light, M distance values between M feature points in the first spatial area and the image acquisition unit, the M distance values constituting the depth information of the first preview image, M being an integer greater than or equal to 1.
9. The electronic device of claim 7 or 8, wherein the depth-of-field determining unit is specifically configured to:
calculate the first depth-of-field information of the first preview image according to the focal length, f-number, and focusing distance of the image acquisition unit.
10. The electronic device of claim 9, wherein the area determining unit specifically comprises:
a determining subunit, configured to determine, based on the depth information and the first depth-of-field information, the N feature points corresponding to the N distance values that lie within the first depth-of-field information from the M distance values, N being an integer greater than or equal to 1 and less than or equal to M; and
a display subunit, configured to display the N feature points in the first preview area in a first display mode and display the M-N feature points in the second preview area in a second display mode, the first display mode being different from the second display mode.
11. The electronic device of claim 10, wherein the determining subunit is specifically configured to:
when the first depth-of-field region in which the first depth-of-field information lies is the distance range between distance D1 and distance D2, determine from the M distance values the N distance values that are greater than D1 and less than D2, thereby determining the N feature points corresponding to the N distance values that lie within the first depth-of-field information.
12. The electronic device of claim 10, wherein the display subunit is specifically configured to:
display, in the first preview image, the first preview area corresponding to the N feature points at a first brightness, and display the second preview area corresponding to the M-N feature points at a second brightness, the first brightness being a different display brightness from the second brightness.
CN201410025858.3A 2014-01-20 2014-01-20 A kind of method and electronic equipment of information processing Active CN104793910B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410025858.3A CN104793910B (en) 2014-01-20 2014-01-20 A kind of method and electronic equipment of information processing


Publications (2)

Publication Number Publication Date
CN104793910A true CN104793910A (en) 2015-07-22
CN104793910B CN104793910B (en) 2018-11-09

Family

ID=53558732

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410025858.3A Active CN104793910B (en) 2014-01-20 2014-01-20 A kind of method and electronic equipment of information processing

Country Status (1)

Country Link
CN (1) CN104793910B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105933532A (en) * 2016-06-06 2016-09-07 广东欧珀移动通信有限公司 Image processing method and device, and mobile terminal
CN108156378A (en) * 2017-12-27 2018-06-12 努比亚技术有限公司 Photographic method, mobile terminal and computer readable storage medium
CN110418056A (en) * 2019-07-16 2019-11-05 Oppo广东移动通信有限公司 A kind of image processing method, device, storage medium and electronic equipment
CN113949815A (en) * 2021-11-17 2022-01-18 维沃移动通信有限公司 Shooting preview method and device and electronic equipment

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7221398B2 (en) * 2003-05-01 2007-05-22 Hewlett-Packard Development Company, L.P. Accurate preview for digital cameras
CN101193209A (en) * 2006-11-28 2008-06-04 索尼株式会社 Imaging device
CN101656817A (en) * 2008-08-19 2010-02-24 株式会社理光 Image processing apparatus, image processing process and image processing procedures
CN102447933A (en) * 2011-11-01 2012-05-09 浙江捷尚视觉科技有限公司 Depth information acquisition method based on binocular framework
CN102696219A (en) * 2010-11-08 2012-09-26 松下电器产业株式会社 Imaging device, imaging method, program, and integrated circuit
CN103152521A (en) * 2013-01-30 2013-06-12 广东欧珀移动通信有限公司 Effect of depth of field achieving method for mobile terminal and mobile terminal
CN103322937A (en) * 2012-03-19 2013-09-25 联想(北京)有限公司 Method and device for measuring depth of object using structured light method
CN103366352A (en) * 2012-03-30 2013-10-23 北京三星通信技术研究有限公司 Device and method for producing image with background being blurred



Also Published As

Publication number Publication date
CN104793910B (en) 2018-11-09

Similar Documents

Publication Publication Date Title
US11568517B2 (en) Electronic apparatus, control method, and non- transitory computer readable medium
US9591237B2 (en) Automated generation of panning shots
US20150116529A1 (en) Automatic effect method for photography and electronic apparatus
CN106134176B (en) System and method for multifocal imaging
WO2017045558A1 (en) Depth-of-field adjustment method and apparatus, and terminal
US20160150215A1 (en) Method for performing multi-camera capturing control of an electronic device, and associated apparatus
KR20140004592A (en) Image blur based on 3d depth information
US9549126B2 (en) Digital photographing apparatus and control method thereof
US9218681B2 (en) Image processing apparatus and image processing method
KR20130018330A (en) Imaging apparatus, image processing method, and recording medium for recording program thereon
KR20130071793A (en) Digital photographing apparatus splay apparatus and control method thereof
JPWO2015049899A1 (en) Image display device and image display method
EP3005286B1 (en) Image refocusing
TWI566601B (en) Image processing device and image depth processing method
CN109151329A (en) Photographic method, device, terminal and computer readable storage medium
US20130076941A1 (en) Systems And Methods For Editing Digital Photos Using Surrounding Context
CN104793910A (en) Method and electronic equipment for processing information
US9995905B2 (en) Method for creating a camera capture effect from user space in a camera capture system
CN105007410A (en) Large viewing angle camera control method and user terminal
CN113747067A (en) Photographing method and device, electronic equipment and storage medium
CN105467741A (en) Panoramic shooting method and terminal
EP2890116A1 (en) Method of displaying a photographing mode by using lens characteristics, computer-readable storage medium of recording the method and an electronic apparatus
CN104994288A (en) Shooting method and user terminal
JP6645711B2 (en) Image processing apparatus, image processing method, and program
CN114071009A (en) Shooting method and equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant