CN105491277B - Image processing method and electronic equipment - Google Patents

Image processing method and electronic equipment

Info

Publication number
CN105491277B
CN105491277B CN201410469899.1A
Authority
CN
China
Prior art keywords
distance
resolution
electronic equipment
depth map
focus point
Prior art date
Legal status
Active
Application number
CN201410469899.1A
Other languages
Chinese (zh)
Other versions
CN105491277A (en)
Inventor
刘永华
李立华
李锐
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201410469899.1A priority Critical patent/CN105491277B/en
Publication of CN105491277A publication Critical patent/CN105491277A/en
Application granted granted Critical
Publication of CN105491277B publication Critical patent/CN105491277B/en


Abstract

The present invention provides an image processing method and an electronic device. The method is applied to an electronic device that includes at least two image acquisition modules and comprises: detecting a focusing operation of the image acquisition modules and determining a focusing area; determining the distance between a focus point corresponding to the focusing area and a predetermined plane in the image acquisition modules; when the distance exceeds a first threshold, obtaining a first depth map at a first resolution; and when the distance does not exceed the first threshold, obtaining a second depth map at a second resolution, wherein the second resolution is lower than the first resolution.

Description

Image processing method and electronic equipment
Technical field
The present invention relates to the field of electronic devices and, more particularly, to an image processing method and an electronic device.
Background technology
3D display and stereoscopic cameras have recently received increasing attention. When a depth map is generated with a stereoscopic camera, the depth algorithm typically requires a large buffer, and the buffer size is directly related to both the resolution of the depth map and the distance of the objects in the depth map.
Moreover, when the algorithm is implemented in a hardware circuit, a large buffer means higher cost.
It is therefore desirable to provide an image processing method and an electronic device that reduce the buffer required for depth computation while preserving the usability of the depth information.
Summary of the invention
Embodiments of the present invention provide an image processing method applied to an electronic device that includes at least two image acquisition modules, the method comprising:
detecting a focusing operation of the image acquisition modules and determining a focusing area;
determining the distance between a focus point corresponding to the focusing area and a predetermined plane in the image acquisition modules;
when the distance exceeds a first threshold, obtaining a first depth map at a first resolution; and
when the distance does not exceed the first threshold, obtaining a second depth map at a second resolution, wherein the second resolution is lower than the first resolution.
Preferably, determining the distance between the focus point corresponding to the focusing area and the predetermined plane in the image acquisition modules further comprises:
detecting the value of the driving current of a voice coil motor of the electronic device;
calculating the position of a lens unit in the at least two image acquisition modules from the value of the driving current; and
calculating the distance between the focus point and the predetermined plane from the position of the lens unit.
Preferably, determining the distance between the focus point corresponding to the focusing area and the predetermined plane in the image acquisition modules further comprises:
obtaining the sensed value of a lens sensor of the electronic device;
calculating the position of a lens unit in the at least two image acquisition modules from the sensed value; and
calculating the distance between the focus point and the predetermined plane from the position of the lens unit.
Preferably, the first threshold is determined by the predetermined span and the resolution used in the depth map computation algorithm.
According to another embodiment of the present invention, an electronic device is provided, comprising:
at least two image acquisition modules for capturing images of a subject;
a focusing area determination unit for detecting a focusing operation of the image acquisition modules and determining a focusing area;
a distance determination unit for determining the distance between a focus point corresponding to the focusing area and a predetermined plane in the image acquisition modules;
a control unit for obtaining a first depth map at a first resolution when the distance exceeds a first threshold, and for obtaining a second depth map at a second resolution when the distance does not exceed the first threshold, wherein the second resolution is lower than the first resolution.
Preferably, the distance determination unit is further configured to:
detect the value of the driving current of a voice coil motor of the electronic device;
calculate the position of a lens unit in the at least two image acquisition modules from the value of the driving current; and
calculate the distance between the focus point and the predetermined plane from the position of the lens unit.
Preferably, the distance determination unit is further configured to:
obtain the sensed value of a lens sensor of the electronic device;
calculate the position of a lens unit in the at least two image acquisition modules from the sensed value; and
calculate the distance between the focus point and the predetermined plane from the position of the lens unit.
Preferably, the first threshold is determined by the predetermined span and the resolution used in the depth map computation algorithm.
Therefore, the image processing method and electronic device according to embodiments of the present invention can reduce the buffer used for depth computation while preserving the usability of the depth information.
Description of the drawings
Fig. 1a-1c are explanatory diagrams describing the principle of generating depth map information;
Fig. 2 is a schematic diagram illustrating the principle of the image processing method according to an embodiment of the present invention;
Fig. 3 is a flowchart illustrating the image processing method according to an embodiment of the present invention; and
Fig. 4 is a functional block diagram illustrating the electronic device according to an embodiment of the present invention.
Detailed description of the embodiments
Before describing the image processing method and electronic device according to embodiments of the present invention, the principle of acquiring depth information is briefly described.
Generating depth map information with a depth camera (stereo camera) is currently a popular technique in the industry. For example, the most common approach uses two cameras separated by a fixed distance to capture images of a scene simultaneously and generate a depth map; such a system is called a binocular camera system. Besides binocular camera systems, a camera array composed of multiple cameras can also be used to capture the scene.
Below, taking a binocular camera system as an example, the principle of obtaining depth information is briefly described with reference to Figs. 1a-1c.
The most basic binocular stereo geometry is shown in Fig. 1a. It consists of two identical cameras whose image planes lie in a common plane; the coordinate axes of the two cameras are parallel, their x-axes coincide, and the spacing between the cameras in the x direction is the baseline distance b. In this model, the same scene feature point is imaged at different positions on the two camera image planes. The projections of the same scene point in the two images are called a conjugate pair, each projection being the correspondence of the other, so finding conjugate pairs solves the correspondence problem. The difference in position between the points of a conjugate pair when the two images are superimposed is called the disparity. The plane through the two camera centers and the scene feature point is called the epipolar plane, and the intersection of the epipolar plane with the image plane is called the epipolar line.
In Fig. 1b, the projections of scene point P on the left and right image planes are p_l and p_r. Without loss of generality, assume the coordinate origin coincides with the left lens center. Comparing similar triangles PMC_l and p_l LC_l yields:
x / z = x_l' / F    (1)
Similarly, from similar triangles PNC_r and p_r RC_r:
(x - B) / z = x_r' / F    (2)
Combining the two formulas gives:
z = F * B / (x_l' - x_r')    (3)
where F is the focal length and B is the baseline distance.
Therefore, the depth of any scene point can be recovered by computing its disparity.
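The depth relation derived above can be sketched as a small helper. This is a minimal illustration, not part of the patent; the focal length and baseline values in the example are arbitrary assumptions.

```python
def depth_from_disparity(x_l, x_r, focal_px, baseline_m):
    """Depth from formula (3): z = F * B / (x_l' - x_r').

    x_l, x_r: horizontal image coordinates of a conjugate pair, in pixels.
    focal_px: focal length F expressed in pixels.
    baseline_m: baseline distance B between the two cameras, in meters.
    """
    disparity = x_l - x_r
    if disparity <= 0:
        raise ValueError("a point in front of both cameras has positive disparity")
    return focal_px * baseline_m / disparity

# e.g. a 10-pixel disparity with F = 700 px and B = 0.1 m gives z = 7 m
```

Note how depth is inversely proportional to disparity: near objects produce large disparities, which is the driver of the buffer-size discussion that follows.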
For two images of the same scene captured from different viewpoints, the traditional feature-point search method first selects a feature point in the first image and then searches the second image for the corresponding feature point. That is, through feature-point matching, the position in the second image of the feature point selected in the first image is found, thereby matching the two images.
As shown in Fig. 1c, according to the imaging geometry, a feature point in one image must lie on the corresponding epipolar line in the other image. That is, each feature point of the first image lies in the same row of the second image.
On the other hand, because digital images are discrete, the positional difference of a feature point between the first and second images is measured in pixels. Obviously, the pixel difference at a high resolution (for example, VGA, 640 x 480) differs from that at a low resolution (for example, QVGA, 320 x 240), so the computation required for the feature-point search also differs: when the pixel difference is large, the computation is heavy and a large buffer is needed; when it is small, the computation is light and a small buffer suffices.
The distance of the object from the cameras also affects the pixel difference of feature points between the two images. When the object is far from the cameras, its pixel difference between the two images varies little, and a given buffer can support computing a high-resolution (e.g., VGA) depth map. When the object is close to the cameras, the pixel difference varies greatly, and the same buffer cannot support computing a high-resolution depth map. However, if the resolution of the depth map is then set to a low resolution (e.g., QVGA), the object's pixel difference between the two images becomes smaller, and the same buffer can support the required computation.
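This scaling can be made concrete with formula (3) rearranged for disparity. The focal lengths, baseline, and distance below are illustrative assumptions, not values from the patent.

```python
def pixel_disparity(focal_px, baseline_m, distance_m):
    """Disparity in pixels of an object at distance_m, from formula (3)
    rearranged: x_l' - x_r' = F * B / z. The focal length in pixels scales
    with image width, so halving the resolution halves the disparity."""
    return focal_px * baseline_m / distance_m

# The same near object produces half the pixel disparity at QVGA,
# which is why the same buffer can still handle it.
d_vga = pixel_disparity(700.0, 0.05, 0.2)   # VGA-scale focal length in px
d_qvga = pixel_disparity(350.0, 0.05, 0.2)  # QVGA: half the focal length in px
```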
On the other hand, if the distance between the target and the cameras is known to lie within a certain interval, the search range can be limited to a small interval within a row of the second image, as shown in Fig. 1c. This improves the feature-point search speed and reduces the computation.
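A minimal sketch of such a restricted search, using plain sum-of-absolute-differences (SAD) matching over one image row; the patch size and disparity bound are illustrative assumptions.

```python
def match_along_epipolar(left_row, right_row, x, patch=3, max_disp=16):
    """Find the disparity of the pixel at column x of left_row by searching
    the same row of the right image (the epipolar line), restricted to the
    small disparity interval [0, max_disp] that a known distance range allows.
    Uses SAD over a 1-D patch of `patch` pixels."""
    half = patch // 2
    best_d, best_cost = 0, float("inf")
    lo = max(half, x - max_disp)              # don't run off the start of the row
    for xr in range(lo, x + 1):               # candidate columns on the right row
        cost = sum(abs(left_row[x - half + i] - right_row[xr - half + i])
                   for i in range(patch))
        if cost < best_cost:
            best_cost, best_d = cost, x - xr  # disparity = left column - right column
    return best_d
```

Shrinking max_disp shrinks both the loop count and the number of cost entries a hardware implementation must buffer, which is exactly the saving described above.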
It can be seen that the computation required for a depth map, and hence the buffer size it demands, is directly related to the resolution of the depth map and to the distance of the object from the cameras.
In the following, the image processing method according to an embodiment of the present invention is described with reference to Fig. 2.
The image processing method according to this embodiment is applied to an electronic device that includes at least two image acquisition modules, for example a stereoscopic camera with two or more cameras.
The method comprises:
Step S101: detecting a focusing operation of the image acquisition modules and determining a focusing area;
Step S102: determining the distance between a focus point corresponding to the focusing area and a predetermined plane in the image acquisition modules;
Step S103: when the distance exceeds a first threshold, obtaining a first depth map at a first resolution; and
Step S104: when the distance does not exceed the first threshold, obtaining a second depth map at a second resolution, wherein the second resolution is lower than the first resolution.
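Steps S103/S104 amount to a simple resolution switch on the focus distance. In this sketch, the 35 cm threshold and the VGA/QVGA pairs are the example values used later in the description, not values mandated by the method.

```python
FIRST_THRESHOLD_CM = 35  # example value from the description (assumption)

def choose_depth_resolution(focus_distance_cm,
                            high_res=(640, 480),   # e.g. VGA
                            low_res=(320, 240)):   # e.g. QVGA
    """Steps S103/S104: far focus -> first (high) resolution,
    near focus -> second (lower) resolution."""
    if focus_distance_cm > FIRST_THRESHOLD_CM:
        return high_res      # S103: the distance exceeds the first threshold
    return low_res           # S104: the distance does not exceed it
```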
Specifically, in step S101, when the stereoscopic camera starts image acquisition, a preview image is displayed on the display unit. The focusing area in the preview image is then determined, for example by auto-focus or manual focus, and is indicated for instance by a small box shown on the display unit. Once the focusing area is determined, the voice coil motor of the electronic device drives the lens unit of the image acquisition unit to the corresponding position.
Then, in step S102, the distance between the focus point corresponding to the focusing area and the predetermined plane in the image acquisition modules is determined.
In one embodiment, determining the distance between the focus point corresponding to the focusing area and the predetermined plane in the image acquisition modules further comprises:
detecting the value of the driving current of the voice coil motor of the electronic device. The driving current is determined, for example, by writing a corresponding value into a register: writing 1 into the register indicates a driving current of 1 mA, writing 2 indicates 2 mA, and so on;
calculating the position of the lens unit in the at least two image acquisition modules from the value of the driving current. That is, when the driving current is 1 mA, the lens unit can be determined to have been driven 1 mm; when it is 2 mA, 2 mm; and so on; and
calculating the distance between the focus point and the predetermined plane from the position of the lens unit. That is, from the driven position of the lens unit, the current focus point can be determined, and hence the distance between the focus point and the predetermined plane.
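The current-to-distance chain can be sketched as follows. The 1 mA per register step, 1 mm of lens travel per mA, and the thin-lens focal length are all illustrative assumptions; the patent specifies nothing beyond its 1 mA / 1 mm example, and a real module would use a calibrated mapping.

```python
def focus_distance_from_current(register_value, ma_per_step=1.0,
                                mm_per_ma=1.0, focal_mm=4.0):
    """Register value -> driving current -> lens travel -> focus distance.

    Lens travel is taken as the displacement of the lens from its
    infinity-focus position; the thin-lens equation 1/f = 1/v + 1/u then
    gives the object-side focus-point distance u (all lengths in mm)."""
    current_ma = register_value * ma_per_step
    lens_travel_mm = current_ma * mm_per_ma
    v = focal_mm + lens_travel_mm            # image distance
    u = focal_mm * v / (v - focal_mm)        # object (focus point) distance
    return u

# e.g. register value 1 -> 1 mA -> 1 mm travel -> u = 20 mm with f = 4 mm
```

More current means more lens travel and therefore a nearer focus point, which matches the driving-current example above.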
In another embodiment, the electronic device may also be provided with a lens sensor for measuring the position of the lens unit.
Accordingly, determining the distance between the focus point corresponding to the focusing area and the predetermined plane in the image acquisition modules further comprises:
obtaining the sensed value of the lens sensor of the electronic device;
calculating the position of the lens unit in the at least two image acquisition modules from the sensed value; and
calculating the distance between the focus point and the predetermined plane from the position of the lens unit.
That is, because the lens sensor directly measures the position of the lens unit, the position of the focus point, and hence the distance between the focus point and the predetermined plane, can be determined accordingly.
Then, in step S103, when the distance exceeds the first threshold, the first depth map is obtained at the first resolution; and in step S104, when the distance does not exceed the first threshold, the second depth map is obtained at the second resolution, wherein the second resolution is lower than the first resolution.
Specifically, as shown in Fig. 3, when the distance between the focus point and the predetermined plane exceeds the first threshold (for example, greater than 35 cm; section A in Fig. 3), the object is far from the cameras and its pixel difference between the two images varies little, so a given buffer can support computing a high-resolution (e.g., VGA) depth map.
On the other hand, when the distance does not exceed the first threshold (for example, 35 cm or less; section B in Fig. 3), the object's pixel difference between the two images varies greatly, so with the same buffer only a low-resolution (e.g., QVGA) depth map can be computed. Note that because the object is then close to the cameras, it occupies a large portion of the image (the macro regime), so reducing the resolution of the depth map does not impair the practical use of the depth information.
As shown in formula (3), the depth z is determined by x_l' - x_r', that is, by the difference in position (the pixel difference) of the feature point's projections on the first and second images. This pixel difference is the span in the depth map computation algorithm.
That is, the first threshold is determined by the predetermined span and the resolution used in the depth map computation algorithm.
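Under the formula (3) model, the first threshold is simply the distance at which the object's disparity equals the largest span the buffer supports at the chosen resolution. The focal length, baseline, and span values below are illustrative assumptions.

```python
def first_threshold_m(focal_px, baseline_m, max_span_px):
    """Closest object distance whose disparity still fits within max_span_px
    at the given resolution (focal_px scales with resolution); from
    formula (3), z = F * B / span. Objects nearer than this exceed the span,
    so the lower resolution is used instead."""
    return focal_px * baseline_m / max_span_px

# e.g. F = 700 px, B = 5 cm, span = 100 px -> threshold 0.35 m (35 cm)
```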
Therefore, with the image processing method according to the first embodiment of the present invention, the depth map is computed at high resolution when the distance between the focus point and the predetermined plane exceeds the first threshold, and at low resolution when it does not. This effectively reduces the computation required for depth calculation, and hence the buffer used, while preserving the usability of the depth information.
In the following, an electronic device according to another embodiment of the present invention is described with reference to Fig. 4.
The electronic device 400 comprises:
at least two image acquisition modules 401 for capturing images of a subject;
a focusing area determination unit 402 for detecting a focusing operation of the image acquisition modules 401 and determining a focusing area;
a distance determination unit 403 for determining the distance between a focus point corresponding to the focusing area and a predetermined plane in the image acquisition modules;
a control unit 404 for obtaining a first depth map at a first resolution when the distance exceeds a first threshold, and for obtaining a second depth map at a second resolution when the distance does not exceed the first threshold, wherein the second resolution is lower than the first resolution.
The image acquisition module 401 includes, for example, a lens unit, a voice coil motor, and a lens position sensor. The electronic device 400 further includes, for example, a register into which the value of the driving current of the voice coil motor is written.
Preferably, the distance determination unit 403 is further configured to:
detect the value of the driving current of the voice coil motor of the electronic device;
calculate the position of the lens unit in the at least two image acquisition modules from the value of the driving current; and
calculate the distance between the focus point and the predetermined plane from the position of the lens unit.
Preferably, the distance determination unit 403 is further configured to:
obtain the sensed value of the lens sensor of the electronic device;
calculate the position of the lens unit in the at least two image acquisition modules from the sensed value; and
calculate the distance between the focus point and the predetermined plane from the position of the lens unit.
Preferably, the first threshold is determined by the predetermined span and the resolution used in the depth map computation algorithm.
It should be noted that each functional block of the electronic device 400 is configured to perform the corresponding step of the image processing method according to the first embodiment; a detailed description is therefore omitted here.
Therefore, with the electronic device according to the embodiment of the present invention, the depth map is computed at high resolution when the distance between the focus point and the predetermined plane exceeds the first threshold, and at low resolution when it does not. This effectively reduces the computation required for depth calculation, and hence the buffer used, while preserving the usability of the depth information.
It should be noted that when the electronic devices according to the embodiments are described, only their functional units are shown, and the connections between the functional units are not described in detail. Those skilled in the art will appreciate that the functional units may be suitably connected by buses, internal connection lines, and the like; such connections are well known to those skilled in the art.
It should be noted that, in this specification, the terms "include" and "comprise" and any variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes it.
Finally, it should be noted that the series of processes described above includes not only processes performed in the time order described here, but also processes performed in parallel or separately rather than in time order.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present invention can be implemented by software plus a necessary hardware platform, or entirely by hardware. Based on this understanding, all or part of the contribution of the technical solution of the present invention to the background art can be embodied in the form of a software product. The computer software product can be stored in a storage medium, such as a ROM/RAM, a magnetic disk, or an optical disc, and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the embodiments, or in certain parts of the embodiments, of the present invention.
The present invention has been described in detail above. Specific examples are used herein to explain the principles and implementations of the invention, and the description of the above embodiments is only intended to help in understanding the method of the invention and its core idea. Meanwhile, those of ordinary skill in the art may, following the idea of the invention, make changes to the specific implementations and application scope. In summary, the content of this specification should not be construed as limiting the invention.

Claims (8)

1. An image processing method applied to an electronic device that includes at least two image acquisition modules, the method comprising:
detecting a focusing operation of the image acquisition modules and determining a focusing area;
determining the distance between a focus point corresponding to the focusing area and a predetermined plane in the image acquisition modules;
when the distance exceeds a first threshold, obtaining a first depth map at a first resolution; and
when the distance does not exceed the first threshold, obtaining a second depth map at a second resolution, wherein the second resolution is lower than the first resolution.
2. The image processing method of claim 1, wherein determining the distance between the focus point corresponding to the focusing area and the predetermined plane in the image acquisition modules further comprises:
detecting the value of the driving current of a voice coil motor of the electronic device;
calculating the position of a lens unit in the at least two image acquisition modules from the value of the driving current; and
calculating the distance between the focus point and the predetermined plane from the position of the lens unit.
3. The image processing method of claim 1, wherein determining the distance between the focus point corresponding to the focusing area and the predetermined plane in the image acquisition modules further comprises:
obtaining the sensed value of a lens sensor of the electronic device;
calculating the position of a lens unit in the at least two image acquisition modules from the sensed value; and
calculating the distance between the focus point and the predetermined plane from the position of the lens unit.
4. The image processing method of claim 1, wherein the first threshold is determined by the predetermined span and the resolution used in the depth map computation algorithm.
5. An electronic device, comprising:
at least two image acquisition modules for capturing images of a subject;
a focusing area determination unit for detecting a focusing operation of the image acquisition modules and determining a focusing area;
a distance determination unit for determining the distance between a focus point corresponding to the focusing area and a predetermined plane in the image acquisition modules;
a control unit for obtaining a first depth map at a first resolution when the distance exceeds a first threshold, and for obtaining a second depth map at a second resolution when the distance does not exceed the first threshold, wherein the second resolution is lower than the first resolution.
6. The electronic device of claim 5, wherein the distance determination unit is further configured to:
detect the value of the driving current of a voice coil motor of the electronic device;
calculate the position of a lens unit in the at least two image acquisition modules from the value of the driving current; and
calculate the distance between the focus point and the predetermined plane from the position of the lens unit.
7. The electronic device of claim 5, wherein the distance determination unit is further configured to:
obtain the sensed value of a lens sensor of the electronic device;
calculate the position of a lens unit in the at least two image acquisition modules from the sensed value; and
calculate the distance between the focus point and the predetermined plane from the position of the lens unit.
8. The electronic device of claim 5, wherein the first threshold is determined by the predetermined span and the resolution used in the depth map computation algorithm.
CN201410469899.1A 2014-09-15 2014-09-15 Image processing method and electronic equipment Active CN105491277B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410469899.1A CN105491277B (en) 2014-09-15 2014-09-15 Image processing method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410469899.1A CN105491277B (en) 2014-09-15 2014-09-15 Image processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN105491277A CN105491277A (en) 2016-04-13
CN105491277B 2018-08-31

Family

ID=55677963

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410469899.1A Active CN105491277B (en) 2014-09-15 2014-09-15 Image processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN105491277B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108182656B (en) * 2017-12-28 2021-04-30 深圳市创梦天地科技有限公司 Image processing method and terminal
CN110555874B (en) * 2018-05-31 2023-03-10 华为技术有限公司 Image processing method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102867288A (en) * 2011-07-07 2013-01-09 三星电子株式会社 Depth image conversion apparatus and method
CN102934451A (en) * 2010-03-31 2013-02-13 汤姆森特许公司 3D disparity maps
CN103562958A (en) * 2011-05-26 2014-02-05 汤姆逊许可公司 Scale-independent maps
CN104023177A (en) * 2014-06-04 2014-09-03 华为技术有限公司 Camera control method, device and camera

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101214536B1 (en) * 2010-01-12 2013-01-10 삼성전자주식회사 Method for performing out-focus using depth information and camera using the same

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102934451A (en) * 2010-03-31 2013-02-13 汤姆森特许公司 3D disparity maps
CN103562958A (en) * 2011-05-26 2014-02-05 汤姆逊许可公司 Scale-independent maps
CN102867288A (en) * 2011-07-07 2013-01-09 三星电子株式会社 Depth image conversion apparatus and method
CN104023177A (en) * 2014-06-04 2014-09-03 华为技术有限公司 Camera control method, device and camera

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on binocular stereo matching algorithms and their multi-core parallelization; 陈蛟; China Master's Theses Full-text Database, Information Science and Technology; 2012-06-15; I138-2153 *

Also Published As

Publication number Publication date
CN105491277A (en) 2016-04-13

Similar Documents

Publication Publication Date Title
EP3248374B1 (en) Method and apparatus for multiple technology depth map acquisition and fusion
JP5138031B2 (en) Method, apparatus and system for processing depth related information
CN105744138B (en) Quick focusing method and electronic equipment
TW201741999A (en) Method and system for generating depth information
CN102158719A (en) Image processing apparatus, imaging apparatus, image processing method, and program
CN105376484A (en) Image processing method and terminal
CN103903222A (en) Three-dimensional sensing method and three-dimensional sensing device
JP6087947B2 (en) Method for 3D reconstruction of scenes that rely on asynchronous sensors
CN113711276A (en) Scale-aware monocular positioning and mapping
WO2023169281A1 (en) Image registration method and apparatus, storage medium, and electronic device
CN105491277B (en) Image processing method and electronic equipment
Davanthapuram et al. Visually impaired indoor navigation using YOLO based object recognition, monocular depth estimation and binaural sounds
US9872011B2 (en) High-speed depth sensing with a hybrid camera setup
TW201607296A (en) Method of quickly generating depth map of image and image processing device
Isakova et al. FPGA design and implementation of a real-time stereo vision system
US8593508B2 (en) Method for composing three dimensional image with long focal length and three dimensional imaging system
CN115937291B (en) Binocular image generation method and device, electronic equipment and storage medium
US20130176388A1 (en) Method and device for providing temporally consistent disparity estimations
EP2866446B1 (en) Method and multi-camera portable device for producing stereo images
CN106548482B (en) Dense matching method and system based on sparse matching and image edges
Diskin et al. UAS exploitation by 3D reconstruction using monocular vision
WO2022198631A1 (en) Method, apparatus and system for auto-labeling
GÜVENDİK et al. FPGA Based Disparity Value Estimation
JP2013247522A (en) Image processing apparatus and method
Kirby et al. Dense depth maps from correspondences derived from perceived motion

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant