CN105451010A - Depth of field acquisition device and acquisition method - Google Patents
- Publication number
- CN105451010A (application CN201410406660.XA)
- Authority
- CN
- China
- Prior art keywords
- image
- unit
- depth
- carrier
- field acquisition
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The invention discloses a depth of field acquisition device and a matching depth of field acquisition method. The depth of field acquisition device comprises an image capture unit, a driving unit, an image transmission unit, an image segmentation unit, an image processing unit, an image storage unit, an image synthesis unit and an image display unit. The image capture unit comprises a lens, and the driving unit comprises a carrier, an elastic element and an electromagnetic conversion assembly. The carrier drives the lens in an oscillating motion to capture a plurality of images; in cooperation with the acquisition method, each image is segmented into M×N subimages, the oscillating motion of the carrier is divided into Q steps, and synthesis processing is performed on the plurality of images. The depth of field information of the photographed object can thereby be obtained with relatively high accuracy, satisfying the requirements of application fields that need precise measurement data.
Description
Technical field
The present invention relates to a depth of field acquisition device, and more particularly to a depth of field acquisition device and an acquisition method using the same.
Background art
The depth of field refers to the range of subject distances, in front of and behind the plane of focus, within which a lens or other imager can form an acceptably sharp image. After focusing is complete, a sharp image can be formed within a zone before and after the focal point; the front-to-back extent of this zone is called the depth of field. In other words, for any subject located within this zone, the blur of its image on the film or sensor remains within the permissible circle of confusion, and the length of the zone is the depth of field.
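As an illustration of the concept above (not part of the patent), the near and far limits of the sharp zone can be estimated with the standard thin-lens depth-of-field approximation via the hyperfocal distance; the function name and example values below are assumptions made for this sketch:

```python
def depth_of_field(f_mm, n_stop, c_mm, u_mm):
    """Thin-lens depth-of-field approximation.

    f_mm: focal length, n_stop: f-number, c_mm: permissible circle of
    confusion, u_mm: focus (subject) distance, all in millimetres.
    Returns the (near, far) limits of acceptable sharpness; far becomes
    infinite once the subject distance reaches the hyperfocal distance.
    """
    hyperfocal = f_mm * f_mm / (n_stop * c_mm) + f_mm
    near = hyperfocal * u_mm / (hyperfocal + (u_mm - f_mm))
    if u_mm >= hyperfocal:
        far = float('inf')
    else:
        far = hyperfocal * u_mm / (hyperfocal - (u_mm - f_mm))
    return near, far

# 50 mm lens at f/8, 0.03 mm circle of confusion, focused at 3 m:
near, far = depth_of_field(50.0, 8.0, 0.03, 3000.0)
# the depth of field is the span from `near` to `far`
```

Everything between `near` and `far` stays within the permissible circle of confusion, which is exactly the zone the paragraph above describes.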
With the continuous progress of science and technology, stereo matching techniques based on image processing are gradually replacing expensive and technically complex three-dimensional data processing approaches and are becoming the mainstream of the market. However, traditional image-based stereo matching still has problems: owing to defects in structural design and deficiencies in software configuration, it exhibits a large error when acquiring the depth of field of a photographed object and can hardly reflect the object's depth of field information accurately, so it cannot meet the requirements of applications that demand precise measurement data.
Summary of the invention
The object of the present invention is to overcome the shortcomings of the prior art by providing a depth of field acquisition device capable of accurately obtaining the depth of field information of a photographed object, together with an acquisition method therefor.
This object is achieved through the following technical solutions:
A depth of field acquisition device, comprising:
An image capture unit, for capturing an image signal input from the outside;
A driving unit, for driving the image capture unit;
An image transmission unit, for transmitting the image signal captured by the image capture unit;
An image segmentation unit, for receiving the image signal from the image transmission unit and performing segmentation processing on it;
An image processing unit, for performing sharpness analysis on the image signal from the image segmentation unit;
An image storage unit, for receiving and storing the image signal from the image processing unit;
An image synthesis unit, for performing synthesis processing on the plurality of image signals in the image storage unit;
An image display unit, for displaying the final imaging information according to the synthesis result of the image synthesis unit;
Wherein the driving unit comprises a carrier, an electromagnetic conversion assembly and an elastic element: the carrier carries the image capture unit; the electromagnetic conversion assembly drives the carrier so that it moves up and down, moves horizontally, oscillates, or performs any combination of the three; and the elastic element provides an elastic restoring force for the carrier.
In one embodiment, the electromagnetic conversion assembly comprises a magnet and a coil; the coil is mounted on the carrier and is electromagnetically coupled to the magnet.
In one embodiment, the image capture unit comprises a lens mounted on the carrier.
In one embodiment, the image transmission unit comprises an image sensor which receives the image signal captured by the lens and transfers it to the image segmentation unit.
In one embodiment, the elastic element comprises an upper elastic member and a lower elastic member, provided at the two ends of the carrier respectively.
A depth of field acquisition method, comprising the steps of:
Step S1: as specified in advance, segment each captured image into M×N subimages;
Step S2: as specified in advance, divide the oscillating motion of the aforementioned carrier into Q steps;
Step S3: swing the carrier from its initial position to the maximum swing position, capture one image at each step position, and evaluate the sharpness of each subimage;
Step S4: if the sharpness of a subimage exceeds the preset threshold R, consider the subimage sharp and record the tuple {Step, I, J, Image(n)}, where I and J denote the position of the subimage within the current image Image(n), and n indicates that this is the n-th captured image containing a sharp subimage;
Step S5: the module is calibrated before leaving the factory so that each step corresponds to a depth of field value P; the recorded tuple {Step, I, J, Image(n)} can therefore be converted to {P, I, J, Image(n)}, with I ∈ [0, M−1] and J ∈ [0, N−1];
Step S6: after all steps have been traversed, the depth information of every part of the object scene is known; if X captured images containing sharp subimages have been obtained in total, they can be merged into an image array carrying the above depth information for subsequent processing.
In step S1, the image is segmented into M columns by N rows to obtain M×N subimages. The values of M and N are preset according to the specific situation: the more accurate the required depth of field information, the larger M×N needs to be.
In steps S2 and S3, the swing of the carrier is decomposed into Q actions. The value of Q is likewise determined by the specific situation: the more accurate the required depth of field information, the larger Q needs to be.
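A minimal sketch of how steps S1–S6 fit together, assuming a `capture_at(step)` function that returns the image grabbed at a given oscillation step as a 2-D list of pixel rows, a `sharpness(tile)` metric, and a factory `calibration` table mapping each step to a depth value P (all names are illustrative, not from the patent):

```python
def acquire_depth(capture_at, sharpness, calibration, M, N, Q, R):
    """Depth-from-focus loop: sweep the carrier through Q steps (S2/S3),
    tile each captured image into M×N subimages (S1), keep the first step
    at which each tile exceeds the sharpness threshold R (S4), and map
    that step to a calibrated depth P (S5), yielding a depth array (S6)."""
    depth = [[None] * N for _ in range(M)]
    for step in range(Q):
        img = capture_at(step)
        h, w = len(img) // M, len(img[0]) // N
        for i in range(M):
            for j in range(N):
                tile = [row[j * w:(j + 1) * w]
                        for row in img[i * h:(i + 1) * h]]
                if depth[i][j] is None and sharpness(tile) > R:
                    depth[i][j] = calibration[step]
    return depth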
In the depth of field acquisition device, an image capture unit, a driving unit, an image transmission unit, an image segmentation unit, an image processing unit, an image storage unit, an image synthesis unit and an image display unit are provided. The carrier drives the lens in an oscillating motion while, in cooperation with the depth of field acquisition method, the images are segmented and analyzed for sharpness, so that the depth of field information of the photographed object can be obtained accurately and the requirements of applications demanding precise measurement data can be met.
Brief description of the drawings
Fig. 1 is the schematic diagram of the depth of field acquisition device of one embodiment of the invention;
Fig. 2 is the flow chart of the depth of field acquisition methods of one embodiment of the invention.
Detailed description of the embodiments
The present invention is described in further detail below with reference to the embodiments and the accompanying drawings, but embodiments of the present invention are not limited thereto.
As shown in Fig. 1, which is a schematic diagram of the depth of field acquisition device 10 of one embodiment of the invention, the depth of field acquisition device 10 comprises: an image capture unit 100, a driving unit 200, an image transmission unit 300, an image segmentation unit 400, an image processing unit 500, an image storage unit 600, an image synthesis unit 700 and an image display unit 800.
The image capture unit 100, image transmission unit 300, image segmentation unit 400, image processing unit 500, image storage unit 600, image synthesis unit 700 and image display unit 800 are connected in sequence, and the driving unit 200 is connected to the image capture unit 100.
Specifically, the image capture unit 100 comprises a lens 110; the image transmission unit 300 comprises an image sensor 310; and the driving unit 200 comprises a carrier 210, an elastic element 220 and an electromagnetic conversion assembly 230, wherein the electromagnetic conversion assembly 230 comprises a coil 232 and a magnet 234.
The lens 110 is mounted on the carrier 210, which carries the lens 110 of the image capture unit 100. The carrier 210 rests on the elastic element 220; the coil 232 is mounted on the carrier 210 and electromagnetically coupled to the magnet 234; and the lens 110 is connected to the image sensor 310.
The image capture unit 100 captures the image signal input from the outside, i.e. the image of the external scene formed by the lens 110.
The driving unit 200 drives the image capture unit 100. Specifically, the electromagnetic conversion assembly 230 drives the carrier 210 so that it moves up and down, moves horizontally, oscillates, or performs any combination of the three, thereby driving the lens 110 mounted on the carrier 210 in the same motion.
The elastic element 220 is connected to the carrier 210 and provides an elastic restoring force for it. In the present embodiment, the elastic element 220 comprises an upper elastic member 222 and a lower elastic member 224, provided at the two ends of the carrier 210 respectively.
The electromagnetic conversion assembly 230 comprises the coil 232 and the magnet 234. When the coil 232 is energized, it moves in the magnetic field of the magnet 234 and thereby drives the carrier 210 attached to it. In the present embodiment, a plurality of electromagnetic conversion assemblies 230 are arranged around the carrier 210; each can be controlled independently when energized, and each produces an independent linear movement, so that the carrier 210 can move up and down, move horizontally, oscillate, or perform any combination of the three.
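For orientation only (the patent does not state this), the drive principle described here is that of a voice-coil actuator, where the force on the energized coil follows the Lorentz relation F = B·I·L:

```python
def voice_coil_force(b_tesla, i_amp, wire_len_m):
    """Force (newtons) on a coil carrying current i_amp in a field of
    b_tesla, with wire_len_m metres of wire inside the magnet's gap."""
    return b_tesla * i_amp * wire_len_m

# e.g. a 0.5 T gap field, 0.1 A drive current and 2 m of wound wire
force = voice_coil_force(0.5, 0.1, 2.0)  # 0.1 N on the carrier
```

Reversing the current reverses the force, which is how independently driven coils around the carrier can tilt it in either direction.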
The image transmission unit 300 transmits the image signal captured by the image capture unit 100; it comprises the image sensor 310, which senses the image formed by the lens 110 and transmits it to the image segmentation unit 400.
The image segmentation unit 400 receives the image signal from the image transmission unit 300 and performs segmentation processing on it. Specifically, the image segmentation unit 400 segments each captured image into M columns by N rows, yielding M×N subimages; the values of M and N are determined on a case-by-case basis.
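The tiling performed by the segmentation unit can be sketched as follows, a plain-Python illustration assuming the image is a list of pixel rows whose dimensions divide evenly by M and N:

```python
def split_into_subimages(img, m, n):
    """Split a 2-D image (list of pixel rows) into an m × n grid of tiles,
    indexed as tiles[i][j] with i the row band and j the column band."""
    h, w = len(img) // m, len(img[0]) // n
    return [[[row[j * w:(j + 1) * w] for row in img[i * h:(i + 1) * h]]
             for j in range(n)]
            for i in range(m)]

tiles = split_into_subimages([[1, 2], [3, 4]], 2, 2)
# tiles[0][0] == [[1]], tiles[1][1] == [[4]]
```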
The image processing unit 500 performs sharpness analysis on the image signal from the image segmentation unit 400: if the sharpness of a subimage exceeds a preset threshold R (a preset numerical value), the subimage is considered sharp and qualified, and is stored in the image storage unit 600.
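The patent does not specify which sharpness measure the image processing unit uses; a common choice for this kind of focus test is the variance of a Laplacian filter, sketched here in plain Python:

```python
def laplacian_variance(tile):
    """Sharpness score for a grayscale tile (list of pixel rows): variance
    of the 4-neighbour Laplacian over interior pixels. A sharp, edge-rich
    tile scores high; the score is compared against the preset threshold R."""
    responses = []
    for y in range(1, len(tile) - 1):
        for x in range(1, len(tile[0]) - 1):
            responses.append(tile[y - 1][x] + tile[y + 1][x]
                             + tile[y][x - 1] + tile[y][x + 1]
                             - 4 * tile[y][x])
    mean = sum(responses) / len(responses)
    return sum((v - mean) ** 2 for v in responses) / len(responses)

# a uniform (defocused) tile scores 0; a tile with a strong edge scores high
```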
The image storage unit 600 receives and stores the image signal from the image processing unit 500.
The image synthesis unit 700 performs synthesis processing on the plurality of image signals in the image storage unit 600. The carrier 210 swings the lens 110 through its rocking action, forming images from multiple different viewing positions; through the synthesis processing of the image synthesis unit 700, these images yield the depth information of every part of the object scene.
The image display unit 800 displays the final imaging information according to the synthesis result of the image synthesis unit 700.
As shown in Fig. 2, which is a flow chart of the depth of field acquisition method of one embodiment of the invention, the technical scheme of the method can be understood in conjunction with the schematic diagram of the depth of field acquisition device 10 in Fig. 1.
When the carrier 210 oscillates within a short time, images formed by the lens 110 at different positions can be captured, so as to obtain the depth of field information.
The depth of field acquisition method comprises the following steps:
Step S1: as specified in advance, segment each captured image into M×N subimages;
Step S2: as specified in advance, divide the oscillating motion of the aforementioned carrier into Q steps;
Step S3: swing the carrier from its initial position to the maximum swing position, capture one image at each step position, and evaluate the sharpness of each subimage;
Step S4: if the sharpness of a subimage exceeds the preset threshold R, consider the subimage sharp and record the tuple {Step, I, J, Image(n)}, where I and J denote the position of the subimage within the current image Image(n), and n indicates that this is the n-th captured image containing a sharp subimage;
Step S5: the module is calibrated before leaving the factory so that each step corresponds to a depth of field value P; the recorded tuple {Step, I, J, Image(n)} can therefore be converted to {P, I, J, Image(n)}, with I ∈ [0, M−1] and J ∈ [0, N−1];
Step S6: after all steps have been traversed, the depth information of every part of the object scene is known; if X captured images containing sharp subimages have been obtained in total, they can be merged into an image array carrying the above depth information for subsequent processing.
In step S1, the image is segmented into M columns by N rows to obtain M×N subimages. The values of M and N are preset according to the specific situation: the more accurate the required depth of field information, the larger M×N needs to be.
In steps S2 and S3, the swing of the carrier 210 is decomposed into Q actions. The value of Q is likewise determined by the specific situation: the more accurate the required depth of field information, the larger Q needs to be.
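The step-to-depth conversion of step S5 amounts to a table lookup over the recorded tuples; a sketch under assumed names, with purely illustrative calibration values:

```python
def apply_calibration(records, step_to_depth):
    """records: (step, i, j, n) tuples recorded in step S4.
    Returns (P, i, j, n) tuples using the factory calibration table
    that maps each oscillation step to a depth of field value P."""
    return [(step_to_depth[s], i, j, n) for (s, i, j, n) in records]

table = {0: 100.0, 1: 250.0, 2: 600.0}       # illustrative depths in mm
converted = apply_calibration([(1, 0, 3, 7)], table)
# converted == [(250.0, 0, 3, 7)]
```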
In the depth of field acquisition device 10, an image capture unit 100, a driving unit 200, an image transmission unit 300, an image segmentation unit 400, an image processing unit 500, an image storage unit 600, an image synthesis unit 700 and an image display unit 800 are provided. The carrier 210 drives the lens 110 in an oscillating motion while, in cooperation with the depth of field acquisition method, the images are segmented and analyzed for sharpness, so that the depth of field information of the photographed object can be obtained accurately and the requirements of applications demanding precise measurement data can be met.
The above embodiments are preferred embodiments of the present invention, but embodiments of the present invention are not limited thereto; any change, modification, substitution, combination or simplification made without departing from the spirit and principle of the present invention shall be regarded as an equivalent substitute and shall fall within the protection scope of the present invention.
Claims (6)
1. A depth of field acquisition device, characterized by comprising:
an image capture unit for capturing an image signal input from the outside;
a driving unit for driving the image capture unit;
an image transmission unit for transmitting the image signal captured by the image capture unit;
an image segmentation unit for receiving the image signal from the image transmission unit and performing segmentation processing on it;
an image processing unit for performing sharpness analysis on the image signal from the image segmentation unit;
an image storage unit for receiving and storing the image signal from the image processing unit;
an image synthesis unit for performing synthesis processing on the plurality of image signals in the image storage unit; and
an image display unit for displaying the final imaging information according to the synthesis result of the image synthesis unit;
wherein the driving unit comprises a carrier, an electromagnetic conversion assembly and an elastic element: the carrier carries the image capture unit; the electromagnetic conversion assembly drives the carrier so that it moves up and down, moves horizontally, oscillates, or performs any combination of the three; and the elastic element provides an elastic restoring force for the carrier.
2. The depth of field acquisition device according to claim 1, characterized in that the electromagnetic conversion assembly comprises a magnet and a coil, the coil being mounted on the carrier and electromagnetically coupled to the magnet.
3. The depth of field acquisition device according to claim 1, characterized in that the image capture unit comprises a lens mounted on the carrier.
4. The depth of field acquisition device according to claim 1, characterized in that the image transmission unit comprises an image sensor which receives the image signal captured by the lens and transfers it to the image segmentation unit.
5. The depth of field acquisition device according to claim 1, characterized in that the elastic element comprises an upper elastic member and a lower elastic member, provided at the two ends of the carrier respectively.
6. A depth of field acquisition method, characterized by comprising the steps of:
Step S1: as specified in advance, segmenting each captured image into M×N subimages;
Step S2: as specified in advance, dividing the oscillating motion of the aforementioned carrier into Q steps;
Step S3: swinging the carrier from its initial position to the maximum swing position, capturing one image at each step position, and evaluating the sharpness of each subimage;
Step S4: if the sharpness of a subimage exceeds the preset threshold R, considering the subimage sharp and recording the tuple {Step, I, J, Image(n)}, where I and J denote the position of the subimage within the current image Image(n), and n indicates that this is the n-th captured image containing a sharp subimage;
Step S5: calibrating the module before it leaves the factory so that each step corresponds to a depth of field value P, whereby the recorded tuple {Step, I, J, Image(n)} can be converted to {P, I, J, Image(n)}, with I ∈ [0, M−1] and J ∈ [0, N−1];
Step S6: after all steps have been traversed, the depth information of every part of the object scene being known, merging the X captured images containing sharp subimages into an image array carrying the above depth information for subsequent processing.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410406660.XA CN105451010A (en) | 2014-08-18 | 2014-08-18 | Depth of field acquisition device and acquisition method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410406660.XA CN105451010A (en) | 2014-08-18 | 2014-08-18 | Depth of field acquisition device and acquisition method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105451010A true CN105451010A (en) | 2016-03-30 |
Family
ID=55560734
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410406660.XA Pending CN105451010A (en) | 2014-08-18 | 2014-08-18 | Depth of field acquisition device and acquisition method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105451010A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109447929A (en) * | 2018-10-18 | 2019-03-08 | 北京小米移动软件有限公司 | Image composition method and device |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102087460A (en) * | 2010-12-28 | 2011-06-08 | 深圳市英迈吉科技有限公司 | Automatic focusing method capable of freely selecting automatic focusing (AF) area |
CN102843571A (en) * | 2012-09-14 | 2012-12-26 | 冠捷显示科技(厦门)有限公司 | Multi-view three-dimensional display image synthesis method |
CN103308452A (en) * | 2013-05-27 | 2013-09-18 | 中国科学院自动化研究所 | Optical projection tomography image capturing method based on depth-of-field fusion |
CN103973978A (en) * | 2014-04-17 | 2014-08-06 | 华为技术有限公司 | Method and electronic device for achieving refocusing |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102087460A (en) * | 2010-12-28 | 2011-06-08 | 深圳市英迈吉科技有限公司 | Automatic focusing method capable of freely selecting automatic focusing (AF) area |
CN102843571A (en) * | 2012-09-14 | 2012-12-26 | 冠捷显示科技(厦门)有限公司 | Multi-view three-dimensional display image synthesis method |
CN103308452A (en) * | 2013-05-27 | 2013-09-18 | 中国科学院自动化研究所 | Optical projection tomography image capturing method based on depth-of-field fusion |
CN103973978A (en) * | 2014-04-17 | 2014-08-06 | 华为技术有限公司 | Method and electronic device for achieving refocusing |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109447929A (en) * | 2018-10-18 | 2019-03-08 | 北京小米移动软件有限公司 | Image composition method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
TWI521255B (en) | Automatic focusing method, and automatic focusing device, image capturing device using the same | |
US9521320B2 (en) | Image processing apparatus, image capturing apparatus, image processing method, and storage medium | |
EP3248374B1 (en) | Method and apparatus for multiple technology depth map acquisition and fusion | |
US20120147150A1 (en) | Electronic equipment | |
US8223194B2 (en) | Image processing method and apparatus | |
JP5495683B2 (en) | Imaging apparatus and distance measuring method | |
US20150077522A1 (en) | Solid state imaging device, calculating device, and calculating program | |
RU2734018C2 (en) | Method and device for generating data representing a light field | |
Dansereau et al. | A wide-field-of-view monocentric light field camera | |
CN101911671A (en) | Imaging device and optical axis control method | |
CN101424863A (en) | Stereoscopic camera and parallax self-adapting regulating method thereof | |
JP2011059415A5 (en) | ||
US9532030B2 (en) | Integrated three-dimensional vision sensor | |
RU2734115C2 (en) | Method and device for generating data characterizing a pixel beam | |
Hu et al. | Monocular stereo measurement using high-speed catadioptric tracking | |
KR20200085099A (en) | Optical system and camera module for comprising the same | |
CN113344839B (en) | Depth image acquisition device, fusion method and terminal equipment | |
CN110336993B (en) | Depth camera control method and device, electronic equipment and storage medium | |
JP7312185B2 (en) | Camera module and its super-resolution image processing method | |
CN109089048B (en) | Multi-lens panoramic linkage device and method | |
CN105451010A (en) | Depth of field acquisition device and acquisition method | |
US10084978B2 (en) | Image capturing apparatus and image processing apparatus | |
US8593508B2 (en) | Method for composing three dimensional image with long focal length and three dimensional imaging system | |
CN104125391A (en) | Image sensor, electric device using the same and focusing method of the electric device | |
US20100194865A1 (en) | Method of generating and displaying a 3d image and apparatus for performing the method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20160330 |