CN105357515A - Color and depth imaging method and device based on structured light and light-field imaging - Google Patents
- Publication number: CN105357515A (application CN201510958294.3A)
- Authority: CN (China)
- Prior art keywords: depth, image, light, structured light, imaging
- Prior art date
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H04N13/00 — Stereoscopic video systems; multi-view video systems; details thereof (H — Electricity; H04 — Electric communication technique; H04N — Pictorial communication, e.g. television)
- H04N13/15 — Processing image signals for colour aspects of image signals
- H04N13/156 — Mixing image signals
- H04N13/257 — Image signal generators: colour aspects
- H04N13/271 — Image signal generators wherein the generated image signals comprise depth maps or disparity maps
Abstract
The invention discloses a color and depth imaging method based on structured light and light-field imaging. The method comprises the following steps: acquiring a light-field image of the scene that a user intends to shoot; performing scene analysis on the acquired light-field image to determine the specific scene type of the scene; according to a preset correspondence between scene types and structured-light modes, obtaining the structured-light pattern corresponding to the scene and projecting it onto the scene; acquiring a depth image of the scene under structured-light illumination; and extracting the color image and the depth image contained in the light-field image, then processing and fusing them with the structured-light depth image to finally obtain a composite image carrying both color and depth for the scene. The method and the corresponding device combine active and passive depth imaging so that the two modes complement each other's advantages, realizing color and depth imaging while guaranteeing imaging quality.
Description
Technical field
The present invention relates to the technical fields of optical engineering and computer vision, and in particular to a color and depth imaging method based on structured light and light-field imaging, and to a device for the method.
Background art
At present, with the development of science and technology, three-dimensional scene information provides more possibilities for computer vision applications such as image segmentation, object detection and object tracking. Compared with a two-dimensional image, a depth image carries three-dimensional feature information of objects, i.e. depth information, and is therefore widely used as a general representation of three-dimensional scene information. Imaging devices that capture color and depth information simultaneously, enabling the detection and recognition of three-dimensional objects, are thus becoming a new focus of the computer vision field, and the acquisition of depth images is the key technology involved.
In computer vision systems, methods for obtaining depth images fall into two classes: passive and active. Passive methods mainly rely on ambient illumination; the conventional approach is binocular stereo vision, and light-field imaging, as an emerging passive imaging mode, has recently attracted increasing attention for depth estimation. Light-field imaging is an important branch of computational imaging. A light field is the radiance field in space that records both the position and the direction of light rays; compared with traditional imaging, which records only two-dimensional data, light-field imaging captures much richer image information and thus opens many new directions for computational imaging. At present, light-field imaging takes three main forms — microlens arrays, camera arrays and coded masks — and has found application in extending depth of field, depth estimation, super-resolution and other fields.
Compared with passive methods, the most distinctive feature of active depth acquisition is that the device itself must emit energy to obtain depth information. In recent years, active depth sensing has found increasingly rich applications on the market. It mainly includes time-of-flight (TOF), structured light, laser point scanning and similar methods. Structured light is light with a specific pattern, such as point, line or plane patterns. The principle of structured-light depth acquisition is to project the pattern onto the scene and capture the result with an image sensor. Because the projected pattern deforms with the shape of the objects, the position and degree of deformation of the pattern in the captured image allow the depth of each scene point to be computed by triangulation. Structured-light depth imaging provides convenient, fast and high-accuracy three-dimensional information and is widely used in fields such as automotive, gaming and medicine.
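The triangulation principle just described can be sketched numerically. This is a generic illustration — the focal length, baseline and disparity are assumed example values, not parameters from the patent:

```python
# Minimal sketch of structured-light depth by triangulation.
# For a projector-camera pair with baseline b (metres) and focal length f
# (pixels), a pattern feature observed with disparity d (pixels) lies at
# depth z = f * b / d. All numbers below are illustrative assumptions.

def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth (metres) of one pattern point from its observed disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

# A projected stripe expected at column 320 is observed at column 300,
# i.e. 20 px of pattern deformation:
d = 320 - 300
z = depth_from_disparity(f_px=800.0, baseline_m=0.1, disparity_px=d)
print(round(z, 2))  # 800 * 0.1 / 20 = 4.0 (metres)
```

Repeating this per detected pattern point over the image yields the dense depth map the passage refers to.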
However, although light-field imaging can capture sufficient scene information in a single exposure and obtain depth via computational-imaging algorithms — realizing simultaneous color and depth acquisition — its depth extraction performs poorly on objects or scenes that lack texture or obvious gray-level variation. Conversely, the great advantage of structured-light depth imaging is improved depth precision in exactly those cases: on smooth, texture-poor surface regions without obvious gray-level or shape variation, the projected structured light forms distinct fringes on the surface, avoiding the difficulty of correlation matching in information-poor regions. Structured-light depth maps, however, commonly suffer from missing depth regions, noise and similar problems.
Therefore, a technology is urgently needed that combines active and passive depth imaging so that the advantages of the two complement each other, realizing color and depth imaging while guaranteeing imaging quality, and thereby promoting the wide use of three-dimensional scene modeling in computer vision systems.
Summary of the invention
In view of this, the object of the present invention is to provide a color and depth imaging method based on structured light and light-field imaging, and a device for the method, which combine active and passive depth imaging so that the advantages of the two complement each other, realizing color and depth imaging while guaranteeing imaging quality. This promotes the wide use of three-dimensional scene modeling in computer vision systems, improves the user's product experience, and is of great practical significance.
To this end, the invention provides a color and depth imaging method based on structured light and light-field imaging, comprising the steps of:
Step 1: acquiring a light-field image of the scene that the user intends to shoot;
Step 2: performing scene analysis on the acquired light-field image to obtain the specific scene type of the scene;
Step 3: according to the preset correspondence between scene types and structured-light modes, obtaining the structured-light pattern corresponding to the scene and projecting structured light of that pattern onto the scene;
Step 4: acquiring a depth image of the scene under structured-light illumination;
Step 5: extracting the color image and the depth image contained in the light-field image, processing and fusing them with the depth image of the scene under structured light, and obtaining the final composite image carrying both color and depth for the scene.
In step 1, the light-field image of the scene may be acquired by a light-field imaging sensor.
In step 4, the depth image of the scene under structured light may be acquired by a depth imaging sensor.
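The five steps above can be sketched as a single pipeline. Everything below — the function names, the scene labels, the pattern names and the stub captures — is a hypothetical illustration, since the patent fixes no API:

```python
# Sketch of the five-step method as one pipeline. The capture and analysis
# functions are placeholders standing in for real sensor hardware.
import numpy as np

# Preset one-to-one mapping between scene types and structured-light modes
# (step 3); the labels and pattern names are assumptions for this sketch.
PATTERN_FOR_SCENE = {"indoor_static": "speckle", "outdoor_static": "stripes"}

def classify_scene(light_field: np.ndarray) -> str:
    # Placeholder scene analysis (step 2): darker images treated as indoor.
    return "indoor_static" if light_field.mean() < 128 else "outdoor_static"

def color_depth_pipeline(light_field: np.ndarray):
    scene_type = classify_scene(light_field)            # step 2
    pattern = PATTERN_FOR_SCENE[scene_type]             # step 3: pick pattern
    sl_depth = np.zeros(light_field.shape[:2])          # step 4 (stub capture)
    color = light_field                                 # step 5: extract color,
    lf_depth = np.zeros(light_field.shape[:2])          #   light-field depth,
    fused = 0.5 * lf_depth + 0.5 * sl_depth             #   and fuse depths
    return color, fused, pattern

color, depth, pattern = color_depth_pipeline(np.full((4, 4), 100.0))
print(pattern)  # 'speckle' (mean 100 < 128 -> indoor_static)
```

A real implementation would replace the stubs with the sensor drivers and the extraction/fusion methods described later in the text.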
In addition, the invention provides a color and depth imaging device based on structured light and light-field imaging, comprising:
a light-field imaging unit, for acquiring the light-field image of the scene the user intends to shoot and sending it to the image processing unit;
a logic control unit, connected with the light-field imaging unit, for controlling the light-field imaging unit to acquire the light-field image; according to the specific scene type reported by the image processing unit and the preset correspondence between scene types and structured-light modes, it obtains the structured-light pattern corresponding to the scene, and sends an illumination-mode control signal to the light source and a depth-imaging control signal to the structured-light depth imaging unit, the illumination-mode control signal carrying the structured-light pattern information corresponding to the scene;
a light source, connected with the logic control unit, for generating structured light of different modes and, according to the illumination-mode control signal sent by the logic control unit, projecting the structured-light pattern corresponding to the scene onto the scene;
a structured-light depth imaging unit, connected with the logic control unit, for acquiring in real time, according to the depth-imaging control signal sent by the logic control unit, the depth image of the scene under structured light, and sending it to the image processing unit;
an image processing unit, connected with the light-field imaging unit, the logic control unit and the structured-light depth imaging unit, for receiving the light-field image of the scene acquired by the light-field imaging unit, performing scene analysis on it to obtain the specific scene type and sending that type to the logic control unit; it also extracts the color image and the depth image from the light-field image, then processes and fuses them with the depth image of the scene under structured light to obtain the final composite image carrying both color and depth.
The device may further comprise a half-reflecting half-transmitting (beam-splitting) prism and a main lens arranged directly in front of the prism, wherein:
the main lens converges the image beam reflected from the scene and projects it onto the prism;
the prism splits the incident image beam from the main lens into a first image beam and a second image beam: the first beam is reflected toward the structured-light depth imaging unit, which captures the depth image of the scene under structured light, and the second beam is transmitted to the light-field imaging unit, which captures the light-field image of the scene.
The structured-light depth imaging unit may be a depth imaging sensor.
The light-field imaging unit may comprise a microlens array and a light-field imaging sensor, the light-field imaging sensor being arranged vertically with the microlens array positioned directly in front of it.
The beam-splitting prism is tilted directly in front of the microlens array, at a predetermined angle to the plane of the light-field imaging sensor.
The depth imaging sensor is arranged horizontally directly below the prism, the prism likewise forming a predetermined angle with the plane of the depth imaging sensor.
The light-field imaging sensor may be a CMOS or CCD image sensor, and the depth imaging sensor may likewise be a CMOS or CCD image sensor.
The logic control unit may be a field-programmable gate array (FPGA) or an ARM microcontroller logic control chip;
the image processing unit may be a central processing unit (CPU) or a digital signal processor (DSP) installed in the color and depth imaging device.
As can be seen from the technical solution above, compared with the prior art, the color and depth imaging method based on structured light and light-field imaging and its device provided by the invention combine active and passive depth imaging so that the advantages of the two complement each other, realizing color and depth imaging while guaranteeing imaging quality. This promotes the wide use of three-dimensional scene modeling in computer vision systems, improves the user's product experience, and is of great practical significance.
Brief description of the drawings
Fig. 1 is a flow chart of the color and depth imaging method based on structured light and light-field imaging provided by the invention;
Fig. 2 is a structural diagram of the color and depth imaging device based on structured light and light-field imaging provided by the invention;
Fig. 3 is a schematic diagram of an embodiment of the device collecting, splitting and processing the external image beam;
Fig. 4 is a simplified front view of the positional relationship of parts of an embodiment of the device during optical imaging.
Detailed description of the embodiments
To enable those skilled in the art to better understand the solution of the present invention, the invention is described in further detail below with reference to the drawings and embodiments.
Referring to Fig. 1, the color and depth imaging method based on structured light and light-field imaging provided by the invention comprises the following steps:
Step S101: acquire the light-field image of the scene the user intends to shoot;
Step S102: perform scene analysis on the acquired light-field image to obtain the specific scene type of the scene;
It should be noted that scene understanding may be carried out by the image processing unit in the color and depth imaging device shown in Figs. 2 to 4: based on the image content, the unit judges the scene type to which the scene belongs, e.g. indoor or outdoor, static or dynamic, and the shape, geometry and color attributes of scene objects.
In a specific implementation, the scene understanding process mainly comprises image segmentation, feature extraction, object recognition and scene classification, combining data learning and mining, biological cognitive characteristics and statistical modeling to achieve scene cognition and understanding; for example, methods based on hidden Markov models or on conditional random field models may be used.
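As a deliberately simple illustration of such scene analysis — far cruder than the HMM/CRF methods mentioned — a scene could be bucketed by a texture statistic; the labels and the threshold are assumptions for this sketch:

```python
# Toy scene-type analysis: use gray-level variance as a texture cue.
# Real implementations combine segmentation, recognition and statistical
# models (e.g. CRFs); this only illustrates the input/output of the step.
import numpy as np

def scene_type(gray: np.ndarray, texture_thresh: float = 50.0) -> str:
    """Label a grayscale image as texture-rich or texture-poor (assumed labels)."""
    return "texture_rich" if gray.var() > texture_thresh else "texture_poor"

flat_wall = np.full((8, 8), 120.0)               # no gray-level variation
checker = np.indices((8, 8)).sum(0) % 2 * 255.0  # strongly textured pattern
print(scene_type(flat_wall))  # texture_poor (variance 0)
print(scene_type(checker))    # texture_rich
```

A texture-poor verdict would then steer step S103 toward a dense structured-light pattern, matching the complementarity argued in the background section.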
Step S103: according to the preset correspondence between scene types and structured-light modes, obtain the structured-light pattern corresponding to the scene and project structured light of that pattern onto the scene;
Step S104: acquire the depth image of the scene under the current structured light;
Step S105: extract the color image and the depth image contained in the light-field image, process and fuse them with the depth image of the scene under structured light, and obtain the final composite image carrying both color and depth for the scene.
It should be noted that, owing to the structure of the light-field imaging unit in the device shown in Figs. 2 to 4, depth information can be obtained at the same time as the two-dimensional scene information, and a two-dimensional color image and a depth image can be extracted relatively easily from the raw light-field image captured by the light-field imaging unit using known techniques.
In a specific implementation, the color image may be obtained by the sub-image extraction method of Ng, and the depth image may be obtained by a method based on the epipolar plane image (EPI).
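The EPI idea can be illustrated on a synthetic epipolar-plane image: a scene point at constant depth traces a line across the views whose slope (shift per view) is its disparity, from which depth follows by triangulation. This toy construction is not Ng's method or any specific published algorithm:

```python
# Toy EPI depth cue. An epipolar-plane image stacks one scanline from each
# view (rows = views, columns = pixels); a point at constant depth shifts
# by a fixed disparity per view. Synthetic example with disparity 2 px/view.
import numpy as np

n_views, width, disp = 5, 32, 2
epi = np.zeros((n_views, width))
for v in range(n_views):
    epi[v, 4 + disp * v] = 1.0  # one bright scene point sliding across views

def epi_disparity(epi: np.ndarray) -> float:
    """Mean per-view shift of the brightest pixel, i.e. the EPI line's slope."""
    cols = epi.argmax(axis=1)
    return float(np.diff(cols).mean())

print(epi_disparity(epi))  # 2.0 -> depth follows as f*b/2 by triangulation
```

Real light-field data needs a robust slope estimator (e.g. structure tensors) rather than a single bright pixel, but the geometry is the same.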
It should be noted that the depth image extracted from the light-field image has limited precision on smooth, texture-poor surface regions without obvious gray-level or shape variation, whereas the structured-light depth image achieves better precision in exactly these information-poor regions but is more strongly affected by noise and similar problems. Fusion therefore applies an image fusion method so that the two depth images complement each other's advantages, yielding a depth image of good overall quality; combined with the color image extracted from the light-field image, this gives the final color and depth images of the scene. The specific fusion algorithm is not detailed in this patent; an existing mainstream fusion method may be used, or a new one may be proposed.
In a specific implementation, the image fusion method may be linear weighted fusion, pyramid image fusion, wavelet-based image fusion, neural-network image fusion, etc.
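A minimal linear weighted fusion along the lines described — trusting the structured-light depth on texture-poor pixels and the light-field depth elsewhere — might look as follows; the weighting rule and threshold are assumptions, since the patent leaves the fusion algorithm open:

```python
# Linear weighted fusion of two depth maps. The texture-based weight is an
# illustrative choice, not the patent's prescribed method.
import numpy as np

def fuse_depths(lf_depth, sl_depth, gray, tex_thresh=10.0, w_lf=0.8):
    """Per-pixel blend: favour the light-field depth where local texture
    exists, the structured-light depth on flat, texture-poor pixels."""
    gy, gx = np.gradient(gray)                   # gray-level variation
    textured = np.hypot(gx, gy) > tex_thresh     # texture mask
    w = np.where(textured, w_lf, 1.0 - w_lf)     # weight for lf_depth
    return w * lf_depth + (1.0 - w) * sl_depth

gray = np.zeros((4, 4))           # completely flat (texture-poor) image
lf = np.full((4, 4), 3.0)         # light-field depth estimate
sl = np.full((4, 4), 5.0)         # structured-light depth estimate
fused = fuse_depths(lf, sl, gray)
print(fused[0, 0])                # 0.2*3 + 0.8*5 = 4.6: sl dominates here
```

The mainstream alternatives listed above (pyramid, wavelet, neural-network fusion) replace this per-pixel weight with multi-scale or learned combination rules.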
In the present invention, in step S101, the scene the user intends to shoot is set according to the user's needs and may be any scene the user selects.
In step S101, in a specific implementation, the light-field image of the scene may be acquired by a light-field imaging sensor.
In step S103, the user needs to preset the correspondence between scene types and structured-light modes, which is specifically a one-to-one relation.
In step S103, structured light of different modes may be projected onto the scene by a light source. The light source may be any device that generates structured light, specifically a projector or instrument that projects light points, slits, gratings, grids or speckle onto the measured object, or a laser that generates a laser beam.
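As a toy illustration of one such pattern, a binary vertical-stripe image suitable for projection can be generated as follows (the resolution and period are arbitrary choices, not values from the patent):

```python
# Generate a vertical-stripe structured-light pattern as a binary image.
import numpy as np

def stripe_pattern(height: int, width: int, period: int = 8) -> np.ndarray:
    """Binary vertical stripes: columns alternate on/off every period/2 px."""
    cols = np.arange(width)
    row = ((cols % period) < period // 2).astype(np.uint8)
    return row[None, :].repeat(height, axis=0)

p = stripe_pattern(4, 16, period=8)
print(p[0, :8].tolist())  # [1, 1, 1, 1, 0, 0, 0, 0]
```

Point, grid or speckle patterns could be generated analogously and handed to the projector selected by the illumination-mode control signal.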
In step S104, in a specific implementation, the depth image of the scene under the current structured light may be acquired by a depth imaging sensor.
Referring to Fig. 2, in a specific implementation, in order to carry out the color and depth imaging method based on structured light and light-field imaging provided above, the invention also provides a color and depth imaging device based on structured light and light-field imaging, comprising:
a light-field imaging unit 101, for acquiring the light-field image of the scene the user intends to shoot and sending it to the image processing unit 105;
a logic control unit 102, connected with the light-field imaging unit 101, for controlling the light-field imaging unit 101 to acquire the light-field image according to the imaging start instruction input by the user; according to the specific scene type reported by the image processing unit and the preset correspondence between scene types and structured-light modes, it obtains the structured-light pattern corresponding to the scene, and sends an illumination-mode control signal to the light source 103 and a depth-imaging control signal to the structured-light depth imaging unit 104, the illumination-mode control signal carrying the structured-light pattern information corresponding to the scene;
a light source 103, connected with the logic control unit 102, for generating structured light of different modes (point, line and plane patterns) and, according to the illumination-mode control signal sent by the logic control unit 102, projecting the structured-light pattern corresponding to the scene onto the scene;
a structured-light depth imaging unit 104, connected with the logic control unit 102, for acquiring in real time, according to the depth-imaging control signal sent by the logic control unit 102, the depth image of the scene under structured light and sending it to the image processing unit 105;
an image processing unit 105, connected with the light-field imaging unit 101, the logic control unit 102 and the structured-light depth imaging unit 104, for receiving the light-field image of the scene acquired by the light-field imaging unit 101, performing scene analysis on it to obtain the specific scene type and sending that type to the logic control unit; it also extracts the color image and the depth image from the light-field image, then processes and fuses them with the depth image of the scene under structured light to obtain the final composite image carrying both color and depth.
In a specific implementation, the logic control unit 102 may be connected with user input devices such as a keyboard, mouse, touch screen or tablet, so that the user can conveniently input the imaging start instruction.
It should be noted that the purpose of the light source 103 is to enable the structured-light depth imaging unit 104 to capture the depth image of the scene, completing the structured-light depth imaging. The light source 103 may be any device that generates structured light, specifically a projector or instrument that projects light points, slits, gratings, grids or speckle onto the measured object, or a laser that generates a laser beam.
It should be noted that, as before, scene understanding may be carried out by the image processing unit in the device shown in Figs. 2 to 4: based on the image content, the unit judges the scene type to which the scene belongs, e.g. indoor or outdoor, static or dynamic, and the shape, geometry and color attributes of scene objects.
In a specific implementation, the scene understanding process mainly comprises image segmentation, feature extraction, object recognition and scene classification, combining data learning and mining, biological cognitive characteristics and statistical modeling to achieve scene cognition and understanding; for example, methods based on hidden Markov models or on conditional random field models may be used.
It should likewise be noted that, owing to the structure of the light-field imaging unit in the device shown in Figs. 2 to 4, depth information can be obtained at the same time as the two-dimensional scene information, and a two-dimensional color image and a depth image can be extracted relatively easily from the raw light-field image captured by the light-field imaging unit using known techniques.
In a specific implementation, the color image may be obtained by the sub-image extraction method of Ng, and the depth image may be obtained by a method based on the epipolar plane image (EPI).
As stated above, the depth image extracted from the light-field image has limited precision on smooth, texture-poor surface regions without obvious gray-level or shape variation, whereas the structured-light depth image achieves better precision in exactly these information-poor regions but is more strongly affected by noise and similar problems. Fusion therefore applies an image fusion method so that the two depth images complement each other's advantages, yielding a depth image of good overall quality; combined with the color image extracted from the light-field image, this gives the final color and depth images of the scene. The specific fusion algorithm is not detailed in this patent; an existing mainstream fusion method may be used, or a new one may be proposed.
In a specific implementation, the image fusion method may be linear weighted fusion, pyramid image fusion, wavelet-based image fusion, neural-network image fusion, etc.
For the present invention, in order to ensure the light field image collection effect of described optical field imaging unit 101, and ensure the collection effect of depth image of structured light Depth Imaging unit 104 structure based light, the surface structure of reasonable disposition light field Depth Imaging of the present invention device in production practices simultaneously.In specific implementation, see Fig. 3, described light field Depth Imaging device also comprises half-reflection and half-transmission prism 106 and main lens 107.Described half-reflection and half-transmission prism, for the incident image light beams from main lens being divided into first via image beam and the second road image beam, and first via image beam is wherein reflected to described structured light Depth Imaging unit, and the depth image of the default photographed scene under structured light is obtained by described structured light Depth Imaging unit collection, and the second road image beam is transmitted to optical field imaging unit, and obtained the light field image presetting photographed scene by the collection of optical field imaging unit.
Said main lens 107 is used to converge the image beam reflected from said preset photographed scene and project it onto said half-reflection and half-transmission prism;
It should be noted that, for the present invention, the arrangement of said half-reflection and half-transmission prism 106 and main lens 107 ensures that the optical axes of the structured-light depth imaging and the light field imaging are coaxial, reducing the parallax between the images acquired by the depth imaging sensor and the light field imaging sensor.
Referring also to Fig. 4, Fig. 4 is a simplified front-view schematic diagram of the positional relations of the components of an embodiment of the color and depth imaging device based on structured light and light field imaging provided by the present invention during optical imaging. Said structured light depth imaging unit 104 is preferably a depth imaging sensor 1041; said light field imaging unit 101 comprises a microlens array 1011 and a light field imaging sensor 1012; said light field imaging sensor 1012 and the microlens array 1011 are arranged vertically, with said microlens array 1011 located directly in front of said light field imaging sensor 1012. The microlens array 1011 is composed of plano-convex microlenses and receives the beam transmitted through the half-reflection and half-transmission prism, thereby completing the light field imaging;
In the present invention, in specific implementation, said half-reflection and half-transmission prism 106 is obliquely arranged directly in front of said microlens array 1011, at a predetermined angle to the plane of the light field imaging sensor 1012 (set according to the needs of the user, for example 45°);
Referring to Fig. 4, said depth imaging sensor 1041 is arranged horizontally directly below said half-reflection and half-transmission prism 106, which is at a predetermined angle to the plane of said depth imaging sensor 1041 (set according to the needs of the user, for example 45°); said main lens 107 is arranged directly in front of said half-reflection and half-transmission prism 106.
With the above structural arrangement, it should be noted that, for the present invention, said main lens 107 converges the image beam reflected from said preset photographed scene (i.e. object space) 100 and projects it onto the half-reflection and half-transmission prism 106 directly behind it;
Referring to Fig. 3 and Fig. 4, said half-reflection and half-transmission prism 106 splits the incident image beam from the main lens 107 into a first image beam and a second image beam: the first image beam is reflected to said structured light depth imaging unit 104 (e.g. depth imaging sensor 1041), which acquires the depth image of the preset photographed scene 100 under structured light, and the second image beam is transmitted to the light field imaging unit 101 (comprising the microlens array 1011 and the light field imaging sensor 1012), which acquires the light field image of the preset photographed scene 100; that is, the two beams are used for structured-light depth imaging and light field imaging, respectively.
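As a non-limiting illustration of how a color view may be extracted from the raw image acquired through the microlens array 1011, the sketch below samples the pixel under the center of each microlens to form the central sub-aperture view. The block layout and the `ml_size` parameter are assumptions of this sketch, not the patent's specified sensor format.

```python
import numpy as np

def center_view(raw, ml_size):
    """Extract the central sub-aperture view from a microlens-array light
    field image. raw is H x W (optionally x channels); each ml_size x
    ml_size block is assumed to lie under one microlens."""
    h, w = raw.shape[:2]
    c = ml_size // 2
    # Take the pixel under the center of each microlens block.
    return raw[c:h:ml_size, c:w:ml_size]
```

Real light field decoding must additionally calibrate the microlens centers and may demosaic the color filter array first; those steps are omitted here.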
It should be noted that, for the present invention, the structural design of Fig. 4 above effectively ensures that the optical axes of the structured-light depth imaging and the light field imaging are coaxial, reducing the parallax between the images acquired by the depth imaging sensor 1041 and the light field imaging sensor 1012.
In the present invention, in specific implementation, said structured light depth imaging unit 104 comprises at least one depth imaging sensor, which preferably adopts a CMOS or CCD imaging sensor.
In the present invention, in specific implementation, said light field imaging sensor 1012 preferably adopts a CMOS or CCD imaging sensor.
It should be noted that, for the present invention, as mentioned above, said logic control unit 102 is connected with the light source 103, the structured light depth imaging unit 104 and the light field imaging unit 101, respectively, for realizing closed-loop feedback control of the light source 103, the structured light depth imaging unit 104 and the light field imaging unit 101.
In the present invention, in specific implementation, said logic control unit 102 preferably adopts an FPGA (field-programmable gate array) or ARM (microcontroller) logic control chip.
In the present invention, said image processing unit 105 may be a central processing unit (CPU) or a digital signal processor (DSP) installed in said device.
Therefore, as can be seen from the above technical scheme, the color and depth imaging method and device based on structured light and light field imaging provided by the present invention, on the basis of microlens-array light field imaging, exploit the high depth-extraction precision of structured light under special conditions while avoiding the unfavorable factors of structured-light depth imaging through light field depth estimation, realizing the complementary advantages of the two modes in depth imaging while simultaneously obtaining the color image of the same scene. This improves the imaging quality and application flexibility of color and depth imaging and helps promote the wide application of three-dimensional scene modeling in computer vision systems.
In summary, compared with the prior art, the present invention provides a color and depth imaging method and device based on structured light and light field imaging, which combine active and passive depth imaging modes and utilize their complementary advantages to realize color and depth imaging while ensuring imaging quality. This helps promote the wide application of three-dimensional scene modeling in computer vision systems, improves the product experience of users, and is of great practical significance.
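The overall flow described above (acquire light field, analyze scene, select and project a structured light pattern, acquire structured-light depth, extract and fuse) can be sketched as follows. The scene-type labels, the pattern names in `PATTERN_BY_SCENE`, and the brightness-based `classify_scene` placeholder are illustrative assumptions; the patent leaves the concrete scene analysis and scene-to-pattern correspondence to the implementer.

```python
import numpy as np

# Hypothetical correspondence between scene types and structured light patterns.
PATTERN_BY_SCENE = {"indoor": "gray_code", "outdoor": "speckle"}

def classify_scene(light_field):
    """Placeholder scene analysis: label by mean brightness (a stand-in for
    the unspecified scene-analysis step of the method)."""
    return "indoor" if np.mean(light_field) < 128 else "outdoor"

def imaging_pipeline(capture_light_field, capture_structured_depth,
                     extract_color, extract_depth, fuse):
    lf = capture_light_field()                 # step 1: acquire light field
    scene = classify_scene(lf)                 # step 2: scene analysis
    pattern = PATTERN_BY_SCENE[scene]          # step 3: pattern lookup
    d_sl = capture_structured_depth(pattern)   # step 4: structured-light depth
    color = extract_color(lf)                  # step 5: extract and fuse
    d_lf = extract_depth(lf)
    return color, fuse(d_lf, d_sl)
```

The hardware-facing callables (`capture_light_field`, `capture_structured_depth`, etc.) are injected so the control flow can be exercised independently of any particular sensor.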
The above is only the preferred embodiment of the present invention. It should be pointed out that, for those skilled in the art, several improvements and modifications can also be made without departing from the principles of the present invention, and these improvements and modifications should also be considered within the protection scope of the present invention.
Claims (8)
1. A color and depth imaging method based on structured light and light field imaging, characterized by comprising the steps of:
The first step: acquiring a light field image of the preset photographed scene that the user needs to shoot;
The second step: performing scene analysis on the acquired light field image to obtain the concrete scene type of the preset photographed scene;
The third step: according to the preset correspondence between different scene types and different patterns of structured light, obtaining the structured light of the pattern corresponding to said preset photographed scene, and projecting the structured light of this corresponding pattern into the preset photographed scene;
The fourth step: acquiring the depth image of the preset photographed scene under the structured light;
The fifth step: extracting the color image and the depth image from the light field image of said preset photographed scene, processing and fusing them with the depth image of said preset photographed scene under the structured light, and obtaining the final color and depth composite images of said preset photographed scene.
2. the method for claim 1, is characterized in that, in the described first step, gathers by optical field imaging transducer the light field image of default photographed scene that user needs to take.
3. the method for claim 1, is characterized in that, in described 4th step, gathers the depth image of this default photographed scene under structured light by Depth Imaging transducer.
4. A color and depth imaging device based on structured light and light field imaging, characterized by comprising:
a light field imaging unit, for acquiring the light field image of the preset photographed scene that the user needs to shoot and sending it to an image processing unit;
a logic control unit, connected with the light field imaging unit, for controlling the light field imaging unit to perform light field image acquisition, and for, according to the concrete scene type of the preset photographed scene sent by the image processing unit and the preset correspondence between different scene types and different patterns of structured light, obtaining the structured light of the pattern corresponding to said preset photographed scene, sending an illumination mode control signal to a light source, and sending a depth imaging control signal to a structured light depth imaging unit, said illumination mode control signal comprising information on the structured light of the pattern corresponding to said preset photographed scene;
a light source, connected with the logic control unit, for generating structured light of different patterns and, according to the illumination mode control signal sent by said logic control unit, projecting the structured light of the pattern corresponding to said preset photographed scene into the preset photographed scene;
a structured light depth imaging unit, connected with the logic control unit, for acquiring in real time, according to the depth imaging control signal sent by said logic control unit, the depth image of the preset photographed scene under the structured light and sending it to the image processing unit;
an image processing unit, connected with the light field imaging unit, the logic control unit and the structured light depth imaging unit respectively, for receiving the light field image of said preset photographed scene obtained by said light field imaging unit, performing scene analysis on said light field image to obtain the concrete scene type of the preset photographed scene and sending it to the logic control unit, extracting the color image and the depth image from the light field image, and then processing and fusing the color image and depth image from said light field image with the depth image of said preset photographed scene under the structured light, to obtain the final color and depth composite images of said preset photographed scene.
5. The color and depth imaging device of claim 4, characterized by further comprising a half-reflection and half-transmission prism and a main lens, said main lens being arranged directly in front of said half-reflection and half-transmission prism, wherein:
said main lens is used to converge the image beam reflected from said preset photographed scene and project it onto said half-reflection and half-transmission prism;
said half-reflection and half-transmission prism is used to split the incident image beam from the main lens into a first image beam and a second image beam, reflect the first image beam to said structured light depth imaging unit, which acquires the depth image of the preset photographed scene under the structured light, and transmit the second image beam to the light field imaging unit, which acquires the light field image of the preset photographed scene.
6. The color and depth imaging device of claim 5, characterized in that said structured light depth imaging unit is a depth imaging sensor;
said light field imaging unit comprises a microlens array and a light field imaging sensor, said light field imaging sensor and the microlens array being arranged vertically, and said microlens array being located directly in front of said light field imaging sensor;
said half-reflection and half-transmission prism is obliquely arranged directly in front of said microlens array, at a predetermined angle to the plane of the light field imaging sensor;
said depth imaging sensor is arranged horizontally directly below said half-reflection and half-transmission prism, which is at a predetermined angle to the plane of said depth imaging sensor.
7. The color and depth imaging device of claim 6, characterized in that said light field imaging sensor is a CMOS or CCD imaging sensor, and said depth imaging sensor adopts a CMOS or CCD imaging sensor.
8. The color and depth imaging device of claim 4, characterized in that said logic control unit is a field-programmable gate array (FPGA) or microcontroller (ARM) logic control chip;
said image processing unit is a central processing unit (CPU) or digital signal processor (DSP) installed in said color and depth imaging device.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510958294.3A CN105357515B (en) | 2015-12-18 | 2015-12-18 | Color and depth imaging method and device based on structured light and light-field imaging |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105357515A true CN105357515A (en) | 2016-02-24 |
CN105357515B CN105357515B (en) | 2017-05-03 |
Family
ID=55333363
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510958294.3A Active CN105357515B (en) | 2015-12-18 | 2015-12-18 | Color and depth imaging method and device based on structured light and light-field imaging |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105357515B (en) |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106161907A (en) * | 2016-08-31 | 2016-11-23 | 北京的卢深视科技有限公司 | Obtain the security protection network cameras of scene three-dimensional information |
CN106228507A (en) * | 2016-07-11 | 2016-12-14 | 天津中科智能识别产业技术研究院有限公司 | A kind of depth image processing method based on light field |
CN106500629A (en) * | 2016-11-29 | 2017-03-15 | 深圳大学 | A kind of microscopic three-dimensional measurement apparatus and system |
CN107483845A (en) * | 2017-07-31 | 2017-12-15 | 广东欧珀移动通信有限公司 | Photographic method and its device |
CN107610171A (en) * | 2017-08-09 | 2018-01-19 | 广东欧珀移动通信有限公司 | Image processing method and its device |
CN107993260A (en) * | 2017-12-14 | 2018-05-04 | 浙江工商大学 | A kind of light field image depth estimation method based on mixed type convolutional neural networks |
CN108846473A (en) * | 2018-04-10 | 2018-11-20 | 杭州电子科技大学 | Light field depth estimation method based on direction and dimension self-adaption convolutional neural networks |
CN108924407A (en) * | 2018-06-15 | 2018-11-30 | 深圳奥比中光科技有限公司 | A kind of Depth Imaging method and system |
CN109376667A (en) * | 2018-10-29 | 2019-02-22 | 北京旷视科技有限公司 | Object detection method, device and electronic equipment |
CN110896468A (en) * | 2018-09-13 | 2020-03-20 | 郑州雷动智能技术有限公司 | Time information output system for depth camera equipment |
CN111479075A (en) * | 2020-04-02 | 2020-07-31 | 青岛海信移动通信技术股份有限公司 | Photographing terminal and image processing method thereof |
CN111797690A (en) * | 2020-06-02 | 2020-10-20 | 武汉烽理光电技术有限公司 | Optical fiber perimeter intrusion identification method and device based on wavelet neural network grating array |
CN112887697A (en) * | 2021-01-21 | 2021-06-01 | 北京华捷艾米科技有限公司 | Image processing method and system |
CN114157851A (en) * | 2021-11-26 | 2022-03-08 | 长沙海润生物技术有限公司 | Wearable wound infection imaging device and imaging method |
CN114189668A (en) * | 2021-11-26 | 2022-03-15 | 长沙海润生物技术有限公司 | Wearable wound surface imaging device and imaging method |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120057040A1 (en) * | 2010-05-11 | 2012-03-08 | Byung Kwan Park | Apparatus and method for processing light field data using a mask with an attenuation pattern |
CN102833487A (en) * | 2012-08-08 | 2012-12-19 | 中国科学院自动化研究所 | Visual computing-based optical field imaging device and method |
CN102970548A (en) * | 2012-11-27 | 2013-03-13 | 西安交通大学 | Image depth sensing device |
CN103237161A (en) * | 2013-04-10 | 2013-08-07 | 中国科学院自动化研究所 | Light field imaging device and method based on digital coding control |
WO2013127974A1 (en) * | 2012-03-01 | 2013-09-06 | Iee International Electronics & Engineering S.A. | Spatially coded structured light generator |
CN104581124A (en) * | 2013-10-29 | 2015-04-29 | 汤姆逊许可公司 | Method and apparatus for generating depth map of a scene |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CP03 | Change of name, title or address |
Address after: 300457 unit 1001, block 1, msd-g1, TEDA, No.57, 2nd Street, Binhai New Area Economic and Technological Development Zone, Tianjin Patentee after: Tianjin Zhongke intelligent identification Co.,Ltd. Address before: 300457 No. 57, Second Avenue, Economic and Technological Development Zone, Binhai New Area, Tianjin Patentee before: TIANJIN ZHONGKE INTELLIGENT IDENTIFICATION INDUSTRY TECHNOLOGY RESEARCH INSTITUTE Co.,Ltd. |