CN110517306B - Binocular depth vision estimation method and system based on deep learning - Google Patents
- Publication number: CN110517306B (application CN201910814513.9A)
- Authority: CN (China)
- Prior art keywords: depth, neural network, picture, binocular, training
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/50 — Depth or shape recovery
- G06T7/55 — Depth or shape recovery from multiple images
- G06T2207/10028 — Range image; depth image; 3D point clouds
- G06T2207/20016 — Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; pyramid transform
- G06T2207/20081 — Training; learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/20228 — Disparity calculation for image-based rendering
- Y02T10/40 — Engine management systems (climate change mitigation technologies related to transportation)
Abstract
The invention discloses a binocular depth vision estimation method and system based on deep learning, comprising the following steps: collecting training data; generating, by a depth generation module, the depth distance corresponding to each picture position and storing it as a depth picture with depth information, whose size is consistent with the original picture and whose pixel values correspond to relative distance; training a neural network model; and performing depth estimation to obtain an estimated depth distance map. The beneficial effects of the invention are: the deep-learning-based binocular vision depth estimation method achieves high accuracy and strong generalization capability, supports transfer learning, and, when applied under different environmental conditions, greatly improves on the running time of traditional algorithms.
Description
Technical Field
The invention relates to the technical field of depth distance measurement with binocular cameras, and in particular to a binocular depth vision estimation method and system based on deep learning.
Background
In recent years, obtaining the distances of objects in the environment through depth estimation has become an important field in computer vision. Analogous to the two eyes of a human, a binocular camera reconstructs three-dimensional information of the environment and yields an estimate of the distances to objects in it. Binocular vision depth estimation methods based on traditional computer vision, such as the SGM algorithm, suffer from low precision and low speed; they depend heavily on the environment, are not robust in complex scenes, and struggle to meet the requirements of commercial deployment. By contrast, binocular vision depth estimation based on deep learning offers high precision, strong generalization capability, and high speed.
Disclosure of Invention
This section is intended to outline some aspects of embodiments of the invention and to briefly introduce some preferred embodiments. Some simplifications or omissions may be made in this section, in the abstract, and in the title of the application to avoid obscuring their purpose; such simplifications and omissions should not be used to limit the scope of the invention.
The present invention has been made in view of the above-described problems occurring in the prior art.
Therefore, one technical problem solved by the present invention is: providing a deep-learning-based binocular depth vision estimation method that obtains the distances of objects in the environment more accurately.
In order to solve the above technical problems, the invention provides the following technical scheme: a binocular depth vision estimation method based on deep learning, comprising the following steps: collecting training data, in which a camera module obtains two initial pictures from different visual angles; generating, by a depth generation module, the depth distance corresponding to each picture position and storing it as a depth picture with depth information, whose size is consistent with the original picture and whose pixel values correspond to relative distance; training a neural network model, in which the depth picture is input into the neural network model and the trained neural network parameters are obtained and stored through iterative training; and estimating depth, in which the camera module acquires an actual picture and inputs it into the trained neural network model for calculation to obtain an estimated depth distance map.
As a preferred embodiment of the deep-learning-based binocular depth vision estimation method of the present invention: the neural network model comprises a convolutional neural network, a feature fusion network layer and a 3D convolutional neural network layer, and training comprises the following steps: the pictures acquired by the camera module are used as input; the feature maps of the two pictures are obtained through the convolutional neural network; the output of the convolutional neural network serves as the input of the feature fusion network layer, from which fusion features are extracted; and the result is put into the 3D convolutional neural network layer to extract the depth map.
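The three-stage pipeline named here (2D convolutional feature extraction, feature fusion across the two views, 3D convolutional depth extraction) can be sketched at the level of tensor shapes. The following NumPy sketch uses stand-in functions for every stage; all names, layer choices, and sizes are illustrative assumptions rather than the patent's implementation:

```python
import numpy as np

H, W, C, D = 64, 128, 32, 48   # toy sizes; the patent's sizes are not given here

def extract_features(img):
    """Stand-in for the shared 2D convolutional feature extractor."""
    rng = np.random.default_rng(0)          # fixed seed: both views map alike
    return rng.random((C, H // 4, W // 4))  # features at 1/4 resolution

def fuse(feat_left, feat_right):
    """Stand-in for the feature fusion layer: stack concatenated left/right
    features over D candidate disparities into a 4D volume."""
    pair = np.concatenate([feat_left, feat_right])   # (2C, H/4, W/4)
    return np.stack([pair for _ in range(D)])        # (D, 2C, H/4, W/4)

def regress_disparity(volume):
    """Stand-in for the 3D CNN plus readout: reduce channels, normalize over
    the disparity axis, and take the expected disparity per pixel."""
    scores = volume.mean(axis=1)                          # (D, H/4, W/4)
    probs = np.exp(scores) / np.exp(scores).sum(axis=0)   # normalize over D
    return (probs * np.arange(D)[:, None, None]).sum(axis=0)

left = right = np.zeros((3, H, W))                        # two camera views
disp = regress_disparity(fuse(extract_features(left), extract_features(right)))
print(disp.shape)   # (16, 32): a disparity map at 1/4 resolution
```

The 1/4-resolution output mirrors the size reduction described later in the embodiment; the real model would up-sample it back to the input size.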
As a preferred embodiment of the deep-learning-based binocular depth vision estimation method of the present invention: the acquired depth pictures are put into the neural network model, and picture features are extracted through the convolutional neural network; these are input to the feature fusion network layer for feature fusion, where related features are matched and fused to generate a 3D feature map.
As a preferred embodiment of the deep-learning-based binocular depth vision estimation method of the present invention: the model training further comprises performing a 3D convolution on the 3D feature map with a convolution kernel of size 3 × 3, obtaining fused position-and-depth features. The resulting feature map is 1/4 of the original size, so the picture is up-sampled back to the original size to obtain a depth picture of the same size as the picture. Each pixel point in the picture corresponds to a group of depth signals of size d = 48, and this group of signals is normalized; with v the corresponding depth signal and S the normalized depth signal, the normalization (a softmax, reconstructed here to be consistent with the weighted-sum readout that follows) is:

S_i = exp(v_i) / Σ_{j=1..d} exp(v_j)

The obtained normalized signal is multiplied by the corresponding signal weight to obtain the depth parallax information of the corresponding position.
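A minimal numeric sketch of this step, assuming the softmax normalization written above and taking the weight of each signal to be its disparity index (an assumption for illustration, not stated in the patent):

```python
import numpy as np

d = 48
rng = np.random.default_rng(42)
v = rng.normal(size=d)                  # raw depth signals for one pixel

S = np.exp(v - v.max())                 # numerically stable softmax
S /= S.sum()                            # S now sums to 1

weights = np.arange(d, dtype=float)     # assumed weight: the disparity index
disparity = float(np.sum(S * weights))  # weighted normalized signals
```

The result is a single continuous disparity value per pixel, lying between 0 and d − 1.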
As a preferred embodiment of the deep-learning-based binocular depth vision estimation method of the present invention: the method further comprises a network update phase. The obtained parallax image is compared with the real parallax image (namely, the depth image acquired by the depth camera), and a smooth L1 loss function is adopted to obtain the loss value of the network; with x the data difference at the corresponding position, the loss function (the standard smooth L1 form, reconstructed here) is:

smoothL1(x) = 0.5 x^2, if |x| < 1; |x| − 0.5, otherwise

The loss value is back-propagated to update and iterate the parameters of the whole neural network. The above process is repeated until the network parameter updates become small and repeated training no longer yields better test results, at which point training is judged to be saturated and is finished.
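The smooth L1 comparison in the update phase can be sketched as follows; the piecewise form is the standard smooth L1, and the toy disparity values are illustrative:

```python
import numpy as np

def smooth_l1(x):
    """Standard smooth L1: quadratic inside (-1, 1), linear outside."""
    x = np.asarray(x, dtype=float)
    absx = np.abs(x)
    return np.where(absx < 1.0, 0.5 * x ** 2, absx - 0.5)

estimated = np.array([10.0, 20.0, 30.0])      # toy estimated disparities
ground_truth = np.array([10.2, 22.0, 30.0])   # toy real disparities
loss = float(smooth_l1(estimated - ground_truth).mean())
```

Small residuals are penalized quadratically while large outliers grow only linearly, which makes the loss less sensitive to occasional bad depth labels.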
As a preferred embodiment of the deep-learning-based binocular depth vision estimation method of the present invention: the convolutional neural network comprises the following training steps: the two depth pictures are simultaneously put into the residual network layer of the convolutional neural network to extract picture feature maps; the feature maps are then placed into a spatial pyramid pooling layer for feature enhancement, yielding richer feature information.
As a preferred embodiment of the deep-learning-based binocular depth vision estimation method of the present invention: the feature fusion network layer comprises the following steps: the features extracted by the convolution layers of the convolutional neural network are used as input; the rich feature information is input to the convolutional depth fusion layer; and the depth information fusion layer generates an information layer with matched depth information.
As a preferred embodiment of the deep-learning-based binocular depth vision estimation method of the present invention: the 3D convolutional neural network layer comprises the following steps: the output of the feature fusion network layer is taken as input; it is input to an Hourglass module to extract richer deep high-dimensional information; and a depth map at the size of the original image is obtained through the up-sampling layer. Its size is D × W × H, meaning D maps; assuming the pixel value of the i-th map at position (W_j, H_k) is A_ijk, the output on the corresponding depth map is: D_jk = Σ_i A_ijk · i (i = 0, 1, 2, …, D − 1).
The invention solves another technical problem: providing a binocular depth vision estimation system based on deep learning, by means of which the above method can be realized.
In order to solve the above technical problems, the invention provides the following technical scheme: the system comprises a camera module, a depth generation module and a neural network model; the camera module consists of cameras fixedly arranged on the binocular camera and is used for acquiring pictures from two different visual angles; the depth generation module generates a depth picture from the acquired pictures, with the pixel values of the image corresponding to the relative distance from the camera; the neural network model performs deep learning with the acquired pictures and saves the neural network parameters used to generate the estimated depth distance map.
As a preferred embodiment of the deep-learning-based binocular depth vision estimation system of the present invention: the camera module comprises two groups of cameras, a color binocular camera and a grayscale binocular camera; the color binocular camera collects the pictures used for training as the input of the neural network, while the grayscale binocular camera, having better contrast and resolution, is used to generate the depth map.
The beneficial effects of the invention are: the deep-learning-based binocular vision depth estimation method achieves high accuracy and strong generalization capability, supports transfer learning, and, when applied under different environmental conditions, greatly improves on the running time of traditional algorithms.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art. Wherein:
FIG. 1 is a schematic overall flow chart of a method for binocular depth vision estimation based on deep learning according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram of a binocular depth estimation architecture according to a first embodiment of the present invention;
FIG. 3 is a schematic diagram of a convolutional neural network module layer according to a first embodiment of the present invention;
fig. 4 is a schematic structural diagram of a feature fusion network layer according to a first embodiment of the present invention;
FIG. 5 is a schematic diagram of a 3D convolutional neural network layer according to a first embodiment of the present invention;
fig. 6 is a schematic structural diagram of a system for binocular depth vision estimation based on deep learning according to a second embodiment of the present invention.
Detailed Description
So that the manner in which the above recited objects, features and advantages of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, is given by reference to the embodiments, some of which are illustrated in the appended drawings. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention, without inventive effort, shall fall within the scope of the present invention.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in ways other than those described herein; persons skilled in the art will readily appreciate that the present invention is not limited to the specific embodiments disclosed below.
Further, reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic can be included in at least one implementation of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
While the embodiments of the present invention have been illustrated and described in detail in the drawings, the cross-sectional view of the device structure is not to scale in the general sense for ease of illustration, and the drawings are merely exemplary and should not be construed as limiting the scope of the invention. In addition, the three-dimensional dimensions of length, width and depth should be included in actual fabrication.
Also in the description of the present invention, it should be noted that the orientation or positional relationship indicated by the terms "upper, lower, inner and outer", etc. are based on the orientation or positional relationship shown in the drawings, are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the apparatus or elements referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first, second, or third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
The terms "mounted, connected, and coupled" should be construed broadly in this disclosure unless otherwise specifically indicated and defined, such as: can be fixed connection, detachable connection or integral connection; it may also be a mechanical connection, an electrical connection, or a direct connection, or may be indirectly connected through an intermediate medium, or may be a communication between two elements. The specific meaning of the above terms in the present invention will be understood in specific cases by those of ordinary skill in the art.
Example 1
Referring to the illustrations of figs. 1 to 5: obtaining the distances of objects in the environment through depth estimation is an important field in computer vision. Analogous to the two eyes of a human, a binocular camera reconstructs three-dimensional information of the environment to obtain an estimate of the distances to objects in it.
By calculating the parallax between the two images, distance measurement is performed directly on the scene in front (the range captured by the images) without judging what type of obstacle appears ahead. Therefore, for any type of obstacle, the necessary early warning or braking can be performed according to the change in distance information. The principle of a binocular camera is similar to that of the human eyes. The human eye can perceive the distance of an object because the two eyes present slightly different images of the same object, a difference known as "parallax". The farther the object, the smaller the parallax; the nearer the object, the larger the parallax. The magnitude of the parallax corresponds to the distance between the object and the eyes, which is also why 3D movies can create stereoscopic depth perception.
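The inverse relation between parallax and distance follows from stereo triangulation, Z = f · B / d; a small sketch with made-up focal length and baseline values (illustrative, not parameters from the patent):

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px=700.0, baseline_m=0.54):
    """Depth Z = f * B / d for pixels with positive disparity d."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(disparity_px, np.inf)   # zero disparity -> at infinity
    valid = disparity_px > 0
    depth[valid] = focal_px * baseline_m / disparity_px[valid]
    return depth

near = disparity_to_depth(np.array([40.0]))  # large parallax -> close object
far = disparity_to_depth(np.array([4.0]))    # small parallax -> distant object
```

With these example values, 40 px of disparity corresponds to roughly 9.45 m while 4 px corresponds to roughly 94.5 m, matching the "farther object, smaller parallax" observation above.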
Binocular vision depth estimation methods based on traditional computer vision, such as the SGM algorithm, suffer from low precision and low speed; they depend heavily on the environment, are not robust in complex scenes, and are difficult to bring to commercial deployment. Therefore, this embodiment provides a deep-learning-based binocular vision depth estimation method with high precision, strong generalization capability and high speed.
Specifically, the method comprises the following steps:
S1: collecting training data: the camera module 100 obtains two initial pictures 101 from different visual angles;
S2: the depth generation module 200 generates the depth distance corresponding to each picture position and stores it as a depth picture 201 with depth information; its size is consistent with the original picture, and the picture pixel values correspond to relative distance;
S3: training the neural network model 300: the depth picture 201 is input into the neural network model 300 for training, and the trained neural network parameters are obtained and stored through iterative training. The neural network model 300 comprises a convolutional neural network 301, a feature fusion network layer 302 and a 3D convolutional neural network layer 303, and training comprises the following steps: the pictures acquired by the camera module 100 are used as input; the feature maps of the two pictures are obtained through the convolutional neural network 301; the output of the convolutional neural network 301 serves as the input of the feature fusion network layer 302, from which fusion features are extracted; and the result is put into the 3D convolutional neural network layer 303 to extract the depth map.
More specifically, the method comprises the following steps,
The acquired depth picture 201 is put into the neural network model 300, and picture features are extracted through the convolutional neural network 301; these are input to the feature fusion network layer 302 for feature fusion, where related features are matched and fused to generate a 3D feature map.

A 3D convolution with kernel size 3 × 3 is then performed on the 3D feature map to obtain fused position-and-depth features. The feature map (here, the output of the 3D convolution) is 1/4 of the original size (the original size of the picture acquired by the camera; compressing to 1/4 reduces the computation, without which the network could hardly produce results in under 1 s), so the picture is up-sampled back to the original size (e.g., by bilinear interpolation) to obtain a depth picture identical to the picture size. Each pixel in the picture corresponds to a group of depth signals of size d = 48, which is normalized; with v the corresponding depth signal and S the normalized depth signal, the normalization (a softmax, reconstructed here to be consistent with the weighted-sum readout that follows) is:

S_i = exp(v_i) / Σ_{j=1..d} exp(v_j)

The obtained normalized signal is multiplied by the corresponding signal weight to obtain the depth parallax information of the corresponding position; the depth information is thus obtained directly, without subsequent processing.
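The up-sampling back to the original size can be sketched with a minimal bilinear interpolation, as suggested above; this implementation is illustrative, not the patent's:

```python
import numpy as np

def upsample_bilinear(img, scale=4):
    """Minimal bilinear up-sampling of a 2D map by an integer scale factor
    (align-corners-style sampling; illustrative only)."""
    h, w = img.shape
    H, W = h * scale, w * scale
    ys = np.linspace(0, h - 1, H)            # target row coordinates
    xs = np.linspace(0, w - 1, W)            # target column coordinates
    y0 = np.clip(ys.astype(int), 0, h - 2)
    x0 = np.clip(xs.astype(int), 0, w - 2)
    wy = (ys - y0)[:, None]                  # fractional row offsets
    wx = (xs - x0)[None, :]                  # fractional column offsets
    a = img[y0][:, x0]; b = img[y0][:, x0 + 1]
    c = img[y0 + 1][:, x0]; d = img[y0 + 1][:, x0 + 1]
    return (a * (1 - wy) * (1 - wx) + b * (1 - wy) * wx
            + c * wy * (1 - wx) + d * wy * wx)

quarter = np.arange(4.0).reshape(2, 2)   # toy 1/4-size feature map
full = upsample_bilinear(quarter)        # restored to "original" size (8 x 8)
```

Each output pixel blends its four nearest input pixels, so the enlarged depth map varies smoothly instead of showing blocky 4 × 4 cells.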
Comparing the obtained parallax image with a real parallax image, namely, a depth image acquired by a depth camera, and obtaining a loss value of a network by adopting a smooth L1 loss function, wherein the loss function formula is as follows, and x is a data difference value of a corresponding position:
The loss value is back-propagated to update and iterate the parameters of the whole neural network.
The above process is repeated until the network parameter updates become small (this can be understood through the final depth information map: if, over many training rounds, the depth at some point stays at, say, about 100, the network is no longer learning) and further iterative training no longer yields better test results; training is then judged to be saturated and is finished.
S4: depth estimation: the camera module 100 collects the actual picture and inputs it into the trained neural network model 300 for calculation to obtain the estimated depth distance map.
The training steps of the convolutional neural network 301, the feature fusion network layer 302 and the 3D convolutional neural network layer 303 are described in turn in this embodiment, specifically as follows.
The convolutional neural network 301 comprises the following training steps: the two depth pictures 201 are simultaneously put into the residual network layer of the convolutional neural network 301 to extract picture feature maps; the feature maps are then placed into a spatial pyramid pooling layer for feature enhancement, yielding richer feature information.
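The spatial pyramid pooling step can be sketched as average pooling over grids at several scales, with the pooled summaries concatenated into a richer descriptor; the grid sizes here are illustrative assumptions:

```python
import numpy as np

def spatial_pyramid_pool(feat, levels=(1, 2, 4)):
    """Average-pool a 2D feature map over n x n grids for each pyramid level
    and concatenate the pooled cells (grid sizes are illustrative)."""
    h, w = feat.shape
    pooled = []
    for n in levels:
        # reshape into (row-block, row-in-block, col-block, col-in-block)
        cells = feat.reshape(n, h // n, n, w // n).mean(axis=(1, 3))
        pooled.append(cells.ravel())
    return np.concatenate(pooled)

feat = np.arange(64.0).reshape(8, 8)   # toy single-channel feature map
desc = spatial_pyramid_pool(feat)      # length 1 + 4 + 16 = 21
```

The coarse levels summarize global context while the fine levels preserve local detail, which is the "richer feature information" the step is after.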
The feature fusion network layer 302 comprises the following steps: the features extracted by the convolution layers of the convolutional neural network 301 are used as input; the rich feature information is input to the convolutional depth fusion layer; and the depth information fusion layer generates an information layer with matched depth information.
The 3D convolutional neural network layer 303 comprises the following steps: the output of the feature fusion network layer 302 is taken as input; it is input to an Hourglass module to extract richer deep high-dimensional information; and a depth map at the size of the original image is obtained through the up-sampling layer. Its size is D × W × H, meaning D maps; assuming the pixel value of the i-th map at position (W_j, H_k) is A_ijk, the output on the corresponding depth map is: D_jk = Σ_i A_ijk · i (i = 0, 1, 2, …, D − 1).
It should be noted that the present application provides a binocular depth vision estimation method based on deep learning: two pictures from different visual angles are obtained by two cameras at fixed positions on a binocular camera, simultaneously put into the residual network of the convolutional neural network to extract picture feature maps, and then put into the spatial pyramid pooling layer for feature enhancement. The two correlated picture features are then fused. The purpose of feature fusion is as follows: because the two pictures differ only in visual angle, the feature maps they present are correlated and contain many identical or similar matching features; these features are fused together for the subsequent depth extraction.
The fusion-layer neural network is used to learn appropriate feature matching under supervision. Unlike traditional binocular matching algorithms, here the matching is learned by the neural network: relatively poor feature matching leads to poor results, and supervision in turn adjusts the neural network toward better feature matching according to the quality of the results.
Finally, only the effective distance range of the camera needs to be given; for example, with a maximum distance of 200 meters, the neural network gives depth distance information within 0-200 meters on the picture. Beyond the effective distance range the confidence is not high, so the depth information is capped at 200 meters.
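The effective-range rule described here (estimates beyond the camera's reliable range are reported at the maximum) can be sketched as:

```python
import numpy as np

MAX_RANGE_M = 200.0   # example effective range from the text

def clamp_to_effective_range(depth_m):
    """Cap depth estimates at the camera's effective maximum distance."""
    depth_m = np.asarray(depth_m, dtype=float)
    return np.minimum(depth_m, MAX_RANGE_M)

estimates = np.array([12.5, 87.0, 350.0])       # toy depth estimates in meters
clamped = clamp_to_effective_range(estimates)   # 350 m is capped at 200 m
```

Capping rather than discarding keeps the depth map dense while flagging that far pixels carry no more than "at least 200 m" of information.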
This embodiment provides a deep-learning-based binocular vision depth estimation method with high precision: on the KITTI data set it achieves an average error pixel percentage of 0.83%, whereas the traditional binocular depth estimation method has an error rate of 3.57%. The method also has strong generalization capability and supports transfer learning, where transfer learning here means: with the same neural network architecture, the method can be applied to different types of binocular cameras without completely retraining the whole neural network; only the binocular camera parameters need to be set on the original basis and the output-layer parameters of the trained neural network updated, which reduces development time and difficulty. Applied to pictures of size 1242 × 375 under different environmental conditions, the average operation time is 0.32 seconds, a great improvement over the 3.7 seconds of the traditional algorithm. This basically meets the requirements of commercial deployment, solves the practical problems of typical traditional binocular depth estimation schemes, and has great application prospects in related fields such as automatic driving and indoor positioning.
Scene one:
In this embodiment, a test vehicle deploying the present method is compared against a vehicle deploying the traditional methods. Python programming is used to implement simulation tests of the present method and the traditional methods, run on the KITTI data set, and simulation data are obtained from the experimental results; the traditional methods adopted in the experiment are the SGM and SDM algorithms. The test environment is the binocular depth test set of the public KITTI data set, with both the traditional methods and the present method implemented as Python simulations.
The performance of each algorithm is compared: the running speed of each algorithm is tested, the error value of each algorithm is calculated, and the estimated errors are averaged over the KITTI test set. The experiment uses the two traditional methods above and the present method; the test results are shown in Table 1 below.
Table 1: test results.
Algorithm | Speed | Average error pixel ratio | Average parallax
This patent | 0.32 s | 1.32% | 0.5 px
SGM | 3.7 s | 5.76% | 1.3 px
SDM | ~1 min | 10.95% | 2.0 px
Referring to the data in Table 1 above, it can be seen that the algorithm of the present embodiment is much superior to the traditional algorithms in both speed and accuracy.
Example 2
Referring to the illustration of fig. 6, this embodiment proposes a binocular depth vision estimation system based on deep learning; the method of the above embodiment can be implemented on this system, and the above method or system can be applied to depth vision estimation for a vehicle. For example, a binocular camera arranged on the vehicle body captures image information around the vehicle, a deep learning network algorithm is programmed into the vehicle-mounted host, and the captured images are input into the algorithm module, which calculates and estimates the distance between environmental objects and the vehicle and displays it on the screen of the vehicle-mounted host, reminding the driver to drive safely.
Specifically, the system comprises a camera module 100, a depth generation module 200, and a neural network model 300. The camera module 100 consists of cameras fixedly arranged as binocular pairs and is used for acquiring pictures from two different viewing angles. The depth generation module 200 generates a depth picture 201 from the acquired pictures, in which the image pixel values correspond to the relative distance to the camera. The neural network model 300 performs deep learning on the acquired pictures and saves the neural network parameters for generating the estimated depth distance map. The camera module 100 includes two sets of cameras, a color binocular camera and a gray-scale binocular camera: the color binocular camera acquires the pictures used for training as input of the neural network, while the gray-scale binocular camera is used for generating the depth map because of its better contrast and resolution.
It should be noted that the two corresponding stereo gray-scale cameras can generate the depth distance for each picture position: matching points are found between the binocular pictures, and the depth information is obtained from the pixel difference between the matching points as D = f × b / d, where f is the focal length of the camera, b is the baseline distance between the two binocular cameras, d is the pixel disparity between the two matching points, and D is the distance from the matched point to the camera. The depth information is stored as a picture of the same size as the original picture; for example, only the depth information picture corresponding to the left view needs to be stored, with its pixel values corresponding to relative distance. The information of a gray-scale picture consists of values from 0 to 255 (for example, white is 255 and black is 0), so if the effective distance of the camera is 1 to 200 meters, the picture pixel value can represent the depth information directly; for example, a value of 155 indicates that that point of the picture is 155 meters away from the camera.
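The depth relation D = f·b/d and the gray-value encoding described above can be sketched as follows (the function names and the clamping to the assumed 1–200 m effective range are illustrative, not part of the patent):

```python
def disparity_to_depth(f_px, baseline_m, disparity_px):
    """Depth D = f * b / d: focal length (pixels) times baseline (metres),
    divided by the pixel disparity d between the two matched points."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

def depth_to_gray(depth_m):
    """Encode depth as an 8-bit gray value, one metre per gray level,
    clamped to the 0-255 range of a gray-scale picture."""
    return max(0, min(255, int(round(depth_m))))
```

For instance, with an (assumed) focal length of 700 px and a 0.54 m baseline, a 2 px disparity corresponds to 189 m, stored as gray value 189 in the left-view depth picture.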
An untrained network does not know how to perform feature fusion matching; through continuous training, the network learns a suitable feature matching method to match related features. "Related features" means, for example: the left image shows an automobile and the right image shows the same automobile, so both feature maps carry information about that automobile, and the feature fusion layer matches this automobile information and fuses it into a 3D feature map.
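The patent does not spell out the exact fusion operation; a common construction for this kind of feature-fusion matching (assumed here, in the style of concatenation cost volumes) shifts the right feature map by each candidate disparity and stacks it with the left one, giving the 3D network a volume in which to match related features:

```python
import numpy as np

def build_cost_volume(left_feat, right_feat, max_disp):
    """left_feat, right_feat: (C, H, W) feature maps from the 2D CNN.
    Returns a (2C, max_disp, H, W) volume: for each candidate disparity d,
    the left features are paired with right features shifted by d pixels.
    Positions with no valid match stay zero."""
    C, H, W = left_feat.shape
    volume = np.zeros((2 * C, max_disp, H, W), dtype=left_feat.dtype)
    for d in range(max_disp):
        volume[:C, d, :, d:] = left_feat[:, :, d:]
        volume[C:, d, :, d:] = right_feat[:, :, : W - d]
    return volume
```

The subsequent 3D convolution layers can then score, for every pixel and every disparity, how well the stacked left/right features agree.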
The depth generation module 200 in this embodiment may be the calculation module of a depth gray-scale camera, for example a depth camera using RGBD-SLAM, whose detection range covers detection accuracy, detection angle, and frame rate. Such a module has low power consumption, but it obtains depth information purely through a software algorithm, so a processing chip with high computing performance is required to obtain the depth information of surrounding objects. This embodiment only uses such images for the acquisition of training data, and it is not hard to see that, because of the drawbacks of requiring high processing-chip performance and running slowly, one can also acquire images with ordinary cameras and then obtain depth information directly with a binocular imaging algorithm on a computer. After training of the neural network model 300 is completed, only an ordinary camera needs to be installed on the vehicle body; its images are fed to the neural network model 300 to obtain depth information, with low performance requirements and fast operation. The neural network model 300 is a programmed deep learning algorithm chip arranged in the vehicle-mounted host; for example, it can run on a GPU, a mainstream chip for deep learning. The whole neural network model is a huge computation matrix, and a GPU with thousands of computing cores can achieve 10 to 100 times the application throughput and supports the parallel computation that is vital to deep learning, so it is faster than a traditional processor and greatly accelerates the training process. The GPU is currently one of the most commonly used deep learning computing units.
It should be noted that the above embodiments are only for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that the technical solution of the present invention may be modified or substituted without departing from the spirit and scope of the technical solution of the present invention, which is intended to be covered in the scope of the claims of the present invention.
Claims (6)
1. A binocular depth vision estimation method based on deep learning, characterized by comprising the steps of:
acquiring training data, wherein a camera module (100) acquires two initial pictures (101) from different visual angles;
generating, by a depth generation module (200), a depth distance corresponding to each picture position and storing it as a depth picture (201) carrying depth information, the depth picture being the same size as the original picture with pixel values corresponding to relative distance;
training a neural network model (300), inputting the depth picture (201) into the neural network model (300) for training, and obtaining and saving the trained neural network parameters through iterative training;
performing depth estimation, wherein the camera module (100) collects actual pictures, which are input into the trained neural network model (300) for calculation to obtain an estimated depth distance map; the neural network model (300) comprises a convolutional neural network (301), a feature fusion network layer (302) and a 3D convolutional neural network layer (303), and the training comprises the steps of:
taking the pictures acquired by the camera module (100) as input;
obtaining feature maps of the two pictures through the convolutional neural network (301);
taking the output of the convolutional neural network (301) as the input of the feature fusion network layer (302) to extract fusion features, and putting them into the 3D convolutional neural network layer (303) to extract a depth map; the acquired depth picture (201) is put into the neural network model (300), picture features are extracted by the convolutional neural network (301) and input to the feature fusion network layer (302) for feature fusion, where the related features are matched and fused to generate a 3D feature map;
wherein the convolutional neural network (301) comprises the training steps of:
simultaneously putting the two depth pictures (201) into the residual network layer of the convolutional neural network (301) to extract picture feature maps;
putting the feature maps into a spatial pyramid pooling layer for feature enhancement, so as to obtain richer feature information;
and the feature fusion network layer (302) comprises the steps of:
taking the features extracted by the convolution layers of the convolutional neural network (301) as input;
inputting the rich feature information into the convolutional depth fusion layer;
generating, by the depth information fusion layer, an information layer with matched depth information.
2. The deep learning based binocular depth vision estimation method of claim 1, wherein the model training further comprises the steps of:
carrying out 3D convolution on the 3D feature map with a convolution kernel of size 3 × 3 to obtain fusion features of position and depth on the 3D feature map; the feature map is 1/4 of the original size, so the picture is up-sampled to the original size to obtain a depth picture consistent with the picture size; each pixel point in the picture corresponds to a group of depth signals of size D = 48, and normalization is performed on this group of signals with the function defined as follows,
where v is the corresponding depth signal and S is the normalized depth signal:
multiplying the obtained normalized signal by the corresponding signal weight to obtain the depth disparity information of the corresponding position.
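The normalization formula itself appears only as an image in the original and is not reproduced in the text. A plausible reading consistent with the weighted-sum step that follows (an assumption, not the patent's verbatim definition) is a softmax over the negated D = 48 depth signals followed by an expectation over depth indices, i.e. a soft argmin:

```python
import numpy as np

def soft_argmin(v):
    """v: array of D matching costs for one pixel.
    Returns (S, depth): S is the softmax-normalized signal over -v,
    depth is the expectation sum_i S[i] * i, a sub-pixel depth index."""
    s = np.exp(-v - np.max(-v))   # numerically stable softmax of -v
    s = s / s.sum()
    depth = float((s * np.arange(len(v))).sum())
    return s, depth
```

With costs that are smallest at one depth bin, the normalized signal peaks there and the weighted sum recovers that bin to sub-pixel precision.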
3. The deep learning based binocular depth vision estimation method of claim 2, further comprising a network update phase with the steps of:
comparing the obtained disparity map with the real disparity map, namely the depth map acquired by the depth camera, and obtaining the loss value of the network with a smooth L1 loss function, the loss function formula being as follows, where x is the data difference at the corresponding position:
back-propagating the loss value to update and iterate the parameters of the whole neural network;
repeating the above process until the network parameter updates become small and repeated further training yields no better test result, at which point the training is judged to be saturated and is finished.
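The smooth L1 formula also appears only as an image in the original; its standard elementwise form (assumed here with the usual threshold of 1) is quadratic near zero and linear beyond:

```python
def smooth_l1(x):
    """Smooth L1 loss on x, the difference between estimated and
    ground-truth disparity at one position:
    0.5 * x^2 for |x| < 1, |x| - 0.5 otherwise."""
    ax = abs(x)
    return 0.5 * x * x if ax < 1.0 else ax - 0.5
```

Averaging this value over all pixel positions gives the loss that is back-propagated in the update step above.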
4. The deep learning based binocular depth vision estimation method according to claim 3, wherein the 3D convolutional neural network layer (303) comprises the steps of:
taking the output of the feature fusion network layer (302) as input;
inputting it into an Hourglass module to extract richer deep high-dimensional information;
obtaining a depth map of the original image size through the up-sampling layer,
the size being D × W × H, meaning D maps of size W × H; assuming that the pixel value at position (Wj, Hk) of the ith map is Aijk, the output at the corresponding position of the depth map is:
Djk = ∑ Aijk × i (i = 0, 1, 2, ..., D-1).
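The output formula Djk = ∑ Aijk·i can be sketched directly as a contraction of the D normalized maps against the depth indices (a minimal sketch; the array layout is assumed to be (D, H, W)):

```python
import numpy as np

def depth_from_maps(A):
    """A: (D, H, W) array of normalized per-depth maps.
    Returns the (H, W) depth map with Djk = sum_i A[i, j, k] * i."""
    D = A.shape[0]
    # Contract the depth axis of A against the index vector [0, 1, ..., D-1].
    return np.tensordot(np.arange(D), A, axes=1)
```

If the maps form a one-hot distribution per pixel, the output is exactly the index of the active map; softer distributions give sub-index depth values.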
5. A system employing the deep learning based binocular depth vision estimation method of any one of claims 1 to 4, characterized in that the system comprises a camera module (100), a depth generation module (200) and a neural network model (300); the camera module (100) consists of cameras fixedly arranged as binocular pairs for acquiring pictures from two different viewing angles; the depth generation module (200) generates a depth picture (201) from the acquired pictures, in which the image pixel values correspond to the relative distance to the camera; and the neural network model (300) performs deep learning with the acquired pictures and saves the neural network parameters for generating the estimated depth distance map.
6. The deep learning based binocular depth vision estimation system of claim 5, wherein the camera module (100) comprises two sets of cameras, a color binocular camera and a gray-scale binocular camera; the color binocular camera is used for acquiring pictures for training as input of the neural network, and the gray-scale binocular camera is used for generating the depth map because of its better contrast and resolution.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910814513.9A CN110517306B (en) | 2019-08-30 | 2019-08-30 | Binocular depth vision estimation method and system based on deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910814513.9A CN110517306B (en) | 2019-08-30 | 2019-08-30 | Binocular depth vision estimation method and system based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110517306A CN110517306A (en) | 2019-11-29 |
CN110517306B true CN110517306B (en) | 2023-07-28 |
Family
ID=68629476
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910814513.9A Active CN110517306B (en) | 2019-08-30 | 2019-08-30 | Binocular depth vision estimation method and system based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110517306B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111179330A (en) * | 2019-12-27 | 2020-05-19 | 福建(泉州)哈工大工程技术研究院 | Binocular vision scene depth estimation method based on convolutional neural network |
CN111310916B (en) * | 2020-01-22 | 2022-10-25 | 浙江省北大信息技术高等研究院 | Depth system training method and system for distinguishing left and right eye pictures |
CN112446822B (en) * | 2021-01-29 | 2021-07-30 | 聚时科技(江苏)有限公司 | Method for generating contaminated container number picture |
CN112967332B (en) * | 2021-03-16 | 2023-06-16 | 清华大学 | Binocular depth estimation method and device based on gate control imaging and computer equipment |
CN113344997B (en) * | 2021-06-11 | 2022-07-26 | 方天圣华(北京)数字科技有限公司 | Method and system for rapidly acquiring high-definition foreground image only containing target object |
CN113763447B (en) * | 2021-08-24 | 2022-08-26 | 合肥的卢深视科技有限公司 | Method for completing depth map, electronic device and storage medium |
CN114035871B (en) * | 2021-10-28 | 2024-06-18 | 深圳华邦瀛光电有限公司 | Display method, system and computer equipment of 3D display screen based on artificial intelligence |
CN114789870A (en) * | 2022-05-20 | 2022-07-26 | 深圳市信成医疗科技有限公司 | Innovative modular drug storage management implementation mode |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110060290A (en) * | 2019-03-14 | 2019-07-26 | 中山大学 | A kind of binocular parallax calculation method based on 3D convolutional neural networks |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11024041B2 (en) * | 2018-12-10 | 2021-06-01 | Intel Corporation | Depth and motion estimations in machine learning environments |
- 2019-08-30 CN CN201910814513.9A patent/CN110517306B/en active Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110060290A (en) * | 2019-03-14 | 2019-07-26 | 中山大学 | A kind of binocular parallax calculation method based on 3D convolutional neural networks |
Non-Patent Citations (1)
Title |
---|
Paper reading notes on "Pyramid Stereo Matching Network"; Shenshi; CSDN; 2018-04-15; 1-6 *
Also Published As
Publication number | Publication date |
---|---|
CN110517306A (en) | 2019-11-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110517306B (en) | Binocular depth vision estimation method and system based on deep learning | |
CN110738697B (en) | Monocular depth estimation method based on deep learning | |
CN111524135B (en) | Method and system for detecting defects of tiny hardware fittings of power transmission line based on image enhancement | |
CN111488865B (en) | Image optimization method and device, computer storage medium and electronic equipment | |
CN112184577B (en) | Single image defogging method based on multiscale self-attention generation countermeasure network | |
CN114565655B (en) | Depth estimation method and device based on pyramid segmentation attention | |
CN114049434B (en) | 3D modeling method and system based on full convolution neural network | |
CN105761234A (en) | Structure sparse representation-based remote sensing image fusion method | |
CN111709307B (en) | Resolution enhancement-based remote sensing image small target detection method | |
AU2021103300A4 (en) | Unsupervised Monocular Depth Estimation Method Based On Multi- Scale Unification | |
CN111325782A (en) | Unsupervised monocular view depth estimation method based on multi-scale unification | |
CN111274980A (en) | Small-size traffic sign identification method based on YOLOV3 and asymmetric convolution | |
CN113762267B (en) | Semantic association-based multi-scale binocular stereo matching method and device | |
CN112927348B (en) | High-resolution human body three-dimensional reconstruction method based on multi-viewpoint RGBD camera | |
CN114782298B (en) | Infrared and visible light image fusion method with regional attention | |
CN116091314B (en) | Infrared image stitching method based on multi-scale depth homography | |
CN115330935A (en) | Three-dimensional reconstruction method and system based on deep learning | |
CN114627299B (en) | Method for detecting and dividing camouflage target by simulating human visual system | |
CN112991422A (en) | Stereo matching method and system based on void space pyramid pooling | |
CN117495718A (en) | Multi-scale self-adaptive remote sensing image defogging method | |
CN111369435A (en) | Color image depth up-sampling method and system based on self-adaptive stable model | |
CN116091793A (en) | Light field significance detection method based on optical flow fusion | |
CN116883770A (en) | Training method and device of depth estimation model, electronic equipment and storage medium | |
CN117670965B (en) | Unsupervised monocular depth estimation method and system suitable for infrared image | |
CN117523024B (en) | Binocular image generation method and system based on potential diffusion model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
CP02 | Change in the address of a patent holder | ||
CP02 | Change in the address of a patent holder |
Address after: 11th Floor, Building A1, Huizhi Science and Technology Park, No. 8 Hengtai Road, Nanjing Economic and Technological Development Zone, Jiangsu Province, 211000 Patentee after: DILU TECHNOLOGY Co.,Ltd. Address before: Building C4, No.55 Liyuan South Road, moling street, Jiangning District, Nanjing City, Jiangsu Province Patentee before: DILU TECHNOLOGY Co.,Ltd. |