CN108495113A - control method and device for binocular vision system - Google Patents
- Publication number
- CN108495113A (application number CN201810259316.0A)
- Authority
- CN
- China
- Prior art keywords
- light
- sensor
- light source
- image
- target object
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Length Measuring Devices By Optical Means (AREA)
Abstract
The embodiments of the present application disclose a control method and device for a binocular vision system. One specific implementation of the method includes: controlling a light source to emit structured light toward a target object so as to form a predetermined pattern on the surface of the target object; obtaining a first image of the target object acquired by a first sensor and a second image of the target object acquired by a second sensor; extracting feature points from the first image and the second image; and, in response to determining that the number of feature points is greater than or equal to a quantity threshold, performing stereo matching on the first image and the second image based on the feature points to generate depth information of the target object. This embodiment improves the accuracy of the depth image.
Description
Technical field
The present application relates to the field of computer technology, and in particular to a control method and device for a binocular vision system.
Background technology
Because a depth camera can obtain the depth information of objects in a scene, it has been widely used in fields such as security authentication, obstacle avoidance for unmanned aerial vehicles, and three-dimensional reconstruction. Binocular stereo vision is an important technique by which a depth camera obtains the depth information of objects in a scene.
In a binocular vision camera, two cameras photograph the same scene from different angles (i.e., different viewpoints), yielding two digital images of the scene that contain disparity information. Stereo matching determines the coordinate positions of the same scene point in the two images; given the relative positions of the two cameras, the three-dimensional spatial coordinates of that point can then be computed by triangulation.
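For rectified cameras, the triangulation step above reduces to a one-line relation: depth is inversely proportional to disparity. The following is a minimal sketch under the usual assumptions of rectified images, a focal length expressed in pixels, and a known baseline; the numeric values are illustrative and not taken from the patent.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a scene point from the disparity between the two views.

    focal_px: focal length of the (rectified) cameras, in pixels.
    baseline_m: distance between the two camera centers, in meters.
    disparity_px: horizontal shift of the matched point between the images.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_px * baseline_m / disparity_px


# Example: 700 px focal length, 10 cm baseline, 35 px disparity -> 2 m depth.
z = depth_from_disparity(700.0, 0.1, 35.0)
```

A larger baseline or focal length improves depth resolution at a given disparity, which is one reason the sensor spacing in a binocular rig matters.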
Summary of the invention
The embodiments of the present application propose a control method and device for a binocular vision system.
In a first aspect, an embodiment of the present application provides a control method for a binocular vision system, the binocular vision system including a light source, a first sensor, and a second sensor. The method includes: controlling the light source to emit structured light toward a target object so as to form a predetermined pattern on the surface of the target object; obtaining a first image of the target object acquired by the first sensor and a second image of the target object acquired by the second sensor; extracting feature points from the first image and the second image; and, in response to determining that the number of feature points is greater than or equal to a quantity threshold, performing stereo matching on the first image and the second image based on the feature points to generate depth information of the target object.
In some embodiments, the method further includes: in response to determining that the number of feature points is less than the quantity threshold, obtaining the intensities of light sensed by the first sensor and/or the second sensor both while the light source emits structured light and while it does not; in response to determining that the ratio of the intensity of the light sensed while the light source emits structured light to the intensity of the light sensed while it does not is greater than or equal to a preset ratio, determining the difference between the moment at which the light source emits the structured light and the moment at which the first sensor and/or the second sensor senses the reflected structured light; and generating depth information of the target object based on the difference.
In some embodiments, the method further includes: in response to determining that the number of feature points is less than the quantity threshold, obtaining the intensities of light sensed by the first sensor and/or the second sensor both while the light source emits structured light and while it does not; in response to determining that the ratio of the intensity of the light sensed while the light source emits structured light to the intensity of the light sensed while it does not is greater than or equal to a preset ratio, determining the difference between the phase of the structured light emitted by the light source and the phase of the reflected structured light sensed by the first sensor and/or the second sensor; and generating depth information of the target object based on the difference.
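The phase-based variant recovers depth from the phase shift between the emitted and the sensed signal. The patent does not spell out the formula; the sketch below uses the standard continuous-wave time-of-flight relation under the assumption of a sinusoidally modulated source at a known modulation frequency (the frequency value in the example is an illustrative assumption).

```python
import math

C_LIGHT = 299_792_458.0  # propagation speed of light, m/s (in vacuum; ~same in air)

def depth_from_phase(phase_shift_rad, modulation_hz):
    """Depth from the phase shift of a modulated light signal.

    The round-trip distance is c * phase_shift / (2 * pi * f);
    the one-way depth is half of that, i.e. c * phase_shift / (4 * pi * f).
    Only unambiguous for depths within half the modulation wavelength.
    """
    return C_LIGHT * phase_shift_rad / (4.0 * math.pi * modulation_hz)


# Example: a pi-radian shift at a 10 MHz modulation frequency -> ~7.49 m.
d = depth_from_phase(math.pi, 10e6)
```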
In some embodiments, the first sensor and the second sensor are arranged symmetrically on either side of the light source.

In some embodiments, the predetermined pattern is a speckle pattern with a preset resolution.
In a second aspect, an embodiment of the present application provides a control device for a binocular vision system, the binocular vision system including a light source, a first sensor, and a second sensor. The device includes: a light source control unit configured to control the light source to emit structured light toward a target object so as to form a predetermined pattern on the surface of the target object; an image acquisition unit configured to obtain a first image of the target object acquired by the first sensor and a second image of the target object acquired by the second sensor; a feature point extraction unit configured to extract feature points from the first image and the second image; and a first depth information generation unit configured to, in response to determining that the number of feature points is greater than or equal to a quantity threshold, perform stereo matching on the first image and the second image based on the feature points to generate depth information of the target object.
In some embodiments, the device further includes: a light intensity acquisition unit configured to, in response to determining that the number of feature points is less than the quantity threshold, obtain the intensities of light sensed by the first sensor and/or the second sensor both while the light source emits structured light and while it does not; a moment determination unit configured to, in response to determining that the ratio of the intensity of the light sensed while the light source emits structured light to the intensity of the light sensed while it does not is greater than or equal to a preset ratio, determine the difference between the moment at which the light source emits the structured light and the moment at which the first sensor and/or the second sensor senses the reflected structured light; and a second depth information generation unit configured to generate depth information of the target object based on the difference.
In some embodiments, the device further includes: a light intensity acquisition unit configured to, in response to determining that the number of feature points is less than the quantity threshold, obtain the intensities of light sensed by the first sensor and/or the second sensor both while the light source emits structured light and while it does not; a phase determination unit configured to, in response to determining that the ratio of the intensity of the light sensed while the light source emits structured light to the intensity of the light sensed while it does not is greater than or equal to a preset ratio, determine the difference between the phase of the structured light emitted by the light source and the phase of the reflected structured light sensed by the first sensor and/or the second sensor; and a second depth information generation unit configured to generate depth information of the target object based on the difference.
In some embodiments, the first sensor and the second sensor are arranged symmetrically on either side of the light source.

In some embodiments, the predetermined pattern is a speckle pattern with a preset resolution.
In a third aspect, an embodiment of the present application provides an electronic device, including: a controller including one or more processors; a light source; a first sensor; a second sensor; and a storage device for storing one or more programs. When the one or more programs are executed by the controller, the controller implements the method as described in any implementation of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable medium on which a computer program is stored; when the computer program is executed by a processor, it implements the method as described in any implementation of the first aspect.
The control method and device for a binocular vision system provided by the embodiments of the present application control the light source to project a structured light pattern onto the target object, then obtain a first image of the target object acquired by the first sensor and a second image of the target object acquired by the second sensor, then extract feature points from the first image and the second image, and finally, in response to the number of feature points exceeding the quantity threshold, perform stereo matching on the first image and the second image and generate a depth image, thereby improving the accuracy of the depth image.
Description of the drawings
Other features, objects, and advantages of the present application will become more apparent upon reading the following detailed description of non-restrictive embodiments with reference to the attached drawings:
Fig. 1 is an exemplary system architecture diagram to which the present application may be applied;

Fig. 2 is a flowchart of one embodiment of the control method for a binocular vision system according to the present application;

Fig. 3 is a schematic diagram of one implementation of the embodiment shown in Fig. 2;

Fig. 4 is a schematic diagram of an application scenario of the control method for a binocular vision system according to the present application;

Fig. 5 is a flowchart of another embodiment of the control method for a binocular vision system according to the present application;

Fig. 6 is a flowchart of yet another embodiment of the control method for a binocular vision system according to the present application;

Fig. 7 is a structural schematic diagram of one embodiment of the control device for a binocular vision system according to the present application;

Fig. 8 is a structural schematic diagram of a computer system suitable for implementing the electronic device of the embodiments of the present application.
Detailed description of the embodiments
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are used only to explain the related invention, not to limit it. It should also be noted that, for ease of description, only the parts related to the invention are shown in the drawings.

It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments may be combined with one another. The present application is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which the present application may be applied.
As shown in Fig. 1, the system architecture 100 may include a binocular vision system 101, a network 102, and a controller 103. The network 102 is the medium that provides a communication link between the binocular vision system 101 and the controller 103, and may include various connection types, such as wired links, wireless communication links, or fiber-optic cables.

The binocular vision system 101 may interact with the controller 103 through the network 102 to receive or send messages. The binocular vision system 101 may be equipped with a light source 104, a first sensor 105, and a second sensor 106, where the light source 104 is used to emit structured light toward the target object, and the first sensor 105 and the second sensor 106 are used to sense the echo light reflected by the target object and to acquire images of the target object.
The controller 103 may or may not be mounted on the binocular vision system 101. The controller 103 performs various kinds of control over the binocular vision system 101. For example, the controller 103 may control the light source 104 to emit structured light toward the target object; it may also control the first sensor 105 and/or the second sensor 106 to sense the echo light reflected by the target object, or control the first sensor 105 and the second sensor 106 to acquire images of the target object.
It should be noted that the control method for a binocular vision system provided by the embodiments of the present application is generally executed by the controller 103; correspondingly, the control device for a binocular vision system is generally disposed in the controller 103.
It should be noted that the controller may be hardware or software. When the controller is hardware, it may be implemented as a distributed cluster of multiple devices or as a single device. When the controller is software, it may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module; no specific limitation is made here.
It should be understood that the numbers of binocular vision systems, networks, controllers, light sources, first sensors, and second sensors in Fig. 1 are merely illustrative. There may be any number of each, according to implementation needs.
With continued reference to Fig. 2, a flow 200 of one embodiment of the control method for a binocular vision system according to the present application is shown. The flow 200 of the control method for a binocular vision system includes the following steps:
Step 201: control the light source to emit structured light toward the target object, so as to form a predetermined pattern on the surface of the target object.

In the present embodiment, the binocular vision system (such as the binocular vision system of Fig. 1) may include a light source, a first sensor, and a second sensor. The executing agent of the control method for a binocular vision system (such as the controller shown in Fig. 1) may control the light source so that it emits structured light toward the target object, forming a predetermined pattern (for example, a dot pattern, a line pattern, or an area pattern) on the surface of the target object.

Here, controlling the light source may be realized by the above executing agent sending a light emission instruction (for example, an instruction to "emit structured light toward the target object") to the binocular vision system, or by other suitable means, such as placing a shading device between the light source and the target object to control whether structured light reaches the target object; the present application makes no specific limitation on this.
In some optional implementations of the present embodiment, the light source may be a laser light source. Because a laser light source has good directionality, it can form a clear structured light pattern on the surface of the target object. The laser light source may be a monochromatic light source, for example, a red, green, or blue laser light source.
In some optional implementations of the present embodiment, the above predetermined pattern is a speckle pattern with a preset resolution, where the preset resolution may be, for example, QVGA (i.e., 320x240) or QQVGA (i.e., 160x120). Fig. 3 shows a schematic diagram of one implementation of the embodiment shown in Fig. 2. As can be seen from Fig. 3, by projecting a predetermined pattern 302 with a preset resolution (such as 320x240) onto the surface of the target object 301, more detailed depth information can be obtained, so that a high-resolution depth image can be produced, improving the precision of the depth image.
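The patent does not specify how the speckle pattern itself is produced. One common approach, sketched here under that assumption, is a pseudo-random binary dot pattern at the preset resolution (QVGA by default): the pseudo-random layout gives every local window a distinctive signature, which is what makes low-texture surfaces matchable.

```python
import random

def make_speckle_pattern(width=320, height=240, density=0.1, seed=42):
    """Binary speckle pattern at a preset resolution (QVGA by default).

    Each cell becomes a bright dot with probability `density`. A fixed
    seed makes the pattern reproducible, as a projected pattern would be.
    """
    rng = random.Random(seed)
    return [[1 if rng.random() < density else 0 for _ in range(width)]
            for _ in range(height)]


pattern = make_speckle_pattern()  # 240 rows x 320 columns of 0/1 dots
```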
Step 202: obtain the first image of the target object acquired by the first sensor and the second image of the target object acquired by the second sensor.

In the present embodiment, the executing agent of the control method for a binocular vision system (such as the controller shown in Fig. 1) may control the first sensor and the second sensor to acquire images of the target object, and then obtain the first image acquired by the first sensor and the second image acquired by the second sensor. The first sensor and the second sensor may be image sensors (also called photosensitive elements) and may include, for example, a CCD (Charge Coupled Device) or a CMOS (Complementary Metal-Oxide Semiconductor) sensor.

Here, the first image and the second image may be grayscale images or RGB color images; the present application makes no specific limitation on this.
Step 203: extract feature points from the first image and the second image.

In the present embodiment, the executing agent of the control method for a binocular vision system (such as the controller shown in Fig. 1) may extract feature points from the above first image and second image. Feature points, also known as interest points or key points, are prominent points in an image that carry representative meaning; through these points, image recognition, stereo matching, 3D reconstruction, and the like can be carried out. Feature points may be extracted in many ways, for example, by using the SURF (Speeded Up Robust Features, a robust local feature point detection and description algorithm) operator to extract the feature points in the above first image and second image.
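SURF itself is far too involved for a short sketch, so the toy detector below illustrates only the basic idea of picking prominent points: a pixel counts as a feature point when its intensity exceeds a threshold and is a strict local maximum over its 8-neighborhood. This is a hypothetical stand-in for illustration, not the SURF operator and not the patent's method.

```python
def extract_feature_points(img, threshold=50):
    """Toy interest-point detector over a 2D list of intensities.

    Returns (x, y) positions where the pixel exceeds `threshold` and is
    strictly brighter than all 8 neighbors. Border pixels are skipped.
    """
    points = []
    h, w = len(img), len(img[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            v = img[y][x]
            if v <= threshold:
                continue
            neighbors = [img[y + dy][x + dx]
                         for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                         if (dy, dx) != (0, 0)]
            if all(v > n for n in neighbors):
                points.append((x, y))
    return points
```

In the method's control flow, the length of the returned list would then be compared against the quantity threshold to decide between stereo matching and the fallback depth measurement.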
Step 204: in response to determining that the number of feature points is greater than or equal to the quantity threshold, perform stereo matching on the first image and the second image based on the feature points, and generate the depth information of the target object.

In the present embodiment, the executing agent of the control method for a binocular vision system (such as the controller shown in Fig. 1) may first determine whether the number of feature points extracted in step 203 is less than the quantity threshold. The quantity threshold is a feature point count preset according to the needs of the practical application scenario, and is used to characterize the reliability of performing stereo matching on the first image and the second image. For example, if the number of extracted feature points is greater than or equal to the above quantity threshold, the reliability of stereo matching between the first image and the second image is high; otherwise, the first image and the second image cannot be stereo matched, or the stereo matching result is poor.

In response to determining that the number of extracted feature points is greater than or equal to the quantity threshold, the above executing agent may perform stereo matching on the first image and the second image based on the above feature points, and then generate the depth information of the target object, for example, a depth image.
Stereo matching refers to matching corresponding feature points across two viewpoints (for example, the first image and the second image) or multiple viewpoints. That is, for the same point on the target object, as long as the feature points onto which that point maps in the first image and the second image are found, the depth information of the point can be estimated by triangulation; finding corresponding feature points is exactly the process of stereo matching. Because stereo matching recovers three-dimensional information from two-dimensional images, it is inherently uncertain; therefore, to obtain correct matching results, various constraints are needed to narrow the matching search and improve matching accuracy. For example, according to the optimization theory used, stereo matching algorithms can be divided into local stereo matching algorithms and global stereo matching algorithms. Those skilled in the art may select a suitable stereo matching algorithm according to the needs of the practical application scenario.
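The local-matching idea can be sketched in a few lines: for a pixel on one rectified scanline, slide a small window along the other image's scanline and pick the disparity with the lowest sum of absolute differences (SAD). This is a minimal illustrative sketch of a local method only; global methods instead optimize a cost over the whole image, and neither is specified by the patent.

```python
def match_scanline(left_row, right_row, x, window=2, max_disp=16):
    """Disparity of pixel `x` on one rectified scanline via SAD matching.

    Compares a (2*window+1)-wide patch around `x` in the left row against
    patches shifted left by each candidate disparity in the right row.
    """
    def patch(row, cx):
        return row[cx - window: cx + window + 1]

    ref = patch(left_row, x)
    best_d, best_cost = 0, float("inf")
    for d in range(0, max_disp + 1):
        cx = x - d
        if cx - window < 0:
            break  # candidate patch would fall off the image
        cost = sum(abs(a - b) for a, b in zip(ref, patch(right_row, cx)))
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```

The epipolar constraint of rectified images is what lets the search run along a single scanline rather than over the whole image, which is one example of the constraints mentioned above.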
In the present embodiment, forming a structured light pattern on the surface of the target object increases the number of feature points that can be extracted from the first image and the second image, so that the binocular vision system can be applied not only to target objects with obvious features (for example, high-texture target objects) but also to target objects without obvious features (for example, low-texture target objects); moreover, when a predetermined pattern with a preset resolution (such as 320x240) is projected, a high-resolution depth image can be obtained.
With continued reference to Fig. 4, Fig. 4 is a schematic diagram of an application scenario 400 of the control method for a binocular vision system according to the present embodiment. In the application scenario 400 of Fig. 4, first, a controller (not shown) controls the light source 402 to emit structured light toward the target object 401 (for example, an automobile), forming a structured light pattern on the surface of the target object 401. Afterwards, the controller controls the first sensor 403 and the second sensor 404 to acquire images of the target object 401, obtaining the first image acquired by the first sensor 403 and the second image acquired by the second sensor 404. Then, the controller extracts feature points from the first image and the second image. Finally, the controller determines that the number of extracted feature points exceeds the quantity threshold (for example, 8), performs stereo matching on the first image and the second image based on the extracted feature points, and then generates the depth image.
In some optional implementations of the present embodiment, the first sensor and the second sensor are arranged symmetrically on either side of the light source. As shown in Fig. 4, the first sensor 403, the light source 402, and the second sensor 404 are arranged on the same straight line; the distance between the first sensor 403 and the light source 402 is w1, the distance between the second sensor 404 and the light source 402 is w2, and w1 = w2.
The control method for a binocular vision system provided by the above embodiment of the present application controls the light source to project a structured light pattern onto the target object, then obtains the first image of the target object acquired by the first sensor and the second image of the target object acquired by the second sensor, then extracts feature points from the first image and the second image, and finally, in response to the number of feature points exceeding the quantity threshold, performs stereo matching on the first image and the second image and generates a depth image, thereby improving the accuracy of the depth image.
With further reference to Fig. 5, a flow 500 of another embodiment of the control method for a binocular vision system is illustrated. The flow 500 of the control method for a binocular vision system includes the following steps:
Step 501: control the light source to emit structured light toward the target object, so as to form a predetermined pattern on the surface of the target object.

In the present embodiment, the binocular vision system (such as the binocular vision system of Fig. 1) may include a light source, a first sensor, and a second sensor. The executing agent of the control method for a binocular vision system (such as the controller shown in Fig. 1) may control the light source so that it emits structured light toward the target object, forming a predetermined pattern (for example, a dot pattern, a line pattern, or an area pattern) on the surface of the target object.
Step 502: obtain the first image of the target object acquired by the first sensor and the second image of the target object acquired by the second sensor.

In the present embodiment, the executing agent of the control method for a binocular vision system (such as the controller shown in Fig. 1) may control the first sensor and the second sensor to acquire images of the target object, and then obtain the first image acquired by the first sensor and the second image acquired by the second sensor. The first sensor and the second sensor may be any sensors capable of acquiring images.
Step 503: extract feature points from the first image and the second image.

In the present embodiment, the executing agent of the control method for a binocular vision system (such as the controller shown in Fig. 1) may extract feature points from the above first image and second image. Feature points, also known as interest points or key points, are prominent points in an image that carry representative meaning; through these points, image recognition, stereo matching, 3D reconstruction, and the like can be carried out. Feature points may be extracted in many ways, for example, by using the SURF (Speeded Up Robust Features, a robust local feature point detection and description algorithm) operator to extract the feature points in the above first image and second image.
Step 504: in response to determining that the number of feature points is less than the quantity threshold, obtain the intensities of light sensed by the first sensor and/or the second sensor while the light source emits structured light and while it does not.

In the present embodiment, the executing agent of the control method for a binocular vision system (such as the controller shown in Fig. 1) may first determine whether the number of feature points extracted in step 503 is less than the quantity threshold. The quantity threshold is a feature point count preset according to the needs of the practical application scenario, and is used to characterize the reliability of performing stereo matching on the first image and the second image. For example, if the number of extracted feature points is greater than or equal to the above quantity threshold, the reliability of stereo matching between the first image and the second image is high; otherwise, the first image and the second image cannot be stereo matched, or the stereo matching result is poor.

In response to determining that the number of extracted feature points is less than the quantity threshold (i.e., the extracted feature points are insufficient for performing stereo matching on the first image and the second image), the above executing agent controls the first sensor and/or the second sensor to sense the intensity of the echo light both while the light source emits structured light and while it does not, and obtains the intensities of the echo light sensed by the first sensor and/or the second sensor. At least one of the first sensor and the second sensor may be a sensor capable of sensing echo light intensity.
Step 505: in response to determining that the ratio of the intensity of the light sensed while the light source emits structured light to the intensity of the light sensed while it does not is greater than or equal to the preset ratio, determine the difference between the moment at which the light source emits the structured light and the moment at which the first sensor and/or the second sensor senses the reflected structured light.
In the present embodiment, the executing agent of the control method for a binocular vision system (such as the controller shown in Fig. 1) may first determine whether the ratio of the intensity of the echo light sensed while the light source emits structured light (obtained in step 504) to the intensity of the echo light sensed while it does not is less than the preset ratio. The preset ratio is an intensity ratio preset according to the needs of the practical application scenario, and is used to characterize the reliability of the echo light sensed by the first sensor and/or the second sensor. For example, if the determined ratio exceeds the preset ratio (such as 2) — say, the intensity of the echo light sensed while the light source emits structured light is 10 lumens and the intensity sensed while it does not is 1 lumen, so the determined ratio is 10 — then the first sensor and/or the second sensor can accurately sense the echo light; otherwise, the first sensor and/or the second sensor cannot sense the echo light, or the sensed echo light signal is poor.
In response to determining that the determined ratio is greater than or equal to the above preset ratio, the above executing agent may obtain the moment at which the light source emits the structured light (hereinafter called the first moment) and the moment at which the first sensor and/or the second sensor senses the reflected structured light (hereinafter called the second moment), and determine the difference between the first moment and the second moment. This difference characterizes the flight time of a photon, i.e., the time a photon takes to fly from the light source to the surface of the target object and then from the surface of the target object back to the first sensor and/or the second sensor.
Step 506: generate the depth information of the target object based on the difference.

In the present embodiment, the executing agent of the control method for a binocular vision system (such as the controller shown in Fig. 1) may determine the depth information of the target object based on the above difference. For example, half the product of the photon's propagation speed and the above difference may be taken as the distance between the target object and the binocular vision system.
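The "half the product" computation of step 506 can be written directly; this sketch assumes timestamps in seconds and uses the speed of light in vacuum as the propagation speed (a close approximation in air).

```python
C_LIGHT = 299_792_458.0  # photon propagation speed, m/s

def depth_from_time_of_flight(t_emit, t_sense):
    """Distance to the target from the photon's round-trip flight time.

    t_emit: moment the light source emits the structured light (s).
    t_sense: moment the sensor senses the reflected structured light (s).
    The distance is half the product of the speed and the time difference,
    since the photon covers the source-to-target distance twice.
    """
    dt = t_sense - t_emit
    if dt < 0:
        raise ValueError("sensing time must not precede emission time")
    return C_LIGHT * dt / 2.0
```

At these speeds a 1 m target corresponds to a round trip of only about 6.7 nanoseconds, which is why time-of-flight sensing demands very precise timing hardware.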
As can be seen from Fig. 5, compared with the embodiment corresponding to Fig. 2, the flow 500 of the control method for a binocular vision system in the present embodiment highlights the step of generating depth information from the structured light and its reflection. Thus, the scheme described in the present embodiment can generate the depth information of the target object even when the first image and the second image cannot be stereo matched.
Referring further to Fig. 6, a flow 600 of yet another embodiment of the control method for the binocular vision system is illustrated. The flow 600 of the control method for the binocular vision system includes the following steps:
Step 601: control the light source to emit structured light toward the target object, so as to form a predetermined pattern on the surface of the target object.
In this embodiment, the binocular vision system (e.g., the binocular vision system of Fig. 1) may include a light source, a first sensor, and a second sensor. The execution body of the control method for the binocular vision system (e.g., the controller shown in Fig. 1) may control the light source so that it emits structured light toward the target object, forming a predetermined pattern (e.g., a dot pattern, a line pattern, an area pattern, etc.) on the surface of the target object.
Step 602: obtain a first image of the target object acquired by the first sensor and a second image of the target object acquired by the second sensor.
In this embodiment, the execution body of the control method for the binocular vision system (e.g., the controller shown in Fig. 1) may control the first sensor and the second sensor to acquire images of the target object, and then obtain the first image acquired by the first sensor and the second image acquired by the second sensor. Here, the first sensor and the second sensor may be sensors capable of acquiring images.
Step 603: extract feature points from the first image and the second image.
In this embodiment, the execution body of the control method for the binocular vision system (e.g., the controller shown in Fig. 1) may extract feature points from the above first image and second image. Feature points, also known as interest points or key points, are points that stand out in an image and carry representative meaning; through these points, image recognition, stereo matching, 3D reconstruction, and the like can be performed. Feature points can be extracted in many ways, for example, by using the SURF (Speeded Up Robust Features, a robust local feature point detection and description algorithm) operator to extract the feature points in the above first image and second image.
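The two decisions around these steps, detect feature points and then compare their count against the quantity threshold, can be sketched as follows. The toy detector below (strict local maxima above a brightness level in a NumPy array) merely stands in for a real operator such as SURF; the brightness level and the threshold of 4 are arbitrary illustrative values, not from the patent:

```python
import numpy as np

def toy_feature_points(image, brightness=200):
    """Stand-in for a SURF-style detector: pixels that are strict local maxima
    above a brightness threshold. Returns a list of (row, col) points."""
    pts = []
    for r in range(1, image.shape[0] - 1):
        for c in range(1, image.shape[1] - 1):
            patch = image[r - 1:r + 2, c - 1:c + 2]
            if (image[r, c] >= brightness
                    and image[r, c] == patch.max()
                    and (patch == patch.max()).sum() == 1):
                pts.append((r, c))
    return pts

def enough_for_stereo(points_left, points_right, quantity_threshold=4):
    """Step 604's test: are there enough feature points to trust stereo matching?"""
    return min(len(points_left), len(points_right)) >= quantity_threshold

# Four bright speckle-like dots in an otherwise dark 8x8 image.
img = np.zeros((8, 8), dtype=np.uint8)
for r, c in [(2, 2), (2, 5), (5, 2), (5, 5)]:
    img[r, c] = 255
pts = toy_feature_points(img)
print(len(pts))                     # → 4
print(enough_for_stereo(pts, pts))  # → True
```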
Step 604: in response to determining that the quantity of the feature points is less than the quantity threshold, obtain the intensities of the light sensed by the first sensor and/or the second sensor under the condition that the light source emits structured light and under the condition that the light source does not emit structured light, respectively.
In this embodiment, the execution body of the control method for the binocular vision system (e.g., the controller shown in Fig. 1) may first determine whether the quantity of the feature points extracted in step 603 is less than the quantity threshold. The quantity threshold is a feature-point quantity value preset according to the needs of the actual application scenario, used to characterize the reliability of stereo matching between the first image and the second image. For example, if the quantity of the extracted feature points is greater than or equal to the above quantity threshold, stereo matching between the first image and the second image is highly reliable; otherwise, the first image and the second image cannot be stereo-matched, or the matching result is poor.
In response to determining that the quantity of the extracted feature points is less than the quantity threshold (i.e., the extracted feature points are insufficient for stereo matching of the first image and the second image), the above execution body controls the first sensor and/or the second sensor to sense the intensity of the echo light under the condition that the light source emits structured light and under the condition that the light source does not emit structured light, respectively, and obtains the intensities of the echo light sensed by the first sensor and/or the second sensor. Here, at least one of the first sensor and the second sensor may be a sensor capable of sensing the intensity of the echo light.
Step 605: in response to determining that the ratio of the intensity of the light sensed under the condition that the light source emits structured light to the intensity of the light sensed under the condition that the light source does not emit structured light is greater than or equal to the preset ratio, determine the difference between the phase of the structured light emitted by the light source and the phase of the reflected light of the structured light sensed by the first sensor and/or the second sensor after the light source emits the structured light.
In this embodiment, the execution body of the control method for the binocular vision system (e.g., the controller shown in Fig. 1) may first determine whether the ratio of the intensity of the echo light, obtained in step 604, sensed under the condition that the light source emits structured light to the intensity of the echo light sensed under the condition that the light source does not emit structured light is less than the preset ratio. The preset ratio is an intensity ratio preset according to the needs of the actual application scenario, used to characterize the reliability of the echo light sensed by the first sensor and/or the second sensor. For example, if the determined ratio is greater than the preset ratio (e.g., 2), the first sensor and/or the second sensor can accurately sense the echo light; otherwise, the first sensor and/or the second sensor cannot sense the echo light, or the sensed echo signal is poor.
In response to determining that the determined ratio is greater than or equal to the above preset ratio, the above execution body may obtain the phase of the structured light emitted by the light source (hereinafter, the first phase) and the phase of the reflected light of the above structured light sensed by the first sensor and/or the second sensor after the light source emits it (hereinafter, the second phase), and determine the difference between the first phase and the second phase. This difference characterizes the phase shift that occurs between the structured light emitted by the light source and its reflected light during propagation.
Step 606: based on the difference, generate the depth information of the target object.
In this embodiment, the execution body of the control method for the binocular vision system (e.g., the controller shown in Fig. 1) may determine the depth information of the target object based on the above difference.
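For an amplitude-modulated source, the phase difference maps to distance through the modulation frequency, since the round trip delays the signal by dphi / (2*pi*f). A minimal sketch of this relation; the 10 MHz modulation frequency is an assumed example, as the text does not specify one:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def depth_from_phase(phase_emitted, phase_sensed, modulation_hz):
    """Distance from the phase shift of modulated structured light.

    The round-trip delay is dt = dphi / (2*pi*f); the one-way distance is
    half the round-trip path, giving d = C * dphi / (4*pi*f).
    """
    dphi = (phase_sensed - phase_emitted) % (2.0 * math.pi)
    return C * dphi / (4.0 * math.pi * modulation_hz)

# A half-cycle shift (pi radians) at an assumed 10 MHz modulation: ~7.49 m.
print(round(depth_from_phase(0.0, math.pi, 10e6), 2))  # → 7.49
```

Note the modulo wrap: a phase measurement is only unambiguous within one modulation period, which is why real time-of-flight sensors often combine several frequencies.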
As can be seen from Fig. 6, compared with the embodiment corresponding to Fig. 2, the flow 600 of the control method for the binocular vision system in this embodiment highlights the step of generating depth information using the structured light and its reflected light. The scheme described in this embodiment can thus generate the depth information of the target object even when the first image and the second image cannot be stereo-matched.
Referring further to Fig. 7, as an implementation of the methods shown in the above figures, the present application provides an embodiment of a control device for a binocular vision system. This device embodiment corresponds to the method embodiment shown in Fig. 2, and the device may specifically be applied in, for example, a controller.
As shown in Fig. 7, the control device 700 for a binocular vision system of this embodiment includes: a light source control unit 701, an image acquisition unit 702, a feature point extraction unit 703, and a first depth information generation unit 704. The light source control unit 701 is configured to control the light source to emit structured light toward the target object, so as to form a predetermined pattern on the surface of the target object; the image acquisition unit 702 is configured to obtain a first image of the target object acquired by the first sensor and a second image of the target object acquired by the second sensor; the feature point extraction unit 703 is configured to extract feature points from the first image and the second image; and the first depth information generation unit 704 is configured to, in response to determining that the quantity of the feature points is greater than or equal to the quantity threshold, perform stereo matching on the first image and the second image based on the feature points to generate the depth information of the target object.
In this embodiment, the binocular vision system (e.g., the binocular vision system of Fig. 1) may include a light source, a first sensor, and a second sensor. The light source control unit 701 of the control device 700 for the binocular vision system may control the light source so that it emits structured light toward the target object, forming a predetermined pattern (e.g., a dot pattern, a line pattern, an area pattern, etc.) on the surface of the target object.
In some optional implementations of this embodiment, the light source may be a laser light source. Because a laser light source has good directivity, it can form a clear structured light pattern on the surface of the target object. The laser light source may be a monochromatic light source, for example, a red laser light source, a green laser light source, a blue laser light source, etc.
In some optional implementations of this embodiment, the above predetermined pattern is a speckle pattern with a preset resolution. The preset resolution may be QVGA (i.e., 320*240), QQVGA (i.e., 160*120), etc. By projecting a predetermined pattern of the preset resolution (e.g., 320*240) onto the surface of the target object, more detailed depth information can be obtained, so that a high-resolution depth image can be generated and the precision of the depth image improved.
In this embodiment, the image acquisition unit 702 may control the first sensor and the second sensor to acquire images of the target object, and then obtain the first image acquired by the first sensor and the second image acquired by the second sensor. Here, the first sensor and the second sensor may be sensors capable of acquiring images.
In this embodiment, the feature point extraction unit 703 may extract feature points from the above first image and second image. Feature points, also known as interest points or key points, are points that stand out in an image and carry representative meaning; through these points, image recognition, stereo matching, 3D reconstruction, and the like can be performed. Feature points can be extracted in many ways, for example, by using the SURF (Speeded Up Robust Features, a robust local feature point detection and description algorithm) operator to extract the feature points in the above first image and second image.
In this embodiment, the first depth information generation unit 704 may first determine whether the quantity of the feature points extracted by the feature point extraction unit 703 is less than the quantity threshold. The quantity threshold is a feature-point quantity value preset according to the needs of the actual application scenario, used to characterize the reliability of stereo matching between the first image and the second image. In response to determining that the quantity of the extracted feature points is greater than or equal to the quantity threshold, the first depth information generation unit 704 may perform stereo matching on the first image and the second image based on the above feature points, and thereby generate the depth information of the target object, for example, a depth image.
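When enough feature points are matched, the stereo step recovers each point's depth from the standard binocular triangulation relation Z = f * B / d (focal length times baseline over disparity). A sketch of that relation; the focal length and baseline below are assumed example parameters, not values from the patent:

```python
def depth_from_disparity(disparity_px, focal_px=700.0, baseline_m=0.06):
    """Triangulate the depth of one matched feature point.

    disparity_px: horizontal offset between the point's positions in the
                  first and second images (pixels).
    focal_px:     focal length in pixels (assumed example value).
    baseline_m:   distance between the two sensors (assumed example value).
    """
    if disparity_px <= 0:
        raise ValueError("matched points must have positive disparity")
    return focal_px * baseline_m / disparity_px

# With these parameters, a 42-pixel disparity puts the point 1 m away;
# larger disparities mean closer points.
print(depth_from_disparity(42.0))  # → 1.0
```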
In some optional implementations of this embodiment, the first sensor and the second sensor are symmetrically arranged on both sides of the light source.
In some optional implementations of this embodiment, the device 700 may further include a light intensity acquiring unit, a moment determination unit, and a second depth information generation unit. The light intensity acquiring unit is configured to, in response to determining that the quantity of the feature points is less than the quantity threshold, obtain the intensities of the light sensed by the first sensor and/or the second sensor under the condition that the light source emits structured light and under the condition that the light source does not emit structured light, respectively. The moment determination unit is configured to, in response to determining that the ratio of the intensity of the light sensed under the condition that the light source emits structured light to the intensity of the light sensed under the condition that the light source does not emit structured light is greater than or equal to the preset ratio, determine the difference between the moment at which the light source emits the structured light and the moment at which the first sensor and/or the second sensor senses the reflected light of the structured light after the light source emits it. The second depth information generation unit is configured to generate the depth information of the target object based on the difference.
In response to determining that the quantity of the feature points extracted by the feature point extraction unit 703 is less than the quantity threshold (i.e., the extracted feature points are insufficient for stereo matching of the first image and the second image), the light intensity acquiring unit may control the first sensor and/or the second sensor to sense the intensity of the echo light under the condition that the light source emits structured light and under the condition that the light source does not emit structured light, respectively, and obtain the intensities of the echo light sensed by the first sensor and/or the second sensor. Here, at least one of the first sensor and the second sensor may be a sensor capable of sensing the intensity of the echo light.
The moment determination unit may first determine whether the ratio of the intensity of the echo light, obtained by the light intensity acquiring unit, sensed under the condition that the light source emits structured light to the intensity of the echo light sensed under the condition that the light source does not emit structured light is less than the preset ratio. In response to determining that the determined ratio is greater than or equal to the above preset ratio, the moment determination unit may obtain the moment at which the light source emits the structured light (hereinafter, the first moment) and the moment at which the first sensor and/or the second sensor senses the reflected light of the structured light after the light source emits it (hereinafter, the second moment), and determine the difference between the first moment and the second moment.
The second depth information generation unit may determine the depth information of the target object based on the above difference. For example, half of the product of the propagation speed of the photons and the above difference may be taken as the distance between the target object and the binocular vision system.
In some optional implementations of this embodiment, the device 700 may further include a light intensity acquiring unit, a phase determination unit, and a second depth information generation unit. The light intensity acquiring unit is configured to, in response to determining that the quantity of the feature points is less than the quantity threshold, obtain the intensities of the light sensed by the first sensor and/or the second sensor under the condition that the light source emits structured light and under the condition that the light source does not emit structured light, respectively. The phase determination unit is configured to, in response to determining that the ratio of the intensity of the light sensed under the condition that the light source emits structured light to the intensity of the light sensed under the condition that the light source does not emit structured light is greater than or equal to the preset ratio, determine the difference between the phase of the structured light emitted by the light source and the phase of the reflected light of the structured light sensed by the first sensor and/or the second sensor after the light source emits it. The second depth information generation unit is configured to generate the depth information of the target object based on the difference.
In response to determining that the quantity of the feature points extracted by the feature point extraction unit 703 is less than the quantity threshold (i.e., the extracted feature points are insufficient for stereo matching of the first image and the second image), the light intensity acquiring unit may control the first sensor and/or the second sensor to sense the intensity of the echo light under the condition that the light source emits structured light and under the condition that the light source does not emit structured light, respectively, and obtain the intensities of the echo light sensed by the first sensor and/or the second sensor. Here, at least one of the first sensor and the second sensor may be a sensor capable of sensing the intensity of the echo light.
The phase determination unit may first determine whether the ratio of the intensity of the echo light, obtained by the light intensity acquiring unit, sensed under the condition that the light source emits structured light to the intensity of the echo light sensed under the condition that the light source does not emit structured light is less than the preset ratio. In response to determining that the determined ratio is greater than or equal to the above preset ratio, the phase determination unit may obtain the phase of the structured light emitted by the light source (hereinafter, the first phase) and the phase of the reflected light of the structured light sensed by the first sensor and/or the second sensor after the light source emits it (hereinafter, the second phase), and determine the difference between the first phase and the second phase.
The second depth information generation unit may determine the depth information of the target object based on the above difference.
The control device for a binocular vision system provided by the above embodiment of the present application controls the light source to project a structured light pattern onto the target object, then obtains the first image of the target object acquired by the first sensor and the second image of the target object acquired by the second sensor, then extracts feature points from the first image and the second image, and finally, in response to the quantity of the feature points being greater than or equal to the quantity threshold, performs stereo matching on the first image and the second image to generate a depth image, thereby improving the accuracy of the depth image.
Referring now to Fig. 8, a structural schematic diagram of a computer system 800 suitable for implementing the electronic device of the embodiments of the present application is illustrated. The electronic device shown in Fig. 8 is merely an example and should not impose any limitation on the functions or scope of use of the embodiments of the present application.
As shown in Fig. 8, the computer system 800 includes a controller 801, which includes one or more central processing units (CPUs) and can execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage portion 808 into a random access memory (RAM) 803. The RAM 803 also stores various programs and data required for the operation of the system 800. The controller 801, the ROM 802, and the RAM 803 are connected to each other through a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
The following components are connected to the I/O interface 805: an input portion 806 including the first sensor and the second sensor, etc.; an output portion 807 including the light source, etc.; a storage portion 808 including a hard disk, etc.; and a communication portion 809 including a network interface card such as a LAN card or a modem. The communication portion 809 performs communication processing via a network such as the Internet. A driver 810 is also connected to the I/O interface 805 as needed. A removable medium 811, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the driver 810 as needed, so that a computer program read from it can be installed into the storage portion 808 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the methods shown in the flowcharts. In such embodiments, the controller 801, when invoking the above computer program to execute the control function for the binocular vision system, may control the output portion 807 to emit structured light toward the target object, control the input portion 806 to acquire the first image and the second image of the target object from different viewpoints, and control the input portion 806 to sense the echo light reflected by the target object. The above computer program may be downloaded and installed from a network through the communication portion 809 and/or installed from the removable medium 811. When the computer program is executed by the controller 801, the above functions defined in the methods of the present application are executed.
It should be noted that the computer-readable medium described herein may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any appropriate combination of the above. In the present application, a computer-readable storage medium may be any tangible medium containing or storing a program, which may be used by or in combination with an instruction execution system, apparatus, or device. In the present application, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any appropriate combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to wireless means, a wire, an optical cable, RF, etc., or any appropriate combination of the above.
The computer program code for executing the operations of the present application may be written in one or more programming languages or a combination thereof. The programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of the systems, methods, and computer program products according to the various embodiments of the present application. In this regard, each box in a flowchart or block diagram may represent a module, a program segment, or a part of code, which contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions marked in the boxes may occur in an order different from that marked in the drawings. For example, two boxes shown in succession may actually be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should further be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that executes the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or by hardware. The described units may also be arranged in a processor; for example, they may be described as: a processor including a light source control unit, an image acquisition unit, a feature point extraction unit, and a first depth information generation unit. The names of these units do not, under certain circumstances, constitute a limitation on the units themselves; for example, the light source control unit may also be described as "a unit for controlling the light source to emit structured light toward the target object".
As another aspect, the present application also provides a computer-readable medium, which may be included in the device described in the above embodiments, or may exist separately without being assembled into the device. The above computer-readable medium carries one or more programs, and when the one or more programs are executed by the device, the device is caused to: control the light source to emit structured light toward the target object, so as to form a predetermined pattern on the surface of the target object; obtain a first image of the target object acquired by the first sensor and a second image of the target object acquired by the second sensor; extract feature points from the first image and the second image; and, in response to determining that the quantity of the feature points is greater than or equal to the quantity threshold, perform stereo matching on the first image and the second image based on the feature points to generate the depth information of the target object.
The above description is only a preferred embodiment of the present application and an explanation of the applied technical principles. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover, without departing from the above inventive concept, other technical solutions formed by any combination of the above technical features or their equivalent features, for example, technical solutions formed by mutually replacing the above features with (but not limited to) technical features with similar functions disclosed in the present application.
Claims (12)
1. A control method for a binocular vision system, the binocular vision system comprising a light source, a first sensor, and a second sensor, the method comprising:
controlling the light source to emit structured light toward a target object, so as to form a predetermined pattern on a surface of the target object;
obtaining a first image of the target object acquired by the first sensor and a second image of the target object acquired by the second sensor;
extracting feature points from the first image and the second image;
in response to determining that a quantity of the feature points is greater than or equal to a quantity threshold, performing stereo matching on the first image and the second image based on the feature points to generate depth information of the target object.
2. The method according to claim 1, wherein the method further comprises:
in response to determining that the quantity of the feature points is less than the quantity threshold, obtaining intensities of light sensed by the first sensor and/or the second sensor under a condition that the light source emits the structured light and under a condition that the light source does not emit the structured light, respectively;
in response to determining that a ratio of the intensity of the light sensed under the condition that the light source emits the structured light to the intensity of the light sensed under the condition that the light source does not emit the structured light is greater than or equal to a preset ratio, determining a difference between a moment at which the light source emits the structured light and a moment at which the first sensor and/or the second sensor senses reflected light of the structured light after the light source emits the structured light;
generating the depth information of the target object based on the difference.
3. The method according to claim 1, wherein the method further comprises:
in response to determining that the quantity of the feature points is less than the quantity threshold, obtaining intensities of light sensed by the first sensor and/or the second sensor under a condition that the light source emits the structured light and under a condition that the light source does not emit the structured light, respectively;
in response to determining that a ratio of the intensity of the light sensed under the condition that the light source emits the structured light to the intensity of the light sensed under the condition that the light source does not emit the structured light is greater than or equal to a preset ratio, determining a difference between a phase of the structured light emitted by the light source and a phase of reflected light of the structured light sensed by the first sensor and/or the second sensor after the light source emits the structured light;
generating the depth information of the target object based on the difference.
4. The method according to claim 1, wherein the first sensor and the second sensor are symmetrically arranged on both sides of the light source.
5. The method according to any one of claims 1-4, wherein the predetermined pattern is a speckle pattern with a preset resolution.
6. A control device for a binocular vision system, the binocular vision system comprising a light source, a first sensor, and a second sensor, the device comprising:
a light source control unit configured to control the light source to emit structured light toward a target object, so as to form a predetermined pattern on the surface of the target object;
an image acquisition unit configured to obtain a first image of the target object acquired by the first sensor and a second image of the target object acquired by the second sensor;
a feature point extraction unit configured to extract feature points from the first image and the second image;
a first depth information generation unit configured to, in response to determining that the quantity of the feature points is greater than or equal to a quantity threshold, perform stereo matching on the first image and the second image based on the feature points to generate depth information of the target object.
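The stereo-matching branch in claim 6 ultimately converts per-feature disparities to depth by triangulation. A minimal sketch (the focal length in pixels and the baseline in meters are illustrative parameters, not values from the patent):

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Triangulated depth for one matched feature point between the
    first and second images: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

With a 700 px focal length and a 10 cm baseline, a 35 px disparity yields a depth of 2 m.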
7. The device according to claim 6, wherein the device further comprises:
a light intensity acquisition unit configured to, in response to determining that the quantity of the feature points is less than the quantity threshold, obtain the intensity of the light sensed by the first sensor and/or the second sensor under the condition that the light source emits structured light and under the condition that the light source does not emit structured light, respectively;
a moment determination unit configured to, in response to determining that the ratio of the intensity of the light sensed under the condition that the light source emits structured light to the intensity of the light sensed under the condition that the light source does not emit structured light is greater than or equal to a preset ratio, determine the difference between the moment at which the light source emits the structured light and the moment at which the first sensor and/or the second sensor senses the reflected light of the structured light after the light source emits the structured light;
a second depth information generation unit configured to generate the depth information of the target object based on the difference.
8. The device according to claim 6, wherein the device further comprises:
a light intensity acquisition unit configured to, in response to determining that the quantity of the feature points is less than the quantity threshold, obtain the intensity of the light sensed by the first sensor and/or the second sensor under the condition that the light source emits structured light and under the condition that the light source does not emit structured light, respectively;
a phase determination unit configured to, in response to determining that the ratio of the intensity of the light sensed under the condition that the light source emits structured light to the intensity of the light sensed under the condition that the light source does not emit structured light is greater than or equal to a preset ratio, determine the difference between the phase of the structured light emitted by the light source and the phase of the reflected light of the structured light sensed by the first sensor and/or the second sensor after the light source emits the structured light;
a second depth information generation unit configured to generate the depth information of the target object based on the difference.
9. The device according to claim 6, wherein the first sensor and the second sensor are symmetrically arranged on both sides of the light source.
10. The device according to any one of claims 6-9, wherein the predetermined pattern is a speckle pattern with a preset resolution.
11. An electronic device, comprising:
a controller comprising one or more processors;
a light source;
a first sensor;
a second sensor; and
a storage device for storing one or more programs,
wherein when the one or more programs are executed by the controller, the controller implements the method according to any one of claims 1-5.
12. A computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-5.
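Taken together, the method claims describe a mode-selection policy: feature-based stereo matching when enough feature points are extracted, otherwise a time- or phase-based fallback gated on the ratio of sensed light intensity with the source on versus off. A hedged sketch of that decision flow (function name and return labels are illustrative, not from the patent):

```python
def choose_depth_mode(num_feature_points: int, quantity_threshold: int,
                      intensity_light_on: float, intensity_light_off: float,
                      preset_ratio: float) -> str:
    """Select how depth information is generated, following the branching
    described in the method claims: stereo matching first, then a
    time-of-flight / phase fallback if too few feature points are found."""
    if num_feature_points >= quantity_threshold:
        # Enough texture from the projected pattern: triangulate by matching.
        return "stereo_matching"
    if intensity_light_off > 0 and intensity_light_on / intensity_light_off >= preset_ratio:
        # Emitted light dominates ambient light: the ToF signal is usable.
        return "time_or_phase_fallback"
    return "no_depth"
```

The intensity-ratio gate keeps the fallback from firing when ambient light would swamp the emitted structured light.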
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810259316.0A CN108495113B (en) | 2018-03-27 | 2018-03-27 | Control method and device for binocular vision system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108495113A (en) | 2018-09-04 |
CN108495113B CN108495113B (en) | 2020-10-27 |
Family
ID=63316554
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810259316.0A Active CN108495113B (en) | 2018-03-27 | 2018-03-27 | Control method and device for binocular vision system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108495113B (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109842791A (en) * | 2019-01-15 | 2019-06-04 | 浙江舜宇光学有限公司 | A kind of image processing method and device |
CN110278356A (en) * | 2019-06-10 | 2019-09-24 | 北京迈格威科技有限公司 | Smart camera equipment and information processing method, information processing equipment and medium |
WO2020047863A1 (en) * | 2018-09-07 | 2020-03-12 | 深圳配天智能技术研究院有限公司 | Distance measurement method and apparatus |
WO2020173461A1 (en) * | 2019-02-28 | 2020-09-03 | 深圳市道通智能航空技术有限公司 | Obstacle detection method, device and unmanned air vehicle |
CN112633181A (en) * | 2020-12-25 | 2021-04-09 | 北京嘀嘀无限科技发展有限公司 | Data processing method, system, device, equipment and medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102663712A (en) * | 2012-04-16 | 2012-09-12 | 天津大学 | Depth calculation imaging method based on flight time TOF camera |
CN103796004A (en) * | 2014-02-13 | 2014-05-14 | 西安交通大学 | Active binocular depth sensing method of structured light |
WO2016206004A1 (en) * | 2015-06-23 | 2016-12-29 | 华为技术有限公司 | Photographing device and method for acquiring depth information |
CN106445146A (en) * | 2016-09-28 | 2017-02-22 | 深圳市优象计算技术有限公司 | Gesture interaction method and device for helmet-mounted display |
CN106504284A (en) * | 2016-10-24 | 2017-03-15 | 成都通甲优博科技有限责任公司 | A kind of depth picture capturing method combined with structure light based on Stereo matching |
CN106603942A (en) * | 2016-12-15 | 2017-04-26 | 杭州艾芯智能科技有限公司 | TOF camera noise reduction method |
CN106772431A (en) * | 2017-01-23 | 2017-05-31 | 杭州蓝芯科技有限公司 | A kind of Depth Information Acquistion devices and methods therefor of combination TOF technologies and binocular vision |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020047863A1 (en) * | 2018-09-07 | 2020-03-12 | 深圳配天智能技术研究院有限公司 | Distance measurement method and apparatus |
CN111699361A (en) * | 2018-09-07 | 2020-09-22 | 深圳配天智能技术研究院有限公司 | Method and device for measuring distance |
CN111699361B (en) * | 2018-09-07 | 2022-05-27 | 深圳配天智能技术研究院有限公司 | Method and device for measuring distance |
CN109842791A (en) * | 2019-01-15 | 2019-06-04 | 浙江舜宇光学有限公司 | A kind of image processing method and device |
CN109842791B (en) * | 2019-01-15 | 2020-09-25 | 浙江舜宇光学有限公司 | Image processing method and device |
WO2020173461A1 (en) * | 2019-02-28 | 2020-09-03 | 深圳市道通智能航空技术有限公司 | Obstacle detection method, device and unmanned air vehicle |
CN110278356A (en) * | 2019-06-10 | 2019-09-24 | 北京迈格威科技有限公司 | Smart camera equipment and information processing method, information processing equipment and medium |
CN112633181A (en) * | 2020-12-25 | 2021-04-09 | 北京嘀嘀无限科技发展有限公司 | Data processing method, system, device, equipment and medium |
Also Published As
Publication number | Publication date |
---|---|
CN108495113B (en) | 2020-10-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108495113A (en) | control method and device for binocular vision system | |
CN111325796B (en) | Method and apparatus for determining pose of vision equipment | |
US20210304431A1 (en) | Depth-Aware Photo Editing | |
CN109074660B (en) | Method and system for real-time three-dimensional capture and instant feedback of monocular camera | |
US11521311B1 (en) | Collaborative disparity decomposition | |
KR20220009393A (en) | Image-based localization | |
CN102938844B (en) | Three-dimensional imaging is utilized to generate free viewpoint video | |
US9129435B2 (en) | Method for creating 3-D models by stitching multiple partial 3-D models | |
US20210241495A1 (en) | Method and system for reconstructing colour and depth information of a scene | |
CN106256124B (en) | Structuring is three-dimensional | |
EP3276578A1 (en) | Method for depicting an object | |
CN103168309A (en) | 2d to 3d image and video conversion using GPS and dsm | |
KR20190112894A (en) | Method and apparatus for 3d rendering | |
CN109887003A (en) | A kind of method and apparatus initialized for carrying out three-dimensional tracking | |
CN110866977B (en) | Augmented reality processing method, device, system, storage medium and electronic equipment | |
KR102197615B1 (en) | Method of providing augmented reality service and server for the providing augmented reality service | |
CN109978753B (en) | Method and device for drawing panoramic thermodynamic diagram | |
CN107507269A (en) | Personalized three-dimensional model generating method, device and terminal device | |
KR102317182B1 (en) | Apparatus for generating composite image using 3d object and 2d background | |
US11557086B2 (en) | Three-dimensional (3D) shape modeling based on two-dimensional (2D) warping | |
US20110242271A1 (en) | Synthesizing Panoramic Three-Dimensional Images | |
US10586394B2 (en) | Augmented reality depth sensing using dual camera receiver | |
JP2020009447A (en) | Method and device for augmenting reality | |
WO2023088127A1 (en) | Indoor navigation method, server, apparatus and terminal | |
Ogawa et al. | Occlusion Handling in Outdoor Augmented Reality using a Combination of Map Data and Instance Segmentation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||