CN106225764A - Distance measurement method based on a binocular camera in a terminal, and terminal - Google Patents
Distance measurement method based on a binocular camera in a terminal, and terminal
- Publication number
- CN106225764A (application CN201610515002.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- terminal
- target object
- determined
- distance
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C3/00—Measuring distances in line of sight; Optical rangefinders
Landscapes
- Physics & Mathematics (AREA)
- Electromagnetism (AREA)
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Image Analysis (AREA)
Abstract
The disclosure relates to a distance measurement method based on a binocular camera in a terminal, and to such a terminal. The method includes: the terminal captures, through the binocular camera, a first image and a second image of a target object respectively; the terminal then determines, from the first image and the second image, the distance between the terminal and the target object. Because the terminal captures the scene ahead with its built-in binocular camera and derives the distance from the two captured images, ranging is completed with a single terminal and no other equipment. When the terminal is placed in a moving vehicle, it can determine the distance between the current vehicle and the vehicle ahead, thereby providing efficient driving assistance.
Description
Technical field
The present disclosure relates to the communications field, and in particular to a distance measurement method based on a binocular camera in a terminal, and to such a terminal.
Background
While driving, a driver needs to judge the distance to the vehicle ahead promptly and accurately, so that vehicle speed and other parameters can be adjusted in time to ensure traffic safety.
In the related art, the distance between a vehicle and the vehicle ahead can be calculated by adding a forward-vehicle ranging and early-warning system to the vehicle. Such a system is a standalone embedded device, and uses a monocular localization algorithm to calculate the distance to the vehicle ahead.
Summary of the invention
Embodiments of the disclosure provide a distance measurement method based on a binocular camera in a terminal, and a terminal. The technical solutions are as follows:
According to a first aspect of the embodiments of the disclosure, a distance measurement method based on a binocular camera in a terminal is provided. The method includes:
The terminal captures, through the binocular camera, a first image and a second image of a target object respectively;
The terminal determines, from the first image and the second image, the distance between the terminal and the target object.
Further, determining, from the first image and the second image, the distance between the terminal and the target object includes:
The terminal detects the target object in the first image;
The terminal determines, from the detection result, the position of the target object in the first image;
The terminal determines, from the position of the target object in the first image, the position of the target object in the second image;
The terminal determines, from the positions of the target object in the first image and in the second image, the distance between the terminal and the target object.
Further, detecting the target object in the first image includes:
The terminal detects the target object in the first image using a target-detection deep network, where the network includes at least one fully connected layer.
Further, detecting the target object in the first image using the target-detection deep network includes:
The terminal determines a candidate image block in the first image;
The terminal inputs the two-dimensional matrix corresponding to the candidate image block into the target-detection deep network;
The terminal performs singular value decomposition on the parameter matrix of a first fully connected layer of the target-detection deep network, obtaining a first parameter sub-matrix and a second parameter sub-matrix corresponding to that parameter matrix;
The terminal applies the first parameter sub-matrix and the second parameter sub-matrix to the two-dimensional matrix of the candidate image block in a fully connected operation, obtaining a feature vector for the candidate image block, where the elements of the feature vector represent the probability that the candidate image block is the target object;
The terminal determines the target object from the feature vector of the candidate image block.
Further, determining the distance between the terminal and the target object from the positions of the target object in the first image and in the second image includes:
The terminal calculates the difference between the abscissa of the target object in the first image and its abscissa in the second image;
The terminal determines, from this difference, the depth value of the target object, and takes the depth value as the distance between the terminal and the target object.
Further, before determining the distance between the terminal and the target object from the first image and the second image, the method also includes:
The terminal rectifies the first image and the second image, so that a first pixel in the first image and the corresponding second pixel in the second image have the same vertical coordinate.
Further, before determining the distance between the terminal and the target object from the first image and the second image, the method also includes:
The terminal performs parameter calibration on the binocular camera, so that the left camera and the right camera of the binocular camera are kept parallel to each other.
Further, before applying the first parameter sub-matrix and the second parameter sub-matrix to the two-dimensional matrix of the candidate image block to obtain its feature vector, the method also includes:
The terminal applies convolution and down-sampling to the two-dimensional matrix of the candidate image block using the target-detection deep network.
According to a second aspect of the embodiments of the disclosure, a terminal is provided, including:
an acquisition module, configured to capture, through the binocular camera, a first image and a second image of a target object respectively;
a determination module, configured to determine, from the first image and the second image, the distance between the terminal and the target object.
Further, the determination module includes:
a detection sub-module, configured to detect the target object in the first image;
a first determination sub-module, configured to determine, from the detection result, the position of the target object in the first image;
a second determination sub-module, configured to determine, from the position of the target object in the first image, the position of the target object in the second image;
a third determination sub-module, configured to determine, from the positions of the target object in the first image and in the second image, the distance between the terminal and the target object.
Further, the detection sub-module is configured to:
detect the target object in the first image using a target-detection deep network, where the network includes at least one fully connected layer.
Further, the detection sub-module is specifically configured to:
determine a candidate image block in the first image; input the two-dimensional matrix corresponding to the candidate image block into the target-detection deep network; perform singular value decomposition on the parameter matrix of a first fully connected layer of the network, obtaining a first parameter sub-matrix and a second parameter sub-matrix corresponding to that parameter matrix; apply the first and second parameter sub-matrices to the two-dimensional matrix of the candidate image block in a fully connected operation, obtaining a feature vector for the candidate image block, where the elements of the feature vector represent the probability that the candidate image block is the target object; and determine the target object from the feature vector of the candidate image block.
Further, the third determination sub-module is specifically configured to:
calculate the difference between the abscissa of the target object in the first image and its abscissa in the second image; determine, from this difference, the depth value of the target object; and take the depth value as the distance between the terminal and the target object.
Further, the terminal also includes:
a rectification module, configured to rectify the first image and the second image so that a first pixel in the first image and the corresponding second pixel in the second image have the same vertical coordinate.
Further, the terminal also includes:
a processing module, configured to perform parameter calibration on the binocular camera so that its left camera and right camera are kept parallel to each other.
Further, the detection sub-module is additionally configured to:
apply convolution and down-sampling to the two-dimensional matrix of the candidate image block using the target-detection deep network.
According to a third aspect of the embodiments of the disclosure, a terminal is provided, including:
a processor;
a memory for storing instructions executable by the processor;
where the processor is configured to:
capture, through the binocular camera, a first image and a second image of a target object respectively; and
determine, from the first image and the second image, the distance between the terminal and the target object.
The technical solutions provided by the embodiments of the disclosure can have the following beneficial effects: the terminal captures the scene ahead with its built-in binocular camera and determines the distance between the terminal and the target object from the two captured images, so that ranging is completed with a single terminal and no other equipment. When the terminal is placed in a moving vehicle, it can determine the distance between the current vehicle and the vehicle ahead, helping the driver adjust speed and perform other operations in time. That is, a single terminal containing a binocular camera achieves efficient driving assistance, without the standalone embedded device required in the related art; moreover, compared with the ranging method of the related art, the distance determined by this embodiment is more accurate.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the disclosure.
Brief description of the drawings
The accompanying drawings herein are incorporated into and form part of this specification; they illustrate embodiments consistent with the disclosure and, together with the description, serve to explain its principles.
Fig. 1 is a flowchart of a distance measurement method based on a binocular camera in a terminal, according to an exemplary embodiment;
Fig. 2 is a flowchart of a distance measurement method based on a binocular camera in a terminal, according to an exemplary embodiment;
Fig. 3 is a flowchart of a distance measurement method based on a binocular camera in a terminal, according to an exemplary embodiment;
Fig. 4 is a flowchart of a distance measurement method based on a binocular camera in a terminal, according to an exemplary embodiment;
Fig. 5 is an imaging schematic diagram of a binocular camera;
Fig. 6 is a complete flowchart of a distance measurement method based on a binocular camera in a terminal, according to an exemplary embodiment;
Fig. 7 is a block diagram of a terminal, according to an exemplary embodiment;
Fig. 8 is a block diagram of a terminal, according to an exemplary embodiment;
Fig. 9 is a block diagram of a terminal, according to an exemplary embodiment;
Fig. 10 is a block diagram of a terminal, according to an exemplary embodiment;
Fig. 11 is a block diagram of the physical entity of a terminal, according to an exemplary embodiment;
Fig. 12 is a block diagram of a terminal 1300, according to an exemplary embodiment.
The above drawings show specific embodiments of the disclosure, which are described in more detail below. These drawings and their descriptions are not intended to limit the scope of the disclosed concept in any way, but to illustrate it to those skilled in the art by reference to specific embodiments.
Detailed description of the invention
Exemplary embodiments are described in detail here, with examples shown in the accompanying drawings. In the following description, unless otherwise indicated, the same numerals in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the disclosure; rather, they are merely examples of apparatus and methods consistent with some aspects of the disclosure as detailed in the appended claims.
Fig. 1 is a flowchart of a distance measurement method based on a binocular camera in a terminal, according to an exemplary embodiment. As shown in Fig. 1, the method is executed by a terminal containing a binocular camera, such as a mobile terminal or a tablet computer. The method includes:
In step 101, the terminal captures, through the binocular camera, a first image and a second image of the target object respectively.
The target object differs depending on the scene in which the disclosure is applied. For example, in a driving-assistance scene, the target object may be the vehicle ahead of the terminal's vehicle, a person, an obstacle, and so on. The following takes the driving-assistance scene as an example, but this should not be taken as a limitation of the disclosure; the disclosed method applies equally to other scenes in which a terminal is used for ranging.
After start-up, the terminal is placed in front of the windshield of its vehicle, and can capture the scene ahead of the vehicle through its built-in binocular camera. The binocular camera contains two cameras, each of which captures a picture of the scene ahead and feeds it back to the terminal. The terminal therefore obtains, at the same instant, two pictures of the scene in front of the vehicle.
In step 102, the terminal determines, from the first image and the second image, the distance between the terminal and the target object.
After obtaining the first image and the second image, the terminal can determine the distance between the terminal and the target object from the two images. For example, the terminal can determine the distance from the different positions that the same object in the forward scene occupies in the first image and in the second image.
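The two steps above (step 101 and step 102) can be sketched end to end as follows. This is a minimal illustration under assumed values, not the patent's implementation: the "detector" is a stub standing in for the network described later, and the focal length and camera baseline are made-up constants; the disparity-to-depth relation used here is the one explained later with reference to Fig. 5.

```python
def locate_target(image):
    """Stub detector: here an 'image' is just a dict carrying the target's
    known column, standing in for a real target-detection network."""
    return image["target_x"]

def measure_distance(first_image, second_image, focal_px=700.0, baseline_m=0.06):
    """Step 101 has produced the two images; step 102 turns the horizontal
    offset of the target between them into a distance (metres)."""
    x_left = locate_target(first_image)     # position in the first image
    x_right = locate_target(second_image)   # position in the second image
    disparity = x_left - x_right            # the difference of step 401
    return focal_px * baseline_m / disparity

# Example with assumed positions: a 14-pixel disparity.
d = measure_distance({"target_x": 420.0}, {"target_x": 406.0})
```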
In this embodiment, the terminal captures the scene ahead with its built-in binocular camera and determines the distance between the terminal and the target object from the two captured images, so that ranging is completed with a single terminal and no other equipment. When the terminal is placed in a moving vehicle, it can determine the distance between the current vehicle and the vehicle ahead, helping the driver adjust speed and perform other operations in time. That is, a single terminal containing a binocular camera achieves efficient driving assistance, without the standalone embedded device required in the related art; moreover, compared with the ranging method of the related art, the distance determined by this embodiment is more accurate.
Building on the above embodiment, this embodiment concerns the specific method of determining the distance between the terminal and the target object. Fig. 2 is a flowchart of a distance measurement method based on a binocular camera in a terminal, according to an exemplary embodiment. As shown in Fig. 2, step 102 above includes:
In step 201, the terminal detects the target object in the first image.
The pictures captured by the two cameras of the binocular camera cover the whole forward field of view, i.e. they contain background objects in addition to the target object. For example, when the terminal is placed in front of the windshield, the captured pictures contain trees, buildings, and so on as well as the vehicle ahead. If the target object is the vehicle ahead, the terminal mainly needs to determine the distance between the current vehicle and the vehicle ahead, and therefore first needs to detect the target object, i.e. the vehicle ahead, in a picture containing multiple objects.
In step 202, the terminal determines, from the detection result, the position of the target object in the first image.
After detecting the target object, the terminal needs to determine the target object's position in the first image. The terminal may take the position of the target object's center point in the first image as the target object's position in the first image.
In step 203, the terminal determines, from the position of the target object in the first image, the position of the target object in the second image.
The terminal only needs to detect the target object and determine its position in one of the two images; once the terminal has determined the position of the target object in one image, it can determine its position in the other image through the correspondence between the pictures captured by the binocular camera.
In step 204, the terminal determines, from the positions of the target object in the first image and in the second image, the distance between the terminal and the target object.
Once the terminal has determined the positions of the target object in the first image and in the second image, it can determine the distance between the terminal and the target object according to the principle of binocular ranging.
In this embodiment, the terminal detects the target object in the pictures captured by the binocular camera and determines its position, and then determines the distance between the terminal and the target object from the object's different positions in the two images. That is, ranging is completed using only the target object's positions in the two images, which keeps the ranging process efficient and the measured distance accurate.
Building on the above embodiment, this embodiment concerns the specific method of detecting the target object, i.e. step 201 above is specifically:
The terminal detects the target object in the first image using a target-detection deep network, where the network includes at least one fully connected layer.
The target-detection deep network may be a Faster R-CNN network, which includes convolutional layers, down-sampling layers, fully connected layers, and so on. After the two-dimensional matrix of an image to be detected is input into this network and processed by it, the network outputs a feature vector for the image.
A fully connected layer of the Faster R-CNN network outputs a new feature vector by multiplying the input feature vector with the layer's matrix. The number of fully connected layers can be set flexibly as needed; for example, two fully connected layers may be set in the Faster R-CNN network.
Building on the above embodiment, this embodiment concerns the specific method of detecting the target object in the first image using the target-detection deep network. Fig. 3 is a flowchart of a distance measurement method based on a binocular camera in a terminal, according to an exemplary embodiment. As shown in Fig. 3, detecting the target object in the first image with the target-detection deep network includes:
In step 301, the terminal determines candidate image blocks in the first image.
The terminal selects different candidate image blocks from the first image by random sampling; for example, the terminal randomly selects 100 image blocks as candidates. The terminal then performs the steps of this embodiment on each selected block. This embodiment takes the processing of one candidate image block as an example.
In step 302, the terminal inputs the two-dimensional matrix corresponding to the candidate image block into the target-detection deep network.
In step 303, the terminal performs singular value decomposition on the parameter matrix of a first fully connected layer of the target-detection deep network, obtaining a first parameter sub-matrix and a second parameter sub-matrix corresponding to that parameter matrix.
The parameters of a fully connected layer of the target-detection deep network can be expressed as a u × v matrix W. By singular value decomposition, W can be approximated as W ≈ U Σ_m V^T, where U is a u × m matrix, Σ_m is an m × m diagonal matrix, and V is a v × m matrix. Through the singular value decomposition, the number of parameters of the fully connected layer is reduced from uv to m(u + v), greatly reducing the number of operations.
To implement the SVD-based acceleration, the fully connected layer with parameter matrix W is replaced with two fully connected layers: the parameters of the first are set to Σ_m V^T, and the parameters of the second to U. When the number of candidate image blocks is large, replacing the fully connected layer with these two layers greatly reduces the number of operations and thus significantly speeds up target object detection.
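The replacement of one fully connected layer by two can be sketched as follows. This is a minimal numpy illustration under assumed dimensions (u = 256, v = 512, rank m = 32 are arbitrary choices), not the patent's implementation: a rank-m truncated SVD turns one dense layer with uv parameters into two layers with m(u + v) parameters.

```python
import numpy as np

u, v, m = 256, 512, 32
rng = np.random.default_rng(0)
W = rng.standard_normal((u, v))          # parameter matrix of the original layer

# Truncated SVD: W ~= U_m @ (S_m @ Vt_m)
U, s, Vt = np.linalg.svd(W, full_matrices=False)
first_layer = np.diag(s[:m]) @ Vt[:m, :]  # parameters S_m V^T, shape (m, v)
second_layer = U[:, :m]                   # parameters U,       shape (u, m)

x = rng.standard_normal(v)                # an input feature vector
y_original = W @ x                        # one dense layer: u*v multiplies
y_two_layer = second_layer @ (first_layer @ x)  # two layers: m*(u+v) multiplies

params_before = u * v        # 131072
params_after = m * (u + v)   # 24576 — roughly a 5x reduction here
```

With real network weights, whose singular values decay quickly, `y_two_layer` stays close to `y_original`; for this random W the point is only the shape and parameter-count bookkeeping.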
Note that this step has no strict ordering with steps 301 and 302 above; it can be completed before detection begins. That is, the fully connected layer can be replaced with the two fully connected layers in advance by the method of this step, and during target object detection the parameter sub-matrices of the two layers are used directly for the computation.
In addition, when the target-detection deep network includes multiple fully connected layers, each of them can be replaced in this way, further reducing the number of operations and further accelerating vehicle detection.
In step 304, the terminal applies the first parameter sub-matrix and the second parameter sub-matrix to the two-dimensional matrix of the candidate image block in a fully connected operation, obtaining a feature vector for the candidate image block, where the elements of the feature vector represent the probability that the candidate image block is the target object.
The image information of the candidate image block is represented as a two-dimensional matrix. After this matrix is input into the fully connected layer, the layer splits it by rows or by columns into multiple feature vectors, and multiplies each of them by the first and second parameter sub-matrices of the layer, obtaining new feature vectors.
The elements of the resulting new feature vector represent the probability that the candidate image block is the target object. For example, when the new feature vector is 2-dimensional it contains 2 elements: one element represents the probability that the candidate block is the target object, for example a vehicle, and the other represents the probability that it is not. As another example, when the new feature vector is 3-dimensional it contains 3 elements, which can represent the probabilities of 2 target object classes and the background.
In step 305, the terminal determines the target object from the feature vector of the candidate image block.
For the feature vector obtained for each candidate image block, the terminal compares the probabilities represented by its elements. For example, suppose the feature vector is 2-dimensional, with the first element representing the probability that the candidate block is the target object and the second the probability that it is background, and suppose the first element is 0.7 and the second is 0.3. Then the probability that the candidate block is the target object exceeds the probability that it is background, so the candidate block can be considered the target object and taken as a target object in the first image. The target object is determined in this way.
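The decision rule just described can be sketched as follows, using the 2-dimensional case and the worked probabilities from the text; the candidate names and the second block's scores are made up for illustration.

```python
def is_target(feature_vec):
    """feature_vec = (p_object, p_background); keep the block when its
    object probability exceeds its background probability."""
    p_object, p_background = feature_vec
    return p_object > p_background

# Hypothetical feature vectors for two candidate image blocks.
candidates = {
    "block_a": (0.7, 0.3),   # the worked example above: kept as target
    "block_b": (0.2, 0.8),   # mostly background: rejected
}
targets = [name for name, vec in candidates.items() if is_target(vec)]
```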
In this embodiment, the target object in the first image is detected by inputting the candidate image blocks of the first image into the target-detection deep network, which ensures detection accuracy. Meanwhile, during the fully connected operation, the parameter matrix of the fully connected layer is replaced with two parameter sub-matrices, greatly reducing the number of operations and thus accelerating target object detection and improving its efficiency.
Building on the above embodiment, this embodiment concerns the specific method of determining the distance between the terminal and the target object. Fig. 4 is a flowchart of a distance measurement method based on a binocular camera in a terminal, according to an exemplary embodiment. As shown in Fig. 4, step 204 above specifically includes:
In step 401, the terminal calculates the difference between the abscissa of the target object in the first image and its abscissa in the second image.
In step 402, the terminal determines, from this difference, the depth value of the target object, and takes the depth value as the distance between the terminal and the target object.
Specifically, Fig. 5 is an imaging schematic diagram of a binocular camera. As shown in Fig. 5, a point P(x_c, y_c, z_c) in the physical world has imaging points P_l and P_r in the imaging pictures of the two cameras of the binocular camera. P_l and P_r have the same vertical coordinate, i.e. every row of the two imaging pictures is aligned. The abscissas of P_l and P_r are X_left and X_right respectively, and they differ; the difference between X_left and X_right is the disparity d of the binocular camera. The disparity d is inversely proportional to the depth z_c of the point P(x_c, y_c, z_c), i.e. their product is a constant. Therefore, once step 401 above has determined the difference between the abscissa of the target object in the first image and its abscissa in the second image, i.e. the disparity, the depth value of the target object can be calculated from this difference, and the calculated depth value is exactly the distance between the terminal and the target object.
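The inverse-proportionality above can be written as z_c = f·B / d for a rectified pair, where f is the focal length in pixels and B the baseline between the two cameras (so f·B is the constant product mentioned in the text). The sketch below uses assumed camera constants (700 px focal length, 6 cm baseline), not values from the patent.

```python
def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Depth (metres) of a point imaged at columns x_left and x_right
    of a rectified stereo pair: z = f * B / d, with d = x_left - x_right."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    return focal_px * baseline_m / disparity

# Example: a 7-pixel disparity under the assumed constants.
z = depth_from_disparity(x_left=420.0, x_right=413.0, focal_px=700.0, baseline_m=0.06)
```

Note how a larger disparity yields a smaller depth, matching the inverse proportion in the text: nearby objects shift more between the two views.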
It should be noted that when the image contains multiple target objects, the terminal may calculate the distance from each target object to the terminal, compare these distances, and feed back the minimum distance to the user as the final distance. For example, the terminal may feed back the distance between the terminal and the target object to the user by voice prompt, text prompt, or other such means.
On the basis of the above embodiment, this embodiment relates to an image rectification method. That is, before the above step 102, the method further includes:
The terminal performs image rectification on the first image and the second image, so that a first pixel in the first image and the corresponding second pixel in the second image have the same vertical coordinate.
Here, the first pixel is any pixel in the first image, and the second pixel is the pixel in the second image that corresponds to the first pixel.
To calculate the disparity of the ranged target object between the left and right images, the two pixels corresponding to that point in the left and right images must first be matched. Matching corresponding points over a two-dimensional space, however, is very time-consuming. To reduce the matching search range, this embodiment puts the two images into strict row correspondence, so that any point in one image and its corresponding point in the other image share the same row number; then only a linear search along that row is needed to find the corresponding point, which improves processing efficiency.
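Assuming rectified, row-aligned images, the one-dimensional search described above can be sketched with a simple sum-of-absolute-differences cost; the patch size and disparity range are illustrative choices, not from this disclosure.

```python
import numpy as np

def match_along_row(left, right, y, x, half=2, max_disp=32):
    """Return the column in row y of `right` whose patch best matches
    the patch centred at (y, x) in `left`, by sum of absolute differences."""
    patch = left[y - half:y + half + 1, x - half:x + half + 1]
    best_x, best_cost = x, float("inf")
    for xr in range(max(half, x - max_disp), x + 1):   # the point can only shift left
        cand = right[y - half:y + half + 1, xr - half:xr + half + 1]
        cost = np.abs(patch - cand).sum()
        if cost < best_cost:
            best_cost, best_x = cost, xr
    return best_x

left, right = np.zeros((20, 40)), np.zeros((20, 40))
pattern = np.arange(25, dtype=float).reshape(5, 5) + 1.0
left[8:13, 18:23] = pattern     # feature centred at column 20 in the left image
right[8:13, 13:18] = pattern    # same feature 5 px to the left in the right image
print(match_along_row(left, right, y=10, x=20))   # 15, i.e. a disparity of 5
```

Without rectification the same search would have to scan a two-dimensional window, which is exactly the cost the row correspondence avoids.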
On the basis of the above embodiment, this embodiment relates to a camera parameter calibration process. That is, before the above step 102, the method further includes:
The terminal performs parameter calibration on the binocular camera, so that the left camera and the right camera of the binocular camera remain strictly parallel.
From the binocular camera imaging illustrated in Fig. 5, it can be seen that the imaging points of a physical-world point in the left and right images must be row-aligned to guarantee ranging accuracy. In practice, however, the left and right cameras cannot be made strictly parallel and forward-facing, and, owing to the manufacturing process, the lenses may exhibit distortion. Therefore, in this embodiment the binocular camera undergoes parameter calibration: mathematical computation brings the left camera and the right camera into strict forward-facing parallel alignment and eliminates lens distortion.
On the basis of the above embodiment, this embodiment relates to convolution processing and down-sampling processing. That is, before the above step 304, the method further includes:
The terminal uses the target detection depth network to perform convolution processing and down-sampling processing on the two-dimensional matrix corresponding to the image block to be determined.
As stated above, the target detection depth network includes convolutional layers, down-sampling layers, fully connected layers, and the like. After the two-dimensional matrix corresponding to an image to be detected is input into this network, the network outputs the feature vector of the image to be detected.
In the target detection depth network, the fully connected layers normally follow the convolutional layers and down-sampling layers. Therefore, after the two-dimensional matrix corresponding to the image block to be determined is input into the target detection depth network, and before the fully connected layer performs full connection processing, convolution processing and down-sampling processing are first applied to that two-dimensional matrix, yielding two-dimensional matrices whose size and pixel information are all within the expected range; these are then input into the fully connected layer, and the feature vector corresponding to the image block to be determined is obtained.
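A minimal numpy sketch of the convolution and down-sampling applied before the fully connected layers; the 3×3 box-filter kernel and 2×2 max-pooling are illustrative stand-ins for the network's actual layers, not parameters from this disclosure.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Plain 'valid' 2-D convolution (no padding, stride 1)."""
    kh, kw = kernel.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

def max_pool_2x2(img):
    """2x2 max-pooling: the down-sampling step."""
    h, w = img.shape[0] // 2, img.shape[1] // 2
    return img[:2 * h, :2 * w].reshape(h, 2, w, 2).max(axis=(1, 3))

block = np.arange(36, dtype=float).reshape(6, 6)           # 6x6 image-block matrix
feat = max_pool_2x2(conv2d_valid(block, np.ones((3, 3))))  # unnormalised box filter
print(feat.shape)   # (2, 2): the 6x6 block shrinks before reaching the FC layers
```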
Fig. 6 is a complete flowchart of a ranging method based on a binocular camera in a terminal, according to an exemplary embodiment. As shown in Fig. 6, the complete ranging procedure is:
In step 601, the terminal performs parameter calibration on the binocular camera.
In step 602, the terminal obtains, through the binocular camera, the first image and the second image corresponding to the target object.
In step 603, the terminal performs image rectification on the first image and the second image.
In step 604, the terminal determines the image block to be determined in the first image.
In step 605, the terminal inputs the two-dimensional matrix corresponding to the image block to be determined into the target detection depth network.
In step 606, the terminal uses the target detection depth network to perform convolution processing and down-sampling processing on the two-dimensional matrix corresponding to the image block to be determined.
In step 607, the terminal performs singular value decomposition on the parameter matrix of the first fully connected layer in the target detection depth network, obtaining the first parameter sub-matrix and the second parameter sub-matrix corresponding to the parameter matrix of the first fully connected layer.
In step 608, the terminal uses the first parameter sub-matrix and the second parameter sub-matrix to perform full connection processing on the two-dimensional matrix corresponding to the image block to be determined, obtaining the feature vector corresponding to the image block to be determined.
In step 609, the terminal determines the target object according to the feature vector corresponding to the image block to be determined.
In step 6010, the terminal determines the position information of the target object in the first image.
In step 6011, the terminal determines the position information of the target object in the second image according to the position information of the target object in the first image.
In step 6012, the terminal calculates the difference between the abscissa value of the target object in the first image and the abscissa value of the target object in the second image.
In step 6013, the terminal determines the depth value of the target object according to the difference, and uses the depth value of the target object as the distance between the terminal and the target object.
For the specific method of each step, refer to the previous embodiments; details are not repeated here.
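The flow above can be exercised end to end on synthetic data. In this sketch a bright rectangle stands in for the detected target object, a simple scan replaces the target detection depth network, and the focal length and baseline are illustrative assumptions, not values from this disclosure.

```python
import numpy as np

FOCAL_PX, BASELINE_M = 700.0, 0.12   # assumed calibration, not from the disclosure

def make_pair(x_left, disparity, size=(48, 64)):
    """Synthesise a rectified image pair containing one bright target."""
    left, right = np.zeros(size), np.zeros(size)
    left[20:25, x_left:x_left + 5] = 1.0
    right[20:25, x_left - disparity:x_left - disparity + 5] = 1.0
    return left, right

def locate(img):
    """Stand-in for steps 604-6011: abscissa of the detected target."""
    return np.nonzero(img)[1].min()

left, right = make_pair(x_left=40, disparity=14)  # steps 601-603 assumed done
d = locate(left) - locate(right)                  # step 6012: abscissa difference
depth = FOCAL_PX * BASELINE_M / d                 # step 6013: depth = f * B / d
print(d, round(depth, 2))   # 14 6.0
```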
The following are device embodiments of the disclosure, which may be used to perform the method embodiments of the disclosure. For details not disclosed in the device embodiments, refer to the method embodiments of the disclosure.
Fig. 7 is a block diagram of a terminal according to an exemplary embodiment. As shown in Fig. 7, the terminal includes a binocular camera, and the terminal includes:
an acquisition module 701, configured to obtain, through the binocular camera, the first image and the second image corresponding to the target object; and
a determination module 702, configured to determine the distance between the terminal and the target object according to the first image and the second image.
Fig. 8 is a block diagram of a terminal according to an exemplary embodiment. As shown in Fig. 8, the determination module 702 includes:
a detection sub-module 7021, configured to detect the target object in the first image;
a first determination sub-module 7022, configured to determine the position information of the target object in the first image according to the detection result;
a second determination sub-module 7023, configured to determine the position information of the target object in the second image according to the position information of the target object in the first image; and
a third determination sub-module 7024, configured to determine the distance between the terminal and the target object according to the position information of the target object in the first image and the position information of the target object in the second image.
In another embodiment, the detection sub-module 7021 is configured to:
detect the target object in the first image using a target detection depth network, where the target detection depth network includes at least one fully connected layer.
In another embodiment, the detection sub-module 7021 is specifically configured to:
determine the image block to be determined in the first image; input the two-dimensional matrix corresponding to the image block to be determined into the target detection depth network; perform singular value decomposition on the parameter matrix of the first fully connected layer in the target detection depth network to obtain the first parameter sub-matrix and the second parameter sub-matrix corresponding to the parameter matrix of the first fully connected layer; use the first parameter sub-matrix and the second parameter sub-matrix to perform full connection processing on the two-dimensional matrix corresponding to the image block to be determined, obtaining the feature vector corresponding to the image block to be determined, where an element in the feature vector corresponding to the image block to be determined represents the probability that the image block to be determined is the target object; and determine the target object according to the feature vector corresponding to the image block to be determined.
In another embodiment, the third determination sub-module 7024 is specifically configured to:
calculate the difference between the abscissa value of the target object in the first image and the abscissa value of the target object in the second image; determine the depth value of the target object according to the difference; and use the depth value of the target object as the distance between the terminal and the target object.
Fig. 9 is a block diagram of a terminal according to an exemplary embodiment. As shown in Fig. 9, on the basis of Fig. 7, the above terminal further includes:
a rectification module 703, configured to perform image rectification on the first image and the second image, so that a first pixel in the first image and the corresponding second pixel in the second image have the same vertical coordinate.
Fig. 10 is a block diagram of a terminal according to an exemplary embodiment. As shown in Fig. 10, on the basis of Fig. 9, the above terminal further includes:
a processing module 704, configured to perform parameter calibration on the binocular camera, so that the left camera and the right camera of the binocular camera remain strictly parallel.
In another embodiment, the detection sub-module 7021 is further specifically configured to:
use the target detection depth network to perform convolution processing and down-sampling processing on the two-dimensional matrix corresponding to the image block to be determined.
Regarding the devices in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the related methods and will not be elaborated here.
Fig. 11 is a block diagram of the physical structure of a terminal according to an exemplary embodiment. As shown in Fig. 11, the terminal includes:
a memory 91 and a processor 92.
The memory 91 stores instructions executable by the processor 92.
The processor 92 is configured to:
obtain, through the binocular camera, the first image and the second image corresponding to the target object; and
determine the distance between the terminal and the target object according to the first image and the second image.
In the above terminal embodiment, it should be understood that the processor 92 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The aforementioned memory may be a read-only memory (ROM), a random access memory (RAM), flash memory, a hard disk, or a solid-state drive. A SIM card, also called a subscriber identity card or smart card, must be inserted into a digital mobile phone before the phone can be used; the chip stores the digital mobile phone subscriber's information, such as encryption keys and the user's phone book. The steps of the methods disclosed in the embodiments of the disclosure may be embodied as being executed directly by a hardware processor, or executed by a combination of hardware and software modules in the processor.
Fig. 12 is a block diagram of a terminal 1300 according to an exemplary embodiment. The terminal 1300 may be a mobile phone, a computer, a tablet device, a personal digital assistant, or the like.
Referring to Fig. 12, the terminal 1300 may include one or more of the following components: a processing component 1302, a memory 1304, a power component 1306, a multimedia component 1308, an audio component 1310, an input/output (I/O) interface 1312, a sensor component 1314, and a communication component 1316.
The processing component 1302 generally controls the overall operation of the terminal 1300, such as operations associated with display, phone calls, data communication, camera operation, and recording. The processing component 1302 may include one or more processors 1320 to execute instructions to complete all or part of the steps of the above methods. In addition, the processing component 1302 may include one or more modules to facilitate interaction between the processing component 1302 and other components. For example, the processing component 1302 may include a multimedia module to facilitate interaction between the multimedia component 1308 and the processing component 1302.
The memory 1304 is configured to store various types of data to support operation on the terminal 1300. Examples of such data include instructions for any application or method operating on the terminal 1300, contact data, phone book data, messages, pictures, videos, and so on. The memory 1304 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.
The power component 1306 provides power to the various components of the terminal 1300. The power component 1306 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the terminal 1300.
The multimedia component 1308 includes a touch display screen providing an output interface between the terminal 1300 and the user. In some embodiments, the touch display screen may include a liquid crystal display (LCD) and a touch panel (TP). The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe. In some embodiments, the multimedia component 1308 includes a front camera and/or a rear camera. When the terminal 1300 is in an operating mode, such as a photographing mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focus and optical zoom capability.
The audio component 1310 is configured to output and/or input audio signals. For example, the audio component 1310 includes a microphone (MIC), which is configured to receive external audio signals when the terminal 1300 is in an operating mode such as a call mode, a recording mode, or a voice recognition mode. The received audio signals may be further stored in the memory 1304 or transmitted via the communication component 1316. In some embodiments, the audio component 1310 also includes a speaker for outputting audio signals.
The I/O interface 1312 provides an interface between the processing component 1302 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to, a home button, volume buttons, a start button, and a lock button.
The sensor component 1314 includes one or more sensors for providing status assessments of various aspects of the terminal 1300. For example, the sensor component 1314 can detect the open/closed state of the terminal 1300 and the relative positioning of components, such as the display and keypad of the terminal 1300; the sensor component 1314 can also detect a change in position of the terminal 1300 or a component of the terminal 1300, the presence or absence of user contact with the terminal 1300, the orientation or acceleration/deceleration of the terminal 1300, and a change in temperature of the terminal 1300. The sensor component 1314 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 1314 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 1314 may also include an accelerometer, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1316 is configured to facilitate wired or wireless communication between the terminal 1300 and other devices. The terminal 1300 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 1316 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 1316 also includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the terminal 1300 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above ranging method based on a binocular camera in a terminal.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, such as the memory 1304 including instructions, which are executable by the processor 1320 of the terminal 1300 to perform the above methods. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
A non-transitory computer-readable storage medium is provided such that, when the instructions in the storage medium are executed by the processor of the terminal 1300, the terminal 1300 can perform a ranging method based on a binocular camera in a terminal. The method includes:
the terminal obtains, through the binocular camera, the first image and the second image corresponding to the target object; and
the terminal determines the distance between the terminal and the target object according to the first image and the second image.
Other embodiments of the disclosure will readily occur to those skilled in the art after considering the specification and practicing the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure that follow the general principles of the disclosure and include common knowledge or conventional technical means in the art not disclosed by the disclosure. The specification and embodiments are to be considered exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It should be understood that the disclosure is not limited to the precise structure described above and illustrated in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the disclosure is limited only by the appended claims.
Claims (17)
1. A ranging method based on a binocular camera in a terminal, characterized by comprising:
obtaining, by a terminal through the binocular camera, a first image and a second image corresponding to a target object; and
determining, by the terminal according to the first image and the second image, a distance between the terminal and the target object.
2. The method according to claim 1, characterized in that the determining, by the terminal according to the first image and the second image, the distance between the terminal and the target object comprises:
detecting, by the terminal, the target object in the first image;
determining, by the terminal according to a detection result, position information of the target object in the first image;
determining, by the terminal according to the position information of the target object in the first image, position information of the target object in the second image; and
determining, by the terminal according to the position information of the target object in the first image and the position information of the target object in the second image, the distance between the terminal and the target object.
3. The method according to claim 2, characterized in that the detecting, by the terminal, the target object in the first image comprises:
detecting, by the terminal, the target object in the first image using a target detection depth network, wherein the target detection depth network comprises at least one fully connected layer.
4. The method according to claim 3, characterized in that the detecting, by the terminal, the target object in the first image using the target detection depth network comprises:
determining, by the terminal, an image block to be determined in the first image;
inputting, by the terminal, a two-dimensional matrix corresponding to the image block to be determined into the target detection depth network;
performing, by the terminal, singular value decomposition on a parameter matrix of a first fully connected layer in the target detection depth network, to obtain a first parameter sub-matrix and a second parameter sub-matrix corresponding to the parameter matrix of the first fully connected layer;
performing, by the terminal, full connection processing on the two-dimensional matrix corresponding to the image block to be determined using the first parameter sub-matrix and the second parameter sub-matrix, to obtain a feature vector corresponding to the image block to be determined, wherein an element in the feature vector corresponding to the image block to be determined represents a probability that the image block to be determined is the target object; and
determining, by the terminal, the target object according to the feature vector corresponding to the image block to be determined.
5. The method according to claim 2, characterized in that the determining, by the terminal according to the position information of the target object in the first image and the position information of the target object in the second image, the distance between the terminal and the target object comprises:
calculating, by the terminal, a difference between an abscissa value of the target object in the first image and an abscissa value of the target object in the second image; and
determining, by the terminal according to the difference, a depth value of the target object, and using the depth value of the target object as the distance between the terminal and the target object.
6. The method according to any one of claims 1-5, characterized in that, before the determining, by the terminal according to the first image and the second image, the distance between the terminal and the target object, the method further comprises:
performing, by the terminal, image rectification on the first image and the second image, so that a first pixel in the first image and a second pixel in the second image corresponding to the first pixel have the same vertical coordinate.
7. The method according to any one of claims 1-5, characterized in that, before the determining, by the terminal according to the first image and the second image, the distance between the terminal and the target object, the method further comprises:
performing, by the terminal, parameter calibration on the binocular camera, so that a left camera and a right camera corresponding to the binocular camera remain strictly parallel.
8. The method according to claim 4, characterized in that, before the performing, by the terminal, full connection processing on the two-dimensional matrix corresponding to the image block to be determined using the first parameter sub-matrix and the second parameter sub-matrix to obtain the feature vector corresponding to the image block to be determined, the method further comprises:
performing, by the terminal using the target detection depth network, convolution processing and down-sampling processing on the two-dimensional matrix corresponding to the image block to be determined.
9. A terminal, characterized in that the terminal comprises a binocular camera, and the terminal comprises:
an acquisition module, configured to obtain, through the binocular camera, a first image and a second image corresponding to a target object; and
a determination module, configured to determine, according to the first image and the second image, a distance between the terminal and the target object.
10. The terminal according to claim 9, characterized in that the determination module comprises:
a detection sub-module, configured to detect the target object in the first image;
a first determination sub-module, configured to determine, according to a detection result, position information of the target object in the first image;
a second determination sub-module, configured to determine, according to the position information of the target object in the first image, position information of the target object in the second image; and
a third determination sub-module, configured to determine, according to the position information of the target object in the first image and the position information of the target object in the second image, the distance between the terminal and the target object.
11. The terminal according to claim 10, characterized in that the detection sub-module is configured to:
detect the target object in the first image using a target detection depth network, wherein the target detection depth network comprises at least one fully connected layer.
12. The terminal according to claim 11, characterized in that the detection sub-module is specifically configured to:
determine an image block to be determined in the first image; input a two-dimensional matrix corresponding to the image block to be determined into the target detection depth network; perform singular value decomposition on a parameter matrix of a first fully connected layer in the target detection depth network to obtain a first parameter sub-matrix and a second parameter sub-matrix corresponding to the parameter matrix of the first fully connected layer; perform full connection processing on the two-dimensional matrix corresponding to the image block to be determined using the first parameter sub-matrix and the second parameter sub-matrix to obtain a feature vector corresponding to the image block to be determined, wherein an element in the feature vector corresponding to the image block to be determined represents a probability that the image block to be determined is the target object; and determine the target object according to the feature vector corresponding to the image block to be determined.
13. The terminal according to claim 10, characterized in that the third determination sub-module is specifically configured to:
calculate a difference between an abscissa value of the target object in the first image and an abscissa value of the target object in the second image; determine, according to the difference, a depth value of the target object; and use the depth value of the target object as the distance between the terminal and the target object.
14. The terminal according to any one of claims 9-13, characterized by further comprising:
a rectification module, configured to perform image rectification on the first image and the second image, so that a first pixel in the first image and a second pixel in the second image corresponding to the first pixel have the same vertical coordinate.
15. The terminal according to any one of claims 9-13, characterized by further comprising:
a processing module, configured to perform parameter calibration on the binocular camera, so that a left camera and a right camera corresponding to the binocular camera remain strictly parallel.
16. The terminal according to claim 12, characterized in that the detection sub-module is further specifically configured to:
use the target detection depth network to perform convolution processing and down-sampling processing on the two-dimensional matrix corresponding to the image block to be determined.
17. A terminal, comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
obtain, through the binocular camera, a first image and a second image corresponding to a target object; and
determine the distance between the terminal and the target object according to the first image and the second image.
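The second step of claim 17 requires locating the target object in both images so the disparity can be measured. The patent does not prescribe a matching algorithm; a toy sketch of one common approach, block matching along a rectified scanline (everything here, including the SAD cost, is an illustrative assumption):

```python
def find_disparity(row_left, row_right, patch, x_left, max_disp):
    """Slide a small patch from the left scanline over the right
    scanline and return the disparity with the smallest sum of
    absolute differences (SAD). After the rectification of claim 14,
    corresponding pixels lie on the same row, so a 1-D search along
    the scanline is sufficient."""
    half = patch // 2
    template = row_left[x_left - half : x_left + half + 1]
    best_d, best_cost = 0, float("inf")
    for d in range(max_disp + 1):
        xr = x_left - d
        if xr - half < 0:
            break
        window = row_right[xr - half : xr + half + 1]
        cost = sum(abs(a - b) for a, b in zip(template, window))
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

The returned disparity is the abscissa difference of claim 13; combining it with the calibrated focal length and baseline (Z = f·B/d) yields the terminal-to-object distance.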
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610515002.3A CN106225764A (en) | 2016-07-01 | 2016-07-01 | Based on the distance-finding method of binocular camera in terminal and terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106225764A true CN106225764A (en) | 2016-12-14 |
Family
ID=57519024
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610515002.3A Pending CN106225764A (en) | 2016-07-01 | 2016-07-01 | Based on the distance-finding method of binocular camera in terminal and terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106225764A (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070053584A1 (en) * | 2001-11-09 | 2007-03-08 | Honda Giken Kogyo Kabushiki Kaisha | Image recognition apparatus |
CN102069770A (en) * | 2010-12-16 | 2011-05-25 | 福州名品电子科技有限公司 | Automobile active safety control system based on binocular stereo vision and control method thereof |
CN102490673A (en) * | 2011-12-13 | 2012-06-13 | 中科院微电子研究所昆山分所 | Vehicle active safety control system based on internet of vehicles and control method of vehicle active safety control system |
CN103148837A (en) * | 2012-11-16 | 2013-06-12 | Tcl集团股份有限公司 | Method and apparatus for measuring vehicle distance and automobile |
EP2752348A1 (en) * | 2013-01-04 | 2014-07-09 | Continental Automotive Systems, Inc. | Adaptive emergency brake and steer assist system based on driver focus |
CN104021388A (en) * | 2014-05-14 | 2014-09-03 | 西安理工大学 | Reversing obstacle automatic detection and early warning method based on binocular vision |
CN105551047A (en) * | 2015-12-21 | 2016-05-04 | 小米科技有限责任公司 | Picture content detecting method and device |
CN105651258A (en) * | 2015-12-30 | 2016-06-08 | 杨正林 | Initiative-view-angle binocular vision ranging system and initiative-view-angle binocular vision ranging method |
CN105716568A (en) * | 2016-01-28 | 2016-06-29 | 武汉光庭信息技术股份有限公司 | Binocular camera ranging method in automatic pilot system |
Non-Patent Citations (1)
Title |
---|
ROSS GIRSHICK: "Fast R-CNN", IEEE International Conference on Computer Vision |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107622510A (en) * | 2017-08-25 | 2018-01-23 | 维沃移动通信有限公司 | A kind of information processing method and device |
US11275239B2 (en) | 2017-11-03 | 2022-03-15 | Tencent Technology (Shenzhen) Company Limited | Method and apparatus for operating control system, storage medium, and electronic apparatus |
CN109752951A (en) * | 2017-11-03 | 2019-05-14 | 腾讯科技(深圳)有限公司 | Processing method, device, storage medium and the electronic device of control system |
CN109752951B (en) * | 2017-11-03 | 2022-02-08 | 腾讯科技(深圳)有限公司 | Control system processing method and device, storage medium and electronic device |
CN107944390A (en) * | 2017-11-24 | 2018-04-20 | 西安科技大学 | Motor-driven vehicle going objects in front video ranging and direction localization method |
CN108108667A (en) * | 2017-12-01 | 2018-06-01 | 大连理工大学 | A kind of front vehicles fast ranging method based on narrow baseline binocular vision |
WO2019137535A1 (en) * | 2018-01-15 | 2019-07-18 | 维沃移动通信有限公司 | Object distance measurement method and terminal device |
CN108733208A (en) * | 2018-03-21 | 2018-11-02 | 北京猎户星空科技有限公司 | The I-goal of smart machine determines method and apparatus |
WO2020029917A1 (en) * | 2018-08-06 | 2020-02-13 | 北京旷视科技有限公司 | Image processing method and apparatus, and image processing device |
US11461908B2 (en) | 2018-08-06 | 2022-10-04 | Beijing Kuangshi Technology Co., Ltd. | Image processing method and apparatus, and image processing device using infrared binocular cameras to obtain three-dimensional data |
WO2020061794A1 (en) * | 2018-09-26 | 2020-04-02 | 深圳市大疆创新科技有限公司 | Vehicle driver assistance device, vehicle and information processing method |
CN109300154A (en) * | 2018-11-27 | 2019-02-01 | 郑州云海信息技术有限公司 | A kind of distance measuring method and device based on binocular solid |
CN109829401A (en) * | 2019-01-21 | 2019-05-31 | 深圳市能信安科技股份有限公司 | Traffic sign recognition method and device based on double capture apparatus |
CN109920008A (en) * | 2019-02-20 | 2019-06-21 | 北京中科慧眼科技有限公司 | Modification method, device and the automated driving system of self-calibration range error |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106225764A (en) | Based on the distance-finding method of binocular camera in terminal and terminal | |
CN110688951B (en) | Image processing method and device, electronic equipment and storage medium | |
CN109614876B (en) | Key point detection method and device, electronic equipment and storage medium | |
CN106651955B (en) | Method and device for positioning target object in picture | |
CN108764069B (en) | Living body detection method and device | |
CN110674719B (en) | Target object matching method and device, electronic equipment and storage medium | |
US11288531B2 (en) | Image processing method and apparatus, electronic device, and storage medium | |
CN105809704A (en) | Method and device for identifying image definition | |
CN105631403A (en) | Method and device for human face recognition | |
CN105469056A (en) | Face image processing method and device | |
CN105260732A (en) | Image processing method and device | |
CN107944367B (en) | Face key point detection method and device | |
CN106682736A (en) | Image identification method and apparatus | |
CN111666917A (en) | Attitude detection and video processing method and device, electronic equipment and storage medium | |
CN106557759B (en) | Signpost information acquisition method and device | |
CN105069426A (en) | Similar picture determining method and apparatus | |
CN106339695A (en) | Face similarity detection method, device and terminal | |
CN111435422B (en) | Action recognition method, control method and device, electronic equipment and storage medium | |
CN104077585A (en) | Image correction method and device and terminal | |
CN108171222B (en) | Real-time video classification method and device based on multi-stream neural network | |
CN110930351A (en) | Light spot detection method and device and electronic equipment | |
CN114581525A (en) | Attitude determination method and apparatus, electronic device, and storage medium | |
US10846513B2 (en) | Method, device and storage medium for processing picture | |
CN106297408A (en) | Information cuing method and device | |
CN105812530A (en) | Method and device for adding contact means by using fingerprint information |
Legal Events
Date | Code | Title | Description
---|---|---|---
| C06 | Publication | |
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20161214 |