Embodiments
In recent years, with the development of the social economy, the number of automobiles has grown day by day, and road capacity can no longer keep up with the rapid growth of traffic volume. Urban traffic congestion and blockage in particular have become severe, causing road traffic accidents to increase. Unmanned driving technology and driver-assistance technology for automobiles are effective ways to improve vehicle safety and can effectively alleviate these problems. In addition, with the development of unmanned aerial vehicle technology, drones are widely applied in industries such as policing, city management, video capture, electric power, gas, and rescue and relief work; the study of aircraft navigation has therefore received increasing attention. Furthermore, blind people, as a disadvantaged group, need the help of society to improve their ability to live independently and to enjoy a better quality of life, and assisting the blind in walking is a very important part of that help.
Regardless of the technical field, whether unmanned driving, assisted driving, unmanned aerial vehicle navigation, or guidance for the blind, the detection of obstacles is a very important part, and the mainstream obstacle detection method at present is detection based on the disparity map of a binocular camera.
Therefore, the embodiments of the present application provide an obstacle detection method for the above application scenarios, so as to obtain a high-precision disparity map. Specifically, as shown in Fig. 1, the general principle of the technical scheme provided by the embodiments of the present application is as follows: obtain the left view and the right view collected by the binocular camera at the same moment; match the low-resolution first match windows of the left view and the right view; based on the disparity map obtained after this matching, obtain, from the first matching regions of the left view and the right view, the target first matching regions in which an obstacle is present; then match the higher-resolution second match windows within the target first matching regions of the left view and the right view to obtain a first disparity map; and then repeatedly perform matching calculation with match windows of ever higher resolution, so that the contour of the obstacle in the resulting disparity map becomes more accurate.
Some terms involved in the present application are explained below to help the reader understand:
" binocular camera ", it is to place the camera being combined at a certain distance by two cameras with identical parameters,
In general, the left camera in binocular camera are generally arranged in same horizontal line with right camera, caused with reaching left camera and
Right camera optical axis is parallel so that binocular camera can be used for simulating human eye and causing differential seat angle, with this come reach three-dimensional imaging or
Person detects the effect of the depth of field.
" parallax ", refer to the direction difference caused by same target from two points that a determining deviation be present, from mesh
The angle seen between two points is marked, is called the parallactic angle of the two points, the distance between 2 points are referred to as baseline.
" parallax value ", when referring to that left camera and right camera are shot to same target in binocular camera, obtain two
The parallax value of the difference, the as pixel of two abscissas of same pixel, corresponding, this two width figure are directed in width image
The parallax value of all pixels point forms disparity map as in.
The correspondence between disparity and depth can be understood with reference to the schematic diagrams shown in Fig. 2a and Fig. 2b. Let O_L be the position of the left camera, O_R the position of the right camera, f the focal length of the left and right camera lenses, and B the baseline distance, equal to the distance between the projection centers of the left camera and the right camera. Specifically:

Assume that the left and right cameras observe the same feature point P(x_c, y_c, z_c) of a space object at the same moment, where z_c can generally be regarded as the depth of the feature point, that is, the distance between the feature point and the plane in which the left and right cameras lie. Images of the feature point P are obtained on the "left eye" and the "right eye" respectively; that is, the projections of P on the left and right cameras are P_L(x_L, y_L) and P_R(x_R, y_R). If the images of the left and right cameras lie in the same plane, the Y-coordinates of the image points P_L and P_R are identical to that of the feature point P, and from the similar-triangle geometry:

z_c = f·B / (x_L − x_R).
Since the disparity Disparity = x_L − x_R, the three-dimensional coordinates of the feature point P in the camera coordinate system can thus be calculated as:

x_c = B·x_L / Disparity,  y_c = B·y_L / Disparity,  z_c = B·f / Disparity.
Therefore, it can be seen from the above formula that, because the baseline distance B and the focal length f of a binocular camera are fixed, the disparity value and the depth are inversely related.
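The inverse relation z_c = f·B/Disparity can be sketched as follows. The focal length and baseline values here are illustrative placeholders, not parameters of any camera described in the application:

```python
# Minimal sketch of the inverse disparity-depth relation Z = f * B / d.
# f is the focal length in pixels, B the baseline in meters; both values
# below are illustrative assumptions.
def depth_from_disparity(d, f=700.0, B=0.12):
    """Return the depth Z in meters for a disparity d in pixels."""
    if d <= 0:
        raise ValueError("disparity must be positive")
    return f * B / d

# Doubling the disparity halves the depth: the relation is inverse.
near = depth_from_disparity(40.0)   # larger disparity -> closer point
far = depth_from_disparity(20.0)    # smaller disparity -> farther point
```

Because f and B are fixed after calibration, a single division converts each disparity value of the disparity map into a depth.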
In the present application, the disparity values in a disparity map can generally be mapped to the range [0, 255], with 0 set to the nearest distance and 255 set to the farthest distance.
It should be noted that, because the binocular camera in the present application simulates human eyes to acquire images, the left camera and the right camera of the binocular camera in the present application are set on the same horizontal line, with parallel optical axes and a certain spacing; therefore, the parallax in the present application mainly refers to horizontal parallax.
" camera calibration ", for determine the three-dimensional geometry position of space object surface point and its in the picture between corresponding points
Correlation, it is necessary to establish the geometrical model of camera imaging, these geometrical model parameters are exactly camera parameter.Mostly several
These parameters must can just be obtained by experiment with calculating under part, and this process for solving parameter is just referred to as camera calibration mistake
Journey.
Camera calibration in the present application generally refers to off-line calibration of the camera. Under normal circumstances, because the optical axes of a binocular camera lie inside the cameras, it is difficult to guarantee that the optical axes are perfectly parallel when the cameras are assembled, and a certain deviation generally exists. Therefore, an assembled binocular camera is usually calibrated off-line to obtain the intrinsic parameters of the cameras (focal length, baseline length, image center, distortion parameters, etc.) and the extrinsic parameters (rotation matrix R and translation matrix T).
In one example, the lenses of the binocular camera can be calibrated off-line using Zhang Zhengyou's checkerboard calibration method.
Specifically, when calibrating the cameras off-line, the left camera is calibrated first to obtain its intrinsic and extrinsic parameters; next, the right camera is calibrated to obtain its intrinsic and extrinsic parameters; finally, the binocular camera as a whole is calibrated to obtain the rotation and translation relation between the left and right cameras.
Assume any point W = [X, Y, Z]^T in the world coordinate system, whose corresponding point on the image plane is m = [u, v]^T. The projection relation between the object point and the image point is:

[u, v, 1]^T = P [X, Y, Z, 1]^T (formula seven);

where P is a 3 × 4 projection matrix, which can be expressed in terms of the rotation and translation matrices:

P = A [R t] (formula eight);
where R is a 3 × 3 rotation matrix and t is a translation vector. These two matrices express the external parameters of the binocular vision system, one expressing position and the other expressing orientation, from which the position in the world coordinate system of each pixel of the image can be determined. The matrix A expresses the camera's intrinsic parameter matrix and can be written as:

A = | f_u  β   u_0 |
    | 0    f_v  v_0 |
    | 0    0    1  |

where (u_0, v_0) are the coordinates of the image center, f_u and f_v express the focal length in horizontal and vertical pixel units respectively, and β expresses the skew factor.
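The projection of formula seven can be illustrated numerically. The intrinsic values (f_u = f_v = 700, image center (320, 240)), the identity rotation, and the zero translation below are invented for illustration and do not come from any calibrated camera in the application:

```python
import numpy as np

# Sketch of the pinhole projection [u, v, 1]^T ~ A [R | t] [X, Y, Z, 1]^T.
def project(world_point, A, R, t):
    """Project a 3-D world point to pixel coordinates (u, v)."""
    Rt = np.hstack([R, t.reshape(3, 1)])   # 3x4 matrix [R | t]
    P = A @ Rt                             # 3x4 projection matrix, formula eight
    W = np.append(world_point, 1.0)        # homogeneous world point
    uvw = P @ W
    return uvw[:2] / uvw[2]                # divide by the homogeneous coordinate

# Illustrative intrinsic matrix A with zero skew (beta = 0).
A = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)        # identity rotation
t = np.zeros(3)      # zero translation

uv_axis = project(np.array([0.0, 0.0, 2.0]), A, R, t)   # point on the optical axis
uv_off = project(np.array([0.1, 0.0, 2.0]), A, R, t)    # point 0.1 m to the side
```

A point on the optical axis projects to the image center; an offset point shifts by f·X/Z pixels.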
Some of the parameters obtained in the above off-line calibration are used in image rectification and in the obstacle calculation process.
" image rectification ", therefore, generally can be double because the image that lens distortion can cause camera lens to gather is distorted
Distortion correction is carried out to binocular camera before lens camera collection image and polar curve corrects.Assuming that the benchmark image without distortion
For f (x, y), there is the image of larger geometric distortion for g (x ', y '), the set distortion between two images coordinate system can be with table
It is shown as:Above-mentioned formula is represented with binary polynomial:
Wherein, n is polynomial coefficient, and i and j represent the particular location of pixel in the picture, aijAnd bijFor each term system
Number.The image of distortion correction has been obtained by above formula.
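Evaluating the binary polynomial for one pixel can be sketched as follows. The coefficient matrices here are illustrative (chosen so the mapping is the identity); in practice they would be fitted from control-point pairs between the distorted and reference images:

```python
import numpy as np

# Sketch of the binary-polynomial mapping
#   x' = sum_ij a_ij * x**i * y**j,  y' = sum_ij b_ij * x**i * y**j.
def warp_coords(x, y, a, b):
    """Map a pixel (x, y) with coefficient matrices a and b."""
    n = a.shape[0]
    powers = np.array([[x**i * y**j for j in range(n)] for i in range(n)])
    return float(np.sum(a * powers)), float(np.sum(b * powers))

# Identity mapping: a picks out the x term (i=1, j=0), b the y term (i=0, j=1).
a = np.zeros((2, 2)); a[1, 0] = 1.0
b = np.zeros((2, 2)); b[0, 1] = 1.0
xp, yp = warp_coords(3.0, 4.0, a, b)
```

Nonzero higher-order coefficients would bend the coordinate grid, which is exactly how geometric distortion is modeled.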
For the epipolar rectification of the images: the rotation and translation matrices of the left and right cameras are obtained in off-line calibration; assume the rotation and translation matrices of the left camera are R1 and t1, and those of the right camera are R2 and t2. Based on the rotation and translation matrices of the left and right cameras, Bouguet's epipolar rectification method is applied so that the corresponding epipolar lines of the left and right camera images become parallel. This greatly reduces the time complexity of stereo matching and simplifies the disparity calculation process.
The terms "and/or", only a kind of incidence relation for describing affiliated partner, expression may have three kinds of passes
System, for example, A and/or B, can be represented:Individualism A, while A and B be present, these three situations of individualism B.In addition, herein
Middle character "/", it is a kind of relation of "or" to typically represent forward-backward correlation object.If being not added with illustrating, " multiple " herein are
Refer to two or more.
It should be noted that, in the embodiments of the present application, words such as "exemplary" or "for example" are used to indicate an example, illustration, or explanation. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present application should not be interpreted as being preferred over, or more advantageous than, other embodiments or designs. Rather, the use of the words "exemplary" or "for example" is intended to present the related concept in a concrete fashion.
It should be noted that, in the embodiments of the present application, unless otherwise indicated, "multiple" means two or more.
It should be noted that, in the embodiments of the present application, the words "of" (English: of), "corresponding" (English: corresponding, relevant), and "corresponding" (English: corresponding) may sometimes be used interchangeably; when their difference is not emphasized, the meanings they express are consistent.
The technical schemes provided by the embodiments of the present application are described below with reference to the accompanying drawings of the embodiments of the present application. Obviously, the embodiments described are only some, rather than all, of the embodiments of the present application. It should be noted that some or all of the technical features in any of the technical schemes provided below can be combined, so long as they do not conflict, to form new technical schemes.
The execution subject of the obstacle detection method provided by the embodiments of the present application may be an obstacle detection apparatus based on a binocular camera, or an electronic device that can be used to execute the above obstacle detection method. The obstacle detection apparatus based on a binocular camera may be the central processing unit (Central Processing Unit, CPU) of the above electronic device, a combination of hardware such as a CPU and a memory, or another control unit or module of the above terminal device.
Exemplarily, the above electronic device may be a personal computer (personal computer, PC), netbook, personal digital assistant (English: Personal Digital Assistant, abbreviated: PDA), server, or the like that analyzes the left and right views collected by the binocular camera using the method provided by the embodiments of the present application; or the above electronic device may be a PC, server, or the like on which is installed a software client, software system, or software application that can process the left and right views collected by the binocular camera using the method provided by the embodiments of the present application. The specific hardware implementation environment may take the form of a general-purpose computer, an ASIC, an FPGA, or a programmable extension platform such as the Tensilica Xtensa platform. For example, the above electronic device may be integrated into equipment or instruments that need to detect obstacles, such as unmanned aerial vehicles, blind-navigation devices, autonomous vehicles, intelligent vehicles, and smartphones.
Based on the above, the embodiments of the present application provide an obstacle detection method based on a binocular camera. As shown in Fig. 3, the method comprises the following steps:
S101: From the first matching regions of the left view and the first matching regions of the right view of a predetermined scene, respectively obtain the target first matching regions in which an obstacle is present.
Exemplarily, before controlling the binocular camera to collect views, that is, before performing obstacle detection in the scene, it is generally necessary to adjust the binocular camera in advance (for example, camera adjustment operations such as off-line calibration and image rectification) to ensure that the optical axes of the left camera and the right camera are parallel. Then, the baseline length between the optical axes of the left and right cameras is measured, the focal length of the binocular camera is recorded, and it is ensured that the baseline length and the focal length will not change, so as to guarantee the synchronism of the images collected by the binocular camera and avoid unnecessary errors.
In one example, when step S101 is performed, the two views may be divided into matching regions according to the window size of the first matching region and the resolution of the binocular camera, obtaining the first matching regions of the two views; here, the above left view and right view are each composed of multiple mutually non-overlapping first matching regions of identical size. For example, assume that the resolution W of the left view and the right view collected by the binocular camera is 600 × 600 pixels and the preset window size of the first matching region is 30 × 30; then, as shown in Fig. 4, the left view 21 and the right view 22 in Fig. 4 each have 20 mutually non-overlapping first matching regions in both the horizontal and the vertical direction.
In another example, when step S101b is performed, the two views are divided into matching regions according to the window size of the first matching region, the horizontal offset between two mutually overlapping matching regions, and the resolution of the binocular camera, obtaining the first matching regions of the two views; here, the above left view and right view are each composed of multiple mutually overlapping first matching regions of identical size. For example, assume that the resolution W of the left view and the right view collected by the binocular camera is 600 × 600 pixels. With reference to the division schematic diagram of the first matching regions in the left view and the right view shown in Fig. 5, if a 30 × 30 region is selected in the left view, then N regions of 30 × 30 are likewise selected in the right view for matching; for example, the horizontal offset of the first region is L, the horizontal offset of the second region is 2L, and so on. If L = 15, then N = 40, that is, there are 40 mutually overlapping first matching regions in the horizontal direction. Because the number of matching regions divided with mutual overlap is larger than the number divided without overlap, the precision of the resulting disparity map is correspondingly higher.
It should be noted that the above first matching region is a low-resolution match window. In general, if the window size of the first matching region is s and the view size is W, it must be ensured that W/s is an integer, so that every first matching region in the two images has the same size and matching is convenient.
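The non-overlapping division above can be sketched as follows; the function name and the rectangle representation are illustrative, and the 600 × 600 view with 30 × 30 windows reproduces the example (20 regions per row and per column):

```python
# Sketch of the non-overlapping first-matching-region division: a view of
# width x height pixels is split into s x s windows, which requires the
# view size to be divisible by s (the W/s integer condition above).
def divide_regions(width, height, s):
    """Return (x, y, w, h) rectangles covering the view without overlap."""
    if width % s or height % s:
        raise ValueError("view size must be divisible by the window size")
    return [(x, y, s, s)
            for y in range(0, height, s)
            for x in range(0, width, s)]

regions = divide_regions(600, 600, 30)   # 20 * 20 = 400 first matching regions
```

The overlapping variant would instead step the x origin by the horizontal offset L rather than by the full window size s.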
S102: Match the second matching regions in the target first matching regions of the left view with the second matching regions in the target first matching regions of the right view to obtain a first disparity map.
Exemplarily, in the present application, the process of matching the second matching regions in the target first matching regions of the left view with the second matching regions in the target first matching regions of the right view consists of performing matching cost calculation between the second matching regions in the target first matching regions of the left view and those of the right view, calculating the disparity value corresponding to the same second matching region in the left view and the right view, and obtaining the first disparity map.
The size of the second matching region in the embodiments of the present application is smaller than the size of the first matching region. The above first disparity map may be the disparity map of the images corresponding to the second matching regions in the target first matching regions of the left view and the right view; or the disparity map of the images corresponding to the target first matching regions of the left view and the right view; or the disparity map formed by combining the latter with the disparity maps of the images corresponding to the other first matching regions of the left view and the right view, that is, those other than the target first matching regions.
S103: Determine the position information of the obstacle in the first disparity map.
Exemplarily, when step S103 is performed, the precise region where the obstacle is located can be segmented according to a set obstacle threshold H, and the true bearing of the obstacle can be calculated according to the intrinsic and extrinsic parameters of the binocular camera. In one example, the present application may perform contour detection on the regions of the first disparity map whose disparity values exceed the predetermined obstacle threshold, obtain the contour information of the obstacle, and then determine the position information of the obstacle according to the contour information of the obstacle and the disparity values of the corresponding region.
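The segmentation step of S103 can be sketched as follows. The function, the 30 × 30 synthetic disparity map, and the plain bounding box are illustrative assumptions; a full implementation would run a contour detector on the thresholded mask instead of taking a bounding box:

```python
import numpy as np

# Sketch of step S103: mark disparity-map pixels whose value exceeds the
# obstacle threshold H (H = 10 in the worked example below) and report the
# bounding box of the marked region.
def locate_obstacle(disparity, H=10):
    """Return (x0, y0, x1, y1) of the obstacle region, or None if absent."""
    ys, xs = np.nonzero(disparity > H)
    if len(xs) == 0:
        return None
    return (xs.min(), ys.min(), xs.max(), ys.max())

d = np.zeros((30, 30), dtype=np.int32)
d[5:12, 8:20] = 12           # a block of disparity 12, as in the example
box = locate_obstacle(d)
```

With the camera's intrinsic and extrinsic parameters, the disparity values inside the box can then be converted to the obstacle's true position via z_c = f·B/Disparity.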
In the scheme provided by the embodiments of the present application, the target first matching regions in which an obstacle is present are obtained from the low-resolution first matching regions of the left view and of the right view of the predetermined scene, respectively, and the higher-resolution second matching regions within the target first matching regions of the left view are matched with the higher-resolution second matching regions within the target first matching regions of the right view to obtain the first disparity map. Because the size of the second matching region is smaller than the size of the first matching region, further fine matching is applied in a targeted manner to the regions of the left view and the right view where the obstacle is located; the amount of disparity calculation is thereby reduced and the efficiency of disparity calculation improved. At the same time, because further fine matching is performed on the regions of the left and right views of the predetermined scene where an obstacle is present, a disparity map with a clearer obstacle contour can be obtained, and finer contour information of the obstacle can then be obtained on the basis of this disparity map, improving both the accuracy and the efficiency of obstacle detection.
Optionally, in the present application, the target first matching regions in which an obstacle is present are determined from the first matching regions of the left view and of the right view respectively; that is, when step S101 is performed, the left view and the right view can be matched once using the low-resolution match window, and the target first matching regions in which an obstacle is present in the left view and the right view are obtained on the basis of the disparity map obtained after this matching.
Exemplarily, step S101 specifically comprises the following steps:
S101a: Obtain the left view and the right view collected by the binocular camera at the same moment.
S101b: Match the first matching regions in the left view with the first matching regions in the right view to obtain a second disparity map.
S101c: According to the second disparity map, determine the target first matching regions in which an obstacle is present from the first matching regions of the left view and the first matching regions of the right view, respectively.
Exemplarily, in the present application, the process of matching the first matching regions of the left view with the first matching regions of the right view consists of performing matching cost calculation between the first matching regions of the left view and those of the right view, calculating the disparity value corresponding to the same first matching region in the left view and the right view, and obtaining the second disparity map.
In one example, when step S101c is performed, the present application may determine, from the corresponding first matching regions of the left view and the right view, the first matching regions corresponding to the regions of the second disparity map whose disparity values exceed the predetermined obstacle threshold, as the target first matching regions in which an obstacle is present.
Exemplarily, with reference to Fig. 4, taking the first row as an example, the 20 first matching regions in the first row of the left view are numbered L(1,1), L(1,2), ..., L(1,20), and the first matching regions of the right view are numbered R(1,1), R(1,2), ..., R(1,20). L(1,1) is then matched in turn with R(1,1) through R(1,20), and the first matching region with the smallest calculated matching cost, R(1,j), is selected; the disparity value of region L(1,1) is then obtained as D1(1,1) = j − 1, where j ∈ (1, 20), and it represents the disparity value of all the pixels in the window region of the first row, j-th column. As shown in Fig. 6, assume that the matching cost of L(1,15) and R(1,3) is the smallest; the disparity value is then 15 − 3 = 12. The above steps are repeated to complete the stereo matching calculation of the remaining 19 rows, and a complete disparity map D1 is obtained.
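The coarse row matching above can be sketched as follows. The SAD cost, the synthetic constant-valued windows, and the function name match_row are illustrative assumptions, not the application's exact procedure; the disparity is taken as the difference of the matched window column indices, as in 15 − 3 = 12:

```python
import numpy as np

# Sketch of the coarse row matching: each first matching region of the left
# row strip is compared against every region of the right row strip, and the
# minimum-cost match gives the disparity in window units.
def match_row(left_windows, right_windows):
    """left_windows / right_windows: lists of (column index, window image)."""
    disparities = {}
    for ci, Lw in left_windows:
        # SAD cost of this left window against every right window in the row
        costs = [(float(np.sum(np.abs(Lw - Rw))), cj) for cj, Rw in right_windows]
        _, cj = min(costs)                 # smallest matching cost wins
        disparities[ci] = ci - cj          # e.g. column 15 matching column 3 -> 12
    return disparities

# Synthetic row strip: the left row equals the right row shifted by 2 windows.
right_windows = [(c, np.full((2, 2), float(c))) for c in range(6)]
left_windows = [(c, np.full((2, 2), float(c - 2))) for c in range(2, 6)]
d1 = match_row(left_windows, right_windows)
```

Repeating this for every row strip yields the coarse disparity map D1 described above.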
Exemplarily, existing binocular matching cost calculation methods include SAD, SSD, NCC, and so on, with the following formulas (for a match window w, left and right images I_L and I_R, and candidate disparity d):

The SAD matching cost formula is: C_SAD(x, y, d) = Σ_{(i,j)∈w} |I_L(x+i, y+j) − I_R(x+i−d, y+j)|;

The SSD matching cost formula is: C_SSD(x, y, d) = Σ_{(i,j)∈w} (I_L(x+i, y+j) − I_R(x+i−d, y+j))²;

The NCC matching cost formula is: C_NCC(x, y, d) = Σ_{(i,j)∈w} I_L(x+i, y+j)·I_R(x+i−d, y+j) / √(Σ_{(i,j)∈w} I_L(x+i, y+j)² · Σ_{(i,j)∈w} I_R(x+i−d, y+j)²).
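The three matching costs listed above, applied to two equally sized extracted window images, can be implemented as follows. This is a minimal sketch (the NCC variant here is the plain normalized cross-correlation without zero-meaning):

```python
import numpy as np

# Minimal implementations of the SAD, SSD, and NCC matching costs for two
# window images L and R of identical size.
def sad(L, R):
    return float(np.sum(np.abs(L - R)))        # 0 for a perfect match

def ssd(L, R):
    return float(np.sum((L - R) ** 2))         # 0 for a perfect match

def ncc(L, R):
    # 1 for identical (up to scale) windows; higher is better, unlike SAD/SSD.
    return float(np.sum(L * R) / np.sqrt(np.sum(L ** 2) * np.sum(R ** 2)))

L = np.array([[1.0, 2.0], [3.0, 4.0]])
R = np.array([[1.0, 2.0], [3.0, 5.0]])   # differs from L in one pixel
```

Note the opposite conventions: matching minimizes SAD or SSD but maximizes NCC.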
With reference to Fig. 6, assume that the obstacle threshold is set to H = 10; then the first match windows in the first disparity map whose disparity values are greater than H are considered to contain an obstacle. From the description above, the region L(1,15) contains an obstacle; see the obstacle square region T shown in Fig. 6, where black represents a disparity value of 0 and the gray area represents a disparity value of 12.
Taking the first matching region L(1,15) shown in Fig. 6 as the target first match window, further fine matching is performed on the window L(1,15). As shown in Fig. 7, finer matching calculation is performed on region T using windows of size k = 5 × 5, so there are 6 second matching regions in both the horizontal and the vertical direction. Taking the first row as an example, the match windows in the first row of the left view are numbered TL(1,1), TL(1,2), ..., TL(1,6), and the match windows of the right view are numbered TR(1,1), TR(1,2), ..., TR(1,6). TL(1,1) is then matched in turn with TR(1,1) through TR(1,6), and the match with the smallest matching cost is taken; this cost must also be smaller than a defined matching threshold G, otherwise the point is considered unmatchable (for example, the left and right images cannot be matched because of occlusion). Assume that the match window satisfying the condition is numbered TR(1,i), where i ∈ (1, 6); then the disparity value of TL(1,1) is D2(1,1) = i − 1 + D1(1,15), and it represents the disparity value of all the pixels in the window region of the first row, i-th column. The above steps are repeated to complete the stereo matching calculation of the remaining rows, and a complete fine disparity map D2 is obtained.
Meanwhile after disparity map D2 is obtained, higher resolution (such as 4 × 4,3 × 3,2 × 2) can also be repeatedly used repeatedly
Matching area carry out matching primitives so that obtaining the higher disparity map of fineness, and therefrom can determine profile more
The clearly profile information of barrier.
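The repeated refinement can be illustrated with a toy one-dimensional coarse-to-fine matcher. The signals, window bounds, and search ranges below are invented for illustration; only the bookkeeping, a fine local offset added to the disparity from the coarser pass, as in D2(1,1) = i − 1 + D1(1,15), mirrors the text:

```python
import numpy as np

# Toy coarse-to-fine matching on 1-D row signals: a coarse pass with a wide
# window gives an initial disparity, then a finer pass searches only a small
# neighbourhood around it and the offsets are summed.
def best_shift(left, right, lo, hi, win):
    """SAD-minimizing shift d in [lo, hi] for the window win = (a, b)."""
    a, b = win
    costs = {}
    for d in range(lo, hi + 1):
        if a - d < 0 or b - d > len(right):
            continue   # skip shifts that fall outside the right row
        costs[d] = float(np.sum(np.abs(left[a:b] - right[a - d:b - d])))
    return min(costs, key=costs.get)

rng = np.random.default_rng(0)
right = rng.random(100)
true_d = 7
left = np.roll(right, true_d)   # left row = right row shifted by the true disparity

coarse = best_shift(left, right, 0, 15, (20, 60))            # wide search
fine = coarse + best_shift(np.roll(left, -coarse), right,    # narrow search
                           -2, 2, (30, 40))                  # around the coarse hit
```

Each extra pass narrows the search range while shrinking the window, which is why the repeated higher-resolution matching sharpens the obstacle contour without re-searching the full disparity range.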
Optionally, in order to improve the fineness of the obstacle contour in the first disparity map, the present application may also, after step S103, perform repeated matching with match windows of higher resolution on the obstacle region segmented out a second time.
Exemplarily, the following steps are further included after step S102:
S102a: From the second matching regions in the target first matching regions of the left view and the second matching regions in the target first matching regions of the right view, respectively obtain the target second matching regions in which an obstacle is present.
S102b: Match the third matching regions in the target second matching regions of the left view with the third matching regions in the target second matching regions of the right view to obtain a third disparity map; here, the size of the third matching region in the present application is smaller than the size of the second matching region.
In specific execution, the number of repetitions and the size of the match window used for each matching can be set, and the operations of the above steps S102a and S102b are repeated accordingly.
The above mainly describes the scheme provided by the embodiments of the present application from the perspective of the obstacle detection apparatus and the devices to which the apparatus is applied. It can be understood that, in order to realize the above functions, the apparatus comprises corresponding hardware structures and/or software modules for executing each function. Those skilled in the art should readily appreciate that the units and algorithm steps of each example described with reference to the embodiments disclosed herein can be implemented by hardware or by a combination of hardware and computer software. Whether a function is executed by hardware or by computer software driving hardware depends on the particular application and design constraints of the technical scheme. Professionals may use different methods to realize the described functions for each particular application, but such realization should not be considered to go beyond the scope of the present application.
The embodiments of the present application may divide the obstacle detection apparatus into functional modules according to the above method example; for example, each function may be divided into a separate functional module, or two or more functions may be integrated into one processing module. The above integrated module can be realized in the form of hardware or in the form of a software functional module. It should be noted that the division into modules in the embodiments of the present application is schematic and is only a division by logical function; other division manners are possible in actual implementation.
The apparatus embodiments corresponding to the method embodiments presented above are illustrated below. It should be noted that, for the explanation of related content in the following apparatus embodiments, reference may be made to the above method embodiments.
In the case where each functional module is divided according to each corresponding function, Fig. 8 shows a possible structural schematic diagram of the obstacle detection apparatus involved in the above embodiments. The apparatus 3 comprises: an acquisition module 31, a matching module 32, and a determining module 33. The acquisition module 31 is used to support the obstacle detection apparatus in executing step S101 in Fig. 3; the matching module 32 is used to support the obstacle detection apparatus in executing step S102 in Fig. 3; the determining module 33 is used to support the apparatus in executing step S103 in Fig. 3. Further, the acquisition module 31 is specifically used to support the apparatus in executing the above steps S101a and S101c, and the matching module 32 is specifically used to support the apparatus in executing the above step S101b. Further, the acquisition module 31 is specifically used to support the apparatus in executing the above step S102a, and the matching module 32 is specifically used to support the apparatus in executing the above step S102b.
Further, for the divided modules, all the related content of each step involved in the above method embodiments can be cited in the function descriptions of the corresponding functional modules, and will not be repeated here.
In a hardware realization, the above acquisition module 31, matching module 32, and determining module 33 may be processors. The programs corresponding to the actions executed by the above obstacle detection apparatus may be stored in the memory of the apparatus in software form, so that the processor can call and execute the operations corresponding to each of the above modules.
Fig. 9 shows a possible structural schematic diagram of the electronic device involved in the embodiments of the present application. The device 4 comprises: a processor 41, a memory 42, a system bus 43, and a communication interface 44. The memory 42 is used to store computer-executable code; the processor 41 is connected to the memory 42 through the system bus 43. When the device runs, the processor 41 executes the computer-executable code stored in the memory 42 to perform any one of the obstacle detection methods provided by the embodiments of the present application; for example, the processor 41 is used to support the device in executing all the steps in Fig. 3, and/or other processes of the techniques described herein. For the specific obstacle detection method, refer to the related description hereafter and in the accompanying drawings, which will not be repeated here.
The embodiments of the present application also provide a storage medium, and the storage medium may comprise the memory 42.
The embodiments of the present application also provide a computer program product. The computer program can be loaded directly into the memory 42 and contains software code; after the computer program is loaded and executed by a computer, the above obstacle detection method can be realized.
The processor 41 may be one processor or a collective designation of multiple processing elements. For example, the processor 41 may be a central processing unit (central processing unit, CPU). The processor 41 may also be another general-purpose processor, a digital signal processor (digital signal processing, DSP), an application-specific integrated circuit (application specific integrated circuit, ASIC), a field-programmable gate array (field-programmable gate array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like, which can realize or execute the various exemplary logic blocks, modules, and circuits described with reference to the disclosure of the present application. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The processor 41 may also be a dedicated processor, and the dedicated processor may comprise at least one of a baseband processing chip, a radio-frequency processing chip, and the like. The processor may also be a combination realizing computing functions, for example a combination comprising one or more microprocessors, or a combination of a DSP and a microprocessor. Further, the dedicated processor may also comprise a chip with other dedicated processing functions of the device.
The steps of the methods described in connection with the present disclosure may be implemented by hardware, or by a processor executing software instructions. The software instructions may be composed of corresponding software modules, and a software module may be stored in a random access memory (RAM), a flash memory, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a register, a hard disk, a removable hard disk, a compact disc read-only memory (CD-ROM), or a storage medium of any other form well known in the art. An exemplary storage medium is coupled to the processor, so that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be a component of the processor. The processor and the storage medium may reside in an ASIC. In addition, the ASIC may reside in a terminal device. Of course, the processor and the storage medium may also exist in the terminal device as discrete components.
The system bus 43 may include a data bus, a power bus, a control bus, a signal status bus, and the like. For clarity of description in this embodiment, the various buses are all illustrated as the system bus 43 in Fig. 9.
The communication interface 44 may specifically be a transceiver on the device. The transceiver may be a wireless transceiver; for example, the wireless transceiver may be an antenna of the device or the like. The processor 41 communicates with other equipment through the communication interface 44; for example, if the device is a module or a component in a terminal device, the device performs data interaction with other modules in the terminal device through the communication interface 44.
An embodiment of the present application further provides a robot, and the robot includes the obstacle detection apparatus corresponding to Fig. 9.
Those skilled in the art will appreciate that, in one or more of the above examples, the functions described herein may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored in a computer-readable medium, or transmitted as one or more instructions or code on a computer-readable medium. Computer-readable media include computer storage media and communication media, where communication media include any medium that facilitates the transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.
Finally, it should be noted that the above embodiments merely further describe the purpose, technical solutions, and beneficial effects of the present application. It should be understood that the foregoing is only an embodiment of the present application and is not intended to limit the protection scope of the present application; any modification, equivalent replacement, or improvement made on the basis of the technical solutions of the present application shall fall within the protection scope of the present application.