CN107636679A - A kind of obstacle detection method and device - Google Patents

Obstacle detection method and device

Info

Publication number
CN107636679A
CN107636679A (Application CN201680006896.1A)
Authority
CN
China
Prior art keywords
matching area
matching
target
left view
right view
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201680006896.1A
Other languages
Chinese (zh)
Other versions
CN107636679B (en)
Inventor
林义闽
廉士国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cloudminds Shanghai Robotics Co Ltd
Original Assignee
Cloudminds Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cloudminds Inc filed Critical Cloudminds Inc
Publication of CN107636679A publication Critical patent/CN107636679A/en
Application granted granted Critical
Publication of CN107636679B publication Critical patent/CN107636679B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

An obstacle detection method and device. The method includes: obtaining, from the first matching areas of a left view and the first matching areas of a right view of a predetermined scene, the target first matching areas in which an obstacle is present (S101); matching the second matching areas within the target first matching areas of the left view against the second matching areas within the target first matching areas of the right view to obtain a first disparity map, where the size of the second matching areas is smaller than the size of the first matching areas (S102); and determining the position information of the obstacle in the first disparity map (S103). The method improves both the accuracy and the efficiency of obstacle detection.

Description

Obstacle detection method and device
Technical field
The present application relates to the field of detection technologies, and in particular, to an obstacle detection method and device.
Background technology
At present, in technical fields such as unmanned driving, assisted driving, unmanned aerial vehicle (UAV) navigation, guidance for the blind and intelligent robotics, obstacle detection is an essential component. For example, for an intelligent robot or a self-driving vehicle to navigate autonomously in an unknown environment, the system must perceive the surrounding environment and provide information such as obstacles and roads in that environment. In recent years, with the rapid development of computer image processing technology, vision sensors have attracted increasing attention for obstacle detection; the mainstream obstacle detection methods are therefore based on vision sensors. Binocular vision systems in particular, owing to their low cost and their ability to obtain depth information of a scene or object, have been widely applied in fields such as target detection, tracking and obstacle recognition. Specifically, an existing binocular-vision-based obstacle detection method works as follows: a stereo vision system is formed by two cameras whose positional relationship is known; a disparity map is obtained from the parallax with which the same object in space is imaged on the two cameras; and the disparity map is then analyzed to determine the position of the obstacle.
However, both mobile robots and driverless vehicles may move at relatively high speed, which requires obstacle detection to run in real time, while robot path planning and precise control require accurate obstacle contour information. Therefore, the greatest difficulties faced by existing binocular-vision-based obstacle detection methods are the real-time performance of disparity computation and the accuracy of obstacle contour segmentation, both of which affect the accuracy and efficiency of obstacle detection.
Summary of the invention
Embodiments of the present application provide an obstacle detection method and device, so as to improve the accuracy and efficiency of obstacle detection.
To achieve the above objective, the embodiments of the present application adopt the following technical solutions:
In a first aspect, an obstacle detection method is provided, including:
obtaining, from the first matching areas of a left view and the first matching areas of a right view of a predetermined scene, the target first matching areas in which an obstacle is present;
matching the second matching areas within the target first matching areas of the left view against the second matching areas within the target first matching areas of the right view to obtain a first disparity map, wherein the size of the second matching areas is smaller than the size of the first matching areas; and
determining the position information of the obstacle in the first disparity map.
In a second aspect, an obstacle detection device is provided, including:
an acquisition module, configured to obtain, from the first matching areas of a left view and the first matching areas of a right view of a predetermined scene, the target first matching areas in which an obstacle is present;
a matching module, configured to match the second matching areas within the target first matching areas of the left view obtained by the acquisition module against the second matching areas within the target first matching areas of the right view to obtain a first disparity map, wherein the second matching areas are smaller than the first matching areas; and
a determining module, configured to determine the position information of the obstacle in the first disparity map obtained by the matching module.
In a third aspect, an electronic device is provided. The electronic device includes a processor configured to support the electronic device in performing the corresponding functions of the above method. The electronic device may further include a memory coupled to the processor, which stores the computer software code of the electronic device, including a program designed to perform the above aspects.
In a fourth aspect, a computer storage medium is provided for storing the computer software instructions used by the obstacle detection device, including program code designed to perform the method of the first aspect.
In a fifth aspect, a computer program product is provided, which can be loaded directly into the internal memory of a computer and contains software code; after being loaded and executed by the computer, the program can implement the method of the first aspect.
In a sixth aspect, a robot is provided, the robot including the electronic device of the third aspect.
In the solution provided by the embodiments of the present application, the target first matching areas in which an obstacle is present are first obtained from the low-resolution first matching areas of the left view and of the right view of a predetermined scene; then the high-resolution second matching areas within the target first matching areas of the left view are matched against the high-resolution second matching areas within the target first matching areas of the right view to obtain a first disparity map. Because the size of the second matching areas is smaller than that of the first matching areas, only the regions where an obstacle is located in the left view and the right view are further and finely matched in this targeted manner, which reduces the amount of disparity computation and improves its efficiency. At the same time, because the regions of the left and right views of the predetermined scene in which an obstacle is present are finely matched, a disparity map with clearer obstacle contours is obtained, from which finer obstacle contour information can be derived, improving both the accuracy and the efficiency of obstacle detection.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Apparently, the accompanying drawings in the following description show merely some embodiments of the present application, and a person of ordinary skill in the art may still derive other drawings from these accompanying drawings without creative efforts.
Fig. 1 is a flowchart of an obstacle detection method provided by an embodiment of the present application;
Fig. 2a is a diagram of the correspondence between disparity and depth when the left camera and the right camera of the binocular camera provided by an embodiment of the present application photograph the same target;
Fig. 2b is a top view of Fig. 2a;
Fig. 3 is a schematic flowchart of another obstacle detection method provided by an embodiment of the present application;
Fig. 4 is a schematic diagram of mutually non-overlapping first matching windows in the left view and the right view provided by an embodiment of the present application;
Fig. 5 is a schematic diagram of mutually overlapping first matching windows in the left view and the right view provided by an embodiment of the present application;
Fig. 6 is a schematic diagram of a disparity correspondence provided by an embodiment of the present application;
Fig. 7 is a schematic diagram of another disparity correspondence provided by an embodiment of the present application;
Fig. 8 is a schematic structural diagram of an obstacle detection device provided by an embodiment of the present application;
Fig. 9 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
Embodiment
In recent years, with social and economic development, the number of automobiles has grown steadily, and road capacity can no longer meet the rapidly growing volume of traffic; urban congestion and blockage in particular have become severe, causing road traffic accidents to increase. Driverless and driver-assistance technologies are effective ways to improve vehicle driving safety and can effectively address these problems. In addition, with the development of UAV technology, drones are widely used in industries such as policing, city management, filming, electric power, meteorology, and disaster relief, so research on aircraft navigation has received growing attention. Furthermore, blind people, as a disadvantaged group, need the help of society to improve their ability to live independently and enjoy a better quality of life, and assisting the blind in walking is a very important aspect of that.
Whether in driverless driving, assisted driving, UAV navigation or guidance for the blind, obstacle detection is an essential component, and the current mainstream obstacle detection methods perform detection on the disparity map obtained from a binocular camera.
Therefore, based on the above application scenarios, an embodiment of the present application provides an obstacle detection method for obtaining a high-precision disparity map. Specifically, as shown in Fig. 1, the basic principle of the technical solution provided by the embodiments of the present application is as follows: obtain the left view and the right view collected by the binocular camera at the same moment; match the low-resolution first matching windows of the left view and the right view and, based on the disparity map obtained after this matching, obtain from the first matching areas of the left view and the right view the target first matching areas in which an obstacle is present; then match the high-resolution second matching windows within the target first matching areas of the left view and the right view to obtain a first disparity map. By repeatedly using matching windows of progressively higher resolution for the matching computation, the contour of the obstacle in the resulting disparity map becomes increasingly accurate.
Some of the terms involved in the present application are explained below to help the reader understand:
A "binocular camera" is a combination of two cameras with identical parameters placed a certain distance apart. In general, the left camera and the right camera of a binocular camera are arranged on the same horizontal line, so that the optical axes of the left camera and the right camera are parallel; the binocular camera can thus simulate human eyes and produce an angular difference, thereby achieving stereoscopic imaging or depth-of-field detection.
"Parallax" refers to the difference in the apparent direction of the same target observed from two points separated by a certain distance. The angle subtended at the target by the two points is called the parallactic angle of the two points, and the distance between the two points is called the baseline.
"Disparity value" refers to the difference between the horizontal coordinates of the same pixel in the two images obtained when the left camera and the right camera of a binocular camera photograph the same target; this difference is the disparity value of that pixel, and correspondingly, the disparity values of all pixels in the two images form a disparity map.
The correspondence between disparity and depth can be understood with reference to Fig. 2a and Fig. 2b. Let O_L be the position of the left camera, O_R the position of the right camera, f the focal length of the left and right camera lenses, and B the baseline distance, equal to the distance between the lines through the projection centers of the left and right cameras. Specifically:
Assume that at the same moment the left and right cameras observe the same feature point P(x_c, y_c, z_c) of a space object, where z_c can generally be regarded as the depth of the feature point, representing the distance between the feature point and the plane in which the left and right cameras lie. The feature point P forms an image on the "left eye" and the "right eye" respectively, i.e. the projections of P on the left and right cameras are P_L(x_L, y_L) and P_R(x_R, y_R). If the images of the left and right cameras lie in the same plane, the Y coordinate of the feature point P is identical in the image coordinates P_L and P_R, and the triangle geometry gives:
z_c = f · B / (x_L - x_R);
Since the disparity is Disparity = x_L - x_R, the three-dimensional coordinates of the feature point P in the camera coordinate system can thus be calculated as:
x_c = x_L · z_c / f, y_c = y_L · z_c / f, z_c = f · B / Disparity;
It follows from the above formulas that, because the baseline distance B and the focal length f of a binocular camera are fixed, the disparity value and the depth are inversely related.
In general, in the present application the range of the disparity values in the disparity map can be set to [0, 255], with 0 set to the nearest distance and 255 set to the farthest distance.
It should be noted that, because the binocular camera in the present application simulates human eyes when collecting images, the left camera and the right camera of the binocular camera in the present application are arranged on the same horizontal line with parallel optical axes and a certain spacing; therefore, the parallax in the present application mainly refers to horizontal parallax.
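The inverse relation between horizontal disparity and depth described above can be illustrated with a short numerical sketch; the focal length and baseline values below are assumed for the example and are not from the patent:

```python
# Depth from horizontal disparity for a rectified binocular camera:
# z_c = f * B / (x_L - x_R). Values below are assumed for illustration.
f = 700.0   # focal length in pixels (assumed)
B = 0.12    # baseline in metres (assumed)

def depth_from_disparity(x_left, x_right):
    """Depth of a point from its horizontal image coordinates in the
    left and right views of a rectified stereo pair."""
    disparity = x_left - x_right          # Disparity = x_L - x_R
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return f * B / disparity

# A nearer point produces a larger disparity: depth and disparity
# are inversely related, as stated above.
near = depth_from_disparity(320.0, 250.0)   # disparity 70 pixels
far = depth_from_disparity(320.0, 310.0)    # disparity 10 pixels
assert near < far
```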
"Camera calibration": to determine the correlation between the three-dimensional geometric position of a point on the surface of a space object and its corresponding point in the image, a geometric model of camera imaging must be established; the parameters of this geometric model are the camera parameters. Under most conditions these parameters can only be obtained through experiments and calculation, and this parameter-solving process is called camera calibration.
Camera calibration in the present application usually refers to off-line camera calibration. Under normal circumstances, because the optical axes of a binocular camera lie inside the cameras, it is difficult to guarantee that the optical axes are perfectly parallel when the cameras are assembled, and a certain deviation usually exists. Therefore, a successfully built binocular camera is generally calibrated off line to obtain the intrinsic parameters of the cameras (focal length, baseline length, image center, distortion parameters, etc.) and the extrinsic parameters (rotation matrix R and translation matrix T).
In one example, the lenses of the binocular camera can be calibrated off line using Zhang Zhengyou's checkerboard calibration method.
Specifically, when calibrating the cameras off line, the left camera may first be calibrated to obtain its intrinsic and extrinsic parameters; secondly, the right camera is calibrated to obtain its intrinsic and extrinsic parameters; finally, the binocular camera is calibrated to obtain the rotation and translation relationship between the left and right cameras.
Assume any point W = [X, Y, Z]^T in the world coordinate system, and let its corresponding point on the image plane be m = [u, v]^T. The projection relationship between the object point and the image point is:
[u, v, 1]^T = P [X, Y, Z, 1]^T (formula seven);
where P is a 3 × 4 projection matrix, which can be expressed in terms of the rotation and translation matrices:
P = A [R t] (formula eight);
where R is a 3 × 3 rotation matrix and t is a translation vector. These two matrices represent the external parameters of the binocular vision system, one expressing position and the other expressing orientation, from which the position in the world coordinate system of each pixel of the image can be determined. The matrix A represents the camera intrinsic parameter matrix and can be expressed as:
A = [[f_u, β, u_0], [0, f_v, v_0], [0, 0, 1]];
where (u_0, v_0) are the coordinates of the image center, f_u and f_v represent the focal length expressed in horizontal and vertical pixel units respectively, and β represents the skew factor.
Some of the parameters obtained in the above off-line calibration are used in image rectification and in the obstacle calculation process.
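The projection relationship above, m = P · W with P = A[R t], can be checked with a small sketch; the intrinsic values and the trivial pose below are assumed for illustration and are not from the patent:

```python
import numpy as np

# Intrinsic matrix A with image centre (u0, v0), pixel focal lengths
# f_u, f_v and skew beta, as in the formula above (values assumed).
f_u, f_v, beta = 800.0, 800.0, 0.0
u0, v0 = 320.0, 240.0
A = np.array([[f_u, beta, u0],
              [0.0,  f_v, v0],
              [0.0,  0.0, 1.0]])

# Extrinsics: rotation R (3x3) and translation t (3x1); identity and
# zero here, so the camera frame coincides with the world frame.
R = np.eye(3)
t = np.zeros((3, 1))
P = A @ np.hstack([R, t])        # 3x4 projection matrix P = A [R t]

W = np.array([0.5, -0.25, 2.0, 1.0])   # homogeneous world point [X, Y, Z, 1]^T
m = P @ W                               # homogeneous image point [u*w, v*w, w]^T
u, v = m[0] / m[2], m[1] / m[2]         # normalise to pixel coordinates

# For this trivial pose: u = f_u * X/Z + u0 and v = f_v * Y/Z + v0
assert abs(u - (800.0 * 0.25 + 320.0)) < 1e-9
assert abs(v - (800.0 * -0.125 + 240.0)) < 1e-9
```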
"Image rectification": because lens distortion causes the images collected through the lenses to be distorted, distortion correction and epipolar rectification are generally performed on the binocular camera before it collects images. Assume the undistorted reference image is f(x, y) and the image with larger geometric distortion is g(x', y'); the geometric distortion between the coordinate systems of the two images can then be expressed as a mapping (x, y) → (x', y'), which can be represented by binary polynomials:
x' = Σ_{i=0..n} Σ_{j=0..n-i} a_ij · x^i · y^j, y' = Σ_{i=0..n} Σ_{j=0..n-i} b_ij · x^i · y^j;
where n is the degree of the polynomial, i and j are the exponents of the pixel coordinates x and y, and a_ij and b_ij are the coefficients of each term. The distortion-corrected image is obtained from the above formulas.
As for the epipolar rectification of the images: according to the rotation and translation matrices of the left and right cameras obtained in off-line camera calibration, assume the rotation and translation matrices of the left camera are R1 and t1 and those of the right camera are R2 and t2; these rotation and translation matrices can be obtained in off-line calibration. Based on the rotation and translation matrices of the left and right cameras, the Bouguet epipolar rectification method is used to make the corresponding epipolar lines of the left and right camera images parallel. This greatly reduces the time complexity of stereo matching and simplifies the disparity computation process.
The term "and/or" in this specification merely describes an association relationship of the associated objects, indicating that three relationships may exist. For example, A and/or B may indicate the following three cases: only A exists, both A and B exist, or only B exists. In addition, the character "/" in this specification generally indicates an "or" relationship between the associated objects. Unless otherwise specified, "multiple" herein means two or more.
It should be noted that in the embodiments of the present application, words such as "exemplary" or "for example" are used to indicate an example, an illustration or an explanation. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present application should not be interpreted as being preferable to or more advantageous than other embodiments or designs. Specifically, the words "exemplary" or "for example" are intended to present the related concept in a concrete manner.
It should be noted that in the embodiments of the present application, unless otherwise specified, "multiple" means two or more.
It should be noted that in the embodiments of the present application, "of", "corresponding" and "relevant" may sometimes be used interchangeably. It should be noted that, when the difference between them is not emphasized, the meanings they express are consistent.
The technical solutions provided by the embodiments of the present application are described below with reference to the accompanying drawings of the embodiments. Apparently, the described embodiments are merely some rather than all of the embodiments of the present application. It should be noted that some or all of the technical features in any of the technical solutions provided below may be combined with one another, provided that they do not conflict, to form new technical solutions.
The execution subject of the obstacle detection method provided by the embodiments of the present application may be an obstacle detection device based on a binocular camera, or an electronic device that can be used to perform the above obstacle detection method. The obstacle detection device based on a binocular camera may be the central processing unit (CPU) of the above electronic device, a combination of hardware such as the CPU and a memory, or another control unit or module in the above terminal device.
Exemplarily, the above electronic device may be a personal computer (PC), netbook, personal digital assistant (PDA), server or the like that analyzes the left and right views collected by the binocular camera using the method provided by the embodiments of the present application; or the above electronic device may be a PC, server or the like installed with a software client, software system or software application that processes the left and right views collected by the binocular camera using the method provided by the embodiments of the present application. The specific hardware implementation environment may take the form of a general-purpose computer, an ASIC, an FPGA, or a programmable extension platform such as the Tensilica Xtensa platform. For example, the above electronic device may be integrated into a device or instrument that needs to detect obstacles, such as an unmanned aerial vehicle, a navigator for the blind, a self-driving vehicle, an intelligent vehicle or a smartphone.
Based on the above, embodiments of the present application provide an obstacle detection method based on a binocular camera. As shown in Fig. 3, the method includes the following steps:
S101: from the first matching areas of the left view and the first matching areas of the right view of a predetermined scene, respectively obtain the target first matching areas in which an obstacle is present.
Exemplarily, before controlling the binocular camera to collect views, i.e. before performing obstacle detection in a scene, the binocular camera generally needs to be adjusted in advance (for example, camera adjustment operations such as off-line calibration and image rectification) to ensure that the optical axes of the left camera and the right camera are parallel. Then, the baseline length between the optical axes of the left and right cameras is measured and the focal length of the binocular camera is recorded, and it is ensured that neither the baseline length nor the focal length will change, so as to guarantee the synchronism of the image collection of the binocular camera and avoid unnecessary errors.
In one example, when performing step S101, the two views may be divided into matching areas according to the window size of the first matching areas and the resolution of the binocular camera, obtaining the first matching areas of the two views; here, the above left view and right view are each composed of multiple mutually non-overlapping first matching areas of identical size. For example, assume the resolution W of the left view and the right view collected by the binocular camera is 600 × 600 pixels and the preset window size of the first matching areas is 30 × 30; then, as shown in Fig. 4, the left view 21 and the right view 22 in Fig. 4 each have 20 mutually non-overlapping first matching areas in both the horizontal and vertical directions.
In another example, when performing step S101b, the two views are divided into matching areas according to the window size of the first matching areas, the horizontal offset between two mutually overlapping matching areas and the resolution of the binocular camera, obtaining the first matching areas of the two views; here, the above left view and right view are each composed of multiple mutually overlapping first matching areas of identical size. For example, assume the resolution W of the left view and the right view collected by the binocular camera is 600 × 600 pixels. Referring to the division of the first matching areas in the left view and the right view shown in Fig. 5, if a 30 × 30 region is selected in the left view, then N regions of 30 × 30 are likewise selected in the right view for matching; e.g., the horizontal offset of the first region is L, the horizontal offset of the second region is 2L, and so on. If L = 15, then N = 40, i.e. there are 40 mutually overlapping first matching areas in the horizontal direction. Because the number of matching areas obtained with overlapping division is larger than the number obtained with non-overlapping division, the precision of the resulting disparity map is correspondingly higher.
It should be noted that the above first matching areas are low-resolution matching windows. In general, if the window size of a first matching area is s and the view size is W, it is necessary to ensure that W/s is an integer, so that the first matching areas in the two images are all of identical size and can be conveniently matched.
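The non-overlapping division into first matching areas described above can be sketched as follows; this is a minimal illustration under the assumption of a 600 × 600 view and a 30 × 30 window, as in the Fig. 4 example, and the function name is invented:

```python
import numpy as np

def divide_into_matching_areas(view, s):
    """Split a view into mutually non-overlapping s x s first matching
    areas; requires the view size W to be an integer multiple of s."""
    h, w = view.shape
    if h % s or w % s:
        raise ValueError("W/s must be an integer so all areas have identical size")
    # Reshape into a (rows, cols) grid of s x s blocks.
    return view.reshape(h // s, s, w // s, s).swapaxes(1, 2)

view = np.zeros((600, 600), dtype=np.uint8)   # assumed 600 x 600 view
areas = divide_into_matching_areas(view, 30)
# 20 areas horizontally and 20 vertically, as in the Fig. 4 example.
assert areas.shape == (20, 20, 30, 30)
```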
S102: match the second matching areas within the target first matching areas of the left view against the second matching areas within the target first matching areas of the right view to obtain a first disparity map.
Exemplarily, in the present application the process of matching the second matching areas within the target first matching areas of the left view against the second matching areas within the target first matching areas of the right view consists of performing matching cost computation between the second matching areas within the target first matching areas of the left view and the second matching areas within the target first matching areas of the right view, calculating the disparity value corresponding to the same second matching area in the left view and the right view, and obtaining the first disparity map.
The size of the second matching areas in the embodiments of the present application is smaller than the size of the first matching areas. The above first disparity map may be the disparity map between the images corresponding to the second matching areas within the target first matching areas of the left view and of the right view; or it may be the disparity map formed by combining the disparity map between the images corresponding to the target first matching areas of the left view and of the right view with the disparity map between the images corresponding to the first matching areas of the left view other than the target first matching areas and the first matching areas of the right view other than the target first matching areas.
S103: determine the position information of the obstacle in the first disparity map.
Exemplarily, when performing step S103, the precise area where the obstacle is located can be segmented according to a set obstacle threshold H, and the true bearing of the obstacle can be calculated according to the intrinsic and extrinsic parameters of the binocular camera. In one example, the present application may perform contour detection on the regions of the first disparity map whose disparity values are smaller than the predetermined obstacle threshold to obtain the contour information of the obstacle, and then determine the position information of the obstacle according to the contour information of the obstacle and the disparity values of the corresponding regions.
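The threshold-based segmentation described above can be sketched as follows; this is a simplified illustration rather than the patent's contour detection: a binary mask of pixels whose disparity value is below the obstacle threshold H is computed, and its bounding box stands in for the obstacle contour. The disparity map values are invented:

```python
import numpy as np

def obstacle_region(disparity, H):
    """Return a binary mask of pixels whose disparity value is below the
    obstacle threshold H, plus the bounding box of the masked region."""
    mask = disparity < H
    if not mask.any():
        return mask, None
    rows, cols = np.nonzero(mask)
    bbox = (rows.min(), rows.max(), cols.min(), cols.max())
    return mask, bbox

# Toy first disparity map: background value 200, a 3 x 3 obstacle patch at 40.
D1 = np.full((8, 8), 200, dtype=np.int32)
D1[2:5, 3:6] = 40
mask, bbox = obstacle_region(D1, H=100)
assert mask.sum() == 9            # the 3 x 3 obstacle patch
assert bbox == (2, 4, 3, 5)       # rows 2..4, columns 3..5
```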
In the solution provided by the embodiments of the present application, the target first matching areas in which an obstacle is present are first obtained from the low-resolution first matching areas of the left view and of the right view of a predetermined scene; then the high-resolution second matching areas within the target first matching areas of the left view are matched against the high-resolution second matching areas within the target first matching areas of the right view to obtain a first disparity map. Because the size of the second matching areas is smaller than that of the first matching areas, only the regions where an obstacle is located in the left view and the right view are further and finely matched in this targeted manner, which reduces the amount of disparity computation and improves its efficiency. At the same time, because the regions of the left and right views of the predetermined scene in which an obstacle is present are finely matched, a disparity map with clearer obstacle contours is obtained, from which finer obstacle contour information can be derived, improving both the accuracy and the efficiency of obstacle detection.
Optionally, when determining the target first matching areas containing an obstacle from the first matching areas of the left view and of the right view, i.e., when performing step S101, the present application may first perform one pass of matching on the left view and the right view using a low-resolution matching window, and then obtain the target first matching areas of the left view and the right view on the basis of the disparity map produced by that matching.
Illustratively, step S101 specifically includes the following steps:
S101a, obtaining the left view and the right view captured by the binocular camera at the same moment.
S101b, matching the first matching areas of the left view with the first matching areas of the right view to obtain a second disparity map.
S101c, determining, according to the second disparity map, the target first matching areas containing an obstacle from the first matching areas of the left view and the first matching areas of the right view, respectively.
Illustratively, the process by which the present application matches the first matching areas of the left view with those of the right view consists in performing matching-cost computation between the first matching areas of the left view and the first matching areas of the right view, calculating the disparity value corresponding to each pair of first matching areas in the left and right views, and thereby obtaining the second disparity map.
In one example, when performing step S101c, the present application may determine, from the corresponding first matching areas of the left view and the right view, the first matching areas corresponding to the regions of the second disparity map whose disparity values are less than the predetermined obstacle threshold, and take them as the target first matching areas containing an obstacle.
Illustratively, referring to Fig. 4 and taking the first row as an example, the 20 first matching areas on the first row of the left view are numbered L(1,1), L(1,2), ..., L(1,20), and the first matching areas on the right view are numbered R(1,1), R(1,2), ..., R(1,20). L(1,1) is then matched against R(1,1) through R(1,20) in turn, and the area R(1,j) with the smallest computed matching cost is selected, giving the disparity value of the L(1,1) region as D1(1,1) = j - 1, where j ∈ (1,20); this value represents the disparity of all pixels in the window area of the first row and j-th column. As shown in Fig. 6, assuming the matching cost between L(1,15) and R(1,3) is the smallest, the disparity value is 15 - 3 = 12. Repeating the above steps to complete the stereo matching of the remaining 19 rows yields a complete disparity map D1.
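The row-wise coarse matching described above can be sketched as follows (a minimal numpy sketch using SAD as the matching cost; the window width and the toy image content are illustrative, and the disparity is taken as the left-window index minus the best right-window index, as in the L(1,15)/R(1,3) example):

```python
import numpy as np

def coarse_row_match(left, right, win=4):
    """Match each `win`-wide window of a row of the left view against every
    window of the same row of the right view, keeping the lowest-SAD match.
    Returns one disparity value per left window (shift measured in windows)."""
    n = left.shape[1] // win
    disp = np.zeros(n)
    for a in range(n):
        L = left[:, a * win:(a + 1) * win]
        costs = [np.abs(L - right[:, b * win:(b + 1) * win]).sum()
                 for b in range(n)]
        disp[a] = a - int(np.argmin(costs))   # e.g. L(1,15) vs R(1,3) -> 12
    return disp

# Toy 4x80 row pair: a bright patch in window 15 of the left row and in
# window 3 of the right row, i.e. a true disparity of 12 windows.
left = np.zeros((4, 80)); left[:, 60:64] = 1.0    # window index 15
right = np.zeros((4, 80)); right[:, 12:16] = 1.0  # window index 3
print(coarse_row_match(left, right)[15])          # 12.0
```

Repeating this over every row of first matching areas yields the coarse disparity map D1 of the text.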
Illustratively, existing binocular matching-cost computation methods include SAD, SSD, NCC, and the like, whose standard formulas (with W the matching window, d the candidate disparity, and I_L, I_R the intensities of the left and right views) are as follows:

The SAD matching-cost formula is: C_SAD(x, y, d) = Σ_{(i,j)∈W} | I_L(x+i, y+j) − I_R(x+i−d, y+j) |

The SSD matching-cost formula is: C_SSD(x, y, d) = Σ_{(i,j)∈W} ( I_L(x+i, y+j) − I_R(x+i−d, y+j) )²

The NCC matching-cost formula is: C_NCC(x, y, d) = Σ_{(i,j)∈W} I_L(x+i, y+j) · I_R(x+i−d, y+j) / √( Σ_{(i,j)∈W} I_L(x+i, y+j)² · Σ_{(i,j)∈W} I_R(x+i−d, y+j)² )
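Under the standard definitions of these three costs, they can be computed over a pair of matching windows as follows (a minimal sketch; the window contents are illustrative):

```python
import numpy as np

def sad(L, R):
    """Sum of absolute differences: lower means a better match."""
    return np.abs(L - R).sum()

def ssd(L, R):
    """Sum of squared differences: lower means a better match."""
    return ((L - R) ** 2).sum()

def ncc(L, R):
    """Normalized cross-correlation: closer to 1 means a better match."""
    return (L * R).sum() / np.sqrt((L ** 2).sum() * (R ** 2).sum())

# A window compared with itself is a perfect match under all three costs.
L = np.array([[1.0, 2.0], [3.0, 4.0]])
print(sad(L, L), ssd(L, L), ncc(L, L))   # 0.0 0.0 1.0
```

Note that SAD and SSD are minimized at the best match, while NCC is maximized, so a matcher built on NCC selects the largest rather than the smallest score.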
Referring to Fig. 6, assume the obstacle threshold is set to H = 10; the matching windows of disparity map D1 whose disparity values are greater than H are then considered to contain an obstacle. From the description above it can be seen that the L(1,15) region contains an obstacle; see the square obstacle region T shown in Fig. 6, where black represents a disparity value of 0 and the gray area represents a disparity value of 12.
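The thresholding and localization of step S103 can be sketched as follows, following the Fig. 6 example where a disparity value greater than H indicates an obstacle (a minimal numpy sketch; the threshold H, focal length f, and baseline B are illustrative values, a bounding box stands in for full contour detection, and depth comes from the classic stereo relation Z = f·B/d):

```python
import numpy as np

def locate_obstacle(disparity, H=10.0, f=700.0, B=0.12):
    """Segment the obstacle region of a disparity map and estimate its depth.

    disparity : dense disparity map (pixels); larger disparity = nearer object.
    H         : obstacle threshold on disparity.
    f, B      : focal length (pixels) and baseline (metres) of the camera pair.
    """
    mask = disparity > H                  # candidate obstacle pixels
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None                       # no obstacle present
    # Bounding box of the obstacle region (a stand-in for contour detection).
    box = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
    depth = float(f * B / disparity[mask].mean())   # Z = f * B / d
    return box, depth

# Toy disparity map mirroring Fig. 6: background 0, obstacle square at 12.
d = np.zeros((20, 20))
d[5:10, 8:14] = 12.0
print(locate_obstacle(d))   # ((8, 5, 13, 9), 7.0)
```

With real camera intrinsics and extrinsics the same box and mean disparity would be back-projected to a 3-D bearing rather than a single depth value.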
Taking the first matching area L(1,15) shown in Fig. 6 as the target first matching window, further fine matching is performed on the L(1,15) window. As shown in Fig. 7, finer matching computation is carried out on the T region using k = 5, i.e., 5 × 5 windows, so that there are 6 second matching areas both horizontally and vertically. Taking the first row as an example, the matching windows on the first row of the left view are numbered TL(1,1), TL(1,2), ..., TL(1,6), and the matching windows on the right view are numbered TR(1,1), TR(1,2), ..., TR(1,6). TL(1,1) is then matched against TR(1,1) through TR(1,6) in turn, and the window with the smallest matching cost is taken, provided that this cost is smaller than a prescribed matching threshold G; otherwise the point is considered unmatchable, for example when occlusion prevents the left and right images from being matched. Assuming the matching window satisfying this condition is numbered TR(1,i), where i ∈ (1,6), the disparity value of TL(1,1) is D2(1,1) = i - 1 + D1(1,15), which represents the disparity of all pixels in the window area of the first row and i-th column. Repeating the above steps to complete the stereo matching of the remaining rows yields a complete fine disparity map D2.
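The coarse-to-fine relation D2 = (fine offset) + D1 used above can be sketched as follows (a minimal numpy sketch; the window size, search range, matching threshold G, and toy image content are illustrative):

```python
import numpy as np

def refine_disparity(left, right, col, d1, win=5, search=6, G=1e6):
    """Refine the coarse disparity d1 (in pixels) of the left-view window
    starting at column `col`: re-match a small `win`-wide window against
    `search` candidate positions around the coarse match in the right view.
    Returns d1 plus the best fine offset, or None if the best SAD cost
    exceeds the matching threshold G (unmatchable, e.g. due to occlusion)."""
    L = left[:, col:col + win]
    base = col - d1                       # where coarse matching placed us
    costs = [np.abs(L - right[:, base - k:base - k + win]).sum()
             for k in range(search)]
    k = int(np.argmin(costs))
    if costs[k] > G:
        return None                       # no acceptable match found
    return d1 + k                         # fine disparity D2 = offset + D1

# Toy row pair: a patch at column 20 on the left, column 8 on the right,
# i.e. a true disparity of 12, with a coarse estimate of only 10.
left = np.zeros((5, 40)); left[:, 20:25] = 1.0
right = np.zeros((5, 40)); right[:, 8:13] = 1.0
print(refine_disparity(left, right, col=20, d1=10))  # 12
```

The fine pass only searches a few candidates near the coarse result, which is why restricting it to the target matching areas keeps the added cost small.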
Meanwhile after disparity map D2 is obtained, higher resolution (such as 4 × 4,3 × 3,2 × 2) can also be repeatedly used repeatedly Matching area carry out matching primitives so that obtaining the higher disparity map of fineness, and therefrom can determine profile more The clearly profile information of barrier.
Optionally, in order to improve the fineness of the obstacle contour in the first disparity map, the present application may also, after step S103, segment the obstacle region obtained a second time and match it repeatedly with matching windows of higher resolution.
Illustratively, the following steps are further included after step S102:
S102a, obtaining, from the second matching areas within the target first matching area of the left view and the second matching areas within the target first matching area of the right view, the target second matching areas containing an obstacle, respectively.
S102b, matching the third matching areas within the target second matching area of the left view with the third matching areas within the target second matching area of the right view to obtain a third disparity map, wherein the size of the third matching area in the present application is smaller than the size of the second matching area.
In a specific implementation, the number of repetitions and the size of the matching window used in each pass can be set, and the above steps S102a and S102b repeated accordingly.
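The repetition of steps S102a and S102b can be organized as a simple coarse-to-fine loop (a schematic numpy sketch over a single row pair; the window schedule, search ranges, and ramp-textured toy signal are illustrative):

```python
import numpy as np

def best_disparity(left_row, right_row, x, win, cands):
    """One matching pass: SAD-match the `win`-wide window at column x of the
    left row against the candidate disparities `cands` in the right row."""
    L = left_row[x:x + win]
    costs = [np.abs(L - right_row[x - c:x - c + win]).sum() for c in cands]
    return list(cands)[int(np.argmin(costs))]

def repeated_matching(left_row, right_row, x):
    """Mimic the repeated S102a/S102b refinement: a coarse full-range pass
    with a large window, then finer passes with ever smaller windows that
    search only around the previous estimate."""
    d = best_disparity(left_row, right_row, x, win=8, cands=range(0, x + 1))
    for win in (4, 2):                    # shrinking matching windows
        d = best_disparity(left_row, right_row, x, win,
                           cands=range(max(0, d - 2), d + 3))
    return d

# A ramp-textured patch at column 30 on the left, column 18 on the right,
# i.e. a true disparity of 12.
left = np.zeros(64); left[30:38] = np.arange(1, 9)
right = np.zeros(64); right[18:26] = np.arange(1, 9)
print(repeated_matching(left, right, x=30))   # 12
```

Each additional pass touches only a small neighbourhood of the previous estimate, so the schedule of window sizes directly controls the trade-off between contour fineness and computation.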
The foregoing describes the solutions provided by the embodiments of the present application mainly from the perspective of the obstacle detection device and the terminal to which the device is applied. It can be understood that, in order to realize the above functions, the device comprises corresponding hardware structures and/or software modules for performing each function. Those skilled in the art will readily appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functions using different methods for each particular application, but such implementations should not be considered to go beyond the scope of the present application.
The embodiments of the present application may divide the obstacle detection device into functional modules according to the above method examples; for example, each function may be assigned its own functional module, or two or more functions may be integrated into one processing module. The above integrated module may be implemented in the form of hardware or in the form of a software functional module. It should be noted that the division into modules in the embodiments of the present application is schematic and is merely a division by logical function; other division manners are possible in actual implementation.
The device embodiments corresponding to the method embodiments presented above are described below. It should be noted that, for the explanation of related content in the following device embodiments, reference may be made to the above method embodiments.
In the case where each functional module is divided according to its corresponding function, Fig. 8 shows a possible structural schematic of the obstacle detection device involved in the above embodiments. The device 3 includes: an acquisition module 31, a matching module 32, and a determining module 33. The acquisition module 31 is configured to support the obstacle detection device in performing step S101 in Fig. 3; the matching module 32 is configured to support the device in performing step S102 in Fig. 3; and the determining module 33 is configured to support the device in performing step S103 in Fig. 3. Further, the acquisition module 31 is specifically configured to support the device in performing the above steps S101a and S101c, and the matching module 32 is specifically configured to support the device in performing the above step S101b. Further, the acquisition module 31 is specifically configured to support the device in performing the above step S102a, and the matching module 32 is specifically configured to support the device in performing the above step S102b.
Further, all the related content of each step involved in the above method embodiments can be cited as the functional description of the corresponding functional module, and will not be repeated here.
In a hardware implementation, the above acquisition module 31, matching module 32, and determining module 33 may be processors. The programs corresponding to the actions performed by the obstacle detection device may be stored in software form in the memory of the device, so that the processor can invoke them to perform the operations corresponding to each of the above modules.
Fig. 9 shows a possible structural schematic of an electronic device involved in the embodiments of the present application. The device 4 includes: a processor 41, a memory 42, a system bus 43, and a communication interface 44. The memory 42 is configured to store computer-executable code; the processor 41 is connected to the memory 42 through the system bus 43; and when the device runs, the processor 41 executes the computer-executable code stored in the memory 42 to perform any of the obstacle detection methods provided by the embodiments of the present application. For example, the processor 41 is configured to support the device in performing all the steps in Fig. 3, and/or other processes of the techniques described herein; for the specific obstacle detection method, reference is made to the related description above and in the accompanying drawings, which is not repeated here.
The embodiments of the present application also provide a storage medium, which may include the memory 42.
The embodiments of the present application also provide a computer program product that can be loaded directly into the memory 42 and contains software code; after the computer program is loaded and executed by a computer, the above obstacle detection method can be realized.
The processor 41 may be a single processor or a collective term for multiple processing elements. For example, the processor 41 may be a central processing unit (CPU). The processor 41 may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like, which can implement or execute the various exemplary logical blocks, modules, and circuits described in connection with the present disclosure. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The processor 41 may also be a dedicated processor, which may include at least one of a baseband processing chip, a radio-frequency processing chip, and the like. The processor may also be a combination implementing computing functions, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor. Further, the dedicated processor may also include chips with other dedicated processing functions of the device.
The steps of the methods described in connection with the present disclosure may be implemented in hardware, or by a processor executing software instructions. The software instructions may consist of corresponding software modules, and the software modules may be stored in a random access memory (RAM), a flash memory, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a register, a hard disk, a removable hard disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium well known in the art. An exemplary storage medium is coupled to the processor so that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be an integral part of the processor. The processor and the storage medium may reside in an ASIC. In addition, the ASIC may reside in a terminal device. Of course, the processor and the storage medium may also exist in the terminal device as discrete components.
The system bus 43 may include a data bus, a power bus, a control bus, a signal status bus, and the like. For clarity of explanation in this embodiment, the various buses are all illustrated as the system bus 43 in Fig. 9.
The communication interface 44 may specifically be a transceiver on the device. The transceiver may be a wireless transceiver; for example, the wireless transceiver may be an antenna of the device. The processor 41 communicates with other equipment through the communication interface 44; for example, if the device is a module or component in a terminal device, the device is used for data interaction with the other modules in that terminal device.
The embodiments of the present application also provide a robot, which includes the obstacle detection device corresponding to Fig. 9.
Those skilled in the art will appreciate that, in one or more of the above examples, the functions described herein may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored in a computer-readable medium or transmitted as one or more instructions or code on a computer-readable medium. Computer-readable media include computer storage media and communication media, where communication media include any medium that facilitates the transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.
Finally, it should be noted that the above embodiments merely describe in further detail the purposes, technical solutions, and beneficial effects of the present application. It should be understood that the foregoing are merely embodiments of the present application and are not intended to limit its protection scope; any modifications, equivalent substitutions, and improvements made on the basis of the technical solutions of the present application shall all be included within the protection scope of the present application.

Claims (14)

  1. An obstacle detection method, characterized in that it comprises:
    obtaining, from a first matching area of a left view of a predetermined scene and a first matching area of a right view, a target first matching area containing an obstacle, respectively;
    matching a second matching area within the target first matching area of the left view with a second matching area within the target first matching area of the right view to obtain a first disparity map, wherein the size of the second matching area is smaller than the size of the first matching area; and
    determining position information of the obstacle in the first disparity map.
  2. The method according to claim 1, characterized in that obtaining, from the first matching area of the left view of the predetermined scene and the first matching area of the right view, the target first matching area containing an obstacle, respectively, comprises:
    obtaining a left view and a right view captured by a binocular camera at the same moment in the predetermined scene;
    matching the first matching area of the left view with the first matching area of the right view to obtain a second disparity map; and
    determining, according to the second disparity map, the target first matching area containing an obstacle from the first matching area of the left view and the first matching area of the right view, respectively.
  3. The method according to claim 2, characterized in that determining, according to the second disparity map, the target first matching area containing an obstacle from the first matching area of the left view and the first matching area of the right view, respectively, comprises:
    determining, from the corresponding first matching areas of the left view and the right view, the first matching area corresponding to a region of the second disparity map whose disparity value is less than a predetermined obstacle threshold, as the target first matching area containing an obstacle.
  4. The method according to claim 1, characterized in that, before obtaining, from the first matching area of the left view of the predetermined scene and the first matching area of the right view, the target first matching area containing an obstacle, respectively, the method further comprises:
    dividing the left view and the right view into matching areas according to the window size of the first matching area and the resolution of the binocular camera, to obtain the first matching areas of the left view and the right view.
  5. The method according to claim 4, characterized in that, before matching the second matching area within the target first matching area of the left view with the second matching area within the target first matching area of the right view, the method further comprises:
    dividing the target first matching area of the left view and the target first matching area of the right view into matching areas according to the window size of the second matching area and the resolution of the target first matching area, to obtain the second matching areas of the target first matching areas of the left view and the right view; wherein the second preset matching window size is smaller than the first preset matching window size.
  6. An obstacle detection device, characterized in that it comprises:
    an acquisition module, configured to obtain, from a first matching area of a left view of a predetermined scene and a first matching area of a right view, a target first matching area containing an obstacle, respectively;
    a matching module, configured to match a second matching area within the target first matching area of the left view obtained by the acquisition module with a second matching area within the target first matching area of the right view, to obtain a first disparity map, wherein the second matching window is smaller than the first matching window; and
    a determining module, configured to determine position information of the obstacle in the first disparity map obtained by the matching module.
  7. The device according to claim 6, characterized in that:
    the acquisition module is further configured to obtain a left view and a right view captured by a binocular camera at the same moment in the predetermined scene;
    the matching module is further configured to match the first matching area of the left view with the first matching area of the right view to obtain a second disparity map; and
    when obtaining the target first matching area containing an obstacle from the first matching area of the left view of the predetermined scene and the first matching area of the right view, the acquisition module is specifically configured to: determine, according to the second disparity map, the target first matching area containing an obstacle from the first matching areas of the left view and the right view.
  8. The device according to claim 7, characterized in that, when determining, according to the second disparity map, the target first matching area containing an obstacle from the first matching area of the left view and the first matching area of the right view, respectively, the acquisition module is specifically configured to:
    determine, from the corresponding first matching areas of the left view and the right view, the first matching area corresponding to a region of the second disparity map whose disparity value is less than a predetermined obstacle threshold, as the target first matching area containing an obstacle.
  9. The device according to claim 6, characterized in that the matching module is further configured to:
    divide the left view and the right view into matching areas according to the window size of the first matching area and the resolution of the binocular camera, to obtain the first matching areas of the left view and the right view.
  10. The device according to claim 9, characterized in that the matching module is further configured to:
    divide the target first matching area of the left view and the target first matching area of the right view into matching areas according to the window size of the second matching area and the resolution of the target first matching area, to obtain the second matching areas of the target first matching areas of the left view and the right view; wherein the second preset matching window size is smaller than the first preset matching window size.
  11. An electronic device, characterized in that the electronic device comprises: a memory and a processor, the memory being coupled to the processor and configured to store computer-executable code, the computer-executable code being configured to control the processor to perform the method according to any one of claims 1-5.
  12. A computer storage medium, characterized in that it is configured to store computer software instructions used by an obstacle detection device, the instructions comprising program code designed to perform the method according to any one of claims 1-5.
  13. A computer program product, characterized in that it can be loaded directly into the internal memory of a computer and contains software code, wherein the computer program, after being loaded and executed by the computer, can realize the method according to any one of claims 1-5.
  14. A robot, characterized in that it comprises the electronic device according to claim 11.
CN201680006896.1A 2016-12-30 2016-12-30 Obstacle detection method and device Active CN107636679B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/113550 WO2018120040A1 (en) 2016-12-30 2016-12-30 Obstacle detection method and device

Publications (2)

Publication Number Publication Date
CN107636679A true CN107636679A (en) 2018-01-26
CN107636679B CN107636679B (en) 2021-05-25

Family

ID=61112708

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201680006896.1A Active CN107636679B (en) 2016-12-30 2016-12-30 Obstacle detection method and device

Country Status (2)

Country Link
CN (1) CN107636679B (en)
WO (1) WO2018120040A1 (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109035322A (en) * 2018-07-17 2018-12-18 重庆大学 A kind of detection of obstacles and recognition methods based on binocular vision
CN109661631A (en) * 2018-03-27 2019-04-19 深圳市大疆创新科技有限公司 Control method, device and the unmanned plane of unmanned plane
CN110633600A (en) * 2018-06-21 2019-12-31 海信集团有限公司 Obstacle detection method and device
CN111191538A (en) * 2019-12-20 2020-05-22 北京中科慧眼科技有限公司 Obstacle tracking method, device and system based on binocular camera and storage medium
CN111583336A (en) * 2020-04-22 2020-08-25 深圳市优必选科技股份有限公司 Robot and inspection method and device thereof
CN111898396A (en) * 2019-05-06 2020-11-06 北京四维图新科技股份有限公司 Obstacle detection method and device
CN112149458A (en) * 2019-06-27 2020-12-29 商汤集团有限公司 Obstacle detection method, intelligent driving control method, device, medium, and apparatus
CN112489186A (en) * 2020-10-28 2021-03-12 中汽数据(天津)有限公司 Automatic driving binocular data perception algorithm
WO2022017320A1 (en) * 2020-07-21 2022-01-27 影石创新科技股份有限公司 Obstacle information obtaining method, obstacle avoidance method, moving apparatus, and computer-readable storage medium

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109631853A (en) * 2018-12-29 2019-04-16 深圳市道通智能航空技术有限公司 A kind of depth map processing method, device and unmanned plane
CN111443365B (en) * 2020-03-27 2022-06-17 维沃移动通信有限公司 Positioning method and electronic equipment
CN111899170A (en) * 2020-07-08 2020-11-06 北京三快在线科技有限公司 Obstacle detection method and device, unmanned aerial vehicle and storage medium
CN111986248B (en) * 2020-08-18 2024-02-09 东软睿驰汽车技术(沈阳)有限公司 Multi-vision sensing method and device and automatic driving automobile
CN112698421A (en) * 2020-12-11 2021-04-23 北京百度网讯科技有限公司 Evaluation method, device, equipment and storage medium for obstacle detection
CN113534737B (en) * 2021-07-15 2022-07-19 中国人民解放军火箭军工程大学 PTZ (Pan/Tilt/zoom) dome camera control parameter acquisition system based on multi-view vision

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102313536A (en) * 2011-07-21 2012-01-11 清华大学 Method for barrier perception based on airborne binocular vision
CN103868460A (en) * 2014-03-13 2014-06-18 桂林电子科技大学 Parallax optimization algorithm-based binocular stereo vision automatic measurement method
CN104021388A (en) * 2014-05-14 2014-09-03 西安理工大学 Reversing obstacle automatic detection and early warning method based on binocular vision
CN105205458A (en) * 2015-09-16 2015-12-30 北京邮电大学 Human face living detection method, device and system
CN105654493A (en) * 2015-12-30 2016-06-08 哈尔滨工业大学 Improved method for optimizing optical affine-invariant binocular stereo matching cost and parallax
CN105869167A (en) * 2016-03-30 2016-08-17 天津大学 High-resolution depth map acquisition method based on active and passive fusion
CN105931231A (en) * 2016-04-15 2016-09-07 浙江大学 Stereo matching method based on full-connection random field combination energy minimization

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102175222B (en) * 2011-03-04 2012-09-05 南开大学 Crane obstacle-avoidance system based on stereoscopic vision
US10257506B2 (en) * 2012-12-28 2019-04-09 Samsung Electronics Co., Ltd. Method of obtaining depth information and display apparatus
CN108594851A (en) * 2015-10-22 2018-09-28 飞智控(天津)科技有限公司 A kind of autonomous obstacle detection system of unmanned plane based on binocular vision, method and unmanned plane

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102313536A (en) * 2011-07-21 2012-01-11 清华大学 Method for barrier perception based on airborne binocular vision
CN103868460A (en) * 2014-03-13 2014-06-18 桂林电子科技大学 Parallax optimization algorithm-based binocular stereo vision automatic measurement method
CN104021388A (en) * 2014-05-14 2014-09-03 西安理工大学 Reversing obstacle automatic detection and early warning method based on binocular vision
CN105205458A (en) * 2015-09-16 2015-12-30 北京邮电大学 Human face living detection method, device and system
CN105654493A (en) * 2015-12-30 2016-06-08 哈尔滨工业大学 Improved method for optimizing optical affine-invariant binocular stereo matching cost and parallax
CN105869167A (en) * 2016-03-30 2016-08-17 天津大学 High-resolution depth map acquisition method based on active and passive fusion
CN105931231A (en) * 2016-04-15 2016-09-07 浙江大学 Stereo matching method based on full-connection random field combination energy minimization

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Y. Yang et al.: "Obstacles and Pedestrian Detection on a Moving Vehicle", International Journal of Advanced Robotic Systems *
Yang Fuzeng: "Multiple farmland obstacle detection methods based on stereo vision technology", Transactions of the Chinese Society for Agricultural Machinery *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109661631A (en) * 2018-03-27 2019-04-19 深圳市大疆创新科技有限公司 Control method, device and the unmanned plane of unmanned plane
WO2019183789A1 (en) * 2018-03-27 2019-10-03 深圳市大疆创新科技有限公司 Method and apparatus for controlling unmanned aerial vehicle, and unmanned aerial vehicle
CN110633600A (en) * 2018-06-21 2019-12-31 海信集团有限公司 Obstacle detection method and device
CN110633600B (en) * 2018-06-21 2023-04-25 海信集团有限公司 Obstacle detection method and device
CN109035322A (en) * 2018-07-17 2018-12-18 重庆大学 A kind of detection of obstacles and recognition methods based on binocular vision
CN111898396A (en) * 2019-05-06 2020-11-06 北京四维图新科技股份有限公司 Obstacle detection method and device
CN112149458A (en) * 2019-06-27 2020-12-29 商汤集团有限公司 Obstacle detection method, intelligent driving control method, device, medium, and apparatus
CN111191538A (en) * 2019-12-20 2020-05-22 北京中科慧眼科技有限公司 Obstacle tracking method, device and system based on binocular camera and storage medium
CN111583336A (en) * 2020-04-22 2020-08-25 深圳市优必选科技股份有限公司 Robot and inspection method and device thereof
CN111583336B (en) * 2020-04-22 2023-12-01 深圳市优必选科技股份有限公司 Robot and inspection method and device thereof
WO2022017320A1 (en) * 2020-07-21 2022-01-27 影石创新科技股份有限公司 Obstacle information obtaining method, obstacle avoidance method, moving apparatus, and computer-readable storage medium
CN112489186A (en) * 2020-10-28 2021-03-12 中汽数据(天津)有限公司 Automatic driving binocular data perception algorithm
CN112489186B (en) * 2020-10-28 2023-06-27 中汽数据(天津)有限公司 Automatic driving binocular data sensing method

Also Published As

Publication number Publication date
CN107636679B (en) 2021-05-25
WO2018120040A1 (en) 2018-07-05

Similar Documents

Publication Publication Date Title
CN107636679A (en) A kind of obstacle detection method and device
WO2021004548A1 (en) Vehicle speed intelligent measurement method based on binocular stereo vision system
CN112292711B (en) Associating LIDAR data and image data
US10789719B2 (en) Method and apparatus for detection of false alarm obstacle
CN106529495A (en) Obstacle detection method of aircraft and device
EP3566172A1 (en) Systems and methods for lane-marker detection
CN103093479A (en) Target positioning method based on binocular vision
CN104864849B (en) Vision navigation method and device and robot
CN105551020A (en) Method and device for detecting dimensions of target object
CN114120149B (en) Oblique photogrammetry building feature point extraction method and device, electronic equipment and medium
CN108107897A (en) Real time sensor control method and device
CN113580134A (en) Visual positioning method, device, robot, storage medium and program product
CN113034347B (en) Oblique photography image processing method, device, processing equipment and storage medium
CN116385994A (en) Three-dimensional road route extraction method and related equipment
CN114648639B (en) Target vehicle detection method, system and device
CN113902047B (en) Image element matching method, device, equipment and storage medium
WO2023283929A1 (en) Method and apparatus for calibrating external parameters of binocular camera
CN116385997A (en) Vehicle-mounted obstacle accurate sensing method, system and storage medium
CN107506782B (en) Dense matching method based on confidence weight bilateral filtering
CN113822159B (en) Three-dimensional target detection method, device and computer
CN112257485A (en) Object detection method and device, storage medium and electronic equipment
CN108416305A (en) Position and orientation estimation method, device and the terminal of continuous type lane segmentation object
CN116299300B (en) Determination method and device for drivable area, computer equipment and storage medium
CN116007637B (en) Positioning device, method, in-vehicle apparatus, vehicle, and computer program product
CN113870365B (en) Camera calibration method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20210201

Address after: 201111 2nd floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai

Applicant after: Dalu Robot Co.,Ltd.

Address before: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)

Applicant before: Shenzhen Qianhaida Yunyun Intelligent Technology Co.,Ltd.

GR01 Patent grant
GR01 Patent grant
CP03 Change of name, title or address
CP03 Change of name, title or address

Address after: 201111 Building 8, No. 207, Zhongqing Road, Minhang District, Shanghai

Patentee after: Dayu robot Co.,Ltd.

Address before: 201111 2nd floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai

Patentee before: Dalu Robot Co.,Ltd.