CN110348333A - Object detecting method, device, storage medium and electronic equipment - Google Patents

Object detecting method, device, storage medium and electronic equipment

Info

Publication number
CN110348333A
CN110348333A (application CN201910553167.3A)
Authority
CN
China
Prior art keywords
target pixel
pixel points
geometrical characteristic
class
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910553167.3A
Other languages
Chinese (zh)
Inventor
赵绍安
林义闽
廉士国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cloudminds Robotics Co Ltd
Original Assignee
Cloudminds Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cloudminds Inc filed Critical Cloudminds Inc
Priority to CN201910553167.3A priority Critical patent/CN110348333A/en
Publication of CN110348333A publication Critical patent/CN110348333A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/56: Extraction of image or video features relating to colour
    • G06V 20/00: Scenes; scene-specific elements
    • G06V 20/10: Terrestrial scenes

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to an object detection method and apparatus, a storage medium, and an electronic device. The method includes: extracting, from RGB image data of a target scene, target pixels that meet a preset condition; clustering the target pixels according to their point cloud data to obtain at least one target pixel class, where each target pixel class corresponds to one object; and determining a geometric feature of the at least one target pixel class, and identifying the object corresponding to the target pixel class according to the determined geometric feature of the target pixel class and the geometric features of known target objects. By combining the geometric features of objects with two-dimensional image-based target detection, the disclosure improves the accuracy of object detection.

Description

Object detecting method, device, storage medium and electronic equipment
Technical field
The present disclosure relates to the field of computer vision, and in particular to an object detection method and apparatus, a storage medium, and an electronic device.
Background technique
Intelligent object detection is one of the popular research directions in artificial intelligence, with applications in smart factories, smart homes, and intelligent services. Existing image-based target detection techniques have several drawbacks. For example, when an object is to be grasped, an image that merely depicts the object can mislead the system, causing a robot to try to grasp a picture printed with the object rather than the real object.
In robotic grasping, related techniques include attaching a two-dimensional code to the object to locate it or to fix the trajectory of the operation, or triggering a fixed grasping motion of the robot arm with a photoelectric sensor. The former is suited to flexible grasping scenarios, for example instructing a service robot to pick up a paper cup; the latter can be used for grasping tasks in fixed factory settings. The two-dimensional-code approach has several shortcomings: (1) the appearance of the object must be modified; (2) the precision is limited by the pixel resolution, and it usually drops sharply as the object moves farther from the camera; (3) it cannot provide information about the whole object, and the object's position can only be computed relative to the position of the code. More advanced grasping techniques rely mainly on reinforcement learning, which scores the robot's current behavior with a value network and adjusts the grasping strategy accordingly. This approach depends heavily on the reward mechanism of the constructed value network, grasping different objects usually requires long training, and such end-to-end methods are at present difficult to apply to flexible grasping tasks.
Summary of the invention
To overcome the problems in the related art, the present disclosure provides an object detection method and apparatus, a storage medium, and an electronic device.
To achieve the above objects, according to a first aspect of the embodiments of the present disclosure, an object detection method is provided. The method includes:
extracting, from RGB image data of a target scene, target pixels that meet a preset condition;
clustering the target pixels according to point cloud data of the target pixels to obtain at least one target pixel class, where each target pixel class corresponds to one object; and
determining a geometric feature of the at least one target pixel class, and identifying the object corresponding to the target pixel class according to the determined geometric feature of the target pixel class and geometric features of known target objects.
Optionally, before the step of extracting, from the RGB image data of the target scene, the target pixels that meet the preset condition, the method further includes:
obtaining depth image data of the target scene; and
aligning the depth image data with the RGB image data to obtain the point cloud data of the target scene.
Optionally, the method further includes:
obtaining depth image data of the target pixels; and
aligning the depth image data of the target pixels with the RGB image data of the target pixels to obtain the point cloud data of the target pixels.
Optionally, extracting, from the RGB image data of the target scene, the target pixels that meet the preset condition includes:
transforming the RGB image data into HSV space; and
determining the pixels that fall within the HSV interval corresponding to a preset target color as the target pixels.
Optionally, identifying the object corresponding to the target pixel class according to the geometric feature of the target pixel class and the geometric features of the known objects includes:
obtaining a first geometric feature vector corresponding to the target pixel class, where the first geometric feature vector is obtained by arranging the geometric features of the target pixel class in a preset order;
obtaining a second geometric feature vector corresponding to each target object, where the second geometric feature vector is obtained by arranging the geometric features of the target object corresponding to that second geometric feature vector in the preset order; and
if there is a second geometric feature vector similar to the first geometric feature vector corresponding to the target pixel class, identifying the object corresponding to the target pixel class as the target object corresponding to the similar second geometric feature vector.
Optionally, if there is a second geometric feature vector whose cosine distance to the first geometric feature vector corresponding to the target pixel class is greater than or equal to a preset distance, the second geometric feature vector with the largest cosine distance to the first geometric feature vector is determined as the second geometric feature vector similar to the first geometric feature vector.
Optionally, the method further includes:
calculating the distance from the class center of each target pixel class to the origin;
and the step of determining the geometric feature of the at least one target pixel class and identifying the object corresponding to the target pixel class according to the determined geometric feature of the target pixel class and the geometric features of the known target objects includes:
if not all target objects have been identified and there are target pixel classes that have not yet undergone identification, determining, among the target pixel classes not yet identified, the target pixel class whose class center is nearest to the origin as the current target pixel class;
determining the geometric feature of the current target pixel class; and
identifying the object corresponding to the current target pixel class according to the determined geometric feature of the current target pixel class and the geometric features of the target objects.
Optionally, the method further includes: after identifying the object corresponding to the target pixel class, determining the coordinates of the object corresponding to the target pixel class.
Optionally, the geometric feature includes one or more of: length, width-to-length ratio, height-to-width ratio, height-to-length ratio, height divided by the sum of length, width, and height, and indication information characterizing whether the object is a straight line.
According to a second aspect of the embodiments of the present disclosure, an object detection apparatus is provided. The apparatus includes:
an extraction module, configured to extract, from RGB image data of a target scene, target pixels that meet a preset condition;
a clustering module, configured to cluster the target pixels according to point cloud data of the target pixels to obtain at least one target pixel class, where each target pixel class corresponds to one object; and
an identification module, configured to determine a geometric feature of the at least one target pixel class, and to identify the object corresponding to the target pixel class according to the determined geometric feature of the target pixel class and geometric features of known target objects.
Optionally, the apparatus further includes:
a first obtaining module, configured to obtain depth image data of the target scene before the extraction module extracts, from the RGB image data of the target scene, the target pixels that meet the preset condition; and
a first alignment module, configured to align the depth image data with the RGB image data to obtain the point cloud data of the target scene.
Optionally, the apparatus further includes:
a second obtaining module, configured to obtain depth image data of the target pixels; and
a second alignment module, configured to align the depth image data of the target pixels with the RGB image data of the target pixels to obtain the point cloud data of the target pixels.
Optionally, the extraction module includes:
a transform submodule, configured to transform the RGB image data into HSV space; and
a determination submodule, configured to determine the pixels that fall within the HSV interval corresponding to a preset target color as the target pixels.
Optionally, the identification module is configured to:
obtain a first geometric feature vector corresponding to the target pixel class, where the first geometric feature vector is obtained by arranging the geometric features of the target pixel class in a preset order;
obtain a second geometric feature vector corresponding to each target object, where the second geometric feature vector is obtained by arranging the geometric features of the target object corresponding to that second geometric feature vector in the preset order; and
if there is a second geometric feature vector similar to the first geometric feature vector corresponding to the target pixel class, identify the object corresponding to the target pixel class as the target object corresponding to the similar second geometric feature vector.
Optionally, if there is a second geometric feature vector whose cosine distance to the first geometric feature vector corresponding to the target pixel class is greater than or equal to a preset distance, the second geometric feature vector with the largest cosine distance to the first geometric feature vector is determined as the second geometric feature vector similar to the first geometric feature vector.
Optionally, the apparatus further includes:
a calculation module, configured to calculate the distance from the class center of each target pixel class to the origin;
and the identification module is configured to:
if not all target objects have been identified and there are target pixel classes that have not yet undergone identification, determine, among the target pixel classes not yet identified, the target pixel class whose class center is nearest to the origin as the current target pixel class;
determine the geometric feature of the current target pixel class; and
identify the object corresponding to the current target pixel class according to the determined geometric feature of the current target pixel class and the geometric features of the target objects.
Optionally, the apparatus further includes: a determination module, configured to determine, after the object corresponding to the target pixel class is identified, the coordinates of the object corresponding to the target pixel class.
Optionally, the geometric feature includes one or more of: length, width-to-length ratio, height-to-width ratio, height divided by the sum of length, width, and height, and indication information characterizing whether the object is a straight line.
According to a third aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored. When the program is executed by a processor, the steps of the object detection method provided in the first aspect of the disclosure are implemented.
According to a fourth aspect of the embodiments of the present disclosure, an electronic device is provided, including:
a memory, on which a computer program is stored; and
a processor, configured to execute the computer program in the memory to implement the steps of the object detection method provided in the first aspect of the disclosure.
In the present disclosure, target pixels that meet a preset condition are first extracted from the RGB image data of the target scene; the target pixels are then clustered according to their point cloud data to obtain at least one target pixel class, where each target pixel class corresponds to one object; finally, the geometric feature of the at least one target pixel class is determined, and the object corresponding to the target pixel class is identified according to the determined geometric feature of the target pixel class and the geometric features of known target objects. The scheme of the present disclosure combines the geometric features of objects with two-dimensional image-based target detection. In this way, interference caused to the robot by still images that depict a target object can be excluded, the accuracy of object detection is improved, and the robot is assisted in carrying out grasping tasks more flexibly and accurately. Moreover, the method reduces the algorithm's dependence on the image detection result: it only requires that the target scene provided by the image detection contain the object to be detected, which significantly improves the robustness of the detection method. In addition, the object detection method provided by the disclosure does not require modifying the appearance of the object, and for different objects only their geometric properties need to be configured, without building complex models, so the method is simple to implement, detection efficiency is greatly improved, and the method is more widely applicable.
Other features and advantages of the present disclosure will be described in detail in the detailed description section below.
Detailed description of the invention
The accompanying drawings are provided for a further understanding of the present disclosure and constitute part of the specification. Together with the following detailed description, they serve to explain the disclosure, but do not limit it. In the drawings:
Fig. 1 is a flowchart of an object detection method according to an exemplary embodiment of the disclosure.
Fig. 2 is a flowchart of an object detection method according to another exemplary embodiment of the disclosure.
Fig. 3 is a flowchart of a target pixel determination method according to an exemplary embodiment of the disclosure.
Fig. 4 is a flowchart of an object identification method according to an exemplary embodiment of the disclosure.
Fig. 5 is a flowchart of an object detection method according to another exemplary embodiment of the disclosure.
Fig. 6 is a block diagram of an object detection apparatus according to an exemplary embodiment of the disclosure.
Fig. 7 is a block diagram of an electronic device according to an exemplary embodiment of the disclosure.
Fig. 8 is a block diagram of an electronic device according to an exemplary embodiment of the disclosure.
Specific embodiment
Specific embodiments of the present disclosure are described in detail below with reference to the accompanying drawings. It should be understood that the specific embodiments described here are only used to describe and explain the disclosure and do not limit it.
Fig. 1 is a flowchart of an object detection method according to an exemplary embodiment of the disclosure. The method can be applied to a robot, in particular a robot that performs grasping tasks. Alternatively, the method can be applied in the cloud, for example on a server that controls the robot.
As shown in Fig. 1, the method may include the following steps.
In S11, target pixels that meet a preset condition are extracted from RGB image data of a target scene.
The target scene is the scene in which the object to be identified is located. For example, if a robot is to grasp a pen on a desk, the target scene can be the scene containing the desk, the desktop, and the pen. The RGB image data of the target scene can be acquired by an image acquisition device such as a camera.
After the RGB image data of the target scene is obtained, it is processed to extract the target pixels that meet the preset condition. The purpose of this step is to filter out the pixels that are likely to correspond to the object to be identified.
In S12, the target pixels are clustered according to their point cloud data to obtain at least one target pixel class, where each target pixel class corresponds to one object.
In one embodiment, the target pixels can be clustered according to the distance between them; for example, two points are grouped into the same class when the distance between them is less than a distance threshold. In another embodiment, the target pixels can be clustered according to both the distance and the normal vectors: two points are grouped into the same class when the distance between them is less than a distance threshold and the angle between their normal vectors is less than an angle threshold. Judging whether two points belong to the same class by both distance and normal angle makes the clustering result more accurate. It is worth noting that the clustering method is not limited to these two embodiments; other clustering methods also apply to the disclosure. After clustering, each resulting target pixel class corresponds to one object; for example, if two target pixel classes are obtained, it can be determined that there are two candidate objects.
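As a concrete illustration of the distance-plus-normal clustering just described, the following Python sketch grows classes greedily from the point cloud of the target pixels. It is only a minimal sketch under assumed thresholds; the names (cluster_target_pixels, dist_thresh, angle_thresh_deg) and the specific region-growing strategy are illustrative assumptions, not identifiers or requirements taken from the disclosure.

```python
import numpy as np

def cluster_target_pixels(points, normals, dist_thresh=0.02, angle_thresh_deg=20.0):
    """Greedy region growing: two points join the same class when their Euclidean
    distance is below dist_thresh (metres, assumed) and the angle between their
    unit normal vectors is below angle_thresh_deg (degrees, assumed)."""
    n = len(points)
    cos_thresh = np.cos(np.radians(angle_thresh_deg))
    labels = -np.ones(n, dtype=int)          # -1 means "not yet assigned"
    current = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        labels[seed] = current
        stack = [seed]
        while stack:
            i = stack.pop()
            d = np.linalg.norm(points - points[i], axis=1)   # distances to point i
            cos_sim = normals @ normals[i]                   # assumes unit normals
            mask = (labels == -1) & (d < dist_thresh) & (cos_sim > cos_thresh)
            for j in np.where(mask)[0]:
                labels[j] = current
                stack.append(j)
        current += 1
    return labels   # one integer label per target pixel; each label is one class
```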
In S13, the geometric feature of the at least one target pixel class is determined, and the object corresponding to the target pixel class is identified according to the determined geometric feature of the target pixel class and the geometric features of known target objects.
For example, the geometric feature of a target pixel class can be determined from the point cloud data of the points in that class. The geometric feature may include one or more of: length, width-to-length ratio, height-to-width ratio, height-to-length ratio, height divided by the sum of length, width, and height, and indication information characterizing whether the object is a straight line. Illustratively, the length, width, and height can be obtained by principal component analysis of the target pixel class, where the length corresponds to the largest eigenvalue, the width to the second largest, and the height to the smallest. Alternatively, the length, width, and height can be computed directly from the point cloud data of the target pixel class, for example by computing the maximum extents of the points along the X, Y, and Z directions and taking these extents as the length, width, and height. Once the length, width, and height are available, the width-to-length ratio, height-to-width ratio, height-to-length ratio, and height divided by the sum of length, width, and height can be computed.
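A minimal sketch of the principal-component-based size estimate mentioned above, assuming the class's point cloud is given as an N x 3 NumPy array in camera coordinates; the function and key names are illustrative assumptions, not terms from the disclosure:

```python
import numpy as np

def geometric_features(points):
    """Approximate length/width/height of a pixel class by projecting its points
    onto its principal axes (PCA): largest axis ~ length, next ~ width, smallest ~ height."""
    centered = points - points.mean(axis=0)
    _, vecs = np.linalg.eigh(np.cov(centered.T))   # eigenvectors, ascending eigenvalues
    proj = centered @ vecs[:, ::-1]                # columns reordered large -> small
    length, width, height = proj.max(axis=0) - proj.min(axis=0)
    # The "is it a straight line" indicator from the disclosure is left out here,
    # since the disclosure does not fix how it is computed.
    return {
        "length": length,
        "width_length_ratio": width / length,
        "height_width_ratio": height / width,
        "height_length_ratio": height / length,
        "height_over_lwh_sum": height / (length + width + height),
    }
```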
The known target object can be a preset object, for example the object the robot is to grasp in the current task. Its geometric features can be entered in advance. When identifying which object a target pixel class corresponds to, the geometric feature of the target pixel class can be matched against the geometric features of the known target object. If they match, the object corresponding to the target pixel class is determined to be that target object; if they do not match, the object corresponding to the target pixel class is determined not to be that target object, at which point the identification can end, or the geometric feature of the target pixel class can be matched against the geometric features of another target object to continue identifying the object corresponding to the target pixel class.
Since the geometric feature of a target pixel class characterizes the geometric feature of the object it corresponds to, identifying the object according to the determined geometric feature of the target pixel class excludes the interference caused to the robot by still images that depict a target object and improves the accuracy of object detection, thereby assisting the robot in carrying out grasping tasks more flexibly and accurately. Moreover, the method reduces the algorithm's dependence on the image detection result: it only requires that the target scene provided by the image detection contain the object to be detected, which significantly improves the robustness of the detection method. In addition, the object detection method provided by the disclosure does not require modifying the appearance of the object, and for different objects only their geometric properties need to be configured, without building complex models, so the method is simple to implement, detection efficiency is greatly improved, and the method is more widely applicable.
Fig. 2 is a flowchart of an object detection method according to another exemplary embodiment of the disclosure. In addition to the above S11-S13, the method may also include S201 and S202.
In S201, depth image data of the target scene is obtained.
In S202, the depth image data is aligned with the RGB image data of the target scene to obtain the point cloud data of the target scene.
In one embodiment, the depth image data and RGB image data of the target scene at the same moment can be obtained by a depth camera; alternatively, only the depth image data of the target scene is obtained by the depth camera. After the depth image data of the target scene is obtained, it is aligned with the RGB image data, where the alignment can map the depth image into the RGB image coordinate system or map the RGB image into the depth image coordinate system. It is worth noting that the way of obtaining the depth image data and RGB image data of the target scene at the same moment is not limited to a depth camera; a three-dimensional laser scanner, a binocular camera, and the like can also be used.
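As one possible reading of the alignment step, assuming the depth image has already been registered to the RGB image and pinhole intrinsics (fx, fy, cx, cy) are known, the per-pixel point cloud can be recovered by back-projection. This is a sketch under those assumptions rather than a procedure prescribed by the disclosure:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy, depth_scale=0.001):
    """Back-project an RGB-aligned depth image into camera-frame 3-D points
    using pinhole intrinsics; depth_scale converts the raw units (assumed
    millimeters here) to meters."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float32) * depth_scale
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.dstack((x, y, z)).reshape(-1, 3)   # one 3-D point per pixel
```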
With this embodiment, the depth image data and RGB image data of the target scene are obtained, and the point cloud data of the target scene is obtained by aligning the depth data with the RGB image data, so that the geometric features of objects are combined with image-based target detection and the accuracy of object detection is improved.
In another embodiment, the target pixels that meet the preset condition can first be extracted from the RGB image data of the target scene, the depth image data of those target pixels can then be obtained (for example by a depth camera), and afterwards the depth image data corresponding to the target pixels is aligned with the RGB image data to directly obtain the point cloud data of the target pixels. This reduces the amount of data involved in aligning the depth image data with the RGB image data and speeds up the alignment, making object detection faster and more efficient.
Fig. 3 is a flowchart of a target pixel determination method according to an exemplary embodiment of the disclosure. The determination method may include the following steps.
In S301, the RGB image data of the target scene is transformed into HSV space.
In S302, the pixels that fall within the HSV interval corresponding to a preset target color are determined as the target pixels. The color corresponding to the target object can be obtained in advance according to the target object; this color is the target color. Pixels that fall within the HSV interval corresponding to the target color are likely to belong to the target object, and these pixels are therefore the target pixels. Alternatively, in another embodiment, the target color is any color other than the color corresponding to the background portion of the target scene; the disclosure does not specifically limit this.
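A minimal OpenCV sketch of S301-S302, assuming the image is loaded in BGR order (OpenCV's default) and that the HSV interval of the target color is supplied by the caller; the example bounds in the docstring are illustrative values, not values fixed by the disclosure:

```python
import cv2
import numpy as np

def extract_target_pixels(bgr_image, hsv_low, hsv_high):
    """Convert the image to HSV and keep pixels inside the preset HSV interval
    of the target color, e.g. hsv_low=(0, 100, 100), hsv_high=(10, 255, 255)
    for a reddish target (assumed values)."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_low), np.array(hsv_high))
    ys, xs = np.nonzero(mask)
    return np.column_stack((xs, ys))   # pixel coordinates of candidate target pixels
```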
In one embodiment, the target pixels are determined by first transforming the RGB image data into HSV space and then determining the pixels that fall within the HSV interval corresponding to the preset target color as the target pixels. In another embodiment, the target pixels can be determined from the bounding boxes produced by a deep-learning framework, for example the boxes output by a deep-learning object detection model such as Faster R-CNN, SSD (Single Shot MultiBox Detector), or YOLO (You Only Look Once).
Fig. 4 is a flowchart of an object identification method according to an exemplary embodiment of the disclosure. The identification method may include the following steps.
In S401, a first geometric feature vector corresponding to the target pixel class is obtained, where the first geometric feature vector is obtained by arranging the geometric features of the target pixel class in a preset order.
As described above, the geometric feature may include one or more of: length, width-to-length ratio, height-to-width ratio, height-to-length ratio, height divided by the sum of length, width, and height, and indication information characterizing whether the object is a straight line. After the geometric features of the target pixel class are obtained, the corresponding first geometric feature vector can be generated from them. For example, suppose the geometric features of the target pixel class include the length, width-to-length ratio, height-to-width ratio, and height-to-length ratio, and the preset order is length, height-to-width ratio, width-to-length ratio, height-to-length ratio. The corresponding first geometric feature vector is then [a1, b1, c1, d1], where a1 is the length, b1 the height-to-width ratio, c1 the width-to-length ratio, and d1 the height-to-length ratio of the geometric features of the target pixel class.
In S402, a second geometric feature vector corresponding to each target object is obtained, where the second geometric feature vector is obtained by arranging the geometric features of the target object corresponding to that second geometric feature vector in the same preset order.
As noted above, the geometric features of a target object can be entered in advance, after which the second geometric feature vector corresponding to the target object can be generated from them. Illustratively, the items included in the geometric features of the target pixel class are the same as the items included in the geometric features of the target object.
For example, if the first geometric feature vector is [a1, b1, c1, d1], the second geometric feature vector corresponding to a target object is [a2, b2, c2, d2], where a2 is the length, b2 the height-to-width ratio, c2 the width-to-length ratio, and d2 the height-to-length ratio of the geometric features of the target object.
In S403, if there is a second geometric feature vector similar to the first geometric feature vector corresponding to the target pixel class, the object corresponding to the target pixel class is identified as the target object corresponding to that similar second geometric feature vector.
After the first geometric feature vector corresponding to the target pixel class and the second geometric feature vector corresponding to each target object are obtained, the first geometric feature vector can be matched against each second geometric feature vector one by one to determine whether the two are similar. If there is a second geometric feature vector similar to the first geometric feature vector corresponding to the target pixel class, the object corresponding to the target pixel class is similar to the target object corresponding to that second geometric feature vector, and the object corresponding to the target pixel class is therefore identified as that target object.
Illustratively, matching the first geometric feature vector against each second geometric feature vector one by one can be done by computing a similarity parameter between the first geometric feature vector and each second geometric feature vector. Optionally, the similarity parameter is the cosine distance: if there is a second geometric feature vector whose cosine distance to the first geometric feature vector is greater than or equal to a first preset distance, the second geometric feature vector with the largest cosine distance to the first geometric feature vector is determined as the second geometric feature vector similar to the first geometric feature vector. Optionally, the similarity parameter is the Euclidean distance: if there is a second geometric feature vector whose Euclidean distance to the first geometric feature vector is less than or equal to a second preset distance, the second geometric feature vector with the smallest Euclidean distance to the first geometric feature vector is determined as the second geometric feature vector similar to the first geometric feature vector. It is worth noting that the similarity parameter is not limited to these two implementations; other parameters that can determine the similarity between vectors also apply to the disclosure.
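The cosine-based matching described above might be sketched as follows; the similarity threshold min_cos and the dictionary-of-known-objects interface are assumptions made for illustration, not part of the disclosure:

```python
import numpy as np

def match_object(feature_vec, known_objects, min_cos=0.95):
    """Compare the first geometric feature vector of a pixel class with the second
    geometric feature vector of every known target object and return the name of
    the best cosine match at or above the (assumed) threshold min_cos."""
    feature_vec = np.asarray(feature_vec, dtype=float)
    best_name, best_cos = None, min_cos
    for name, ref_vec in known_objects.items():
        ref_vec = np.asarray(ref_vec, dtype=float)
        cos = np.dot(feature_vec, ref_vec) / (
            np.linalg.norm(feature_vec) * np.linalg.norm(ref_vec))
        if cos >= best_cos:
            best_name, best_cos = name, cos
    return best_name   # None when no known object is similar enough
```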
Fig. 5 is a flowchart of an object detection method according to another exemplary embodiment of the disclosure. As shown in Fig. 5, the method may include the following steps.
In S11, target pixels that meet a preset condition are extracted from RGB image data of a target scene.
In S12, the target pixels are clustered according to their point cloud data to obtain at least one target pixel class. The specific implementations of S11 and S12 have been described above and are not repeated here.
In S501, the distance from the class center of each target pixel class to the origin is calculated.
In S502, if not all target objects have been identified and there are target pixel classes that have not yet undergone identification, the target pixel class whose class center is nearest to the origin among the target pixel classes not yet identified is determined as the current target pixel class.
In S503, the geometric feature of the current target pixel class is determined.
In S504, the object corresponding to the current target pixel class is identified according to the determined geometric feature of the current target pixel class and the geometric features of the target objects. The specific identification is similar to the method described above with reference to Fig. 4 and is not repeated here.
In this embodiment, the distance from the class center of each target pixel class to the origin is calculated first; optionally, the Euclidean distance from the class center of the target pixel class to the origin is calculated. The target pixel classes are then sorted by the distance from class center to origin from nearest to farthest; optionally, the sorting method can be bubble sort, selection sort, insertion sort, quick sort, and the like. For example, suppose clustering yields three target pixel classes A, B, and C, sorted from nearest to farthest by class-center-to-origin distance as A, B, C, and there are three known target objects: target object 1, target object 2, and target object 3. When S502 is executed for the first time, since the three target objects have not all been identified and none of the three target pixel classes has undergone identification, the condition in S502 is met and class A is determined as the current target pixel class. S503 and S504 are then executed. If after S504 the object corresponding to class A has not been identified (that is, there is no second geometric feature vector similar to the first geometric feature vector corresponding to the current target pixel class), or the object corresponding to class A is identified as target object 2, the flow returns to S502, and S502-S504 are executed in a loop until all target pixel classes have undergone identification or all target objects have been identified.
Judging the target pixel classes in order of class-center-to-origin distance, from nearest to farthest, raises the priority of the nearer classes and lowers the priority of the farther ones, because under normal circumstances the farther classes contain background data and are less likely to be the target object. This improves the speed of object detection and the efficiency of identification.
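Putting S501-S504 together, a sketch of the nearest-first identification loop could look like the following; the helper names (to_feature_vec, match_fn) refer loosely to the earlier sketches and are assumptions, not terms from the disclosure:

```python
import numpy as np

def identify_in_distance_order(pixel_classes, known_feature_vecs, to_feature_vec, match_fn):
    """Visit pixel classes from the nearest class center to the farthest and stop once
    every known target object has been identified. to_feature_vec is assumed to arrange
    a class's geometric features into a vector in the preset order; match_fn is assumed
    to behave like the cosine matcher sketched earlier."""
    order = sorted(pixel_classes, key=lambda pts: np.linalg.norm(pts.mean(axis=0)))
    remaining = dict(known_feature_vecs)
    identified = {}
    for pts in order:
        if not remaining:                          # every target object already found
            break
        name = match_fn(to_feature_vec(pts), remaining)
        if name is not None:
            identified[name] = pts
            remaining.pop(name)
    return identified
```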
In addition, the object detection method provided by the disclosure may further include the following step: after the object corresponding to the target pixel class is identified, determining the coordinates of the object corresponding to the target pixel class.
The coordinates of the object corresponding to the target pixel class can be determined from the point cloud data of the target pixel class. In this way, after the object corresponding to the target pixel class is identified, the pose of the object can further be determined, which assists the robot in carrying out grasping tasks more accurately and flexibly.
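One simple, assumed way to realize the coordinate-determination step is to take the centroid of the class's point cloud in the camera frame; the disclosure does not commit to a particular definition, so this is purely illustrative:

```python
import numpy as np

def object_coordinate(pixel_class_points):
    """Return the centroid of the class's 3-D points as the object coordinate
    (an assumption; other definitions, e.g. a bounding-box center, would also fit)."""
    return np.asarray(pixel_class_points).mean(axis=0)
```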
Fig. 6 is a block diagram of an object detection apparatus according to an exemplary embodiment of the disclosure. The apparatus 100 may include:
an extraction module 601, configured to extract, from RGB image data of a target scene, target pixels that meet a preset condition;
a clustering module 602, configured to cluster the target pixels according to point cloud data of the target pixels to obtain at least one target pixel class, where each target pixel class corresponds to one object; and
an identification module 603, configured to determine a geometric feature of the at least one target pixel class, and to identify the object corresponding to the target pixel class according to the determined geometric feature of the target pixel class and geometric features of known target objects.
The scheme of the present disclosure combines the geometric features of objects with two-dimensional image-based target detection. In this way, interference caused to the robot by still images that depict a target object can be excluded, the accuracy of object detection is improved, and the robot is assisted in carrying out grasping tasks more flexibly and accurately. Moreover, the method reduces the algorithm's dependence on the image detection result: it only requires that the target scene provided by the image detection contain the object to be detected, which significantly improves the robustness of the detection method. In addition, the object detection method provided by the disclosure does not require modifying the appearance of the object, and for different objects only their geometric properties need to be configured, without building complex models, so the method is simple to implement, detection efficiency is greatly improved, and the method is more widely applicable.
Optionally, the apparatus may further include:
a first obtaining module, configured to obtain depth image data of the target scene before the extraction module extracts, from the RGB image data of the target scene, the target pixels that meet the preset condition; and
a first alignment module, configured to align the depth image data with the RGB image data to obtain the point cloud data of the target scene.
Optionally, the apparatus may further include:
a second obtaining module, configured to obtain depth image data of the target pixels; and
a second alignment module, configured to align the depth image data of the target pixels with the RGB image data of the target pixels to obtain the point cloud data of the target pixels.
Optionally, the extraction module 601 may include:
a transform submodule, configured to transform the RGB image data into HSV space; and
a determination submodule, configured to determine the pixels that fall within the HSV interval corresponding to a preset target color as the target pixels.
Optionally, the identification module 603 may be configured to:
obtain a first geometric feature vector corresponding to the target pixel class, where the first geometric feature vector is obtained by arranging the geometric features of the target pixel class in a preset order;
obtain a second geometric feature vector corresponding to each target object, where the second geometric feature vector is obtained by arranging the geometric features of the target object corresponding to that second geometric feature vector in the preset order; and
if there is a second geometric feature vector similar to the first geometric feature vector corresponding to the target pixel class, identify the object corresponding to the target pixel class as the target object corresponding to the similar second geometric feature vector.
Optionally, if there is a second geometric feature vector whose cosine distance to the first geometric feature vector corresponding to the target pixel class is greater than or equal to a preset distance, the second geometric feature vector with the largest cosine distance to the first geometric feature vector is determined as the second geometric feature vector similar to the first geometric feature vector.
Optionally, the apparatus may further include:
a calculation module, configured to calculate the distance from the class center of each target pixel class to the origin;
and the identification module 603 is configured to:
if not all target objects have been identified and there are target pixel classes that have not yet undergone identification, determine, among the target pixel classes not yet identified, the target pixel class whose class center is nearest to the origin as the current target pixel class;
determine the geometric feature of the current target pixel class; and
identify the object corresponding to the current target pixel class according to the determined geometric feature of the current target pixel class and the geometric features of the target objects.
Optionally, the apparatus may further include:
a determination module, configured to determine, after the object corresponding to the target pixel class is identified, the coordinates of the object corresponding to the target pixel class.
Optionally, the geometric feature includes one or more of: length, width-to-length ratio, height-to-width ratio, height divided by the sum of length, width, and height, and indication information characterizing whether the object is a straight line.
With respect to the apparatus in the above embodiment, the specific manner in which each module performs its operations has been described in detail in the embodiments of the related method and is not elaborated here.
Based on the same concept, the present disclosure provides a computer-readable storage medium on which a computer program is stored. When the program is executed by a processor, the steps of the object detection method provided by the disclosure are implemented.
Fig. 7 is a block diagram of an electronic device 700 according to an exemplary embodiment. As shown in Fig. 7, the electronic device 700 may include a processor 701 and a memory 702, and may further include one or more of a multimedia component 703, an input/output (I/O) interface 704, and a communication component 705.
The processor 701 controls the overall operation of the electronic device 700 to complete all or part of the steps of the above object detection method. The memory 702 stores various types of data to support operation on the electronic device 700; such data may include, for example, instructions for any application or method operating on the electronic device 700, as well as application-related data such as contact data, sent and received messages, pictures, audio, and video. The memory 702 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disc. The multimedia component 703 may include a screen and an audio component, where the screen can be, for example, a touch screen, and the audio component is used to output and/or input audio signals. For example, the audio component may include a microphone for receiving external audio signals; the received audio signals can be further stored in the memory 702 or sent through the communication component 705. The audio component also includes at least one speaker for outputting audio signals. The I/O interface 704 provides an interface between the processor 701 and other interface modules, which can be a keyboard, a mouse, buttons, and so on; these buttons can be virtual buttons or physical buttons. The communication component 705 is used for wired or wireless communication between the electronic device 700 and other devices. Wireless communication includes, for example, Wi-Fi, Bluetooth, near field communication (NFC), 2G, 3G, 4G, NB-IoT, eMTC, or other 5G technologies, or a combination of one or more of them, without limitation here; accordingly, the communication component 705 may include a Wi-Fi module, a Bluetooth module, an NFC module, and so on.
In an exemplary embodiment, the electronic device 700 can be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements, for performing the above object detection method.
In another exemplary embodiment, a computer-readable storage medium including program instructions is also provided; when the program instructions are executed by a processor, the steps of the above object detection method are implemented. For example, the computer-readable storage medium can be the above memory 702 including program instructions, and the program instructions can be executed by the processor 701 of the electronic device 700 to complete the above object detection method.
Fig. 8 is a block diagram of an electronic device 800 according to an exemplary embodiment. For example, the electronic device 800 can be provided as a server. Referring to Fig. 8, the electronic device 800 includes one or more processors 822 and a memory 832 for storing a computer program executable by the processor 822. The computer program stored in the memory 832 may include one or more modules each corresponding to a set of instructions. The processor 822 can be configured to execute the computer program to perform the above object detection method.
In addition, the electronic device 800 may also include a power supply component 826 and a communication component 850; the power supply component 826 can be configured to perform power management of the electronic device 800, and the communication component 850 can be configured to implement communication of the electronic device 800, for example wired or wireless communication. The electronic device 800 may also include an input/output (I/O) interface 858. The electronic device 800 can operate based on an operating system stored in the memory 832, such as Windows Server, Mac OS X, Unix, Linux, and the like.
In another exemplary embodiment, a computer-readable storage medium including program instructions is also provided; when the program instructions are executed by a processor, the steps of the above object detection method are implemented. For example, the computer-readable storage medium can be the above memory 832 including program instructions, and the program instructions can be executed by the processor 822 of the electronic device 800 to complete the above object detection method.
The preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings; however, the disclosure is not limited to the specific details of the above embodiments. Within the scope of the technical concept of the disclosure, various simple variants can be made to the technical solution of the disclosure, and these simple variants all fall within the protection scope of the disclosure.
It should further be noted that the specific technical features described in the above specific embodiments can, where not contradictory, be combined in any suitable manner. To avoid unnecessary repetition, the disclosure does not further describe the various possible combinations.
In addition, any combination can also be made between the different embodiments of the disclosure; as long as it does not depart from the idea of the disclosure, such a combination should likewise be regarded as content disclosed by the disclosure.

Claims (20)

1. An object detection method, characterized in that the method comprises:
extracting, from RGB image data of a target scene, target pixels that meet a preset condition;
clustering the target pixels according to point cloud data of the target pixels to obtain at least one target pixel class, wherein each target pixel class corresponds to one object; and
determining a geometric feature of the at least one target pixel class, and identifying the object corresponding to the target pixel class according to the determined geometric feature of the target pixel class and geometric features of known target objects.
2. The method according to claim 1, characterized in that before the step of extracting, from the RGB image data of the target scene, the target pixels that meet the preset condition, the method further comprises:
obtaining depth image data of the target scene; and
aligning the depth image data with the RGB image data to obtain the point cloud data of the target scene.
3. The method according to claim 1, characterized in that the method further comprises:
obtaining depth image data of the target pixels; and
aligning the depth image data of the target pixels with the RGB image data of the target pixels to obtain the point cloud data of the target pixels.
4. The method according to claim 1, characterized in that extracting, from the RGB image data of the target scene, the target pixels that meet the preset condition comprises:
transforming the RGB image data into HSV space; and
determining the pixels that fall within the HSV interval corresponding to a preset target color as the target pixels.
5. The method according to claim 1, characterized in that identifying the object corresponding to the target pixel class according to the geometric feature of the target pixel class and the geometric features of the known target objects comprises:
obtaining a first geometric feature vector corresponding to the target pixel class, wherein the first geometric feature vector is obtained by arranging the geometric features of the target pixel class in a preset order;
obtaining a second geometric feature vector corresponding to each target object, wherein the second geometric feature vector is obtained by arranging the geometric features of the target object corresponding to that second geometric feature vector in the preset order; and
if there is a second geometric feature vector similar to the first geometric feature vector corresponding to the target pixel class, identifying the object corresponding to the target pixel class as the target object corresponding to the similar second geometric feature vector.
6. The method according to claim 5, characterized in that if there is a second geometric feature vector whose cosine distance to the first geometric feature vector corresponding to the target pixel class is greater than or equal to a preset distance, the second geometric feature vector with the largest cosine distance to the first geometric feature vector is determined as the second geometric feature vector similar to the first geometric feature vector.
7. The method according to claim 1, wherein the method further comprises:
calculating the distance from the class center of each target pixel point class to the origin;
and the determining the geometric feature of the at least one target pixel point class, and identifying the object corresponding to the target pixel point class according to the determined geometric feature of the target pixel point class and the geometric features of the known target objects, comprises:
if not all of the target objects have been identified and there remain target pixel point classes that have not been identified, determining, among the target pixel point classes that have not been identified, the target pixel point class whose class center is closest to the origin as the current target pixel point class;
determining the geometric feature of the current target pixel point class;
identifying the object corresponding to the current target pixel point class according to the determined geometric feature of the current target pixel point class and the geometric features of the target objects.
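A sketch of the nearest-first ordering in claim 7, assuming the origin is the camera/sensor origin and the class center is the mean of the class's points (both assumptions); geometric_features and match_object stand in for the steps of claims 9 and 5-6.

```python
import numpy as np

def identify_nearest_first(classes, known_objects, geometric_features, match_object):
    """classes: list of (M_i, 3) point arrays, one per target pixel point class.
    known_objects: dict mapping object name -> reference geometric feature vector.
    Returns a dict mapping identified object name -> index of its class."""
    remaining = dict(known_objects)      # target objects not yet identified
    pending = list(range(len(classes)))  # classes not yet identified
    results = {}
    while remaining and pending:
        # class center = mean of the class points; pick the class closest to the origin
        dists = {i: float(np.linalg.norm(classes[i].mean(axis=0))) for i in pending}
        current = min(dists, key=dists.get)
        name = match_object(geometric_features(classes[current]), remaining)
        if name is not None:
            results[name] = current
            remaining.pop(name)
        pending.remove(current)
    return results
```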
8. The method according to any one of claims 1-7, wherein the method further comprises:
after identifying the object corresponding to the target pixel point class, determining the coordinates of the object corresponding to the target pixel point class.
9. The method according to any one of claims 1-7, wherein the geometric feature comprises one or more of the following: length, width-to-length ratio, height-to-width ratio, height-to-length ratio, height divided by the sum of the length, width and height, and indication information characterizing whether the class is a straight line.
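The feature list in claim 9 could be computed, for example, from the axis-aligned bounding box of a class's point cloud; using the bounding box (rather than an oriented one) and a PCA-based straight-line test are illustrative choices not fixed by the claim.

```python
import numpy as np

def geometric_features(points_xyz, line_ratio=20.0, eps=1e-9):
    """points_xyz: (M, 3) points of one target pixel point class.
    Returns [length, width/length, height/width, height/length,
             height/(length+width+height), is_straight_line]."""
    extents = points_xyz.max(axis=0) - points_xyz.min(axis=0)
    length, width, height = np.sort(extents)[::-1]  # longest extent taken as length
    # crude straight-line test: dominant principal axis far longer than the others
    centred = points_xyz - points_xyz.mean(axis=0)
    eigvals = np.sort(np.linalg.eigvalsh(np.cov(centred.T)))[::-1]
    is_line = float(eigvals[0] > line_ratio * max(eigvals[1], eps))
    return np.array([length,
                     width / (length + eps),
                     height / (width + eps),
                     height / (length + eps),
                     height / (length + width + height + eps),
                     is_line])
```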
10. An object detection apparatus, wherein the apparatus comprises:
an extraction module, configured to extract target pixel points that meet a preset condition from the RGB image data of a target scene;
a clustering module, configured to cluster the target pixel points according to the point cloud data of the target pixel points to obtain at least one target pixel point class, wherein each target pixel point class corresponds to one object;
an identification module, configured to determine a geometric feature of the at least one target pixel point class, and identify the object corresponding to the target pixel point class according to the determined geometric feature of the target pixel point class and geometric features of known target objects.
11. The apparatus according to claim 10, wherein the apparatus further comprises:
a first obtaining module, configured to obtain depth image data of the target scene before the extraction module extracts the target pixel points that meet the preset condition from the RGB image data of the target scene;
a first alignment module, configured to align the depth image data with the RGB image data to obtain the point cloud data of the target scene.
12. The apparatus according to claim 10, wherein the apparatus further comprises:
a second obtaining module, configured to obtain depth image data of the target pixel points;
a second alignment module, configured to align the depth image data of the target pixel points with the RGB image data of the target pixel points to obtain the point cloud data of the target pixel points.
13. The apparatus according to claim 10, wherein the extraction module comprises:
a conversion submodule, configured to convert the RGB image data into HSV space;
a determination submodule, configured to determine the pixels that fall within the HSV interval corresponding to a preset object color as the target pixel points.
14. The apparatus according to claim 10, wherein the identification module is configured to:
obtain a first geometric feature vector corresponding to the target pixel point class, wherein the first geometric feature vector is a vector obtained by arranging the geometric features of the target pixel point class in a preset arrangement order;
obtain a second geometric feature vector corresponding to each target object, wherein each second geometric feature vector is a vector obtained by arranging the geometric features of the corresponding target object in the preset arrangement order;
if there exists a second geometric feature vector similar to the first geometric feature vector corresponding to the target pixel point class, identify the object corresponding to the target pixel point class as the target object corresponding to the similar second geometric feature vector.
15. The apparatus according to claim 14, wherein if there exists a second geometric feature vector whose cosine distance to the first geometric feature vector corresponding to the target pixel point class is greater than or equal to a preset distance, the second geometric feature vector with the largest cosine distance to the first geometric feature vector is determined as the second geometric feature vector similar to the first geometric feature vector.
16. The apparatus according to claim 10, wherein the apparatus further comprises:
a computing module, configured to calculate the distance from the class center of each target pixel point class to the origin;
and the identification module is configured to:
if not all of the target objects have been identified and there remain target pixel point classes that have not been identified, determine, among the target pixel point classes that have not been identified, the target pixel point class whose class center is closest to the origin as the current target pixel point class;
determine the geometric feature of the current target pixel point class;
identify the object corresponding to the current target pixel point class according to the determined geometric feature of the current target pixel point class and the geometric features of the target objects.
17. The apparatus according to any one of claims 10-16, wherein the apparatus further comprises:
a determining module, configured to determine, after the object corresponding to the target pixel point class is identified, the coordinates of the object corresponding to the target pixel point class.
18. The apparatus according to any one of claims 10-16, wherein the geometric feature comprises one or more of the following: length, width-to-length ratio, height-to-width ratio, height divided by the sum of the length, width and height, and indication information characterizing whether the class is a straight line.
19. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the method according to any one of claims 1-9.
20. An electronic device, comprising:
a memory on which a computer program is stored;
a processor, configured to execute the computer program in the memory to implement the steps of the method according to any one of claims 1-9.
CN201910553167.3A 2019-06-21 2019-06-21 Object detecting method, device, storage medium and electronic equipment Pending CN110348333A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910553167.3A CN110348333A (en) 2019-06-21 2019-06-21 Object detecting method, device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910553167.3A CN110348333A (en) 2019-06-21 2019-06-21 Object detecting method, device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN110348333A true CN110348333A (en) 2019-10-18

Family

ID=68182957

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910553167.3A Pending CN110348333A (en) 2019-06-21 2019-06-21 Object detecting method, device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN110348333A (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104680519A (en) * 2015-02-06 2015-06-03 四川长虹电器股份有限公司 Seven-piece puzzle identification method based on contours and colors
CN107636680A (en) * 2016-12-30 2018-01-26 深圳前海达闼云端智能科技有限公司 A kind of obstacle detection method and device
CN107636727A (en) * 2016-12-30 2018-01-26 深圳前海达闼云端智能科技有限公司 Target detection method and device
CN107194395A (en) * 2017-05-02 2017-09-22 华中科技大学 A kind of object dynamic positioning method based on colour recognition and contours extract
CN107748890A (en) * 2017-09-11 2018-03-02 汕头大学 A kind of visual grasping method, apparatus and its readable storage medium storing program for executing based on depth image
CN107590836A (en) * 2017-09-14 2018-01-16 斯坦德机器人(深圳)有限公司 A kind of charging pile Dynamic Recognition based on Kinect and localization method and system
CN107610176A (en) * 2017-09-15 2018-01-19 斯坦德机器人(深圳)有限公司 A kind of pallet Dynamic Recognition based on Kinect and localization method, system and medium
CN108133191A (en) * 2017-12-25 2018-06-08 燕山大学 A kind of real-time object identification method suitable for indoor environment
CN108229548A (en) * 2017-12-27 2018-06-29 华为技术有限公司 A kind of object detecting method and device
CN108171748A (en) * 2018-01-23 2018-06-15 哈工大机器人(合肥)国际创新研究院 A kind of visual identity of object manipulator intelligent grabbing application and localization method
CN108596256A (en) * 2018-04-26 2018-09-28 北京航空航天大学青岛研究院 One kind being based on RGB-D object identification grader building methods
CN109658413A (en) * 2018-12-12 2019-04-19 深圳前海达闼云端智能科技有限公司 A kind of method of robot target grasping body position detection

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Sun Zifei, "Localization and object grasping technology for service robots in dynamic environments", China Master's Theses Full-text Database, Information Science and Technology series *
Hao Wen et al., "A survey of 3D object recognition methods for point clouds", Computer Science (Jisuanji Kexue) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110900603A (en) * 2019-11-29 2020-03-24 上海有个机器人有限公司 Method, medium, terminal and device for identifying elevator through geometric features
KR20220055707A (en) * 2020-10-27 2022-05-04 건국대학교 산학협력단 Apparatus and method for tracking object based on semantic point cloud
KR102405767B1 (en) 2020-10-27 2022-06-03 건국대학교 산학협력단 Apparatus and method for tracking object based on semantic point cloud

Similar Documents

Publication Publication Date Title
JP7265003B2 (en) Target detection method, model training method, device, apparatus and computer program
CN109584302B (en) Camera pose optimization method, camera pose optimization device, electronic equipment and computer readable medium
CN103162682B (en) Based on the indoor path navigation method of mixed reality
CN109671119A (en) A kind of indoor orientation method and device based on SLAM
CN111598164B (en) Method, device, electronic equipment and storage medium for identifying attribute of target object
CN105512627A (en) Key point positioning method and terminal
WO2016025713A1 (en) Three-dimensional hand tracking using depth sequences
Peng et al. CrowdGIS: Updating digital maps via mobile crowdsensing
CN112052186A (en) Target detection method, device, equipment and storage medium
CN103105924B (en) Man-machine interaction method and device
CN103530649A (en) Visual searching method applicable mobile terminal
CN111339976B (en) Indoor positioning method, device, terminal and storage medium
CN110456904B (en) Augmented reality glasses eye movement interaction method and system without calibration
CN110135237B (en) Gesture recognition method
Feng et al. Visual Map Construction Using RGB‐D Sensors for Image‐Based Localization in Indoor Environments
JP2018120283A (en) Information processing device, information processing method and program
CN115808170B (en) Indoor real-time positioning method integrating Bluetooth and video analysis
CN111062400A (en) Target matching method and device
CN110348333A (en) Object detecting method, device, storage medium and electronic equipment
CN112686178A (en) Multi-view target track generation method and device and electronic equipment
CN111709317A (en) Pedestrian re-identification method based on multi-scale features under saliency model
CN107193820A (en) Location information acquisition method, device and equipment
CN107479715A (en) The method and apparatus that virtual reality interaction is realized using gesture control
CN116958584B (en) Key point detection method, regression model training method and device and electronic equipment
CN113610967A (en) Three-dimensional point detection method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210303
Address after: 201111 2nd floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai
Applicant after: Dalu Robot Co.,Ltd.
Address before: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)
Applicant before: Shenzhen Qianhaida Yunyun Intelligent Technology Co.,Ltd.

CB02 Change of applicant information
Address after: 201111 Building 8, No. 207, Zhongqing Road, Minhang District, Shanghai
Applicant after: Dayu robot Co.,Ltd.
Address before: 201111 2nd floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai
Applicant before: Dalu Robot Co.,Ltd.