WO2019232782A1 - Object feature recognition method, visual recognition device, and robot - Google Patents

Object feature recognition method, visual recognition device, and robot Download PDF

Info

Publication number
WO2019232782A1
WO2019232782A1 (PCT/CN2018/090421)
Authority
WO
WIPO (PCT)
Prior art keywords
feature
points
suspected
information
handle
Prior art date
Application number
PCT/CN2018/090421
Other languages
English (en)
French (fr)
Inventor
吴启帆
Original Assignee
深圳蓝胖子机器人有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳蓝胖子机器人有限公司 filed Critical 深圳蓝胖子机器人有限公司
Priority to PCT/CN2018/090421 priority Critical patent/WO2019232782A1/zh
Priority to CN201880003237.1A priority patent/CN109641351B/zh
Publication of WO2019232782A1 publication Critical patent/WO2019232782A1/zh

Links

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02Sensing devices
    • B25J19/04Viewing devices

Definitions

  • the present application relates to the technical field of visual recognition, and in particular, to a method for identifying features of an object, a visual recognition device, and a robot.
  • the current mainstream vision technology uses machine learning, in which a large amount of data is used to train a specific model: a large amount of data is first collected and manually labeled, and the labeled data is then given to a machine learning program for training.
  • machine learning requires a large amount of training data for support, and the data incurs heavy labor costs from collection through labeling.
  • collection requires recruiting volunteers, and labeling requires a great deal of manual effort to classify and process the data; machine learning is prone to overfitting, that is, it is very accurate on trained cases but performs poorly on untrained ones; and the results of machine learning are unpredictable, since humans do not yet fully understand them, and unexpected errors may occur.
  • the present application relates to a method for identifying object features, a visual recognition device, and a robot to solve the problem that it is difficult for a machine in the prior art to recognize an object.
  • the present application proposes a method for identifying object features.
  • the method includes obtaining first description information of an object and second description information of the environment in which the object is located; obtaining reference information according to the first description information; and obtaining, according to the second description information and the reference information, feature data in the first description information that meets a first condition, where the first condition includes a first association relationship between all the feature data and the second description information.
  • the present application proposes a method for identifying a luggage handle, which includes obtaining first description information of the luggage and second description information of the environment in which the luggage is located; obtaining reference information according to the first description information; and obtaining, according to the second description information and the reference information, handle surface data in the first description information that meets a first condition, where the first condition includes a first association relationship between all handle surface data and the second description information.
  • the visual recognition device includes a processor and a memory.
  • the memory stores a computer program, and the processor executes the computer program to implement the method described in any one of the above.
  • this application proposes a robot, which includes the above-mentioned visual recognition device.
  • the beneficial effect of the present invention, in contrast to the prior art, is that the method for identifying object features in the present application obtains first description information of the object and second description information of the environment in which the object is located, obtains reference information according to the first description information, and obtains, according to the second description information and the reference information, feature data in the first description information that meets a first condition, where the first condition includes a first association relationship between all feature data and the second description information; the feature surface is determined according to the obtained feature data, so the object can be accurately identified.
  • FIG. 1 is a flowchart of an embodiment of an object feature recognition method of the present application
  • FIG. 2 is a flowchart of another embodiment of a method for identifying an object feature according to the present application
  • Figure 3 is a schematic diagram of the luggage handle position
  • FIG. 4 is a flowchart of an embodiment of a method for identifying a luggage handle of the present application
  • FIG. 5 is a schematic structural diagram of an embodiment of a visual recognition device according to the present application.
  • FIG. 6 is a schematic structural diagram of an embodiment of a robot according to the present application.
  • FIG. 1 is a flowchart of an embodiment of an object feature recognition method according to the present application.
  • the method disclosed in this embodiment may specifically include the following steps:
  • S11 Obtain first description information of the object and second description information of the environment in which the object is located.
  • the method for identifying features of an object is used to identify a characteristic surface of an object, and the characteristic surface may be a concave surface, a convex surface, a plane, or the like, which is not specifically limited in this embodiment.
  • the first description information of the object and the second description information of the environment in which the object is located can be derived from the same visual image; that is, the first description information and the second description information can be obtained from the same acquired image.
  • the second description information of the environment in which the object is located may be the application site of the object.
  • when the object is a suitcase, the second description information of the environment in which the suitcase is located may also include other objects in the application site, such as walls, seats, and other facilities.
  • the reference information may be physical attribute information of the object, such as the size and area of the object, and the type of the reference information may be one type or multiple types, which is not limited herein.
  • S13 Acquire feature data that meets the first condition in the first description information according to the second description information and the reference information.
  • the first condition includes a first association relationship between all the feature data and the second description information.
  • the first association relationship may be a specific relationship between all the feature data and the second description information, for example a point-to-point relative position relationship or a point-to-surface relative position relationship, which is not limited in this embodiment.
  • the obtained first description information is filtered according to the obtained second description information and the reference information to remove interfering data information, thereby determining a feature surface according to the obtained feature data.
  • This application proposes a method for identifying object features.
  • the method obtains first description information of an object and second description information of the environment in which the object is located, obtains reference information according to the first description information, and obtains, according to the second description information and the reference information, feature data in the first description information that meets a first condition, where the first condition includes a first association relationship between all the feature data and the second description information; the feature surface is determined according to the obtained feature data, so the object can be accurately identified.
  • the method for identifying object features in the above embodiments can be applied to an airport baggage sorting system, providing identification of luggage or of specific features of luggage (for example, handles) to further realize automated transport of luggage.
  • the method can be implemented by a processor calling a computer program in a storage medium.
  • FIG. 2 is a flowchart of another embodiment of a method for identifying an object feature of the present application.
  • the method disclosed in this embodiment may specifically include the following steps:
  • the object feature recognition method is used to identify a feature surface of an object.
  • the feature surface may be a concave surface, a convex surface, a plane, or the like.
  • in this embodiment, the feature surface is a plane.
  • for example, the object may be a suitcase, and the identified feature surface may be a surface of the suitcase or the surface on which the handle is located.
  • the first description information of the object and the second description information of the environment in which the object is located may originate from the same visual image.
  • the first description information of the object may include three-dimensional point cloud data of the object.
  • point cloud data records the scanned object in the form of points; each point contains three-dimensional coordinates, that is, the spatial coordinates of the point, and may also include color information (RGB) or reflection intensity information (Intensity).
  • the color information may be obtained by capturing a color image with an imaging device such as a camera, and then assigning the color of the pixels at the corresponding positions to the corresponding points in the point cloud.
  • the intensity information can be obtained from the echo intensity collected by a receiving device such as a laser scanner.
  • the intensity information is related to the target's surface material, roughness, and incidence angle, as well as the instrument's emission energy and laser wavelength.
  • the second description information of the environment in which the object is located may be the application site of the object.
  • when the object is a suitcase, the second description information of the environment in which the suitcase is located may also be 3D point cloud information of the application site; for example, when the application site is a suitcase sorting area, the 3D point cloud information may correspondingly include the spatial description of the sorting area and description data of other objects in that space, such as walls, seats, transport equipment, robots, luggage racks and/or other facilities.
  • the reference information may include a normal direction of each point in the 3D point cloud data.
  • the normal direction of each point can be computed from the obtained point cloud information by a least-squares fitting algorithm; the specific acquisition method is not limited here.
  • the reference information is normal information, so that some interfering objects can be filtered out. For example, in the environment where the luggage is located, walls and non-luggage facilities on the ground can be classified as interfering objects not parallel to the ground; these interfering objects can then be filtered out so as to accurately find the suitcases parallel to the ground.
  • S23 Acquire feature data that meets the first condition in the first description information according to the second description information and the reference information.
  • the first condition includes a first association relationship between all the feature data and the second description information.
  • the first association relationship may be a point-to-point relative position relationship, a point-to-face relative position relationship, and the like between all feature data and the second description information.
  • the first association relationship may be, in the visual coordinate system of the same visual image, that is, the coordinate system of the image acquisition arranged at the application site, a specific position relationship that all the feature data satisfy relative to the second description information.
  • for example, the first description information may be three-dimensional point cloud data including a suitcase, and the second description information may be three-dimensional point cloud data of the ground.
  • the surface of the suitcase parallel to the ground is the feature surface, and the first condition may be set as the data in the first description information being parallel to the ground.
  • the angle between the normal direction of the object's 3D point cloud data and the ground direction is used to judge whether a perpendicular relationship holds; if so, the data conforms to the feature data parallel to the ground.
  • all feature data and the ground thus constitute a first association relationship: the feature data obtained through the judgment of the first condition are all parallel to the ground.
  • the second description information includes a world coordinate system set according to the environment in which the object is located, and a reference feature adopted according to the environment in the world coordinate system; the reference feature and the points of the feature data all conform to the first association relationship.
  • depending on the application scenario, second description information in which the feature surface forms a specific relationship with the environment in which the object is located can be flexibly selected; the specific relationship can then be set as the first association relationship, and a first condition meeting the first association relationship can be set.
  • the coordinate system adopted by the visual recognition device must have a corresponding conversion relationship with the world coordinate system, so that during implementation of the method the data coordinates obtained from the visual image acquired by the visual recognition device and the coordinates of the second description information selected in the world coordinate system can be converted into a unified coordinate system.
  • for example, when the second description information uses the description data of the ground in the world coordinate system, a visual image including the object is acquired by the visual recognition device to obtain the first description information, and the feature data meeting the first condition is obtained from the first description information; to form the first association relationship between the first and second description information, the ground description data in the world coordinate system and the first description information in the visual coordinate system must be converted into the same coordinate system.
  • the visual recognition device may be a visual sensor, a camera, a 3D camera, or the like.
  • obtaining the feature data in the first description information that meets the first condition may include screening out the points whose normal direction forms an angle with the reference feature within an angle threshold range, to obtain a first suspected feature surface point set, where the angle threshold depends on the angle of the feature surface relative to the reference feature; and performing region segmentation on the first suspected feature surface point set to obtain at least one second suspected feature surface point set capable of forming a planar area.
  • for example, the ground of the environment in the visual coordinate system is defined as the X-Y plane and the spatial height as Z; the reference feature is set to the ground, and the angle threshold range is set to 80°~100°.
  • when the angle between the normal direction of a point on the handle surface of the suitcase and the ground direction is 89°, the angle falls within the threshold; points within the angle threshold range are taken as parallel to the ground by default, and all such points are collected to obtain the first suspected feature surface point set A.
  • the obtained first suspected feature surface point set A is then segmented: a three-dimensional region growing algorithm is used to segment point set A into at least one second suspected feature surface point set B capable of forming a planar area, recording all points in the planar area into point set B {(x1, y1, z1), (x2, y2, z2) ...}.
  • by adopting a three-dimensional region growing algorithm and working from feature information, the burdensome manual data collection and labeling steps are eliminated, realizing automated, unsupervised learning; this method is more robust.
  • according to the two-dimensional information of the second suspected feature surface point set, a second suspected feature surface point set whose shape and/or size conforms to the shape and/or size of the feature surface is screened out;
  • the feature surface is determined according to the screened-out second suspected feature surface point set.
  • for example, if the suitcase is rectangular and 20 inches in size, the second suspected feature surface point sets C whose planar area matches a rectangle and/or a 20-inch size can be screened out from the two-dimensional information of the second suspected feature surface point set B, and point set C is taken as the surface of a suspected suitcase.
  • the two-dimensional information is obtained from the three-dimensional point cloud information of the second suspected feature surface point set.
  • specifically, the obtained second suspected feature surface point set B {(x1, y1, z1), (x2, y2, z2) ...} is projected onto a two-dimensional space parallel to the feature surface, and an algorithm is used to find the corresponding two-dimensional rectangular region point set C {(xm, ym), (xn, yn) ...} (m and n are positive integers).
  • in this embodiment, a Hough transform algorithm is used.
  • by projecting the three-dimensional space onto a two-dimensional space, the original size of the object is retained while recognition can be performed with two-dimensional image processing methods.
  • combining three dimensions with two dimensions, planar vision is used to assist stereo vision, making up for the shortcomings of purely three-dimensional or two-dimensional algorithms.
  • the two-dimensional rectangular region point set C is then restored to three-dimensional space (that is, the height values corresponding to the rectangular region point set are recovered) to obtain the suspected-suitcase point set D {(xm, ym, zm), (xn, yn, zn) ...}.
  • the feature surface can be identified in the above embodiment; if a feature portion, such as a handle, is further present on the feature surface, the feature portion can be accurately identified based on the relationship between the feature portion and the feature surface. Therefore, this embodiment further includes the following steps, used to identify a feature portion protruding from the feature surface.
  • S25 Calculate, according to the three-dimensional point cloud information, the vertical distance of points in the environment relative to the feature surface; screen out the points whose vertical distance is within the distance threshold range to obtain a suspected feature portion point set, where the distance threshold range depends on the height of the feature portion relative to the feature surface.
  • the points whose vertical distance is within the distance threshold range are selected according to the height coordinates of the points in point set D. For example, suppose that the lowest height of the handle relative to the suitcase surface is z - t and the highest height is z + t; the distance threshold range of the handle relative to the suitcase surface is then (z - t, z + t).
  • if points are judged to exist within the distance threshold range, optionally it is further judged whether there are continuous points within the threshold range around z, to obtain a suspected handle point set, and the handle is then determined according to the suspected handle point set.
  • S27 Determine, according to the three-dimensional point cloud information, the position information of the projection of the screened-out points on the feature surface; further screen out the points whose position information meets the position condition to form the suspected feature portion point set, where the position condition depends on the position of the feature portion relative to the feature surface.
  • the points whose position information meets the position condition are screened according to the position information of the projection of the screened-out points on the feature surface.
  • the handle of a suitcase is generally set in the middle region of one face of the suitcase. Therefore, the position condition can be set as whether the continuous points correspond to the middle region of the suspected suitcase face; if the judgment is positive, the point set E meeting the position condition is screened out to constitute the suitcase handle, that is, to lie within the handle position condition.
  • for example, the condition may be a height more than 2 cm above the upper surface of the suitcase, and may further be limited to the middle region of the face; if a protrusion exists there, the continuous points of the protrusion can be judged to be the suitcase handle.
  • the coordinate information used to determine the feature portion in the above steps is based on the visual coordinate system. Therefore, to determine the feature portion in the real-world coordinate system, the coordinates of all points in point set E need to be converted into the real-world coordinate system according to a coordinate system transformation algorithm.
  • FIG. 3 is a schematic diagram of the luggage handle position.
  • the handle 110 on the side of the suitcase 100 is a handle on a surface of the suitcase 100 perpendicular to the ground.
  • in step S23, the range of the angle threshold is adjusted according to the angle between the normal direction of the points in the first description information and the ground direction. If the angle between the normal direction of the feature surface points where the handle 110 of the suitcase 100 is located and the ground coordinates falls within the adjusted angle threshold range, the points within that range are taken as perpendicular to the ground by default; all points perpendicular to the ground are then recorded into point set P, and the coordinates of point set P are converted into a point set P perpendicular to the ground.
  • Other implementations are similar to the above implementations, and will not be repeated here.
  • the method of this embodiment can identify, simultaneously or separately, handles of a suitcase parallel to the ground and handles on the side of the suitcase.
  • in some cases, adjusting the range of the angle threshold can find handles forming different angles with the ground.
  • This application proposes a method for identifying object features.
  • the method obtains first description information of an object and second description information of an environment in which the object is located.
  • the first description information includes three-dimensional point cloud data of the object, and reference information is obtained according to the first description information; the reference information includes the normal direction of each point in the 3D point cloud data.
  • according to the second description information and the reference information, the points whose normal direction forms an angle with the reference feature within the angle threshold range are screened out to obtain the first suspected feature surface point set, where the angle threshold depends on the angle of the feature surface relative to the reference feature; region segmentation is performed on the first suspected feature surface point set to obtain at least one second suspected feature surface point set capable of forming a planar area; the feature surface is determined according to the at least one second suspected feature surface point set, thereby obtaining the feature data in the first description information that meets the first condition, where the first condition includes a first association relationship between all the feature data and the second description information; determining the feature surface from the obtained feature data makes it possible to accurately identify the object.
  • FIG. 4 is a flowchart of an embodiment of a method for identifying a luggage handle of the present application. The method disclosed in this application may include the following steps:
  • the first description information includes three-dimensional point cloud data of the suitcase;
  • the second description information includes a world coordinate system set according to the environment in which the suitcase is located, and a reference handle surface adopted according to the environment in the world coordinate system; the reference handle surface and the points of the handle surface data all conform to the first association relationship.
  • the reference information includes the normal direction of each point in the 3D point cloud data.
  • S43 Acquire handle surface data that meets the first condition in the first description information according to the second description information and the reference information.
  • the first condition includes a first association relationship between all handle surface data and the second description information.
  • the points whose normal direction forms an angle with the reference handle surface within an angle threshold range are screened out to obtain the first suspected handle surface point set, where the angle threshold depends on the angle of the handle surface relative to the reference handle surface;
  • region segmentation is performed on the first suspected handle surface point set to obtain at least one second suspected handle surface point set capable of forming a planar area, and the handle surface is determined according to the at least one second suspected handle surface point set.
  • a three-dimensional region growing algorithm is used to perform region segmentation on the first set of suspected handle surface points.
  • according to the two-dimensional information of the second suspected handle surface point set, a second suspected handle surface point set whose shape and/or size conforms to the shape and/or size of the handle surface is screened out, and the handle surface is determined according to the screened-out second suspected handle surface point set.
  • the method further includes projecting the second suspected handle surface point set to a two-dimensional space parallel to the handle surface, so as to obtain two-dimensional information from the three-dimensional point cloud information of the second suspected handle surface point set.
  • the method is used to identify a handle surface portion protruding from the handle surface, and further includes calculating, according to the three-dimensional point cloud information, the vertical distance of each point in the environment relative to the handle surface; screening out the points whose vertical distance is within a distance threshold range to obtain a suspected handle surface portion point set, where the distance threshold range depends on the height of the handle surface portion relative to the handle surface; and determining the handle surface portion according to the suspected handle surface portion point set.
  • the method further includes determining, according to the three-dimensional point cloud information, the position information of the projection of the screened-out points on the handle surface, and further screening out the points whose position information meets a position condition to form the suspected handle surface portion point set, where the position condition depends on the position of the handle surface portion relative to the handle surface.
  • the luggage handle recognition method of the present application is similar to the above embodiments, and details are not repeated here.
  • the present application proposes a method for identifying a luggage handle.
  • the method includes obtaining first description information of the luggage and second description information of the environment in which the luggage is located; obtaining reference information according to the first description information; and obtaining, according to the second description information and the reference information, handle surface data in the first description information that meets a first condition, where the first condition includes a first association relationship between all handle surface data and the second description information; the handle surface is determined according to the obtained handle surface data, so the luggage handle can be accurately identified.
  • if the above method is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a visual recognition device.
  • This application proposes a visual recognition device.
  • the visual recognition device 200 disclosed in this embodiment includes a processor 210 and a memory 220.
  • the memory 220 stores a computer program, and the processor 210 executes the computer program to implement the method according to any one of the foregoing embodiments.
  • the processor 210 is configured to obtain first description information of an object and second description information of the environment in which the object is located; obtain reference information according to the first description information; and obtain, according to the second description information and the reference information, the feature data in the first description information that meets the first condition.
  • FIG. 5 only shows the logical relationship between the various devices, and does not limit the circuit connection relationship.
  • the visual recognition device 200 provided in the present application can implement an object feature recognition method in any of the above solutions, and can accurately identify an object.
  • FIG. 6 is a schematic structural diagram of a robot of this application.
  • the robot 300 disclosed in this application includes a visual recognition device 310.
  • the structure and specific implementation of the visual recognition device 310 are similar to those of the visual recognition device 200 in the above embodiments, and will not be repeated here.
  • the robot 300 provided in the present application can accurately recognize an object.
  • the disclosed apparatus and method may be implemented in other ways.
  • the device implementation described above is only schematic.
  • the division of the modules or units is only a logical function division.
  • multiple units or components may The combination can either be integrated into another system, or some features can be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, which may be electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objective of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each of the units may exist separately physically, or two or more units may be integrated into one unit.
  • the above integrated unit may be implemented in the form of hardware or in the form of software functional unit.
  • the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product, which is stored in a storage medium and includes a number of instructions to cause a computer device (which may be a personal computer, a server, or a network device) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application.
  • the aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

An object feature recognition method: first description information of an object and second description information of the environment in which the object is located are obtained; reference information is obtained according to the first description information; according to the second description information and the reference information, feature data in the first description information that meets a first condition is obtained, where the first condition includes a first association relationship formed by all the feature data with the second description information; and a feature surface is determined according to the obtained feature data, enabling accurate recognition of the object. Also disclosed are a luggage handle recognition method, a visual recognition device, and a robot.

Description

Object feature recognition method, visual recognition device, and robot
[Technical Field]
The present application relates to the technical field of visual recognition, and in particular to an object feature recognition method, a visual recognition device, and a robot.
[Background Art]
Existing mainstream vision technology adopts machine learning, using large amounts of data to train a specific model: a large amount of data is first collected and manually labeled, and the labeled data is then handed to a machine learning program for training.
However, existing machines that learn by supervised learning suffer several drawbacks. Machine learning needs a large amount of training data for support, and the data incurs heavy labor costs from collection through labeling: collection requires recruiting volunteers, and labeling requires a great deal of manual classification and processing. Machine learning is prone to overfitting, that is, it performs very accurately on trained cases but poorly on untrained ones. And the results of machine learning are unpredictable: humans do not yet fully understand them, and unexpected errors may occur.
[Summary of the Invention]
The present application relates to an object feature recognition method, a visual recognition device, and a robot, to solve the problem in the prior art that it is difficult for a machine to recognize an object.
To solve the above technical problem, the present application proposes an object feature recognition method. The method includes: obtaining first description information of an object and second description information of the environment in which the object is located; obtaining reference information according to the first description information; and obtaining, according to the second description information and the reference information, feature data in the first description information that meets a first condition, where the first condition includes a first association relationship formed by all the feature data with the second description information.
To solve the above technical problem, the present application proposes a luggage handle recognition method. The method includes: obtaining first description information of a suitcase and second description information of the environment in which the suitcase is located; obtaining reference information according to the first description information; and obtaining, according to the second description information and the reference information, handle surface data in the first description information that meets a first condition, where the first condition includes a first association relationship formed by all the handle surface data with the second description information.
To solve the above technical problem, the present application proposes a visual recognition device. The visual recognition device includes a processor and a memory; the memory stores a computer program, and the processor executes the computer program to implement the method described in any one of the above.
To solve the above technical problem, the present application proposes a robot, which includes the above visual recognition device.
The beneficial effect of the present invention, in contrast to the prior art, is that the object feature recognition method of the present application obtains first description information of an object and second description information of the environment in which the object is located, obtains reference information according to the first description information, and obtains, according to the second description information and the reference information, feature data in the first description information that meets a first condition, where the first condition includes a first association relationship formed by all the feature data with the second description information; the feature surface is determined according to the obtained feature data, so the object can be accurately recognized.
[Brief Description of the Drawings]
To explain the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; for a person of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a flowchart of an embodiment of the object feature recognition method of the present application;
FIG. 2 is a flowchart of another embodiment of the object feature recognition method of the present application;
FIG. 3 is a schematic diagram of the position of a luggage handle;
FIG. 4 is a flowchart of an embodiment of the luggage handle recognition method of the present application;
FIG. 5 is a schematic structural diagram of an embodiment of the visual recognition device of the present application;
FIG. 6 is a schematic structural diagram of an embodiment of the robot of the present application.
[Detailed Description]
The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings in the embodiments of the present application. It should be understood that the specific embodiments described here are only intended to explain the present application, not to limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the present application rather than the complete structures. Based on the embodiments of the present application, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present application.
Referring to FIG. 1, FIG. 1 is a flowchart of an embodiment of the object feature recognition method of the present application. The method disclosed in this embodiment may specifically include the following steps:
S11: Obtain first description information of an object and second description information of the environment in which the object is located.
In this embodiment, the object feature recognition method is used to identify a feature surface of the object; the feature surface may be concave, convex, planar, etc., and is not specifically limited in this embodiment.
In step S11, optionally, the first description information of the object and the second description information of the environment in which the object is located may originate from the same visual image; that is, the first description information and the second description information may be obtained from the same acquired image.
The second description information of the environment in which the object is located may be the application site of the object. When the object is a suitcase, the second description information of the environment in which the suitcase is located may also include other objects in the application site, such as walls, seats, and other facilities.
S12: Obtain reference information according to the first description information.
The reference information may be physical attribute information of the object, such as its size and area; the reference information may be of one type or of several types, which is not limited here.
S13: According to the second description information and the reference information, obtain the feature data in the first description information that meets a first condition, where the first condition includes a first association relationship formed by all the feature data with the second description information.
After the second description information and the reference information are obtained in steps S11 and S12, in step S13 the first association relationship may be a specific relationship between all the feature data and the second description information, for example a point-to-point relative position relationship, a point-to-surface relative position relationship, or the like, which is not limited in this embodiment.
The obtained first description information is screened according to the obtained second description information and the reference information to remove interfering data, so that the feature surface is determined from the obtained feature data.
The present application proposes an object feature recognition method. The method obtains first description information of an object and second description information of the environment in which the object is located, obtains reference information according to the first description information, and obtains, according to the second description information and the reference information, feature data in the first description information that meets a first condition, where the first condition includes a first association relationship formed by all the feature data with the second description information; the feature surface is determined according to the obtained feature data, so the object can be accurately recognized.
For ease of understanding, the following embodiments illustrate the principles of the present invention with specific applications.
The object feature recognition method in the above embodiments can be applied to an airport baggage sorting system, providing recognition of luggage or of specific features of luggage (for example, handles) to further realize automated transport of luggage. The method may be implemented by a processor calling a computer program in a storage medium.
Referring to FIG. 2, FIG. 2 is a flowchart of another embodiment of the object feature recognition method of the present application. The method disclosed in this embodiment may specifically include the following steps:
S21: Obtain first description information of an object and second description information of the environment in which the object is located.
The object feature recognition method is used to identify a feature surface of the object; the feature surface may be concave, convex, planar, etc., and in this embodiment the feature surface is a plane. For example, the object may be a suitcase, and the feature surface to be identified may be a surface of the suitcase, or the surface of the suitcase on which a handle is located.
Optionally, the first description information of the object and the second description information of the environment in which the object is located may originate from the same visual image.
Optionally, the first description information of the object may include three-dimensional point cloud data of the object.
Point cloud data records a scanned object in the form of points. Each point contains three-dimensional coordinates, that is, the spatial coordinates of the point, and may also include color information (RGB) or reflection intensity information (Intensity). The color information may be obtained by capturing a color image with an imaging device such as a camera and then assigning the color of the pixel at the corresponding position to the corresponding point in the point cloud. The intensity information may be obtained from the echo intensity collected by a receiving device such as a laser scanner; it is related to the target's surface material, roughness, and incidence angle, as well as the instrument's emission energy and laser wavelength.
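As a concrete illustration of the data layout just described, the following sketch stores each point's spatial coordinates together with optional color and intensity fields in a numpy structured array. The field names, dtypes, and values are assumptions for illustration only, not part of the present application.

```python
import numpy as np

# Illustrative layout only: each point carries spatial coordinates, and
# optionally the color assigned from a camera image and the echo intensity
# collected by a laser scanner.
point_dtype = np.dtype([
    ("xyz", np.float64, (3,)),    # spatial coordinates (x, y, z)
    ("rgb", np.uint8, (3,)),      # color information (RGB)
    ("intensity", np.float32),    # reflection intensity information
])

cloud = np.zeros(3, dtype=point_dtype)
cloud["xyz"] = [(0.00, 0.00, 0.50), (0.01, 0.00, 0.50), (0.00, 0.01, 0.50)]
cloud["rgb"] = [(120, 120, 120)] * 3
cloud["intensity"] = [0.8, 0.8, 0.7]
```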
The second description information of the environment in which the object is located may be the application site of the object. When the object is a suitcase, the second description information may also be three-dimensional point cloud information of the application site. For example, if the application site is a suitcase sorting area, the 3D point cloud information may correspondingly include a spatial description of the sorting area and description data of other objects in that space, such as walls, seats, transport equipment, robots, luggage racks and/or other facilities.
S22: Obtain reference information according to the first description information.
Optionally, the reference information may include the normal direction of each point in the three-dimensional point cloud data.
The normal direction of each point in the 3D point cloud data may be computed from the collected point cloud information by a least-squares fitting algorithm; the specific acquisition method is not limited here. In this embodiment, normal information is used as the reference information, so that some interfering objects can be filtered out. For example, in the environment where a suitcase is located, walls and non-suitcase facilities on the ground can all be classified as interfering objects not parallel to the ground; these interfering objects can then be filtered out so as to accurately find the suitcases parallel to the ground.
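The description names least-squares fitting but leaves the concrete procedure open. The following is a minimal sketch of one common realization: for each point, a plane is fitted by least squares to its k nearest neighbours and the plane normal is taken as the point normal. The neighbourhood size k and the brute-force neighbour search are assumptions made for brevity.

```python
import numpy as np

def estimate_normals(points: np.ndarray, k: int = 16) -> np.ndarray:
    """For each point, fit a plane to its k nearest neighbours by least
    squares and take the plane normal, i.e. the eigenvector of the
    neighbourhood scatter matrix with the smallest eigenvalue."""
    normals = np.empty_like(points, dtype=float)
    for i, p in enumerate(points):
        d2 = np.sum((points - p) ** 2, axis=1)
        nbrs = points[np.argsort(d2)[:k]]
        centered = nbrs - nbrs.mean(axis=0)
        # Smallest-eigenvalue eigenvector of the scatter matrix is the
        # least-squares plane normal; eigh returns ascending eigenvalues.
        _, eigvecs = np.linalg.eigh(centered.T @ centered)
        normals[i] = eigvecs[:, 0]
    return normals
```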
S23: According to the second description information and the reference information, obtain the feature data in the first description information that meets a first condition, where the first condition includes a first association relationship formed by all the feature data with the second description information.
The first association relationship may be a point-to-point relative position relationship, a point-to-surface relative position relationship, etc., between all the feature data and the second description information. In this embodiment, the first association relationship may be, in the visual coordinate system of the same visual image, that is, the coordinate system of the image acquisition arranged at the application site, a specific position relationship that all the feature data satisfy relative to the second description information. For example, in this example the first description information is three-dimensional point cloud data including a suitcase, and the second description information is three-dimensional point cloud data of the ground. The surface of the suitcase parallel to the ground is the feature surface, and the first condition may be set as the data in the first description information being parallel to the ground: the angle between the normal direction of the object's 3D point cloud data and the ground direction is used to judge whether a perpendicular relationship holds; if so, the data conforms to feature data parallel to the ground. Correspondingly, all the feature data form a first association relationship with the ground; in this example, the feature data obtained through the judgment of the first condition are all parallel to the ground.
It can be understood that, when three-dimensional point cloud data is used as the first description information, whether a point is parallel to the ground can be judged from the angle between its normal direction and the ground direction; besides judging whether a perpendicular relationship holds, the judgment condition may also include an angle threshold, to match actual needs.
Optionally, the second description information includes a world coordinate system set according to the environment in which the object is located, and a reference feature adopted according to the environment in the world coordinate system; the reference feature and the points of the feature data all conform to the first association relationship.
Specifically, depending on the application scenario, second description information in which the feature surface forms a specific relationship with the environment in which the object is located can be flexibly selected; that specific relationship can then be set as the first association relationship, and a first condition meeting the first association relationship can be set.
It can be understood that, when a world coordinate system is set according to the object and its environment in the real physical world, the object and its environment have corresponding spatial coordinate information in the world coordinate system, and a specific condition that the object's feature surface and the reference feature selected from the environment (for example, the ground selected above) both satisfy is set as the first association relationship.
When the first description information and/or the second description information is acquired by a visual recognition device, the coordinate system adopted by the visual recognition device must have a corresponding conversion relationship with the world coordinate system, so that during implementation the data coordinates obtained from the visual image acquired by the visual recognition device and the coordinates of the second description information selected in the world coordinate system can be converted into a unified coordinate system. For example, when the second description information uses the description data of the ground in the world coordinate system, the method acquires a visual image including the object through the visual recognition device to obtain the first description information; to obtain from the first description information the feature data meeting the first condition, that is, feature data forming the first association relationship between the first and second description information, the ground description data in the world coordinate system and the first description data in the visual coordinate system must be converted into the same coordinate system. The visual recognition device may be a visual sensor, a camera, a 3D camera, or the like.
Optionally, obtaining the feature data in the first description information that meets the first condition may include: screening out the points whose normal direction forms an angle with the reference feature within an angle threshold range, to obtain a first suspected feature surface point set, where the angle threshold depends on the angle of the feature surface relative to the reference feature; and performing region segmentation on the first suspected feature surface point set to obtain at least one second suspected feature surface point set capable of forming a planar area.
For example, define the ground of the environment in the visual coordinate system as the X-Y plane and the spatial height as Z, set the reference feature to the ground, and set the angle threshold range to 80°~100°. When the angle between the normal direction of a point on the handle surface of the suitcase and the ground direction is 89°, the angle is judged to fall within the threshold. Points within the angle threshold range are taken as parallel to the ground by default; all such points are then collected, giving the first suspected feature surface point set A.
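A hedged sketch of this screening step follows. Under the stated coordinate definitions (ground as the X-Y plane, height as Z), a normal at 80°~100° to the ground direction lies equivalently within 10° of the vertical axis, which is the form tested below; the 89° example above passes.

```python
import numpy as np

def first_suspected_set(points: np.ndarray, normals: np.ndarray,
                        band_deg: float = 10.0) -> np.ndarray:
    """Keep points whose local surface is (nearly) parallel to the ground:
    the normal's tilt from the vertical Z axis is at most band_deg, which
    restates the 80°-100° band relative to the ground direction."""
    up = np.array([0.0, 0.0, 1.0])
    unit = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    # Tilt of each normal away from vertical; abs() folds the normal's sign.
    tilt = np.degrees(np.arccos(np.clip(np.abs(unit @ up), 0.0, 1.0)))
    return points[tilt <= band_deg]   # first suspected feature surface set A
```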
The obtained first suspected feature surface point set A is then segmented by region. In this embodiment, a three-dimensional region growing algorithm is used to segment point set A, obtaining at least one second suspected feature surface point set B capable of forming a planar area; all points in the planar area are recorded into point set B {(x1, y1, z1), (x2, y2, z2), ...}.
This embodiment adopts a three-dimensional region growing algorithm; by working from feature information it dispenses with the burdensome steps of manual data collection and labeling, achieving automated, unsupervised learning, and the method is therefore more robust.
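The patent names a three-dimensional region growing algorithm without fixing its parameters. The sketch below grows regions by spatial connectivity only; a fuller implementation would typically also compare point normals before merging. The radius and minimum region size are assumed values.

```python
import numpy as np
from collections import deque

def region_grow(points: np.ndarray, radius: float = 0.02,
                min_size: int = 50) -> list:
    """Connectivity-only region growing: points closer than `radius` are
    connected, and components with at least `min_size` points are returned
    as candidate planar areas (the point sets B)."""
    visited = np.zeros(len(points), dtype=bool)
    regions = []
    for seed in range(len(points)):
        if visited[seed]:
            continue
        visited[seed] = True
        queue, region = deque([seed]), [seed]
        while queue:                       # breadth-first flood fill
            i = queue.popleft()
            d2 = np.sum((points - points[i]) ** 2, axis=1)
            nbrs = np.flatnonzero((d2 <= radius * radius) & ~visited)
            visited[nbrs] = True
            queue.extend(nbrs.tolist())
            region.extend(nbrs.tolist())
        if len(region) >= min_size:
            regions.append(points[region])
    return regions
```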
S24: Determine the feature surface according to the obtained at least one second suspected feature surface point set.
Optionally, according to the two-dimensional information of the second suspected feature surface point sets, screen out the second suspected feature surface point sets whose planar area has a shape and/or size conforming to the shape and/or size of the feature surface, and determine the feature surface from the screened-out point sets.
For example, if the suitcase is rectangular and 20 inches in size, the second suspected feature surface point sets C whose planar area matches a rectangle and/or a 20-inch size can be screened out from the two-dimensional information of the point sets B, and each point set C is taken as the surface of a suspected suitcase.
Further, when obtaining the two-dimensional information of a second suspected feature surface point set, the point set is projected onto a two-dimensional space parallel to the feature surface, so that the two-dimensional information is obtained from the three-dimensional point cloud information of the point set.
Specifically, the obtained second suspected feature surface point set B {(x1, y1, z1), (x2, y2, z2), ...} is projected onto a two-dimensional space parallel to the feature surface, and an algorithm is used to find the corresponding two-dimensional rectangular region point set C {(xm, ym), (xn, yn), ...} (m and n are positive integers); in this embodiment a Hough transform algorithm is adopted.
By projecting the three-dimensional space onto a two-dimensional space, this embodiment both preserves the original dimensions of the object and allows recognition by two-dimensional image processing methods. Combining three dimensions with two dimensions, using planar vision to assist stereo vision, makes up for the shortcomings of purely three-dimensional or purely two-dimensional algorithms.
In step S24, after the second suspected feature surface point set C conforming to the shape and/or size of the feature surface is screened out, the two-dimensional rectangular region point set C is restored to three-dimensional space (that is, the height values corresponding to the rectangular region point set are recovered), giving the suspected-suitcase point set D {(xm, ym, zm), (xn, yn, zn), ...}.
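The following sketch combines the projection and screening of this step. The patent finds the rectangular region with a Hough transform; as a simpler stand-in for illustration, OpenCV's minAreaRect is used here to test the projected footprint, and the target dimensions assumed for a "20-inch" case are illustrative only.

```python
import numpy as np
import cv2  # OpenCV

def screen_rectangle(region: np.ndarray, target=(0.35, 0.50),
                     tol: float = 0.05):
    """Project a ground-parallel candidate region onto the ground plane and
    test whether its minimum-area bounding rectangle matches the expected
    suitcase footprint; if so, return the points with their original
    heights kept (the suspected-suitcase set D)."""
    xy = region[:, :2].astype(np.float32)      # projection: drop the height
    (_cx, _cy), (w, h), _angle = cv2.minAreaRect(xy.reshape(-1, 1, 2))
    w, h = sorted((w, h))
    if abs(w - target[0]) <= tol and abs(h - target[1]) <= tol:
        return region   # restore to 3-D: each point's z was never discarded
    return None         # planar area does not match the feature surface
```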
The feature surface can be identified by the above implementation; when a feature portion, such as a handle, is further present on the feature surface, the feature portion can be accurately identified from the relationship between the feature portion and the feature surface. This implementation therefore further includes the following steps, used to identify a feature portion protruding from the feature surface.
S25: According to the three-dimensional point cloud information, calculate the vertical distance of points in the environment relative to the feature surface; screen out the points whose vertical distance is within a distance threshold range, to obtain a suspected feature portion point set, where the distance threshold range depends on the height of the feature portion relative to the feature surface. When the method of the above embodiment yields multiple feature surfaces, it is judged for each feature surface whether, within a certain spatial range, there are points meeting the distance threshold range; if so, the corresponding feature surface is the surface on which the feature portion is disposed.
S26: Determine the feature portion according to the suspected feature portion point set.
Steps S25 and S26 are explained together. After the point cloud information of the three-dimensional point set D is obtained in step S24, the points whose vertical distance lies within the distance threshold range are screened out according to the height coordinates of the points in point set D. For example, suppose the lowest height of the handle relative to the suitcase surface is z - t and the highest is z + t; the distance threshold range of the handle relative to the suitcase surface is then (z - t, z + t). If points are judged to exist within the distance threshold range, optionally it is further judged whether there are continuous points within the threshold range around z, to obtain a suspected handle point set, and the handle is then determined from the suspected handle point set.
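A minimal sketch of the vertical-distance screening of steps S25 and S26 follows, assuming a ground-parallel feature surface at height surface_z; the band parameters z and t are illustrative values, since the patent leaves them to the actual handle geometry.

```python
import numpy as np

def suspected_handle_points(env_points: np.ndarray, surface_z: float,
                            z: float = 0.03, t: float = 0.01) -> np.ndarray:
    """Keep environment points whose vertical distance to the feature
    surface lies in (z - t, z + t), the band where a protruding handle is
    expected relative to the suitcase surface."""
    height = env_points[:, 2] - surface_z       # vertical distance
    mask = (height > z - t) & (height < z + t)
    return env_points[mask]                     # suspected handle point set
```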
S27: According to the three-dimensional point cloud information, determine the position information of the projection of the screened-out points on the feature surface; further screen out the points whose position information meets a position condition, to form the suspected feature portion point set, where the position condition depends on the position of the feature portion relative to the feature surface.
Specifically, after the three-dimensional point cloud information is obtained, the points whose position information meets the position condition are screened according to the position information of the projection of the screened-out points on the feature surface. For example, the handle of a suitcase is generally disposed in the middle region of one face of the suitcase, so the position condition may be set as whether the continuous points correspond to the middle region of the suspected suitcase face. If the judgment is positive, the point set E meeting the position condition can be screened out to constitute the suitcase handle, that is, to lie within the handle position condition: for example, more than 2 cm above the upper surface of the suitcase, possibly further limited to the middle region of the face; if a protrusion exists there, the continuous points of the protrusion can be judged to be the suitcase handle.
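A sketch of the position condition under the middle-region assumption discussed above; the fraction of the face treated as the "middle" is an assumed parameter.

```python
import numpy as np

def middle_region_points(candidates: np.ndarray, surface: np.ndarray,
                         frac: float = 0.5) -> np.ndarray:
    """Keep candidate points whose (x, y) projection falls in the central
    `frac` portion of the surface's bounding box, reflecting the position
    condition that a handle usually sits in the middle of a face."""
    lo = surface[:, :2].min(axis=0)
    hi = surface[:, :2].max(axis=0)
    center, half = (lo + hi) / 2.0, (hi - lo) * (frac / 2.0)
    offset = np.abs(candidates[:, :2] - center)
    return candidates[np.all(offset <= half, axis=1)]   # point set E
```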
The coordinate information used to determine the feature portion in the above steps is based on the visual coordinate system. To determine the feature portion in the real-world coordinate system, the coordinates of all points in point set E must therefore be converted into the real-world coordinate system according to a coordinate system transformation algorithm.
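The coordinate system transformation algorithm is not specified further; assuming the usual rigid-body calibration between the visual and world coordinate systems, the conversion reduces to applying a 4x4 homogeneous transform, as sketched below.

```python
import numpy as np

def visual_to_world(points_visual: np.ndarray,
                    T_world_from_visual: np.ndarray) -> np.ndarray:
    """Map visual-coordinate points into the world coordinate system with a
    4x4 homogeneous transform. Obtaining the transform (extrinsic
    calibration of the visual recognition device) is presupposed here."""
    homogeneous = np.hstack([points_visual,
                             np.ones((len(points_visual), 1))])
    return (homogeneous @ T_world_from_visual.T)[:, :3]
```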
It should be noted that the above embodiment is described taking the search for a suitcase handle parallel to the ground as an example; a handle on the side of the suitcase can of course also be found, as shown in FIG. 3, a schematic diagram of luggage handle positions: the handle 110 on the side of the suitcase 100 is a handle on a surface of the suitcase 100 perpendicular to the ground. It suffices, in step S23 above, to adjust the range of the angle threshold according to the angle between the normal direction of the points in the first description information and the ground direction. If the angle between the normal direction of the feature surface points where the handle 110 of the suitcase 100 is located and the ground coordinates falls within the adjusted angle threshold range, the points within that range are taken as perpendicular to the ground by default; all points perpendicular to the ground are then recorded into a point set P, and the coordinates of point set P are converted into a point set P perpendicular to the ground. Other implementations are similar to those described above and are not repeated here.
It will be appreciated that the method of this embodiment can identify, simultaneously or separately, the handles of a suitcase parallel to the ground and the handles on the side of the suitcase; in some cases, adjusting the range of the angle threshold can find handles forming different angles with the ground.
The present application proposes an object feature recognition method. The method obtains first description information of an object, including three-dimensional point cloud data of the object, and second description information of the environment in which the object is located; obtains reference information, including the normal direction of each point in the 3D point cloud data, according to the first description information; according to the second description information and the reference information, screens out the points whose normal direction forms an angle with the reference feature within an angle threshold range, to obtain a first suspected feature surface point set, where the angle threshold depends on the angle of the feature surface relative to the reference feature; performs region segmentation on the first suspected feature surface point set to obtain at least one second suspected feature surface point set capable of forming a planar area; and determines the feature surface according to the at least one second suspected feature surface point set, thereby obtaining the feature data in the first description information that meets the first condition, where the first condition includes a first association relationship formed by all the feature data with the second description information. The feature surface is determined from the obtained feature data, so the object can be accurately recognized.
On the basis of the above implementations, the present application proposes a luggage handle recognition method; see FIG. 4, a flowchart of an embodiment of the luggage handle recognition method of the present application. The method disclosed in the present application may specifically include the following steps:
S41: Obtain first description information of a suitcase and second description information of the environment in which the suitcase is located.
Optionally, the first description information includes three-dimensional point cloud data of the suitcase; the second description information includes a world coordinate system set according to the environment in which the suitcase is located, and a reference handle surface adopted according to the environment in the world coordinate system, where the reference handle surface and the points of the handle surface data all conform to the first association relationship.
S42: Obtain reference information according to the first description information.
Optionally, the reference information includes the normal direction of each point in the three-dimensional point cloud data.
S43: According to the second description information and the reference information, obtain the handle surface data in the first description information that meets a first condition, where the first condition includes a first association relationship formed by all the handle surface data with the second description information.
Optionally, screen out the points whose normal direction forms an angle with the reference handle surface within an angle threshold range, to obtain a first suspected handle surface point set, where the angle threshold depends on the angle of the handle surface relative to the reference handle surface; perform region segmentation on the first suspected handle surface point set to obtain at least one second suspected handle surface point set capable of forming a planar area; and determine the handle surface according to the at least one second suspected handle surface point set.
Optionally, a three-dimensional region growing algorithm is used to perform the region segmentation on the first suspected handle surface point set.
Optionally, according to the two-dimensional information of the second suspected handle surface point sets, screen out the second suspected handle surface point sets whose planar area has a shape and/or size conforming to the shape and/or size of the handle surface, and determine the handle surface from the screened-out point sets.
Optionally, the method further includes projecting the second suspected handle surface point set onto a two-dimensional space parallel to the handle surface, so as to obtain the two-dimensional information from the three-dimensional point cloud information of the point set.
Optionally, the method is used to identify a handle surface portion protruding from the handle surface, and further includes: calculating, according to the three-dimensional point cloud information, the vertical distance of each point in the environment relative to the handle surface; screening out the points whose vertical distance is within a distance threshold range, to obtain a suspected handle surface portion point set, where the distance threshold range depends on the height of the handle surface portion relative to the handle surface; and determining the handle surface portion according to the suspected handle surface portion point set.
Optionally, the method further includes: determining, according to the three-dimensional point cloud information, the position information of the projection of the screened-out points on the handle surface; and further screening out the points whose position information meets a position condition, to form the suspected handle surface portion point set, where the position condition depends on the position of the handle surface portion relative to the handle surface.
The luggage handle recognition method of the present application is similar to the above implementations and is not described in detail again.
The present application proposes a luggage handle recognition method. The method includes: obtaining first description information of a suitcase and second description information of the environment in which the suitcase is located; obtaining reference information according to the first description information; and obtaining, according to the second description information and the reference information, handle surface data in the first description information that meets a first condition, where the first condition includes a first association relationship formed by all the handle surface data with the second description information. The handle surface is determined from the obtained handle surface data, so the luggage handle can be accurately recognized.
If the above method is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a visual recognition device. The present application proposes a visual recognition device; see FIG. 5, a schematic structural diagram of a visual recognition device of the present application. The visual recognition device 200 disclosed in this embodiment includes a processor 210 and a memory 220; the memory 220 stores a computer program, and the processor 210 executes the computer program to implement the method of any one of the above implementations.
Specifically, the processor 210 is configured to: obtain first description information of an object and second description information of the environment in which the object is located; obtain reference information according to the first description information; and obtain, according to the second description information and the reference information, the feature data in the first description information that meets a first condition, where the first condition includes a first association relationship formed by all the feature data with the second description information.
The specific manner in which the visual recognition device 200 implements the object feature recognition method in this embodiment is similar to the above solutions and is not repeated here. In addition, FIG. 5 shows only the logical relationships between the devices and does not limit their circuit connections.
The visual recognition device 200 provided by the present application can implement any of the object feature recognition methods in the above solutions and can accurately recognize objects.
The present application proposes a robot; see FIG. 6, a schematic structural diagram of a robot of the present application. The robot 300 disclosed in the present application includes a visual recognition device 310, whose structure and specific implementation are similar to the visual recognition device 200 in the above implementations and are not repeated here.
The robot 300 provided by the present application can accurately recognize objects.
In the several implementations provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus implementations described above are merely illustrative; the division into modules or units is only a division by logical function, and other divisions are possible in actual implementation, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, apparatuses or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the implementation.
In addition, the functional units in the implementations of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes a number of instructions to cause a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to perform all or part of the steps of the methods described in the implementations of the present application. The aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above are only implementations of the present application and do not thereby limit the patent scope of the present application. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present application, or any direct or indirect application in other related technical fields, is likewise included within the patent protection scope of the present application.

Claims (20)

  1. An object feature recognition method, characterized in that the method comprises:
    obtaining first description information of the object and second description information of an environment in which the object is located;
    obtaining reference information according to the first description information;
    obtaining, according to the second description information and the reference information, feature data in the first description information that meets a first condition, wherein the first condition comprises a first association relationship formed by all of the feature data with the second description information.
  2. The method according to claim 1, characterized in that the first description information comprises three-dimensional point cloud data of the object.
  3. The method according to claim 2, characterized in that the reference information comprises a normal direction of each point in the three-dimensional point cloud data.
  4. The method according to claim 3, characterized in that the second description information comprises:
    a world coordinate system set according to the environment in which the object is located, and a reference feature adopted according to the environment in the world coordinate system, wherein the reference feature and the points of the feature data all conform to the first association relationship.
  5. The method according to claim 4, characterized in that obtaining the feature data in the first description information that meets the first condition comprises:
    screening out points whose normal direction forms an angle with the reference feature within an angle threshold range, to obtain a first suspected feature surface point set, wherein the angle threshold depends on the angle of the feature surface relative to the coordinates of the reference feature;
    performing region segmentation on the first suspected feature surface point set, to obtain at least one second suspected feature surface point set capable of forming a planar area;
    determining the feature surface according to the at least one second suspected feature surface point set.
  6. The method according to claim 5, characterized in that performing region segmentation on the first suspected feature surface point set comprises:
    performing region segmentation on the first suspected feature surface point set using a three-dimensional region growing algorithm.
  7. The method according to claim 5, characterized in that determining the feature surface according to the second suspected feature surface point set comprises:
    screening out, according to two-dimensional information of the second suspected feature surface point set, a second suspected feature surface point set whose planar area has a shape and/or size conforming to the shape and/or size of the feature surface;
    determining the feature surface according to the screened-out second suspected feature surface point set.
  8. The method according to claim 7, characterized in that determining the feature surface according to the second suspected feature surface point set further comprises:
    projecting the second suspected feature surface point set onto a two-dimensional space parallel to the feature surface, to obtain the two-dimensional information from the three-dimensional point cloud information of the second suspected feature surface point set.
  9. The method according to claim 5, characterized in that the method is used to identify a feature portion protruding from the feature surface, and the method further comprises:
    calculating, according to the three-dimensional point cloud information, a vertical distance of points in the environment relative to the feature surface;
    screening out points whose vertical distance is within a distance threshold range, to obtain a suspected feature portion point set, wherein the distance threshold range depends on a height of the feature portion relative to the feature surface;
    determining the feature portion according to the suspected feature portion point set.
  10. The method according to claim 9, characterized in that the method further comprises:
    determining, according to the three-dimensional point cloud information, position information of the projection of the screened-out points on the feature surface;
    further screening out points whose position information meets a position condition, to form the suspected feature portion point set, wherein the position condition depends on a position of the feature portion relative to the feature surface.
  11. A luggage handle recognition method, characterized in that the method comprises:
    obtaining first description information of the suitcase and second description information of an environment in which the suitcase is located;
    obtaining reference information according to the first description information;
    obtaining, according to the second description information and the reference information, handle surface data in the first description information that meets a first condition, wherein the first condition comprises a first association relationship formed by all of the handle surface data with the second description information.
  12. The method according to claim 11, characterized in that the first description information comprises three-dimensional point cloud data of the suitcase; the reference information comprises a normal direction of each point in the three-dimensional point cloud data;
    the second description information comprises:
    a world coordinate system set according to the environment in which the suitcase is located, and a reference handle surface adopted according to the environment in the world coordinate system, wherein the reference handle surface and the points of the handle surface data all conform to the first association relationship.
  13. The method according to claim 12, characterized in that obtaining the handle surface data in the first description information that meets the first condition comprises:
    screening out points whose normal direction forms an angle with the reference handle surface within an angle threshold range, to obtain a first suspected handle surface point set, wherein the angle threshold depends on the angle of the handle surface relative to the coordinates of the reference handle surface;
    performing region segmentation on the first suspected handle surface point set, to obtain at least one second suspected handle surface point set capable of forming a planar area;
    determining the handle surface according to the at least one second suspected handle surface point set.
  14. The method according to claim 13, characterized in that performing region segmentation on the first suspected handle surface point set comprises:
    performing region segmentation on the first suspected handle surface point set using a three-dimensional region growing algorithm.
  15. The method according to claim 13, characterized in that determining the handle surface according to the second suspected handle surface point set comprises:
    screening out, according to two-dimensional information of the second suspected handle surface point set, a second suspected handle surface point set whose planar area has a shape and/or size conforming to the shape and/or size of the handle surface;
    determining the handle surface according to the screened-out second suspected handle surface point set.
  16. The method according to claim 15, characterized in that determining the handle surface according to the second suspected handle surface point set further comprises:
    projecting the second suspected handle surface point set onto a two-dimensional space parallel to the handle surface, to obtain the two-dimensional information from the three-dimensional point cloud information of the second suspected handle surface point set.
  17. The method according to claim 13, characterized in that the method is used to identify a handle surface portion protruding from the handle surface, and the method further comprises:
    calculating, according to the three-dimensional point cloud information, a vertical distance of points in the environment relative to the handle surface;
    screening out points whose vertical distance is within a distance threshold range, to obtain a suspected handle surface portion point set, wherein the distance threshold range depends on a height of the handle surface portion relative to the handle surface;
    determining the handle surface portion according to the suspected handle surface portion point set.
  18. The method according to claim 17, characterized in that the method further comprises:
    determining, according to the three-dimensional point cloud information, position information of the projection of the screened-out points on the handle surface;
    further screening out points whose position information meets a position condition, to form the suspected handle surface portion point set, wherein the position condition depends on a position of the handle surface portion relative to the handle surface.
  19. A visual recognition device, characterized in that the visual recognition device comprises a processor and a memory, the memory stores a computer program, and the processor executes the computer program to implement the method of any one of claims 1-18.
  20. A robot, characterized in that the robot comprises the visual recognition device of claim 19.
PCT/CN2018/090421 2018-06-08 2018-06-08 Object feature recognition method, visual recognition device, and robot WO2019232782A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2018/090421 WO2019232782A1 (zh) 2018-06-08 2018-06-08 Object feature recognition method, visual recognition device, and robot
CN201880003237.1A CN109641351B (zh) 2018-06-08 2018-06-08 Object feature recognition method, visual recognition device, and robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/090421 WO2019232782A1 (zh) 2018-06-08 2018-06-08 Object feature recognition method, visual recognition device, and robot

Publications (1)

Publication Number Publication Date
WO2019232782A1 true WO2019232782A1 (zh) 2019-12-12

Family

ID=66060194

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/090421 WO2019232782A1 (zh) 2018-06-08 2018-06-08 Object feature recognition method, visual recognition device, and robot

Country Status (2)

Country Link
CN (1) CN109641351B (zh)
WO (1) WO2019232782A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112816976A (zh) * 2021-01-29 2021-05-18 三一海洋重工有限公司 Container door orientation detection method and system therefor, storage medium, and electronic device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5351310A (en) * 1991-05-21 1994-09-27 International Business Machines Corporation Generalized shape autocorrelation for shape acquisition and recognition
US5864779A (en) * 1996-02-29 1999-01-26 Fujitsu Limited Strict recognizing apparatus using observation point
CN102930246A (zh) * 2012-10-16 2013-02-13 同济大学 Indoor scene recognition method based on point cloud segment segmentation
CN104217205A (zh) * 2013-05-29 2014-12-17 华为技术有限公司 Method and system for identifying a user activity type
CN108122081A (zh) * 2016-11-26 2018-06-05 沈阳新松机器人自动化股份有限公司 Robot and inventory management method therefor

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8160366B2 (en) * 2008-06-20 2012-04-17 Sony Corporation Object recognition device, object recognition method, program for object recognition method, and recording medium having recorded thereon program for object recognition method
JP5897532B2 (ja) 2013-11-05 2016-03-30 ファナック株式会社 Apparatus and method for picking up articles placed in three-dimensional space using a robot
CN105303189B (zh) 2014-07-29 2019-08-20 阿里巴巴集团控股有限公司 Method and device for detecting a specific identification image in a predetermined area
CN105512689A (zh) 2014-09-23 2016-04-20 苏州宝时得电动工具有限公司 Image-based grass recognition method and lawn maintenance robot
CN105976375A (zh) 2016-05-06 2016-09-28 苏州中德睿博智能科技有限公司 Pallet recognition and positioning method based on RGB-D type sensors
CN106826833B (zh) 2017-03-01 2020-06-16 西南科技大学 Autonomous navigation robot system based on 3D stereo perception technology
CN107186708B (zh) 2017-04-25 2020-05-12 珠海智卓投资管理有限公司 Hand-eye servo robot grasping system and method based on deep learning image segmentation technology
CN107239794B (zh) 2017-05-18 2020-04-28 深圳市速腾聚创科技有限公司 Point cloud data segmentation method and terminal

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5351310A (en) * 1991-05-21 1994-09-27 International Business Machines Corporation Generalized shape autocorrelation for shape acquisition and recognition
US5864779A (en) * 1996-02-29 1999-01-26 Fujitsu Limited Strict recognizing apparatus using observation point
CN102930246A (zh) * 2012-10-16 2013-02-13 同济大学 Indoor scene recognition method based on point cloud segment segmentation
CN104217205A (zh) * 2013-05-29 2014-12-17 华为技术有限公司 Method and system for identifying a user activity type
CN108122081A (zh) * 2016-11-26 2018-06-05 沈阳新松机器人自动化股份有限公司 Robot and inventory management method therefor

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112816976A (zh) * 2021-01-29 2021-05-18 三一海洋重工有限公司 Container door orientation detection method and system therefor, storage medium, and electronic device
CN112816976B (zh) * 2021-01-29 2023-05-16 三一海洋重工有限公司 Container door orientation detection method and system therefor, storage medium, and electronic device

Also Published As

Publication number Publication date
CN109641351B (zh) 2021-11-26
CN109641351A (zh) 2019-04-16

Similar Documents

Publication Publication Date Title
WO2018098915A1 (zh) Control method for cleaning robot, and cleaning robot
CN110458897B (zh) Multi-camera automatic calibration method and system, and monitoring method and system
CN107103299B (zh) Method for counting people in surveillance video
US11263491B2 (en) Person search method based on person re-identification driven localization refinement
WO2014194620A1 (zh) Image feature extraction, training and detection methods, and module, device and system
CN107341815B (zh) Violent motion detection method based on multi-view stereo vision scene flow
WO2021075772A1 (ko) Object detection method using multiple-area detection, and apparatus therefor
CN111209870A (zh) Rapid registration method for binocular liveness-detection cameras, and system and device therefor
WO2019232782A1 (zh) Object feature recognition method, visual recognition device, and robot
CN112488022B (zh) Surround-view panoramic monitoring method, device and system
CN113762009B (zh) Crowd counting method based on multi-scale feature fusion and a dual attention mechanism
CN111968180B (zh) High-precision multi-degree-of-freedom object pose estimation method and system based on a reference plane
WO2019127287A1 (zh) First smart device, connection method therefor, and device having storage function
WO2020171304A1 (ko) Image restoration device and method
CN114399552B (zh) Behavior recognition and positioning method for indoor monitoring environments
WO2019237223A1 (zh) Robot system, automatic calibration method, and storage device
WO2023019699A1 (zh) Depression-angle face recognition method and system based on a 3D face model
CN112784796A (zh) Self-learning non-intrusive face recognition system
CN112101107A (зh) Intelligent traffic light recognition method for in-the-loop simulation of intelligent connected model vehicles
WO2019178717A1 (zh) Binocular matching method, visual imaging device, and device having storage function
WO2022141721A1 (zh) Multi-modal unsupervised pedestrian pixel-level semantic annotation method and system
CN116866522B (zh) Remote monitoring method
WO2024087927A1 (zh) Pose determination method and apparatus, computer-readable storage medium, and electronic device
WO2022158655A1 (ko) Tidying system using a tidying robot
WO2024063242A1 (ко) Image analysis device and method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18921672

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18921672

Country of ref document: EP

Kind code of ref document: A1