WO2019178738A1 - 物品定位方法及系统 (Item positioning method and system) - Google Patents

物品定位方法及系统 (Item positioning method and system)

Info

Publication number
WO2019178738A1
WO2019178738A1 · PCT/CN2018/079602 · CN2018079602W
Authority
WO
WIPO (PCT)
Prior art keywords
item
appearance
attribute
identification information
camera
Prior art date
Application number
PCT/CN2018/079602
Other languages
English (en)
French (fr)
Inventor
张站朝
Original Assignee
深圳前海达闼云端智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳前海达闼云端智能科技有限公司 filed Critical 深圳前海达闼云端智能科技有限公司
Priority to PCT/CN2018/079602 priority Critical patent/WO2019178738A1/zh
Priority to CN201880001050.8A priority patent/CN108701239B/zh
Priority to US16/556,073 priority patent/US20190385337A1/en
Publication of WO2019178738A1 publication Critical patent/WO2019178738A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K17/00Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations
    • G06K17/0022Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations arrangements or provisions for transferring data to distant stations, e.g. from a sensing device
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/08Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
    • G06Q10/083Shipping
    • G06Q10/0833Tracking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/248Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/54Extraction of image or video features relating to texture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30112Baggage; Luggage; Suitcase

Definitions

  • The present disclosure relates to the field of artificial intelligence, and in particular to an item positioning method and system.
  • In the logistics industry, a bar code or two-dimensional code is mainly attached to the outer packaging of items, and sorting is achieved either by manually scanning the bar code or two-dimensional code of each item with a scanning device, or by a special device that automatically scans bar codes or two-dimensional codes from all directions (360 degrees).
  • Manual sorting suffers from high labor cost and low efficiency, while automatic sorting is costly, and the bar code or two-dimensional code may be occluded and unrecognizable; in that case manual intervention is still required, and efficiency remains low.
  • the present disclosure provides an item positioning method and system, which can realize visual tracking and positioning of items.
  • An item positioning method applied to an item positioning system, comprising:
  • when each item enters a delivery queue, identifying the appearance attribute of the item by a surveillance camera to obtain identification information of the item;
  • determining, according to the position at which the item enters the delivery queue and the identification information of the item, the item sequence of the delivery queue after the item is added;
  • locating items in the delivery queue by the appearance attributes of the items and the item sequence.
  • An item positioning system, comprising:
  • at least one camera for identifying appearance attributes of items;
  • a processor coupled to the at least one camera and configured to: when each item enters the delivery queue, identify the appearance attribute of the item by the at least one camera to obtain identification information of the item; determine, according to the position at which the item enters the delivery queue and the identification information of the item, the item sequence of the delivery queue after the item is added; and locate items in the delivery queue by the appearance attributes of the items and the item sequence.
  • A computer program product comprising a computer program executable by a programmable device, the computer program having a code portion for performing the method of any one of the above first aspect when executed by the programmable device.
  • In the embodiments of the present disclosure, when an item enters the delivery queue, the appearance attribute of the item can be identified by a camera to obtain the identification information of the item; then, according to the position at which the item enters the delivery queue and the identification information of the item, the item sequence of the delivery queue after the item is added is determined.
  • When locating items, they can then be tracked and located by the appearance attributes of the items and the item sequence of the delivery queue. In this way, locating and tracking of items can be achieved simply by deploying cameras, which enables item images to be visualized at any time while reducing cost; furthermore, combining the item sequence with the appearance attributes of the items improves positioning accuracy during delivery.
  • FIG. 1 is a flowchart of an item positioning method according to an exemplary embodiment
  • FIG. 2 is a schematic view showing an article conveyed by a conveyor belt, according to an exemplary embodiment
  • FIG. 3 is a schematic diagram of updating a sequence of items on a conveyor belt according to an exemplary embodiment
  • FIG. 4 is a block diagram of an item location system, according to an exemplary embodiment.
  • FIG. 1 is a flowchart of an item positioning method according to an exemplary embodiment. As shown in Figure 1, the method includes the following steps.
  • Step S11: when each item enters the delivery queue, the appearance attribute of the item is identified by a surveillance camera to determine the identification information of the item.
  • The delivery queue is the queue formed when items are queued for delivery. After the first item to be delivered joins the delivery queue, further items to be delivered can keep joining it.
  • When each item (including the first item to join the delivery queue) enters the delivery queue, the identification information of the item can be determined by identifying the appearance attribute of the item.
  • The identification information is a unique identifier of the item; different items have different identification information.
  • The identification information of an item may correspond, for example, to the identity information of the owner of the item.
  • Taking baggage as an example, the identification information of the item may include the ID number of the owner of the item, travel information, and the like.
  • the appearance attribute of the item is used to describe the appearance of the item, and the appearance attribute of the item may include one or more of a category attribute, a color attribute, a size attribute, a shape attribute, a material attribute, and the like.
  • Since locating items in the delivery queue by scanning their identification information with scanning devices is costly, to reduce cost and locate items visually, the embodiments of the present disclosure propose associating the appearance attribute of an item with its identification information, so that the appearance attribute can serve as one of the reference factors for locating items in the delivery queue.
  • The appearance attributes of an item can be obtained simply by capturing images with a camera, which is convenient and quick.
  • In one embodiment, associating the appearance attribute of the item with the identification information of the item includes: for each item to enter the item delivery queue, acquiring the identification information and the appearance attribute of the item; and associating the acquired appearance attribute with the identification information of the corresponding item.
  • Correspondingly, identifying the appearance attribute of the item by the surveillance camera to determine the identification information of the item includes: identifying the appearance attribute of the item by the surveillance camera;
  • and determining the identification information of the item according to the identified appearance attribute and the association between appearance attributes and identification information.
  • Before an item enters the delivery queue, the identification information and appearance attribute of the item are first acquired, and the identification information and appearance attribute of the same item are associated.
  • The identification information of the item may be obtained by scanning the item with a scanning device, or by capturing an image of the item with a camera.
  • The appearance information of the item may be obtained by capturing an image of the item with a camera.
  • Associating the identification information of the item with its appearance attribute means establishing a binding relationship between them, so that the identification information of the item can be recognized from its appearance attribute when the item is about to enter the delivery queue.
  • the item delivery queue is a queue formed by items on the transfer area of the main conveyor.
  • the transfer area of the main conveyor is in communication with the transfer area of one or more of the front conveyors, and the items on the transfer area of the front conveyor are items to be entered into the item transfer queue.
  • For ease of locating items, when an item is placed on a front conveyor, the two-dimensional code on the surface of the item is scanned by a scanning device (not shown in FIG. 2) to obtain the identification information of the item, or the item is assigned a number to obtain its identification information.
  • At the same time, the item is photographed by an entry camera to obtain the appearance attribute of the item.
  • the obtained identification information and the appearance attribute are bound to obtain an association relationship between the identification information of the item and the appearance attribute of the item.
  • the identification information and appearance attributes of the items on the front conveyor can be bound by referring to the above method.
  • When an item on a front conveyor is about to be transferred to the main conveyor, the appearance attribute of the item is identified by a camera around the main conveyor, and the identification information of the item is determined according to the identified appearance attribute and the association established by the above binding method.
  • As shown in FIG. 2, when a heart-shaped item enters front conveyor 3, its appearance attribute is recognized by the entry camera and bound to identification information A'.
  • When the heart-shaped item is about to enter the main conveyor, its appearance attribute is identified by surveillance camera 3; the recognition result includes, for example, a heart shape. The appearance attribute is then compared with each appearance attribute in the one or more recently established associations, and the identification information associated with the matching appearance attribute is A'.
  • By binding the appearance attribute and the identification information of every item entering the conveyor, each item in the delivery area has a corresponding association between identification information and appearance attribute. Then, when an item is about to enter the delivery queue or during subsequent delivery, the identification information of the item can be obtained by capturing its appearance attribute with a surveillance camera; possible manners are described below.
  • In one embodiment, the appearance attribute includes at least one of a category attribute, a color attribute, a size attribute, a shape attribute, and a material attribute.
  • Correspondingly, identifying the appearance attribute of the item by the surveillance camera to determine the identification information of the item includes: identifying the item under each appearance attribute dimension by the surveillance camera, and determining the confidence of each recognition result;
  • and comparing the recognition result and the confidence under each appearance attribute dimension with the appearance attributes in the association for similarity, to determine the identification information of the item.
  • The present disclosure can integrate appearance attributes of multiple dimensions to determine the identification information of an item.
  • The more appearance attribute dimensions are used, the more accurate the recognition result.
  • When appearance attributes are recognized from images captured by surveillance cameras, the recognition result is often affected by factors such as the deployment position of the camera, lighting, viewing angle, and deformation of the item during delivery.
  • Therefore, a confidence level is output with each recognition result and used as a factor in determining the identification information of the item.
  • How recognition is performed under each appearance attribute dimension is not limited in the embodiments of the present disclosure; possible manners are described below.
  • The category attribute may be identified, for example, by classifying item-category features, such as whether the item has a pull rod, a handle, or a strap; the category is then determined by checking in the image whether the item has these features, and a confidence for the determination is given. Alternatively, after deep-learning-based training on a large number of item images, category determination may be performed by an item-category neural network recognition model, which outputs a confidence.
  • The color attribute may be identified, for example, by comparing the pixel color values of the item in the image with reference colors for similarity, yielding the color value of the item and a confidence; or, after deep-learning-based training on a large number of item images, color determination may be performed by an item-color neural network recognition model, which outputs a confidence.
  • The size attribute may be identified, for example, based on the camera's own parameters such as focal length, angle, and resolution, by analyzing the approximate size of the item from the image, e.g. about 50 cm long, 40 cm wide, and 40 cm high. Shooting from different angles may make these values inaccurate; a confidence can be derived from the position of the item in the image, the pixel values, and the like.
  • The shape attribute may be identified, for example, by classifying the shape features of the item, such as whether the whole is a cuboid, a cube, or a cylinder, yielding a confidence; or, after deep-learning-based training on a large number of item images, shape similarity classification may be performed by an item-shape neural network recognition model, which outputs a confidence.
  • The material attribute may be identified, for example, by classifying the material texture features of the item, such as plastic, canvas, or paper, yielding a confidence; or, after deep-learning-based training on a large number of item images, material similarity classification may be performed by an item-texture neural network recognition model, which outputs a confidence.
  • After the item is identified under each appearance attribute dimension, the confidence of each recognition result can be determined. For example, for a cuboid with a pull-rod feature and four universal rollers, the confidence that it is a trolley case may be 85%, while the confidence that it is a soft bag is 65%.
  • The multi-dimensional appearance attributes of an item can be represented by a vector space model, with each item corresponding to a feature vector. A vector similarity comparison, such as Euclidean distance, cosine similarity, or another similarity measure, can then be used to compare the recognized result with the result recognized at binding time or by the previous camera, and the identification information corresponding to the identified appearance attribute can be determined from the comparison result.
  • Similarly, the other cameras around the main conveyor in FIG. 2 can identify the appearance attributes of items by the above method.
  • Considering that items of different owners may have the same appearance attributes, step S11 may include the following step:
  • determining the identification information of the item according to the appearance attribute of the item and the order in which the item enters the item positioning system.
  • The order in which items enter the item positioning system may be taken as the time at which each item enters a front conveyor. As shown in FIG. 3, two gray triangular items enter front conveyor 1 in succession (in order of entry, their identification information is A and B respectively). By appearance attribute alone it may be impossible to accurately distinguish which item is A and which is B; but by combining the appearance attribute with the order in which the two items entered front conveyor 1, it can be accurately determined that the gray triangle that entered first is A and the one that entered later is B.
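The entry-order disambiguation described above can be sketched as follows. This is a minimal illustration with hypothetical names; the patent does not specify a data structure — it only requires that identically-looking items be told apart by the order in which they entered the system:

```python
# Hedged sketch: when several pending items share the same appearance
# attributes, assign identification information by entry order (FIFO).
from collections import deque

def resolve_by_entry_order(observed_appearance, pending):
    """pending maps an appearance key to a FIFO deque of identification
    infos, in the order the items entered the front conveyor."""
    queue = pending.get(observed_appearance)
    return queue.popleft() if queue else None

# Two gray triangles entered front conveyor 1 in the order A then B:
pending = {("gray", "triangle"): deque(["A", "B"])}
print(resolve_by_entry_order(("gray", "triangle"), pending))  # → A
print(resolve_by_entry_order(("gray", "triangle"), pending))  # → B
```

Because the deque is consumed front-first, the first gray triangle observed at the main conveyor is resolved to A and the second to B, matching the FIG. 3 example.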
  • Step S12: determine, according to the position at which the item enters the delivery queue and the identification information of the item, the item sequence of the delivery queue after the item is added.
  • After an item joins the delivery queue, the item sequence of the delivery queue is recorded.
  • As shown in FIG. 2, the current item sequence on the main conveyor is A (blue cylinder), B (black triangle), C (yellow square), D (green cylinder).
  • After the identification information of an item is bound to its appearance attribute, the item enters a front conveyor.
  • Since multiple front conveyors converge onto one main conveyor, the images captured by the camera at the entrance of the main conveyor are monitored to determine the position at which the item enters the delivery queue, and the item sequence on the main conveyor is updated based on that position.
  • Embodiments of the present disclosure consider that during a longer-distance transfer, an item may be blocked or may tumble at an intermediate or turning position, causing the item sequence to change.
  • To address this, the embodiments of the present disclosure propose continuously detecting, by order-keeping cameras, whether the item sequence has changed, so as to maintain the accuracy of the item sequence.
  • In one embodiment, the order-keeping cameras are distributed at corner positions of the item delivery area.
  • Maintaining the accuracy of the item sequence by order-keeping cameras includes the following steps:
  • identifying the delivery order of the items in the delivery queue by the order-keeping cameras during delivery;
  • when the current delivery order is inconsistent with the recorded item sequence, updating the item sequence according to the current delivery order.
  • Specifically, based on images captured by an order-keeping camera, the appearance attributes of the items currently on the conveyor are recognized and compared with the item sequence and attributes maintained in the system; when an inconsistency is found, the sequence is updated.
  • For example, the current item sequence on the main conveyor is A (blue cylinder), B (black triangle), C (yellow square), D (green cylinder).
  • Order-keeping camera 2 recognizes that the black triangular item is now behind the blue cylindrical item, which is inconsistent with the recorded item sequence; the recorded item sequence is therefore updated to B (black triangle), A (blue cylinder), C (yellow square), D (green cylinder).
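The sequence-update step above can be sketched in code. This is a hedged illustration only (names and the contiguity assumption are mine, not the patent's): the recorded sequence is reconciled with the order a camera actually observed.

```python
# Minimal sketch: reconcile the recorded item sequence with the order
# observed by an order-keeping camera. Assumes the observed items occupy
# a contiguous span of the recorded sequence (true for a conveyor window).

def update_sequence(recorded, observed_window):
    """Replace the contiguous span of `recorded` covered by
    `observed_window` with the order the camera actually saw."""
    if not observed_window:
        return recorded
    positions = sorted(recorded.index(item) for item in observed_window)
    start, end = positions[0], positions[-1] + 1
    return recorded[:start] + list(observed_window) + recorded[end:]

recorded = ["A", "B", "C", "D"]   # A blue cylinder, B black triangle, ...
observed = ["B", "A"]             # camera 2 sees B now ahead of A
print(update_sequence(recorded, observed))  # → ['B', 'A', 'C', 'D']
```

A real system would also have to handle items that left the window or were misrecognized; this sketch covers only the swap case described in the example.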
  • Step S13: locate the items in the delivery queue by the appearance attributes of the items and the item sequence.
  • Taking baggage check-in as an example, locating an item means confirming the identification information of the baggage at each position, so that the baggage is correctly sent to the corresponding aircraft; or, when passengers wait to claim baggage, locating an item means confirming the identification information and position of each bag on the conveyor, so that each passenger can view the current position of his or her baggage.
  • The present disclosure combines appearance attributes with the obtained item sequence to locate items in the delivery queue.
  • Images of items are captured by cameras distributed around the main conveyor to obtain their appearance attributes; because appearance attributes may be identical or similar between items, the item sequence is further combined to accurately locate items in the delivery queue.
  • In the first case, no items have similar appearance attributes.
  • The identification information of an item can then be determined directly from its appearance attribute, and the position of the item can be obtained by querying the item sequence.
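For this first case, locating reduces to a direct lookup in the recorded item sequence. A minimal sketch (hypothetical names; an illustration, not the patent's implementation):

```python
# Hedged sketch: when an item's appearance is unique, its identification
# information is determined directly, and its position in the delivery
# queue is found by querying the recorded item sequence.

def locate(item_id, sequence):
    """Return the position (index) of the item in the delivery queue,
    or None if the item is not in the queue."""
    try:
        return sequence.index(item_id)
    except ValueError:
        return None

sequence = ["A", "B", "C", "D"]   # recorded order on the main conveyor
print(locate("C", sequence))      # → 2
```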
  • step S13 includes:
  • the present disclosure provides an item positioning system 400, the item positioning system 400 comprising:
  • At least one camera 401 for identifying an appearance attribute of the item
  • a processor 402 coupled to the at least one camera 401 and configured to: when each item enters the delivery queue, identify the appearance attribute of the item by the at least one camera 401 to obtain identification information of the item; determine, according to the position at which the item enters the delivery queue and the identification information of the item, the item sequence of the delivery queue after the item is added;
  • and locate items in the delivery queue by the appearance attributes of the items and the item sequence.
  • In one embodiment, the processor 402 is further configured to: identify the appearance attribute of the item by the at least one camera 401; and determine the identification information of the item according to the identified appearance attribute and the association between appearance attributes and identification information.
  • In one embodiment, the at least one camera 401 includes order-keeping cameras, and the processor 402 is further configured to: identify the delivery order of the items in the delivery queue by the order-keeping cameras during delivery; and, when the current delivery order is inconsistent with the item sequence, update the item sequence according to the current delivery order.
  • In one embodiment, the order-keeping cameras are distributed at corner positions of the item delivery area.
  • In one embodiment, the appearance attribute includes at least one of a category attribute, a color attribute, a size attribute, a shape attribute, and a material attribute,
  • and the processor 402 is configured to: identify the item under each appearance attribute dimension and determine the confidence of each recognition result;
  • and compare the recognition results and confidences under each appearance attribute dimension with the appearance attributes in the association for similarity, to determine the identification information of the item.
  • In one embodiment, the delivery queue includes a target item to be located, and the processor 402 is configured to:
  • determine the identification information and the current position of the target item.
  • In one embodiment, the processor 402 is configured to:
  • determine the identification information of the item according to the appearance attribute of the item and the order in which the item enters the item positioning system 400.
  • According to another aspect, a computer program product comprises a computer program executable by a programmable device, the computer program having a code portion for performing the above item positioning method when executed by the programmable device.


Abstract

The present disclosure relates to an item positioning method and system capable of visually tracking and locating items. The method comprises: when each item enters a delivery queue, identifying the appearance attribute of the item by a surveillance camera to determine identification information of the item; determining, according to the position at which the item enters the delivery queue and the identification information of the item, the item sequence of the delivery queue after the item is added; and locating items in the delivery queue by the appearance attributes of the items and the item sequence.

Description

Item positioning method and system
Technical Field
The present disclosure relates to the field of artificial intelligence, and in particular to an item positioning method and system.
Background
In transportation hub scenarios such as airports and railway stations, and in the logistics industry, the delivery of items is frequently involved, for example baggage check-in and sorting, parcel sorting, and so on.
At present, in scenarios such as airports and railway stations where customer baggage needs to be checked in, automatic sorting and tracking of baggage is mainly achieved through radio-frequency identification of bar-coded RFID tags. However, RFID readers are costly, and efficiency is low.
In the logistics industry, a bar code or two-dimensional code is mainly attached to the outer packaging of items, and sorting is achieved either by manually scanning the bar code or two-dimensional code of each item with a scanning device, or by a special device that automatically scans bar codes or two-dimensional codes from all directions (360 degrees). Manual sorting suffers from high labor cost and low efficiency, while automatic sorting is costly and the bar code or two-dimensional code may be occluded and unrecognizable, in which case manual intervention is still required and efficiency remains low.
It can be seen that there is currently no good method for tracking and locating items.
Summary
To overcome the problems in the related art, the present disclosure provides an item positioning method and system capable of visually tracking and locating items.
According to a first aspect of the embodiments of the present disclosure, an item positioning method applied to an item positioning system is provided, comprising:
when each item enters a delivery queue, identifying the appearance attribute of the item by a surveillance camera, to obtain identification information of the item;
determining, according to the position at which the item enters the delivery queue and the identification information of the item, the item sequence of the delivery queue after the item is added;
locating items in the delivery queue by the appearance attributes of the items and the item sequence.
According to a second aspect of the embodiments of the present disclosure, an item positioning system is provided, comprising:
at least one camera for identifying appearance attributes of items;
a processor coupled to the at least one camera and configured to: when each item enters the delivery queue, identify the appearance attribute of the item by the at least one camera, to obtain identification information of the item; determine, according to the position at which the item enters the delivery queue and the identification information of the item, the item sequence of the delivery queue after the item is added; and locate items in the delivery queue by the appearance attributes of the items and the item sequence.
According to a third aspect of the embodiments of the present disclosure, a computer program product is provided, comprising a computer program executable by a programmable device, the computer program having a code portion for performing the method of any one of the first aspect when executed by the programmable device.
The technical solutions provided by the embodiments of the present disclosure may include the following beneficial effects:
In the embodiments of the present disclosure, when an item enters the delivery queue, the appearance attribute of the item can be identified by a camera to obtain the identification information of the item; then, according to the position at which the item enters the delivery queue and the identification information of the item, the item sequence of the delivery queue after the item is added is determined. When locating items, they can then be tracked and located by the appearance attributes of the items and the item sequence of the delivery queue. In this way, locating and tracking of items is achieved simply by deploying cameras, which enables item images to be visualized at any time while reducing cost; furthermore, combining the item sequence with the appearance attributes of the items improves positioning accuracy during delivery.
Brief Description of the Drawings
The accompanying drawings are provided for a further understanding of the present disclosure and constitute a part of the specification; together with the detailed description below, they serve to explain the present disclosure but do not limit it. In the drawings:
FIG. 1 is a flowchart of an item positioning method according to an exemplary embodiment;
FIG. 2 is a schematic diagram of items being conveyed by conveyor belts according to an exemplary embodiment;
FIG. 3 is a schematic diagram of updating the item sequence on a conveyor belt according to an exemplary embodiment;
FIG. 4 is a block diagram of an item positioning system according to an exemplary embodiment.
Detailed Description
Specific embodiments of the present disclosure are described in detail below with reference to the accompanying drawings. It should be understood that the specific embodiments described here are merely used to illustrate and explain the present disclosure and are not intended to limit it.
Please refer to FIG. 1, which is a flowchart of an item positioning method according to an exemplary embodiment. As shown in FIG. 1, the method includes the following steps.
Step S11: when each item enters the delivery queue, identify the appearance attribute of the item by a surveillance camera to determine the identification information of the item.
The delivery queue is the queue formed when items are queued for delivery; after the first item to be delivered joins the delivery queue, further items to be delivered can keep joining it. When each item (including the first item to join the delivery queue) enters the delivery queue, its identification information can be determined by identifying its appearance attribute. The identification information is a unique identifier of the item; different items have different identification information.
The identification information of an item may correspond, for example, to the identity information of the owner of the item. Taking baggage as an example, the identification information of the item may include the ID number of the owner, travel information, and the like. The appearance attribute of an item describes its appearance and may include one or more of a category attribute, a color attribute, a size attribute, a shape attribute, a material attribute, and the like.
Since locating items in the delivery queue by scanning their identification information with scanning devices is costly, to reduce cost and locate items in the delivery queue visually, the embodiments of the present disclosure propose associating the appearance attribute of an item with its identification information, so that the appearance attribute can serve as one of the reference factors for locating items in the delivery queue. The appearance attributes of an item can be obtained simply by capturing images with a camera, which is convenient and quick.
In one embodiment, associating the appearance attribute of an item with its identification information includes the following steps:
for each item to enter the item delivery queue, acquiring the identification information and the appearance attribute of the item;
associating the acquired appearance attribute with the identification information of the corresponding item;
correspondingly, identifying the appearance attribute of the item by the surveillance camera to determine the identification information of the item includes:
identifying the appearance attribute of the item by the surveillance camera;
determining the identification information of the item according to the identified appearance attribute and the association between appearance attributes and identification information.
Before an item enters the delivery queue, the identification information and appearance attribute of the item are first acquired, and the identification information and appearance attribute of the same item are associated. The identification information of the item may be obtained by scanning the item with a scanning device, or by capturing an image of the item with a camera. The appearance information of the item may be obtained by capturing an image of the item with a camera. Associating the identification information and the appearance attribute of an item means establishing a binding relationship between them, so that the identification information of the item can be recognized from its appearance attribute when the item is about to enter the delivery queue, which facilitates determining the item sequence of the delivery queue after the item joins.
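The binding step above can be sketched minimally in code. This is an illustration only, with hypothetical names; the patent does not prescribe any particular data structure for the association:

```python
# Minimal sketch of binding an item's identification information to its
# appearance attributes before it enters the delivery queue, and looking
# the identification back up from an observed appearance. Names are
# hypothetical; this is not the patent's implementation.

class BindingRegistry:
    def __init__(self):
        # list of (identification info, appearance attribute dict) pairs
        self._bindings = []

    def bind(self, item_id, appearance):
        """Record the association established at the front conveyor."""
        self._bindings.append((item_id, dict(appearance)))

    def lookup(self, observed):
        """Return the id whose bound attributes match the observation."""
        for item_id, appearance in self._bindings:
            if appearance == observed:
                return item_id
        return None

registry = BindingRegistry()
registry.bind("A'", {"shape": "heart", "color": "red"})
print(registry.lookup({"shape": "heart", "color": "red"}))  # → A'
```

The exact-match `lookup` here stands in for the similarity comparison described later; a real system would compare confidence-weighted attribute vectors rather than require equality.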
For example, referring to Fig. 2, the item conveying queue is the queue formed by items in the conveying area of the main conveyor belt. The main belt's conveying area connects to the conveying areas of one or more feeder belts, and the items on the feeder belts are the items about to enter the conveying queue. To facilitate positioning, when an item is placed on a feeder belt, a scanning device (not shown in Fig. 2) scans the QR code on the item's surface to obtain its identification information, or the item is assigned a number to obtain its identification information. At the same time, an entry camera photographs the item to obtain its appearance attributes.
Next, the obtained identification information and appearance attributes are bound, yielding an association between the item's identification information and its appearance attributes. For each item placed on a feeder belt, the above method can be used to bind the identification information and appearance attributes of the items on the feeder belts.
When an item on a feeder belt is about to be transferred to the main belt, a camera around the main belt identifies the item's appearance attributes, and the item's identification information is determined from the identified attributes and the association established by the binding method above.
As shown in Fig. 2, when the heart-shaped item enters feeder belt 3, the entry camera identifies its appearance attributes, which are bound to identification information A'. As the heart-shaped item is about to enter the main belt, monitoring camera 3 identifies its appearance attributes; the recognition result includes, for example, "heart shape". This appearance attribute is then compared against each attribute in the most recently established association(s), and the identification information associated with the matching attribute is A'.
By binding appearance attributes to identification information for every item entering the belt, every item in the conveying area has a corresponding association between identification information and appearance attributes. Then, when an item is about to enter the conveying queue, or later during conveyance, the item's identification information can be obtained by having a monitoring camera capture its appearance attributes. Possible ways of doing so are described below.
In one embodiment, the appearance attributes include at least one of a category attribute, a color attribute, a size attribute, a shape attribute, and a material attribute. Correspondingly, identifying the item's appearance attributes via the monitoring camera to determine the item's identification information includes:
identifying the item via the monitoring camera in each appearance attribute dimension, and determining the confidence of each recognition result;
comparing the recognition results and confidences in each attribute dimension against the appearance attributes in the association, by similarity, to determine the item's identification information.
The present disclosure can combine appearance attributes across multiple dimensions to determine an item's identification information: the more attribute dimensions, the more accurate the result. Because recognition from images captured by a monitoring camera is affected by factors such as the camera's mounting position, lighting, viewing angle, and deformation of the item during conveyance, a confidence is output with each recognition result and is taken into account when determining the identification information. The embodiments of the present disclosure do not limit how recognition is performed in each attribute dimension; possible approaches are described below.
Category attributes can be recognized, for example, by classifying the item's category features, such as whether it has a pull rod, a handle, or a strap; image recognition then determines the category from the presence of these features and outputs a confidence for the determination. Alternatively, after deep-learning training on a large number of item images, the category can be determined by an item-category neural network model, again with a confidence.
Color attributes can be recognized, for example, by comparing the color values of the item's pixels in the image against reference colors, yielding the item's color value and a confidence; or, after deep-learning training on a large number of item images, by an item-color neural network model, with a confidence.
Size attributes can be recognized, for example, by estimating the item's approximate dimensions from the image based on the camera's focal length, angle, resolution, and other parameters, e.g. roughly 50 cm long, 40 cm wide, and 40 cm high. Since shooting from different angles may make these values inaccurate, a confidence can be derived from the item's position in the image, its pixel extent, and so on.
Shape attributes can be recognized, for example, by classifying the item's shape features, such as whether it is overall a cuboid, a cube, or a cylinder, with a confidence; or, after deep-learning training on a large number of item images, by similarity classification with an item-shape neural network model, with a confidence.
Material attributes can be recognized, for example, by classifying the item's material texture features, such as plastic, canvas, or paper, with a confidence; or, after deep-learning training on a large number of item images, by similarity classification with an item-texture/material neural network model, with a confidence.
After the monitoring camera identifies the item in each appearance attribute dimension, the confidence of each result can be determined. For instance, a cuboid with a pull rod and four swivel wheels might be a trolley case with 85% confidence and a soft bag with 65% confidence.
An item's multi-dimensional appearance attributes can be represented with a vector space model, each item corresponding to a feature vector. Vector similarity comparison, such as Euclidean distance, cosine similarity, or another similarity measure, can then be used to compare the recognition result against the result recorded at binding time or by the camera preceding the current one; the identification information corresponding to the identified appearance attributes is determined from the comparison result. Likewise, the other cameras on the main belt in Fig. 2 can identify items' appearance attributes in the same way.
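A rough sketch of this vector-space comparison follows, under the assumption that each attribute dimension contributes a confidence-weighted score per candidate label and all dimensions are concatenated into one feature vector per item (an illustrative encoding the disclosure does not prescribe):

```python
import math

# Illustrative label vocabulary spanning several attribute dimensions.
VOCAB = ["trolley_case", "soft_bag", "red", "blue", "heart", "cuboid"]

def to_vector(recognitions):
    """recognitions: {label: confidence}; returns a vector over VOCAB."""
    return [recognitions.get(label, 0.0) for label in VOCAB]

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def match(observed, bound_items):
    """Return the identification info whose bound vector is most similar."""
    vec = to_vector(observed)
    return max(bound_items,
               key=lambda item_id: cosine_similarity(vec, to_vector(bound_items[item_id])))

bound = {
    "A'": {"heart": 0.9, "red": 0.8},   # recorded at binding time
    "B": {"cuboid": 0.7, "blue": 0.6},
}
observed = {"heart": 0.85, "red": 0.7}  # noisy camera observation
assert match(observed, bound) == "A'"
```

Euclidean distance could be substituted for cosine similarity here without changing the structure of the comparison.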
Considering that items belonging to different owners may have identical appearance attributes, the embodiments of the present disclosure propose taking the order in which items enter the item positioning system as an additional factor. Step S11 may therefore include the following steps:
when each item enters the conveying queue, identifying the item's appearance attributes via the monitoring camera;
determining the item's identification information based on its appearance attributes and the order in which it entered the item positioning system.
The order in which items enter the item positioning system can be determined from the moment each item enters a feeder belt. As shown in Fig. 3, two gray triangular items enter feeder belt 1 one after the other (in order, their identification information is A and B). Appearance attributes alone may not reliably distinguish which item is A and which is B, so the appearance attributes can be combined with the order in which the two items entered feeder belt 1 to determine that the first gray triangle to enter is A and the second is B.
Step S12: determine, based on the position at which the item enters the conveying queue and the item's identification information, the item sequence of the conveying queue after the item is added.
In the embodiments of the present disclosure, the item sequence of the conveying queue is recorded. As shown in Fig. 2, the current item sequence on the main belt is A (blue cylinder), B (black triangle), C (yellow square), D (green cylinder). After an item's identification information is bound to its appearance attributes, the item enters a feeder belt. Where multiple feeder belts converge onto one main belt, the position at which an item enters the conveying queue must be determined from the footage captured by the monitoring camera at the main-belt entry, and the main belt's item sequence is updated based on that entry position.
Taking Fig. 2 as an example, after the red heart-shaped item on feeder belt 3 enters the main belt, the item sequence on the main belt must be updated to A (blue cylinder), A' (red heart), B (black triangle), C (yellow square), D (green cylinder).
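The update in this example amounts to inserting the new item's identification info into the recorded sequence at its observed entry position. A sketch (the function name is an illustrative assumption):

```python
def insert_into_sequence(sequence, item_id, position):
    """Insert an item into the recorded conveying-queue sequence.

    position: index at which the item was observed entering the main belt,
    as determined from the entry camera's footage.
    """
    sequence.insert(position, item_id)
    return sequence

# Fig. 2 example: the red heart A' enters between A and B (index 1).
seq = ["A", "B", "C", "D"]
insert_into_sequence(seq, "A'", 1)
assert seq == ["A", "A'", "B", "C", "D"]
```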
The embodiments of the present disclosure take into account that, during conveyance over longer distances, items may be blocked or may tumble at transfer points or bends, causing the item sequence to change. To keep the recorded item sequence consistent with the actual real-time sequence, and thereby locate items accurately, the embodiments of the present disclosure propose continuously checking, via order-maintenance cameras, whether the item sequence has changed, so as to maintain the accuracy of the sequence.
In one implementation, the order-maintenance cameras are placed at the corners of the item conveying area. Referring again to Fig. 2, there are three order-maintenance cameras (order-maintenance cameras 1, 2, and 3), each placed at a corner of the belt.
Maintaining the accuracy of the item sequence via the order-maintenance cameras includes the following steps:
while items in the conveying queue are being conveyed, identifying the items' appearance attributes in turn via the order-maintenance cameras;
comparing the identified appearance attributes, in turn, with the appearance attributes of the corresponding items in the item sequence, to determine whether the current conveying order is consistent with the item sequence;
if the current conveying order is inconsistent with the item sequence, updating the item sequence according to the current conveying order.
Based on recognition from images captured by the order-maintenance cameras, the appearance attributes of the items currently on the belt are compared with the item sequence and attributes maintained in the system; when an inconsistency is found, the sequence is updated.
Taking Fig. 2 as an example, the current item sequence on the main belt is A (blue cylinder), B (black triangle), C (yellow square), D (green cylinder). Order-maintenance camera 2 observes that the black triangular item now precedes the blue cylindrical item, which is inconsistent with the recorded sequence, so the recorded sequence is updated to B (black triangle), A (blue cylinder), C (yellow square), D (green cylinder).
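These steps can be sketched as follows, assuming appearance matching has already resolved each camera observation to an identification info (the function name and interface are illustrative):

```python
def reconcile_sequence(recorded, observed):
    """Replace the recorded item sequence with the observed conveying order
    when the two disagree.

    recorded: list of identification info currently maintained by the system.
    observed: identification info resolved, in order, from the
              order-maintenance camera's appearance recognition.
    Returns the (possibly updated) sequence and whether an update occurred.
    """
    if observed != recorded:
        return list(observed), True
    return recorded, False

# Fig. 2 example: camera 2 sees B ahead of A, so the record is corrected.
recorded = ["A", "B", "C", "D"]
updated, changed = reconcile_sequence(recorded, ["B", "A", "C", "D"])
assert changed and updated == ["B", "A", "C", "D"]
```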
Step S13: locate items in the conveying queue based on each item's appearance attributes and the item sequence.
Taking checked luggage as an example: when sorting luggage, locating an item means confirming the identification information of the luggage at each position so that each piece is loaded onto the correct aircraft; when passengers are waiting for their luggage, locating an item means confirming the identification information and position of each piece on the belt so that every passenger can obtain the current position of their own luggage.
To locate items accurately while saving cost, the present disclosure combines appearance attributes with the obtained item sequence to locate items in the conveying queue. First, the cameras distributed around the main belt capture item images to obtain the items' appearance attributes. Since appearance attributes may be identical or similar between items, the item sequence is further combined to locate items in the queue accurately.
Since items may be identical or similar in appearance, the cases with and without items of similar appearance at recognition time are described separately below.
First case: no items with similar appearance attributes exist. The item's identification information can be determined directly from its appearance attributes, and its position obtained by querying the item sequence.
Second case: items with similar appearance attributes exist. Step S13 then includes:
comparing the appearance attributes of the target item, of at least one item in front of the target item, and of at least one item behind the target item with the corresponding appearance attributes in the item sequence, to determine the target item's identification information and current position.
When the conveying queue contains multiple items with nearly identical appearance attributes, the appearance attributes of the items in front of and behind the target item (front, middle, back) must be compared together against the item attributes in the conveying sequence to determine the target item. If the three items (front, middle, back) still have highly similar appearance attributes, the two items further in front of and further behind the target item are added, making five items in total for comparison, and so on, thereby locating the target item.
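The expanding-window comparison described above might be sketched like this, assuming a pairwise appearance-similarity function and a way to obtain observations of the target's neighbors (the interface and names are illustrative assumptions, not the disclosure's implementation):

```python
def window_score(observed, sequence, bound, center, radius, similarity):
    """Aggregate similarity between the observed neighborhood (centered on
    the target item) and the recorded sequence around a candidate center."""
    total = 0.0
    for offset in range(-radius, radius + 1):
        idx = center + offset
        if 0 <= idx < len(sequence):
            total += similarity(observed[offset + radius], bound[sequence[idx]])
    return total

def locate_target(observed_by_radius, sequence, bound, similarity,
                  margin=0.5, max_radius=3):
    """Widen the comparison window (1, 3, 5, ... items) until one candidate
    position in the recorded sequence scores clearly above the runner-up."""
    best = 0
    for radius in range(max_radius + 1):
        observed = observed_by_radius[radius]
        scores = [window_score(observed, sequence, bound, c, radius, similarity)
                  for c in range(len(sequence))]
        ranked = sorted(scores, reverse=True)
        best = scores.index(ranked[0])
        if len(scores) == 1 or ranked[0] - ranked[1] > margin:
            break
    return sequence[best], best

# Two gray triangles (positions 1 and 3) are indistinguishable alone;
# the three-item window resolves the ambiguity.
same = lambda a, b: 1.0 if a == b else 0.0
seq = ["A", "X1", "B", "X2", "C"]
bound = {"A": "circle", "X1": "triangle", "B": "square",
         "X2": "triangle", "C": "star"}
observed = {0: ["triangle"], 1: ["square", "triangle", "star"]}
assert locate_target(observed, seq, bound, same, max_radius=1) == ("X2", 3)
```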
Referring to Fig. 4, based on the same inventive concept, the present disclosure provides an item positioning system 400, which includes:
at least one camera 401 configured to identify items' appearance attributes;
a processor 402 connected to the at least one camera 401 and configured to: when each item enters the conveying queue, identify the item's appearance attributes via the at least one camera 401 to obtain the item's identification information; determine, based on the position at which the item enters the conveying queue and the item's identification information, the item sequence of the conveying queue after the item is added; and locate items in the conveying queue based on each item's appearance attributes and the item sequence.
Optionally, the processor 402 is further configured to:
for each item about to enter the item conveying queue, obtain the item's identification information and appearance attributes;
associate the obtained appearance attributes with the corresponding item's identification information;
identify the item's appearance attributes via the at least one camera 401;
determine the item's identification information based on the identified appearance attributes and the association between appearance attributes and identification information.
Optionally, the at least one camera 401 includes an order-maintenance camera, and the processor 402 is further configured to:
while items in the conveying queue are being conveyed, identify the items' appearance attributes in turn via the order-maintenance camera;
compare the identified appearance attributes, in turn, with the appearance attributes of the corresponding items in the item sequence, to determine whether the current conveying order is consistent with the item sequence;
if the current conveying order is inconsistent with the item sequence, update the item sequence according to the current conveying order.
Optionally, the order-maintenance cameras are placed at the corners of the item conveying area.
Optionally, the appearance attributes include at least one of a category attribute, a color attribute, a size attribute, a shape attribute, and a material attribute, and the processor 402 is configured to:
identify the item via the at least one camera 401 in each appearance attribute dimension, and determine the confidence of each recognition result;
compare the recognition results and confidences in each attribute dimension against the appearance attributes in the association, by similarity, to determine the item's identification information.
Optionally, the conveying queue includes a target item to be located, and the processor 402 is configured to:
compare the appearance attributes, obtained via the at least one camera, of the target item, of at least one item in front of the target item, and of at least one item behind the target item with the corresponding appearance attributes in the item sequence, to determine the target item's identification information and current position.
Optionally, the processor 402 is configured to:
when each item enters the conveying queue, identify the item's appearance attributes via the at least one camera 401;
obtain the item's identification information based on its appearance attributes and the order in which it entered the item positioning system 400.
In another exemplary embodiment, a computer program product is further provided. The computer program product contains a computer program executable by a programmable apparatus, and the computer program has code portions for performing the item positioning method described above when executed by the programmable apparatus.
The embodiments above are intended only to describe the technical solutions of the present disclosure in detail; their description merely helps in understanding the method of the present disclosure and its core idea, and should not be construed as limiting the present disclosure. Any variation or substitution readily conceivable by those skilled in the art within the technical scope disclosed herein shall fall within the protection scope of the present disclosure.

Claims (15)

  1. An item positioning method, applied to an item positioning system, comprising:
    when each item enters a conveying queue, identifying the item's appearance attributes via a monitoring camera to determine the item's identification information;
    determining, based on the position at which the item enters the conveying queue and the item's identification information, the item sequence of the conveying queue after the item is added;
    locating items in the conveying queue based on each item's appearance attributes and the item sequence.
  2. The item positioning method according to claim 1, wherein, before identifying the item's appearance attributes via the monitoring camera to determine the item's identification information when each item enters the conveying queue, the method further comprises:
    for each item about to enter the item conveying queue, obtaining the item's identification information and appearance attributes;
    associating the obtained appearance attributes with the corresponding item's identification information;
    and identifying the item's appearance attributes via the monitoring camera to determine the item's identification information comprises:
    identifying the item's appearance attributes via the monitoring camera;
    determining the item's identification information based on the identified appearance attributes and the association between appearance attributes and identification information.
  3. The item positioning method according to claim 1, wherein the method further comprises:
    while items in the conveying queue are being conveyed, identifying the items' appearance attributes in turn via an order-maintenance camera;
    comparing the identified appearance attributes, in turn, with the appearance attributes of the corresponding items in the item sequence, to determine whether the current conveying order is consistent with the item sequence;
    if the current conveying order is inconsistent with the item sequence, updating the item sequence according to the current conveying order.
  4. The item positioning method according to claim 3, wherein the order-maintenance camera is placed at a corner of the item conveying area.
  5. The item positioning method according to claim 2, wherein the appearance attributes comprise at least one of a category attribute, a color attribute, a size attribute, a shape attribute, and a material attribute, and identifying the item's appearance attributes via the monitoring camera to determine the item's identification information comprises:
    identifying the item via the monitoring camera in each appearance attribute dimension, and determining the confidence of each recognition result;
    comparing the recognition results and confidences in each attribute dimension against the appearance attributes in the association, by similarity, to determine the item's identification information.
  6. The item positioning method according to any one of claims 1-5, wherein the conveying queue comprises a target item to be located, and locating the target item based on each item's appearance attributes and the item sequence comprises:
    comparing the appearance attributes of the target item, of at least one item in front of the target item, and of at least one item behind the target item with the corresponding appearance attributes in the item sequence, to determine the target item's identification information and current position.
  7. The item positioning method according to any one of claims 1-5, wherein identifying the item's appearance attributes via the monitoring camera to determine the item's identification information when each item enters the conveying queue comprises:
    when each item enters the conveying queue, identifying the item's appearance attributes via the monitoring camera;
    determining the item's identification information based on the item's appearance attributes and the order in which it entered the item positioning system.
  8. An item positioning system, comprising:
    at least one camera configured to identify items' appearance attributes;
    a processor connected to the at least one camera and configured to: when each item enters a conveying queue, identify the item's appearance attributes via the at least one camera to determine the item's identification information; determine, based on the position at which the item enters the conveying queue and the item's identification information, the item sequence of the conveying queue after the item is added; and locate items in the conveying queue based on each item's appearance attributes and the item sequence.
  9. The item positioning system according to claim 8, wherein the processor is further configured to:
    for each item about to enter the item conveying queue, obtain the item's identification information and appearance attributes;
    associate the obtained appearance attributes with the corresponding item's identification information;
    identify the item's appearance attributes via the at least one camera;
    determine the item's identification information based on the identified appearance attributes and the association between appearance attributes and identification information.
  10. The item positioning system according to claim 8, wherein the at least one camera comprises an order-maintenance camera, and the processor is further configured to:
    while items in the conveying queue are being conveyed, identify the items' appearance attributes in turn via the order-maintenance camera;
    compare the identified appearance attributes, in turn, with the appearance attributes of the corresponding items in the item sequence, to determine whether the current conveying order is consistent with the item sequence;
    if the current conveying order is inconsistent with the item sequence, update the item sequence according to the current conveying order.
  11. The item positioning system according to claim 10, wherein the order-maintenance camera is placed at a corner of the item conveying area.
  12. The item positioning system according to claim 9, wherein the appearance attributes comprise at least one of a category attribute, a color attribute, a size attribute, a shape attribute, and a material attribute, and the processor is configured to:
    identify the item via the at least one camera in each appearance attribute dimension, and determine the confidence of each recognition result;
    compare the recognition results and confidences in each attribute dimension against the appearance attributes in the association, by similarity, to determine the item's identification information.
  13. The item positioning system according to any one of claims 8-12, wherein the conveying queue comprises a target item to be located, and the processor is configured to:
    compare the appearance attributes, obtained via the at least one camera, of the target item, of at least one item in front of the target item, and of at least one item behind the target item with the corresponding appearance attributes in the item sequence, to determine the target item's identification information and current position.
  14. The item positioning system according to any one of claims 8-12, wherein the processor is configured to:
    when each item enters the conveying queue, identify the item's appearance attributes via the at least one camera;
    obtain the item's identification information based on the item's appearance attributes and the order in which it entered the item positioning system.
  15. A computer program product, wherein the computer program product contains a computer program executable by a programmable apparatus, and the computer program has code portions for performing the method of any one of claims 1 to 7 when executed by the programmable apparatus.
PCT/CN2018/079602 2018-03-20 2018-03-20 Article positioning method and system WO2019178738A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/CN2018/079602 WO2019178738A1 (zh) 2018-03-20 2018-03-20 Article positioning method and system
CN201880001050.8A CN108701239B (zh) 2018-03-20 2018-03-20 Article positioning method and system
US16/556,073 US20190385337A1 (en) 2018-03-20 2019-08-29 Article positioning method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/079602 WO2019178738A1 (zh) 2018-03-20 2018-03-20 Article positioning method and system

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/556,073 Continuation US20190385337A1 (en) 2018-03-20 2019-08-29 Article positioning method and system

Publications (1)

Publication Number Publication Date
WO2019178738A1 true WO2019178738A1 (zh) 2019-09-26

Family

ID=63841507

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/079602 WO2019178738A1 (zh) 2018-03-20 2018-03-20 Article positioning method and system

Country Status (3)

Country Link
US (1) US20190385337A1 (zh)
CN (1) CN108701239B (zh)
WO (1) WO2019178738A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110081862B (zh) * 2019-05-07 2021-12-24 达闼科技(北京)有限公司 Object positioning method, positioning apparatus, electronic device and storage medium
EP3763448B1 (de) * 2019-07-12 2022-05-11 BEUMER Group GmbH & Co. KG Method and device for generating and maintaining an assignment of item data and the position of an item
CN113449149A (zh) * 2020-03-26 2021-09-28 顺丰科技有限公司 Logistics information extraction method, apparatus, device and computer-readable storage medium
CN112275803B (zh) * 2020-10-12 2022-06-14 重庆钢铁股份有限公司 Cooling-bed steel plate identification and monitoring method and system
CN113065394B (zh) * 2021-02-26 2022-12-06 青岛海尔科技有限公司 Method for recognizing items in images, electronic device and storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
CN1492785A * 2001-01-18 2004-04-28 Reading and decoding information on packages
CN102161040A * 2011-01-14 2011-08-24 北京交通大学 Logistics sorting system based on color sensors
US20150114798A1 (en) * 2013-10-24 2015-04-30 Psi Peripheral Solutions Inc. Order sorting system with selective document insertion
CN107609813A * 2017-08-31 2018-01-19 中科富创(北京)科技有限公司 Automatic express parcel identification and sorting system

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
US5041907A (en) * 1990-01-29 1991-08-20 Technistar Corporation Automated assembly and packaging system
US5335777A (en) * 1993-10-15 1994-08-09 Jervis B. Webb Company Method and apparatus for belt conveyor load tracking
US20070246328A1 (en) * 2004-06-21 2007-10-25 Siemens Corporate Research Inc. High-Rate Space Efficient Article Singulator
CN100462154C (zh) * 2005-06-16 2009-02-18 中国民用航空总局第二研究所 Grid-based object tracking method and tracking device for use in object sorting systems
US9367770B2 (en) * 2011-08-30 2016-06-14 Digimarc Corporation Methods and arrangements for identifying objects
CN105425308A (zh) * 2015-12-18 2016-03-23 同方威视技术股份有限公司 Article tracking system and method
CN107679438A (zh) * 2017-10-13 2018-02-09 李志毅 UHF tag reading device and method with image recognition function


Also Published As

Publication number Publication date
CN108701239A (zh) 2018-10-23
US20190385337A1 (en) 2019-12-19
CN108701239B (zh) 2021-06-01

Similar Documents

Publication Publication Date Title
WO2019178738A1 (zh) Article positioning method and system
US20240296414A1 (en) Video for real-time confirmation in package tracking systems
US10692231B1 (en) Composite agent representation
CN109844807B (zh) Method, system and apparatus for segmenting and dimensioning objects
US9171278B1 (en) Item illumination based on image recognition
US10198711B2 (en) Methods and systems for monitoring or tracking products in a retail shopping facility
US20180197139A1 (en) Package delivery sharing systems and methods
CA3016217C (en) Method for making a description of a piece of luggage and luggage description system
CN109255568A (zh) Intelligent warehousing system based on image recognition
JP6538458B2 (ja) Logistics system and logistics management method
US11049234B2 (en) Baggage identification method
US8570377B2 (en) System and method for recognizing a unit load device (ULD) number marked on an air cargo unit
US11961303B1 (en) Agent re-verification and resolution using imaging
US20220414899A1 (en) Item location detection using homographies
US11875570B1 (en) Updating agent position information
US11810064B2 (en) Method(s) and system(s) for vehicular cargo management
CN110223212B (zh) Scheduling control method and system for transport robots
US12175686B2 (en) Item identification using multiple cameras
US12136247B2 (en) Image processing based methods and apparatus for planogram compliance
CN114187564A (zh) Cross-device linkage method and vision-assisted linkage system in a smart warehouse
CN107697533A (zh) Conveying system and method
CN113495979A (zh) Unique object face ID
US11068755B2 (en) Locating method and a locator system for locating a billet in a stack of billets
US11663742B1 (en) Agent and event verification
CN111386533B (zh) Method and apparatus for detecting blank areas using symmetric positioning and recognizing graphical character representations in image data

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18910904

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 21/01/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18910904

Country of ref document: EP

Kind code of ref document: A1