WO2018090543A1 - Automatic tracking shopping cart - Google Patents

Automatic tracking shopping cart

Info

Publication number
WO2018090543A1
Authority
WO
WIPO (PCT)
Prior art keywords
shopping cart
target
depth
image
automatic tracking
Prior art date
Application number
PCT/CN2017/079475
Other languages
English (en)
French (fr)
Inventor
丁洪利
谷玉
李月
张莹
赵凯
张忆非
Original Assignee
京东方科技集团股份有限公司 (BOE Technology Group Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 京东方科技集团股份有限公司 (BOE Technology Group Co., Ltd.)
Priority to US15/552,158 (published as US10394247B2)
Publication of WO2018090543A1

Links

Images

Classifications

    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 - Control of position or course in two dimensions
    • G05D1/021 - Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 - Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 - Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 - Instruments for performing navigational calculations
    • G01C21/206 - Instruments for performing navigational calculations specially adapted for indoor navigation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/194 - Segmentation; Edge detection involving foreground-background segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/50 - Depth or shape recovery
    • G06T7/55 - Depth or shape recovery from multiple images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/30 - Noise filtering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/758 - Involving statistics of pixels or of feature values, e.g. histogram matching
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B62 - LAND VEHICLES FOR TRAVELLING OTHERWISE THAN ON RAILS
    • B62B - HAND-PROPELLED VEHICLES, e.g. HAND CARTS OR PERAMBULATORS; SLEDGES
    • B62B3/00 - Hand carts having more than one axis carrying transport wheels; Steering devices therefor; Equipment therefor
    • B62B3/14 - Hand carts having more than one axis carrying transport wheels; Steering devices therefor; Equipment therefor characterised by provisions for nesting or stacking, e.g. shopping trolleys
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B62 - LAND VEHICLES FOR TRAVELLING OTHERWISE THAN ON RAILS
    • B62B - HAND-PROPELLED VEHICLES, e.g. HAND CARTS OR PERAMBULATORS; SLEDGES
    • B62B5/00 - Accessories or details specially adapted for hand carts
    • B62B5/0026 - Propulsion aids
    • B62B5/0033 - Electric motors
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B62 - LAND VEHICLES FOR TRAVELLING OTHERWISE THAN ON RAILS
    • B62B - HAND-PROPELLED VEHICLES, e.g. HAND CARTS OR PERAMBULATORS; SLEDGES
    • B62B5/00 - Accessories or details specially adapted for hand carts
    • B62B5/0026 - Propulsion aids
    • B62B5/0063 - Propulsion aids guiding, e.g. by a rail
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B62 - LAND VEHICLES FOR TRAVELLING OTHERWISE THAN ON RAILS
    • B62B - HAND-PROPELLED VEHICLES, e.g. HAND CARTS OR PERAMBULATORS; SLEDGES
    • B62B5/00 - Accessories or details specially adapted for hand carts
    • B62B5/0026 - Propulsion aids
    • B62B5/0069 - Control
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10024 - Color image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10028 - Range image; Depth image; 3D point clouds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20024 - Filtering details
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30196 - Human being; Person

Definitions

  • The present disclosure relates to the field of artificial intelligence, and in particular to an automatic tracking shopping cart.
  • The present disclosure proposes an automatic tracking shopping cart.
  • An automatic tracking shopping cart includes an automatic tracking device that is fixed on the body of the shopping cart and controls the movement of the shopping cart so as to track a target object.
  • The automatic tracking device includes: an image acquisition unit configured to acquire a color image and a depth image of the field of view; a processing unit configured to identify the target object from the acquired color image and depth image and to determine motion parameters of the shopping cart according to the position and/or movement of the target object; and a shopping cart drive unit that drives the shopping cart to move based on the determined motion parameters.
  • The processing unit includes: a target determination module that determines, based on the acquired color image and depth image, a target feature of the target object in the color image and a target depth of the target object in the depth image; an image analysis module that determines the current depth of the target object from the color image and depth image of the current frame, based on the target feature and target depth determined in the previous frame; and a drive calculation module that determines the motion parameters of the shopping cart based on the calculated current depth.
  • The automatic tracking shopping cart further includes a console for receiving instructions input by a user, and the target determination module is configured to: upon receiving, through the console, an instruction indicating that a new target object needs to be determined, determine the human target closest to the shopping cart in the acquired color image as the target object.
  • The target determination module is configured to: when the image analysis module successfully determines the current depth of the target object from the color image and depth image of the current frame, use that current depth as the target depth for the next frame.
  • The target determination module is configured to: when the image analysis module fails to determine the current depth of the target object from the color image and depth image of the current frame, re-determine as the target object the human target in the acquired color image whose target feature best matches the target feature determined in the previous frame.
  • The target determination module is configured to: calculate a histogram for each human target in the currently acquired color image; match the histogram of each human target against the histogram of the target object determined in the previous frame to obtain a matching value for each human target; and re-determine as the target object the human target with the highest matching value above a reference matching value.
  • The target determination module is configured to: if every determined matching value is below the reference matching value, adjust the acquisition direction of the image acquisition unit and reacquire a color image and a depth image.
  • The automatic tracking device further includes an alarm unit, and the target determination module is further configured to: if no human target with a matching value above the reference matching value can be determined even from the reacquired color image and depth image, trigger the alarm unit.
  • The image analysis module is configured to: calculate a background projection map based on the color image and depth image of the current frame; intercept a predetermined depth range map from the calculated background projection map based on the target depth; perform dilation and mean filtering on the predetermined depth range map; and determine the current depth of the target object.
  • The drive calculation module is configured to determine the current distance between the target object and the shopping cart based on the calculated current depth, and to trigger the shopping cart drive unit to drive the shopping cart only when the current distance is greater than a reference distance.
  • The motion parameters include the average speed of the shopping cart over the next time period, determined as v = (Δl₂ - Δl₁ + L) / T, where T is the duration of the time period, Δl₁ and Δl₂ are the cart-to-target distances at the beginning and end of the current time period, and L is the distance the shopping cart traveled during the current time period.
  • The automatic tracking device further includes a memory for storing the color image, the depth image, the target feature, and/or the target depth.
  • FIG. 1 shows a block diagram of an example automatic tracking shopping cart 100 in accordance with an embodiment of the present disclosure.
  • FIG. 2 is a block diagram showing an example structure of the processing unit 120 in the automatic tracking shopping cart 100 shown in FIG. 1.
  • FIG. 1 shows a block diagram of an example automatic tracking shopping cart 100 in accordance with an embodiment of the present disclosure.
  • The automatic tracking shopping cart 100 includes an automatic control device composed of an image acquisition unit 110, a processing unit 120, and a shopping cart drive unit 130. These components constituting the automatic control device are all fixed on the body of the shopping cart 100 and are used to control the movement of the shopping cart so as to track a target object.
  • An image acquisition unit 110, a processing unit 120, and a shopping cart drive unit 130 are shown at different locations on the cart body, i.e., the automatic control device is composed of separate components. It should be understood, however, that the structure shown in FIG. 1 is only an example structure of the present disclosure and is not intended to limit the scope of the present disclosure. In other embodiments, the image acquisition unit 110, the processing unit 120, and the shopping cart drive unit 130 may be implemented as an integrated automatic control device, i.e., the automatic control device is formed as a single physical device.
  • The image acquisition unit 110 and the processing unit 120 communicate via a wireless connection, and the processing unit 120 communicates with the shopping cart drive unit 130 via a wired connection.
  • The connections shown in FIG. 1 are only an example of the present disclosure.
  • The image acquisition unit 110 and the processing unit 120, as well as the processing unit 120 and the shopping cart drive unit 130, may be connected in any suitable manner, such as wireless communication, e.g., WiFi, Bluetooth, or mobile networks.
  • The illustrated automatic tracking shopping cart 100 also includes a console (not shown).
  • The console can be implemented by a conventional electronic device having an input function, such as a keyboard, a mouse, a touch screen, a mobile phone, a tablet computer, or a microphone device. Through input, selection, and similar operations on the console, a user can control the automatic tracking shopping cart 100 to enter different modes, such as registration, tracking, standby, and shutdown, and can control the automatic tracking shopping cart 100 to perform different operations in the different modes.
  • The automatic tracking shopping cart 100 further includes the structures and components of a conventional shopping cart, such as a body, a handle, wheels (e.g., universal wheels), and a basket; these components and structures are not described further here.
  • The illustrated image acquisition unit 110 is configured to acquire color images and depth images of the field of view.
  • In one embodiment, the image acquisition unit 110 is an RGB-D camera.
  • The processing unit 120 is configured to identify the target object from the acquired color image and depth image and to determine the motion parameters of the shopping cart 100 according to the position and/or movement of the target object.
  • The processing unit 120 is described in detail below in conjunction with FIG. 2.
  • The shopping cart drive unit 130 is configured to drive the shopping cart 100 to move based on the determined motion parameters.
  • In one embodiment, the shopping cart drive unit 130 includes a battery, an ARM control board, a motor driver, a motor (e.g., a brushless DC motor), and the like.
  • In one embodiment, the automatic tracking device further includes a memory.
  • The memory is used to store the color images, depth images, target features, and/or target depths described below.
  • FIG. 2 is a block diagram showing an example structure of the processing unit 120 in the automatic tracking shopping cart 100 shown in FIG. 1.
  • The processing unit 120 includes a target determination module 210, an image analysis module 220, and a drive calculation module 230.
  • The target determination module 210 is configured to determine, based on the acquired color image and depth image, a target feature of the target object in the color image and a target depth of the target object in the depth image.
  • The image analysis module 220 is configured to determine the current depth of the target object from the color image and depth image of the current frame, based on the target feature and target depth determined in the previous frame.
  • The drive calculation module 230 is configured to determine the motion parameters of the shopping cart based on the calculated current depth.
  • The target determination module 210 obtains the acquired color image and depth image and determines the target feature that the target object exhibits in the color image.
  • The target feature may be, for example, the contour of the target object, its color, or its size in a particular dimension.
  • In registration mode, the target determination module 210 determines the human target closest to the shopping cart in the acquired color image as the target object.
  • During tracking, the target determination module 210 uses the target feature determined in the previous frame as the target feature used in the current frame. In one embodiment, if a tracking failure occurs in a frame, the target feature may be updated or corrected, and the target feature updated in the current frame is then used as the target feature in the next frame.
  • The target determination module 210 also determines the target depth that the target object exhibits in the depth image.
  • In registration mode, the target object is identified, and the depth image acquired by the image acquisition unit 110 is used to determine the current depth (initial depth) of the target object, i.e., the target depth determined in the registration frame (to be used for the next frame).
  • In each subsequent frame, the target determination module 210 uses the "current" depth determined in the previous frame as the target depth for the current frame.
  • This target depth serves as an initial value for determining the actual depth the target object has reached in the current frame, so that, in combination with the algorithm described below, the technical solution of the present disclosure is realized.
  • The image analysis module 220 is configured to determine the current depth of the target object from the color image and depth image of the current frame. Specifically, this process can include the following steps:
  • calculating a background projection map based on the color image and depth image of the current frame, where the background projection map can be understood as a three-dimensional image obtained by combining the color image with the depth image, i.e., each object in the two-dimensional color image is assigned a third-dimension value according to the depth values in the depth image;
  • intercepting a predetermined depth range map from the calculated background projection map based on the target depth, for example an image covering a depth range of m centimeters on either side of the target depth, where the value of m can be set as needed, e.g., m ∈ [5, 20];
  • performing dilation and mean filtering on the predetermined depth range map; and
  • determining the current depth of the target object.
  • In one embodiment, the determination of the current depth of the target object may be implemented using the continuously adaptive mean shift (CamShift) algorithm.
  • The drive calculation module 230 is configured to determine the motion parameters of the shopping cart based on the calculated current depth.
  • The motion parameters include the average speed of the shopping cart over the next time period, determined as v = (Δl₂ - Δl₁ + L) / T, where T is the duration of the time period, Δl₁ is the distance between the shopping cart and the target object at the beginning of the current time period, Δl₂ is the distance between the shopping cart and the target object at the end of the current time period, and L is the distance the shopping cart traveled during the current time period.
  • The drive calculation module 230 is configured to determine the current distance between the target object and the shopping cart based on the calculated current depth, and to trigger the shopping cart drive unit 130 to drive the shopping cart only when the current distance is greater than a reference distance.
  • The image analysis module 220 may fail to determine the current depth of the target object from the color image and depth image of the current frame; in that case, the target object needs to be re-determined.
  • When the image analysis module 220 fails to determine the current depth of the target object from the color image and depth image of the current frame, the target determination module 210 re-determines as the target object the human target in the acquired color image whose target feature best matches the target feature determined in the previous frame.
  • Re-determining as the target object the human target in the acquired color image whose target feature best matches the target feature determined in the previous frame includes: calculating a histogram for each human target in the currently acquired color image; matching the histogram of each human target against the histogram of the target object determined in the previous frame to obtain a matching value for each human target; and re-determining as the target object the human target with the highest matching value above a reference matching value.
  • When every determined matching value is below the reference matching value, the processing unit 120 notifies the image acquisition unit 110 to adjust its image acquisition direction and, after the adjustment, to reacquire a color image and a depth image. The processing unit 120 then repeats the above process based on the reacquired color image and depth image.
  • When a matching human target still cannot be determined, the processing unit 120 no longer notifies the image acquisition unit 110 to adjust the acquisition direction; instead, it triggers the alarm unit provided on the shopping cart 100 to raise an alarm so as to notify the user to register again.
  • Some aspects of the embodiments disclosed here can be implemented, in whole or in part, in integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or as virtually any combination thereof, and, in light of the present disclosure, those skilled in the art will be capable of designing the circuitry and/or writing the software and/or firmware code.
  • Examples of signal-bearing media include, but are not limited to: recordable-type media such as floppy disks, hard disk drives, compact discs (CDs), digital versatile discs (DVDs), digital tape, computer memory, and the like; and transmission-type media such as digital and/or analog communication media (e.g., fiber-optic cables, waveguides, wired communication links, wireless communication links, etc.).

Abstract

The present disclosure provides an automatic tracking shopping cart. The automatic tracking shopping cart includes an automatic tracking device. The automatic tracking device is fixed on the body of the shopping cart and is used to control the movement of the shopping cart so as to track a target object. The automatic tracking device includes: an image acquisition unit for acquiring a color image and a depth image of the field of view; a processing unit for identifying the target object from the acquired color image and depth image and determining motion parameters of the shopping cart according to the position and/or movement of the target object; and a shopping cart drive unit that drives the shopping cart to move based on the determined motion parameters.

Description

Automatic Tracking Shopping Cart
Technical Field
The present disclosure relates to the field of artificial intelligence, and in particular to an automatic tracking shopping cart.
Background
At present, shoppers in a supermarket have to push the shopping cart by hand. It is inconvenient, however, for a shopper to mind the cart while carefully selecting and comparing a wide variety of goods. This inconvenience is more serious in certain environments or for certain groups of people: for an elderly shopper, for example, pushing a cart loaded with a large quantity of goods while nimbly controlling its translation and turning is not easy, and a moment of inattention may cause injury. To improve the shopping experience and the safety of particular shopping scenarios, existing shopping carts need to be improved.
Summary
In order to solve the above problems in the prior art, the present disclosure proposes an automatic tracking shopping cart.
According to one aspect of the present disclosure, an automatic tracking shopping cart is proposed. The automatic tracking shopping cart includes an automatic tracking device, which is fixed on the body of the shopping cart and is used to control the movement of the shopping cart so as to track a target object. Specifically, the automatic tracking device includes: an image acquisition unit for acquiring a color image and a depth image of the field of view; a processing unit for identifying the target object from the acquired color image and depth image and determining motion parameters of the shopping cart according to the position and/or movement of the target object; and a shopping cart drive unit that drives the shopping cart to move based on the determined motion parameters.
In one embodiment, the processing unit includes: a target determination module that determines, based on the acquired color image and depth image, a target feature of the target object in the color image and a target depth of the target object in the depth image; an image analysis module that determines the current depth of the target object from the color image and depth image of the current frame, based on the target feature and target depth determined in the previous frame; and a drive calculation module that determines the motion parameters of the shopping cart based on the calculated current depth.
In one embodiment, the automatic tracking shopping cart further includes a console for receiving instructions input by a user, and the target determination module is configured to: upon receiving, through the console, an instruction indicating that a new target object needs to be determined, determine the human target closest to the shopping cart in the acquired color image as the target object.
In one embodiment, the target determination module is configured to: when the image analysis module successfully determines the current depth of the target object from the color image and depth image of the current frame, use that current depth as the target depth for the next frame.
In one embodiment, the target determination module is configured to: when the image analysis module fails to determine the current depth of the target object from the color image and depth image of the current frame, re-determine as the target object the human target in the acquired color image whose target feature best matches the target feature determined in the previous frame.
In one embodiment, the target determination module is configured to: calculate a histogram for each human target in the currently acquired color image; match the histogram of each human target against the histogram of the target object determined in the previous frame to obtain a matching value for each human target; and re-determine as the target object the human target with the highest matching value above a reference matching value.
In one embodiment, the target determination module is configured to: if every determined matching value is below the reference matching value, adjust the acquisition direction of the image acquisition unit and reacquire a color image and a depth image.
In one embodiment, the automatic tracking device further includes an alarm unit, and the target determination module is further configured to: if no human target with a matching value above the reference matching value can be determined from the reacquired color image and depth image, trigger the alarm unit.
In one embodiment, the image analysis module is configured to: calculate a background projection map based on the color image and depth image of the current frame; intercept a predetermined depth range map from the calculated background projection map based on the target depth; perform dilation and mean filtering on the predetermined depth range map; and determine the current depth of the target object.
In one embodiment, the drive calculation module is configured to: determine the current distance between the target object and the shopping cart based on the calculated current depth, and trigger the shopping cart drive unit to drive the shopping cart only when the current distance is greater than a reference distance.
In one embodiment, the motion parameters include the average speed of the shopping cart over the next time period, the average speed being determined based on the following formula:
v = (Δl₂ - Δl₁ + L) / T
In one embodiment, the automatic tracking device further includes: a memory for storing the color image, the depth image, the target feature, and/or the target depth.
Brief Description of the Drawings
In order to explain the technical solutions of the embodiments of the present disclosure or of the prior art more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present disclosure; for those of ordinary skill in the art, other drawings can be obtained from them without creative effort. In the drawings:
FIG. 1 shows a block diagram of an example automatic tracking shopping cart 100 according to an embodiment of the present disclosure.
FIG. 2 shows an example block diagram of the processing unit 120 in the automatic tracking shopping cart 100 shown in FIG. 1.
Detailed Description
Specific embodiments of the present disclosure are described in detail below. It should be noted that the embodiments described here are for illustration only and are not intended to limit the present disclosure. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. It will be apparent, however, to one of ordinary skill in the art that the present disclosure need not be practiced with these specific details. In other instances, well-known circuits, materials, or methods are not described in detail in order to avoid obscuring the present disclosure.
Throughout the specification, references to "one embodiment", "an embodiment", "one example", or "an example" mean that a particular feature, structure, or characteristic described in connection with the embodiment or example is included in at least one embodiment of the present disclosure. Thus, the phrases "in one embodiment", "in an embodiment", "one example", or "an example" appearing in various places throughout the specification do not necessarily all refer to the same embodiment or example. Furthermore, particular features, structures, or characteristics may be combined in any suitable combination and/or subcombination in one or more embodiments or examples. In addition, those of ordinary skill in the art will understand that the drawings provided here are for purposes of illustration and are not necessarily drawn to scale. The term "and/or" used here includes any and all combinations of one or more of the associated listed items.
The present disclosure is described in detail below with reference to the drawings.
FIG. 1 shows a block diagram of an example automatic tracking shopping cart 100 according to an embodiment of the present disclosure.
As shown in FIG. 1, the automatic tracking shopping cart 100 includes an automatic control device composed of an image acquisition unit 110, a processing unit 120, and a shopping cart drive unit 130. These components constituting the automatic control device are all fixed on the body of the shopping cart 100 and are used to control the movement of the shopping cart so as to track a target object.
In FIG. 1, the image acquisition unit 110, the processing unit 120, and the shopping cart drive unit 130 are shown at different positions on the cart body, i.e., the automatic control device is composed of separate components. It should be understood, however, that the structure shown in FIG. 1 is only an example structure of the present disclosure and is not intended to limit its scope. In other embodiments, the image acquisition unit 110, the processing unit 120, and the shopping cart drive unit 130 may be implemented as an integrated automatic control device, i.e., the automatic control device is formed as a single physical device.
In FIG. 1, the image acquisition unit 110 and the processing unit 120 communicate over a wireless connection, and the processing unit 120 and the shopping cart drive unit 130 communicate over a wired connection. Again, it should be understood that the connections shown in FIG. 1 are only an example of the present disclosure. In other embodiments, the image acquisition unit 110 and the processing unit 120, as well as the processing unit 120 and the shopping cart drive unit 130, may be connected in any suitable manner, such as wireless communication, e.g., WiFi, Bluetooth, mobile networks, and so on.
In addition to the image acquisition unit 110, the processing unit 120, and the shopping cart drive unit 130, the illustrated automatic tracking shopping cart 100 also includes a console (not shown). The console may be implemented by a conventional electronic device with an input function, for example, a keyboard, a mouse, a touch screen, a mobile phone, a tablet computer, or a microphone device. Through input, selection, and similar operations on the console, a user can control the automatic tracking shopping cart 100 to enter different modes, such as registration, tracking, standby, and shutdown, and can control the automatic tracking shopping cart 100 to perform different operations in the different modes.
The automatic tracking shopping cart 100 also includes the structures and components of a conventional shopping cart, such as a body, a handle, wheels (e.g., universal wheels), and a basket; these components and structures are not described further here.
The illustrated image acquisition unit 110 is used to acquire a color image and a depth image of the field of view. In one embodiment, the image acquisition unit 110 is an RGB-D camera.
The processing unit 120 is used to identify the target object from the acquired color image and depth image and to determine the motion parameters of the shopping cart 100 according to the position and/or movement of the target object. The processing unit 120 is described in detail below in conjunction with FIG. 2.
The shopping cart drive unit 130 is configured to drive the shopping cart 100 to move based on the determined motion parameters. In one embodiment, the shopping cart drive unit 130 includes a battery, an ARM control board, a motor driver, a motor (e.g., a brushless DC motor), and the like.
In one embodiment, the automatic tracking device further includes a memory. The memory is used to store the color image, depth image, target feature, and/or target depth described below.
FIG. 2 shows an example block diagram of the processing unit 120 in the automatic tracking shopping cart 100 shown in FIG. 1.
As shown in FIG. 2, the processing unit 120 includes a target determination module 210, an image analysis module 220, and a drive calculation module 230. Specifically, the target determination module 210 is used to determine, based on the acquired color image and depth image, the target feature of the target object in the color image and the target depth of the target object in the depth image. The image analysis module 220 is used to determine the current depth of the target object from the color image and depth image of the current frame, based on the target feature and target depth determined in the previous frame. The drive calculation module 230 is used to determine the motion parameters of the shopping cart based on the calculated current depth.
The target determination module 210 obtains the acquired color image and depth image and determines the target feature that the target object exhibits in the color image. The target feature may be, for example, the contour of the target object, its color, or its size in a particular dimension.
On first use (i.e., in registration mode, defined as frame 1), the target object must first be identified and the target feature then determined. In one embodiment, the user inputs through the console an instruction indicating that a new target object needs to be determined; according to this instruction, the target determination module 210 combines the acquired color image and depth image and determines the human target closest to the shopping cart in the acquired color image as the target object.
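As an illustration of this registration step, here is a minimal Python sketch. It assumes a person detector that returns bounding boxes and a depth image registered to the color image and expressed in millimeters; none of these specifics are fixed by the patent.
```python
import numpy as np

def register_nearest_target(human_boxes, depth_mm):
    """Pick the human target closest to the cart by median depth.

    human_boxes: (x, y, w, h) boxes from an assumed person detector.
    depth_mm: depth image in millimeters, 0 meaning "no reading",
    as on many RGB-D sensors.
    """
    best_box, best_depth = None, np.inf
    for (x, y, w, h) in human_boxes:
        roi = depth_mm[y:y + h, x:x + w]
        valid = roi[roi > 0]
        if valid.size == 0:
            continue
        d = float(np.median(valid))
        if d < best_depth:
            best_box, best_depth = (x, y, w, h), d
    return best_box, best_depth  # best_box is None if nothing was found
```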
During tracking after registration (i.e., frame 2, frame 3, and so on), the target determination module 210 uses the target feature determined in the previous frame as the target feature used in the current frame. In one embodiment, if a tracking failure occurs in a frame, the target feature may be updated or corrected, and the target feature updated in the current frame is then used as the target feature in the next frame.
The target determination module 210 also determines the target depth that the target object exhibits in the depth image.
In registration mode, the target object is identified, and the depth image acquired by the image acquisition unit 110 can be used to determine the current depth (initial depth) of the target object, i.e., the target depth determined in the registration frame (to be used for the next frame).
In each subsequent frame, the target determination module 210 uses the "current" depth determined in the previous frame as the target depth for the current frame. This target depth serves as an initial value for determining the actual depth the target object has reached in the current frame, so that, in combination with the algorithm described below, the technical solution of the present disclosure is realized.
After the target depth has been determined, the image analysis module 220 determines the current depth of the target object from the color image and depth image of the current frame. Specifically, this process may include the following steps:
calculating a background projection map based on the color image and depth image of the current frame, where the background projection map can be understood as a three-dimensional image obtained by combining the color image with the depth image, i.e., each object in the two-dimensional color image is assigned a third-dimension value according to the depth values in the depth image;
intercepting a predetermined depth range map from the calculated background projection map based on the target depth, for example an image covering a depth range of m centimeters on either side of the target depth, where the value of m can be set as needed, e.g., m ∈ [5, 20];
performing dilation and mean filtering on the predetermined depth range map; and
determining the current depth of the target object.
In one embodiment, the determination of the current depth of the target object may be implemented using the continuously adaptive mean shift (CamShift) algorithm.
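A minimal per-frame sketch of these steps, using OpenCV's CamShift in Python. The hue-histogram representation of the target, the millimeter depth units, and the fixed margin of 10 cm (m = 10, within the [5, 20] range mentioned above) are illustrative assumptions, not details fixed by the patent.
```python
import cv2
import numpy as np

def track_current_depth(color_bgr, depth_mm, target_hist, target_depth_mm,
                        search_window, m_mm=100):
    """One tracking step: background projection map, depth-range crop,
    dilation and mean filtering, then CamShift. Returns
    (current_depth_mm, new_window) or None on tracking failure.
    Assumes depth_mm is registered pixel-for-pixel to color_bgr."""
    hsv = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2HSV)
    # Back-project the target's (precomputed) hue histogram onto this frame.
    backproj = cv2.calcBackProject([hsv], [0], target_hist, [0, 180], 1)
    # Keep only pixels whose depth lies within +/- m of the previous
    # target depth: the "predetermined depth range map".
    in_range = ((depth_mm >= target_depth_mm - m_mm) &
                (depth_mm <= target_depth_mm + m_mm))
    backproj[~in_range] = 0
    # Dilation and mean filtering close holes and suppress noise.
    backproj = cv2.dilate(backproj, np.ones((5, 5), np.uint8))
    backproj = cv2.blur(backproj, (5, 5))
    # Continuously adaptive mean shift, seeded with the previous window.
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    _, (x, y, w, h) = cv2.CamShift(backproj, search_window, criteria)
    if w == 0 or h == 0:
        return None  # tracking failed; re-identification is described below
    roi = depth_mm[y:y + h, x:x + w]
    valid = roi[roi > 0]
    if valid.size == 0:
        return None
    return float(np.median(valid)), (x, y, w, h)
```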
The drive calculation module 230 is used to determine the motion parameters of the shopping cart based on the calculated current depth. The motion parameters include the average speed of the shopping cart over the next time period, the average speed being determined based on the following formula:
v = (Δl₂ - Δl₁ + L) / T
where T is the duration of the time period, Δl₁ is the distance between the shopping cart and the target object at the beginning of the current time period, Δl₂ is the distance between the shopping cart and the target object at the end of the current time period, and L is the distance the shopping cart traveled during the current time period.
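As a worked example of the formula as reconstructed above (the helper name is ours): if the cart trailed the target by Δl₁ = 1.2 m at the start of a T = 2 s period, trails by Δl₂ = 1.8 m at its end, and itself traveled L = 2.0 m, the target moved 2.6 m, so the next-period speed is 1.3 m/s.
```python
def next_period_speed(delta_l1, delta_l2, cart_travel, period_s):
    """v = (dl2 - dl1 + L) / T: the target moved L + dl2 - dl1 during the
    current period, so this speed lets the cart match the target's pace."""
    return (delta_l2 - delta_l1 + cart_travel) / period_s

print(next_period_speed(1.2, 1.8, 2.0, 2.0))  # 1.3
```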
In one embodiment, the drive calculation module 230 is configured to determine the current distance between the target object and the shopping cart based on the calculated current depth, and to trigger the shopping cart drive unit 130 to drive the shopping cart only when the current distance is greater than a reference distance.
While the automatic tracking shopping cart 100 is in use, the image analysis module 220 may also fail to determine the current depth of the target object from the color image and depth image of the current frame; in that case, the target object needs to be re-determined.
In one embodiment, when the image analysis module 220 fails to determine the current depth of the target object from the color image and depth image of the current frame, the target determination module 210 re-determines as the target object the human target in the acquired color image whose target feature best matches the target feature determined in the previous frame.
In some embodiments, re-determining as the target object the human target in the acquired color image whose target feature best matches the target feature determined in the previous frame includes: calculating a histogram for each human target in the currently acquired color image; matching the histogram of each human target against the histogram of the target object determined in the previous frame to obtain a matching value for each human target; and re-determining as the target object the human target with the highest matching value above a reference matching value.
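A minimal sketch of this re-identification step in Python with OpenCV. The hue-histogram comparison by correlation, the 32-bin count, and the reference matching value of 0.6 are illustrative assumptions; the patent does not fix the histogram type or the threshold.
```python
import cv2

def rematch_target(human_rois_hsv, target_hist, ref_match=0.6):
    """Return the index of the candidate whose hue histogram best matches
    the lost target's histogram, or None if no match beats ref_match.
    target_hist must be a normalized 32-bin hue histogram as well."""
    best_idx, best_score = None, ref_match
    for i, roi in enumerate(human_rois_hsv):
        hist = cv2.calcHist([roi], [0], None, [32], [0, 180])
        cv2.normalize(hist, hist, 0, 1, cv2.NORM_MINMAX)
        score = cv2.compareHist(target_hist, hist, cv2.HISTCMP_CORREL)
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx
```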
When every determined matching value is below the reference matching value, the processing unit 120 notifies the image acquisition unit 110 to adjust its image acquisition direction and, after the adjustment, to reacquire a color image and a depth image. The processing unit 120 then performs the above process again based on the reacquired color image and depth image.
When a human target with a matching value above the reference matching value still cannot be determined from the reacquired (or repeatedly reacquired, for example three times) color image and depth image, the processing unit 120 no longer notifies the image acquisition unit 110 to adjust the acquisition direction; instead, it triggers the alarm unit provided on the shopping cart 100 to raise an alarm so as to notify the user to register again.
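The recovery behavior described in the last two paragraphs can be sketched as the loop below, composing with the rematch_target sketch above. Here capture_unit stands in for the image acquisition unit 110 and the alarm unit, and adjust_direction(), grab_frames(), detect_humans(), and alarm() are hypothetical interfaces, not APIs defined by the patent.
```python
def reacquire_or_alarm(capture_unit, target_hist, max_retries=3):
    """After a matching failure: turn the camera and retry a few times;
    if the target still cannot be re-identified, raise the alarm so the
    user registers again."""
    for _ in range(max_retries):
        capture_unit.adjust_direction()            # change the field of view
        color, depth = capture_unit.grab_frames()  # reacquire color + depth
        candidates = detect_humans(color)          # hypothetical detector
        idx = rematch_target(candidates, target_hist)
        if idx is not None:
            return idx                             # target re-identified
    capture_unit.alarm()                           # notify the user to re-register
    return None
```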
The foregoing detailed description has set forth numerous embodiments of the automatic tracking shopping cart through the use of schematic diagrams, flowcharts, and/or examples. Where such schematic diagrams, flowcharts, and/or examples contain one or more functions and/or operations, those skilled in the art will understand that each function and/or operation in such schematic diagrams, flowcharts, or examples can be implemented, individually and/or collectively, by a wide range of structures, hardware, software, firmware, or virtually any combination thereof. In one embodiment, several portions of the subject matter described in the embodiments of the present disclosure may be implemented by application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), digital signal processors (DSPs), or other integrated formats. However, those skilled in the art will recognize that some aspects of the embodiments disclosed here can be equivalently implemented, in whole or in part, in integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or as virtually any combination thereof, and that, in light of the present disclosure, those skilled in the art will be capable of designing the circuitry and/or writing the software and/or firmware code. In addition, those skilled in the art will recognize that the mechanisms of the subject matter of the present disclosure can be distributed as program products in a variety of forms, and that the illustrative embodiments apply regardless of the particular type of signal-bearing medium actually used to carry out the distribution. Examples of signal-bearing media include, but are not limited to: recordable-type media such as floppy disks, hard disk drives, compact discs (CDs), digital versatile discs (DVDs), digital tape, computer memory, and the like; and transmission-type media such as digital and/or analog communication media (e.g., fiber-optic cables, waveguides, wired communication links, wireless communication links, etc.).
Although the present disclosure has been described with reference to several typical embodiments, it should be understood that the terminology used is illustrative and exemplary rather than limiting. Since the present disclosure can be embodied in many forms without departing from its spirit or essence, it should be understood that the above embodiments are not limited to any of the foregoing details but should be construed broadly within the spirit and scope defined by the appended claims; all changes and modifications falling within the claims or their equivalents should therefore be covered by the appended claims.

Claims (12)

  1. An automatic tracking shopping cart, comprising:
    an automatic tracking device, fixed on the body of the shopping cart, for controlling the movement of the shopping cart so as to track a target object, the automatic tracking device comprising:
    an image acquisition unit for acquiring a color image and a depth image of the field of view;
    a processing unit for identifying the target object from the acquired color image and depth image and determining motion parameters of the shopping cart according to the position and/or movement of the target object; and
    a shopping cart drive unit that drives the shopping cart to move based on the determined motion parameters.
  2. The automatic tracking shopping cart according to claim 1, wherein the processing unit comprises:
    a target determination module that determines, based on the acquired color image and depth image, a target feature of the target object in the color image and a target depth of the target object in the depth image;
    an image analysis module that determines the current depth of the target object from the color image and depth image of the current frame, based on the target feature and target depth determined in the previous frame; and
    a drive calculation module that determines the motion parameters of the shopping cart based on the calculated current depth.
  3. The automatic tracking shopping cart according to claim 2, wherein the automatic tracking shopping cart further comprises a console for receiving instructions input by a user, and
    the target determination module is configured to: upon receiving, through the console, an instruction indicating that a new target object needs to be determined, determine the human target closest to the shopping cart in the acquired color image as the target object.
  4. The automatic tracking shopping cart according to claim 2, wherein the target determination module is configured to: when the image analysis module successfully determines the current depth of the target object from the color image and depth image of the current frame, use that current depth as the target depth for the next frame.
  5. The automatic tracking shopping cart according to claim 2, wherein the target determination module is configured to: when the image analysis module fails to determine the current depth of the target object from the color image and depth image of the current frame, re-determine as the target object the human target in the acquired color image whose target feature best matches the target feature determined in the previous frame.
  6. The automatic tracking shopping cart according to claim 5, wherein the target determination module is configured to:
    calculate a histogram for each human target in the currently acquired color image;
    match the histogram of each human target against the histogram of the target object determined in the previous frame to obtain a matching value for each human target; and
    re-determine as the target object the human target with the highest matching value above a reference matching value.
  7. The automatic tracking shopping cart according to claim 6, wherein the target determination module is configured to: if every determined matching value is below the reference matching value, adjust the acquisition direction of the image acquisition unit and reacquire a color image and a depth image.
  8. The automatic tracking shopping cart according to claim 7, wherein the automatic tracking device further comprises an alarm unit, and the target determination module is further configured to: if no human target with a matching value above the reference matching value can be determined from the reacquired color image and depth image, trigger the alarm unit.
  9. The automatic tracking shopping cart according to claim 2, wherein the image analysis module is configured to:
    calculate a background projection map based on the color image and depth image of the current frame;
    intercept a predetermined depth range map from the calculated background projection map based on the target depth;
    perform dilation and mean filtering on the predetermined depth range map; and
    determine the current depth of the target object.
  10. The automatic tracking shopping cart according to claim 2, wherein the drive calculation module is configured to: determine the current distance between the target object and the shopping cart based on the calculated current depth, and trigger the shopping cart drive unit to drive the shopping cart only when the current distance is greater than a reference distance.
  11. The automatic tracking shopping cart according to claim 1, wherein the motion parameters include the average speed v of the shopping cart over the next time period, the average speed v being determined based on the following formula:
    v = (Δl₂ - Δl₁ + L) / T
    where T is the duration of the time period, Δl₁ is the distance between the shopping cart and the target object at the beginning of the current time period, Δl₂ is the distance between the shopping cart and the target object at the end of the current time period, and L is the distance the shopping cart traveled during the current time period.
  12. The automatic tracking shopping cart according to claim 1, wherein the automatic tracking device further comprises: a memory for storing the color image, the depth image, the target feature, and/or the target depth.
PCT/CN2017/079475 2016-11-17 2017-04-05 Automatic tracking shopping cart WO2018090543A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/552,158 US10394247B2 (en) 2016-11-17 2017-04-05 Automatic tracking shopping cart

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611020109.7 2016-11-17
CN201611020109.7A CN106778471B (zh) Automatic tracking shopping cart

Publications (1)

Publication Number Publication Date
WO2018090543A1 (zh) 2018-05-24

Family

ID=58969006

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/079475 WO2018090543A1 (zh) 2016-11-17 2017-04-05 自动跟踪购物车

Country Status (3)

Country Link
US (1) US10394247B2 (zh)
CN (1) CN106778471B (zh)
WO (1) WO2018090543A1 (zh)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107292907B (zh) * 2017-07-14 2020-08-21 灵动科技(北京)有限公司 Method for locating a followed target, and following device
CN108596128B (zh) 2018-04-28 2020-06-26 京东方科技集团股份有限公司 Object recognition method, apparatus, and storage medium
CN109816688A (zh) * 2018-12-03 2019-05-28 安徽酷哇机器人有限公司 Article following method and suitcase
US11511785B2 (en) * 2019-04-30 2022-11-29 Lg Electronics Inc. Cart robot with automatic following function
WO2020222329A1 (ko) * 2019-04-30 2020-11-05 엘지전자 주식회사 Cart having automatic following function
WO2020222330A1 (ko) * 2019-04-30 2020-11-05 엘지전자 주식회사 Cart having automatic following function
US11585934B2 (en) * 2019-04-30 2023-02-21 Lg Electronics Inc. Cart robot having auto-follow function
JP7274970B2 (ja) * 2019-08-01 2023-05-17 本田技研工業株式会社 Tracking target identification system and tracking target identification method
KR102484489B1 (ko) * 2021-04-09 2023-01-03 동의대학교 산학협력단 Smart cart and control method thereof

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6378684B1 (en) * 2000-02-14 2002-04-30 Gary L. Cox Detecting mechanism for a grocery cart and the like and system
US8564661B2 (en) * 2000-10-24 2013-10-22 Objectvideo, Inc. Video analytic rule detection system and method
US7487913B2 (en) * 2006-07-12 2009-02-10 International Business Machines Corporation Method and system for reducing waste due to product spoilage within a grocery environment
US7742952B2 (en) * 2008-03-21 2010-06-22 Sunrise R&D Holdings, Llc Systems and methods of acquiring actual real-time shopper behavior data approximate to a moment of decision by a shopper
JP5589527B2 (ja) * 2010-04-23 2014-09-17 株式会社リコー Imaging apparatus and tracking subject detection method
CN102509074B (zh) * 2011-10-18 2014-01-29 Tcl集团股份有限公司 Target recognition method and device
US9740937B2 (en) * 2012-01-17 2017-08-22 Avigilon Fortress Corporation System and method for monitoring a retail environment using video content analysis with depth sensing
PL2898384T3 (pl) * 2012-09-19 2020-05-18 Follow Inspiration Unipessoal, Lda. Self-tracking system and its operation method
WO2014205425A1 (en) * 2013-06-22 2014-12-24 Intellivision Technologies Corp. Method of tracking moveable objects by combining data obtained from multiple sensor types
GB2522291A (en) * 2014-01-20 2015-07-22 Joseph Bentsur Shopping cart and system
CN105785996A (zh) * 2016-03-31 2016-07-20 浙江大学 Supermarket intelligent tracking shopping cart and tracking method thereof
CN105930784B (zh) * 2016-04-15 2017-10-13 济南大学 Gesture recognition method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006155039A (ja) * 2004-11-26 2006-06-15 Toshiba Corp Store robot
CN101612950A (zh) * 2009-08-05 2009-12-30 山东大学 Intelligent tracking power-assisted luggage rack
CN201808591U (zh) * 2010-03-22 2011-04-27 北京印刷学院 Supermarket shopping cart drive terminal
CN102289556A (zh) * 2011-05-13 2011-12-21 郑正耀 Supermarket shopping assistant robot
CN102867311A (zh) * 2011-07-07 2013-01-09 株式会社理光 Target tracking method and target tracking device

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109099891A (zh) * 2018-07-12 2018-12-28 广州维绅科技有限公司 Spatial positioning method, apparatus and system based on image recognition
CN109099891B (zh) * 2018-07-12 2021-08-13 广州达泊智能科技有限公司 Spatial positioning method, apparatus and system based on image recognition
CN109324625A (zh) * 2018-11-12 2019-02-12 辽东学院 Automatic tracking shopping device

Also Published As

Publication number Publication date
US10394247B2 (en) 2019-08-27
CN106778471A (zh) 2017-05-31
CN106778471B (zh) 2019-11-19
US20180335786A1 (en) 2018-11-22

Similar Documents

Publication Publication Date Title
WO2018090543A1 (zh) Automatic tracking shopping cart
JP6942488B2 (ja) Image processing apparatus, image processing system, image processing method, and program
KR101776622B1 (ko) Apparatus for recognizing the position of a mobile robot using direct tracking, and method therefor
JP6162805B2 (ja) Maintaining continuity of augmentations
CN104981680A (zh) Camera-aided motion direction and speed estimation
KR101784183B1 (ko) Apparatus for recognizing the position of a mobile robot using ADoG-based feature points, and method therefor
JP6806188B2 (ja) Information processing system, information processing method, and program
US9373198B2 (en) Faulty cart wheel detection
US9305217B2 (en) Object tracking system using robot and object tracking method using a robot
WO2018073829A1 (en) Human-tracking robot
JP2015036980A (ja) Hybrid method and system of video- and vision-based access control for parking lot occupancy determination
WO2019064375A1 (ja) Information processing apparatus, control method, and program
KR20150144727A (ko) Apparatus for recognizing the position of a mobile robot using edge-based refinement, and method therefor
Bosch-Jorge et al. Fall detection based on the gravity vector using a wide-angle camera
US20170006215A1 (en) Methods and systems for controlling a camera to perform a task
JP6217635B2 (ja) Fall detection device, fall detection method, fall detection camera, and computer program
CN112703533A (zh) Object tracking
JP6868061B2 (ja) Person tracking method, apparatus, device, and storage medium
US20170322676A1 (en) Motion sensing method and motion sensing device
JP6789421B2 (ja) Information processing apparatus, tracking method, and tracking program
JP2012191354A (ja) Information processing apparatus, information processing method, and program
US20110304730A1 (en) Pan, tilt, and zoom camera and method for aiming ptz camera
Cosma et al. Camloc: Pedestrian location estimation through body pose estimation on smart cameras
JP6163732B2 (ja) Image processing apparatus, program, and method
US11558539B2 (en) Systems and methods of detecting and identifying an object

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 15552158

Country of ref document: US

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17872154

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17872154

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 05/11/2019)