WO2018090543A1 - Automatic tracking shopping cart - Google Patents
Automatic tracking shopping cart
- Publication number
- WO2018090543A1 (PCT/CN2017/079475)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- shopping cart
- target
- depth
- image
- automatic tracking
- Prior art date
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
- G01C21/206—Instruments for performing navigational calculations specially adapted for indoor navigation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/30—Noise filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/758—Involving statistics of pixels or of feature values, e.g. histogram matching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B62—LAND VEHICLES FOR TRAVELLING OTHERWISE THAN ON RAILS
- B62B—HAND-PROPELLED VEHICLES, e.g. HAND CARTS OR PERAMBULATORS; SLEDGES
- B62B3/00—Hand carts having more than one axis carrying transport wheels; Steering devices therefor; Equipment therefor
- B62B3/14—Hand carts having more than one axis carrying transport wheels; Steering devices therefor; Equipment therefor characterised by provisions for nesting or stacking, e.g. shopping trolleys
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B62—LAND VEHICLES FOR TRAVELLING OTHERWISE THAN ON RAILS
- B62B—HAND-PROPELLED VEHICLES, e.g. HAND CARTS OR PERAMBULATORS; SLEDGES
- B62B5/00—Accessories or details specially adapted for hand carts
- B62B5/0026—Propulsion aids
- B62B5/0033—Electric motors
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B62—LAND VEHICLES FOR TRAVELLING OTHERWISE THAN ON RAILS
- B62B—HAND-PROPELLED VEHICLES, e.g. HAND CARTS OR PERAMBULATORS; SLEDGES
- B62B5/00—Accessories or details specially adapted for hand carts
- B62B5/0026—Propulsion aids
- B62B5/0063—Propulsion aids guiding, e.g. by a rail
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B62—LAND VEHICLES FOR TRAVELLING OTHERWISE THAN ON RAILS
- B62B—HAND-PROPELLED VEHICLES, e.g. HAND CARTS OR PERAMBULATORS; SLEDGES
- B62B5/00—Accessories or details specially adapted for hand carts
- B62B5/0026—Propulsion aids
- B62B5/0069—Control
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
Definitions
- the present disclosure relates to the field of artificial intelligence, and in particular to an automatic tracking shopping cart.
- the present disclosure proposes an automatic tracking shopping cart.
- an automatic tracking shopping cart includes an automatic tracking device that is fixed to the body of the shopping cart for controlling the movement of the shopping cart to track the target object.
- the automatic tracking device includes: an image acquisition unit configured to collect a color image and a depth image of the field of view; a processing unit configured to identify the target object from the collected color image and depth image and to determine motion parameters of the shopping cart according to the position and/or movement of the target object; and a shopping cart drive unit that drives the shopping cart to move based on the determined motion parameters.
- the processing unit includes: a target determination module that determines, based on the acquired color image and depth image, a target feature of the target object in the color image and a target depth of the target object in the depth image; an image analysis module that determines the current depth of the target object from the color image and depth image of the current frame, based on the target feature and target depth determined in the previous frame; and a drive calculation module that determines motion parameters of the shopping cart based on the calculated current depth.
- the automatic tracking shopping cart further includes a console for receiving instructions input by a user, and the target determination module is configured to: upon receiving, through the console, an instruction indicating that a new target object needs to be determined, determine the human body target closest to the shopping cart in the collected color image as the target object.
- the target determination module is configured to: when the image analysis module succeeds in determining the current depth of the target object from the color image and depth image of the current frame, use that current depth as the target depth for the next frame.
- the target determination module is configured to: when the image analysis module fails to determine the current depth of the target object from the color image and depth image of the current frame, re-determine, as the target object, the human body target in the acquired color image whose target feature best matches the target feature determined in the previous frame.
- the target determination module is configured to: calculate a histogram of each human body target in the currently acquired color image; match the histogram of each human body target against the histogram of the target object determined in the previous frame to determine a matching value for each human body target; and re-determine, as the target object, the human body target with the highest matching value that is higher than a reference matching value.
- the target determination module is configured to: if all of the determined matching values are lower than the reference matching value, adjust the acquisition direction of the image acquisition unit and reacquire a color image and a depth image.
- the automatic tracking device further includes an alarm unit, and the target determination module is further configured to: trigger the alarm unit if no human body target with a matching value higher than the reference matching value can be determined from the reacquired color image and depth image.
- the image analysis module is configured to: calculate a background projection map based on the color image and depth image of the current frame; intercept a predetermined depth range map from the calculated background projection map based on the target depth; perform dilation and mean filtering on the predetermined depth range map; and determine the current depth of the target object.
- the drive calculation module is configured to: determine the current distance between the target object and the shopping cart based on the calculated current depth, and trigger the shopping cart drive unit to drive the shopping cart only when the current distance is greater than a reference distance.
- the motion parameter includes an average speed of the shopping cart over a next time period, the average speed being determined based on the following formula:
- the automatic tracking device further includes a memory for storing the color image, the depth image, the target feature, and/or the target depth.
- FIG. 1 shows a block diagram of an example automatic tracking shopping cart 100 in accordance with an embodiment of the present disclosure.
- FIG. 2 is a block diagram showing an example structure of the processing unit 120 in the automatic tracking shopping cart 100 shown in FIG. 1.
- FIG. 1 shows a block diagram of an example automatic tracking shopping cart 100 in accordance with an embodiment of the present disclosure.
- the automatic tracking shopping cart 100 includes an automatic control device composed of an image acquisition unit 110, a processing unit 120, and a shopping cart drive unit 130. These components of the automatic control device are fixed on the body of the shopping cart 100 and are used to control the movement of the shopping cart to track the target object.
- in FIG. 1, the image acquisition unit 110, the processing unit 120, and the shopping cart drive unit 130 are shown at different locations on the shopping cart body, i.e., the automatic control device is composed of separate components. It should be understood, however, that the structure shown in FIG. 1 is only an exemplary structure of the present disclosure and is not intended to limit its scope. In other embodiments, the image acquisition unit 110, the processing unit 120, and the shopping cart drive unit 130 may be implemented as an integrated automatic control device, i.e., the automatic control device may be formed as a single physical device.
- the image acquisition unit 110 and the processing unit 120 communicate via a wireless connection
- the processing unit 120 communicates with the shopping cart driving unit 130 via a wired connection.
- the connection manner shown in FIG. 1 is only an example of the present disclosure.
- the image acquisition unit 110 and the processing unit 120, as well as the processing unit 120 and the shopping cart drive unit 130, may be connected in any suitable manner, for example via wireless communication such as WiFi, Bluetooth, or a mobile network.
- the illustrated automatic tracking shopping cart 100 also includes a console (not shown).
- the console can be implemented by conventional electronic devices having input functions, such as a keyboard, a mouse, a touch screen, a mobile phone, a tablet computer, a microphone device, and the like. Through the console, users can perform input, selection, and similar operations to switch the automatic tracking shopping cart 100 between different modes, such as registration, tracking, standby, and shutdown, and to control the operations the cart performs in each mode.
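As a rough illustration of the console-driven modes named above, the following sketch models the mode switching. The four mode names come from the text; the command strings and the `handle_console_input` helper are purely hypothetical:

```python
from enum import Enum

class Mode(Enum):
    REGISTRATION = "registration"
    TRACKING = "tracking"
    STANDBY = "standby"
    SHUTDOWN = "shutdown"

# Illustrative command-to-mode table; the command strings are invented here.
TRANSITIONS = {
    "register": Mode.REGISTRATION,
    "track": Mode.TRACKING,
    "standby": Mode.STANDBY,
    "off": Mode.SHUTDOWN,
}

def handle_console_input(mode, command):
    """Switch modes on a recognized command; otherwise stay in the current mode."""
    return TRANSITIONS.get(command, mode)
```

An unrecognized command leaves the cart in its current mode, which keeps the sketch safe against arbitrary console input.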
- the automatic tracking shopping cart 100 further includes the structures and components of a conventional shopping cart, such as a vehicle body, an armrest, wheels (e.g., universal wheels), a box, etc.; these components and structures are not described herein again.
- the illustrated image acquisition unit 110 is configured to acquire color images and depth images of the field of view.
- the image acquisition unit 110 is an RGB-D camera.
- the processing unit 120 is configured to identify the target object according to the collected color image and the depth image, and determine the motion parameter of the shopping cart 100 according to the position and/or movement of the target object.
- the processing unit 120 will be described in detail below in conjunction with FIG. 2.
- the shopping cart drive unit 130 is configured to drive the shopping cart 100 to move based on the determined motion parameters.
- the shopping cart drive unit 130 includes a battery, an ARM control board, a motor driver, a motor (e.g., a DC brushless motor), and the like.
- the automatic tracking device further includes a memory.
- the memory is used to store color images, depth images, target features, and/or target depths, etc., as described below.
- FIG. 2 is a block diagram showing an example structure of the processing unit 120 in the automatic tracking shopping cart 100 shown in FIG. 1.
- the processing unit 120 includes a target determining module 210, an image analyzing module 220, and a driving computing module 230.
- the target determining module 210 is configured to determine, according to the collected color image and the depth image, a target feature of the target object in the color image and a target depth of the target object in the depth image.
- the image analysis module 220 is configured to determine a current depth of the target object from the color image and the depth image of the current frame based on the target feature and the target depth determined in the previous frame.
- the driving calculation module 230 is configured to determine a motion parameter of the shopping cart based on the calculated current depth.
- the target determining module 210 obtains the acquired color image and depth image, and determines a target feature that the target object is embodied in the color image.
- the target feature may be, for example, the contour, color, or size in a particular dimension of the target object.
- the target determination module 210 determines the human body target closest to the shopping cart among the collected color images as the target object.
- the target determination module 210 uses the target feature determined in the previous frame as the target feature used in the current frame. In one embodiment, if a tracking failure occurs in the current frame, the target feature may be updated or corrected, and the target feature updated in the current frame is then used as the target feature in the next frame.
- the target determination module 210 also determines a target depth that the target object is embodied in the depth image.
- once the target object is identified, the current depth (initial depth) of the target object can be determined from the depth image acquired by the image acquisition unit 110; this is the target depth determined in the registration frame (to be used for the next frame).
- the target determination module 210 uses the "current" depth determined in the previous frame as the target depth in the current frame.
- the target depth serves as an initial value for determining the actual depth at which the target object is located in the current frame, which, in conjunction with the algorithm described below, implements the technical solution of the present disclosure.
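The frame-to-frame hand-off of the target depth described above can be sketched as follows; `find_depth` and `rematch` are hypothetical stand-ins for the image analysis module and the histogram-based target re-determination, not names from the patent:

```python
def track_frame(state, find_depth, rematch):
    """One tracking iteration: `find_depth` returns the current depth found
    near the previous frame's target depth, or None on failure; `rematch`
    re-identifies the target (e.g., by histogram matching). On success the
    current depth seeds the next frame; on failure the target is re-determined."""
    depth = find_depth(state["target_depth"])
    if depth is not None:
        state["target_depth"] = depth      # success: reuse as next frame's seed
    else:
        state["target_depth"] = rematch()  # failure: re-determine the target
    return state
```

Calling this once per acquired frame realizes the "target depth as initial value" loop: each frame's output depth becomes the next frame's starting point.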
- the image analysis module 220 is configured to determine a current depth of the target object from a color image and a depth image of the current frame. Specifically, this process can include the following steps:
- a background projection map is calculated based on the color image and the depth image of the current frame. The background projection map can be understood as a three-dimensional image obtained by combining the color image and the depth image: each object in the two-dimensional color image is assigned a value in the third dimension according to its depth value in the depth image;
- a predetermined depth range map is intercepted from the calculated background projection map, for example an image covering a depth range of m cm before and after the target depth, where the value of m can be set as needed, for example m ∈ [5, 20];
- the predetermined depth range map is subjected to dilation and mean filtering; and the current depth of the target object is determined.
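The depth-range interception and the dilation and mean-filtering operations named in the claims can be sketched in pure Python over small list-of-lists images; the function names and kernel sizes are illustrative choices, not from the patent:

```python
def intercept_depth_range(depth, target_depth, m=10):
    """Keep only pixels whose depth lies within +/- m cm of the target depth;
    everything else is zeroed out as background."""
    return [[d if abs(d - target_depth) <= m else 0 for d in row] for row in depth]

def dilate(mask, k=3):
    """Binary dilation with a k x k structuring element."""
    h, w, r = len(mask), len(mask[0]), k // 2
    return [[1 if any(mask[yy][xx]
                      for yy in range(max(0, y - r), min(h, y + r + 1))
                      for xx in range(max(0, x - r), min(w, x + r + 1)))
             else 0 for x in range(w)] for y in range(h)]

def mean_filter(img, k=3):
    """k x k averaging filter; borders use the partial window."""
    h, w, r = len(img), len(img[0]), k // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - r), min(h, y + r + 1))
                    for xx in range(max(0, x - r), min(w, x + r + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out
```

In practice these operations map onto standard image-processing primitives (morphological dilation and box filtering); the pure-Python form here just makes the per-pixel logic explicit.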
- in one embodiment, the determination of the current depth of the target object is implemented using the continuous adaptive mean shift (Camshift) algorithm.
- the driving calculation module 230 is configured to determine a motion parameter of the shopping cart based on the calculated current depth.
- the motion parameters include an average speed of the shopping cart over the next time period, determined by a formula in which T is the time interval of the period, Δl1 is the distance between the shopping cart and the target object at the beginning of the current period, Δl2 is the distance between the shopping cart and the target object at the end of the current period, and L is the distance the shopping cart travelled during the current period.
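The formula itself appears only in the drawings of the publication. From the variable definitions alone, and assuming collinear straight-line motion of cart and target, one can at least derive the target's average speed over the current period; this is a plausible building block of such a formula, not necessarily the patent's actual expression:

```python
def target_average_speed(T, dl1, dl2, L):
    """While the cart travelled L toward the target, the cart-target distance
    changed from dl1 to dl2, so the target itself travelled L + (dl2 - dl1);
    dividing by the period T gives the target's average speed."""
    return (L + (dl2 - dl1)) / T
```

A drive controller would then choose the cart's next-period speed around this value, speeding up when the gap has grown and slowing when it has shrunk.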
- the drive calculation module 230 is configured to determine the current distance between the target object and the shopping cart based on the calculated current depth, and to trigger the shopping cart drive unit 130 to drive the shopping cart only when the current distance is greater than the reference distance.
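The distance-triggered drive decision can be sketched as follows. The threshold test mirrors the text; the proportional speed rule is a simple stand-in of ours (the patent's speed formula involves T, Δl1, Δl2 and L), and the 80 cm reference is an invented default:

```python
def motion_parameters(current_distance_cm, reference_cm=80.0, period_s=1.0):
    """Drive only when the target is farther than the reference distance.
    The speed here is a proportional stand-in, NOT the patent's formula."""
    if current_distance_cm <= reference_cm:
        return {"drive": False, "speed_cm_s": 0.0}
    return {"drive": True,
            "speed_cm_s": (current_distance_cm - reference_cm) / period_s}
```

Keeping the cart stationary inside the reference distance avoids the cart creeping into the shopper when they pause at a shelf.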
- in some cases, the image analysis module 220 fails to determine the current depth of the target object from the color image and the depth image of the current frame; the target object then needs to be re-determined.
- when the image analysis module 220 fails to determine the current depth of the target object from the color image and depth image of the current frame, the target determination module 210 re-determines, as the target object, the human body target in the acquired color image whose target feature best matches the target feature determined in the previous frame.
- re-determining, as the target object, the human body target in the acquired color image whose target feature best matches the target feature determined in the previous frame comprises: calculating a histogram of each human body target in the currently acquired color image; matching the histogram of each human body target against the histogram of the target object determined in the previous frame to determine a matching value for each human body target; and re-determining, as the target object, the human body target with the highest matching value that is higher than the reference matching value.
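The histogram-matching re-identification described above can be sketched in pure Python. The coarse RGB histogram, the intersection measure, and the 0.7 reference value are illustrative choices of ours, not specified by the patent:

```python
def color_histogram(pixels, bins=8):
    """Coarse RGB histogram over (r, g, b) tuples in 0-255, normalized to sum 1."""
    hist = [0.0] * (bins ** 3)
    for r, g, b in pixels:
        idx = ((r * bins) // 256) * bins * bins \
            + ((g * bins) // 256) * bins \
            + (b * bins) // 256
        hist[idx] += 1
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def match_value(h1, h2):
    """Histogram intersection: 1.0 means identical color distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def redetermine_target(candidate_pixel_sets, target_hist, reference=0.7):
    """Return the index of the human body target whose histogram best matches
    the previous frame's target, or None if every matching value falls below
    the reference (the caller should then adjust the camera and reacquire)."""
    scores = [match_value(color_histogram(p), target_hist)
              for p in candidate_pixel_sets]
    if not scores:
        return None
    best = max(range(len(scores)), key=scores.__getitem__)
    return best if scores[best] >= reference else None
```

Returning `None` rather than the weakest candidate is what forces the re-acquisition path described in the following paragraphs.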
- when all of the determined matching values are lower than the reference matching value, the processing unit 120 notifies the image acquisition unit 110 to adjust its acquisition direction and, after the adjustment, to reacquire a color image and a depth image. The processing unit 120 then performs the matching again based on the reacquired color image and depth image.
- if a matching human body target still cannot be determined from the reacquired images, the processing unit 120 no longer instructs the image acquisition unit 110 to adjust its acquisition direction; instead, it triggers the alarm unit provided on the shopping cart 100 to alert the user and prompt re-registration.
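The retry-then-alarm fallback can be sketched as a small control flow; the three callables are hypothetical stand-ins for the camera adjustment, the histogram matcher, and the alarm unit:

```python
def handle_match_failure(adjust_and_reacquire, find_match, trigger_alarm):
    """Fallback when every matching value is below the reference: adjust the
    camera and retry the match once; if it still fails, trigger the alarm so
    the user can re-register the target."""
    adjust_and_reacquire()        # turn the camera and grab fresh images
    target = find_match()         # re-run histogram matching on the new frames
    if target is None:
        trigger_alarm()           # give up and ask the user to re-register
    return target
```

Limiting the automatic retry to one camera adjustment before alarming matches the two-stage behavior described in the text.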
- aspects of the embodiments disclosed herein may be implemented, in whole or in part, in an integrated circuit, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or as substantially any combination thereof; and, in light of the present disclosure, those skilled in the art will be capable of designing the circuitry and/or writing the software and/or firmware code.
- signal bearing media include, but are not limited to: recordable media such as floppy disks, hard drives, compact disks (CDs), digital versatile disks (DVDs), digital tapes, and computer memories; and transmission-type media such as digital and/or analog communication media (e.g., fiber optic cables, waveguides, wired communication links, and wireless communication links).
Abstract
Description
Claims (12)
- An automatic tracking shopping cart, comprising: an automatic tracking device fixed on the body of the shopping cart and used to control the movement of the shopping cart so as to track a target object, the automatic tracking device comprising: an image acquisition unit for acquiring a color image and a depth image of the field of view; a processing unit for identifying the target object from the acquired color image and depth image and determining motion parameters of the shopping cart according to the position and/or movement of the target object; and a shopping cart drive unit for driving the shopping cart to move based on the determined motion parameters.
- The automatic tracking shopping cart according to claim 1, wherein the processing unit comprises: a target determination module that determines, based on the acquired color image and depth image, a target feature of the target object in the color image and a target depth of the target object in the depth image; an image analysis module that determines the current depth of the target object from the color image and depth image of the current frame based on the target feature and target depth determined in the previous frame; and a drive calculation module that determines motion parameters of the shopping cart based on the calculated current depth.
- The automatic tracking shopping cart according to claim 2, wherein the automatic tracking shopping cart further comprises a console for receiving instructions input by a user, and the target determination module is configured to: in response to an instruction, received through the console, indicating that a new target object needs to be determined, determine the human body target closest to the shopping cart in the acquired color image as the target object.
- The automatic tracking shopping cart according to claim 2, wherein the target determination module is configured to: when the image analysis module succeeds in determining the current depth of the target object from the color image and depth image of the current frame, use the current depth as the target depth for the next frame.
- The automatic tracking shopping cart according to claim 2, wherein the target determination module is configured to: when the image analysis module fails to determine the current depth of the target object from the color image and depth image of the current frame, re-determine, as the target object, the human body target in the acquired color image whose target feature best matches the target feature determined in the previous frame.
- The automatic tracking shopping cart according to claim 5, wherein the target determination module is configured to: calculate a histogram of each human body target in the currently acquired color image; match the histogram of each human body target against the histogram of the target object determined in the previous frame to determine a matching value for each human body target; and re-determine, as the target object, the human body target with the highest matching value that is higher than a reference matching value.
- The automatic tracking shopping cart according to claim 6, wherein the target determination module is configured to: if all of the determined matching values are lower than the reference matching value, adjust the acquisition direction of the image acquisition unit and reacquire a color image and a depth image.
- The automatic tracking shopping cart according to claim 7, wherein the automatic tracking device further comprises an alarm unit, and the target determination module is further configured to: trigger the alarm unit if no human body target with a matching value higher than the reference matching value can be determined from the reacquired color image and depth image.
- The automatic tracking shopping cart according to claim 2, wherein the image analysis module is configured to: calculate a background projection map based on the color image and depth image of the current frame; intercept a predetermined depth range map from the calculated background projection map based on the target depth; perform dilation and mean filtering on the predetermined depth range map; and determine the current depth of the target object.
- The automatic tracking shopping cart according to claim 2, wherein the drive calculation module is configured to: determine the current distance between the target object and the shopping cart based on the calculated current depth, and trigger the shopping cart drive unit to drive the shopping cart only when the current distance is greater than a reference distance.
- The automatic tracking shopping cart according to claim 1, wherein the automatic tracking device further comprises: a memory for storing the color image, the depth image, the target feature and/or the target depth.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/552,158 US10394247B2 (en) | 2016-11-17 | 2017-04-05 | Automatic tracking shopping cart |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611020109.7 | 2016-11-17 | ||
CN201611020109.7A CN106778471B (zh) | 2016-11-17 | 2016-11-17 | Automatic tracking shopping cart |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018090543A1 true WO2018090543A1 (zh) | 2018-05-24 |
Family
ID=58969006
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/079475 WO2018090543A1 (zh) | Automatic tracking shopping cart |
Country Status (3)
Country | Link |
---|---|
US (1) | US10394247B2 (zh) |
CN (1) | CN106778471B (zh) |
WO (1) | WO2018090543A1 (zh) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109099891A (zh) * | 2018-07-12 | 2018-12-28 | 广州维绅科技有限公司 | Spatial positioning method, apparatus and system based on image recognition |
CN109324625A (zh) * | 2018-11-12 | 2019-02-12 | 辽东学院 | Automatic tracking shopping device |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107292907B (zh) * | 2017-07-14 | 2020-08-21 | 灵动科技(北京)有限公司 | Method for locating a following target and following device |
CN108596128B (zh) | 2018-04-28 | 2020-06-26 | 京东方科技集团股份有限公司 | Object recognition method, apparatus and storage medium |
CN109816688A (zh) * | 2018-12-03 | 2019-05-28 | 安徽酷哇机器人有限公司 | Article following method and luggage case |
US11511785B2 (en) * | 2019-04-30 | 2022-11-29 | Lg Electronics Inc. | Cart robot with automatic following function |
WO2020222329A1 (ko) * | 2019-04-30 | 2020-11-05 | 엘지전자 주식회사 | Cart with automatic following function |
WO2020222330A1 (ko) * | 2019-04-30 | 2020-11-05 | 엘지전자 주식회사 | Cart with automatic following function |
US11585934B2 (en) * | 2019-04-30 | 2023-02-21 | Lg Electronics Inc. | Cart robot having auto-follow function |
JP7274970B2 (ja) * | 2019-08-01 | 2023-05-17 | 本田技研工業株式会社 | Follow-target identification system and follow-target identification method |
KR102484489B1 (ko) * | 2021-04-09 | 2023-01-03 | 동의대학교 산학협력단 | Smart cart and control method thereof |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006155039A (ja) * | 2004-11-26 | 2006-06-15 | Toshiba Corp | Store robot |
CN101612950A (zh) * | 2009-08-05 | 2009-12-30 | 山东大学 | Intelligent tracking power-assisted luggage rack |
CN201808591U (zh) * | 2010-03-22 | 2011-04-27 | 北京印刷学院 | Supermarket shopping cart drive terminal |
CN102289556A (zh) * | 2011-05-13 | 2011-12-21 | 郑正耀 | Supermarket shopping assistant robot |
CN102867311A (zh) * | 2011-07-07 | 2013-01-09 | 株式会社理光 | Target tracking method and target tracking device |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6378684B1 (en) * | 2000-02-14 | 2002-04-30 | Gary L. Cox | Detecting mechanism for a grocery cart and the like and system |
US8564661B2 (en) * | 2000-10-24 | 2013-10-22 | Objectvideo, Inc. | Video analytic rule detection system and method |
US7487913B2 (en) * | 2006-07-12 | 2009-02-10 | International Business Machines Corporation | Method and system for reducing waste due to product spoilage within a grocery environment |
US7742952B2 (en) * | 2008-03-21 | 2010-06-22 | Sunrise R&D Holdings, Llc | Systems and methods of acquiring actual real-time shopper behavior data approximate to a moment of decision by a shopper |
JP5589527B2 (ja) * | 2010-04-23 | 2014-09-17 | 株式会社リコー | Imaging apparatus and tracking-subject detection method |
CN102509074B (zh) * | 2011-10-18 | 2014-01-29 | Tcl集团股份有限公司 | Target recognition method and device |
US9740937B2 (en) * | 2012-01-17 | 2017-08-22 | Avigilon Fortress Corporation | System and method for monitoring a retail environment using video content analysis with depth sensing |
PL2898384T3 (pl) * | 2012-09-19 | 2020-05-18 | Follow Inspiration Unipessoal, Lda. | Automatically tracking system and its method of operation |
WO2014205425A1 (en) * | 2013-06-22 | 2014-12-24 | Intellivision Technologies Corp. | Method of tracking moveable objects by combining data obtained from multiple sensor types |
GB2522291A (en) * | 2014-01-20 | 2015-07-22 | Joseph Bentsur | Shopping cart and system |
CN105785996A (zh) * | 2016-03-31 | 2016-07-20 | 浙江大学 | Supermarket intelligent tracking shopping cart and tracking method therefor |
CN105930784B (zh) * | 2016-04-15 | 2017-10-13 | 济南大学 | Gesture recognition method |
- 2016-11-17: CN application CN201611020109.7A, granted as CN106778471B (Active)
- 2017-04-05: PCT application PCT/CN2017/079475, published as WO2018090543A1 (Application Filing)
- 2017-04-05: US application US15/552,158, granted as US10394247B2 (Active)
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109099891A (zh) * | 2018-07-12 | 2018-12-28 | 广州维绅科技有限公司 | Spatial positioning method, apparatus and system based on image recognition |
CN109099891B (zh) * | 2018-07-12 | 2021-08-13 | 广州达泊智能科技有限公司 | Spatial positioning method, apparatus and system based on image recognition |
CN109324625A (zh) * | 2018-11-12 | 2019-02-12 | 辽东学院 | Automatic tracking shopping device |
Also Published As
Publication number | Publication date |
---|---|
US10394247B2 (en) | 2019-08-27 |
CN106778471A (zh) | 2017-05-31 |
CN106778471B (zh) | 2019-11-19 |
US20180335786A1 (en) | 2018-11-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018090543A1 (zh) | Automatic tracking shopping cart | |
JP6942488B2 (ja) | Image processing apparatus, image processing system, image processing method, and program | |
KR101776622B1 (ko) | Apparatus and method for recognizing the position of a mobile robot using direct tracking | |
JP6162805B2 (ja) | Maintaining continuity of augmentations | |
CN104981680A (zh) | Camera-assisted motion direction and speed estimation | |
KR101784183B1 (ko) | Apparatus and method for recognizing the position of a mobile robot using ADoG-based feature points | |
JP6806188B2 (ja) | Information processing system, information processing method, and program | |
US9373198B2 (en) | Faulty cart wheel detection | |
US9305217B2 (en) | Object tracking system using robot and object tracking method using a robot | |
WO2018073829A1 (en) | Human-tracking robot | |
JP2015036980A (ja) | Hybrid method and system of video- and vision-based access control for parking-space occupancy determination | |
WO2019064375A1 (ja) | Information processing apparatus, control method, and program | |
KR20150144727A (ko) | Apparatus and method for recognizing the position of a mobile robot using edge-based readjustment | |
Bosch-Jorge et al. | Fall detection based on the gravity vector using a wide-angle camera | |
US20170006215A1 (en) | Methods and systems for controlling a camera to perform a task | |
JP6217635B2 (ja) | Fall detection device, fall detection method, fall detection camera, and computer program | |
CN112703533A (zh) | Object tracking | |
JP6868061B2 (ja) | Person tracking method, apparatus, device, and storage medium | |
US20170322676A1 (en) | Motion sensing method and motion sensing device | |
JP6789421B2 (ja) | Information processing apparatus, tracking method, and tracking program | |
JP2012191354A (ja) | Information processing apparatus, information processing method, and program | |
US20110304730A1 (en) | Pan, tilt, and zoom camera and method for aiming ptz camera | |
Cosma et al. | Camloc: Pedestrian location estimation through body pose estimation on smart cameras | |
JP6163732B2 (ja) | Image processing apparatus, program, and method | |
US11558539B2 (en) | Systems and methods of detecting and identifying an object |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| WWE | Wipo information: entry into national phase | Ref document number: 15552158; Country of ref document: US |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17872154; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 17872154; Country of ref document: EP; Kind code of ref document: A1 |
| 32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 05/11/2019) |