WO2022266996A1 - Object Detection Method and Object Detection Device
- Publication number: WO2022266996A1 (PCT application PCT/CN2021/102347)
- Authority: WO (WIPO (PCT))
- Prior art keywords: detection, detection frame, rectangular, frame, image
Classifications
- G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V10/255: Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
- G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
Description
- the invention relates to an object detection method and an object detection device for detecting objects from images.
- Patent Document 1 discloses an example of a product detection method: images of displayed products are acquired, the products in the images are detected, an image of each product is cropped out, and the products are classified according to their spatial positional relationships.
- Patent Document 1 Specification of Chinese Patent Application Publication No. 110738123
- In Patent Document 1, the image of each product is cropped using a rectangular frame such as a rectangle or a square.
- However, the outer shape of a product in the image may be distorted into a non-rectangular shape depending on, for example, the shooting angle at which the image of the product is acquired.
- In that case, the cropped image may fail to include part of the product to be detected, or may include regions other than the product to be detected, so the detection accuracy of the product may decrease.
- The present invention solves the above problems, and its object is to provide an object detection method and an object detection device capable of improving the detection accuracy of an object.
- The object detection method according to the present invention includes: an image acquisition step of acquiring an image containing an object; a first detection step of detecting the object in the image using a rectangular first detection frame; a detection frame setting step of setting a non-rectangular second detection frame corresponding to the detected object; and a second detection step of detecting the object using the second detection frame.
- The object detection device includes: an image acquisition unit that acquires an image containing an object; a first detection unit that detects the object in the image using a rectangular first detection frame; a detection frame setting unit that sets a non-rectangular second detection frame corresponding to the detected object; and a second detection unit that detects the object using the second detection frame.
- According to the present invention, the detection accuracy of the object can be improved.
- FIG. 1 is a schematic configuration diagram of the object information acquisition system according to Embodiment 1.
- FIG. 2 is a control block diagram of the object information acquisition system according to Embodiment 1.
- FIG. 3 is a flowchart of object information acquisition processing according to Embodiment 1.
- FIG. 4 is an example of a front image of a shelf captured by the imaging device 2 .
- FIG. 5 is a flowchart of object detection processing according to Embodiment 1.
- FIG. 6 is an example of detection results by the first detection unit according to Embodiment 1.
- FIG. 7 is an example of detection results by the second detection unit according to Embodiment 1.
- FIG. 8 is a flowchart of object information acquisition processing according to Embodiment 2.
- FIG. 1 is a schematic configuration diagram of an object information acquisition system 100 according to the first embodiment.
- The object information acquisition system 100 of this embodiment is used in a retail store such as a supermarket, and automatically detects and recognizes objects P, which are commodities stored on shelves S in the store, to acquire information on the objects P stored on the shelves S.
- the object information acquisition system 100 is composed of a processing device 1 and a photographing device 2 .
- the processing device 1 is a PC including a CPU and a memory, a server on the cloud, or the like.
- The imaging device 2 is a camera installed on the ceiling or a wall of the store that captures frontal images of the shelves S.
- the processing device 1 and the imaging device 2 are connected to be communicable by wire or wirelessly. Images captured by the photographing device 2 are sent to the processing device 1 .
- FIG. 2 is a control block diagram of the object information acquisition system 100 according to the first embodiment.
- the processing device 1 includes: an object detection unit 10 that detects an object P from an image; an object recognition unit 20 that recognizes the detected object P; and a storage unit 30 .
- the object detection unit 10 and the object recognition unit 20 are functional units realized by executing programs by the CPU. Alternatively, the object detection unit 10 and the object recognition unit 20 may also be realized by a dedicated processing circuit.
- the object detection unit 10 includes an image acquisition unit 11 , a first detection unit 12 , a detection frame setting unit 13 , and a second detection unit 14 .
- the image acquisition unit 11 acquires an image captured by the photographing device 2 and sends it to the first detection unit 12 .
- The first detection unit 12 detects the object P in the acquired image using a deep-learning-based algorithm such as SSD (Single Shot MultiBox Detector). In the first detection unit 12, the detection of the object P is performed using the rectangular first detection frame F1.
- the detection frame setting unit 13 sets a non-rectangular second detection frame F2 corresponding to the object P detected by the first detection unit 12 .
- the second detection unit 14 detects the object P using the second detection frame F2 set by the detection frame setting unit 13 , and sends the detection result to the object recognition unit 20 .
- the object recognition unit 20 recognizes the object P included in the image detected by the second detection unit 14 of the object detection unit 10 based on the shelf information and product information.
- the type and product name of the object P are recognized by using a known machine learning algorithm.
- the storage unit 30 is, for example, a volatile or nonvolatile memory such as RAM, ROM, or flash memory.
- the storage unit 30 stores programs executed by the object detection unit 10 and the object recognition unit 20 , various parameters used in the programs, shelf information, product information, detection frame data, detection history, and the like.
- the shelf information includes the position of each shelf S in the store, the category of products stored on each shelf S, the number and size of each shelf S, and the number of detection frames on each shelf S.
- the product information includes identification information such as the type and name of the product.
- the detection frame data is data of a plurality of non-rectangular detection frames that are candidates for the second detection frame F2 set by the detection frame setting unit 13.
- FIG. 3 is a flowchart of object information acquisition processing according to Embodiment 1.
- This process is periodically executed by the processing device 1.
- system initialization is performed (S1).
- initial values are set for each parameter of the object information acquisition process.
- The parameters include the number of detection frames, the maximum number of detection frames detectable on each shelf, the maximum number of shelf sections, the types of detection frames, and so on.
- the front image of the shelf S is photographed by the imaging device 2 and acquired by the image acquisition unit 11 of the processing device 1 (S2).
- the object detection unit 10 executes object detection processing based on the acquired image (S3). Thereby, a plurality of objects P included in the image are detected.
- the object recognition part 20 executes the object recognition process (S4). Thereby, the detected object P can be recognized, and the information of the object P accommodated on the shelf S can be acquired.
- the acquired information on the object P is sent to a management server or the like, and used for grasping sales data, product management, or the like.
- FIG. 4 is an example of the front image of the shelf S captured by the imaging device 2 .
- In conventional object detection, detection is performed using a rectangular or square detection frame.
- However, when the imaging device 2 is installed on the ceiling or a wall and images the shelf S from above, as shown in FIG. 4, the outer shape of the object P in the image is deformed from a rectangle. Therefore, in the object detection processing of the present embodiment, detection is performed after setting a detection frame suited to the deformation of the object in the image.
- FIG. 5 is a flowchart of object detection processing according to Embodiment 1.
- the first detection is performed by the first detection unit 12 based on the acquired image (S31).
- the object P is detected using the rectangular first detection frame F1.
- FIG. 6 is an example of detection results by the first detection unit 12 according to the first embodiment.
- a plurality of non-rectangular detection frames stored in the storage unit 30 are applied to each detected object P by the detection frame setting unit 13, and the reliability of each detection frame is obtained (S32).
- The shapes of the plurality of non-rectangular detection frames include parallelograms, trapezoids, circles, and ellipses, and each shape is provided in multiple sizes. The center coordinates of each non-rectangular detection frame are then aligned with the center coordinates of the rectangular first detection frame F1 containing the object P detected in the first detection, and the reliability of the non-rectangular detection frame is obtained.
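The alignment step above can be sketched as follows. This is a minimal illustration, not the patented implementation: the function names, the `(x1, y1, x2, y2)` box layout, and the use of the vertex centroid as the polygon's center are all assumptions for the sketch.

```python
def box_center(box):
    # box = (x1, y1, x2, y2): an axis-aligned rectangular first detection
    # frame F1 (illustrative layout; the patent does not specify one).
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def align_candidate(polygon, box):
    # Translate a candidate non-rectangular frame, given as a list of (x, y)
    # vertices, so its vertex centroid coincides with the center of F1.
    cx = sum(x for x, _ in polygon) / len(polygon)
    cy = sum(y for _, y in polygon) / len(polygon)
    bx, by = box_center(box)
    return [(x + bx - cx, y + by - cy) for x, y in polygon]
```

Each candidate shape and size from the storage unit would be aligned this way before its reliability is evaluated against the object.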
- The reliability is obtained from the size of the area of the object P within the detection frame (the area of the portion shared between the object P and the non-rectangular detection frame) or the size of the area within the detection frame other than the object P (the area of the portion not shared between the object P and the non-rectangular detection frame).
- The larger the area of the object P within the detection frame, the higher the reliability.
- The smaller the area within the detection frame other than the object P, the higher the reliability.
- The higher the ratio of the area of the object P to the area of the detection frame, the higher the reliability.
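The area-ratio reliability can be sketched with pixel masks. This is an illustrative formulation only: representing regions as sets of pixel coordinates, and using shared area divided by frame area, are assumptions consistent with (but not mandated by) the description.

```python
def reliability(object_mask, frame_mask):
    # object_mask / frame_mask: sets of (x, y) pixel coordinates covered by
    # the object P and by a candidate non-rectangular detection frame.
    # Reliability is taken here as the ratio of the shared area (object
    # pixels inside the frame) to the total frame area, so a frame that
    # tightly encloses the object scores close to 1.0.
    if not frame_mask:
        return 0.0
    return len(object_mask & frame_mask) / len(frame_mask)
```

A frame that perfectly matches the object yields 1.0; a frame twice the object's size yields 0.5.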
- Next, the detection frame setting unit 13 filters the plurality of non-rectangular detection frames (S33): detection frames whose reliability is lower than a preset threshold are excluded from the candidates. Then, the detection frame setting unit 13 acquires, for each object P, the center coordinates of the non-rectangular detection frames whose reliability is equal to or greater than the threshold (S34). Using the acquired center coordinates, the detection frame setting unit 13 performs contour detection of each object P in the image and acquires position information of the contour of the object P (S35). A known contour detection algorithm such as edge detection can be used for the contour detection of the object P.
- the contour detection of the object P is performed on each of the center coordinates of a plurality of non-rectangular detection frames.
- By excluding the detection frames with low reliability in step S33, it is possible to suppress the abnormal case in which the center coordinates of a non-rectangular detection frame fall outside the object P and the contour of the object P cannot be detected.
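Steps S33 and S34 can be sketched together. The tuple layout and the mask-centroid definition of "center coordinates" are assumptions for the sketch; the threshold value is the preset one mentioned in the description.

```python
def filter_and_centers(candidates, threshold):
    # candidates: list of (frame_mask, reliability) pairs for one object P,
    # where frame_mask is a set of (x, y) pixels covered by the frame.
    # Frames below the reliability threshold are excluded (S33); the center
    # coordinates of the survivors are collected (S34) for use as seed
    # points in the subsequent contour detection (S35).
    kept = [(m, r) for m, r in candidates if r >= threshold]
    centers = []
    for mask, _ in kept:
        xs = [x for x, _ in mask]
        ys = [y for _, y in mask]
        centers.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return kept, centers
```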
- the detection frame setting unit 13 compares the non-rectangular detection frame with the outline of the object P (S36). Then, a non-rectangular detection frame including all contours of the object P is set as the second detection frame F2 ( S37 ). Steps S36 and S37 are performed for each object P to set the second detection frame F2 corresponding to each object P.
- The shape and position of the set second detection frame F2 are stored in the storage unit 30. When there are multiple non-rectangular detection frames containing the entire contour of the object P, the detection frame with the highest reliability is taken as the second detection frame F2. As described above, the reliability is obtained from the ratio of the area of the object P to the area of the detection frame.
- When a plurality of such detection frames have the same highest reliability, any one of them is selected as the second detection frame F2.
- Alternatively, a duplication detection algorithm that calculates overlapping areas can be used: if the ratio of the overlapping area is equal to or greater than a threshold value, the frames are judged to be the same, and only one is retained. In this way, the best detection frame is selected for each object P.
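The overlap-based duplication check can be sketched as below. The description says only "ratio of overlapping areas", so the intersection-over-union denominator, the default threshold of 0.8, and the keep-highest-reliability-first order are all assumptions for this sketch.

```python
def overlap_ratio(mask_a, mask_b):
    # Ratio of the overlapping area to the union of the two frame areas
    # (an IoU-style measure; the exact denominator is an assumption).
    union = mask_a | mask_b
    return len(mask_a & mask_b) / len(union) if union else 0.0

def deduplicate(frames, threshold=0.8):
    # frames: list of (frame_mask, reliability). Frames whose overlap ratio
    # with an already-kept frame reaches the threshold are judged to be the
    # same and discarded; the highest-reliability frame survives.
    kept = []
    for mask, rel in sorted(frames, key=lambda f: -f[1]):
        if all(overlap_ratio(mask, km) < threshold for km, _ in kept):
            kept.append((mask, rel))
    return kept
```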
- FIG. 7 is an example of detection results by the second detection unit 14 according to the first embodiment.
- the object P can be detected using the second detection frame F2 along the outer shape of the object P by performing the object detection processing of the present embodiment.
- By the object detection processing of the present embodiment, even when the object P in the image is deformed by the influence of the imaging angle or the like, detection of only part of the object P and detection of regions other than the object P can be suppressed, improving the detection accuracy of the object P.
- In particular, for densely arranged objects P such as commodities stored on the shelf S, using a detection frame that follows the outer shape of the object improves not only the accuracy but also the detection speed.
- Furthermore, the detection frame setting unit 13 may cluster the plurality of non-rectangular detection frames filtered in step S33 by the position of the object P on the shelf S, detect as errors those detection frames whose inclination differs from that of the other detection frames in the same cluster, and exclude them from the candidates.
- Alternatively, the detection frame setting unit 13 may estimate the inclination of the object P from the position information of the imaging device 2, detect as an error a non-rectangular detection frame having an inclination different from the estimated inclination, and exclude it from the candidates. Furthermore, the detection frame setting unit 13 may, based on the shelf information, detect as an error a non-rectangular detection frame larger than the size of the shelf and exclude it from the candidates.
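These plausibility checks can be sketched as one filter. The tolerance of 5 degrees, the candidate tuple layout, and treating inclination as a single angle in degrees are all illustrative assumptions; the patent specifies neither values nor data structures.

```python
def plausibility_filter(candidates, estimated_deg, shelf_w, shelf_h, tol_deg=5.0):
    # candidates: list of (polygon, inclination_deg, width, height) tuples.
    # Drops frames whose inclination differs from the estimate derived from
    # the imaging device's position by more than tol_deg, or whose size
    # exceeds the shelf dimensions taken from the shelf information.
    return [
        c for c in candidates
        if abs(c[1] - estimated_deg) <= tol_deg
        and c[2] <= shelf_w and c[3] <= shelf_h
    ]
```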
- FIG. 8 is a flowchart of object information acquisition processing according to Embodiment 2.
- The configuration of the object information acquisition system 100 in this embodiment is the same as in Embodiment 1.
- initialization ( S1 ) and image acquisition ( S2 ) are performed in the same manner as in the first embodiment.
- the object detection unit 10 judges whether the current detection is a re-detection based on the detection history ( S11 ).
- the re-detection refers to a case where object detection has been performed on the shelf S in the past and the second detection frame F2 of the object on the shelf S is stored.
- If the current detection is the first detection (S11: No), the object detection processing (S3) and the object recognition processing (S4) are executed in the same manner as in Embodiment 1.
- If the current detection is a re-detection (S11: Yes), the difference between the image captured by the imaging device 2 during the previous detection and the image captured by the imaging device 2 this time is acquired (S12).
- Object detection processing (S3) and object recognition processing (S4) are then performed on the difference area. That is, in the present embodiment, detection and recognition of the object P are performed only for the area that changed since the previous detection, and the information on the objects P in the other areas is kept the same as last time.
- In the present embodiment, by performing object detection processing and object recognition processing only on the changed area, the processing load at the time of re-detection can be reduced and the processing speed improved.
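The difference step (S12) can be sketched as simple per-pixel differencing. The grayscale 2-D-list representation and the change tolerance `tol` are assumptions for the sketch; a practical system would likely also denoise and group the changed pixels into regions.

```python
def changed_pixels(prev, curr, tol=10):
    # prev / curr: equally sized 2-D lists of grayscale values from the
    # previous and the current shot of the same shelf. Returns the set of
    # (x, y) coordinates whose value changed by more than tol; detection
    # and recognition are then re-run only on this difference region.
    diff = set()
    for y, (row_p, row_c) in enumerate(zip(prev, curr)):
        for x, (p, c) in enumerate(zip(row_p, row_c)):
            if abs(p - c) > tol:
                diff.add((x, y))
    return diff
```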
- In the above embodiments, the second detection frame F2 is set by the detection frame setting unit 13 performing the processing of steps S32 to S36 in FIG. 5, but the present invention is not limited thereto.
- the detection frame setting unit 13 may estimate the inclination of the object P from the position information of the imaging device 2, and set a non-rectangular detection frame having the estimated inclination as the second detection frame F2.
- the detection frame setting unit 13 may set the detection frame with the highest reliability obtained in step S32 of FIG. 5 as the second detection frame F2 .
- Alternatively, the detection frame setting unit 13 may detect the contour of the object P detected by the first detection unit 12, compare the plurality of non-rectangular detection frames stored in the storage unit 30 with the contour of the object P, and set a detection frame containing the entire contour of the object P as the second detection frame F2.
- the object detection unit 10 may select the rectangular first detection frame F1 as one of the candidates for the second detection frame F2, and perform the processing of steps S32 to S36 in FIG. 5 .
- In that case, the object detection unit 10 may use the first detection frame F1 itself as the second detection frame F2.
- the object detection unit 10 may select a rectangular detection frame different in size from the first detection frame F1 as a candidate for the second detection frame F2 and perform the processing of steps S32 to S36 in FIG. 5 .
- The above embodiments detect objects P, namely products, from an image of a shelf S in a retail store, but the invention is not limited thereto and can be applied to any method of detecting objects from an image containing a plurality of objects.
- In the above embodiments, the processing device 1 includes both the object detection unit 10 and the object recognition unit 20, but an object detection device having the object detection unit 10 and an object recognition device having the object recognition unit 20 may be configured as separate devices.
- 1: processing device, 2: imaging device, 10: object detection unit, 11: image acquisition unit, 12: first detection unit, 13: detection frame setting unit, 14: second detection unit, 20: object recognition unit, 30: storage unit, 100: object information acquisition system, F1: first detection frame, F2: second detection frame, P: object, S: shelf.
Claims (11)
- 1. An object detection method, comprising: an image acquisition step of acquiring an image containing an object; a first detection step of detecting the object in the image using a rectangular first detection frame; a detection frame setting step of setting a non-rectangular second detection frame corresponding to the detected object; and a second detection step of detecting the object using the second detection frame.
- 2. The object detection method according to claim 1, wherein the detection frame setting step includes: a step of applying a plurality of non-rectangular detection frames to the object detected in the first detection step; a step of acquiring the reliability of each of the plurality of non-rectangular detection frames; and a step of excluding, from candidates for the second detection frame, those of the plurality of non-rectangular detection frames whose reliability is lower than a preset threshold.
- 3. The object detection method according to claim 2, wherein the detection frame setting step includes: a step of acquiring the center coordinates of the detection frames whose reliability is equal to or greater than the threshold; a step of detecting the contour of the object using the center coordinates; and a step of comparing the detection frames with the contour of the object and setting a detection frame containing the entire contour of the object as the second detection frame.
- 4. The object detection method according to claim 2, wherein the detection frame setting step includes a step of clustering the plurality of non-rectangular detection frames applied to the object at the position of the object and excluding, from candidates for the second detection frame, detection frames whose inclination differs from that of the other detection frames in the same cluster.
- 5. The object detection method according to claim 2, wherein the detection frame setting step includes a step of estimating the inclination of the object from position information of an imaging device that captured the image and excluding, from candidates for the second detection frame, non-rectangular detection frames having an inclination different from the estimated inclination.
- 6. The object detection method according to claim 1, wherein the detection frame setting step includes: a step of detecting the contour of the object; and a step of comparing a plurality of non-rectangular detection frames with the contour of the object and setting a detection frame containing the entire contour of the object as the second detection frame.
- 7. The object detection method according to claim 1, wherein the detection frame setting step includes a step of estimating the inclination of the object from position information of an imaging device that captured the image and setting a non-rectangular detection frame having the estimated inclination as the second detection frame.
- 8. The object detection method according to claim 1, wherein the detection frame setting step includes: a step of applying a plurality of non-rectangular detection frames to the object detected in the first detection step; a step of acquiring the reliability of each of the plurality of non-rectangular detection frames; and a step of setting the detection frame with the highest reliability as the second detection frame.
- 9. The object detection method according to any one of claims 1 to 8, further comprising a step of determining whether the current detection is a re-detection, wherein, in the case of a re-detection, the first detection step, the detection frame setting step, and the second detection step are performed on the difference region between the previous image and the current image.
- 10. The object detection method according to any one of claims 1 to 9, wherein the second detection frame is a parallelogram or a trapezoid.
- 11. An object detection device, comprising: an image acquisition unit that acquires an image containing an object; a first detection unit that detects the object in the image using a rectangular first detection frame; a detection frame setting unit that sets a non-rectangular second detection frame corresponding to the detected object; and a second detection unit that detects the object using the second detection frame.
Priority Applications (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202180099742.2A (CN117616468A) | 2021-06-25 | 2021-06-25 | Object detection method and object detection device |
| JP2023579455A (JP2024522881A) | 2021-06-25 | 2021-06-25 | Object detection method and object detection device |
| PCT/CN2021/102347 (WO2022266996A1) | 2021-06-25 | 2021-06-25 | Object detection method and object detection device |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2021/102347 (WO2022266996A1) | 2021-06-25 | 2021-06-25 | Object detection method and object detection device |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2022266996A1 | 2022-12-29 |
Family
ID=84544000
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2021/102347 (WO2022266996A1) | Object detection method and object detection device | 2021-06-25 | 2021-06-25 |
Country Status (3)
| Country | Link |
|---|---|
| JP | JP2024522881A |
| CN | CN117616468A |
| WO | WO2022266996A1 |
Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN1794267A | 2005-12-29 | 2006-06-28 | 兆日科技(深圳)有限公司 | Method for locating rectangular frames in anti-counterfeiting information recognition |
| CN103065163A | 2013-02-04 | 2013-04-24 | 成都索贝数码科技股份有限公司 | Fast target detection and recognition system and method based on static pictures |
| JP2016157258A | 2015-02-24 | 2016-09-01 | KDDI株式会社 | Person region detection device, method, and program |
| CN108960174A | 2018-07-12 | 2018-12-07 | 广东工业大学 | Target detection result optimization method and device |
| CN109657681A | 2018-12-28 | 2019-04-19 | 北京旷视科技有限公司 | Image annotation method, device, electronic device, and computer-readable storage medium |
| CN110334752A | 2019-06-26 | 2019-10-15 | 电子科技大学 | Irregularly shaped object detection method based on trapezoidal convolution |
| CN111598091A | 2020-05-20 | 2020-08-28 | 北京字节跳动网络技术有限公司 | Image recognition method, device, electronic device, and computer-readable storage medium |
| CN112183529A | 2020-09-23 | 2021-01-05 | 创新奇智(北京)科技有限公司 | Quadrilateral object detection and model training method, device, equipment, and storage medium |
Non-Patent Citations (1)
- Xu Yongchao; Fu Mingtao; Wang Qimeng; Wang Yukang; Chen Kai; Xia Gui-Song; Bai Xiang: "Gliding Vertex on the Horizontal Bounding Box for Multi-Oriented Object Detection", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, no. 4, 18 February 2020, pages 1452-1459, ISSN 0162-8828, DOI 10.1109/TPAMI.2020.2974745.
Also Published As
| Publication number | Publication date |
|---|---|
| CN117616468A | 2024-02-27 |
| JP2024522881A | 2024-06-21 |
Similar Documents
- CN108985199B: Detection method, device and storage medium for commodity pick-and-place operations
- US10212324B2: Position detection device, position detection method, and storage medium
- CN108549870B: Method and device for identifying article display
- US20170053409A1: Information processing apparatus, information processing method and program
- US10217083B2: Apparatus, method, and program for managing articles
- EP3563345B1: Automatic detection, counting, and measurement of lumber boards using a handheld device
- US20070076922A1: Object detection
- US10074029B2: Image processing system, image processing method, and storage medium for correcting color
- JP2018048024A: Article management device
- GB2430736A: Image processing
- JP2017004505A: Method and system for planogram compliance checking based on visual analysis
- Rosado et al.: Supervised learning for Out-of-Stock detection in panoramas of retail shelves
- JP2015041164A: Image processing apparatus, image processing method, and program
- EP3214604B1: Orientation estimation method and orientation estimation device
- JP2016201105A: Information processing apparatus and information processing method
- JPWO2018179361A1: Image processing device, image processing method, and program
- EP3404513A1: Information processing apparatus, method, and program
- Saran et al.: Robust visual analysis for planogram compliance problem
- US20150116543A1: Information processing apparatus, information processing method, and storage medium
- CN113221617A: Distinguishing persons in a crowd in an image
- US20200005492A1: Image processing device, image processing method, and recording medium
- JP6769554B2: Object identification device, object identification method, computing device, system, and recording medium
- WO2015079054A1: Estimating gaze from un-calibrated eye measurement points
- JP6244960B2: Object recognition device, object recognition method, and object recognition program
- CN111832381B: Object information registration device and object information registration method
Legal Events
| Code | Title | Details |
|---|---|---|
| 121 | EP: the EPO has been informed by WIPO that EP was designated in this application | Ref document: 21946490; Country: EP; Kind code: A1 |
| ENP | Entry into the national phase | Ref document: 2023579455; Country: JP; Kind code: A |
| WWE | WIPO information: entry into national phase | Ref document: 202180099742.2; Country: CN |
| NENP | Non-entry into the national phase | Country: DE |
| 122 | EP: PCT application non-entry in European phase | Ref document: 21946490; Country: EP; Kind code: A1 |