WO2021012268A1 - Shelf edge positioning method and device - Google Patents

Shelf edge positioning method and device (货架边缘定位方法及装置)

Info

Publication number
WO2021012268A1
Authority
WO
WIPO (PCT)
Prior art keywords
shelf
edge
shelf edge
pictures
deep learning
Prior art date
Application number
PCT/CN2019/097720
Other languages
English (en)
French (fr)
Inventor
鲜霞
苏汛沅
Original Assignee
浙江汉朔电子科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 浙江汉朔电子科技有限公司 filed Critical 浙江汉朔电子科技有限公司
Priority to DE112019007564.0T priority Critical patent/DE112019007564T5/de
Priority to PCT/CN2019/097720 priority patent/WO2021012268A1/zh
Publication of WO2021012268A1 publication Critical patent/WO2021012268A1/zh


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/469Contour-based spatial representations, e.g. vector-coding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Definitions

  • The invention relates to the Internet field, and in particular to a method and device for positioning a shelf edge.
  • Shelf positioning, that is, shelf edge detection, is the first step in detection operations such as checking displays for out-of-stock items; shelf edge detection is therefore a key step in building smart stores.
  • The marker-based method cannot guarantee that the shelf edge detection markers will not be moved, so the accuracy of shelf edge positioning is low, and the added markers affect the appearance of the shelves. Existing methods therefore suffer from low efficiency and low accuracy when positioning shelf edges.
  • An embodiment of the present invention provides a shelf edge positioning method for positioning a shelf edge with high efficiency, low cost and high accuracy.
  • The method includes:
  • An embodiment of the present invention provides a shelf edge positioning device for positioning a shelf edge with high efficiency, low cost and high accuracy.
  • The device includes:
  • a picture acquisition module, configured to acquire multiple shelf edge pictures;
  • an edge type determination module, configured to input the multiple shelf edge pictures into a deep learning model to determine the edge type corresponding to each of the multiple shelf pictures,
  • where the deep learning model is obtained by training on historical shelf edge pictures and is used to determine the edge type corresponding to a shelf picture;
  • a shelf edge line determination module, configured to determine multiple shelf edge lines according to the multiple shelf edge pictures and the corresponding edge types; and
  • a shelf vertex coordinate determination module, configured to determine shelf vertex coordinates according to the multiple shelf edge lines.
  • An embodiment of the present invention further provides a computer device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the processor implements the foregoing shelf edge positioning method when executing the computer program.
  • An embodiment of the present invention further provides a computer-readable storage medium that stores a computer program for executing the foregoing shelf edge positioning method.
  • Multiple shelf edge pictures are obtained; the multiple shelf edge pictures are input into a deep learning model to determine the edge type corresponding to each of the multiple shelf pictures, where the deep learning model is obtained by training on historical shelf edge pictures and is used to determine the edge type corresponding to a shelf picture; multiple shelf edge lines are determined according to the multiple shelf edge pictures and the corresponding edge types; and shelf vertex coordinates are determined according to the multiple shelf edge lines.
  • Figure 1 is a flowchart of a shelf edge positioning method in an embodiment of the present invention;
  • Figure 2 is a schematic diagram of shelf edge types;
  • Figures 3 to 6 are schematic diagrams of determining shelf vertex coordinates in an embodiment of the present invention;
  • Figure 7 is a detailed flowchart of a shelf edge positioning method according to an embodiment of the present invention;
  • Figure 8 is a schematic diagram of a shelf edge positioning device provided by an embodiment of the present invention.
  • Figure 1 is a flowchart of a shelf edge positioning method in an embodiment of the present invention. As shown in Figure 1, the method includes:
  • Step 101: obtain multiple shelf edge pictures;
  • Step 102: input the multiple shelf edge pictures into a deep learning model to determine the edge type corresponding to each of the multiple shelf pictures,
  • where the deep learning model is obtained by training on historical shelf edge pictures and is used to determine the edge type corresponding to a shelf picture;
  • Step 103: determine multiple shelf edge lines according to the multiple shelf edge pictures and the corresponding edge types;
  • Step 104: determine the shelf vertex coordinates according to the multiple shelf edge lines.
  • Multiple shelf edge pictures are obtained; the multiple shelf edge pictures are input into a deep learning model to determine the edge type corresponding to each of the multiple shelf pictures, where the deep learning model is obtained by training on historical shelf edge pictures and is used to determine the edge type corresponding to a shelf picture; multiple shelf edge lines are determined according to the multiple shelf edge pictures and the corresponding edge types; and shelf vertex coordinates are determined according to the multiple shelf edge lines.
  • Figure 2 is a schematic diagram of the shelf edge types.
  • The edge types generally include the left edge type (the angle A between the shelf edge line and the horizontal direction of the shelf edge picture is greater than 90 degrees), the right edge type (the angle A between the shelf edge line and the horizontal direction of the shelf edge picture is less than 90 degrees), and the vertical type (the angle A between the shelf edge line and the horizontal direction of the shelf edge picture is equal to 90 degrees).
  • Finally, from the shelf edge lines 1, the shelf vertex coordinates 2 can be determined.
  • The deep learning model can be obtained in a variety of ways; one of them is given below.
  • The deep learning model is obtained by training in the following manner:
  • the parameters of the deep learning model are adjusted during training until the loss function of the deep learning model meets a preset convergence condition, yielding the trained deep learning model.
  • Determining multiple shelf edge lines according to the multiple shelf edge pictures and the corresponding edge types includes:
  • for each shelf edge picture, determining the region in which the shelf edge line in that shelf edge picture is located; and
  • determining the shelf edge line from the region in which it is located, according to the edge type corresponding to that shelf edge picture.
  • A shelf edge picture generally covers the region where a shelf edge line is located, so the region containing the shelf edge line can be parsed from the photo. Then, according to the edge type corresponding to that shelf edge picture, for example the left edge type, the edge points of each shelf layer can be found and connected following the characteristics of that edge type to form the shelf edge line.
  • Determining the shelf vertex coordinates according to the multiple shelf edge lines includes: determining the minimum and maximum of the vertex ordinates of the multiple shelf edge lines; for each shelf edge line, obtaining the slope and offset of that shelf edge line; and,
  • according to the slope and offset of that shelf edge line and the maximum and minimum ordinates, determining the shelf vertex coordinates corresponding to that shelf edge line.
  • Figures 3 to 6 are schematic diagrams of determining the shelf vertex coordinates in an embodiment of the present invention.
  • Figure 3 is a photo of a shelf.
  • Figure 4 shows, based on that shelf photo, the regions containing the shelf edge lines marked by rectangular boxes and the two edge types (left edge type and right edge type), from which the shelf edge lines represented by the two dashed lines are determined.
  • Finally, A'(x5, y5) and B'(x6, y6) can be obtained, so that A'(x5, y5), B'(x6, y6), C(x3, y3) and D(x4, y4) are the four shelf vertex coordinates, as shown in Figure 6.
  • The shelf edge positioning method further includes:
  • determining the shelf area according to the shelf vertex coordinates.
  • That is, the four shelf vertex coordinates are connected, and the enclosed region is the shelf area.
  • Figure 7 is a detailed flowchart of the shelf edge positioning method proposed by an embodiment of the present invention. As shown in Figure 7, in one embodiment, the detailed flow of the shelf edge positioning method includes:
  • Step 701: obtain multiple shelf edge pictures;
  • Step 702: obtain historical shelf edge pictures;
  • Step 703: extract feature vectors from the historical shelf edge pictures;
  • Step 704: train a deep learning model using the feature vectors;
  • Step 705: adjust the parameters of the deep learning model during training until the loss function of the deep learning model meets a preset convergence condition, to obtain the trained deep learning model;
  • Step 706: input the multiple shelf edge pictures into the deep learning model to determine the edge type corresponding to each of the multiple shelf pictures;
  • Step 707: for each shelf edge picture, determine the region in which the shelf edge line in that shelf edge picture is located;
  • Step 708: determine the shelf edge line from the region in which it is located, according to the edge type corresponding to that shelf edge picture;
  • Step 709: determine the minimum and maximum of the vertex ordinates of the multiple shelf edge lines;
  • Step 710: for each shelf edge line, obtain the slope and offset of that shelf edge line;
  • Step 711: determine, according to the slope and offset of that shelf edge line and the maximum and minimum ordinates, the shelf vertex coordinates corresponding to that shelf edge line.
  • Multiple shelf edge pictures are obtained; the multiple shelf edge pictures are input into a deep learning model to determine the edge type corresponding to each of the multiple shelf pictures, where the deep learning model is obtained by training on historical shelf edge pictures and is used to determine the edge type corresponding to a shelf picture; multiple shelf edge lines are determined according to the multiple shelf edge pictures and the corresponding edge types; and the shelf vertex coordinates are determined according to the multiple shelf edge lines.
  • Embodiments of the present invention also provide a shelf edge positioning device, as described in the following embodiments. Since the principles by which the device solves the problem are similar to those of the shelf edge positioning method, the implementation of the device may refer to the implementation of the method, and repeated descriptions are omitted.
  • Figure 8 is a schematic diagram of a shelf edge positioning device provided by an embodiment of the present invention. As shown in Figure 8, the device includes:
  • a picture obtaining module 801, configured to obtain multiple shelf edge pictures;
  • an edge type determination module 802, configured to input the multiple shelf edge pictures into a deep learning model to determine the edge type corresponding to each of the multiple shelf pictures,
  • where the deep learning model is obtained by training on historical shelf edge pictures and is used to determine the edge type corresponding to a shelf picture;
  • a shelf edge line determination module 803, configured to determine multiple shelf edge lines according to the multiple shelf edge pictures and the corresponding edge types; and
  • a shelf vertex coordinate determination module 804, configured to determine the shelf vertex coordinates according to the multiple shelf edge lines.
  • The deep learning model is obtained by training in the following manner:
  • the parameters of the deep learning model are adjusted during training until the loss function of the deep learning model meets a preset convergence condition, yielding the trained deep learning model.
  • The shelf edge line determination module 803 is specifically configured to:
  • for each shelf edge picture, determine the region in which the shelf edge line in that shelf edge picture is located; and
  • determine the shelf edge line from the region in which it is located, according to the edge type corresponding to that shelf edge picture.
  • The shelf vertex coordinate determination module 804 is specifically configured to:
  • determine, according to the slope and offset of each shelf edge line and the minimum and maximum vertex ordinates, the shelf vertex coordinates corresponding to that shelf edge line.
  • The shelf edge positioning device further includes a shelf area determination module 805, configured to determine the shelf area according to the shelf vertex coordinates.
  • Multiple shelf edge pictures are obtained; the multiple shelf edge pictures are input into the deep learning model to determine the edge type corresponding to each of the multiple shelf pictures, where the deep learning model is obtained by training on historical shelf edge pictures and is used to determine the edge type corresponding to a shelf picture; multiple shelf edge lines are determined according to the multiple shelf edge pictures and the corresponding edge types; and the shelf vertex coordinates are determined according to the multiple shelf edge lines.
  • The embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device,
  • where the instruction device implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing equipment, so that a series of operation steps are executed on the computer or other programmable equipment to produce computer-implemented processing, whereby
  • the instructions executed on the computer or other programmable equipment provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.

Abstract

A shelf edge positioning method and device, a computer device and a computer-readable storage medium. The method includes: obtaining multiple shelf edge pictures; inputting the multiple shelf edge pictures into a deep learning model to determine the edge type corresponding to each of the multiple shelf pictures, where the deep learning model is obtained by training on historical shelf edge pictures and is used to determine the edge type corresponding to a shelf picture; determining multiple shelf edge lines (1) according to the multiple shelf edge pictures and the corresponding edge types; and determining shelf vertex coordinates (2) according to the multiple shelf edge lines (1). The method can position shelf edges with high efficiency, low cost and high accuracy.

Description

Shelf edge positioning method and device
Technical Field
The present invention relates to the Internet field, and in particular to a shelf edge positioning method and device.
Background Art
With the rapid development of computer technology, enterprises rely on the Internet and use advanced technologies such as big data and artificial intelligence to upgrade the production, circulation and sale of goods, thereby reshaping the structure of the industry and its ecosystem and deeply integrating online services, offline experience and modern logistics; the new retail model will be the development trend of the retail industry. How to build smart stores and manage supermarket goods intelligently, efficiently and conveniently is crucial to the development of new retail.
Supermarket goods are usually organized with the shelf as the basic unit. Shelf positioning, that is, shelf edge detection, is the first step of detection operations such as checking displays for out-of-stock items, so shelf edge detection is a key step in building smart stores. Two shelf edge detection methods are currently in common use. The first is traditional manual annotation, in which surveyors annotate the shelves one by one by hand; although accurate, this method involves a heavy, time-consuming workload, so it is costly, inefficient, and hard to maintain and update later. The second is the marker-based method, which attaches markers to the shelf edges to strengthen the shelf edge features before performing shelf edge detection; this method requires adding markers to all shelves at deployment time, consuming a great deal of labor, material and time. For large supermarkets with many people moving about, the marker-based method cannot guarantee that the shelf edge detection markers will not be moved, so the accuracy of shelf edge positioning is low, and the added markers also affect the appearance of the shelves. Existing methods therefore suffer from low efficiency and low accuracy when positioning shelf edges.
Summary of the Invention
An embodiment of the present invention provides a shelf edge positioning method for positioning a shelf edge with high efficiency, low cost and high accuracy. The method includes:
obtaining multiple shelf edge pictures;
inputting the multiple shelf edge pictures into a deep learning model to determine the edge type corresponding to each of the multiple shelf pictures, where the deep learning model is obtained by training on historical shelf edge pictures and is used to determine the edge type corresponding to a shelf picture;
determining multiple shelf edge lines according to the multiple shelf edge pictures and the corresponding edge types; and
determining shelf vertex coordinates according to the multiple shelf edge lines.
An embodiment of the present invention provides a shelf edge positioning device for positioning a shelf edge with high efficiency, low cost and high accuracy. The device includes:
a picture obtaining module, configured to obtain multiple shelf edge pictures;
an edge type determination module, configured to input the multiple shelf edge pictures into a deep learning model to determine the edge type corresponding to each of the multiple shelf pictures, where the deep learning model is obtained by training on historical shelf edge pictures and is used to determine the edge type corresponding to a shelf picture;
a shelf edge line determination module, configured to determine multiple shelf edge lines according to the multiple shelf edge pictures and the corresponding edge types; and
a shelf vertex coordinate determination module, configured to determine shelf vertex coordinates according to the multiple shelf edge lines.
An embodiment of the present invention further provides a computer device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the processor implements the above shelf edge positioning method when executing the computer program.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program for executing the above shelf edge positioning method.
In the embodiments of the present invention, multiple shelf edge pictures are obtained; the multiple shelf edge pictures are input into a deep learning model to determine the edge type corresponding to each of the multiple shelf pictures, where the deep learning model is obtained by training on historical shelf edge pictures and is used to determine the edge type corresponding to a shelf picture; multiple shelf edge lines are determined according to the multiple shelf edge pictures and the corresponding edge types; and shelf vertex coordinates are determined according to the multiple shelf edge lines. In this process, only the multiple shelf edge pictures need to be obtained and input into the deep learning model, and the shelf vertex coordinates can then be determined automatically to position the shelf edges, which is highly efficient; the process requires no manual operation, so the cost is low; and no markers need to be added to the shelves, so there is no effect from markers being moved and the positioning accuracy is high.
Brief Description of the Drawings
The drawings described here are provided for a further understanding of the present invention and form a part of this application; they do not limit the present invention. In the drawings:
Figure 1 is a flowchart of a shelf edge positioning method in an embodiment of the present invention;
Figure 2 is a schematic diagram of shelf edge types;
Figures 3 to 6 are schematic diagrams of determining shelf vertex coordinates in an embodiment of the present invention;
Figure 7 is a detailed flowchart of a shelf edge positioning method proposed by an embodiment of the present invention;
Figure 8 is a schematic diagram of a shelf edge positioning device proposed by an embodiment of the present invention.
Detailed Description of the Embodiments
To make the objectives, technical solutions and advantages of the embodiments of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the drawings. The illustrative embodiments of the present invention and their descriptions are used to explain the present invention and do not limit it.
Figure 1 is a flowchart of a shelf edge positioning method in an embodiment of the present invention. As shown in Figure 1, the method includes:
Step 101: obtain multiple shelf edge pictures;
Step 102: input the multiple shelf edge pictures into a deep learning model to determine the edge type corresponding to each of the multiple shelf pictures, where the deep learning model is obtained by training on historical shelf edge pictures and is used to determine the edge type corresponding to a shelf picture;
Step 103: determine multiple shelf edge lines according to the multiple shelf edge pictures and the corresponding edge types;
Step 104: determine shelf vertex coordinates according to the multiple shelf edge lines.
In the embodiments of the present invention, multiple shelf edge pictures are obtained; the multiple shelf edge pictures are input into a deep learning model to determine the edge type corresponding to each of the multiple shelf pictures, where the deep learning model is obtained by training on historical shelf edge pictures and is used to determine the edge type corresponding to a shelf picture; multiple shelf edge lines are determined according to the multiple shelf edge pictures and the corresponding edge types; and shelf vertex coordinates are determined according to the multiple shelf edge lines. In this process, only the multiple shelf edge pictures need to be obtained and input into the deep learning model, and the shelf vertex coordinates can then be determined automatically to position the shelf edges, which is highly efficient; the process requires no manual operation, so the cost is low; and no markers need to be added to the shelves, so there is no effect from markers being moved and the positioning accuracy is high.
In a specific implementation, the multiple shelf edge pictures may be captured by cameras and then transmitted. There are many deep learning models; the most common ones include convolutional neural network models and recurrent neural network models, which can be chosen according to the actual situation. The edge type of a shelf is determined based on the angle from which the camera photographs the shelf. Figure 2 is a schematic diagram of shelf edge types. The edge types generally include the left edge type (the angle A between the shelf edge line and the horizontal direction of the shelf edge picture is greater than 90 degrees), the right edge type (the angle A between the shelf edge line and the horizontal direction of the shelf edge picture is less than 90 degrees), and the vertical type (the angle A between the shelf edge line and the horizontal direction of the shelf edge picture is equal to 90 degrees). According to the multiple shelf edge pictures and the corresponding edge types, that is, for each shelf edge picture, the shelf edge line 1 is found in the picture; finally, from the shelf edge lines 1, the shelf vertex coordinates 2 can be determined.
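For illustration only, the following is a minimal sketch of such an edge-type classifier, assuming a small PyTorch convolutional neural network with three output classes; the layer sizes, class names and input resolution are illustrative assumptions and are not specified by this application.

```python
# Hypothetical three-class CNN for left / right / vertical shelf edge types.
import torch
import torch.nn as nn

EDGE_TYPES = ["left", "right", "vertical"]  # angle A > 90, < 90, == 90 degrees

class EdgeTypeNet(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: batch of shelf edge pictures, shape (N, 3, H, W)
        feats = self.features(x).flatten(1)
        return self.classifier(feats)

model = EdgeTypeNet()
pictures = torch.randn(4, 3, 224, 224)          # placeholder batch of pictures
edge_type_ids = model(pictures).argmax(dim=1)   # predicted edge type per picture
```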
In a specific implementation, the deep learning model can be obtained in several ways; one of them is given below.
In one embodiment, the deep learning model is obtained by training in the following manner:
obtaining historical shelf edge pictures;
extracting feature vectors from the historical shelf edge pictures;
training the deep learning model using the feature vectors; and
adjusting the parameters of the deep learning model during training until the loss function of the deep learning model meets a preset convergence condition, to obtain the trained deep learning model.
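A minimal training sketch for the procedure above is shown below, assuming PyTorch; the loss threshold, optimizer and epoch limit are illustrative assumptions rather than values fixed by this application.

```python
# Train until the loss meets a preset convergence condition (assumed threshold).
import torch
import torch.nn as nn

def train_until_converged(model, loader, loss_threshold=0.05, max_epochs=100):
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    for epoch in range(max_epochs):
        epoch_loss = 0.0
        for pictures, edge_labels in loader:    # historical shelf edge pictures
            optimizer.zero_grad()
            loss = criterion(model(pictures), edge_labels)
            loss.backward()
            optimizer.step()                    # adjust the model parameters
            epoch_loss += loss.item()
        epoch_loss /= max(len(loader), 1)
        if epoch_loss < loss_threshold:         # preset convergence condition
            break
    return model
```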
In a specific implementation, there are several ways to determine the multiple shelf edge lines from the multiple shelf edge pictures and the corresponding edge types; one embodiment is given below.
In one embodiment, determining multiple shelf edge lines according to the multiple shelf edge pictures and the corresponding edge types includes:
for each shelf edge picture, determining the region in which the shelf edge line in that shelf edge picture is located; and
determining the shelf edge line from the region in which it is located, according to the edge type corresponding to that shelf edge picture.
In the above embodiment, a shelf edge picture generally covers the region where a shelf edge line is located, so the region containing the shelf edge line can be parsed from the photo. Then, according to the edge type corresponding to that shelf edge picture, for example the left edge type, the edge points of each shelf layer can be found and connected following the characteristics of that edge type to form the shelf edge line.
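The following sketch illustrates one way such an edge line could be extracted from its region, assuming OpenCV and NumPy; the Canny thresholds and the per-row point selection rule are illustrative assumptions, not the concrete procedure of this application.

```python
# Pick edge points row by row in the region and fit a straight line to them.
import cv2
import numpy as np

def fit_edge_line(region: np.ndarray, edge_type: str):
    """region: BGR crop containing the shelf edge line; edge_type: "left",
    "right" or "vertical". Returns (c0, c1) with x ~= c0 * y + c1, or None."""
    gray = cv2.cvtColor(region, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    ys, xs = np.nonzero(edges)
    if xs.size < 2:
        return None
    points = []
    for y in np.unique(ys):
        row_xs = xs[ys == y]
        # For a left edge type keep the leftmost edge point per row,
        # otherwise (right or vertical) keep the rightmost one.
        x = row_xs.min() if edge_type == "left" else row_xs.max()
        points.append((x, y))
    pts = np.array(points, dtype=np.float32)
    # Least-squares fit of x as a function of y: x = c0 * y + c1.
    coeffs = np.polyfit(pts[:, 1], pts[:, 0], deg=1)
    return coeffs
```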
In a specific implementation, there are several ways to determine the shelf vertex coordinates from the multiple shelf edge lines; one embodiment is given below.
In one embodiment, determining the shelf vertex coordinates according to the multiple shelf edge lines includes:
determining the minimum and maximum of the vertex ordinates of the multiple shelf edge lines;
for each shelf edge line, obtaining the slope and offset of that shelf edge line; and
determining, according to the slope and offset of that shelf edge line and the maximum and minimum ordinates, the shelf vertex coordinates corresponding to that shelf edge line.
Figures 3 to 6 are schematic diagrams of determining the shelf vertex coordinates in an embodiment of the present invention. First, Figure 3 is a photo of a shelf. Figure 4 shows, based on that shelf photo, the regions containing the shelf edge lines marked by rectangular boxes and the two edge types (left edge type and right edge type), from which the shelf edge lines represented by two dashed lines are determined. In Figure 5, the vertex coordinates A(x1, y1), B(x2, y2), C(x3, y3) and D(x4, y4) of the two dashed lines are determined; from the coordinates A, B, C and D, the minimum and maximum of the vertex ordinates of the two shelf edge lines are determined, namely the ordinate of C and the ordinate of D. Then, for the shelf edge line represented by coordinates A and B, the slope k and offset b of that shelf edge line are determined; that shelf edge line is extended, and from its slope k and offset b together with the ordinate of C and the ordinate of D, the shelf vertex coordinates A'(x5, y5) and B'(x6, y6) corresponding to that shelf edge line are determined, specifically using the following formulas:
y5 = y3, x5 = (y5 / k) + b
y6 = y4, x6 = (y6 / k) + b
Finally, A'(x5, y5), B'(x6, y6), C(x3, y3) and D(x4, y4) are obtained as the four shelf vertex coordinates, as shown in Figure 6.
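The vertex computation above can be illustrated with the following sketch, which follows the formulas as written and therefore assumes the offset b is taken along the x-axis, that is, the edge line satisfies x = y / k + b for slope k and offset b; the numeric values in the usage example are placeholders.

```python
# Extend an edge line to the shared minimum and maximum vertex ordinates.
def extend_edge_line(k: float, b: float, y_min: float, y_max: float):
    """Return the two shelf vertex coordinates where the edge line reaches
    the minimum and maximum vertex ordinates."""
    x_at_min = y_min / k + b    # x5 = (y5 / k) + b with y5 = y3
    x_at_max = y_max / k + b    # x6 = (y6 / k) + b with y6 = y4
    return (x_at_min, y_min), (x_at_max, y_max)

# Illustrative usage: C = (x3, y3) and D = (x4, y4) supply the min/max
# ordinates; A' and B' complete the four shelf vertices (values are made up).
A_prime, B_prime = extend_edge_line(k=2.0, b=10.0, y_min=40.0, y_max=400.0)
```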
In one embodiment, the shelf edge positioning method further includes:
determining the shelf area according to the shelf vertex coordinates.
In the above embodiment, that is, the four shelf vertex coordinates are connected, and the enclosed region is the shelf area.
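As an illustration, the sketch below forms such a shelf region as a binary mask, assuming OpenCV; connecting the four vertices in boundary order is an assumption of the example, not a requirement stated by the application.

```python
# Fill the quadrilateral defined by the four shelf vertices into a mask.
import cv2
import numpy as np

def shelf_region_mask(vertices, image_shape):
    """vertices: four (x, y) shelf vertex coordinates in boundary order;
    image_shape: shape of the shelf picture the mask should cover."""
    mask = np.zeros(image_shape[:2], dtype=np.uint8)
    polygon = np.array(vertices, dtype=np.int32).reshape(-1, 1, 2)
    cv2.fillPoly(mask, [polygon], 255)   # the filled region is the shelf area
    return mask
```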
Based on the above embodiments, the present invention proposes the following embodiment to describe the detailed flow of the shelf edge positioning method. Figure 7 is a detailed flowchart of the shelf edge positioning method proposed by an embodiment of the present invention. As shown in Figure 7, in one embodiment, the detailed flow of the shelf edge positioning method includes:
Step 701: obtain multiple shelf edge pictures;
Step 702: obtain historical shelf edge pictures;
Step 703: extract feature vectors from the historical shelf edge pictures;
Step 704: train a deep learning model using the feature vectors;
Step 705: adjust the parameters of the deep learning model during training until the loss function of the deep learning model meets a preset convergence condition, to obtain the trained deep learning model;
Step 706: input the multiple shelf edge pictures into the deep learning model to determine the edge type corresponding to each of the multiple shelf pictures;
Step 707: for each shelf edge picture, determine the region in which the shelf edge line in that shelf edge picture is located;
Step 708: determine the shelf edge line from the region in which it is located, according to the edge type corresponding to that shelf edge picture;
Step 709: determine the minimum and maximum of the vertex ordinates of the multiple shelf edge lines;
Step 710: for each shelf edge line, obtain the slope and offset of that shelf edge line;
Step 711: determine, according to the slope and offset of that shelf edge line and the maximum and minimum ordinates, the shelf vertex coordinates corresponding to that shelf edge line.
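For illustration, the sketch below strings steps 706 to 711 together at inference time (training steps 702 to 705 are omitted); it reuses the hypothetical helpers sketched earlier in this description (EDGE_TYPES and the EdgeTypeNet model, fit_edge_line, and extend_edge_line), all of which are assumptions of these examples rather than components defined by the application.

```python
# End-to-end sketch: classify each picture's edge type, fit its edge line,
# then extend every line to the shared minimum and maximum vertex ordinates.
import torch

def locate_shelf_vertices(model, pictures, regions, vertex_ys):
    """pictures: shelf edge pictures as (3, H, W) float tensors; regions: the
    corresponding edge-line regions as BGR arrays; vertex_ys: ordinates of the
    edge-line vertices already found in those regions."""
    y_min, y_max = min(vertex_ys), max(vertex_ys)                    # step 709
    shelf_vertices = []
    for picture, region in zip(pictures, regions):
        with torch.no_grad():
            logits = model(picture.unsqueeze(0))                     # step 706
        edge_type = EDGE_TYPES[int(logits.argmax())]
        line = fit_edge_line(region, edge_type)                      # steps 707-708
        if line is None:
            continue
        inv_k, b = line                                              # x = inv_k * y + b
        k = 1.0 / inv_k if inv_k != 0 else float("inf")              # step 710
        shelf_vertices.extend(extend_edge_line(k, b, y_min, y_max))  # step 711
    return shelf_vertices
```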
Of course, it can be understood that the above detailed flow of the shelf edge positioning method may have other variations, and such variations all fall within the protection scope of the present invention.
In the method proposed by the embodiments of the present invention, multiple shelf edge pictures are obtained; the multiple shelf edge pictures are input into a deep learning model to determine the edge type corresponding to each of the multiple shelf pictures, where the deep learning model is obtained by training on historical shelf edge pictures and is used to determine the edge type corresponding to a shelf picture; multiple shelf edge lines are determined according to the multiple shelf edge pictures and the corresponding edge types; and the shelf vertex coordinates are determined according to the multiple shelf edge lines. In this process, only the multiple shelf edge pictures need to be obtained and input into the deep learning model, and the shelf vertex coordinates can then be determined automatically to position the shelf edges, which is highly efficient; the process requires no manual operation, so the cost is low; and no markers need to be added to the shelves, so there is no effect from markers being moved and the positioning accuracy is high.
Based on the same inventive concept, an embodiment of the present invention also provides a shelf edge positioning device, as described in the following embodiments. Since the principles by which the device solves the problem are similar to those of the shelf edge positioning method, the implementation of the device may refer to the implementation of the method, and repeated descriptions are omitted.
Figure 8 is a schematic diagram of a shelf edge positioning device proposed by an embodiment of the present invention. As shown in Figure 8, the device includes:
a picture obtaining module 801, configured to obtain multiple shelf edge pictures;
an edge type determination module 802, configured to input the multiple shelf edge pictures into a deep learning model to determine the edge type corresponding to each of the multiple shelf pictures, where the deep learning model is obtained by training on historical shelf edge pictures and is used to determine the edge type corresponding to a shelf picture;
a shelf edge line determination module 803, configured to determine multiple shelf edge lines according to the multiple shelf edge pictures and the corresponding edge types; and
a shelf vertex coordinate determination module 804, configured to determine the shelf vertex coordinates according to the multiple shelf edge lines.
In one embodiment, the deep learning model is obtained by training in the following manner:
obtaining historical shelf edge pictures;
extracting feature vectors from the historical shelf edge pictures;
training the deep learning model using the feature vectors; and
adjusting the parameters of the deep learning model during training until the loss function of the deep learning model meets a preset convergence condition, to obtain the trained deep learning model.
In one embodiment, the shelf edge line determination module 803 is specifically configured to:
for each shelf edge picture, determine the region in which the shelf edge line in that shelf edge picture is located; and
determine the shelf edge line from the region in which it is located, according to the edge type corresponding to that shelf edge picture.
In one embodiment, the shelf vertex coordinate determination module 804 is specifically configured to:
determine the minimum and maximum of the vertex ordinates of the multiple shelf edge lines;
for each shelf edge line, obtain the slope and offset of that shelf edge line; and
determine, according to the slope and offset of that shelf edge line and the maximum and minimum ordinates, the shelf vertex coordinates corresponding to that shelf edge line.
In one embodiment, the shelf edge positioning device further includes a shelf area determination module 805, configured to determine the shelf area according to the shelf vertex coordinates.
In the device proposed by the embodiments of the present invention, multiple shelf edge pictures are obtained; the multiple shelf edge pictures are input into the deep learning model to determine the edge type corresponding to each of the multiple shelf pictures, where the deep learning model is obtained by training on historical shelf edge pictures and is used to determine the edge type corresponding to a shelf picture; multiple shelf edge lines are determined according to the multiple shelf edge pictures and the corresponding edge types; and the shelf vertex coordinates are determined according to the multiple shelf edge lines. In this process, only the multiple shelf edge pictures need to be obtained and input into the deep learning model, and the shelf vertex coordinates can then be determined automatically to position the shelf edges, which is highly efficient; the process requires no manual operation, so the cost is low; and no markers need to be added to the shelves, so there is no effect from markers being moved and the positioning accuracy is high.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems) and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or other programmable data processing equipment to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing equipment produce a device for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, where the instruction device implements the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be loaded onto a computer or other programmable data processing equipment, so that a series of operation steps are executed on the computer or other programmable equipment to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable equipment provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
The specific embodiments described above further describe the objectives, technical solutions and beneficial effects of the present invention in detail. It should be understood that the above are only specific embodiments of the present invention and are not intended to limit the protection scope of the present invention; any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (10)

  1. A shelf edge positioning method, comprising:
    obtaining multiple shelf edge pictures;
    inputting the multiple shelf edge pictures into a deep learning model to determine the edge type corresponding to each of the multiple shelf pictures, wherein the deep learning model is obtained by training on historical shelf edge pictures and is used to determine the edge type corresponding to a shelf picture;
    determining multiple shelf edge lines according to the multiple shelf edge pictures and the corresponding edge types; and
    determining shelf vertex coordinates according to the multiple shelf edge lines.
  2. The shelf edge positioning method according to claim 1, wherein the deep learning model is obtained by training in the following manner:
    obtaining historical shelf edge pictures;
    extracting feature vectors from the historical shelf edge pictures;
    training the deep learning model using the feature vectors; and
    adjusting the parameters of the deep learning model during training until the loss function of the deep learning model meets a preset convergence condition, to obtain the trained deep learning model.
  3. The shelf edge positioning method according to claim 1, wherein determining multiple shelf edge lines according to the multiple shelf edge pictures and the corresponding edge types comprises:
    for each shelf edge picture, determining the region in which the shelf edge line in that shelf edge picture is located; and
    determining the shelf edge line from the region in which it is located, according to the edge type corresponding to that shelf edge picture.
  4. The shelf edge positioning method according to claim 1, wherein determining shelf vertex coordinates according to the multiple shelf edge lines comprises:
    determining the minimum and maximum of the vertex ordinates of the multiple shelf edge lines;
    for each shelf edge line, obtaining the slope and offset of that shelf edge line; and
    determining, according to the slope and offset of that shelf edge line and the maximum and minimum ordinates, the shelf vertex coordinates corresponding to that shelf edge line.
  5. The shelf edge positioning method according to claim 1, further comprising:
    determining the shelf area according to the shelf vertex coordinates.
  6. A shelf edge positioning device, comprising:
    a picture obtaining module, configured to obtain multiple shelf edge pictures;
    an edge type determination module, configured to input the multiple shelf edge pictures into a deep learning model to determine the edge type corresponding to each of the multiple shelf pictures, wherein the deep learning model is obtained by training on historical shelf edge pictures and is used to determine the edge type corresponding to a shelf picture;
    a shelf edge line determination module, configured to determine multiple shelf edge lines according to the multiple shelf edge pictures and the corresponding edge types; and
    a shelf vertex coordinate determination module, configured to determine shelf vertex coordinates according to the multiple shelf edge lines.
  7. The shelf edge positioning device according to claim 6, wherein the deep learning model is obtained by training in the following manner:
    obtaining historical shelf edge pictures;
    extracting feature vectors from the historical shelf edge pictures;
    training the deep learning model using the feature vectors; and
    adjusting the parameters of the deep learning model during training until the loss function of the deep learning model meets a preset convergence condition, to obtain the trained deep learning model.
  8. The shelf edge positioning device according to claim 6, further comprising a shelf area determination module, configured to determine the shelf area according to the shelf vertex coordinates.
  9. A computer device, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor implements the method according to any one of claims 1 to 5 when executing the computer program.
  10. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program for executing the method according to any one of claims 1 to 5.
PCT/CN2019/097720 2019-07-25 2019-07-25 货架边缘定位方法及装置 WO2021012268A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
DE112019007564.0T DE112019007564T5 (de) 2019-07-25 2019-07-25 Regalkantenpositionierungsverfahren und -vorrichtung
PCT/CN2019/097720 WO2021012268A1 (zh) 2019-07-25 2019-07-25 货架边缘定位方法及装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/097720 WO2021012268A1 (zh) 2019-07-25 2019-07-25 货架边缘定位方法及装置

Publications (1)

Publication Number Publication Date
WO2021012268A1 true WO2021012268A1 (zh) 2021-01-28

Family

ID=74192787

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/097720 WO2021012268A1 (zh) 2019-07-25 2019-07-25 货架边缘定位方法及装置

Country Status (2)

Country Link
DE (1) DE112019007564T5 (zh)
WO (1) WO2021012268A1 (zh)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100193588A1 (en) * 2009-02-04 2010-08-05 Datalogic Scanning, Inc. Systems and methods for selectively masking a scan volume of a data reader
CN106570510A (zh) * 2016-11-07 2017-04-19 南京航空航天大学 一种超市商品识别方法
CN107292248A (zh) * 2017-06-05 2017-10-24 广州诚予国际市场信息研究有限公司 一种基于图像识别技术的商品管理方法及系统
JP2019055828A (ja) * 2017-09-19 2019-04-11 東芝テック株式会社 棚情報推定装置及び情報処理プログラム
CN108549870A (zh) * 2018-04-16 2018-09-18 图麟信息科技(深圳)有限公司 一种对物品陈列进行鉴别的方法及装置
CN109357630A (zh) * 2018-10-30 2019-02-19 南京工业大学 一种多类型工件批量视觉测量系统及方法

Also Published As

Publication number Publication date
DE112019007564T5 (de) 2022-04-28

Similar Documents

Publication Publication Date Title
EP4120199A1 (en) Image rendering method and apparatus, and electronic device and storage medium
CN107103613B (zh) 一种三维手势姿态估计方法
Deuss et al. ShapeOp—a robust and extensible geometric modelling paradigm
CN104899563A (zh) 一种二维人脸关键特征点定位方法及系统
US11783500B2 (en) Unsupervised depth prediction neural networks
US11514607B2 (en) 3-dimensional reconstruction method, 3-dimensional reconstruction device, and storage medium
CN110415521A (zh) 交通数据的预测方法、装置和计算机可读存储介质
CN111739016B (zh) 目标检测模型训练方法、装置、电子设备及存储介质
Wójcicki Supporting the diagnostics and the maintenance of technical devices with augmented reality
US20230338842A1 (en) Rendering processing method and electronic device
CN111008631B (zh) 图像的关联方法及装置、存储介质和电子装置
US11796670B2 (en) Radar point cloud data processing method and device, apparatus, and storage medium
CN114202027A (zh) 执行配置信息的生成方法、模型训练方法和装置
CN112927328A (zh) 表情迁移方法、装置、电子设备及存储介质
US10627984B2 (en) Systems, devices, and methods for dynamic virtual data analysis
CN113205090B (zh) 图片矫正方法、装置、电子设备及计算机可读存储介质
WO2021012268A1 (zh) Shelf edge positioning method and device
WO2021077321A1 (zh) Electronic shelf label recognition system and method, and server
EP3410389A1 (en) Image processing method and device
CN113762397B (zh) 检测模型训练、高精度地图更新方法、设备、介质及产品
EP4086853A2 (en) Method and apparatus for generating object model, electronic device and storage medium
CN113592981B (zh) 图片标注方法、装置、电子设备和存储介质
CN112308064B (zh) Shelf edge positioning method and device
CN106600691B (zh) 多路二维视频图像在三维地理空间中融合校正方法、系统
CN113032443A (zh) 用于处理数据的方法、装置、设备和计算机可读存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19938841

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022505320

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: JP

122 Ep: pct application non-entry in european phase

Ref document number: 19938841

Country of ref document: EP

Kind code of ref document: A1