CN110782464A - Calculation method of object accumulation 3D space occupancy rate, coder-decoder and storage device - Google Patents

Calculation method of object accumulation 3D space occupancy rate, coder-decoder and storage device

Info

Publication number
CN110782464A
Authority
CN
China
Prior art keywords
area
plane
space
article
occupancy rate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911067754.8A
Other languages
Chinese (zh)
Other versions
CN110782464B (en)
Inventor
吴良健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN201911067754.8A priority Critical patent/CN110782464B/en
Publication of CN110782464A publication Critical patent/CN110782464A/en
Application granted granted Critical
Publication of CN110782464B publication Critical patent/CN110782464B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/08 Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
    • G06Q 10/083 Shipping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Geometry (AREA)
  • Development Economics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for calculating the 3D space occupancy of stacked articles, together with a codec and a storage device. The calculation method comprises the following steps: acquiring an image captured by a monocular camera, the image comprising a plurality of planes that make up an article-stacking space region; processing the image with a deep learning segmentation algorithm to determine the planes on which articles are present and their article-occupied regions; calculating the area of each article-occupied region and the area of the corresponding plane; and calculating the space occupancy of the space region from the areas of the planes and the areas of the article-occupied regions. In this way, the method uses a deep learning algorithm to automatically identify the article-occupied regions on the several planes of the space region and to intelligently exclude non-target objects, preventing regions occupied by non-articles from distorting the computed space occupancy and improving the accuracy of the result. The article-stacking state of the space region is detected accurately and quickly from a single monocular image, which saves cost and makes the method easy to popularize.

Description

Calculation method of object accumulation 3D space occupancy rate, coder-decoder and storage device
Technical Field
The present application relates to the technical field of article-stacking state detection, and in particular to a method for calculating the article-stacking 3D space occupancy, a codec, and a storage device.
Background
In logistics transfer stations, packages are temporarily stacked in designated areas, and the degree of package accumulation in each area is analyzed in real time from video surveillance to guide the scheduling system. Existing methods for detecting the package-stacking state rely on a depth camera to obtain a depth map; because a depth map feeds back only spatial-dimension information, people, vehicles, and packages cannot be told apart when people or vehicles enter the package-stacking space, so the detection result is inaccurate.
Disclosure of Invention
The present application provides a method for calculating the 3D space occupancy of stacked articles, a codec, and a storage device, which can accurately and quickly detect the article-stacking state of a space region from a single monocular image.
To solve the above technical problem, one technical solution adopted by the present application is to provide a method for calculating the article-stacking 3D space occupancy, comprising the following steps:
acquiring an image captured by a monocular camera, wherein the image comprises a plurality of planes that make up an article-stacking space region;
processing the image with a deep learning segmentation algorithm to determine the planes on which articles are present and their article-occupied regions;
calculating the area of each article-occupied region and the area of the corresponding plane; and
calculating the space occupancy of the space region from the areas of the planes and the areas of the article-occupied regions.
To solve the above technical problem, another technical solution adopted by the present application is to provide a codec comprising a processor and a memory coupled to the processor, wherein the memory stores program instructions for implementing the above calculation method, and the processor is configured to execute the program instructions stored in the memory to obtain the article-stacking 3D space occupancy.
To solve the above technical problem, yet another technical solution adopted by the present application is to provide a storage device storing a program file capable of implementing the above calculation method.
The beneficial effects of the present application are as follows: the calculation method, codec, and storage device provided herein use a deep learning algorithm to automatically identify the article-occupied regions on the several planes of a space region and to intelligently exclude non-target objects, preventing regions occupied by non-articles from distorting the computed space occupancy and improving the accuracy of the result; the article-stacking state of the space region is detected accurately and quickly from a single monocular image, which saves cost and makes the method easy to popularize.
Drawings
Fig. 1 is a flowchart illustrating a method for calculating an occupancy rate of a 3D space for stacking items according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram of the structure of a spatial region in accordance with one embodiment of the present invention;
FIG. 3 is a schematic view of a scenario for item occupancy in a spatial region in accordance with an embodiment of the present invention;
FIG. 4 is a flowchart illustrating a method for calculating an occupancy rate of a 3D space for stacking items according to a second embodiment of the present invention;
FIG. 5 is a schematic structural diagram of an apparatus for calculating an occupancy rate of a 3D space for stacking items according to an embodiment of the present invention;
FIG. 6 is a block diagram of a codec according to an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of a storage device according to an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second" and "third" in this application are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any indication of the number of technical features indicated. Thus, a feature defined as "first," "second," or "third" may explicitly or implicitly include at least one of the feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless explicitly specifically limited otherwise. All directional indications (such as up, down, left, right, front, and rear … …) in the embodiments of the present application are only used to explain the relative positional relationship between the components, the movement, and the like in a specific posture (as shown in the drawings), and if the specific posture is changed, the directional indication is changed accordingly. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Fig. 1 is a flowchart illustrating a method for calculating the article-stacking 3D space occupancy according to a first embodiment of the present invention. It should be noted that the method of the present invention is not limited to the flow sequence shown in fig. 1 if substantially the same results are obtained. As shown in fig. 1, the method comprises the following steps:
step S101: an image captured by a monocular camera is acquired, the image comprising a plurality of planes that make up a region of space in which items are stacked.
In step S101, the articles may be parcels, and a plane may contain both article-occupied and non-article-occupied regions; the article-occupied regions are identified and segmented out in the subsequent steps. In one embodiment, as shown in fig. 2, the spatial region 1 comprises a horizontal plane 11 and a first vertical plane 12, a second vertical plane 13, and a third vertical plane 14 arranged perpendicular to the horizontal plane 11. In one scenario, as shown in fig. 3, the horizontal plane 11 contains an article-occupied region 110, while the first, second, and third vertical planes 12, 13, 14 have no corresponding article-occupied regions. This embodiment captures images with a monocular camera, which is cheaper than a depth camera. In this embodiment, the space region must be configured for the monocular camera before first use, and must be reconfigured whenever the article-stacking scene changes during subsequent use. A sketch of what such a one-time configuration could look like is given below.
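The following is a purely illustrative sketch of the pre-configuration step, not part of the disclosure: the space region is represented as one image-space polygon per plane plus a weight coefficient per plane. All names, coordinates, and weight values below are assumptions made for illustration.

```python
# Hypothetical sketch of the one-time space-region configuration described above.
# All names, polygon coordinates, and weights are illustrative assumptions, not from the patent.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Point = Tuple[int, int]  # (x, y) pixel coordinates in the monocular image


@dataclass
class SpaceRegionConfig:
    """Image-space polygons for the planes that make up the article-stacking region."""
    planes: Dict[str, List[Point]] = field(default_factory=dict)   # plane name -> polygon
    weights: Dict[str, float] = field(default_factory=dict)        # plane name -> weight coefficient


# Example: one horizontal plane and three vertical planes (cf. Fig. 2).
config = SpaceRegionConfig(
    planes={
        "horizontal": [(100, 400), (500, 400), (560, 600), (40, 600)],
        "vertical_1": [(100, 100), (100, 400), (40, 600), (40, 150)],
        "vertical_2": [(100, 100), (500, 100), (500, 400), (100, 400)],
        "vertical_3": [(500, 100), (560, 150), (560, 600), (500, 400)],
    },
    weights={"horizontal": 0.55, "vertical_1": 0.15, "vertical_2": 0.15, "vertical_3": 0.15},
)
```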
Step S102: and processing the image by adopting a deep learning segmentation algorithm, and determining a plane with the object and an object occupying area thereof.
In step S102, the image is input to the segmentation model, and the article occupation region is output. The segmentation model is established according to a plurality of groups of training data, and each group of training data comprises an article image. When the articles are packages, the occupied areas of the articles in a plurality of planes in the space region can be automatically identified by adopting a deep learning segmentation algorithm, non-target articles are intelligently removed, the output result is only the occupied area of the articles, the interference of the occupied area of people or vehicles in the space region is prevented, and the accuracy of the subsequent calculation result is improved.
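As a minimal sketch of how the segmentation output could be combined with the plane configuration above: the article mask produced by the model is intersected with each plane polygon to obtain per-plane occupied areas in pixels. The patent does not prescribe a specific network, so `seg_model` stands for any trained parcel-segmentation model (hypothetical), and the helper name is ours.

```python
# Illustrative sketch only: per-plane article-occupied areas from a segmentation mask.
import cv2
import numpy as np


def plane_occupied_areas(image: np.ndarray, config: "SpaceRegionConfig", seg_model) -> dict:
    """Return {plane_name: (occupied_pixel_area, plane_pixel_area)}."""
    # Binary mask (H x W) that is 1 where the model sees a parcel and 0 elsewhere;
    # people, vehicles and background are excluded by the segmentation model.
    article_mask = (seg_model(image) > 0.5).astype(np.uint8)

    results = {}
    h, w = article_mask.shape[:2]
    for name, polygon in config.planes.items():
        plane_mask = np.zeros((h, w), dtype=np.uint8)
        cv2.fillPoly(plane_mask, [np.array(polygon, dtype=np.int32)], 1)

        plane_area = int(plane_mask.sum())                      # pixels belonging to the plane
        occupied_area = int((plane_mask & article_mask).sum())  # parcel pixels on that plane
        results[name] = (occupied_area, plane_area)
    return results
```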
Step S103: the area of each article occupying region and the area of the corresponding plane are calculated.
Step S104: and calculating the space occupancy rate of the space region according to the area of the plane and the area of the occupied region of the articles.
In step S104, in an embodiment, a ratio of an occupied area of each article to an area of a plane where the article is located is obtained to obtain an area occupancy rate corresponding to each plane; and accumulating the area occupancy rate to obtain the space occupancy rate of the space area. In this embodiment, the area occupancy of each plane is: the ratio of the occupied area of each article to the area of the plane is multiplied by the weight coefficient of the corresponding plane.
In this embodiment, the weight coefficient is used to evaluate the space utilization rate when the articles are stacked, and if the article stacking states are different, the utilization rate of each plane is different, which results in different weight coefficients of each plane. The weight coefficient of the plane is preset according to the article accumulation state of the plurality of images before the area occupancy of each plane is obtained by multiplying the ratio by the weight coefficient of the corresponding plane. Through the mode, the area occupancy rate calculation result of the embodiment is more accurate, and the data is more real and reliable.
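Written out, the weighted accumulation described above amounts to the following (the notation is introduced here for clarity and is not used verbatim in the patent):

```latex
% S_i^{article}: article-occupied area of plane i, S_i^{plane}: total area of plane i,
% w_i: preset weight coefficient of plane i, r_i: area occupancy of plane i.
\[
  r_i = w_i \cdot \frac{S_i^{\mathrm{article}}}{S_i^{\mathrm{plane}}},
  \qquad
  P_{\mathrm{space}} = \sum_{i} r_i
\]
```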
According to the method for calculating the article-stacking 3D space occupancy disclosed in the first embodiment of the present invention, the image captured by the monocular camera is processed by a deep learning segmentation algorithm, so that target articles and non-target objects can be accurately distinguished; the article-occupied regions are segmented out by the segmentation model, regions occupied by non-articles are prevented from affecting the occupancy of the space region, and the accuracy of the calculation result is improved. The article-stacking state of the space region is detected accurately and quickly from a single monocular image, which saves cost and makes the method easy to popularize.
Fig. 4 is a flowchart illustrating a method for calculating the article-stacking 3D space occupancy according to a second embodiment of the present invention. It should be noted that the method of the present invention is not limited to the flow sequence shown in fig. 4 if substantially the same results are obtained. As shown in fig. 4, taking as an example a scene in which every plane of the space region carries articles, the method comprises the following steps:
step S401: an image captured by a monocular camera is acquired, the image comprising a plurality of planes that make up a region of space in which items are stacked.
In this embodiment, step S401 in fig. 4 is similar to step S101 in fig. 1, and for brevity, is not described herein again.
Step S402: obtaining planar distribution data for a spatial region, the spatial region comprising: the horizontal plane, the first vertical plane, the second vertical plane and the third vertical plane that constitute the space region and set up with the horizontal plane is perpendicular.
Step S403: and processing the image by adopting a deep learning segmentation algorithm, and determining a horizontal plane and an object occupying area thereof, a first vertical plane and an object occupying area thereof, a second vertical plane and an object occupying area thereof, and a third vertical plane and an object occupying area thereof.
Step S404: and calculating the area of the horizontal plane and the area occupied by the articles thereof, the area of the first vertical plane and the area occupied by the articles thereof, the area of the second vertical plane and the area occupied by the articles thereof, and the area of the third vertical plane and the area occupied by the articles thereof respectively.
In step S404, the area of each plane and the area of its article-occupied region are calculated in turn.
Step S405: the space occupancy of the space area is calculated.
In step S405, the area occupancy of the horizontal plane, the area occupancy of the first vertical plane, the area occupancy of the second vertical plane, and the area occupancy of the third vertical plane are calculated and then accumulated to obtain the space occupancy of the space region.
The area occupancy of the horizontal plane is the ratio of its article-occupied area to the area of the horizontal plane multiplied by the weight coefficient of the horizontal plane; the area occupancy of the first vertical plane is the ratio of its article-occupied area to the area of the first vertical plane multiplied by the weight coefficient of the first vertical plane; and likewise, the area occupancies of the second and third vertical planes are the ratios of their article-occupied areas to their respective plane areas multiplied by their respective weight coefficients.
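A short numeric sketch of this accumulation for the four-plane scene follows; the areas and weight coefficients are invented for illustration, since the patent specifies neither.

```python
# Numeric sketch of the second embodiment: one horizontal and three vertical planes.
# All areas (m^2) and weight coefficients below are made up for illustration only.
plane_areas    = {"horizontal": 20.0, "vertical_1": 12.0, "vertical_2": 15.0, "vertical_3": 12.0}
occupied_areas = {"horizontal": 14.0, "vertical_1": 3.0,  "vertical_2": 6.0,  "vertical_3": 0.0}
weights        = {"horizontal": 0.55, "vertical_1": 0.15, "vertical_2": 0.15, "vertical_3": 0.15}

# Weighted ratio per plane, accumulated over all planes of the space region.
space_occupancy = sum(
    weights[p] * occupied_areas[p] / plane_areas[p] for p in plane_areas
)
print(f"space occupancy = {space_occupancy:.2%}")
# 0.55*0.70 + 0.15*0.25 + 0.15*0.40 + 0.15*0.00 = 48.25%
```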
The method for calculating the article-stacking 3D space occupancy in the second embodiment of the present invention builds on the first embodiment: a space region comprising a horizontal plane and vertical planes is preset, and the article-stacking occupancy of that region is calculated from an image captured by a monocular camera, which saves cost and makes the method easy to popularize.
Fig. 5 is a schematic structural diagram of an apparatus for calculating the article-stacking 3D space occupancy according to an embodiment of the present invention. As shown in fig. 5, the apparatus 50 comprises an image acquisition module 51, an image segmentation module 52, a first calculation module 53, and a second calculation module 54.
The image acquisition module 51 acquires the image captured by the monocular camera, the image comprising a plurality of planes that make up an article-stacking space region.
The image segmentation module 52 is coupled to the image acquisition module 51 and processes the image with a deep learning segmentation algorithm to determine the planes on which articles are present and their article-occupied regions.
The first calculation module 53 is coupled to the image segmentation module 52 and calculates the area of each article-occupied region and the area of the corresponding plane.
The second calculation module 54 is coupled to the first calculation module 53 and calculates the space occupancy of the space region from the areas of the planes and the areas of the article-occupied regions.
Optionally, the second calculation module 54 comprises an area occupancy calculation unit and a space occupancy calculation unit. The area occupancy calculation unit calculates the area occupancy of each plane from the area of its article-occupied region and the area of the plane; the space occupancy calculation unit accumulates the area occupancies to obtain the space occupancy of the space region. In this embodiment, the area occupancy calculation unit takes the ratio of the article-occupied area of each plane to the area of that plane and multiplies it by the weight coefficient of the corresponding plane to obtain the area occupancy of the plane.
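To make the module split of fig. 5 concrete, a minimal class skeleton could look as follows. This is a sketch only: it reuses the hypothetical `plane_occupied_areas` helper and `SpaceRegionConfig` from the earlier sketches, and none of the names come from the patent.

```python
# Minimal skeleton mirroring the apparatus of Fig. 5; all names are illustrative.
class OccupancyApparatus:
    def __init__(self, camera, seg_model, config):
        self.camera = camera          # image acquisition module 51
        self.seg_model = seg_model    # image segmentation module 52 (deep learning)
        self.config = config          # preconfigured space region and plane weights

    def compute_space_occupancy(self) -> float:
        image = self.camera.capture()                                     # module 51
        areas = plane_occupied_areas(image, self.config, self.seg_model)  # modules 52 + 53
        # Module 54: accumulate the weighted per-plane occupancies.
        return sum(
            self.config.weights[name] * occupied / plane_area
            for name, (occupied, plane_area) in areas.items()
            if plane_area > 0
        )
```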
Referring to fig. 6, fig. 6 is a schematic structural diagram of a codec according to an embodiment of the invention. As shown in fig. 6, the codec 60 includes a processor 61 and a memory 62 coupled to the processor 61.
The memory 62 stores program instructions for implementing the method for calculating the occupancy rate of a 3D space for stacking items according to any of the above embodiments.
The processor 61 is configured to execute program instructions stored in the memory 62 to obtain an item stacking 3D space occupancy.
The processor 61 may also be referred to as a CPU (Central Processing Unit). The processor 61 may be an integrated circuit chip having signal processing capabilities. The processor 61 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
Referring to fig. 7, fig. 7 is a schematic structural diagram of a storage device according to an embodiment of the invention. The storage device of the embodiment of the present invention stores a program file 71 capable of implementing all of the methods described above. The program file 71 may be stored in the storage device in the form of a software product and comprises several instructions that cause a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage device includes media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk, as well as terminal devices such as a computer, a server, a mobile phone, or a tablet.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The above embodiments are merely examples and are not intended to limit the scope of the present disclosure, and all modifications, equivalents, and flow charts using the contents of the specification and drawings of the present disclosure or those directly or indirectly applied to other related technical fields are intended to be included in the scope of the present disclosure.

Claims (9)

1. A method for calculating an article-stacking 3D space occupancy, comprising:
acquiring an image captured by a monocular camera, wherein the image comprises a plurality of planes that make up an article-stacking space region;
processing the image with a deep learning segmentation algorithm to determine the planes on which articles are present and their article-occupied regions;
calculating the area of each article-occupied region and the area of the corresponding plane; and
calculating the space occupancy of the space region from the areas of the planes and the areas of the article-occupied regions.
2. The calculation method according to claim 1, wherein processing the image with a deep learning segmentation algorithm to determine the planes on which articles are present and their article-occupied regions comprises:
inputting the image into a segmentation model, wherein the segmentation model is built from multiple groups of training data, each group comprising an article image; and
outputting the article-occupied regions.
3. The calculation method according to claim 1, wherein calculating the space occupancy of the space region from the areas of the planes and the areas of the article-occupied regions comprises:
taking the ratio of the article-occupied area of each plane to the area of that plane to obtain the area occupancy corresponding to each plane; and
accumulating the area occupancies to obtain the space occupancy of the space region.
4. The calculation method according to claim 3, wherein taking the ratio of the article-occupied area of each plane to the area of that plane to obtain the area occupancy corresponding to each plane comprises:
obtaining the ratio of the article-occupied area of each plane to the area of that plane; and
multiplying the ratio by the weight coefficient of the corresponding plane to obtain the area occupancy of each plane.
5. The calculation method according to claim 4, wherein before multiplying the ratio by the weight coefficient of the corresponding plane to obtain the area occupancy of each plane, the method comprises:
presetting the weight coefficient of each plane according to the article-stacking states of a plurality of images;
wherein the weight coefficients of at least two of the planes are different.
6. The calculation method according to claim 1, wherein acquiring the image captured by the monocular camera comprises:
obtaining plane distribution data of the space region, the space region comprising a horizontal plane and first, second, and third vertical planes that are perpendicular to the horizontal plane and together form the space region.
7. The calculation method according to claim 1, wherein, when the article-stacking scene changes, before acquiring the image captured by the monocular camera, the method comprises:
reconfiguring the space region.
8. A codec comprising a processor and a memory coupled to the processor, wherein
the memory stores program instructions for implementing the calculation method according to any one of claims 1 to 7; and
the processor is configured to execute the program instructions stored in the memory to obtain the article-stacking 3D space occupancy.
9. A storage device in which a program file capable of implementing the calculation method according to any one of claims 1 to 7 is stored.
CN201911067754.8A 2019-11-04 2019-11-04 Calculation method of object accumulation 3D space occupancy rate, coder-decoder and storage device Active CN110782464B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911067754.8A CN110782464B (en) 2019-11-04 2019-11-04 Calculation method of object accumulation 3D space occupancy rate, coder-decoder and storage device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911067754.8A CN110782464B (en) 2019-11-04 2019-11-04 Calculation method of object accumulation 3D space occupancy rate, coder-decoder and storage device

Publications (2)

Publication Number Publication Date
CN110782464A true CN110782464A (en) 2020-02-11
CN110782464B CN110782464B (en) 2022-07-15

Family

ID=69388983

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911067754.8A Active CN110782464B (en) 2019-11-04 2019-11-04 Calculation method of object accumulation 3D space occupancy rate, coder-decoder and storage device

Country Status (1)

Country Link
CN (1) CN110782464B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111523429A (en) * 2020-04-16 2020-08-11 中冶赛迪重庆信息技术有限公司 Deep learning-based steel pile identification method
CN111582778A (en) * 2020-04-17 2020-08-25 上海中通吉网络技术有限公司 Operation site cargo accumulation measuring method, device, equipment and storage medium
CN113674339A (en) * 2020-05-14 2021-11-19 因特利格雷特总部有限责任公司 Transfer control based on reinforcement learning

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107356203A (en) * 2017-08-09 2017-11-17 顺丰科技有限公司 One kind loads measuring device and measuring method
CN107589420A (en) * 2017-09-07 2018-01-16 广东工业大学 A kind of interior of articles component detection method, apparatus and system
US20180018820A1 (en) * 2012-05-04 2018-01-18 Intermec Ip Corp. Volume dimensioning systems and methods
CN108898044A (en) * 2018-04-13 2018-11-27 顺丰科技有限公司 Charging ratio acquisition methods, device, system and storage medium
CN109146878A (en) * 2018-09-30 2019-01-04 安徽农业大学 A kind of method for detecting impurities based on image procossing
CN109872357A (en) * 2019-01-16 2019-06-11 创新奇智(广州)科技有限公司 A kind of article arrangement face accounting calculation method, system and electronic equipment

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180018820A1 (en) * 2012-05-04 2018-01-18 Intermec Ip Corp. Volume dimensioning systems and methods
CN107356203A (en) * 2017-08-09 2017-11-17 顺丰科技有限公司 One kind loads measuring device and measuring method
CN107589420A (en) * 2017-09-07 2018-01-16 广东工业大学 A kind of interior of articles component detection method, apparatus and system
CN108898044A (en) * 2018-04-13 2018-11-27 顺丰科技有限公司 Charging ratio acquisition methods, device, system and storage medium
CN109146878A (en) * 2018-09-30 2019-01-04 安徽农业大学 A kind of method for detecting impurities based on image procossing
CN109872357A (en) * 2019-01-16 2019-06-11 创新奇智(广州)科技有限公司 A kind of article arrangement face accounting calculation method, system and electronic equipment

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111523429A (en) * 2020-04-16 2020-08-11 中冶赛迪重庆信息技术有限公司 Deep learning-based steel pile identification method
CN111582778A (en) * 2020-04-17 2020-08-25 上海中通吉网络技术有限公司 Operation site cargo accumulation measuring method, device, equipment and storage medium
CN111582778B (en) * 2020-04-17 2024-04-12 上海中通吉网络技术有限公司 Method, device, equipment and storage medium for measuring accumulation of cargos in operation site
CN113674339A (en) * 2020-05-14 2021-11-19 因特利格雷特总部有限责任公司 Transfer control based on reinforcement learning

Also Published As

Publication number Publication date
CN110782464B (en) 2022-07-15

Similar Documents

Publication Publication Date Title
CN110782464B (en) Calculation method of object accumulation 3D space occupancy rate, coder-decoder and storage device
CN108985199B (en) Detection method and device for commodity taking and placing operation and storage medium
US10318976B2 (en) Methods for determining measurement data of an item
CN113421305B (en) Target detection method, device, system, electronic equipment and storage medium
CN105706144B (en) Object detection systems and correlation technique based on support vector machines
CN112464697B (en) Visual and gravity sensing based commodity and customer matching method and device
CN111695429B (en) Video image target association method and device and terminal equipment
CN113447923A (en) Target detection method, device, system, electronic equipment and storage medium
CN103581620A (en) Image processing apparatus, image processing method and program
CN114862929A (en) Three-dimensional target detection method and device, computer readable storage medium and robot
CN111680654B (en) Personnel information acquisition method, device and equipment based on article picking and placing event
CN110008802B (en) Method and device for selecting target face from multiple faces and comparing face recognition
CN111814846A (en) Training method and recognition method of attribute recognition model and related equipment
CN113052838B (en) Object placement detection method and device and intelligent cabinet
CN112598610B (en) Depth image obtaining method and device, electronic equipment and storage medium
CN110766646A (en) Display rack shielding detection method and device and storage medium
CN113489897A (en) Image processing method and related device
CN112906646A (en) Human body posture detection method and device
CN116958873A (en) Pedestrian tracking method, device, electronic equipment and readable storage medium
CN109643437A (en) Image processing apparatus, stereo-camera arrangement and image processing method
CN112395920A (en) Radar-based attitude recognition device and method and electronic equipment
CN112633143B (en) Image processing system, method, head-mounted device, processing device, and storage medium
CN106295693A (en) A kind of image-recognizing method and device
CN113936042B (en) Target tracking method and device and computer readable storage medium
CN110796062B (en) Method and device for precisely matching and displaying object frame and storage device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant