CN109919040B - Goods specification information identification method and device - Google Patents


Info

Publication number
CN109919040B
Authority
CN
China
Prior art keywords: sample, surface image, top surface, goods, image
Prior art date
Legal status: Active
Application number
CN201910118085.6A
Other languages
Chinese (zh)
Other versions
CN109919040A (en)
Inventor
陈宝华
邓磊
牛辉
Current Assignee
Beijing Tsingh Technology Co ltd
Original Assignee
Beijing Tsingh Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Tsingh Technology Co ltd
Priority to CN201910118085.6A
Publication of CN109919040A
Application granted
Publication of CN109919040B

Abstract

The invention provides a goods specification information identification method and device. The method comprises the following steps: when a whole pallet of goods is put into or taken out of a warehouse, controlling a first camera to shoot the top surface of the whole-pallet goods and a second camera to shoot the side surface, so as to obtain a top surface image and a side surface image of the whole-pallet goods respectively, wherein the packages of all goods in the whole-pallet goods are the same; segmenting the top surface image and the side surface image respectively to obtain a package top surface image and a package side surface image of each item of goods; selecting any one of the package top surface images as a target top surface image and any one of the package side surface images as a target side surface image; and determining the specification information of the whole-pallet goods according to the target top surface image, the target side surface image and a pre-trained recognition model. Thus, the specification information of the whole-pallet goods can be identified automatically and quickly from only the top surface image and the side surface image.

Description

Goods specification information identification method and device
Technical Field
The invention relates to the technical field of warehousing management, in particular to a goods specification information identification method and device.
Background
The problem of goods specification information identification is one of the main technical difficulties in realizing unmanned intelligent warehousing. In a conventional logistics warehouse, barcode scanning or Radio Frequency Identification (RFID) technology is generally adopted to detect and identify the specification information of goods. In barcode technology, a coded barcode is attached to the surface of the goods, and a dedicated scanning reader transfers the information from the barcode via an optical signal; RFID technology uses a dedicated RFID reader and an RFID tag that can be attached to the surface of the goods, and transfers information from the tag to the reader via a radio-frequency signal. The RFID reader communicates wirelessly with the RFID electronic tag through an antenna, and can read or write the tag identification code and memory data.
However, the field of view of barcode recognition is small and the barcode position must be determined in advance, so detection and recognition are generally performed manually with a handheld device: a worker uses a handheld barcode scanner to scan and register, in real time, the barcode on each goods box that represents its specification information. Moreover, when the flow of goods is large, the required human resources increase accordingly; when human resources are limited, the workload per worker increases instead. Both cases place heavy demands on the workers. RFID technology requires sticking an RFID tag on the surface of every pallet of goods, so the overall retrofitting cost is too high, and the identification accuracy drops sharply in environments with high moisture or metal content.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art.
A first object of the present invention is to provide a goods specification information identification method.
A second object of the present invention is to provide a goods specification information identification device.
A third object of the invention is to propose a computer device.
A fourth object of the invention is to propose a computer-readable storage medium.
In order to achieve the above object, an embodiment of a first aspect of the present invention provides a goods specification information identification method, including:
when a whole pallet of goods is put into or taken out of a warehouse, controlling a first camera to shoot the top surface of the whole-pallet goods and a second camera to shoot the side surface of the whole-pallet goods, so as to obtain a top surface image and a side surface image of the whole-pallet goods respectively, wherein the packages of all goods in the whole-pallet goods are the same;
segmenting the top surface image and the side surface image respectively to obtain a package top surface image and a package side surface image of each item of goods;
selecting any one of the package top surface images as a target top surface image and any one of the package side surface images as a target side surface image;
and determining the specification information of the whole-pallet goods according to the target top surface image, the target side surface image and a pre-trained recognition model.
Further, the determining the specification information of the whole-pallet goods according to the target top surface image, the target side surface image and the pre-trained recognition model comprises:
identifying the target top surface image according to the pre-trained recognition model, and determining the specification information corresponding to the target top surface image;
identifying the target side surface image according to the pre-trained recognition model, and determining the specification information corresponding to the target side surface image;
and comparing the specification information corresponding to the target top surface image with the specification information corresponding to the target side surface image, and determining the specification information of the whole-pallet goods according to the comparison result.
Further, the segmenting the top surface image and the side surface image respectively to obtain a package top surface image and a package side surface image of each item of goods includes:
processing the top surface image and the side surface image respectively according to a pre-trained segmentation network to obtain the vertex coordinates of the package of each item of goods in the top surface image and in the side surface image;
segmenting the top surface image according to the vertex coordinates of the package of each item of goods in the top surface image to obtain the package top surface image of each item of goods;
and segmenting the side surface image according to the vertex coordinates of the package of each item of goods in the side surface image to obtain the package side surface image of each item of goods.
Further, the method further comprises:
obtaining sample data of at least one sample whole pallet of goods, wherein the sample data comprises a sample top surface image and a sample side surface image of the sample whole-pallet goods, and the sample whole-pallet goods comprises at least one item of sample goods;
calibrating the vertex coordinates of the package of each item of sample goods in the corresponding sample top surface image and in the corresponding sample side surface image;
taking the sample top surface image and the calibrated vertex coordinates of the packages of the corresponding sample goods in the sample top surface image as a first training sample, and/or taking the sample side surface image and the calibrated vertex coordinates of the packages of the corresponding sample goods in the sample side surface image as a first training sample;
and training an initial segmentation network by using each first training sample to obtain the pre-trained segmentation network.
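The assembly of first training samples described above (each sample surface image paired with its calibrated per-package vertex coordinates) can be sketched as follows. The function name and the pairing format are illustrative assumptions, not part of the patent:

```python
def build_first_training_samples(top_images, top_vertex_labels,
                                 side_images=None, side_vertex_labels=None):
    """Pair each sample surface image with its calibrated vertex coordinates.

    Each first training sample is (image, list of per-package vertex lists),
    where every vertex is a (u, v) pixel coordinate. Side-surface samples
    are optional, reflecting the "and/or" in the text above.
    """
    samples = list(zip(top_images, top_vertex_labels))
    if side_images is not None and side_vertex_labels is not None:
        samples += list(zip(side_images, side_vertex_labels))
    return samples
```

The resulting list would then be fed to whatever training loop the initial segmentation network uses.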
Further, the method further comprises:
processing the sample top surface images according to the pre-trained segmentation network to obtain the actual vertex coordinates of the package of each item of sample goods in the sample top surface images;
processing the sample side surface images according to the pre-trained segmentation network to obtain the actual vertex coordinates of the package of each item of sample goods in the sample side surface images;
segmenting the sample top surface image according to the actual vertex coordinates of the package of each item of sample goods in the sample top surface image to obtain the sample package top surface image of each item of sample goods;
segmenting the sample side surface image according to the actual vertex coordinates of the package of each item of sample goods in the sample side surface image to obtain the sample package side surface image of each item of sample goods;
taking the sample package top surface image and the specification information of the corresponding sample goods as a second training sample, and/or taking the sample package side surface image and the specification information of the corresponding sample goods as a second training sample;
and training a neural network by using each second training sample to obtain the pre-trained recognition model.
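A minimal sketch of building the second training samples, assuming the segmentation network's output is already available as per-package vertex lists and that all packages on one sample pallet share the same specification label (names and data layout are illustrative):

```python
import numpy as np

def build_second_training_samples(sample_images, predicted_vertices, spec_labels):
    """Build (package crop, specification) pairs for the recognition model.

    sample_images      : list of whole-pallet surface images (H x W arrays)
    predicted_vertices : per image, a list of vertex lists, one per package,
                         each vertex being a (u, v) pixel coordinate
    spec_labels        : per image, the specification of the goods on it
    """
    samples = []
    for image, packages, spec in zip(sample_images, predicted_vertices, spec_labels):
        for vertices in packages:
            us = [u for u, _ in vertices]
            vs = [v for _, v in vertices]
            # Axis-aligned bounding-box crop: rows indexed by v, columns by u.
            crop = image[min(vs):max(vs), min(us):max(us)]
            samples.append((crop, spec))
    return samples
```

Each crop/label pair would then be used to train the recognition network in the usual supervised fashion.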
According to the goods specification information identification method provided by the embodiment of the invention, when a whole pallet of goods is put into or taken out of a warehouse, a first camera is controlled to shoot the top surface of the whole-pallet goods and a second camera is controlled to shoot the side surface, so that a top surface image and a side surface image of the whole-pallet goods are obtained respectively, wherein the packages of all goods in the whole-pallet goods are the same; the top surface image and the side surface image are segmented respectively to obtain a package top surface image and a package side surface image of each item of goods; any one of the package top surface images is selected as a target top surface image and any one of the package side surface images as a target side surface image; and the specification information of the whole-pallet goods is determined according to the target top surface image, the target side surface image and a pre-trained recognition model. Thus, the specification information of the whole-pallet goods can be identified automatically and rapidly from only the top surface image and the side surface image; the identification accuracy is high, external interference is small, a large amount of labor cost is saved, and the efficiency of goods entering and leaving the warehouse is improved. At the same time, omission of goods information can be effectively avoided, warehouse records are kept consistent with the actual stock, and the method plays a vital role in sorting and reasonably storing goods after warehousing. It can cope with the complex warehouse environment and effectively acquire goods information against the prospect of building digital, intelligent and unmanned warehousing, and thus has great theoretical and practical value.
In order to achieve the above object, an embodiment of a second aspect of the present invention provides a goods specification information identification device, including:
an image acquisition module, configured to control a first camera to shoot the top surface of a whole pallet of goods and a second camera to shoot the side surface of the whole-pallet goods when the whole-pallet goods are put into or taken out of a warehouse, so as to obtain a top surface image and a side surface image of the whole-pallet goods respectively, wherein the packages of all goods in the whole-pallet goods are the same;
an image segmentation module, configured to segment the top surface image and the side surface image respectively to obtain a package top surface image and a package side surface image of each item of goods;
a selection module, configured to select any one of the package top surface images as a target top surface image and any one of the package side surface images as a target side surface image;
and a determining module, configured to determine the specification information of the whole-pallet goods according to the target top surface image, the target side surface image and a pre-trained recognition model.
Further, the determining module is specifically configured to:
identify the target top surface image according to the pre-trained recognition model, and determine the specification information corresponding to the target top surface image;
identify the target side surface image according to the pre-trained recognition model, and determine the specification information corresponding to the target side surface image;
and compare the specification information corresponding to the target top surface image with the specification information corresponding to the target side surface image, and determine the specification information of the whole-pallet goods according to the comparison result.
Further, the image segmentation module is specifically configured to:
process the top surface image and the side surface image respectively according to a pre-trained segmentation network to obtain the vertex coordinates of the package of each item of goods in the top surface image and in the side surface image;
segment the top surface image according to the vertex coordinates of the package of each item of goods in the top surface image to obtain the package top surface image of each item of goods;
and segment the side surface image according to the vertex coordinates of the package of each item of goods in the side surface image to obtain the package side surface image of each item of goods.
Further, the apparatus further comprises: a first training module;
the first training module is to:
obtaining sample data of at least one sample whole pallet of goods, wherein the sample data comprises a sample top surface image and a sample side surface image of the sample whole-pallet goods, and the sample whole-pallet goods comprises at least one item of sample goods;
calibrating the vertex coordinates of the package of each item of sample goods in the corresponding sample top surface image and in the corresponding sample side surface image;
taking the sample top surface image and the calibrated vertex coordinates of the packages of the corresponding sample goods in the sample top surface image as a first training sample, and/or taking the sample side surface image and the calibrated vertex coordinates of the packages of the corresponding sample goods in the sample side surface image as a first training sample;
and training an initial segmentation network by using each first training sample to obtain the pre-trained segmentation network.
Further, the apparatus further comprises: a second training module;
the second training module is to:
processing the sample top surface images according to the pre-trained segmentation network to obtain the actual vertex coordinates of the package of each item of sample goods in the sample top surface images;
processing the sample side surface images according to the pre-trained segmentation network to obtain the actual vertex coordinates of the package of each item of sample goods in the sample side surface images;
segmenting the sample top surface image according to the actual vertex coordinates of the package of each item of sample goods in the sample top surface image to obtain the sample package top surface image of each item of sample goods;
segmenting the sample side surface image according to the actual vertex coordinates of the package of each item of sample goods in the sample side surface image to obtain the sample package side surface image of each item of sample goods;
taking the sample package top surface image and the specification information of the corresponding sample goods as a second training sample, and/or taking the sample package side surface image and the specification information of the corresponding sample goods as a second training sample;
and training a neural network by using each second training sample to obtain the pre-trained recognition model.
According to the goods specification information identification device provided by the embodiment of the invention, when a whole pallet of goods is put into or taken out of a warehouse, a first camera is controlled to shoot the top surface of the whole-pallet goods and a second camera is controlled to shoot the side surface, so that a top surface image and a side surface image of the whole-pallet goods are obtained respectively, wherein the packages of all goods in the whole-pallet goods are the same; the top surface image and the side surface image are segmented respectively to obtain a package top surface image and a package side surface image of each item of goods; any one of the package top surface images is selected as a target top surface image and any one of the package side surface images as a target side surface image; and the specification information of the whole-pallet goods is determined according to the target top surface image, the target side surface image and a pre-trained recognition model. Thus, the specification information of the whole-pallet goods can be identified automatically and rapidly from only the top surface image and the side surface image; the identification accuracy is high, external interference is small, a large amount of labor cost is saved, and the efficiency of goods entering and leaving the warehouse is improved. At the same time, omission of goods information can be effectively avoided, warehouse records are kept consistent with the actual stock, and the device plays a vital role in sorting and reasonably storing goods after warehousing. It can cope with the complex warehouse environment and effectively acquire goods information against the prospect of building digital, intelligent and unmanned warehousing, and thus has great theoretical and practical value.
To achieve the above object, an embodiment of a third aspect of the present invention provides a computer device, including: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the above goods specification information identification method when executing the program.
In order to achieve the above object, an embodiment of a fourth aspect of the present invention provides a computer-readable storage medium, on which a computer program is stored, and the program, when executed by a processor, implements the goods specification information identification method described above.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flow chart of a goods specification information identification method according to an embodiment of the present invention;
FIG. 2 is an exemplary top surface image;
fig. 3 is a schematic flow chart of a goods specification information identification method according to another embodiment of the present invention;
fig. 4 is a schematic flow chart of a goods specification information identification method according to a further embodiment of the present invention;
fig. 5 is a schematic structural diagram of a goods specification information identification device according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
The method and apparatus for identifying the specification information of a good according to the embodiment of the present invention will be described with reference to the drawings.
Fig. 1 is a schematic flow chart of a goods specification information identification method according to an embodiment of the present invention. This embodiment provides a goods specification information identification method, whose execution subject is a goods specification information identification device composed of hardware and/or software.
As shown in fig. 1, the method for identifying the specification information of the goods includes the following steps:
s101, when the whole-support goods are put in or taken out of a warehouse, a first camera is controlled to shoot the top surface of the whole-support goods, a second camera is controlled to shoot the side surface of the whole-support goods, and the top surface image and the side surface image of the whole-support goods are obtained respectively, wherein the packages of all goods in the whole-support goods are the same.
Specifically, units such as factories and logistics distribution centers have strict requirements on the warehousing management of goods, and the identification and registration of goods specification information plays an important role in detecting and identifying information of goods entering and leaving the warehouse, for example in unmanned warehousing and intelligent logistics.
Generally, in the warehousing link, goods enter or leave the warehouse by the whole pallet, the number of goods is large, and entries and exits are frequent. In order to manage the goods effectively, in this embodiment, when a whole pallet of goods is put into or taken out of the warehouse, the top surface and the side surface of the whole-pallet goods are photographed to obtain a top surface image and a side surface image, and the specification information of the goods can then be identified automatically and quickly from these two images. The identification accuracy is high, external interference is small, a large amount of labor cost is saved, and the efficiency of goods entering and leaving the warehouse is improved.
In this embodiment, a first camera and a second camera are installed according to an actual field environment of a factory, a logistics distribution center, or other units, and the type of the camera is selected according to the size of a monitored area, the detail requirement of a monitored picture, or the like, as long as the first camera can capture the top surface of a whole pallet of goods and the image quality of the captured top surface image meets the requirement, and as long as the second camera can capture the side surface of the whole pallet of goods and the image quality of the captured side surface image meets the requirement. It should be noted that the first camera and the second camera may be integrated together, or may be two separate cameras.
In this embodiment, when the whole-pallet goods are put into or taken out of the warehouse, the first camera is controlled to capture a top surface image of the whole-pallet goods in the monitored area, and the second camera is controlled to capture a side surface image of the whole-pallet goods in the monitored area. For example, a controller with an ultrasonic ranging function is deployed in the field; the controller detects whether whole-pallet goods are entering or leaving the warehouse by emitting ultrasonic waves, and when this is detected, it sends start signals to the first camera and the second camera, which are then started to shoot the whole-pallet goods in the monitored area.
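The trigger logic of such an ultrasonic controller can be sketched as follows; the threshold values, function names, and the re-arming scheme are illustrative assumptions rather than details given in the patent:

```python
def should_trigger(distance_cm, baseline_cm, margin_cm=20.0):
    """Decide whether a pallet has entered the monitored area.

    The controller continuously measures the distance to the nearest
    obstacle by ultrasound; a reading well below the empty-aisle
    baseline indicates a pallet passing through, at which point start
    signals would be sent to the two cameras.
    """
    return distance_cm < baseline_cm - margin_cm

def control_loop(readings, baseline_cm=300.0):
    """Return the indices of readings at which the cameras would be started."""
    triggers = []
    armed = True
    for i, d in enumerate(readings):
        if armed and should_trigger(d, baseline_cm):
            triggers.append(i)   # send start signal to camera 1 and camera 2
            armed = False        # avoid retriggering on the same pallet
        elif not should_trigger(d, baseline_cm):
            armed = True         # aisle clear again; re-arm for the next pallet
    return triggers
```

In a real deployment the loop would read from the ranging sensor and signal the cameras over whatever interface they expose.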
And S102, segmenting the top surface image and the side surface image respectively to obtain a package top surface image and a package side surface image of each item of goods.
Specifically, besides facilitating the storage, transportation and display of goods, the package also serves to highlight the characteristics of the goods. For example, the components of a package include a trademark, a brand name, a package shape, a package pattern and a product label, and the specification information of the goods can be recognized by recognizing these components. Therefore, in order to improve the accuracy of identifying the specification information, after the top surface image and the side surface image are obtained, the package top surface image of each item of goods is extracted from the top surface image, the package side surface image of each item of goods is extracted from the side surface image, and the specification information is then identified based on the package top surface image and the package side surface image.
Fig. 2 is an exemplary top surface image. Taking fig. 2 as an example, it shows 14 items of goods with the same package; a bounding box of each item of goods in the top surface image is determined, and the top surface image is segmented according to the bounding boxes to obtain the package top surface image corresponding to each item of goods. Similarly, a bounding box of each item of goods in the side surface image is determined, and the side surface image is segmented according to the bounding boxes to obtain the package side surface image corresponding to each item of goods.
In order to segment the top surface image and the side surface image quickly and accurately and to improve the efficiency and accuracy of identifying the goods specification information, the image segmentation is performed by a pre-trained segmentation network, and the specific implementation of step S102 is as follows:
and S1021, respectively processing the top surface image and the side surface image according to a pre-trained segmentation network to obtain the vertex coordinates of the packages of the goods in the top surface image and the vertex coordinates of the packages of the goods in the side surface image.
And S1022, segmenting the top surface image according to the vertex coordinates of the packages of the cargos in the top surface image to obtain the package top surface image of each cargo.
And S1023, segmenting the side image according to the vertex coordinates of the package of each cargo in the side image to obtain the package side image of each cargo.
In this embodiment, the pre-trained segmentation network is obtained by training a large number of samples, and is capable of detecting an object in an input image, locating a position of the object in the image, determining vertex coordinates of the object in the image according to the position of the object in the image, and segmenting a corresponding object image from the input image according to the vertex coordinates.
A uv pixel plane coordinate system is briefly introduced here, which is a coordinate system established by taking the upper left corner of an image as an origin and taking pixels as a unit, and the abscissa u and the ordinate v of the pixels are the number of columns and the number of rows in the image array, respectively.
In this embodiment, in the pixel plane coordinate system corresponding to the top surface image, the rectangular bounding box corresponding to the package of each item of goods can be determined according to the vertex coordinates of the package in the top surface image; the image area enclosed by the rectangular bounding box is taken as the package top surface image of that item of goods and extracted, so that the package top surface image is segmented from the top surface image. In the same way, in the pixel plane coordinate system corresponding to the side surface image, the rectangular bounding box corresponding to the package of each item of goods can be determined according to the vertex coordinates of the package in the side surface image; the image area enclosed by the rectangular bounding box is taken as the package side surface image of that item of goods and extracted, so that the package side surface image is segmented from the side surface image.
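The per-package crop in the uv pixel plane coordinate system described above can be sketched as follows; the function name is illustrative, and the crop is the axis-aligned rectangular bounding box of the predicted vertices:

```python
import numpy as np

def crop_package(image, vertices):
    """Crop one package region from a top- or side-surface image.

    `vertices` are (u, v) pixel coordinates of the package corners, where
    u is the column index and v is the row index (origin at the top-left
    corner of the image, as in the uv pixel plane coordinate system).
    """
    us = [u for u, _ in vertices]
    vs = [v for _, v in vertices]
    u_min, u_max = max(min(us), 0), min(max(us), image.shape[1])
    v_min, v_max = max(min(vs), 0), min(max(vs), image.shape[0])
    # Rows are indexed by v (ordinate), columns by u (abscissa).
    return image[v_min:v_max, u_min:u_max]
```

Applying this to every vertex list returned by the segmentation network yields the package top surface image or package side surface image of each item of goods.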
S103, selecting any one of the package top surface images as a target top surface image, and any one of the package side surface images as a target side surface image.
And S104, determining the specification information of the whole-pallet goods according to the target top surface image, the target side surface image and a pre-trained recognition model.
In this embodiment, the pre-trained recognition model is obtained by training a large number of samples, and can automatically, quickly and accurately recognize the specification information of the input picture.
In practice, goods with different specification information have different packages, but those packages may still share the same top surface or the same side surface. That is, the top surfaces of the packages of goods with different specification information may be the same or different, and likewise for the side surfaces; however, goods with different specification information cannot have both the same top surface and the same side surface. For example, different series of diapers of the same brand may have the same package top surface but different side surfaces, or the same side surfaces but different top surfaces. Therefore, in this embodiment, the pre-trained recognition model is used to recognize the target top surface image and the target side surface image separately, and the two recognition results are fused and analyzed to obtain the specification information of the whole-pallet goods.
In one possible implementation, step S104 is implemented as follows:
S1041, recognizing the target top surface image according to the pre-trained recognition model, and determining the specification information corresponding to the target top surface image.
S1042, recognizing the target side surface image according to the pre-trained recognition model, and determining the specification information corresponding to the target side surface image.
S1043, comparing the specification information corresponding to the target top surface image with the specification information corresponding to the target side surface image, and determining the specification information of the whole-pallet goods according to the comparison result.
In this embodiment, the pre-trained recognition model recognizes the target top surface image and the target side surface image, the recognized specification information is compared, and the specification information of the whole-pallet goods is determined according to the comparison result; this avoids recognition errors as far as possible and improves the recognition accuracy of the goods specification information. During comparison, if a piece of specification information corresponds to both the target top surface image and the target side surface image, that shared specification information is determined as the specification information of the whole-pallet goods. For example, the specification information corresponding to the target top surface image may be specification 1, specification 2 and specification 3, while the specification information corresponding to the target side surface image is specification 1; both results contain specification 1, so specification 1 is the specification information of the whole-pallet goods.
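The comparison in step S1043 amounts to intersecting the two candidate sets. The following sketch is illustrative; the function name and the return conventions for the conflict/ambiguous cases are assumptions, since the patent only states that the shared specification information is kept:

```python
def fuse_specifications(top_specs, side_specs):
    """Return the specification(s) common to both recognition results.

    top_specs / side_specs: iterables of candidate specification labels
    produced by the recognition model for the target top and side images.
    """
    common = set(top_specs) & set(side_specs)
    if len(common) == 1:
        return common.pop()   # unambiguous match, e.g. "specification 1"
    if not common:
        return None           # results conflict: flag for manual review
    return sorted(common)     # still ambiguous: more than one candidate
```

With the example above, fusing `["spec 1", "spec 2", "spec 3"]` and `["spec 1"]` yields `"spec 1"`.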
According to the goods specification information identification method provided by the embodiment of the invention, when the whole-pallet goods are put into or taken out of a warehouse, a first camera is controlled to shoot the top surface of the whole-pallet goods and a second camera is controlled to shoot the side surface, so that a top surface image and a side surface image of the whole-pallet goods are obtained, the packages of all goods in the whole-pallet goods being the same; the top surface image and the side surface image are segmented to obtain a package top surface image and a package side surface image of each good; any one of the package top surface images is selected as the target top surface image and any one of the package side surface images as the target side surface image; and the specification information of the whole-pallet goods is determined according to the target top surface image, the target side surface image and a pre-trained recognition model. The specification information of the whole-pallet goods can thus be identified automatically and quickly from just the top surface image and the side surface image, with high accuracy and little external interference, saving a great deal of labor cost and improving the efficiency of goods entering and leaving the warehouse. At the same time, omission of goods information is effectively avoided, the warehouse records stay consistent with the actual stock, and sorting and reasonable storage of goods after warehousing are supported; the method can cope with the complex environment of a warehouse and effectively acquire goods information, which is of great theoretical and practical value for building digital, intelligent and unmanned warehousing.
Fig. 3 is a schematic flowchart of a goods specification information identification method according to another embodiment of the present invention. This embodiment explains the training process of the segmentation network. With reference to fig. 3, on the basis of the embodiment shown in fig. 1, the method may further include the following steps:
S201, obtaining sample data of at least one sample whole-pallet goods, wherein the sample data comprises a sample top surface image and a sample side surface image of the sample whole-pallet goods, and the sample whole-pallet goods comprises at least one sample good.
In this embodiment, different sample whole-pallet goods have different specification information, and the sample goods within one sample whole-pallet goods all have the same specification information. It can be understood that the more sample whole-pallet goods with different specification information are used, the higher the accuracy of the trained segmentation network. For example, the at least one sample whole-pallet goods may be brand A diapers, brand B diapers, brand C laundry detergent, brand D mops, brand E rice cookers, and the like.
In this embodiment, the top surface of each sample whole-pallet goods is shot to obtain its sample top surface image, and the side surface of each sample whole-pallet goods is shot to obtain its sample side surface image.
S202, calibrating the calibration vertex coordinates of the package of each sample good in the corresponding sample top surface image and in the corresponding sample side surface image.
In this embodiment, an image annotation tool such as Labelme or LabelImg may be used to calibrate the calibration vertex coordinates of the package of each sample good in the corresponding sample top surface image and sample side surface image. Specifically, the bounding box of the package of each sample good is located in the corresponding sample top surface image and its four vertex coordinates are calibrated; likewise, the bounding box of the package of each sample good is located in the corresponding sample side surface image and its four vertex coordinates are calibrated.
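Assuming the annotations are saved in Labelme's standard JSON layout (a `shapes` list whose entries carry a `points` array of vertex coordinates), the calibrated coordinates could be read back as follows; the helper name is illustrative:

```python
import json

def load_calibrated_vertices(annotation_path):
    """Read calibrated vertex coordinates from a Labelme JSON file.

    Assumes each package was annotated as one four-point shape, which is
    Labelme's standard "shapes"/"points" layout; returns one list of
    [x, y] vertices per annotated package.
    """
    with open(annotation_path, "r", encoding="utf-8") as f:
        annotation = json.load(f)
    return [shape["points"] for shape in annotation.get("shapes", [])]
```

The returned vertex lists serve as the expected outputs when assembling the first training samples.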
S203, taking a sample top surface image and the calibration vertex coordinates of the packages of the corresponding sample goods in that image as a first training sample, and/or taking a sample side surface image and the calibration vertex coordinates of the packages of the corresponding sample goods in that image as a first training sample.
In this embodiment, if the first training sample is a sample top surface image and the calibration vertex coordinates of the package of each corresponding sample good in that image, the sample top surface image is used as the input of the initial segmentation network, and the calibration vertex coordinates are used as the expected output of the initial segmentation network. If the first training sample is a sample side surface image and the calibration vertex coordinates of the package of each corresponding sample good in that image, the sample side surface image is used as the input of the initial segmentation network, and the calibration vertex coordinates are used as the expected output of the initial segmentation network.
S204, training the initial segmentation network with each first training sample to obtain the pre-trained segmentation network.
In this embodiment, the segmentation network may be a Convolutional Neural Network (CNN) or a Fully Convolutional Network (FCN), but is not limited thereto. CNNs are powerful because their multi-layer structure learns features automatically, at multiple levels: shallower convolutional layers have smaller receptive fields and learn local features, while deeper convolutional layers have larger receptive fields and learn more abstract features. These abstract features are less sensitive to the size, position and orientation of an object, which helps improve recognition performance. Compared with conventional CNN-based image segmentation, an FCN has two significant advantages: first, it accepts input images of any size, without requiring all training and test images to be the same size; second, it is more efficient, because it avoids the repeated storage and convolution computation caused by operating on pixel blocks.
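The claim that deeper layers "see" more of the image can be made concrete with the standard receptive-field recurrence for a stack of convolution/pooling layers (this is general CNN arithmetic, not taken from the patent):

```python
def receptive_field(layers):
    """Receptive field of a stack of conv/pool layers, shallow to deep.

    layers: list of (kernel_size, stride) tuples.
    Standard recurrence: rf grows by (k - 1) * jump per layer, and the
    jump (input pixels per output step) multiplies by each stride.
    """
    rf, jump = 1, 1
    for kernel, stride in layers:
        rf += (kernel - 1) * jump
        jump *= stride
    return rf
```

For example, two stacked 3x3 convolutions already cover a 5x5 region, and inserting a stride-2 pooling layer makes subsequent layers grow the receptive field twice as fast.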
In this embodiment, in the process of training the initial segmentation network by using each first training sample, a BP (Back Propagation) algorithm or an SGD (Stochastic Gradient Descent) algorithm may be used to adjust network parameters of the segmentation network.
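As an illustration of adjusting network parameters with SGD, here is a toy update step for a linear regressor mapping a feature vector to vertex coordinates. The real segmentation network is a CNN/FCN trained by back-propagation, so this is only a sketch of the parameter-update rule, not the patent's network:

```python
import numpy as np

def sgd_step(weights, x, y_true, lr=0.01):
    """One stochastic-gradient-descent update for a linear vertex regressor.

    The model predicts coordinates as y = W @ x; the gradient of the
    squared error 0.5 * ||W @ x - y_true||^2 with respect to W is
    (W @ x - y_true) x^T, which is stepped against with learning rate lr.
    """
    y_pred = weights @ x
    error = y_pred - y_true          # dL/dy_pred
    grad = np.outer(error, x)        # chain rule: dL/dW = error * x^T
    return weights - lr * grad

# Training then repeats this update over every first training sample.
```

Repeating the step drives the predicted coordinates toward the calibrated ("expected output") coordinates.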
According to the goods specification information identification method provided by the embodiment of the invention, image segmentation is performed by the trained segmentation network, so the top surface image and the side surface image can be segmented quickly and accurately, improving both the efficiency and the accuracy of goods specification information identification.
Fig. 4 is a flowchart of a goods specification information identification method according to another embodiment of the present invention. This embodiment explains the training process of the recognition model. With reference to fig. 4, on the basis of the embodiment shown in fig. 1 or fig. 3, the method may further include the following steps:
S301, processing the sample top surface images according to the pre-trained segmentation network to obtain the actual vertex coordinates of the package of each sample good in the sample top surface images.
In this embodiment, the top surfaces of sample whole-pallet goods with different specification information are photographed to obtain a sample top surface image of each sample whole-pallet goods. It can be understood that the more sample whole-pallet goods with different specification information are used, the higher the accuracy of the trained recognition model.
After a sample top surface image is obtained, it is input into the pre-trained segmentation network, which outputs the actual vertex coordinates of the package of each sample good in the sample top surface image.
S302, processing the sample side surface images according to the pre-trained segmentation network to obtain the actual vertex coordinates of the package of each sample good in the sample side surface images.
In this embodiment, the side surfaces of sample whole-pallet goods with different specification information are photographed to obtain a sample side surface image of each sample whole-pallet goods. It can be understood that the more sample whole-pallet goods with different specification information are used, the higher the accuracy of the trained recognition model.
After a sample side surface image is obtained, it is input into the pre-trained segmentation network, which outputs the actual vertex coordinates of the package of each sample good in the sample side surface image.
S303, segmenting each sample top surface image according to the actual vertex coordinates of the package of each sample good in that image to obtain the sample package top surface image of each sample good.
In this embodiment, in the pixel plane coordinate system corresponding to the sample top surface image, the rectangular bounding box corresponding to the package of each sample good may be determined according to the actual vertex coordinates of the package in the sample top surface image; the image area enclosed by the rectangular bounding box is taken as the sample package top surface image of the sample good and extracted from the sample top surface image.
S304, segmenting each sample side surface image according to the actual vertex coordinates of the package of each sample good in that image to obtain the sample package side surface image of each sample good.
In this embodiment, in the pixel plane coordinate system corresponding to the sample side surface image, the rectangular bounding box corresponding to the package of each sample good may be determined according to the actual vertex coordinates of the package in the sample side surface image; the image area enclosed by the rectangular bounding box is taken as the sample package side surface image of the sample good and extracted from the sample side surface image.
S305, taking a sample package top surface image and the specification information of the corresponding sample good as a second training sample, and/or taking a sample package side surface image and the specification information of the corresponding sample good as a second training sample.
S306, training the neural network by using each second training sample to obtain the pre-trained recognition model.
In this embodiment, if the second training sample is a sample package top surface image and the specification information of the corresponding sample good, the sample package top surface image is used as the input of the neural network to be trained, and the specification information is used as its expected output. If the second training sample is a sample package side surface image and the specification information of the corresponding sample good, the sample package side surface image is used as the input of the neural network to be trained, and the specification information is used as its expected output.
In this embodiment, the neural network may be a Convolutional Neural Network (CNN), but is not limited thereto. In the process of training the neural network with each second training sample, a BP (Back Propagation) algorithm or an SGD (Stochastic Gradient Descent) algorithm may be used to adjust the network parameters of the neural network.
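To illustrate how the trained recognition model's output could yield one or several candidate specifications (as in the "specification 1, 2, 3" example earlier), here is a hedged sketch: the softmax-with-relative-threshold rule is an assumption of this sketch, since the patent does not specify how multiple candidates arise:

```python
import numpy as np

def predict_specification(logits, spec_labels, threshold=0.5):
    """Map a recognition network's output scores to specification labels.

    logits: raw per-class scores from the network's final layer.
    spec_labels: candidate specification labels, one per class.
    Keeps every label whose softmax probability reaches at least
    `threshold` times the best probability, so more than one candidate
    may survive when the network is uncertain.
    """
    exp = np.exp(logits - np.max(logits))   # numerically stable softmax
    probs = exp / exp.sum()
    top = np.max(probs)
    return [label for label, p in zip(spec_labels, probs) if p >= threshold * top]
```

The lists returned for the target top and side images would then be fused by the comparison step of S1043.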
According to the goods specification information identification method provided by the embodiment of the invention, the specification information of the goods is recognized automatically by the trained recognition model, which improves the recognition accuracy of the goods specification information.
The embodiment of the invention further provides a goods specification information identification device. Fig. 5 is a schematic structural diagram of a goods specification information identification device according to an embodiment of the present invention. As shown in fig. 5, the device includes: an image acquisition module 11, an image segmentation module 12, a selection module 13 and a determination module 14.
The image acquisition module 11 is configured to control a first camera to shoot the top surface of the whole-pallet goods and a second camera to shoot the side surface of the whole-pallet goods when the whole-pallet goods enter or leave the warehouse, so as to obtain a top surface image and a side surface image of the whole-pallet goods respectively, wherein the packages of the goods in the whole-pallet goods are the same;
an image segmentation module 12, configured to segment the top surface image and the side surface image respectively to obtain a package top surface image and a package side surface image of each good;
a selection module 13, configured to select any one of the package top surface images as the target top surface image, and any one of the package side surface images as the target side surface image;
and a determination module 14, configured to determine the specification information of the whole-pallet goods according to the target top surface image, the target side surface image and a pre-trained recognition model.
Further, the determining module 14 is specifically configured to:
recognizing the target top surface image according to the pre-trained recognition model, and determining the specification information corresponding to the target top surface image;
recognizing the target side surface image according to the pre-trained recognition model, and determining the specification information corresponding to the target side surface image;
and comparing the specification information corresponding to the target top surface image with the specification information corresponding to the target side surface image, and determining the specification information of the whole-pallet goods according to the comparison result.
Further, the image segmentation module 12 is specifically configured to:
processing the top surface image and the side surface image respectively according to a pre-trained segmentation network to obtain the vertex coordinates of the packages of the goods in the top surface image and the vertex coordinates of the packages of the goods in the side surface image;
segmenting the top surface image according to the vertex coordinates of the packages of the goods in the top surface image to obtain the package top surface image of each good;
and segmenting the side surface image according to the vertex coordinates of the packages of the goods in the side surface image to obtain the package side surface image of each good.
Further, the apparatus further comprises: a first training module;
the first training module is to:
obtaining sample data of at least one sample whole-pallet goods, wherein the sample data comprises a sample top surface image and a sample side surface image of the sample whole-pallet goods, and the sample whole-pallet goods comprises at least one sample good;
calibrating the calibration vertex coordinates of the package of each sample good in the corresponding sample top surface image and in the corresponding sample side surface image;
taking a sample top surface image and the calibration vertex coordinates of the packages of the corresponding sample goods in that image as a first training sample, and/or taking a sample side surface image and the calibration vertex coordinates of the packages of the corresponding sample goods in that image as a first training sample;
and training an initial segmentation network with each first training sample to obtain the pre-trained segmentation network.
Further, the apparatus further comprises: a second training module;
the second training module is to:
processing the sample top surface images according to the pre-trained segmentation network to obtain the actual vertex coordinates of the package of each sample good in the sample top surface images;
processing the sample side surface images according to the pre-trained segmentation network to obtain the actual vertex coordinates of the package of each sample good in the sample side surface images;
segmenting each sample top surface image according to the actual vertex coordinates of the package of each sample good in that image to obtain the sample package top surface image of each sample good;
segmenting each sample side surface image according to the actual vertex coordinates of the package of each sample good in that image to obtain the sample package side surface image of each sample good;
taking a sample package top surface image and the specification information of the corresponding sample good as a second training sample, and/or taking a sample package side surface image and the specification information of the corresponding sample good as a second training sample;
and training the neural network with each second training sample to obtain the pre-trained recognition model.
It should be noted that the explanation of the embodiments of the goods specification information identification method also applies to the goods specification information identification device of this embodiment, and details are not repeated here.
According to the goods specification information identification device provided by the embodiment of the invention, when the whole-pallet goods are put into or taken out of a warehouse, a first camera is controlled to shoot the top surface of the whole-pallet goods and a second camera is controlled to shoot the side surface, so that a top surface image and a side surface image of the whole-pallet goods are obtained, the packages of all goods in the whole-pallet goods being the same; the top surface image and the side surface image are segmented to obtain a package top surface image and a package side surface image of each good; any one of the package top surface images is selected as the target top surface image and any one of the package side surface images as the target side surface image; and the specification information of the whole-pallet goods is determined according to the target top surface image, the target side surface image and a pre-trained recognition model. The specification information of the whole-pallet goods can thus be identified automatically and quickly from just the top surface image and the side surface image, with high accuracy and little external interference, saving a great deal of labor cost and improving the efficiency of goods entering and leaving the warehouse. At the same time, omission of goods information is effectively avoided, the warehouse records stay consistent with the actual stock, and sorting and reasonable storage of goods after warehousing are supported; the device can cope with the complex environment of a warehouse and effectively acquire goods information, which is of great theoretical and practical value for building digital, intelligent and unmanned warehousing.
Fig. 6 is a schematic structural diagram of a computer device according to an embodiment of the present invention. The computer device includes:
memory 1001, processor 1002, and computer programs stored on memory 1001 and executable on processor 1002.
The processor 1002, when executing the program, implements the goods specification information identification method provided in the above embodiments.
Further, the computer device further comprises:
a communication interface 1003 for communicating between the memory 1001 and the processor 1002.
A memory 1001 for storing computer programs that may be run on the processor 1002.
Memory 1001 may include high-speed RAM memory and may also include non-volatile memory (e.g., at least one disk memory).
The processor 1002 is configured to implement the method for identifying the specification information of the goods according to the foregoing embodiment when executing the program.
If the memory 1001, the processor 1002, and the communication interface 1003 are implemented independently, the communication interface 1003, the memory 1001, and the processor 1002 may be connected to each other through a bus and perform communication with each other. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 6, but this is not intended to represent only one bus or type of bus.
Optionally, in a specific implementation, if the memory 1001, the processor 1002, and the communication interface 1003 are integrated on one chip, the memory 1001, the processor 1002, and the communication interface 1003 may complete communication with each other through an internal interface.
The processor 1002 may be a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), or one or more Integrated circuits configured to implement embodiments of the present invention.
The present invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method for identifying the specification information of goods as described above.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (10)

1. A goods specification information identification method, characterized by comprising the following steps:
when whole-pallet goods enter or leave a warehouse, controlling a first camera to photograph the top surface of the whole-pallet goods and a second camera to photograph the side surface of the whole-pallet goods, so as to obtain a top surface image and a side surface image of the whole-pallet goods respectively, wherein the packages of all goods within the whole-pallet goods are identical, and goods with different specification information have different packages; a controller transmits ultrasonic waves to detect whether the whole-pallet goods are entering or leaving the warehouse, and when entry or exit is detected, sends start signals to the first camera and the second camera, which are started to photograph the whole-pallet goods in the monitored area;
segmenting the top surface image and the side surface image respectively to obtain a package top surface image and a package side surface image of each good, comprising:
processing the top surface image and the side surface image respectively with a pre-trained segmentation network to obtain the vertex coordinates of the package of each good in the top surface image and in the side surface image;
segmenting the top surface image according to the vertex coordinates of the packages in the top surface image to obtain the package top surface image of each good;
segmenting the side surface image according to the vertex coordinates of the packages in the side surface image to obtain the package side surface image of each good;
selecting any one of the package top surface images as a target top surface image and any one of the package side surface images as a target side surface image;
determining the specification information of the whole-pallet goods according to the target top surface image, the target side surface image, and a pre-trained recognition model, wherein the specification information of the whole-pallet goods is: the specification information on which the specification information corresponding to the target top surface image and the specification information corresponding to the target side surface image agree, as obtained from the target top surface image, the target side surface image, and the pre-trained recognition model.
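The segment-then-select steps of claim 1 can be sketched as follows. This is a minimal illustration, not the patented implementation: it assumes the segmentation network returns one four-point vertex quad per package, the function and variable names are hypothetical, and each quad is reduced to its axis-aligned bounding box before cropping.

```python
import numpy as np

def crop_packages(image, package_vertices):
    """Crop one sub-image per package from a top- or side-surface image.

    package_vertices: list of 4-point quads [(x, y), ...], as a segmentation
    network might produce; each quad is reduced to its bounding box here.
    """
    crops = []
    for quad in package_vertices:
        xs = [int(x) for x, _ in quad]
        ys = [int(y) for _, y in quad]
        x0, x1 = max(min(xs), 0), min(max(xs), image.shape[1])
        y0, y1 = max(min(ys), 0), min(max(ys), image.shape[0])
        crops.append(image[y0:y1, x0:x1])
    return crops

# Dummy 100x200 top-surface image with two hand-written package quads.
top_image = np.zeros((100, 200, 3), dtype=np.uint8)
vertices = [[(10, 10), (60, 10), (60, 40), (10, 40)],
            [(70, 10), (120, 10), (120, 40), (70, 40)]]
package_tops = crop_packages(top_image, vertices)
target_top = package_tops[0]  # claim 1 selects any one crop as the target
```

The same routine would be applied to the side-surface image to obtain the target side surface image.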
2. The method of claim 1, wherein determining the specification information of the whole-pallet goods according to the target top surface image, the target side surface image, and the pre-trained recognition model comprises:
recognizing the target top surface image with the pre-trained recognition model to determine the specification information corresponding to the target top surface image;
recognizing the target side surface image with the pre-trained recognition model to determine the specification information corresponding to the target side surface image;
and comparing the specification information corresponding to the target top surface image with the specification information corresponding to the target side surface image, and determining the specification information of the whole-pallet goods according to the comparison result.
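The comparison step of claim 2 amounts to a consensus check between the two views. A hedged sketch (the function name and the mismatch handling are assumptions, not specified by the patent):

```python
def resolve_specification(top_spec, side_spec):
    """Accept the whole-pallet specification only when the top-surface and
    side-surface recognitions agree; return None on a mismatch so the
    pallet can be re-shot or routed to manual handling."""
    return top_spec if top_spec == side_spec else None

agreed = resolve_specification("500 ml x 24", "500 ml x 24")
mismatch = resolve_specification("500 ml x 24", "330 ml x 24")
```

Requiring agreement between two independent views makes a single misrecognition detectable rather than silently propagated.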
3. The method of claim 1, further comprising:
obtaining sample data of at least one sample of whole-pallet goods, wherein the sample data comprises a sample top surface image and a sample side surface image of the sample whole-pallet goods, and the sample whole-pallet goods comprises at least one sample good;
calibrating the vertex coordinates of the package of each sample good in the corresponding sample top surface image and in the corresponding sample side surface image;
taking each sample top surface image together with the calibrated vertex coordinates of the sample-good packages in that image as a first training sample, and/or taking each sample side surface image together with the calibrated vertex coordinates of the sample-good packages in that image as a first training sample;
and training an initial segmentation network with the first training samples to obtain the pre-trained segmentation network.
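The pairing of sample images with hand-calibrated vertex coordinates in claim 3 might be assembled as below. This is a sketch only: the dictionary keys are assumed, and the actual architecture and training loop of the segmentation network are not specified by the claim.

```python
def build_first_training_samples(sample_pallets):
    """Each element of sample_pallets is assumed to be a dict holding a
    'top_image' / 'side_image' plus the hand-calibrated package vertex quads
    'top_vertices' / 'side_vertices' for every package visible in it."""
    samples = []
    for pallet in sample_pallets:
        # Each view contributes one (image, calibrated-vertices) pair.
        samples.append((pallet["top_image"], pallet["top_vertices"]))
        samples.append((pallet["side_image"], pallet["side_vertices"]))
    return samples

pairs = build_first_training_samples([
    {"top_image": "top0.png",
     "top_vertices": [[(0, 0), (9, 0), (9, 9), (0, 9)]],
     "side_image": "side0.png",
     "side_vertices": [[(0, 0), (9, 0), (9, 5), (0, 5)]]},
])
```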
4. The method of claim 3, further comprising:
processing the sample top surface images with the pre-trained segmentation network to obtain the actual vertex coordinates of the package of each sample good in the sample top surface images;
processing the sample side surface images with the pre-trained segmentation network to obtain the actual vertex coordinates of the package of each sample good in the sample side surface images;
segmenting the sample top surface images according to the actual vertex coordinates of the packages in the sample top surface images to obtain the sample package top surface image of each sample good;
segmenting the sample side surface images according to the actual vertex coordinates of the packages in the sample side surface images to obtain the sample package side surface image of each sample good;
taking each sample package top surface image together with the specification information of the corresponding sample good as a second training sample, and/or taking each sample package side surface image together with the specification information of the corresponding sample good as a second training sample;
and training a neural network with the second training samples to obtain the pre-trained recognition model.
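Claim 4's construction of the recognition-model training set — segment, crop, then label each crop with the pallet's specification — can be sketched as follows, assuming a trained segmentation function and a cropping helper are available. Both are passed in as plain callables here, and the stub implementations used for the example are illustrative only.

```python
def build_second_training_samples(images, spec_labels, segment_fn, crop_fn):
    """Run the (assumed) segmentation network on each sample image, crop one
    sub-image per detected package, and pair every crop with that pallet's
    specification label to form a recognition training sample."""
    samples = []
    for image, spec in zip(images, spec_labels):
        for crop in crop_fn(image, segment_fn(image)):
            samples.append((crop, spec))
    return samples

# Stubs standing in for the real segmentation network and crop routine.
stub_segment = lambda image: [None, None]  # pretend two packages were found
stub_crop = lambda image, quads: [f"{image}#{i}" for i in range(len(quads))]

dataset = build_second_training_samples(
    ["palletA.png"], ["500 ml x 24"], stub_segment, stub_crop)
```

Because every package on a pallet shares one package design, a single pallet-level label fans out to all of its crops, which is what makes this second training set cheap to produce.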
5. A goods specification information identification device, comprising:
an image acquisition module configured, when whole-pallet goods enter or leave a warehouse, to control a first camera to photograph the top surface of the whole-pallet goods and a second camera to photograph the side surface of the whole-pallet goods, so as to obtain a top surface image and a side surface image of the whole-pallet goods respectively, wherein the packages of all goods within the whole-pallet goods are identical, and goods with different specification information have different packages; a controller transmits ultrasonic waves to detect whether the whole-pallet goods are entering or leaving the warehouse, and when entry or exit is detected, sends start signals to the first camera and the second camera, which are started to photograph the whole-pallet goods in the monitored area;
an image segmentation module configured to segment the top surface image and the side surface image respectively to obtain a package top surface image and a package side surface image of each good, and specifically configured to:
process the top surface image and the side surface image respectively with a pre-trained segmentation network to obtain the vertex coordinates of the package of each good in the top surface image and in the side surface image;
segment the top surface image according to the vertex coordinates of the packages in the top surface image to obtain the package top surface image of each good;
segment the side surface image according to the vertex coordinates of the packages in the side surface image to obtain the package side surface image of each good;
a selection module configured to select any one of the package top surface images as a target top surface image and any one of the package side surface images as a target side surface image;
a determining module configured to determine the specification information of the whole-pallet goods according to the target top surface image, the target side surface image, and a pre-trained recognition model, wherein the specification information of the whole-pallet goods is: the specification information on which the specification information corresponding to the target top surface image and the specification information corresponding to the target side surface image agree, as obtained from the target top surface image, the target side surface image, and the pre-trained recognition model.
6. The apparatus of claim 5, wherein the determining module is specifically configured to:
recognize the target top surface image with the pre-trained recognition model to determine the specification information corresponding to the target top surface image;
recognize the target side surface image with the pre-trained recognition model to determine the specification information corresponding to the target side surface image;
and compare the specification information corresponding to the target top surface image with the specification information corresponding to the target side surface image, and determine the specification information of the whole-pallet goods according to the comparison result.
7. The apparatus of claim 5, further comprising: a first training module;
the first training module is configured to:
obtain sample data of at least one sample of whole-pallet goods, wherein the sample data comprises a sample top surface image and a sample side surface image of the sample whole-pallet goods, and the sample whole-pallet goods comprises at least one sample good;
calibrate the vertex coordinates of the package of each sample good in the corresponding sample top surface image and in the corresponding sample side surface image;
take each sample top surface image together with the calibrated vertex coordinates of the sample-good packages in that image as a first training sample, and/or take each sample side surface image together with the calibrated vertex coordinates of the sample-good packages in that image as a first training sample;
and train an initial segmentation network with the first training samples to obtain the pre-trained segmentation network.
8. The apparatus of claim 7, further comprising: a second training module;
the second training module is configured to:
process the sample top surface images with the pre-trained segmentation network to obtain the actual vertex coordinates of the package of each sample good in the sample top surface images;
process the sample side surface images with the pre-trained segmentation network to obtain the actual vertex coordinates of the package of each sample good in the sample side surface images;
segment the sample top surface images according to the actual vertex coordinates of the packages in the sample top surface images to obtain the sample package top surface image of each sample good;
segment the sample side surface images according to the actual vertex coordinates of the packages in the sample side surface images to obtain the sample package side surface image of each sample good;
take each sample package top surface image together with the specification information of the corresponding sample good as a second training sample, and/or take each sample package side surface image together with the specification information of the corresponding sample good as a second training sample;
and train a neural network with the second training samples to obtain the pre-trained recognition model.
9. A computer device, comprising:
a memory, a processor, and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the program, implements the goods specification information identification method according to any one of claims 1 to 4.
10. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the goods specification information identification method according to any one of claims 1 to 4.
CN201910118085.6A 2019-02-15 2019-02-15 Goods specification information identification method and device Active CN109919040B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910118085.6A CN109919040B (en) 2019-02-15 2019-02-15 Goods specification information identification method and device


Publications (2)

Publication Number Publication Date
CN109919040A CN109919040A (en) 2019-06-21
CN109919040B true CN109919040B (en) 2022-04-19

Family

ID=66961602

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910118085.6A Active CN109919040B (en) 2019-02-15 2019-02-15 Goods specification information identification method and device

Country Status (1)

Country Link
CN (1) CN109919040B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110443119B (en) * 2019-06-25 2021-11-30 中车工业研究院有限公司 Method and device for identifying state of goods in carriage
DE102019119138B4 (en) * 2019-07-15 2022-01-20 Deutsche Post Ag Determination of distribution and/or sorting information for the automated distribution and/or sorting of a shipment
CN111626982A (en) * 2020-04-13 2020-09-04 中国外运股份有限公司 Method and device for identifying batch codes of containers to be detected
CN111626983A (en) * 2020-04-13 2020-09-04 中国外运股份有限公司 Method and device for identifying quantity of goods to be detected
CN111626981A (en) * 2020-04-13 2020-09-04 中国外运股份有限公司 Method and device for identifying category of goods to be detected
CN114972931B (en) * 2022-08-03 2022-12-30 国连科技(浙江)有限公司 Goods storage method and device based on knowledge distillation

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101650780A (en) * 2009-09-08 2010-02-17 宁波中科集成电路设计中心有限公司 Identification method of container number
BR112012008408A2 (en) * 2010-06-01 2016-03-29 Milenium Espacio Soft S A method for recognizing objects, computer readable device, and computer program
CN106203239B (en) * 2015-05-04 2020-08-04 杭州海康威视数字技术股份有限公司 Information processing method, device and system for container tallying
CN107077659A (en) * 2016-09-26 2017-08-18 达闼科技(北京)有限公司 A kind of intelligent inventory management system, server, method, terminal and program product
CN108171750A (en) * 2016-12-08 2018-06-15 广州映博智能科技有限公司 The chest handling positioning identification system of view-based access control model
CN108520194A (en) * 2017-12-18 2018-09-11 上海云拿智能科技有限公司 Kinds of goods sensory perceptual system based on imaging monitor and kinds of goods cognitive method
CN108830213A (en) * 2018-06-12 2018-11-16 北京理工大学 Car plate detection and recognition methods and device based on deep learning

Also Published As

Publication number Publication date
CN109919040A (en) 2019-06-21

Similar Documents

Publication Publication Date Title
CN109919040B (en) Goods specification information identification method and device
US10776661B2 (en) Methods, systems and apparatus for segmenting and dimensioning objects
CN110603533A (en) Method and apparatus for object state detection
EP2551833A1 (en) Device for registering and managing book based on computer vision and radio frequency identification technique
JP7049983B2 (en) Object recognition device and object recognition method
CN109911481B (en) Cabin frame target visual identification and positioning method and system for metallurgical robot plugging
JP5780083B2 (en) Inspection device, inspection system, inspection method and program
US20070229280A1 (en) RFID tag reading rate
US20070115124A1 (en) Determining a state for object identified by an RFID tag
Rodriguez-Araujo et al. Field-programmable system-on-chip for localization of UGVs in an indoor iSpace
WO2016158438A1 (en) Inspection processing apparatus, method, and program
US10109045B2 (en) Defect inspection apparatus for inspecting sheet-like inspection object, computer-implemented method for inspecting sheet-like inspection object, and defect inspection system for inspecting sheet-like inspection object
CN111767780A (en) AI and vision combined intelligent hub positioning method and system
CN111160450A (en) Fruit and vegetable weighing method based on neural network, storage medium and device
JP5674933B2 (en) Method and apparatus for locating an object in a warehouse
CN111753858A (en) Point cloud matching method and device and repositioning system
US10907954B2 (en) Methods and systems for measuring dimensions of a 2-D object
CN111386533B (en) Method and apparatus for detecting and identifying graphic character representations in image data using symmetrically located blank areas
CN113978987B (en) Pallet object packaging and picking method, device, equipment and medium
CN111080701A (en) Intelligent cabinet object detection method and device, server and storage medium
WO2023213070A1 (en) Method and apparatus for obtaining goods pose based on 2d camera, device, and storage medium
US11599737B1 (en) System for generating tags
JP2017079326A (en) Identification device, traceability system, and identification method
CN109388983B (en) Bar code classification method, classification device, electronic equipment and storage medium
CN112069841A (en) Novel X-ray contraband parcel tracking method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant