CN108876241B - Storage space identification management system based on vision - Google Patents

Info

Publication number
CN108876241B
CN108876241B (application CN201810587214.1A)
Authority
CN
China
Prior art keywords
area
image
camera
warehouse
base station
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810587214.1A
Other languages
Chinese (zh)
Other versions
CN108876241A (en)
Inventor
蒋涛 (Jiang Tao)
蔡涛 (Cai Tao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Smart Motion Muniu Intelligent Technology Co., Ltd.
Original Assignee
Sichuan Smart Motion Muniu Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Sichuan Smart Motion Muniu Intelligent Technology Co ltd filed Critical Sichuan Smart Motion Muniu Intelligent Technology Co ltd
Priority to CN201810587214.1A priority Critical patent/CN108876241B/en
Publication of CN108876241A publication Critical patent/CN108876241A/en
Application granted granted Critical
Publication of CN108876241B publication Critical patent/CN108876241B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/08Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
    • G06Q10/087Inventory or stock management, e.g. order filling, procurement or balancing against orders
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/06Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/12Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • H04N23/13Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths with multiple sensors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/66Remote control of cameras or camera parts, e.g. by remote control devices
    • H04N23/661Transmitting camera control signals through networks, e.g. control via the Internet
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image

Abstract

The invention discloses a vision-based warehouse space identification management system comprising a warehouse information management center, cameras, network switches and a base station; the network switches connect the cameras to the base station, and the base station connects to the warehouse information management center. The base station is an image processing workstation. Four cameras are installed at each warehouse location; each location is a rectangular area, and one camera is mounted directly above the midpoint of each of its four boundary lines, shooting vertically downward, so the area captured by each camera is divided into an effective area and an invalid area. After the base station acquires the images, it separates them into 4 groups, one per camera, discriminates the remaining ground area in the corresponding warehouse location on which goods can still be stacked, and calculates its area. The invention solves the problem of low warehouse-resource utilization caused by untimely acquisition of remaining stacking-area information in warehouse management, avoids the various errors introduced by manual collection, enables timely logistics transportation and allocation, and makes efficient use of warehouse resources.

Description

Storage space identification management system based on vision
Technical Field
The invention belongs to the technical field of logistics warehouse management, and particularly relates to a vision-based warehouse space identification management system.
Background
With continued economic development, logistics volume has grown steadily, driving the construction of intelligent warehouses. Some warehouses have already been built or retrofitted to a certain degree of intelligence, greatly improving resource utilization and efficiency. Warehouses with a low degree of intelligent construction, however, still suffer from poor resource utilization and working efficiency. This is especially true for facilities such as railway transfer stations, where inbound and outbound freight volume is large, goods are varied, and storage periods are short, all of which increase the difficulty of intelligent construction. Information such as inventory levels and remaining storage areas is collected manually, which easily leads to untimely information and large errors, causing great trouble for warehouse management and for the allocation and utilization of warehouse resources.
The invention provides an early-stage intelligent-warehouse construction scheme for warehouses, such as railway transfer stations, with large cargo throughput and short cargo storage periods; it discriminates and calculates the remaining stacking area on the warehouse floor and is highly practicable. Cameras collect ground images of the warehouse locations, the images are used to judge which areas are occupied by stacked goods, the area still available for stacking in each location is calculated, and the result is transmitted over the network to the warehouse information management center. Warehouse managers can thus grasp the current state of the warehouse in time, arrange storage reasonably, relieve the transport congestion caused by untimely warehouse information during logistics peaks, and allocate logistics transportation services more easily. Compared with manual collection, information acquisition is more timely and convenient, and warehouse-resource allocation is quicker and more reasonable.
Disclosure of Invention
In view of the current state of intelligent warehouse construction, the invention provides a vision-based warehouse space identification management system. It uses machine vision and image processing to calculate the remaining stackable area of each warehouse location, and uses network technology and modern management concepts so that ground warehouse-location information is obtained more promptly, facilitating warehouse management and logistics transportation allocation and promoting the construction of intelligent warehouse management.
The purpose of the invention is realized by the following technical scheme: a warehouse space identification management system based on vision comprises a warehouse information management center, a camera, a network switch and a base station, wherein the network switch is connected with the camera and the base station, and the base station is connected with the warehouse information management center; the base station is an image processing workstation;
N warehouse locations are arranged in one warehouse, and four cameras are installed at each location, so the total number of cameras installed in one warehouse is 4N;
each warehouse location is a rectangular area, and a boundary line in sharp contrast with the ground color is drawn along the boundary of the rectangle; during image processing the boundary line serves as the feature for identifying the warehouse-location boundary;
the rectangular area thus has four boundary lines; one camera is installed directly above the midpoint of each boundary line and shoots vertically downward, and the area captured by each camera is divided into an effective area and an invalid area;
the calculation process is as follows:
the method comprises the steps that control signals of four cameras of one storage position are sent by a base station and transmitted to the cameras through an exchanger, the four cameras of the storage position are switched to a working state from a standby state, the collection of ground images of the storage position is started, each camera collects a plurality of images and then stops collecting, image information of the four cameras is transmitted to the base station through the exchanger through a network cable, and the base station is responsible for receiving and separately storing the image information of the four cameras;
after the base station has collected the images, it begins analysis and processing; the images are divided into 4 groups, one per camera, each group is processed, the remaining ground area in the corresponding warehouse location on which goods can still be stacked is discriminated, and its area is calculated.
As a preferred mode, the network switches are cascaded: first-level switches connect to the cameras, a second-level switch connects to all first-level switches, and the second-level switch also connects to the camera-management and information-processing base station.
As a preferred mode, the camera network ports output at 100 Mbit/s, the first-level switches have 100 Mbit/s ports, the second-level switch has gigabit ports, and the base station is equipped with a gigabit port.
Preferably, the installed cameras support Ethernet communication and POE power supply; each camera is connected by network cable to a POE-capable switch, which communicates with the camera and powers it over the same cable.
Preferably, the acquisition is stopped after each camera acquires 20 images.
Preferably, the four boundary lines of one warehouse location are AB, BC, CD and DA, whose midpoints are E, F, G and H respectively; a high-definition network camera is installed at a height of H meters directly above each of the four midpoints, and the corresponding cameras are numbered 1, 2, 3 and 4;
each camera shoots the ground vertically downward from a height of H meters, and the captured ground area is approximately the size of a warehouse location; because each camera sits above the midpoint of one boundary and shoots straight down, the portion of the warehouse location it can capture is only half of the location's area; only that half is used in the calculation, and the remainder of the captured area is the camera's invalid shooting area.
Preferably, the specific process of each group of image processing is as follows:
(1) calibrating a camera;
(2) correcting the image;
(3) extracting a library position area image;
(4) preprocessing an image;
(5) distinguishing ground areas under an RGB color space;
(6) distinguishing ground areas under an HSV color space;
(7) combining the areas identified under the RGB and HSV two-color spaces;
(8) calculating the area: because the actual warehouse-location area corresponds to the area formed by the matching pixels in the image, the area is first computed in the image and then multiplied by a correspondence coefficient to obtain the actual remaining stackable area of the warehouse location.
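Taken together, the eight steps above can be sketched as a minimal NumPy skeleton showing only the data flow. Every function name, threshold and coefficient here is an illustrative assumption, not taken from the patent; steps (1)-(4) are collapsed into placeholders on the assumption that the inputs are already corrected, filtered and converted.

```python
import numpy as np

def undistort(img):
    # steps (1)-(2): calibration is done once; here the image is assumed
    # already corrected, so this is an identity placeholder
    return img

def extract_roi(img, r0, r1, c0, c1):
    # step (3): keep the warehouse-location region, blacken the rest
    out = np.zeros_like(img)
    out[r0:r1, c0:c1] = img[r0:r1, c0:c1]
    return out

def ground_rgb(gray, thresh=100):
    # step (5), placeholder discrimination: assume ground darker than goods
    return gray < thresh

def ground_hsv(hsv, lo, hi):
    # step (6): pixels whose H, S and V all fall inside the ground range
    return np.all((hsv >= lo) & (hsv <= hi), axis=-1)

def process_group(gray, hsv, lo, hi, m2_per_pixel):
    # step (4) (filtering / graying / HSV conversion) is assumed already done
    gray = extract_roi(undistort(gray), 0, gray.shape[0], 0, gray.shape[1])
    ground = ground_rgb(gray) | ground_hsv(hsv, lo, hi)  # step (7): OR merge
    return ground.sum() * m2_per_pixel                   # step (8): area

gray = np.full((8, 8), 50, dtype=np.uint8)   # all "ground" by the placeholder rule
hsv = np.zeros((8, 8, 3), dtype=np.uint8)
area = process_group(gray, hsv, np.array([1, 1, 1]), np.array([2, 2, 2]), 0.01)
```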
As a preferable mode, (1) camera calibration
The calibration procedure needs manual assistance only the first time a camera's images are processed; by calibrating the camera, its intrinsic (and, if needed, extrinsic) parameters are obtained and its distortion corrected, which facilitates distortion correction of subsequent images;
(2) correcting image
after one camera calibration, the camera's intrinsic (and extrinsic) parameters are fixed; captured images can then be corrected well using these determined parameters, and if distortion remains, suitable correction algorithms can be applied to reduce it;
(3) extracting a library position region image
after image correction, the position of the warehouse-location boundary in each image remains almost unchanged, so the image information corresponding to the warehouse-location area can be extracted directly from given coordinate points in the image;
(4) image pre-processing
this comprises image filtering, color-image graying and color-space conversion, in preparation for image segmentation and target identification;
(5) ground area discrimination in RGB color space
local adaptive threshold segmentation is first performed on the grayscale image, where the binarization threshold at each pixel is determined from the pixel-value distribution of its neighborhood block; edge detection follows; the contours are then combined with a watershed segmentation algorithm to segment the different target regions;
(6) ground area discrimination in HSV color space
before ground-area identification, the range of H, S and V color feature values corresponding to the ground must be determined; this is done only once initially, and subsequent ground-area identification reuses the range directly without re-determination;
in the HSV image to be processed, the region whose pixels have H, S and V values within that range is the ground image area, so the ground can be separated from the image by testing the H, S and V values of each pixel;
after each area is divided by color matching, deleting the areas which are not close to the edge of the image;
(7) region merging for RGB and HSV two-color space recognition
the image data of the regions identified in the RGB and HSV color spaces are overlaid and merged into one map with an OR relation; the region found in each color space has its own boundary, and apart from the image border, the boundary line closer to the image edge on the other side is taken; the part remaining after the overlay is the region of the warehouse location where goods can still be piled;
calculating the area: because the actual warehouse-location area corresponds to the area formed by the matching pixels in the image, the area is first computed in the image and then multiplied by a correspondence coefficient to obtain the actual remaining stackable area of the warehouse location.
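Under one plausible reading of the merge step, steps (7)-(8) reduce to an OR of the two ground masks followed by a pixel count scaled by a correspondence coefficient. The coefficient value below is a placeholder; in practice it would come from the known ground area covered by one pixel at mounting height H.

```python
import numpy as np

def remaining_area(mask_rgb, mask_hsv, m2_per_pixel):
    # OR relation: a pixel counts as ground if either color space says so
    ground = mask_rgb | mask_hsv
    return ground.sum() * m2_per_pixel

mask_rgb = np.zeros((10, 10), dtype=bool); mask_rgb[:, :5] = True  # left half
mask_hsv = np.zeros((10, 10), dtype=bool); mask_hsv[:5, :] = True  # top half
area = remaining_area(mask_rgb, mask_hsv, m2_per_pixel=0.01)       # 75 px -> 0.75 m^2
```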
Preferably, after the remaining stackable area of one warehouse location has been calculated, the base station sends control signals to the four cameras of the next location, receives their image data over Ethernet, and repeats the image-processing procedure for that group.
As a preferred mode, once the areas of all warehouse locations attached to a base station have been calculated, the next round of acquisition and calculation begins; whenever the base station finishes calculating the remaining stackable area of one location, it immediately sends the area and location data to the warehouse information management center over Ethernet and/or uploads the images shot at that location, so that managers can review them;
finally, through the warehouse information management center, staff can use the uploaded data to allocate warehouse resources and dispatch logistics transportation; the data can also be published to the Internet, so that another logistics transfer station can view it and coordinate logistics transportation business jointly.
The invention has the following beneficial effects: it solves the problem of low warehouse-resource utilization caused by untimely acquisition of remaining stacking-area information in warehouse management, and avoids the various errors introduced by manual collection. The state of goods stacked on the warehouse floor is grasped more reliably, logistics transportation is allocated in time, warehouse resources are used efficiently, and an important impetus is given to the construction of intelligent logistics and intelligent warehousing.
Drawings
FIG. 1 is a block diagram of the system architecture of the present invention;
FIG. 2 is a schematic diagram of a layout of a warehouse location within a warehouse;
FIG. 3 is a schematic view of the installation position of the camera directly above a certain storage location;
fig. 4 is a schematic view of the effective area of each camera and the overall effective area.
Detailed Description
The technical solutions of the present invention are further described in detail below with reference to the accompanying drawings, but the scope of the present invention is not limited to the following.
As shown in fig. 1, a warehouse space recognition management system based on vision is characterized in that: the system comprises a warehouse information management center, a camera, a network switch and a base station, wherein the network switch is connected with the camera and the base station, and the base station is connected with the warehouse information management center; the base station is an image processing workstation;
N warehouse locations are arranged in one warehouse, and four cameras are installed at each location, so the total number of cameras installed in one warehouse is 4N;
as shown in fig. 2, one warehouse contains 10 warehouse locations; each location is a rectangular area, and a boundary line in sharp contrast with the ground color is drawn along the boundary of the rectangle; during image processing the boundary line serves as the feature for identifying the warehouse-location boundary;
the rectangular warehouse-location area should not be made too large: the height H at which cameras can be installed at the top of the built warehouse is limited, so cameras are selected according to the location's area and the height H; if no camera satisfies the requirement, the warehouse-location size can be re-divided according to H and the camera parameters;
the rectangular area thus has four boundary lines; one camera is installed directly above the midpoint of each boundary line and shoots vertically downward, and the area captured by each camera is divided into an effective area and an invalid area;
the calculation process is as follows:
the method comprises the steps that control signals of four cameras of one storage position are sent by a base station and transmitted to the cameras through an exchanger, the four cameras of the storage position are switched to a working state from a standby state, the collection of ground images of the storage position is started, each camera collects a plurality of images and then stops collecting, image information of the four cameras is transmitted to the base station through the exchanger through a network cable, and the base station is responsible for receiving and separately storing the image information of the four cameras;
after the base station has collected the images, it begins analysis and processing; the images are divided into 4 groups, one per camera, each group is processed, the remaining ground area in the corresponding warehouse location on which goods can still be stacked is discriminated, and its area is calculated.
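The acquisition step can be sketched as follows. The transport itself (Ethernet through the switches) is abstracted away; frames are modeled as (camera number, image) pairs, and the per-camera limit of 20 frames follows the preferred embodiment. The message shape is an assumption for illustration.

```python
# Per the embodiment, each camera stops after delivering 20 images.
FRAMES_PER_CAMERA = 20

def collect(frames):
    """frames: iterable of (camera_id, image) pairs from cameras 1-4.
    Returns a dict storing each camera's images separately, capped at 20."""
    groups = {1: [], 2: [], 3: [], 4: []}
    for cam, img in frames:
        if len(groups[cam]) < FRAMES_PER_CAMERA:
            groups[cam].append(img)
    return groups

# Simulate 25 frames arriving from each camera; only the first 20 per camera are kept.
stream = [(cam, f"frame-{cam}-{i}") for cam in (1, 2, 3, 4) for i in range(25)]
groups = collect(stream)
```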
The area a camera can capture within the warehouse location is only half of the location's area; only that half is used in the calculation, and the remainder of the captured area is the camera's invalid shooting area.
In a preferred embodiment, the network switches are cascaded: primary switches connect to the cameras, a secondary switch connects to all primary switches, and the secondary switch also connects to the camera-management and information-processing base station. Because the cameras in the warehouse are powered by the switches, a single switch would need many network interfaces and carry a heavy load when the warehouse has many locations, so several cascaded switches are used.
In a preferred embodiment, the camera network ports output at 100 Mbit/s, the primary switches have 100 Mbit/s ports, the secondary switch has gigabit ports, and the base station is equipped with a gigabit port.
In a preferred embodiment, the installed cameras support Ethernet communication and POE power supply; each camera is connected by network cable to a POE-capable switch, which communicates with the camera and powers it over the same cable. Compared with separately powered cameras, POE-powered cameras need no dedicated mains wiring, which reduces the wiring difficulty during camera installation.
In a preferred embodiment, the acquisition is stopped after 20 images are acquired by each camera.
In a preferred embodiment, as shown in fig. 3, the four boundary lines of one warehouse location are AB, BC, CD and DA, whose midpoints are E, F, G and H respectively; a high-definition network camera is installed at a height of H meters directly above each of the four midpoints, and the corresponding cameras are numbered 1, 2, 3 and 4;
each camera shoots the ground vertically downward from a height of H meters, and the captured ground area is approximately the size of a warehouse location; because each camera sits above the midpoint of one boundary and shoots straight down, the portion of the location it can capture is only half of the location's area; only that half is used in the calculation, and the remainder is the camera's invalid shooting area. For example, the effective area shot by camera No. 1 is the rectangle ABFH, while its actual shooting area is more than twice that rectangle; the effective areas of cameras No. 2, 3 and 4 are correspondingly EBCG, FCDH and GDAE. Each of these four effective areas likewise occupies only part, close to 1/2, of the image taken by its camera. The image portions corresponding to ABFH, EBCG, FCDH and GDAE are the portions actually used for identifying the remaining ground stacking area and calculating its area. Since each camera's shooting center lies on the boundary, only occlusion inside the warehouse-location boundary needs to be considered: the four cameras' images are used to discriminate the remaining stacking area separately, and overlaying the results removes the areas whose line of sight is blocked by goods in each of the four directions. Areas shot outside the boundary contain occluded regions and are invalid, so this shooting arrangement greatly reduces the error caused by occlusion of the cameras' fields of view.
In a preferred embodiment, the specific process of each set of image processing:
(1) calibrating a camera;
(2) correcting the image;
(3) extracting a library position area image;
(4) preprocessing an image;
(5) distinguishing ground areas under an RGB color space;
(6) distinguishing ground areas under an HSV color space;
(7) combining the areas identified under the RGB and HSV two-color spaces;
(8) calculating the area: because the actual warehouse-location area corresponds to the area formed by the matching pixels in the image, the area is first computed in the image and then multiplied by a correspondence coefficient to obtain the actual remaining stackable area of the warehouse location.
In a preferred embodiment, (1) camera calibration
The calibration procedure needs manual assistance only the first time a camera's images are processed; by calibrating the camera, its intrinsic (and, if needed, extrinsic) parameters are obtained and its distortion corrected, which facilitates distortion correction of subsequent images. Because each camera shoots the ground from a fixed point and the shooting distance never changes, once the camera parameters have been calibrated, only the calibrated parameters are needed thereafter; recalibration is unnecessary unless the camera's shooting position changes, for example through camera replacement or displacement.
(2) Correcting image
after one camera calibration, the camera's intrinsic (and extrinsic) parameters are fixed; captured images can then be corrected well using these determined parameters, and if distortion remains, suitable correction algorithms can be applied to reduce it;
(3) extracting a library position region image
After image correction, the position of the warehouse-location boundary in each image remains almost unchanged, so the image information corresponding to the location can be extracted directly from given coordinate points in the image. For example, when processing the images of camera No. 1, one only needs to give the image coordinates of the four vertices A, B, F and H of the rectangle ABFH; the image region enclosed by these four vertex coordinates is the image information of the warehouse location. This part can be separated out as foreground while the rest, as background, is set entirely to black, which is equivalent to segmentation. If the image is distorted, several additional points besides the four vertices can be specified to outline the image region of the warehouse location;
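For the simple case where ABFH is axis-aligned in the corrected image, the extraction can be sketched in NumPy: keep the rectangle as foreground and blacken the rest. The vertex coordinates below are placeholders for the ones a user would specify once for camera No. 1; for a distorted or non-axis-aligned region, a polygon mask (e.g. cv2.fillPoly over the specified points) would replace the slicing.

```python
import numpy as np

def extract_location(img, top, bottom, left, right):
    """Keep the ABFH region as foreground; turn everything else black."""
    out = np.zeros_like(img)
    out[top:bottom, left:right] = img[top:bottom, left:right]
    return out

# Synthetic grayscale frame; the rectangle bounds are illustrative placeholders.
img = np.full((100, 100), 200, dtype=np.uint8)
roi = extract_location(img, 20, 80, 10, 90)
```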
(4) Image preprocessing
This step comprises image filtering, color-image graying, and color-space conversion, in preparation for image segmentation and target recognition. Image filtering smooths the image and thereby reduces noise; graying converts the three-channel RGB color image into a single-channel grayscale image; color-space conversion converts the three-channel RGB color image into an image in the HSV color space;
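The graying and filtering operations can be sketched in NumPy as follows. The luminance weights are the common ITU-R BT.601 values, and the naive mean filter is purely illustrative (a Gaussian filter would normally be preferred for noise reduction); none of this is specific to the patent.

```python
import numpy as np

# Sketch of two of the preprocessing steps on an RGB array (H x W x 3, uint8).
def to_gray(rgb):
    """RGB -> single-channel grayscale via standard luminance weighting."""
    w = np.array([0.299, 0.587, 0.114])
    return (rgb.astype(float) @ w).astype(np.uint8)

def box_filter(gray, k=3):
    """Naive k x k mean filter for smoothing (edges left unfiltered)."""
    out = gray.astype(float).copy()
    r = k // 2
    for i in range(r, gray.shape[0] - r):
        for j in range(r, gray.shape[1] - r):
            out[i, j] = gray[i - r:i + r + 1, j - r:j + r + 1].mean()
    return out.astype(np.uint8)

img = np.zeros((5, 5, 3), dtype=np.uint8)
img[:, :, 0] = 255                       # pure red test image
gray = to_gray(img)                      # every pixel -> 76
smooth = box_filter(gray)                # uniform image is unchanged
```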
(5) Ground-area discrimination in the RGB color space
Locally adaptive threshold segmentation is first applied to the grayscale image: the binarization threshold at each pixel position is determined from the pixel-value distribution of that pixel's neighborhood block. The benefit is that the threshold is not fixed across the image but is set by the distribution of the surrounding neighborhood pixels: brighter image regions generally receive a higher binarization threshold, darker regions a correspondingly lower one, and local regions of differing brightness, contrast, and texture each obtain an appropriate local threshold. Almost all contours can therefore be segmented, with complete contour information. Global fixed-threshold segmentation, by contrast, binarizes the whole image with a single threshold, and when the lighting in the warehouse changes it may lose the desired contour information. After local adaptive thresholding, the contour boundaries of all objects are clearly separated. Edge detection is then performed with the widely used Canny operator (or a variant of it) to detect the boundary contours of the different objects, after which contours of small length and small area are discarded. The contours are then combined with a watershed segmentation algorithm to segment the different target regions; a modified, marker-controlled watershed algorithm is used here, in which a series of predefined markers guides the segmentation of the image.
The specific procedure is as follows: the detected contours are taken as markers for the watershed algorithm, and some pixels in the filtered RGB color image are labeled to indicate that the regions they belong to are known; the watershed algorithm then assigns all remaining pixels to regions based on these initial labels, thereby segmenting the different object regions. Regions adjacent to the image boundary are provisionally marked as ground, where the image boundary refers to the boundary of the extracted storage-location image;
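The locally adaptive thresholding at the heart of this step can be sketched as follows: a NumPy toy implementation with a hypothetical 3×3 neighborhood (production code would use an optimized routine; the Canny and watershed stages are omitted). Note how only pixels that stand out from their own neighborhood mean survive, so contour boundaries are highlighted regardless of global lighting.

```python
import numpy as np

# Sketch of locally adaptive thresholding: each pixel is binarized against
# the mean of its own neighborhood block, not one global threshold.
def adaptive_threshold(gray, block=3, c=0):
    """Set a pixel to 255 if it exceeds its local neighborhood mean - c."""
    h, w = gray.shape
    r = block // 2
    out = np.zeros_like(gray)
    for i in range(h):
        for j in range(w):
            nb = gray[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            if gray[i, j] > nb.mean() - c:
                out[i, j] = 255
    return out

# A bright square on a dark background: pixels on the square's boundary
# exceed their mixed-neighborhood mean and light up, while both the uniform
# background and the uniform interior stay black - i.e. the method responds
# to local contrast, which is why it brings out contours.
g = np.zeros((6, 6), dtype=np.uint8)
g[2:5, 2:5] = 200
b = adaptive_threshold(g)
```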
(6) Ground-area discrimination in the HSV color space
In the HSV color space, the H component represents the hue, i.e., the position of the spectral color; it is expressed as an angle, with red, green, and blue separated by 120° and complementary colors 180° apart. The saturation S is a ratio ranging from 0 to 1, expressing the purity of the selected color relative to the maximum purity of that color. V represents the brightness of the color and also ranges from 0 to 1. Before ground-area recognition, the ranges of the H, S, V color feature values corresponding to the ground must be determined. This determination is performed only once at the start; afterwards the stored ranges can be used directly for subsequent ground-area recognition without re-determination. The procedure is as follows: the cameras photograph the ground of the storage locations at different times; the ground region is cropped out of each image; each RGB color image is converted into an HSV image; the H, S, V values of the ground at the different times are collected; and from the H, S, V values of all images the ranges corresponding to the ground color features are determined, i.e., H in the range H1 to H2, S in the range S1 to S2, and V in the range V1 to V2. These H, S, V ranges serve as the criterion when matching the ground color.
In an HSV image requiring ground-area recognition, the region whose pixels have H, S, V values within these ranges is the ground region, so the ground can be separated from the image by checking the H, S, V values of each pixel. This separation is not perfectly accurate, since interference with the ground color may reduce the matching accuracy; depending on the environment, the storage-location floor can be painted a more vivid color to increase the matching accuracy. After the regions have been segmented by color matching, regions not adjacent to the image edge are deleted;
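The per-pixel color test of step (6) can be sketched with Python's standard colorsys module. The H, S, V ranges below are hypothetical stand-ins for H1–H2, S1–S2, V1–V2 (here loosely describing a gray concrete floor), not values from the patent; note that colorsys expresses H, S, V all on a 0–1 scale rather than degrees.

```python
import colorsys

# Hypothetical learned ranges for the ground color (H1-H2, S1-S2, V1-V2):
# low saturation and medium brightness, i.e. a gray floor.
H_RANGE, S_RANGE, V_RANGE = (0.0, 1.0), (0.0, 0.25), (0.35, 0.75)

def is_ground(r, g, b):
    """Classify an RGB pixel as ground if its H, S, V fall in the ranges."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return (H_RANGE[0] <= h <= H_RANGE[1] and
            S_RANGE[0] <= s <= S_RANGE[1] and
            V_RANGE[0] <= v <= V_RANGE[1])

print(is_ground(128, 128, 128))   # mid-gray floor pixel -> True
print(is_ground(200, 30, 30))     # vivid red cargo pixel -> False
```

Applying this test to every pixel yields the binary ground mask; painting the floor a vivid color, as the text suggests, simply narrows these ranges and makes the match more selective.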
(7) Merging the regions recognized in the RGB and HSV color spaces
The image data of the regions identified in the RGB and HSV color spaces are superimposed and merged into one image with an OR relation. The region found in each color space has its own boundary; apart from the image boundary itself, on each remaining side the boundary line closer to the image edge is taken. The part left after superposition is the region of the storage location where goods can still be stacked.
After the 4 groups of image data have been processed, the remaining stackable areas in the four directions of the storage location are obtained, i.e., the areas marked by black dots in panels 1, 2, 3, and 4 of fig. 4; superimposing them yields the image information of the remaining stackable area of the whole storage location, i.e., the area marked by black dots in panel 5 of fig. 4.
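Both the OR-relation merging of step (7) and the superposition of the four directions amount to a logical union of binary masks; a minimal NumPy sketch with hypothetical mask contents:

```python
import numpy as np

# Ground masks recognized by the two pathways: a pixel counts as stackable
# ground if EITHER color-space recognition identified it (OR relation).
rgb_mask = np.array([[1, 1, 0, 0],
                     [1, 0, 0, 0]], dtype=bool)   # from step (5)
hsv_mask = np.array([[1, 0, 1, 0],
                     [0, 0, 1, 0]], dtype=bool)   # from step (6)

merged = rgb_mask | hsv_mask   # element-wise OR of the two recognitions
print(merged.sum())            # number of ground pixels after merging -> 5
```

The four directional masks of one storage location would be combined the same way before the final area measurement.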
Area calculation: because the actual area of the storage location corresponds to the area of the matching pixel region in the image, the area in the image is calculated and then multiplied by the conversion coefficient to obtain the actual remaining stackable area of the storage location.
Let the actual area of the entire storage location be S1 and the image area of the corresponding region in the image be S2; the conversion coefficient between image area and actual ground area is then a = S1/S2. If the currently measured image area of the remaining stackable region of a storage location is S3, the corresponding actual ground area is S4 = S3 × a. Superimposing the areas from the four directions greatly reduces the error caused by visual blind spots: when goods block a camera's view, part of the un-stacked ground behind them cannot be photographed, and the information of that part would otherwise be lost.
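A worked numeric example of this conversion; all numbers are illustrative, not from the patent.

```python
# With the whole storage location covering S1 square meters of ground and
# occupying S2 pixels in the image, the coefficient is a = S1 / S2, and a
# measured stackable region of S3 pixels corresponds to S4 = S3 * a.
S1 = 50.0        # actual ground area of the storage location, m^2
S2 = 200000.0    # image area of that region, in pixels
a = S1 / S2      # conversion coefficient, m^2 per pixel

S3 = 80000.0     # measured image area of the remaining stackable region
S4 = S3 * a      # actual remaining stackable ground, approx. 20 m^2
print(S4)
```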
In a preferred embodiment, after the calculation of the remaining stackable area of one storage location is completed, the base station sends control signals to the four cameras of the next storage location, receives their image data over the Ethernet, and repeats the same set of image-processing steps.
In a preferred embodiment, once the areas of all storage locations connected to a base station have been calculated, the next round of acquisition and calculation begins. Whenever the base station has calculated the remaining stackable area of a storage location, it immediately sends the area and storage-location data to the warehouse management information center over the Ethernet and/or uploads the images taken of that storage location to the warehouse information management center, for convenient inspection by the management personnel;
finally, through the warehouse information management center, the staff can allocate warehouse resources and schedule logistics transport using the uploaded data. The data can also be published to the Internet, so that another logistics transfer station can view it and coordinate the logistics transport business jointly with this one.
The above description covers only preferred embodiments of the present invention and is not intended to limit it; any modifications, equivalents, and improvements made within the spirit and principles of the present invention shall be included within its scope.

Claims (10)

1. A vision-based warehousing space identification management system, characterized in that: the system comprises a warehouse information management center, cameras, a network switch, and a base station, wherein the network switch is connected to the cameras and the base station, and the base station is connected to the warehouse information management center; the base station is an image-processing workstation;
N storage locations are arranged in one warehouse, four cameras are installed at each storage location, and the total number of cameras installed in one warehouse is N×4;
each storage location is a rectangular area whose boundary carries a boundary line in sharp contrast with the ground color; this boundary line serves as the feature for storage-location boundary recognition during image processing;
the rectangular area has four boundary lines, and one camera is installed directly above the center point of each of the four boundary lines, shooting vertically downward; the area captured by a camera is divided into an effective area and an ineffective area;
the calculation process is as follows:
the base station sends control signals for the four cameras of one storage location, which are transmitted to the cameras through the switch; the four cameras of that storage location switch from standby to working state and begin acquiring ground images of the storage location; each camera stops after acquiring a number of images; the image information of the four cameras is transmitted to the base station through the switch over network cables, and the base station receives and stores the image information of the four cameras separately;
after the base station has collected the images, it begins analyzing and processing them; the images collected by the cameras are divided into 4 groups, each group is processed, the remaining stackable ground area in the corresponding storage-location region is discriminated, and its area is calculated.
2. A vision-based warehousing space identification management system as claimed in claim 1, wherein: the network switch consists of several cascaded switches; the cameras are connected to first-level switches, all first-level switches are connected to a second-level switch, and the second-level switch is in turn connected to the camera-management and information-processing base station.
3. A vision-based warehousing space identification management system as claimed in claim 2, wherein: the camera network ports output at 100 Mbit/s, the first-level switches use 100 Mbit/s ports, the second-level switch uses gigabit ports, and the base station is equipped with a gigabit port.
4. A vision-based warehousing space identification management system as claimed in claim 1, wherein: the installed cameras support Ethernet communication and POE power supply; each camera is connected by a network cable to a POE-capable switch, which communicates with the camera over the cable and supplies the power the camera needs to operate.
5. A vision-based warehousing space identification management system as claimed in claim 1, wherein: each camera stops acquisition after acquiring 20 images.
6. A vision-based warehousing space identification management system as claimed in claim 1, wherein: the four boundary lines of a storage location are AB, BC, CD, and DA, and their center points are E, F, G, H respectively; a high-definition network camera is installed at a height of h meters directly above each of the four center points E, F, G, H, and the corresponding cameras are numbered 1, 2, 3, and 4;
each camera shoots the ground image vertically downward from the height of h meters, and the captured ground area is approximately the size of a storage location; because each camera is installed above the center point of a boundary line and shoots vertically downward, the area it can capture inside the storage location occupies only half of the storage location's area, so only the corresponding half of the storage-location area is used in the calculation, and the remainder is the camera's ineffective shooting area.
7. A vision-based warehousing space identification management system as claimed in claim 1, wherein each group of images is processed as follows:
(1) calibrating the camera;
(2) correcting the image;
(3) extracting the storage-location region image;
(4) preprocessing the image;
(5) discriminating the ground area in the RGB color space;
(6) discriminating the ground area in the HSV color space;
(7) merging the regions identified in the RGB and HSV color spaces;
(8) calculating the area: because the actual area of the storage location corresponds to the area of the matching pixel region in the image, the area in the image is calculated and then multiplied by the conversion coefficient to obtain the remaining stackable area of the actual storage location.
8. A vision-based warehousing space identification management system as claimed in claim 1 or 7, wherein:
(1) camera calibration
the calibration process requires manual assistance only the first time a camera's images are processed; calibrating the camera yields its intrinsic (and, if needed, extrinsic) parameters, which are used to correct the camera's distortion and thus facilitate distortion correction of subsequent images;
(2) image correction
after a single camera calibration, the camera's intrinsic (or intrinsic and extrinsic) parameters are determined, and the captured images can be corrected well using these parameters alone; if residual distortion remains, suitable correction algorithms can be applied to reduce it;
(3) extracting the storage-location region image
after image correction, the position of the storage-location boundary in each image remains almost unchanged, and the image information of the storage-location region can be extracted directly by specifying coordinate points in the image;
(4) image preprocessing
comprising image filtering, color-image graying, and color-space conversion, in preparation for image segmentation and target recognition;
(5) ground-area discrimination in the RGB color space
locally adaptive threshold segmentation is first applied to the grayscale image, in which the binarization threshold at each pixel position is determined from the pixel-value distribution of that pixel's neighborhood block; edge detection is then performed; the contours are then combined with a watershed segmentation algorithm to segment the different target regions;
(6) ground-area discrimination in the HSV color space
before ground-area recognition, the ranges of the H, S, V color feature values corresponding to the ground must be determined; this is done only once at the start, after which the stored ranges can be used directly for subsequent ground-area recognition without re-determination;
in an HSV image requiring ground-area recognition, the region whose pixels have H, S, V values within these ranges is the ground region, and the ground can be separated from the image by checking the H, S, V values of each pixel;
after the regions have been segmented by color matching, regions not adjacent to the image edge are deleted;
(7) merging the regions recognized in the RGB and HSV color spaces
the image data of the regions identified in the two color spaces are superimposed and merged into one image with an OR relation; the region found in each color space has its own boundary, and apart from the image boundary itself, on each remaining side the boundary line closer to the image edge is taken; the part left after superposition is the region of the storage location where remaining goods can be stacked;
(8) calculating the area: because the actual area of the storage location corresponds to the area of the matching pixel region in the image, the area in the image is calculated and then multiplied by the conversion coefficient to obtain the remaining stackable area of the actual storage location.
9. A vision-based warehousing space identification management system as claimed in claim 8, wherein: after the calculation of the remaining stackable area of one storage location is completed, the base station sends control signals to the four cameras of the next storage location, receives their image data over the Ethernet, and repeats the same set of image-processing steps.
10. A vision-based warehousing space identification management system as claimed in claim 9, wherein:
once the areas of all storage locations connected to a base station have been calculated, the next round of acquisition and calculation begins; whenever the base station has calculated the remaining stackable area of a storage location, it immediately sends the area and storage-location data to the warehouse management information center over the Ethernet and/or uploads the images taken of that storage location to the warehouse information management center, for convenient inspection by the management personnel;
finally, through the warehouse information management center, the staff can allocate warehouse resources and schedule logistics transport using the uploaded data; the data can also be published to the Internet, so that another logistics transfer station can view it and coordinate the logistics transport business jointly with this one.
CN201810587214.1A 2018-06-08 2018-06-08 Storage space identification management system based on vision Active CN108876241B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810587214.1A CN108876241B (en) 2018-06-08 2018-06-08 Storage space identification management system based on vision


Publications (2)

Publication Number Publication Date
CN108876241A CN108876241A (en) 2018-11-23
CN108876241B true CN108876241B (en) 2021-09-03

Family

ID=64338584

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810587214.1A Active CN108876241B (en) 2018-06-08 2018-06-08 Storage space identification management system based on vision

Country Status (1)

Country Link
CN (1) CN108876241B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109685467A (en) * 2018-12-25 2019-04-26 杭州彦德信息科技有限公司 Inventory management method, stock control device, equipment and storage medium
CN110084186A (en) * 2019-04-25 2019-08-02 中信梧桐港供应链管理有限公司 A kind of warehouse remote supervisory method and device
CN110751445B (en) * 2019-10-25 2023-04-07 上海德启信息科技有限公司 Method and device for managing and controlling goods in depot area and storage medium
CN111340027A (en) * 2020-03-05 2020-06-26 中冶赛迪重庆信息技术有限公司 Steel pile identification method and system, electronic equipment and medium
CN111832454A (en) * 2020-06-30 2020-10-27 苏州罗伯特木牛流马物流技术有限公司 System and method for realizing ground goods space management by using industrial camera visual identification
CN111762490A (en) * 2020-07-01 2020-10-13 泰森日盛集团有限公司 Finished product warehouse intelligent warehouse system convenient for warehouse location management
CN112184104A (en) * 2020-09-18 2021-01-05 安徽三禾一信息科技有限公司 Material stacking method for storage
CN114819285A (en) * 2022-03-31 2022-07-29 日日顺供应链科技股份有限公司 Warehouse setting method and warehouse entering method
CN114936829B (en) * 2022-07-26 2022-10-21 山东睿达电子科技有限责任公司 Product logistics storage is with intelligent identification management and control system based on RFID technique
CN115860642B (en) * 2023-02-02 2023-05-05 上海仙工智能科技有限公司 Visual identification-based warehouse-in and warehouse-out management method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102905113A (en) * 2012-10-10 2013-01-30 吉林省粮油科学研究设计院 Intelligent grain warehouse monitoring system based on image recognition technology
CN106097329A (en) * 2016-06-07 2016-11-09 浙江工业大学 A kind of container profile localization method based on rim detection
CN106485937A (en) * 2016-08-31 2017-03-08 国网山东省电力公司巨野县供电公司 A kind of storage space intelligent management
CN108122081A (en) * 2016-11-26 2018-06-05 沈阳新松机器人自动化股份有限公司 Robot and its inventory management method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170193430A1 (en) * 2015-12-31 2017-07-06 International Business Machines Corporation Restocking shelves based on image data


Also Published As

Publication number Publication date
CN108876241A (en) 2018-11-23


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20191122

Address after: No. 129, Juqiao Middle Street, Wuhou District, Chengdu, Sichuan Province, 610043

Applicant after: Sichuan Smart Motion Muniu Intelligent Technology Co., Ltd.

Address before: No. 1107, 11th Floor, Unit 1, Building 7, No. 399 Fucheng Road, West High-tech Zone, Chengdu, Sichuan Province, 610000

Applicant before: SICHUAN MONIULIUMA INTELLIGENT TECHNOLOGY CO., LTD.

GR01 Patent grant
GR01 Patent grant