CN114863250A - Container lockhole identification and positioning method, system and storage medium - Google Patents

Container lockhole identification and positioning method, system and storage medium

Info

Publication number
CN114863250A
CN114863250A
Authority
CN
China
Prior art keywords
container
lock hole
center
identification
lifting appliance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210353783.6A
Other languages
Chinese (zh)
Inventor
刘晓洋
缪煜洋
赵东阳
于耀泽
张昊
孙兴锐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huaiyin Institute of Technology
Original Assignee
Huaiyin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huaiyin Institute of Technology filed Critical Huaiyin Institute of Technology
Priority to CN202210353783.6A priority Critical patent/CN114863250A/en
Publication of CN114863250A publication Critical patent/CN114863250A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a container lock hole identification and positioning method, system and storage medium. The method comprises the following steps: S1: continuously capture top-view images of the container with a pair of stereo cameras mounted on one set of diagonal corners of a container spreader; S2: detect the lock hole positions in the top-view images with an optimized deep-learning object detection model for real-time identification, tracking and positioning, and control the spreader, according to the identified lock hole positions, to descend and move toward the center of the container until the height of the spreader above the container is less than a set threshold; S3: capture depth images of the container with the stereo cameras, preprocess them, and determine the positions of the four corners of the container by edge detection; S4: calculate the center positions of the lock holes from the geometric relationship between the four corners of the container and the lock hole centers. Because the lock hole positions are calculated from edges detected in the container depth images, the calculated positions are more accurate and less affected by complex outdoor scenes and lighting environments.

Description

Container lockhole identification and positioning method, system and storage medium
Technical Field
The invention belongs to the technical field of port automation and particularly relates to a container lock hole identification and positioning method, system and storage medium.
Background
An automated port adopts an unmanned management and operation mode, which saves labor costs; its automated cargo transfer and management are efficient, greatly increasing the throughput of port cargo and effectively raising economic benefits. Container grabbing is an important link in the cargo transfer process of an automated port; it is difficult to realize and technically complex, and the key lies in the identification and positioning of the container lock holes. Manual operation demands highly proficient workers, who are prone to fatigue, and carries a large safety risk.
At present, automated ports mainly use high-precision radar to identify and position container lock holes. Although effective, this approach is costly and hard to popularize at scale. Machine-vision-based methods are inexpensive, but in outdoor scenes with complex lighting they struggle to achieve accurate identification and high-precision positioning of the lock holes, which limits the application of automatic container grabbing technology.
Disclosure of Invention
Purpose of the invention: the invention aims to provide a container lock hole identification and positioning method that improves the positioning accuracy of the lock holes and adapts to complex outdoor scenes and lighting environments.
Another objective of the invention is to provide a system capable of implementing the above container lock hole identification and positioning method, and a storage medium storing a computer program implementing the method.
The technical solution is as follows: the invention relates to a container lock hole identification and positioning method comprising the following steps:
S1: continuously capture top-view images of the container with a pair of stereo cameras mounted on one set of diagonal corners of a container spreader;
S2: detect the lock hole positions in the top-view images with an optimized deep-learning object detection model for real-time identification, tracking and positioning, and control the spreader, according to the identified lock hole positions, to descend and move toward the center of the container until the height of the spreader above the container is less than a set threshold;
S3: capture depth images of the container with the stereo cameras, preprocess the depth images, and determine the positions of the four corners of the container by edge detection;
S4: calculate the center positions of the lock holes from the geometric relationship between the four corners of the container and the lock hole centers.
Further, the optimization of the deep-learning object detection model in step S2 includes: constraining the aspect ratio of the anchor boxes by the aspect ratio of the lock hole of a standard container, calculating the size of the lock hole in the image from the spreader height, the stacking level of the container and the stereo camera parameters, and constraining the anchor box size accordingly.
Further, the optimization of the deep-learning object detection model in step S2 also includes: detecting key points in the captured plane image, determining the anchor box generation positions from the cluster centers of the key point distribution density, and generating a corresponding number of anchor boxes.
Further, the deep-learning object detection model in step S2 is a YOLO V4 object detection model.
Further, step S2 includes:
S2.1: the two stereo cameras each capture a top-view image of one of a pair of diagonal corners of the container, and a deep-learning object detection model identifies the lock hole positions in the images;
S2.2: calculate the positional relationship between the spreader center and the container center from the spreader height and the lock hole positions;
S2.3: control the spreader to descend and move toward the container center according to that positional relationship, capture the top-view images again, and calculate the expected position range of the lock holes in the images from the spreader displacement;
S2.4: identify the lock hole positions within the calculated range with the deep-learning object detection model, and return to step S2.2 until the height of the spreader above the container is less than the set threshold.
The container lock hole identification and positioning system of the invention comprises a controller and at least two stereo cameras; the two stereo cameras are electrically connected to the controller and are respectively mounted on one set of diagonal corners of a container spreader.
The storage medium of the invention stores a computer program configured to implement the above container lock hole identification and positioning method when executed.
Beneficial effects: compared with the prior art, the invention has the following advantages. 1. The spreader is guided by the deep-learning object detection model, the stereo cameras capture depth images and identify the container edges, and the lock hole positions are calculated from those edges, so the calculated positions are more accurate and less affected by complex outdoor scenes and lighting. 2. The position range of the lock holes in the image is calculated from the spreader movement, the anchor box size of the detection model is adjusted in real time according to the spreader height, and the anchor box generation positions and number are determined by the key point distribution, which improves tracking accuracy and effectively reduces the computation load.
Drawings
Fig. 1 is a flowchart of a container lock hole identification and positioning method according to an embodiment of the present invention;
FIG. 2 is a schematic view of a mounting position of a stereo camera according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the lock hole position tracking method according to an embodiment of the present invention;
fig. 4 is a top view of the container.
Detailed Description
The technical solution of the invention is further explained below with reference to the drawings.
Referring to fig. 1, the container lock hole identification and positioning method according to the embodiment of the invention comprises the following steps:
s1: continuously acquiring an overlooking plane image of the container by utilizing a group of opposite-angle stereo cameras arranged on a container spreader;
s2: detecting the position of a lock hole in an overlook plane image by adopting an optimized deep learning target detection model to perform real-time identification, tracking and positioning, and controlling the lifting appliance to descend and move towards the center direction of the container according to the lock hole position obtained by identification until the height of the lifting appliance from the container is less than a set threshold value;
s3: the method comprises the following steps that a three-dimensional camera collects a depth image of a container, preprocesses the depth image and determines the positions of four corners of the container through edge detection;
s4: and calculating the center position of the lock hole according to the geometric relationship between the four corners of the container and the center of the lock hole.
Depth values change sharply near the container corners, and because container construction follows the corresponding standard, the positional relationship between the lock holes and the container corners is well defined and highly precise, as shown in fig. 4. Exploiting this, a pair of stereo cameras is mounted on one set of diagonal corners of the spreader to capture depth images of the corresponding diagonal corners of the container, as shown in fig. 2; the corner positions are identified in the depth images, and the lock hole positions are then calculated from the geometric structure of the container. This overcomes the noise introduced by outdoor scenes and complex lighting, gives higher detection reliability and accuracy, and helps the spreader engage the lock holes and lift the container more precisely.
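The corner-to-lock-hole step above can be sketched as plain geometry. This is a minimal illustration, not the patent's implementation: the inward offsets are assumed to be half the 178 mm × 162 mm aperture quoted later in the description, with the aperture taken as flush with the corner, and the coordinate conventions are the author's own.

```python
# Sketch (illustrative assumptions, not from the patent): derive a lock hole
# center from a detected container corner using fixed inward offsets.

def lockhole_center(corner_xy, inward_dir, offset_long=0.089, offset_lat=0.081):
    """corner_xy: (x, y) of a container corner in metres (top view).
    inward_dir: (sx, sy) with sx, sy in {+1, -1}, pointing toward the
    container interior along the long and short sides respectively.
    Offsets are half the assumed 178 mm x 162 mm aperture."""
    cx, cy = corner_xy
    sx, sy = inward_dir
    return (cx + sx * offset_long, cy + sy * offset_lat)

# All four lock hole centers from the corners of a nominal 20 ft footprint
# (~6.058 m x 2.438 m, corners at the extremes):
corners = {
    (0.0, 0.0): (+1, +1), (6.058, 0.0): (-1, +1),
    (0.0, 2.438): (+1, -1), (6.058, 2.438): (-1, -1),
}
centers = {c: lockhole_center(c, d) for c, d in corners.items()}
```

In the method itself only one set of diagonal corners is measured directly; the other pair is computed from the established coordinate system (claim 6, step S3.4), after which the same offset relation applies to every corner.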
Because a stereo camera generally works on binocular parallax, requires synchronized focusing, and has poor interference resistance, it is unsuitable for tracking the lock hole positions while the spreader is moving. In the early stage the stereo cameras therefore only capture plane images of the container; the lock hole positions in those images are tracked by a deep-learning object detection model, the position of the spreader relative to the container center is calculated and fed back to the controller, and the spreader is controlled to descend and move toward the container center. The detection model may be a convolutional neural network object detector such as YOLO V3, YOLO V4 or Faster R-CNN.
In this embodiment the deep-learning object detection model is an improved YOLO V4 model, trained as follows:
collect a large number of two-dimensional plane images of containers at different heights, time periods and lighting environments;
manually label the lock holes in the collected images with rectangular boxes to build a container lock hole detection data set;
train and test the YOLO V4 model several times on this data set, and select the model with the best test results as the detection model.
Meanwhile, to improve the adaptability of the YOLO V4 model to the container lock hole scene and raise recognition speed and accuracy, the aspect ratio, size, generation position and number of the anchor boxes it generates are optimized according to the shape, size and other characteristics of the lock hole, as follows:
a) the standard lock hole measures 178 mm × 162 mm, a length-to-width ratio of about 1.099:1, so the anchor box aspect ratio is constrained to the range 1:1 to 1.2:1;
b) when the spreader is at different heights, the size of the lock hole in the captured image is calculated from the spreader height, the stacking level of the container and the camera parameters; the anchor box size is then set from this image size, allowing a 10% error;
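Step b) is a pinhole projection. The sketch below is a hedged illustration, not the patent's code: the container height, the focal length in pixels, and the way the stacking level enters the camera-to-target distance are all assumptions chosen to show the calculation shape.

```python
# Sketch (illustrative assumptions): estimate the lock hole's pixel size from
# spreader height, stacking level and camera focal length, then derive the
# anchor-box size band with the 10% tolerance the description allows.

CONTAINER_HEIGHT = 2.591               # m, assumed standard box height
LOCKHOLE_W, LOCKHOLE_H = 0.178, 0.162  # m, per the description

def lockhole_pixel_size(spreader_height_m, stack_level, focal_px):
    """Camera-to-target distance: spreader height minus the height of the
    stack up to and including the target container's top (assumption)."""
    depth = spreader_height_m - stack_level * CONTAINER_HEIGHT
    # Pinhole relation: pixel size = focal_length_px * metric_size / depth.
    return (focal_px * LOCKHOLE_W / depth, focal_px * LOCKHOLE_H / depth)

def anchor_size_range(w_px, h_px, tol=0.10):
    """Admissible anchor-box width/height intervals with a +/-10% tolerance."""
    return ((w_px * (1 - tol), w_px * (1 + tol)),
            (h_px * (1 - tol), h_px * (1 + tol)))
```

With a 10 m spreader height, one container layer and a 600 px focal length, the hole projects to roughly 14 × 13 pixels, so the anchors are kept that small rather than at YOLO's generic defaults.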
c) key points are detected in the camera images with the ORB (Oriented FAST and Rotated BRIEF) algorithm, and the anchor box generation positions are determined from the key point distribution: positions where the key point density exceeds a threshold are clustered, the cluster centers are computed, and a number of anchor boxes proportional to the local key point density is generated around each center. The higher the density, the more anchor boxes are generated; the lower the density, the fewer; no anchor boxes are generated in regions without key points. This reduces the computation load and improves tracking efficiency.
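The density-driven anchor placement of step c) can be sketched as follows. This is an assumption-laden stand-in: a real pipeline would take ORB keypoints (e.g. from OpenCV's `cv2.ORB_create`), and the grid binning here replaces whatever density clustering the patent actually uses; cell size, threshold and the anchors-per-point factor are invented for illustration.

```python
# Sketch (illustrative assumptions): place anchor boxes only where keypoints
# are dense. Input is a plain list of (x, y) keypoint coordinates.

from collections import defaultdict

def anchor_centers(keypoints, cell=64, min_pts=3, anchors_per_pt=1):
    """Bin keypoints into a grid of `cell`-pixel squares. Cells holding at
    least `min_pts` keypoints yield a cluster center (the cell centroid)
    and an anchor count growing with density; empty cells yield nothing."""
    bins = defaultdict(list)
    for x, y in keypoints:
        bins[(int(x // cell), int(y // cell))].append((x, y))
    out = []
    for pts in bins.values():
        if len(pts) < min_pts:
            continue  # below the density threshold: no anchors here
        cx = sum(p[0] for p in pts) / len(pts)
        cy = sum(p[1] for p in pts) / len(pts)
        out.append(((cx, cy), len(pts) * anchors_per_pt))
    return out
```

Because regions without keypoints contribute no anchors at all, the detector evaluates far fewer candidate boxes than a dense anchor grid would, which is the computation saving the text claims.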
Meanwhile, to further improve tracking accuracy and reduce computation, during the descent of the spreader the position range of the lock hole in the image after movement is calculated from the spreader displacement, as shown in fig. 3 (left diagram), and the lock hole is then identified by the YOLO V4 model only within that estimated range.
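The displacement-based search window can be sketched with the same pinhole scale as before. This is a hedged illustration: the sign convention (the hole shifts opposite to the camera's motion in the image) and the 1.5× margin are assumptions, not values from the patent.

```python
# Sketch (illustrative assumptions): predict the lock hole's search window in
# the next frame from the spreader's lateral displacement.

def predicted_roi(prev_box, dx_m, dy_m, depth_m, focal_px, margin=1.5):
    """prev_box: (x, y, w, h) in pixels at the previous frame.
    dx_m, dy_m: spreader displacement in metres (camera frame).
    Returns an enlarged window centered on the predicted position."""
    x, y, w, h = prev_box
    px_per_m = focal_px / depth_m          # ground-to-pixel scale
    nx = x - dx_m * px_per_m               # image motion opposes camera motion
    ny = y - dy_m * px_per_m
    mw, mh = w * margin, h * margin        # pad the window for prediction error
    return (nx - (mw - w) / 2, ny - (mh - h) / 2, mw, mh)
```

Detection then runs only inside this window, so the model scores a small crop instead of the full frame while the spreader descends.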
In practice, because the precision and interference resistance of a stereo camera are limited, the captured depth image has missing depth values in some regions, so operations such as image enhancement and edge detection are needed before the container corner positions can be determined. In this embodiment, guided by a color image captured at the same moment as the depth image, the depth image is filtered and its holes filled by joint bilateral filtering; the highly salient edges in the enhanced depth image are detected with a Canny operator, and the intersection points of straight lines fitted to those edges are taken as the container corner positions.
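The final corner step reduces to intersecting two fitted edge lines. The sketch below assumes each edge has already been reduced to two points; in a real pipeline those would come from something like OpenCV's `cv2.Canny` followed by `cv2.fitLine` on the edge pixels, which is not shown here.

```python
# Sketch (illustrative, not the patent's code): container corner as the
# intersection of two straight lines fitted to detected edges.

def line_intersection(p1, p2, p3, p4):
    """Intersection of the line through p1-p2 with the line through p3-p4,
    using the standard determinant formula for 2-D lines."""
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(d) < 1e-12:
        raise ValueError("edges are parallel; no corner")
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / d,
            (a * (y3 - y4) - (y1 - y2) * b) / d)

# Two roughly perpendicular container edges meeting at a corner:
corner = line_intersection((0, 40), (100, 40), (50, 0), (50, 100))
```

Fitting lines to many edge pixels before intersecting averages out pixel-level noise in the Canny output, which is why the corner is taken from fitted lines rather than from a single edge pixel.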
The container lock hole identification system according to the embodiment, shown in fig. 2, comprises a controller and at least two stereo cameras. The controller is connected to the two cameras and collects the container images they capture; the two cameras are mounted on one set of diagonal corners of the spreader, and the controller may be shared with the spreader controller. In this embodiment an Intel RealSense D435i is used as the stereo camera. The storage medium according to the embodiment stores a computer program implementing the above container lock hole identification and positioning method.

Claims (8)

1. A container lock hole identification and positioning method, characterized by comprising the following steps:
S1: continuously capturing top-view images of the container with a pair of stereo cameras mounted on one set of diagonal corners of a container spreader;
S2: detecting the lock hole positions in the top-view images with an optimized deep-learning object detection model for real-time identification, tracking and positioning, and controlling the spreader, according to the identified lock hole positions, to descend and move toward the center of the container until the height of the spreader above the container is less than a set threshold;
S3: capturing depth images of the container with the stereo cameras, preprocessing the depth images, and determining the positions of the four corners of the container by edge detection;
S4: calculating the center positions of the lock holes from the geometric relationship between the four corners of the container and the lock hole centers.
2. The container lock hole identification and positioning method according to claim 1, characterized in that the optimization of the deep-learning object detection model in step S2 comprises: constraining the aspect ratio of the anchor boxes by the aspect ratio of the lock hole of a standard container, calculating the size of the lock hole in the image from the spreader height, the stacking level of the container and the stereo camera parameters, and constraining the anchor box size accordingly.
3. The container lock hole identification and positioning method according to claim 2, characterized in that the optimization of the deep-learning object detection model in step S2 further comprises: detecting key points in the captured planar image, determining the anchor box generation positions from the cluster centers of the key point distribution density, and generating a corresponding number of anchor boxes.
4. The container lock hole identification and positioning method according to claim 1, characterized in that the deep-learning object detection model in step S2 is a YOLO V4 object detection model.
5. The container lock hole identification and positioning method according to claim 1, characterized in that step S2 comprises:
S2.1: the two stereo cameras each capture a top-view image of one of a pair of diagonal corners of the container, and a deep-learning object detection model identifies the lock hole positions in the images;
S2.2: calculating the positional relationship between the spreader center and the container center from the spreader height and the lock hole positions;
S2.3: controlling the spreader to descend and move toward the container center according to that positional relationship, capturing the top-view images again, and calculating the expected position range of the lock holes in the images from the spreader displacement;
S2.4: identifying the lock hole positions within the calculated range with the deep-learning object detection model, and returning to step S2.2 until the height of the spreader above the container is less than the set threshold.
6. The container lock hole identification and positioning method according to claim 1, characterized in that step S3 comprises:
S3.1: the two stereo cameras each capture a depth image of one of a pair of diagonal corners of the container;
S3.2: filtering and hole-filling the depth images by joint bilateral filtering, guided by the color images captured by the two stereo cameras at the same moment;
S3.3: detecting the highly salient edges in the processed depth images with a Canny operator, and identifying the intersection points of straight lines fitted to the edges as the container corner positions;
S3.4: establishing a planar two-dimensional coordinate system from the obtained positions of one pair of diagonal corners, and calculating the positions of the other pair.
7. A container lock hole identification and positioning system for implementing the container lock hole identification and positioning method according to any one of claims 1 to 6, comprising a controller and at least two stereo cameras, wherein the two stereo cameras are electrically connected to the controller and are respectively mounted on one set of diagonal corners of a container spreader.
8. A storage medium storing a computer program, wherein the computer program is configured to implement the container lock hole identification and positioning method according to any one of claims 1 to 6 when executed.
CN202210353783.6A 2022-04-06 2022-04-06 Container lockhole identification and positioning method, system and storage medium Pending CN114863250A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210353783.6A CN114863250A (en) 2022-04-06 2022-04-06 Container lockhole identification and positioning method, system and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210353783.6A CN114863250A (en) 2022-04-06 2022-04-06 Container lockhole identification and positioning method, system and storage medium

Publications (1)

Publication Number Publication Date
CN114863250A true CN114863250A (en) 2022-08-05

Family

ID=82629624

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210353783.6A Pending CN114863250A (en) 2022-04-06 2022-04-06 Container lockhole identification and positioning method, system and storage medium

Country Status (1)

Country Link
CN (1) CN114863250A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115180512A (en) * 2022-09-09 2022-10-14 湖南洋马信息有限责任公司 Automatic loading and unloading method and system for container truck based on machine vision
CN115180512B (en) * 2022-09-09 2023-01-20 湖南洋马信息有限责任公司 Automatic loading and unloading method and system for container truck based on machine vision
CN117496189A (en) * 2024-01-02 2024-02-02 中国石油大学(华东) Rectangular tray hole identification method and system based on depth camera
CN117496189B (en) * 2024-01-02 2024-03-22 中国石油大学(华东) Rectangular tray hole identification method and system based on depth camera

Similar Documents

Publication Publication Date Title
CN112476434B (en) Visual 3D pick-and-place method and system based on cooperative robot
CN112070818B (en) Robot disordered grabbing method and system based on machine vision and storage medium
CN111089569B (en) Large box body measuring method based on monocular vision
CN107767423B (en) mechanical arm target positioning and grabbing method based on binocular vision
CN114863250A (en) Container lockhole identification and positioning method, system and storage medium
CN111260289A (en) Micro unmanned aerial vehicle warehouse checking system and method based on visual navigation
CN115213896A (en) Object grabbing method, system and equipment based on mechanical arm and storage medium
CN111340834B (en) Lining plate assembly system and method based on laser radar and binocular camera data fusion
CN110992422B (en) Medicine box posture estimation method based on 3D vision
CN113319859B (en) Robot teaching method, system and device and electronic equipment
CN114241269B (en) A collection card vision fuses positioning system for bank bridge automatic control
CN117086519B (en) Networking equipment data analysis and evaluation system and method based on industrial Internet
CN110110823A (en) Object based on RFID and image recognition assists in identifying system and method
CN115082559A (en) Multi-target intelligent sorting method and system for flexible parts and storage medium
CN114972421A (en) Workshop material identification tracking and positioning method and system
CN115115768A (en) Object coordinate recognition system, method, device and medium based on stereoscopic vision
CN115937810A (en) Sensor fusion method based on binocular camera guidance
CN116309882A (en) Tray detection and positioning method and system for unmanned forklift application
CN114463357B (en) Method for determining dynamic information of medium pile in real time in dense medium coal dressing
CN110928311A (en) Indoor mobile robot navigation method based on linear features under panoramic camera
CN115147764A (en) Pipe die bolt identification and positioning method based on multi-view vision
CN114998430A (en) Lifting appliance multi-view fusion positioning system for automatic grabbing and releasing box of quayside container crane
CN113510691A (en) Intelligent vision system of plastering robot
CN113065483A (en) Positioning method, positioning device, electronic equipment, medium and robot
CN111951334A (en) Identification and positioning method and lifting method for stacking steel billets based on binocular vision technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination