CN112329849A - Scrap steel stock yard unloading state identification method based on machine vision, medium and terminal - Google Patents

Scrap steel stock yard unloading state identification method based on machine vision, medium and terminal

Info

Publication number
CN112329849A
CN112329849A
Authority
CN
China
Prior art keywords
target
mag
discharging device
discharging
state
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011219153.7A
Other languages
Chinese (zh)
Inventor
庞殊杨
毛尚伟
袁钰博
刘斌
崔宇奥
李双江
尹波
李邈
龚强
贾鸿盛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CISDI Chongqing Information Technology Co Ltd
Original Assignee
CISDI Chongqing Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CISDI Chongqing Information Technology Co Ltd filed Critical CISDI Chongqing Information Technology Co Ltd
Priority to CN202011219153.7A priority Critical patent/CN112329849A/en
Publication of CN112329849A publication Critical patent/CN112329849A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method, a medium and a terminal for identifying the unloading state of a scrap steel yard based on machine vision. The method comprises the following steps: acquiring image data of a scrap steel unloading site and labeling it to form a data set; establishing a discharging device target detection model according to the data set; inputting the collected real-time image of the scrap steel unloading site into the discharging device target detection model to obtain a detection result; and judging the working state of the target discharging device according to the position information. By utilizing a convolutional neural network and machine vision in the special working scene of a scrap steel yard, the method can identify the working state of the discharging device in the scene in real time, for example, whether scrap steel is being extracted or the extraction of the scrap steel is finished.

Description

Scrap steel stock yard unloading state identification method based on machine vision, medium and terminal
Technical Field
The invention relates to the field of metallurgy and the field of image recognition, in particular to a method, a medium and a terminal for recognizing the unloading state of a scrap yard based on machine vision.
Background
In the steel smelting scrap scene, unloading the scrap steel from the discharging car is a particularly important operation in the transportation process. During unloading, the scrap steel is usually taken from the discharging car by an unloading device, i.e., a magnetic disc crane or a mechanical arm. Judging the unloading state of the unloading device quickly and accurately during this process is a necessary step for ensuring the smooth operation of an automated production line.
In the prior art, detection is performed manually. Because the unloading process is long, manual detection wastes human resources to a certain extent, and the reciprocating motion of large equipment also poses a certain safety risk. Therefore, it is important to realize automatic identification of the discharging state of the scrap steel yard.
Disclosure of Invention
In view of the above-mentioned shortcomings of the prior art, the present invention provides a method, medium and terminal for identifying the discharge state of a scrap yard based on machine vision, so as to solve the above-mentioned technical problems.
The invention provides a scrap steel yard unloading state identification method based on machine vision, which comprises the following steps:
acquiring image data of a scrap steel unloading site;
labeling the discharging device in the image data to form a data set;
establishing a target detection model of the discharging device according to the data set, and training the target detection model to obtain a trained target detection model of the discharging device;
inputting the collected real-time image of the scrap steel discharging site into the discharging device target detection model, and obtaining a detection result, wherein the detection result comprises an identification result of a target discharging device and position information of the target discharging device;
and judging the working state of the target discharging device according to the position information.
Optionally, the working state of the target discharging device includes a moving state and a static state, and the working state of the target discharging device is judged according to the position change of the target discharging device between two consecutive frames of the continuous video stream in the real-time images.
Optionally, position information of the discharging car is acquired, and when the target discharging device is judged to be in the static state, the discharging state is judged by comparing the position information of the target discharging device with that of the discharging car, wherein the discharging state includes that the discharging device is extracting scrap steel from the discharging car or has finished extracting the scrap steel.
Optionally, the unloading device includes a magnetic disc crane or a mechanical arm, the position of the unloading device in the image is marked by performing rectangular frame selection in the image data, and the position information of the target frame is recorded to form a data set, where the data set includes a training set, a test set, and a verification set.
Optionally, the effective information of the target discharging device is trained through the training set, the effective information includes picture basic attributes and labeling information, the picture basic attributes include file names, widths, heights and image depths, and the labeling information includes an upper left-corner abscissa, an upper left-corner ordinate, a lower right-corner abscissa and a lower right-corner ordinate of the target frame of the discharging device in the image and the category of the target object.
Optionally, the position information content and format of the disk crane are as follows:
[MAGxmin,MAGymin,MAGxmax,MAGymax]
wherein MAGxmin, MAGymin, MAGxmax and MAGymax are respectively the upper-left abscissa, the upper-left ordinate, the lower-right abscissa and the lower-right ordinate of the disc crane target frame;
the position information content and format of the mechanical arm are as follows:
[ARMxmin,ARMymin,ARMxmax,ARMymax]
the system comprises a mechanical arm target frame, a first detection module, a second detection module, a third detection module, a fourth detection module and a fourth detection module, wherein the first detection module is used for detecting the position of the mechanical arm target frame;
judging the working state of the target disk crane according to a first condition:
|(MAGxmin)t-(MAGxmin)t-1|<P1
|(MAGymin)t-(MAGymin)t-1|<P1
|(MAGxmax)t-(MAGxmax)t-1|<P1
|(MAGymax)t-(MAGymax)t-1|<P1
wherein P1 is a preset first threshold and t and t-1 denote the current frame and the previous frame; when the first condition is satisfied for n consecutive frames of the video, the target disc crane is judged to be in the static state, and otherwise it is judged to be in the moving state.
Optionally, the unloading state is judged through a second condition:
|VEHxmin-MAGxmin|<P2
|VEHxmax-MAGxmax|<P2
|VEHymax-MAGymax|<P2
|VEHymin-MAGymin|<P2
wherein P2 is a preset second threshold, and VEHxmin, VEHymin, VEHxmax and VEHymax are respectively the upper-left abscissa, the upper-left ordinate, the lower-right abscissa and the lower-right ordinate of the discharging car target frame;
and when the second condition is satisfied, the target discharging car is judged to be located within the position area of the discharging device and it is output that the discharging device is sucking or grabbing the scrap; if any one of the conditions is not satisfied, it is judged that the target discharging device has finished extracting the scrap steel.
Optionally, an image acquisition module is arranged at a vertical joint of the travelling track and the upright column, the image acquisition module is positioned obliquely above the discharging device, and image data of a waste steel discharging site is acquired through the image acquisition module.
The present invention also provides a computer-readable storage medium having stored thereon a computer program characterized in that: the computer program, when executed by a processor, implements the method of any of the above.
The present invention also provides an electronic terminal, comprising: a processor and a memory;
the memory is adapted to store a computer program and the processor is adapted to execute the computer program stored by the memory to cause the terminal to perform the method as defined in any one of the above.
The invention has the beneficial effects that: according to the method, the medium and the terminal for identifying the discharging state of the scrap steel yard based on machine vision, a convolutional neural network and machine vision are utilized so that, in the special working scene of a scrap steel yard, the working state of the discharging device in the scene can be identified in real time, for example, whether scrap steel is being extracted or the extraction of the scrap steel is finished.
Drawings
FIG. 1 is a schematic flow chart of a scrap yard discharge state identification method based on machine vision in the embodiment of the invention.
FIG. 2 is a schematic diagram of the installation positions of the cameras in an application scenario of the steel scrap yard discharge state identification method based on machine vision in the embodiment of the present invention.
FIG. 3 is a schematic view of a camera in the steel scrap yard discharge state identification method based on machine vision in the embodiment of the invention.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention, and the components related to the present invention are only shown in the drawings rather than drawn according to the number, shape and size of the components in actual implementation, and the type, quantity and proportion of the components in actual implementation may be changed freely, and the layout of the components may be more complicated.
In the following description, numerous details are set forth to provide a more thorough explanation of embodiments of the present invention, however, it will be apparent to one skilled in the art that embodiments of the present invention may be practiced without these specific details, and in other embodiments, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring embodiments of the present invention.
As shown in fig. 1, the method for identifying the discharging state of the scrap yard based on machine vision in this embodiment includes:
s1, obtaining image data of a waste steel discharging site;
s2, labeling the discharging device in the image data to form a data set;
s3, establishing a target detection model of the discharging device according to the data set, training the target detection model, and obtaining the trained target detection model of the discharging device;
s4, inputting the acquired real-time image of the steel scrap unloading site to the unloading device target detection model to obtain a detection result, wherein the detection result comprises an identification result of the target unloading device and position information of the target unloading device;
and S5, judging the working state of the target discharging device according to the position information.
As shown in fig. 1 and 2, in step S1 of the present embodiment, an image acquisition module is provided in the steel smelting scrap yard, and the cameras are arranged as follows: a camera is installed at the vertical joint of the travelling track and the upright column, positioned obliquely above the discharging device, so as to acquire image data of the scrap steel unloading site. Optionally, the image acquisition module in this embodiment may be an industrial camera, a monitoring camera, or the like.
In step S2 of this embodiment, the acquired images containing the unloading devices (the magnetic disc crane and the mechanical arm) are labeled to create a data set. Image labeling is performed on the disc crane and mechanical arm pictures captured in the specific scene: the position of the unloading device in each image is marked with the rectangular frame of an image labeling tool, and the position information of the target frame is recorded to produce an unloading device target data set. The data set is divided into three parts: a training set, a test set and a verification set, and the training set data is used to train the unloading device target detection model. The effective information of the training set that can be used for training after image labeling comprises picture basic attributes and labeling information. The picture basic attributes are: file name, width, height and image depth. The labeling information comprises the upper-left abscissa, the upper-left ordinate, the lower-right abscissa and the lower-right ordinate of the unloading device target frame in the image, and the category of the target object.
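The labeling and splitting steps above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the record field names, the split ratios and the helper names are assumptions, since the patent specifies only which attributes are recorded, not a file format.

```python
import random

def make_record(filename, width, height, depth, xmin, ymin, xmax, ymax, category):
    # One labeled image record: picture basic attributes plus the target-frame
    # annotation described above. Field names are illustrative assumptions.
    return {
        "filename": filename,
        "width": width, "height": height, "depth": depth,
        "bbox": (xmin, ymin, xmax, ymax),  # upper-left and lower-right corners
        "category": category,              # e.g. "MAG" (disc crane) or "ARM"
    }

def split_dataset(records, train=0.7, test=0.2, seed=0):
    # Shuffle, then cut into training / test / verification subsets.
    # The 70/20/10 ratio is an assumption; the patent only names the three parts.
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train)
    n_test = int(len(shuffled) * test)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_test],
            shuffled[n_train + n_test:])
```

The training set returned by `split_dataset` would then feed whatever detector training pipeline is used in step S3.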
In step S3 of this embodiment, a discharging device target detection model is established according to the data set and trained: a neural network is built and trained with the data set to obtain the trained discharging device target detection model. The target detection model in this embodiment is a convolutional neural network model based on deep learning; by learning the target characteristics of the objects within the identification frames of the images in the discharging device target training set, the discharging device target detection model is finally obtained.
In this embodiment, the target detection model is invoked to identify the discharging device target in the input video stream and record its position information. In the recognition scene, the discharging device is a magnetic disc crane or a mechanical arm; the algorithm calls the model to obtain the position information of the discharging device in the input image. The position area of the discharging device in the image is a rectangle determined by the coordinates of its upper-left and lower-right corners. For a magnetic disc crane (Magnet, abbreviated as MAG), the position information content and format are as follows:
[MAGxmin,MAGymin,MAGxmax,MAGymax]
For a mechanical arm (Mechanical Arm, abbreviated as ARM), the position information content and format are as follows:
[ARMxmin,ARMymin,ARMxmax,ARMymax]。
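A minimal sketch of packing the detector's output into the [xmin, ymin, xmax, ymax] format above. The shape of `raw_detections` is an assumption of this sketch (a list of (label, xmin, ymin, xmax, ymax) tuples standing in for whatever the trained model actually returns); the detector call itself is omitted.

```python
def format_positions(raw_detections):
    # Map each detected class label ("MAG", "ARM", "VEH", ...) to its box in
    # the [xmin, ymin, xmax, ymax] convention (upper-left, lower-right corners).
    positions = {}
    for label, xmin, ymin, xmax, ymax in raw_detections:
        positions[label] = [xmin, ymin, xmax, ymax]
    return positions
```

Downstream steps can then read, e.g., `positions["MAG"]` for the disc crane box of the current frame.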
In this embodiment, the working state of the target discharging device includes a moving state and a static state, and is judged from the position change of the target discharging device between two consecutive frames of the continuous video stream in the real-time images. Specifically, the real-time position information returned by the discharging device target detection model is used to judge whether the discharging device is static: whether the device moves or is still is judged by checking whether the change of its position information between successive images over n frames of the continuous video stream exceeds a threshold. Taking the disc crane as an example:
|(MAGxmin)t-(MAGxmin)t-1|<P1
|(MAGymin)t-(MAGymin)t-1|<P1
|(MAGxmax)t-(MAGxmax)t-1|<P1
|(MAGymax)t-(MAGymax)t-1|<P1
wherein t and t-1 denote the current frame and the previous frame and P1 is the first threshold; the variation of the position information of the discharging device is calculated between the two frames, and if the above formulas are satisfied for n consecutive frames of the video, the discharging device is in the static state and the next judgment can be made. Otherwise, it is in the moving state and no further detection is carried out.
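The first condition can be sketched as a short function, assuming the per-frame boxes are given as [xmin, ymin, xmax, ymax] lists (the function name and interface are illustrative, not from the patent):

```python
def is_stationary(boxes, p1):
    """Return True if, over the given sequence of per-frame boxes
    [xmin, ymin, xmax, ymax], every coordinate changes by less than the
    threshold p1 between each frame t-1 and frame t (the first condition)."""
    for prev, cur in zip(boxes, boxes[1:]):
        if any(abs(c - p) >= p1 for p, c in zip(prev, cur)):
            return False  # some coordinate jumped by >= p1: moving state
    return True  # all n frames satisfied the condition: static state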
In this embodiment, for a stationary discharging device, the acquired position information of the discharging device is compared with the position information of the discharging carriage, and it is determined that the discharging device is extracting the scrap steel from the discharging carriage or finishing the extraction of the scrap steel according to the preset threshold condition.
The unloading state is judged by analyzing the range of the difference between the position information of the discharging car and that of the discharging device position area (MAG or ARM). The condition for judging whether the discharging car target is located within the position area of the discharging device is:
|VEHxmin-MAGxmin|<P2
|VEHxmax-MAGxmax|<P2
|VEHymax-MAGymax|<P2
|VEHymin-MAGymin|<P2
wherein P2 is the second threshold. If all of the above conditions are satisfied, the discharging car target is judged to be located within the position area of the discharging device, and it is output that the discharging device is sucking or grabbing the scrap. If any one of the conditions is not satisfied, the discharging car target is judged not to be located within the position area of the discharging device, and it is output that the discharging device has finished extracting the scrap steel.
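The second condition can be sketched the same way, comparing the discharging car box with the device box coordinate by coordinate (the function name and the "extracting"/"finished" labels are illustrative assumptions):

```python
def discharge_state(veh_box, dev_box, p2):
    """Second condition: if each corresponding coordinate of the discharging
    car box and the device box (MAG or ARM) differs by less than p2, the car
    is within the device's position area and scrap is still being extracted;
    otherwise extraction is judged to be finished."""
    within = all(abs(v - d) < p2 for v, d in zip(veh_box, dev_box))
    return "extracting" if within else "finished"
```

This check is only meaningful after `is_stationary`-style logic has judged the device static, matching the order of judgments in the method.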
The present invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements any of the methods in the present embodiments.
The present invention also provides an electronic terminal, comprising: a processor and a memory;
the memory is used for storing computer programs, and the processor is used for executing the computer programs stored by the memory so as to make the terminal execute any method in the embodiment.
The computer-readable storage medium in the present embodiment may be understood by those skilled in the art as follows: all or part of the steps for implementing the above method embodiments may be performed by hardware associated with a computer program. The computer program may be stored in a computer readable storage medium. When executed, the program performs steps comprising the above-described method embodiments; and the aforementioned storage medium includes: ROM, RAM, magnetic or optical disks, etc. may store the program code.
The electronic terminal provided by the embodiment comprises a processor, a memory, a transceiver and a communication interface, wherein the memory and the communication interface are connected with the processor and the transceiver and are used for realizing mutual communication, the memory is used for storing a computer program, the communication interface is used for carrying out communication, and the processor and the transceiver are used for operating the computer program.
In this embodiment, the memory may include a random access memory (RAM) and may also include a non-volatile memory, such as at least one disk memory.
The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In the above embodiments, unless otherwise specified, the description of common objects by using "first", "second", etc. ordinal numbers only indicate that they refer to different instances of the same object, rather than indicating that the objects being described must be in a given sequence, whether temporally, spatially, in ranking, or in any other manner. In the above-described embodiments, reference in the specification to "the present embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least some embodiments, but not necessarily all embodiments. The multiple occurrences of "the present embodiment" do not necessarily all refer to the same embodiment.
In the embodiments described above, although the present invention has been described in conjunction with specific embodiments thereof, many alternatives, modifications and variations of these embodiments will be apparent to those skilled in the art in light of the foregoing description. For example, other memory structures (e.g., dynamic RAM (DRAM)) may use the discussed embodiments. The embodiments of the invention are intended to embrace all such alternatives, modifications and variances that fall within the broad scope of the appended claims.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The invention is operational with numerous general purpose or special purpose computing system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The foregoing embodiments are merely illustrative of the principles of the present invention and its efficacy, and are not to be construed as limiting the invention. Any person skilled in the art can modify or change the above-mentioned embodiments without departing from the spirit and scope of the present invention. Accordingly, it is intended that all equivalent modifications or changes which can be made by those skilled in the art without departing from the spirit and technical spirit of the present invention be covered by the claims of the present invention.

Claims (10)

1. A scrap steel yard unloading state identification method based on machine vision is characterized by comprising the following steps:
acquiring image data of a scrap steel unloading site;
labeling the discharging device in the image data to form a data set;
establishing a target detection model of the discharging device according to the data set, and training the target detection model to obtain a trained target detection model of the discharging device;
inputting the collected real-time image of the scrap steel discharging site into the discharging device target detection model, and obtaining a detection result, wherein the detection result comprises an identification result of a target discharging device and position information of the target discharging device;
and judging the working state of the target discharging device according to the position information.
2. The machine vision-based scrap yard discharging state identifying method according to claim 1, wherein the working state of the target discharging device comprises a moving state and a static state, and the working state of the target discharging device is judged through the position change of the target discharging device between two consecutive frames of the continuous video stream in the real-time images.
3. The machine vision-based scrap yard discharging state recognition method according to claim 2, wherein position information of the discharging carriage is acquired, and when the target discharging device is determined to be in the stationary state, the discharging state including that the discharging device is extracting scrap from the discharging carriage or that extraction of scrap is completed is determined by comparing the position information of the target discharging device with the position information of the discharging carriage.
4. The machine vision-based scrap yard discharging state recognition method according to claim 2, wherein the discharging device comprises a magnetic disc crane or a mechanical arm, rectangular frame selection is performed in the image data to mark the position of the discharging device in the image, and position information of a target frame is recorded to form a data set, wherein the data set comprises a training set, a testing set and a verification set.
5. The machine vision-based scrap yard discharge state recognition method according to claim 4, characterized in that effective information of a target discharge device is trained through the training set, the effective information includes picture basic attributes and labeling information, the picture basic attributes include file name, width, height, image depth, and the labeling information includes an upper left abscissa, an upper left ordinate, a lower right abscissa and a lower right ordinate of a discharge device target frame in an image and a category of a target object.
6. The machine vision-based scrap yard discharge state identifying method according to claim 5,
the position information content and format of the magnetic disc crane are as follows:
[MAGxmin,MAGymin,MAGxmax,MAGymax]
wherein MAGxmin, MAGymin, MAGxmax and MAGymax are respectively the upper-left abscissa, the upper-left ordinate, the lower-right abscissa and the lower-right ordinate of the magnetic disc crane target frame;
the position information content and format of the mechanical arm are as follows:
[ARMxmin,ARMymin,ARMxmax,ARMymax]
wherein ARMxmin, ARMymin, ARMxmax and ARMymax are respectively the upper-left abscissa, the upper-left ordinate, the lower-right abscissa and the lower-right ordinate of the mechanical arm target frame;
the working state of the target magnetic disc crane is judged according to a first condition:
|(MAGxmin)_t - (MAGxmin)_(t-1)| < P1
|(MAGymin)_t - (MAGymin)_(t-1)| < P1
|(MAGxmax)_t - (MAGxmax)_(t-1)| < P1
|(MAGymax)_t - (MAGymax)_(t-1)| < P1
wherein P1 is a preset first threshold, and t and t-1 denote the current video frame and the previous frame; when the first condition holds for n consecutive video frames, the target magnetic disc crane is judged to be in the stationary state, otherwise it is judged to be in the moving state.
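A minimal sketch of the stationary-state check in claim 6, assuming each frame's detection yields one (xmin, ymin, xmax, ymax) target frame. This is an illustrative implementation, not the patent's own code; the function name and the interpretation of "n consecutive frames" as n-1 pairwise frame-to-frame comparisons are assumptions.

```python
def is_stationary(boxes, p1, n):
    """Return True if the target frame satisfied the first condition over
    n consecutive video frames (sketch of claim 6, assumptions noted above).

    boxes: list of (xmin, ymin, xmax, ymax) tuples, one per frame, ordered
           oldest to newest; only the last n frames are examined.
    p1:    preset first threshold on per-coordinate movement between frames.
    """
    if len(boxes) < n:
        return False  # not enough history to decide
    recent = boxes[-n:]
    for prev, curr in zip(recent, recent[1:]):
        # every corner coordinate must move by less than p1 between frames
        if any(abs(c - p) >= p1 for c, p in zip(curr, prev)):
            return False  # moving state
    return True  # stationary state
```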
7. The machine vision-based scrap yard discharging state identifying method according to claim 6, wherein the discharging state is judged by a second condition:
|VEHxmin-MAGxmin|<P2
|VEHxmax-MAGxmax|<P2
|VEHymax-MAGymax|<P2
|VEHymin-MAGymin|<P2
wherein P2 is a preset second threshold, and VEHxmin, VEHymin, VEHxmax and VEHymax are respectively the upper-left-corner abscissa, upper-left-corner ordinate, lower-right-corner abscissa and lower-right-corner ordinate of the discharging-vehicle target frame;
when the second condition is met, the target discharging vehicle is judged to lie within the position area of the discharging device, and the output is that the discharging device is sucking or grabbing scrap; if any one of the sub-conditions is not met, the target discharging device is judged to have completed extracting the scrap steel.
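The second condition of claim 7 can be sketched as a direct coordinate comparison between the two target frames. This is an illustrative sketch, not the patent's implementation; the function name and the returned state strings are assumptions.

```python
def discharge_state(mag_box, veh_box, p2):
    """Sketch of the second condition (claim 7): compare the magnetic disc
    crane target frame with the discharging-vehicle target frame.

    mag_box, veh_box: (xmin, ymin, xmax, ymax) tuples for the crane and the
    discharging vehicle; p2 is the preset second threshold.
    """
    # every corresponding corner coordinate must differ by less than p2
    if all(abs(v - m) < p2 for v, m in zip(veh_box, mag_box)):
        return "extracting"            # vehicle lies in the device's area
    return "extraction_complete"       # any sub-condition failed
```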
8. The machine vision-based scrap yard discharging state identification method according to any one of claims 1 to 7, wherein an image acquisition module is arranged at the vertical joint of a travelling rail and a stand column, the image acquisition module is positioned obliquely above a discharging device, and image data of a scrap discharging site is acquired through the image acquisition module.
9. A computer-readable storage medium having stored thereon a computer program, characterized in that: the computer program, when executed by a processor, implements the method of any one of claims 1 to 8.
10. An electronic terminal, comprising: a processor and a memory;
the memory is for storing a computer program and the processor is for executing the computer program stored by the memory to cause the terminal to perform the method of any of claims 1 to 8.
CN202011219153.7A 2020-11-04 2020-11-04 Scrap steel stock yard unloading state identification method based on machine vision, medium and terminal Pending CN112329849A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011219153.7A CN112329849A (en) 2020-11-04 2020-11-04 Scrap steel stock yard unloading state identification method based on machine vision, medium and terminal

Publications (1)

Publication Number Publication Date
CN112329849A true CN112329849A (en) 2021-02-05

Family

ID=74315961

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011219153.7A Pending CN112329849A (en) 2020-11-04 2020-11-04 Scrap steel stock yard unloading state identification method based on machine vision, medium and terminal

Country Status (1)

Country Link
CN (1) CN112329849A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115070162A (en) * 2022-07-19 2022-09-20 中冶节能环保有限责任公司 Intelligent environment-friendly cutting method and device for bulk steel scraps

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019057169A1 (en) * 2017-09-25 2019-03-28 腾讯科技(深圳)有限公司 Text detection method, storage medium, and computer device
WO2019184124A1 (en) * 2018-03-30 2019-10-03 平安科技(深圳)有限公司 Risk-control model training method, risk identification method and apparatus, and device and medium
WO2020057355A1 (en) * 2018-09-21 2020-03-26 深圳市九洲电器有限公司 Three-dimensional modeling method and device
CN111524112A (en) * 2020-04-17 2020-08-11 中冶赛迪重庆信息技术有限公司 Steel chasing identification method, system, equipment and medium
CN111524113A (en) * 2020-04-17 2020-08-11 中冶赛迪重庆信息技术有限公司 Lifting chain abnormity identification method, system, equipment and medium
CN111553950A (en) * 2020-04-30 2020-08-18 中冶赛迪重庆信息技术有限公司 Steel coil centering judgment method, system, medium and electronic terminal
WO2020164270A1 (en) * 2019-02-15 2020-08-20 平安科技(深圳)有限公司 Deep-learning-based pedestrian detection method, system and apparatus, and storage medium
CN111626117A (en) * 2020-04-22 2020-09-04 杭州电子科技大学 Garbage sorting system and method based on target detection
CN111724338A (en) * 2020-03-05 2020-09-29 中冶赛迪重庆信息技术有限公司 Turntable abnormity identification method, system, electronic equipment and medium

Similar Documents

Publication Publication Date Title
Fang et al. Falls from heights: A computer vision-based approach for safety harness detection
CN108960067B (en) Real-time train driver action recognition system and method based on deep learning
CN110298300B (en) Method for detecting vehicle illegal line pressing
CN104239867B (en) License plate locating method and system
WO2021082662A1 (en) Method and apparatus for assisting user in shooting vehicle video
CN112348791B (en) Intelligent scrap steel detecting and judging method, system, medium and terminal based on machine vision
CN111310645A (en) Overflow bin early warning method, device, equipment and storage medium for cargo accumulation amount
CN112686923A (en) Target tracking method and system based on double-stage convolutional neural network
CN111178119A (en) Intersection state detection method and device, electronic equipment and vehicle
CN112329849A (en) Scrap steel stock yard unloading state identification method based on machine vision, medium and terminal
CN113808200B (en) Method and device for detecting moving speed of target object and electronic equipment
CN112348894B (en) Method, system, equipment and medium for identifying position and state of scrap steel truck
CN112749735A (en) Converter tapping steel flow identification method, system, medium and terminal based on deep learning
CN112037198B (en) Hot-rolled bar fixed support separation detection method, system, medium and terminal
CN109816588B (en) Method, device and equipment for recording driving trajectory
CN117218633A (en) Article detection method, device, equipment and storage medium
CN101943575B (en) Test method and test system for mobile platform
CN112053339B (en) Rod finished product warehouse driving safety monitoring method, device and equipment based on machine vision
CN113963233A (en) Target detection method and system based on double-stage convolutional neural network
CN113449617A (en) Track safety detection method, system, device and storage medium
CN114550062A (en) Method and device for determining moving object in image, electronic equipment and storage medium
CN114399671A (en) Target identification method and device
CN112037199A (en) Hot rolled bar collecting and finishing roller way blanking detection method, system, medium and terminal
CN114596239A (en) Loading and unloading event detection method and device, computer equipment and storage medium
CN112308848A (en) Method and system for identifying state of baffle plate of scrap steel truck, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 401329 No. 5-6, building 2, No. 66, Nongke Avenue, Baishiyi Town, Jiulongpo District, Chongqing

Applicant after: MCC CCID information technology (Chongqing) Co.,Ltd.

Address before: 20-24 / F, No.7 Longjing Road, North New District, Yubei District, Chongqing

Applicant before: CISDI CHONGQING INFORMATION TECHNOLOGY Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20210205
