CN106986272B - Container truck anti-hoisting method and system based on machine vision tracking - Google Patents

Container truck anti-hoisting method and system based on machine vision tracking

Info

Publication number
CN106986272B
CN106986272B (Application CN201710104528.7A)
Authority
CN
China
Prior art keywords
tracking
frame
target
container
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710104528.7A
Other languages
Chinese (zh)
Other versions
CN106986272A (en)
Inventor
郑智辉
唐波
张聪
韦海萍
高仕博
肖利平
张辉
周斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Aerospace Automatic Control Research Institute
Original Assignee
Beijing Aerospace Automatic Control Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Aerospace Automatic Control Research Institute filed Critical Beijing Aerospace Automatic Control Research Institute
Priority to CN201710104528.7A priority Critical patent/CN106986272B/en
Publication of CN106986272A publication Critical patent/CN106986272A/en
Application granted granted Critical
Publication of CN106986272B publication Critical patent/CN106986272B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B66 HOISTING; LIFTING; HAULING
    • B66C CRANES; LOAD-ENGAGING ELEMENTS OR DEVICES FOR CRANES, CAPSTANS, WINCHES, OR TACKLES
    • B66C 13/00 Other constructional features or details
    • B66C 13/18 Control systems or devices
    • B66C 13/48 Automatic control of crane drives for producing a single or repeated working cycle; Programme control
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B66 HOISTING; LIFTING; HAULING
    • B66C CRANES; LOAD-ENGAGING ELEMENTS OR DEVICES FOR CRANES, CAPSTANS, WINCHES, OR TACKLES
    • B66C 13/00 Other constructional features or details
    • B66C 13/16 Applications of indicating, registering, or weighing devices
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B66 HOISTING; LIFTING; HAULING
    • B66C CRANES; LOAD-ENGAGING ELEMENTS OR DEVICES FOR CRANES, CAPSTANS, WINCHES, OR TACKLES
    • B66C 15/00 Safety gear
    • B66C 15/06 Arrangements or use of warning devices
    • B66C 15/065 Arrangements or use of warning devices electrical
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V 20/42 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B66 HOISTING; LIFTING; HAULING
    • B66C CRANES; LOAD-ENGAGING ELEMENTS OR DEVICES FOR CRANES, CAPSTANS, WINCHES, OR TACKLES
    • B66C 2700/00 Cranes
    • B66C 2700/08 Electrical assemblies or electrical control devices for cranes, winches, capstans or electrical hoists
    • B66C 2700/084 Protection measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a container truck anti-hoisting system based on machine vision tracking, comprising a tyre crane, a camera, a container truck, a container, a video alarm unit and a central control room control device. The camera is mounted on the bottom support of the tyre crane at a height flush with the frame of the container truck, with its field of view perpendicular to the travel direction of the container truck. The camera feeds the captured video images to the video alarm unit, which processes them with a tracking algorithm based on fast Fourier transform online learning; when it judges that the container truck is being lifted, it sends an alarm signal to the central control room control device. The system automatically detects whether the container has separated from the container truck, thereby avoiding hoisting the truck together with the container through human error and providing safety pre-control for storage yard operations in the container area.

Description

Container truck anti-hoisting method and system based on machine vision tracking
Technical Field
The invention belongs to the technical field of intelligent anti-hoisting of container trucks, and particularly relates to an autonomous tracking and anti-hoisting early warning method for a container truck based on machine vision tracking.
Background
With the rapid development of global container transport, the operations of modern container terminals and storage yards are increasingly busy and their working environments increasingly complex. Containers are currently hoisted with gantry cranes or tyre cranes, and because of the particular nature of container loading and unloading, it frequently happens during these operations that the container truck is carried up together with the container because the truck's lock pins have not been fully released. When a tyre crane hoists a container in a storage yard, three working conditions generally occur: (1) the container has separated from the container truck, leaving a relatively uniform gap between them, called the "fully separated state"; (2) the container has only partially separated from the truck, i.e. one end has separated while the other has not, leaving a wedge-shaped gap that is wide on one side and narrow on the other, called the "incompletely separated state" or "single-side separated state"; (3) neither end of the container has separated from the truck and there is essentially no gap between them, called the "fully non-separated state". Hoisting under conditions (2) and (3) can, in severe cases, damage the container and the truck and even cause casualties among drivers.
To prevent accidents in which the container truck is carried up together with the container, docks and storage yards usually rely on camera monitoring: the tyre crane driver watches the separation of the container from the truck on a cab display, the hoisting operator communicates with the driver by telephone or radio, and accidents are avoided purely by manual operation, that is, entirely by "human prevention". However, container terminal yards are large, operating vehicles are numerous and traffic conditions are complex; in such large operating sites the allocation and management of mechanical equipment is critical, relying on people alone easily leads to fatigue and negligence, and issuing work orders and passing production information over traditional walkie-talkies is inconvenient in many ways and greatly reduces operating efficiency. Clearly, the current "human prevention" measures cannot effectively avoid such accidents.
Therefore, to cope with the increasingly heavy and complex container transport industry, a "technical prevention" measure is needed that automatically detects, based on machine vision, whether the container has separated from the container truck and automatically stops the hoisting mechanism, or prompts the operator to stop it, when separation has not occurred, thereby avoiding accidents in which the truck is lifted together with the container.
Disclosure of Invention
The technical problem to be solved by the invention is to provide an autonomous tracking and anti-hoisting early-warning method for container trucks based on machine vision tracking, which monitors the travel area of the container truck in real time and issues warning signals for potential lifting accidents, so that lifting accidents of container trucks in storage yards can be effectively prevented; the method is suited to the increasingly complex and heavy container transport industry.
The invention discloses a container truck anti-hoisting method based on machine vision tracking, which comprises the following steps:
step one, a camera captures a video image at the current time t, obtaining a video image of the area corresponding to the container truck, and the width and height of the video image are determined;
step two, the lower half of the image in the camera field of view is selected as the tracking region of interest, and a number of tracking sub-regions is defined within it; the number of tracking sub-regions is an odd number not less than 3, each tracking sub-region is rectangular, and the positions of the sub-regions satisfy a set topological relation;
step three, the camera captures the video image at the next time t+1, and the video alarm unit applies a tracking algorithm based on fast Fourier transform online learning to each sub-region from step two to estimate the position of each sub-region's target bounding box;
step four, judging whether more than half of the sub-regions have moved vertically upward by more than N pixels according to their tracking results; if so, the frame of the container truck is judged to be lifted; if not, the frame is judged not to be lifted, the sub-regions are automatically reset according to step two at intervals of T seconds, and steps three and four are repeated;
step five, if the container truck frame is judged to be lifted, the video alarm unit sends an early-warning signal to the central control room control device to remind the crane driver to suspend hoisting; otherwise, the crane driver completes normal hoisting according to the procedure;
wherein the camera field of view is perpendicular to the travel direction of the container truck and the camera feeds the captured video images to the video alarm unit; t = 1, 2, 3, ..., M, where M is the total number of tracked video frames, N is a pixel-count threshold, and T is the set time interval for video image acquisition.
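For illustration only, the following is a minimal sketch of the per-frame decision logic of steps three to five under the above definitions; the camera, tracker and alarm objects are hypothetical placeholders introduced here, not names used by the patent.

```python
import time

# Hypothetical sketch of steps three to five: camera.read(), tracker.init() and
# tracker.update() stand in for the image capture and for the FFT-online-learning
# sub-region trackers; they are illustrative names, not taken from the patent.

def is_frame_lifted(prev_centres, curr_centres, n_pixels):
    """True if more than half of the sub-regions moved up by more than n_pixels.

    Centres are (x, y) pairs with image y growing downward, so upward motion
    means the y coordinate decreases.
    """
    votes = sum(1 for (_, y0), (_, y1) in zip(prev_centres, curr_centres)
                if (y0 - y1) > n_pixels)
    return votes > len(curr_centres) // 2


def monitor(camera, tracker, sub_regions, n_pixels=10, reset_period_s=5.0, alarm=print):
    """Track the sub-regions frame by frame, vote, and raise an alarm when lifted."""
    prev = [tracker.init(camera.read(), r) for r in sub_regions]
    last_reset = time.time()
    while True:
        frame = camera.read()
        curr = [tracker.update(frame, r) for r in sub_regions]
        if is_frame_lifted(prev, curr, n_pixels):
            alarm("container truck frame lifted - suspend hoisting")  # to control room
        if time.time() - last_reset >= reset_period_s:
            curr = [tracker.init(frame, r) for r in sub_regions]      # periodic reset (step two)
            last_reset = time.time()
        prev = curr
```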
Preferably, the number of tracking sub-regions defined in the second step is 3.
Preferably, the method for selecting the tracking region of interest in the second step is as follows:
1) acquiring the t-th frame image of the video image sequence and representing the tracking region of interest by a rectangular frame T_Region, described by four elements: the top-left corner abscissa T_Region.x, the top-left corner ordinate T_Region.y, the frame width T_Region.width and the frame height T_Region.height;
2) the target area of the tracking rectangle T_Region contains the frame portion of the container truck; the target area is selected as follows:
the top-left corner abscissa T_Region.x = 0,
the top-left corner ordinate T_Region.y is a set fraction of the image height, determined by a parameter M,
the frame width T_Region.width = image.width,
the frame height T_Region.height is a set fraction of the image height, determined by a parameter N,
where image.width is the width of the t-th frame image and image.height is its height;
3) the centre point of tracking sub-region box_1 is (x1, y1), that of box_2 is (x2, y2) and that of box_3 is (x3, y3); the width and height of each tracking sub-region are set to a fixed fraction of the video image width (or height), determined by the ratio parameter P relating the sub-region size to the video image size;
4) the topological relation of the three sub-regions is selected as follows:
x2 = 0.5*image.width,
y2 = T_Region.y + 0.5*T_Region.height,
and the centre coordinates (x1, y1) and (x3, y3) are placed relative to (x2, y2) according to the sub-region size and an overlap parameter q, where q is the number of pixels in the overlap region of the three tracking sub-regions.
Preferably, the target tracking method based on fast Fourier transform online learning in step three is:
(1) initializing a tracking target area;
(2) according to the target tracking result of the t-th frame, constructing the context prior probability model P(c(z)|o) of the t-th-frame tracking target;
(3) constructing the confidence map c(z) of the t-th-frame tracking target according to the t-th-frame target tracking result;
(4) constructing the spatial context model of the t-th-frame tracking target;
(5) according to the target tracking result of the t-th frame, processing the video image data of the (t+1)-th frame for target tracking and obtaining the position coordinates of the tracking target in the current frame, where t = 1, 2, 3, ..., M.
Preferably, the method for initializing the tracking target area includes:
(1) acquiring the t-th frame image of the video image sequence and initializing the position of the tracking target area;
(2) determining the tracking-target context-related region Context_Region according to the initialization result;
(3) defining the Hanning window matrix M_hmwindow;
(4) initializing the scale factor σ_t and the scale transformation parameter s_t.
Preferably, the method for obtaining the position coordinates of the target in the current frame includes:
(1) constructing the tracking-target context prior probability model P(c(z)|o) from the (t+1)-th frame image;
(2) constructing the spatio-temporal context model of the (t+1)-th-frame tracking target;
(3) constructing the confidence map c_{t+1}(z) of the (t+1)-th-frame tracking target;
(4) calculating the position coordinates of the (t+1)-th-frame tracking target;
(5) updating the scale factor σ_t;
(6) updating the spatial context model of the (t+1)-th-frame tracking target.
Preferably, the tracking-target context prior probability model P(c(z)|o) is calculated as follows:
where I(z) denotes the pixel grey value of the tracking target area T_Region after mean removal and multiplication by the Hanning window matrix, z denotes the pixel coordinate within T_Region, and ⊙ denotes element-wise (matrix) multiplication,
I(z) = I(z) - mean(I(z))
where x* is the coordinate of the tracking target centre point, mean(·) denotes the image mean, and a is a normalization parameter.
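The formula itself is not reproduced in the translated text; assuming the standard spatio-temporal context (STC) formulation that the surrounding definitions match (an assumed reconstruction, not a verbatim quotation), it would read:

P(c(z) \mid o) = I(z)\, w_{\sigma}(z - x^{*}), \qquad w_{\sigma}(z) = a\, e^{-|z|^{2}/\sigma^{2}},

with the normalization parameter a chosen so that the weight function sums to one.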
Preferably, the confidence map c(z) of the target region is calculated as follows,
where b is a normalization parameter.
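The omitted expression, assuming the standard STC confidence map (α and β being the scale and shape parameters given in the embodiment as α = 2.25, β = 1), would be:

c(z) = b\, e^{-\left|\frac{z - x^{*}}{\alpha}\right|^{\beta}}.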
Preferably, the spatial context model of the t-th-frame tracking target is calculated from the confidence map c(z) and the context prior P(c(z)|o) via the fast Fourier transform and its inverse, as sketched below.
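In the standard STC formulation (an assumed reconstruction of the omitted formula, with F and F^{-1} the fast Fourier transform and its inverse), the spatial context model would be:

h^{sc}(z) = \mathcal{F}^{-1}\!\left(\frac{\mathcal{F}\big(c(z)\big)}{\mathcal{F}\big(I(z)\, w_{\sigma}(z - x^{*})\big)}\right).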
preferably, the t +1 frame tracks the spatiotemporal context model of the targetThe calculation method comprises the following steps:
where ρ is a learning factor.
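Assuming the standard STC learning-rate update (not a verbatim quotation of the omitted formula):

H^{stc}_{t+1}(z) = (1 - \rho)\, H^{stc}_{t}(z) + \rho\, h^{sc}_{t}(z).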
Preferably, the confidence map c_{t+1}(z) of the (t+1)-th-frame tracking target is calculated from the spatio-temporal context model and the (t+1)-th-frame context prior, as sketched below.
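In the standard STC formulation this step would read (an assumed reconstruction, with ⊙ the element-wise product mentioned in the embodiment):

c_{t+1}(z) = \mathcal{F}^{-1}\!\Big(\mathcal{F}\big(H^{stc}_{t+1}(z)\big) \odot \mathcal{F}\big(I_{t+1}(z)\, w_{\sigma_{t}}(z - x^{*}_{t})\big)\Big).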
preferably, the t +1 frame tracks the position point coordinates of the targetThe calculation method comprises the following steps:
Preferably, the scale factor σ_t is updated as follows:
where c_t(·) is the confidence map of the t-th-frame tracking target; s'_t is the target scale estimated from two consecutive tracking frames, represented by the ratio of the target confidence values of the t-th and (t-1)-th frames; the averaged scale is the mean of the target scales estimated from the most recent n tracking frames; s_{t+1} is the target scale estimated for the new frame; and λ > 0 is a set constant filtering parameter.
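Assuming the standard STC scale-update scheme, which matches the description of s'_t, the averaged scale, s_{t+1} and λ above (an assumed reconstruction):

s'_{t} = \sqrt{\frac{c_{t}(x^{*}_{t})}{c_{t-1}(x^{*}_{t-1})}}, \qquad
\bar{s}_{t} = \frac{1}{n}\sum_{i=1}^{n} s'_{t-i}, \qquad
s_{t+1} = (1 - \lambda)\, s_{t} + \lambda\, \bar{s}_{t}, \qquad
\sigma_{t+1} = s_{t}\, \sigma_{t}.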
Preferably, the spatial context model of the (t+1)-th-frame tracking target is updated by recomputing it from the (t+1)-th-frame confidence map and context prior, as sketched below.
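Under the same assumed STC formulation, this update is the frame-t computation applied to the new frame:

h^{sc}_{t+1}(z) = \mathcal{F}^{-1}\!\left(\frac{\mathcal{F}\big(c_{t+1}(z)\big)}{\mathcal{F}\big(I_{t+1}(z)\, w_{\sigma_{t+1}}(z - x^{*}_{t+1})\big)}\right).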
the invention also discloses a container truck anti-hoisting system based on machine vision tracking, which comprises a tyre crane, a camera, a container truck, a container, a video alarm and a central control room control device, and is characterized in that: the camera is mounted on the bottom support of the tire crane, the height of the camera is flush with the frame of the container truck, and the view field of the camera is perpendicular to the running direction of the container truck; the camera inputs the video images obtained by tracking to the video alarm machine, the video alarm machine processes the obtained video images by adopting a tracking algorithm based on fast Fourier change online learning, and when the container truck is judged to be lifted, an alarm signal is sent to the central control room control equipment.
The invention has the following beneficial effects:
(1) In the container truck autonomous tracking and anti-hoisting early-warning method based on machine vision tracking disclosed by the invention, a tracking algorithm based on fast Fourier transform online learning monitors, acquires and processes video images of the storage yard operation area in real time, achieving a major step from "human prevention" to "technical prevention" and effectively avoiding lifting accidents of the container truck caused by human misoperation;
(2) The method adopts a tracking algorithm based on fast Fourier transform online learning to divide the tracking region of interest in the storage yard into several sub-regions and compares their tracking results, so comprehensive monitoring of the storage yard operation area without blind spots can be achieved while the accuracy of the alarm result is ensured.
Drawings
FIG. 1 is a schematic view of the container truck anti-hoisting system based on machine vision tracking according to the present invention;
wherein, (a) is a front view of the system, (b) is a full-scene schematic diagram of the system, and (c) is a layout schematic diagram of system equipment;
1-a first camera, 2-a second camera, 3-a container truck, 4-a first video alarm, 5-a second video alarm, 6-a central control room, 7-a control device, 8-a tyre crane, 9-a tyre crane bottom bracket, 10-a container truck frame, 11-a container, 9-1 is the left end of the tyre crane bottom bracket, and 9-2 is the right end of the tyre crane bottom bracket;
FIG. 2 is a schematic diagram of the container truck anti-hoisting method based on machine vision tracking according to the present invention;
FIG. 3 is a schematic diagram of the machine vision tracking region and multi-sub-region video tracking in the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be described in detail with reference to the accompanying drawings and the detailed description.
As shown in fig. 1(a), 1(b), 1(c) and 2, the container truck autonomous tracking and anti-lifting early warning method based on machine vision tracking includes the following steps:
Step one: the first camera 1 and the second camera 2 capture video images at the current time t, obtaining video images of the field of view corresponding to the container truck 3, where image.width is the width of the video image and image.height is its height.
Step two: as shown in fig. 3, the lower half of the image in the field of view is selected as the tracking region of interest, and tracking sub-regions box_1, box_2 and box_3 are defined within it. The three sub-regions are all square and their positions satisfy the set topological relation. The tracking target areas are then initialized. The specific method is as follows:
(1) acquiring the t-th frame image of the video image sequence and representing the tracking region of interest by a rectangular frame T_Region, described by four elements: the top-left corner abscissa T_Region.x, the top-left corner ordinate T_Region.y, the width T_Region.width and the height T_Region.height. The rectangle T_Region should contain the target area covering the container truck frame 10.
(2) The target area of the tracking rectangle T_Region should contain part of the container truck frame 10, and is selected as follows:
the top-left corner abscissa T_Region.x = 0,
the top-left corner ordinate T_Region.y is a set fraction of the image height determined by the parameter M; in this embodiment M = 2,
the width T_Region.width = image.width,
the height T_Region.height is a set fraction of the image height determined by the parameter N; in this embodiment N = 2, so the region covers the lower half of the image.
(3) The centre point of tracking sub-region box_1 is (x1, y1), that of box_2 is (x2, y2) and that of box_3 is (x3, y3); the width and height of each sub-region are both set to a fixed fraction of the video image size, determined by the ratio parameter P. In this embodiment P = 10.
(4) The topological relation of the three sub-regions is selected as follows:
x2 = 0.5*image.width,
y2 = T_Region.y + 0.5*T_Region.height,
and the centres (x1, y1) and (x3, y3) are placed relative to (x2, y2) according to the sub-region size and the overlap parameter q, where q is the number of pixels in the overlap region of the three tracking sub-regions. In this embodiment q = 10.
Step three: for each sub-region from step two, the position of its target bounding box is estimated using the target tracking method based on fast Fourier transform online learning. The method is illustrated for box_1; box_2 and box_3 are tracked in the same way. The specific method is as follows:
(1) acquiring the t-th frame image of the video image sequence and representing sub-region box_1 by a rectangular frame Target_Region, described by four elements: the top-left corner abscissa Target_Region.x, the top-left corner ordinate Target_Region.y, the width Target_Region.width and the height Target_Region.height.
The centre point coordinate of the target area is centerPoint; its abscissa is:
centerPoint.x=Target_Region.x+Target_Region.width*0.5
the ordinate is:
centerPoint.y=Target_Region.y+Target_Region.height*0.5。
(2) Determining the tracking-target context-related region Context_Region from the rectangle Target_Region. In this embodiment, the width and height of Context_Region are each twice those of Target_Region, and its centre coincides with the centre of Target_Region. That is, the width is:
Context_Region.width=Target_Region.width*2
the height is:
Context_Region.height=Target_Region.height*2
the abscissa of the upper left corner point is:
Context_Region.x=centerPoint.x-Context_Region.width*0.5
the vertical coordinate of the upper left corner point is as follows:
Context_Region.y=centerPoint.y-Context_Region.height*0.5
(3) A Hanning window matrix is defined to reduce the frequency effect of the image edges on the Fourier transform. The Hanning window is denoted M_hmwindow; its width and height match those of Context_Region, i.e.
hmwindow.width=Context_Region.width
hmwindow.height=Context_Region.height
The definition of each position in the hanning window matrix is as follows:
hmwindow(i,j)=
(0.54-0.46*cos(2*π*i/hmwindow.height))*(0.54-0.46*cos(2*π*j/hmwindow.width))
where i = 0, 1, 2, ..., hmwindow.height-1, j = 0, 1, 2, ..., hmwindow.width-1, and π is taken as 3.14.
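For reference, the window matrix defined above can be generated as an outer product of row and column tapers; this sketch follows the stated formula literally (0.54/0.46 coefficients, division by the full height and width).

```python
import numpy as np

# Window matrix exactly as defined above, sized to Context_Region.
def make_window(height, width):
    rows = 0.54 - 0.46 * np.cos(2 * np.pi * np.arange(height) / height)
    cols = 0.54 - 0.46 * np.cos(2 * np.pi * np.arange(width) / width)
    return np.outer(rows, cols)   # hmwindow[i, j] = rows[i] * cols[j]
```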
(4) Initializing the scale factor σ_t = (T_Region.width + T_Region.height) * 0.5 and the scale transformation parameter s_t = 1.
(5) Obtaining the tracking-target context prior probability model P(c(z)|o):
where I_t(z) is obtained from the pixel grey values of the target region T_Region by removing the mean and multiplying by the Hanning window matrix:
I_t(z) = I_t(z) - mean(I_t(z))
where z denotes the pixel coordinates within Target_Region, ⊙ denotes element-wise multiplication, x* is the tracking-target centre point coordinate, i.e. centerPoint, and a is a normalization parameter.
(6) Obtaining the confidence map c(z) of the tracking target,
where b is a normalization parameter.
In this embodiment, α = 2.25 and β = 1.
(7) Establishing the tracking-target spatial context model,
where F(·) denotes the fast Fourier transform and F^{-1}(·) denotes the inverse fast Fourier transform.
(8) According to the target tracking result of the t-th frame, the video image data of the (t+1)-th frame is acquired and processed for target tracking, obtaining the position coordinates of the tracking target in the current frame, where t = 1, 2, 3, ..., M and M is the total number of tracked video frames. The specific method is as follows:
1) obtaining the tracking-target context prior probability model P(c(z)|o) for the (t+1)-th frame;
the specific method is the same as item (5) of step three.
2) Establishing the spatio-temporal context model of the (t+1)-th-frame tracking target,
where ρ is a learning factor; in this embodiment ρ = 0.075.
3) Calculating the confidence map c_{t+1}(z) of the (t+1)-th-frame tracking target,
where · denotes the element-wise (dot) product operation.
4) Calculating the position coordinates of the tracking target in the (t+1)-th frame image.
5) Updating the scale factor σ_t as follows:
where c_t(·) is the confidence map of the t-th-frame tracking target; s'_t is the target scale estimated from two consecutive tracking frames, represented by the ratio of the target confidence values of the t-th and (t-1)-th frames; the averaged scale is the mean of the target scales estimated from the most recent n tracking frames; and s_{t+1} is the target scale estimated for the new frame. To prevent over-adaptation and reduce noise introduced by estimation errors, the target scale in a new frame is estimated with a filtering scheme, where λ > 0 is a set constant filtering parameter.
6) Updating the spatial context model of the (t+1)-th-frame tracking target.
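A compact sketch of one tracking step covering items (5) to (8) is given below. It is written against the generic spatio-temporal context (STC) recursion that the description appears to follow, not the patent's own (omitted) equations; all function and variable names are illustrative, and the normalization constants a and b are dropped because they do not change the position of the confidence maximum.

```python
import numpy as np

# Hypothetical single STC step: learn the spatial context from frame t, update the
# spatio-temporal model, and localize the target in frame t+1. patch_t and patch_t1
# are grey-level context patches of identical shape extracted around the target.

def stc_step(patch_t, patch_t1, prev_Hstc, centre_t, sigma,
             alpha=2.25, beta=1.0, rho=0.075):
    def prior(patch, centre):
        h, w = patch.shape
        win = np.outer(0.54 - 0.46 * np.cos(2 * np.pi * np.arange(h) / h),
                       0.54 - 0.46 * np.cos(2 * np.pi * np.arange(w) / w))
        I = (patch - patch.mean()) * win                     # de-meaned, windowed intensities
        ys, xs = np.mgrid[0:h, 0:w]
        d2 = (ys - centre[0]) ** 2 + (xs - centre[1]) ** 2
        return I * np.exp(-d2 / sigma ** 2), d2              # context prior, squared distances

    pr_t, d2 = prior(patch_t, centre_t)
    conf_t = np.exp(-(np.sqrt(d2) / alpha) ** beta)          # confidence map c_t(z)

    eps = 1e-6                                               # avoid division by zero in frequency domain
    h_sc = np.real(np.fft.ifft2(np.fft.fft2(conf_t) / (np.fft.fft2(pr_t) + eps)))
    Hstc = (1.0 - rho) * prev_Hstc + rho * h_sc              # spatio-temporal model update

    pr_t1, _ = prior(patch_t1, centre_t)                     # prior from the new frame
    conf_t1 = np.real(np.fft.ifft2(np.fft.fft2(Hstc) * np.fft.fft2(pr_t1)))
    new_centre = np.unravel_index(np.argmax(conf_t1), conf_t1.shape)
    return new_centre, Hstc
```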
Step four: the tracking results of the three sub-regions are evaluated; if two or more of them have moved vertically upward by more than N pixels, the container truck frame 10 is judged to be lifted; otherwise the frame is judged not to be lifted. Every T seconds each sub-region is automatically reset according to step two. In this embodiment N = 10 and T = 5.
Step five: if the container truck frame 10 is judged to be lifted, an early-warning signal is sent to the central control room control device 7 to remind the crane driver to suspend hoisting and to communicate the on-site situation with the container truck driver; otherwise, the crane driver completes normal hoisting according to the procedure.
When the tracking sub-regions are defined within the tracking region of interest, their number is an odd number not less than 3, which makes the tracking results easy to adjudicate and effectively prevents the container truck from being lifted. In this embodiment the tracking region of interest is divided into 3 sub-regions, which gives the highest computational efficiency.
In the container truck autonomous tracking and anti-hoisting early-warning system based on machine vision tracking, the first camera 1 and the second camera 2 track the travel area of the container truck 3 in real time. When the container truck 3 is found to move upward, indicating that it has been lifted together with the container 11, the first video alarm 4 and the second video alarm 5 emit alarm signals and notify the crane driver, via the control device 7 of the central control room 6, to stop hoisting in time and prevent an accident. The first camera 1 and the second camera 2 are mounted at the left end 9-1 and the right end 9-2 of the support 9 at the bottom of the tyre crane 8; their height is flush with the container truck frame 10, and their fields of view are perpendicular to the travel direction of the container truck 3. The first camera 1 and the second camera 2 feed the captured video images to the first video alarm 4 and the second video alarm 5 respectively. The two video alarms process the video images with the tracking algorithm based on fast Fourier transform online learning and, when the container truck is judged to be lifted, send an alarm signal to the central control room control device 7 to remind the crane driver to suspend the hoisting action and communicate the on-site situation with the container truck driver. Otherwise, the crane driver completes normal hoisting according to the procedure.
The container truck autonomous tracking and anti-hoisting early-warning method disclosed by the invention adopts a tracking algorithm based on fast Fourier transform online learning, monitors the storage yard operation area comprehensively in real time, and performs accurate information processing and judgment, thereby achieving a major step from "human prevention" to "technical prevention" and effectively avoiding accidents in which the container truck is lifted due to human misoperation.
It is to be understood that the described embodiments are merely a few embodiments of the invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the invention without making creative efforts, shall fall within the scope of the claimed invention.

Claims (15)

1. A container truck anti-hoisting method based on machine vision tracking comprises the following steps:
step one, a camera captures a video image at the current time t, obtaining a video image of the area corresponding to the container truck, and the width and height of the video image are determined;
step two, the lower half of the image in the camera field of view is selected as the tracking region of interest, and a number of tracking sub-regions is defined within it; the number of tracking sub-regions is an odd number not less than 3, each tracking sub-region is rectangular, and the positions of the sub-regions satisfy a set topological relation;
step three, the camera captures the video image at the next time t+1, and the video alarm unit applies a tracking algorithm based on fast Fourier transform online learning to each sub-region from step two to estimate the position of each sub-region's target bounding box;
step four, judging whether more than half of the sub-regions have moved vertically upward by more than N pixels according to their tracking results; if so, the frame of the container truck is judged to be lifted; if not, the frame is judged not to be lifted, the sub-regions are automatically reset according to step two at intervals of T seconds, and steps three and four are repeated;
step five, if the container truck frame is judged to be lifted, the video alarm unit sends an early-warning signal to the central control room control device to remind the crane driver to suspend hoisting; otherwise, the crane driver completes normal hoisting according to the procedure;
wherein the camera field of view is perpendicular to the travel direction of the container truck and the camera feeds the captured video images to the video alarm unit; t = 1, 2, 3, ..., M, where M is the total number of tracked video frames, N is a pixel-count threshold, and T is the set time interval for video image acquisition.
2. The method of claim 1, wherein the number of tracking sub-regions defined in step two is 3.
3. The container truck anti-hoisting method of claim 2, wherein the tracking region of interest in step two is selected as follows:
1) acquiring the t-th frame image of the video image sequence and representing the tracking region of interest by a rectangular frame T_Region, described by four elements: the top-left corner abscissa T_Region.x, the top-left corner ordinate T_Region.y, the frame width T_Region.width and the frame height T_Region.height;
2) the target area of the tracking rectangle T_Region contains the frame portion of the container truck; the target area is selected as follows:
the top-left corner abscissa T_Region.x = 0,
the top-left corner ordinate T_Region.y is a set fraction of the image height, determined by a parameter M,
the frame width T_Region.width = image.width,
the frame height T_Region.height is a set fraction of the image height, determined by a parameter N,
where image.width is the width of the t-th frame image and image.height is its height;
3) the centre point of tracking sub-region box_1 is (x1, y1), that of box_2 is (x2, y2) and that of box_3 is (x3, y3); the width and height of each tracking sub-region are set to a fixed fraction of the video image width (or height), determined by the ratio parameter P relating the sub-region size to the video image size;
4) the topological relation of the three sub-regions is selected as follows:
x2 = 0.5*image.width,
y2 = T_Region.y + 0.5*T_Region.height,
and the centre coordinates (x1, y1) and (x3, y3) are placed relative to (x2, y2) according to the sub-region size and an overlap parameter q, where q is the number of pixels in the overlap region of the three tracking sub-regions.
4. The container truck anti-hoisting method according to claim 2 or 3, wherein the target tracking method based on fast Fourier transform online learning in step three is:
(1) initializing a tracking target area;
(2) according to the target tracking result of the t-th frame, constructing the context prior probability model P(c(z)|o) of the t-th-frame tracking target;
(3) constructing the confidence map c(z) of the t-th-frame tracking target according to the t-th-frame target tracking result;
(4) constructing the spatial context model of the t-th-frame tracking target;
(5) according to the target tracking result of the t-th frame, processing the video image data of the (t+1)-th frame for target tracking and obtaining the position coordinates of the tracking target in the current frame, where t = 1, 2, 3, ..., M.
5. The container truck anti-hoisting method of claim 4, wherein the method of initializing the tracking target area is:
(1) acquiring the t-th frame image of the video image sequence and initializing the position of the tracking target area;
(2) determining the tracking-target context-related region Context_Region according to the initialization result;
(3) defining the Hanning window matrix M_hmwindow;
(4) initializing the scale factor σ_t and the scale transformation parameter s_t.
6. The container truck anti-hoisting method of claim 4, wherein the method of obtaining the position coordinates of the target in the current frame is:
(1) constructing the tracking-target context prior probability model P(c(z)|o) from the (t+1)-th frame image;
(2) constructing the spatio-temporal context model of the (t+1)-th-frame tracking target;
(3) constructing the confidence map c_{t+1}(z) of the (t+1)-th-frame tracking target;
(4) calculating the position coordinates of the (t+1)-th-frame tracking target;
(5) updating the scale factor σ_t;
(6) updating the spatial context model of the (t+1)-th-frame tracking target.
7. The method of claim 4, wherein the tracking-target context prior probability model P(c(z)|o) is calculated as follows:
where I(z) denotes the pixel grey value of the tracking target area T_Region after mean removal and multiplication by the Hanning window matrix, z denotes the pixel coordinate within T_Region, and ⊙ denotes element-wise (matrix) multiplication,
I(z) = I(z) - mean(I(z))
where x* is the coordinate of the tracking target centre point, mean(·) denotes the image mean, and a is a normalization parameter.
8. The method of claim 4, wherein the confidence map c(z) of the target region is calculated as follows,
where b is a normalization parameter.
9. The container truck anti-hoisting method of claim 4, wherein the spatial context model is calculated from the confidence map c(z) and the context prior P(c(z)|o) via the fast Fourier transform and its inverse.
10. The container truck anti-hoisting method of claim 6, wherein the spatio-temporal context model of the (t+1)-th-frame tracking target is obtained by updating the previous spatio-temporal context model with the current spatial context model,
where ρ is a learning factor.
11. The container truck anti-hoisting method of claim 6, wherein the confidence map c_{t+1}(z) of the (t+1)-th-frame tracking target is calculated from the spatio-temporal context model and the (t+1)-th-frame context prior.
12. The container truck anti-hoisting method of claim 6, wherein the position coordinates of the (t+1)-th-frame tracking target are obtained from the confidence map c_{t+1}(z).
13. The container truck anti-hoisting method of claim 6, wherein the scale factor σ_t is updated as follows:
where c_t(·) is the confidence map of the t-th-frame tracking target; s'_t is the target scale estimated from two consecutive tracking frames, represented by the ratio of the target confidence values of the t-th and (t-1)-th frames; the averaged scale is the mean of the target scales estimated from the most recent n tracking frames; s_{t+1} is the target scale estimated for the new frame; and λ > 0 is a set constant filtering parameter.
14. The container truck anti-hoisting method of claim 6, wherein the spatial context model of the (t+1)-th-frame tracking target is updated by recomputing it from the (t+1)-th-frame confidence map and context prior.
15. A container truck anti-hoisting system based on machine vision tracking, comprising a tyre crane, a camera, a container truck, a container, a video alarm unit and a central control room control device, characterized in that: the camera is mounted on the bottom support of the tyre crane, its height is flush with the frame of the container truck, and its field of view is perpendicular to the travel direction of the container truck; the camera feeds the captured video images to the video alarm unit, the video alarm unit processes them with a tracking algorithm based on fast Fourier transform online learning, and when the container truck is judged to be lifted, an alarm signal is sent to the central control room control device.
CN201710104528.7A 2017-02-24 2017-02-24 Container truck anti-hoisting method and system based on machine vision tracking Active CN106986272B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710104528.7A CN106986272B (en) 2017-02-24 2017-02-24 Container truck anti-hoisting method and system based on machine vision tracking

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710104528.7A CN106986272B (en) 2017-02-24 2017-02-24 Container truck anti-hoisting method and system based on machine vision tracking

Publications (2)

Publication Number Publication Date
CN106986272A CN106986272A (en) 2017-07-28
CN106986272B true CN106986272B (en) 2018-05-22

Family

ID=59412488

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710104528.7A Active CN106986272B (en) 2017-02-24 2017-02-24 Container truck anti-hoisting method and system based on machine vision tracking

Country Status (1)

Country Link
CN (1) CN106986272B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107527347B (en) * 2017-10-11 2020-01-14 南京大学 Port container lifting safety monitoring method based on computer vision image processing
CN108711174B (en) * 2018-04-13 2021-12-07 北京航天自动控制研究所 Approximate parallel vision positioning system for mechanical arm
CN110874544B (en) * 2018-08-29 2023-11-21 宝钢工程技术集团有限公司 Metallurgical driving safety monitoring and identifying method
CN109335964B (en) * 2018-09-21 2020-05-12 北京航天自动控制研究所 Container twist lock detection system and detection method
CN109523553B (en) * 2018-11-13 2022-10-18 华际科工(北京)卫星通信科技有限公司 Container abnormal movement monitoring method based on LSD linear detection segmentation algorithm
CN109534177A (en) * 2019-01-10 2019-03-29 上海海事大学 A kind of anti-hoisting device of truck based on machine vision and truck are prevented slinging method
CN109775569B (en) * 2019-03-29 2020-06-19 三一海洋重工有限公司 Method and device for separating and determining containers
CN109949358A (en) * 2019-03-29 2019-06-28 三一海洋重工有限公司 A kind of detection method and detection device of container truck lifting state
CN111027538A (en) * 2019-08-23 2020-04-17 上海撬动网络科技有限公司 Container detection method based on instance segmentation model
CN111539344B (en) * 2020-04-27 2024-09-06 北京国泰星云科技有限公司 Video stream and artificial intelligence based integrated card anti-lifting control system and method
CN111832415B (en) * 2020-06-15 2023-12-26 航天智造(上海)科技有限责任公司 Truck safety intelligent protection system for container hoisting operation
CN112183264B (en) * 2020-09-17 2023-04-21 国网天津静海供电有限公司 Method for judging someone remains under crane boom based on spatial relationship learning
CN112784725B (en) * 2021-01-15 2024-06-07 北京航天自动控制研究所 Pedestrian anti-collision early warning method, device, storage medium and stacker
CN114465148B (en) * 2022-03-29 2024-09-27 山东通源电气有限公司 Method for preventing container type transformer substation from being mistakenly hung

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08119574A (en) * 1994-10-25 1996-05-14 Mitsubishi Heavy Ind Ltd Swing detecting device for hoisted cargo
CN102456129B (en) * 2010-10-26 2017-11-14 同方威视技术股份有限公司 A kind of security inspection image correction method and system
CN104754302B (en) * 2015-03-20 2017-08-08 安徽大学 A kind of target detection tracking method based on rifle ball linked system
CN106210616A (en) * 2015-05-04 2016-12-07 杭州海康威视数字技术股份有限公司 The acquisition method of container representation information, device and system
CN106412501B (en) * 2016-09-20 2019-07-23 华中科技大学 A kind of the construction safety behavior intelligent monitor system and its monitoring method of video
CN106254839A (en) * 2016-09-30 2016-12-21 湖南中铁五新重工有限公司 The anti-method and device of slinging of container truck

Also Published As

Publication number Publication date
CN106986272A (en) 2017-07-28

Similar Documents

Publication Publication Date Title
CN106986272B (en) Container truck anti-hoisting method and system based on machine vision tracking
CN102629384B (en) Method for detecting abnormal behavior during video monitoring
CN113370977A (en) Intelligent vehicle forward collision early warning method and system based on vision
CN106841196A (en) Use the wet road surface condition detection of the view-based access control model of tire footprint
CN102496000B (en) Urban traffic accident detection method
CN111626275B (en) Abnormal parking detection method based on intelligent video analysis
CN107330922A (en) Video moving object detection method of taking photo by plane based on movable information and provincial characteristics
CN109506628A (en) Object distance measuring method under a kind of truck environment based on deep learning
CN111002990A (en) Lane departure early warning method and system based on dynamic departure threshold
WO2014073571A1 (en) Image processing device for self-propelled industrial machine and image processing method for self-propelled industrial machine
JP4156084B2 (en) Moving object tracking device
CN111738336A (en) Image detection method based on multi-scale feature fusion
CN111160321A (en) Storage battery car goes up terraced detection and early warning system
CN117474321A (en) BIM model-based construction site risk intelligent identification method and system
CN117787690A (en) Hoisting operation safety risk identification method and identification device
CN106951820A (en) Passenger flow statistical method based on annular template and ellipse fitting
CN109747644A (en) Vehicle tracking anti-collision early warning method, device, controller, system and vehicle
CN111507126A (en) Alarming method and device of driving assistance system and electronic equipment
CN110407045B (en) Method for displaying personnel distribution information in elevator and intelligent elevator system
CN116986487A (en) Hoisting control method and device, electronic equipment and storage medium
CN114724093A (en) Vehicle illegal parking identification method and related equipment
CN111105135A (en) Intelligent city sweeper operation monitoring method and device
CN115690158A (en) Off-road-closing security early warning method, device, system and storage medium
CN113963328A (en) Road traffic state detection method and device based on panoramic monitoring video analysis
CN116311035B (en) Man-car safety early warning system and method based on machine vision

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant