CN113382199A - Platform monitoring method and device and electronic equipment


Info

Publication number: CN113382199A
Application number: CN202010162975.XA
Authority: CN (China)
Prior art keywords: vehicle, platform, loading, unloading, parking space
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventors: 王世斌, 杜洪超, 徐忠杰
Current assignee: Hangzhou Hikvision Digital Technology Co Ltd (the listed assignee may be inaccurate)
Original assignee: Hangzhou Hikvision Digital Technology Co Ltd
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202010162975.XA


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00: Administration; Management
    • G06Q10/08: Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
    • G06Q10/083: Shipping


Abstract

The application provides a platform monitoring method, a platform monitoring device and an electronic device. The method includes the following steps: acquiring a platform monitoring video of a loading and unloading place; obtaining the cargo handling rate of the vehicle in each platform parking space based on the platform monitoring video; determining the cargo handling completion time of the vehicle in each platform parking space based on the cargo handling rate; and controlling vehicle operation in the loading and unloading place based on the cargo handling completion time of the vehicle in each platform parking space and the platform parking space associated with each vehicle. The method can reduce congestion in the loading and unloading place and improve the handling efficiency and utilization rate of each platform.

Description

Platform monitoring method and device and electronic equipment
Technical Field
The present disclosure relates to the field of video surveillance technologies, and in particular, to a method and an apparatus for monitoring a platform, and an electronic device.
Background
A modern industrial platform (hereinafter referred to as a platform) is a loading and unloading dock built at a loading and unloading place (such as a logistics park); with the help of the platform, a forklift can safely and quickly enter and exit a transport vehicle to load and unload goods.
With the rapid development of logistics parks, the quantity of goods transported every day has increased rapidly. In a traditional logistics park, trucks generally enter a platform at random for loading and unloading, which can cause congestion and low handling efficiency during peak periods.
Disclosure of Invention
In view of the above, the present application provides a platform monitoring method, a platform monitoring apparatus, and an electronic device.
Specifically, the method is realized through the following technical scheme:
according to a first aspect of an embodiment of the present application, there is provided a dock monitoring method, including:
acquiring a platform monitoring video of a loading and unloading place;
acquiring the cargo handling rate of the vehicles in the parking spaces of each platform based on the platform monitoring video;
determining the cargo loading and unloading completion time of the vehicles in the platform parking spaces based on the cargo loading and unloading rate of the vehicles in the platform parking spaces;
and controlling the vehicle operation in the loading and unloading place based on the cargo loading and unloading completion time of the vehicle in each platform parking space and the platform parking space related to the vehicle.
According to a second aspect of the embodiments of the present application, there is provided a platform monitoring apparatus, comprising:
a first acquisition unit, configured to acquire a platform monitoring video of a loading and unloading place;
the second acquisition unit is used for acquiring the cargo loading and unloading rate of the vehicles in the parking spaces of each platform based on the platform monitoring video;
a determination unit for determining the cargo handling completion time of the vehicle in each platform parking space based on the cargo handling rate of the vehicle in each platform parking space;
and the control unit is used for controlling the vehicle operation in the loading and unloading place based on the cargo loading and unloading completion time of the vehicle in each platform parking space and the platform parking space related to the vehicle.
According to a third aspect of embodiments of the present application, there is provided an electronic apparatus including:
a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor; the processor is configured to execute machine-executable instructions to implement the above-described method.
According to the platform monitoring method, a platform monitoring video of the loading and unloading place is acquired, and the cargo handling rate of the vehicle in each platform parking space is obtained based on the video. The cargo handling completion time of the vehicle in each platform parking space is then determined from that rate, and vehicle operation in the loading and unloading place is controlled based on the completion times and the platform parking spaces associated with the vehicles. In this way congestion in the loading and unloading place is reduced, and the handling efficiency and utilization rate of each platform are improved.
Drawings
Fig. 1 is a schematic flow chart illustrating a method for monitoring a platform according to an exemplary embodiment of the present application;
FIG. 2 is a schematic flow chart illustrating a method for obtaining the cargo handling rate of the vehicle in each dock bay according to an exemplary embodiment of the present application;
FIG. 3 is a schematic diagram of a scenario illustrated in an exemplary embodiment of the present application;
FIG. 4 is a schematic illustration of a dock camera installation shown in an exemplary embodiment of the present application;
FIG. 5 is a schematic structural diagram of a dock camera according to an exemplary embodiment of the present application;
FIG. 6 is a schematic diagram illustrating a display device at an entrance according to an exemplary embodiment of the present application;
FIG. 7 is a flow chart illustrating a method for monitoring a dock according to an exemplary embodiment of the present application;
FIG. 8 is a schematic view of a complete inbound trigger flow shown in accordance with an exemplary embodiment of the present application;
FIG. 9 is a schematic structural diagram illustrating a dock monitoring apparatus according to an exemplary embodiment of the present application;
fig. 10 is a schematic diagram of a hardware structure of the apparatus shown in fig. 9 according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all embodiments consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as recited in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
In order to make the technical solutions provided in the embodiments of the present application better understood and make the above objects, features and advantages of the embodiments of the present application more comprehensible, the technical solutions in the embodiments of the present application are described in further detail below with reference to the accompanying drawings.
Referring to fig. 1, a flow chart of a dock monitoring method provided in an embodiment of the present application is schematically illustrated, where the dock monitoring method may be applied to a loading and unloading place, such as a logistics park, and as shown in fig. 1, the dock monitoring method may include the following steps:
and S100, acquiring a platform monitoring video of a loading and unloading place.
In the embodiment of the present application, a surveillance video (referred to herein as a dock surveillance video) within a surveillance range may be acquired by a surveillance front end (referred to herein as a dock camera) deployed at a dock.
And step S110, acquiring the cargo handling rate of the vehicles in the parking spaces of each platform based on the acquired platform monitoring videos.
In the embodiment of the application, when the platform monitoring video is acquired, whether vehicles for loading and unloading goods exist in each platform parking space or not can be detected based on the acquired platform monitoring video.
For any platform parking space where a vehicle for cargo handling is present, the cargo handling rate of the vehicle in the platform parking space can be acquired.
For example, the cargo handling rate of the vehicle in the surveillance video may be identified using a deep learning algorithm.
For example, a cargo handling rate training library can be established according to different environments, deep learning network model training is performed based on training samples in the training library, and then the cargo handling rate of the vehicle in the surveillance video is identified based on the trained deep learning network model.
Step S120 is to determine the cargo handling completion time of the vehicle in each platform parking space based on the cargo handling rate of the vehicle in each platform parking space.
In this embodiment, for any dock parking space, when the cargo handling rate of the vehicle in the dock parking space is determined, the cargo handling completion time of the vehicle may be determined based on the cargo handling rate of the vehicle in the dock parking space.
For example, the cargo handling completion time of the vehicle may be determined based on the cargo handling rate of the vehicle, the cargo type (such as box type, bag type, etc., which may be identified by a deep learning algorithm), and the correspondence between the cargo handling rate of the cargo type and the cargo handling completion time (which may be preset according to an empirical value).
For example, assume that the corresponding relationship between the cargo handling rate and the cargo handling completion time for the cargo of cargo type a is shown in table 1:
TABLE 1

| Cargo handling rate | Cargo handling completion time |
| R1                  | T1                             |
| R2                  | T2                             |
| R3                  | T3                             |
When the cargo handling rate of the vehicle in the dock parking space is determined, the corresponding relationship shown in table 1 may be looked up according to the cargo handling rate of the vehicle and the cargo type to determine the cargo handling completion time of the vehicle.
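The lookup described above can be sketched as follows. The table values, band boundaries and function names here are illustrative assumptions, not values from the patent; the patent only says the correspondence is preset according to empirical values.

```python
# Hypothetical lookup of cargo handling completion time from a preset
# table keyed by cargo type and handling-rate band (cf. Table 1).
# All names and numbers are illustrative, not taken from the patent.

COMPLETION_TABLE = {
    # cargo type "A": (handling-rate lower bound, remaining minutes)
    "A": [(0.0, 60), (0.3, 40), (0.6, 20), (0.9, 5)],
}

def completion_time(cargo_type: str, rate: float) -> int:
    """Return the estimated minutes until loading/unloading completes."""
    bands = COMPLETION_TABLE[cargo_type]
    # Pick the band whose lower bound is the largest one not exceeding `rate`.
    minutes = bands[0][1]
    for lower, t in bands:
        if rate >= lower:
            minutes = t
    return minutes
```

A vehicle of cargo type "A" at 65% handling rate would thus map to the 20-minute band.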
Step S130 is to control the operation of the vehicle in the loading/unloading place based on the cargo loading/unloading completion time of the vehicle in each platform parking space and the platform parking space associated with the vehicle.
In the embodiment of the application, the operation of the vehicle in the loading and unloading place can be controlled based on the cargo loading and unloading completion time of the vehicle in each platform parking space and the platform parking space associated with the vehicle.
For example, information such as the cargo handling completion time of the vehicle in each platform parking space, together with the associated platform parking space, can be shown on a display screen in a designated area at the entrance of the loading and unloading place. This tells incoming vehicles when each platform parking space will become free, optimizes vehicle operation during peak periods, reduces congestion, and raises the utilization rate of each platform, avoiding the situation where some platforms have many vehicles waiting for loading and unloading while others sit relatively idle.
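One way such guidance could work is to recommend the platform parking space with the earliest predicted completion time, treating an empty completion time as a free space. This is a sketch under assumed data shapes; the patent does not prescribe a recommendation rule.

```python
# Illustrative sketch: recommend a platform parking space to an arriving
# vehicle from the predicted completion times. A completion time of None
# means the space has no vehicle and is free immediately.

def recommend_space(completion_times: dict) -> str:
    """completion_times maps space id -> predicted minutes left, or None if free."""
    free = [s for s, t in completion_times.items() if t is None]
    if free:
        return min(free)  # any free space; pick deterministically
    # Otherwise the space whose vehicle will finish soonest.
    return min(completion_times, key=completion_times.get)
```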
In the embodiment of the present application, the loading/unloading completion time corresponding to a dock parking space where no vehicle is loaded or unloaded may be empty.
As can be seen, in the method flow shown in fig. 1, the dock surveillance video of the loading and unloading location is acquired, and the cargo handling rate of the vehicle in each dock parking space is obtained from it. The cargo handling completion time of each vehicle is then determined from its handling rate, and vehicle operation in the loading and unloading location is controlled based on those completion times and the dock parking spaces associated with the vehicles. Congestion in the loading and unloading location can thus be reduced, and the handling efficiency and utilization rate of each dock improved.
In a possible embodiment, as shown in fig. 2, obtaining the cargo handling rate of the vehicle in each platform parking space based on the platform monitoring video in step S110 may include the following steps:
and step S111, when the vehicle is detected in the platform monitoring video, identifying the in-out state of the vehicle.
Step S112, if the vehicle enters the parking space in the entering and exiting state, extracting a corresponding monitoring video frame every preset time after the vehicle parks in the platform parking space, and acquiring the cargo handling rate of the vehicle based on the monitoring video frame.
In this embodiment, vehicle detection may be performed on the acquired platform surveillance video, and when a vehicle is detected, the entry-exit state of the vehicle is recognized.
For example, the entry and exit states of the vehicle may include entering (entering the platform parking space) or exiting (leaving the platform parking space).
For any vehicle, when it is recognized that the vehicle enters the parking space in the entering and exiting state, a corresponding surveillance video frame is extracted every preset time (which may be set according to an actual scene, for example, 30 seconds) after the vehicle is parked in the platform parking space, and the surveillance video frame is analyzed to obtain the cargo handling rate of the vehicle. For example, the surveillance video frame is input into a pre-trained deep learning network model to determine the cargo handling rate of the vehicle.
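The periodic sampling step can be sketched as a simple scheduler. The rate estimator itself (the trained deep learning model) is not specified in the patent, so only the timing logic is shown; the 30-second default mirrors the example interval above.

```python
# Sketch of the periodic frame-sampling schedule: after a vehicle parks,
# one monitoring frame is extracted every `interval` seconds and handed
# to the (not shown) handling-rate estimator.

def sample_times(park_time: float, now: float, interval: float = 30.0):
    """Timestamps at which a monitoring frame should have been extracted."""
    t = park_time + interval
    out = []
    while t <= now:
        out.append(t)
        t += interval
    return out
```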
In one example, identifying the entry-exit state of the vehicle in step S111 may include:
when the tail of a vehicle is detected in the monitoring picture, tracking the tail, and when the tail passes in sequence through a first (far) region and then a second (near) region of the monitoring picture, determining that the entry-exit state of the vehicle is entering;
when the tail of a vehicle is detected in the monitoring picture, tracking the tail, and when the tail passes in sequence through the second region and then the first region of the monitoring picture, determining that the entry-exit state of the vehicle is leaving.
In this example, considering that a vehicle generally enters and exits the platform parking space with its tail facing the monitoring front end, vehicles in the monitoring video can be detected by detecting vehicle tails in the platform monitoring video.
For example, a specific implementation of vehicle tail detection on the surveillance video is described below with reference to a specific example, and is not repeated here.
When a vehicle tail is detected in the monitoring picture, the detected tail can be tracked. When the tail passes in sequence through a preset far region of the monitoring picture (referred to as the first region) and then a preset near region (referred to as the second region), the entry-exit state of the vehicle is determined as entering; when the tail passes in sequence through the second region and then the first region, the entry-exit state is determined as leaving.
Illustratively, the side of the monitoring screen close to the monitoring front end is near, and the side far away from the monitoring front end is far.
It should be noted that, in the embodiment of the present application, when the vehicle tail does not move in the monitoring screen, it may be determined that the vehicle is in a parking state.
For example, when a vehicle is detected entering (or leaving), it may be captured as it passes through the first and/or second region, and an entry (or departure) record may be generated, including license plate information, capture time, and captured pictures.
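The entry/exit decision above amounts to a small state machine over the regions the tracked tail visits: far then near means entering, near then far means leaving. A minimal sketch, with region names chosen for illustration:

```python
# Minimal state machine for the entry-exit decision: the tracked vehicle
# tail passing the far region then the near region means "entering";
# near then far means "leaving". Region names are illustrative.

class TailTracker:
    def __init__(self):
        self.path = []  # deduplicated sequence of regions the tail visited

    def observe(self, region: str):
        """Record the region ('far' or 'near') the tail is currently in."""
        if not self.path or self.path[-1] != region:
            self.path.append(region)

    def state(self) -> str:
        if self.path[-2:] == ["far", "near"]:
            return "entering"
        if self.path[-2:] == ["near", "far"]:
            return "leaving"
        return "unknown"  # e.g. tail seen in only one region so far
```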
In a possible embodiment, after acquiring the platform monitoring video in the loading and unloading site in step S100, the method may further include:
carrying out face detection based on the acquired platform monitoring video;
when a face is detected, identifying the face;
and generating a face recognition record based on the face recognition result.
In this embodiment, in order to manage the personnel in the loading and unloading place, when the platform surveillance video of the loading and unloading place is acquired, the face detection may be performed based on the acquired platform surveillance video, and when the face is detected, the face may be identified.
For example, specific implementation manners of face detection and face recognition may refer to related descriptions in the prior art, and details of the embodiments of the present application are not described herein.
When the recognition result of the face appearing in the platform monitoring video is obtained, a face recognition record can be generated based on the face recognition result, and the face recognition record can comprise the time when the face is detected, a face snapshot and face recognition information.
In one example, the face recognition information may be determined by comparing the faces in the surveillance video with faces in a preset face library.
For example, the preset face library may store face models and related information (name, position, etc.) of workers in the loading and unloading site and face models and related information of registered drivers.
When a face is detected in the platform monitoring video, the detected face can be recognized: face structural features are extracted and a face model is constructed, and the model is compared with the face models in a preset face library. If a matching face model (referred to as the target face model) exists, the related information of the target face model is obtained from the preset face library and used as the face recognition information; if no matching face model exists, a specific mark is added to the corresponding face recognition record to indicate that the face belongs to neither a worker nor a registered driver (referred to as a stranger's face).
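The comparison step can be sketched as nearest-neighbour matching over feature vectors with a similarity threshold. The cosine-similarity metric and the 0.9 threshold are assumptions for illustration; the patent does not fix the feature representation or the matching rule.

```python
# Sketch of matching a detected face model against a preset face library.
# Feature vectors, the cosine metric and the threshold are illustrative
# assumptions; the patent leaves the representation open.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def identify(face_vec, library, threshold=0.9):
    """library maps person info -> feature vector; returns info or 'stranger'."""
    best, best_sim = None, -1.0
    for info, vec in library.items():
        sim = cosine(face_vec, vec)
        if sim > best_sim:
            best, best_sim = info, sim
    return best if best_sim >= threshold else "stranger"
```

A "stranger" result corresponds to the specific mark (and possible alarm) described in the text.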
It should be noted that, in the embodiment of the present application, when it is determined that a face of a stranger appears in a dock surveillance video, an alarm may also be triggered, for example, an alarm message is sent to a preset alarm terminal, so as to prompt an alarm handler (e.g., a security guard) to take a corresponding processing measure.
In order to enable those skilled in the art to better understand the technical solutions provided by the embodiments of the present application, the technical solutions provided by the embodiments of the present application are described below with reference to specific examples.
Referring to fig. 3, in this embodiment, the platform camera is installed above the platform, facing forward and tilted downward, with the parking space set as a defense zone. While a vehicle enters the parking space, the platform camera can capture its entry and exit (i.e. its passage into and out of the parking space), license plate information, door state (closed or open), personnel information and cargo handling conditions.
In one example, as shown in fig. 4, the platform camera is installed 3.5 m above the platform. The vertical field of view must keep the whole vehicle body in view, up to its uppermost edge (e.g. 4.5 m), throughout the vehicle's approach. The horizontal field of view is about two parking-space widths; that is, half a parking-space width is reserved on each side of the platform parking space in the monitoring picture.
In order to realize personnel detection and loading and unloading rate detection of the platform parking operation, the following technical scheme is adopted:
1. when a vehicle reaches a parking space in a defense area, a frame of monitoring video is obtained.
2. According to the monitoring video frame, the network model (such as a deep learning network model) obtained by training is used for detecting personnel information and cargo handling rate, and vehicle characteristic information detection including in-out capture, license plate recognition and vehicle door state recognition is carried out at the same time.
For example, when it is recognized that the door state of the vehicle is the door open state, the vehicle cargo handling rate recognition may be performed.
3. Identify the cargo handling rate of the vehicle in the platform parking space every 30 seconds until cargo handling is finished, then end the handling-rate and personnel information detection.
For example, for unloading, completion is determined by whether the compartment of the vehicle is empty; for loading, completion is determined by whether the compartment is full or the cargo reaches a specified height.
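The two completion checks can be written as one small predicate. The fill ratio (0 empty, 1 full) and the "specified height" threshold are illustrative stand-ins for whatever the vision model actually reports.

```python
# The completion checks described above, as a small predicate.
# `fill_ratio` (0 empty .. 1 full) and the threshold are assumptions.

def handling_done(mode: str, fill_ratio: float, full_threshold: float = 0.95) -> bool:
    if mode == "unload":
        return fill_ratio <= 0.0             # compartment empty
    if mode == "load":
        return fill_ratio >= full_threshold  # full, or at the specified height
    raise ValueError(f"unknown mode: {mode}")
```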
4. Finally, match the vehicle feature information with the vehicle detection information, including entry/exit captures, license plate recognition, door state, personnel information and cargo handling rate, and transmit it to a terminal for storage so that the information can be retrieved later.
As shown in fig. 5, in this embodiment, the dock camera may include a video acquisition module, a communication module, a data processing module, a sending module, and a storage module. Wherein:
the video acquisition module is used for acquiring video information of the defense area.
The communication module is used for sending the video information acquired by the video acquisition module to the data processing module.
Illustratively, the communication module may include an image transmission line or other suitable transmission medium and device.
The data processing module may include the digital signal processor, decoder and memory of an embedded board. In operation, the embedded board completes the processing of image information; the module supports dual-channel uploading and sends the detection information to the storage module and the sending module respectively.
The sending module packages the vehicle license plate information, the platform information and the cargo handling rate information according to a protocol, calculates the predicted cargo handling completion time according to the cargo handling rate, and sends the predicted cargo handling completion time to the display device at the entrance for display.
As shown in fig. 6, the display device at the entrance may include a receiving module, a data processing module, and a display module; wherein:
the receiving module receives the information sent by the platform camera and analyzes the information through a protocol, wherein the information comprises vehicle license plate information, platform information, cargo loading and unloading rate and predicted cargo loading and unloading completion time.
The data processing module processes the received data and converts it into a form the display module can present at the entrance. The displayed information includes platform information, vehicle license plates, the current cargo handling rate and the predicted handling completion time, guiding and reminding subsequent vehicles and realizing intelligent scheduling of platform loading and unloading in the logistics park.
As shown in fig. 7, in this embodiment, the video acquisition module acquires video data of the dock operation. The data processing module extracts recognition auxiliary information (auxiliary information for vehicle tail detection based on an edge detection algorithm) from each frame of the acquired video. If the auxiliary information indicates that a vehicle tail appears in the image, the data processing module detects the subsequent images and generates platform operation information parameters.
In operation, the video acquisition module continuously transmits the acquired video data to the data processing module through the communication module.
Illustratively, the video capture module may be the camera of a bullet camera, but it should be appreciated that it is not limited to a camera and may be any other device that captures video and outputs a video stream.
In this embodiment, dock monitoring includes dock triggering and dock operation. Wherein:
the platform triggering process may include: the video acquisition module acquires a platform parking video stream and sends the parking video stream to the data processing module through the communication module.
And the data processing module extracts auxiliary information from each frame of image in the video stream and judges whether the tail of the vehicle appears in the image according to the auxiliary information. If not, extracting auxiliary information from the next frame of image, and judging whether the tail of the vehicle appears in the image according to the auxiliary information; if yes, the data processing module detects the subsequent images and generates platform parking information parameters such as license plate information and cargo handling rate.
As shown in fig. 8, the complete vehicle entry triggering process is as follows: when a vehicle first enters defense zone a in the video, judge whether a vehicle tail is present; if so, capture a first tail picture, and capture a second when the vehicle reaches defense zone b. When the vehicle has stayed in zone b for a certain time (e.g. 20 seconds), recognize the license plate with the plate recognition algorithm and report an entry trigger. Entry triggers generally cover reversing into the space from the left side and from the right side.
The complete departure triggering process is as follows: when the vehicle starts moving from a standstill in defense zone b, capture a first picture; when it reaches defense zone a, capture a second picture, acquire the license plate, and report a departure trigger. Departure triggers generally cover leaving to the left side and to the right side.
In this embodiment, the vehicle tail detection process for entry triggering is as follows: select a comparison area within the defense zone of the image; if the number of edge points in some row of the comparison area exceeds a first edge-point threshold, it is determined that vehicle tail features appear in that area. Once tail features appear, they are recorded and tracked until the vehicle is parked stably in the platform parking space. The departure trigger is obtained in the same way.
A suitable edge detection method can be chosen as required; commonly used methods include Roberts, Sobel, Canny and LoG. The captured image is scanned within a preset area; during scanning, edge detection yields the edge points in the area, i.e. points with an obvious neighborhood gray-level change, and their number is used as auxiliary information for subsequent processing.
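A simplified version of the row-wise edge-point count can be sketched with a plain horizontal gradient; the gradient and row thresholds below are illustrative, and the patent leaves the choice of edge detector (Roberts, Sobel, Canny, LoG, ...) open.

```python
# Simplified row-wise edge-point count used for tail detection: compute a
# horizontal first difference over a grayscale image (a 2D list of
# intensities), count per-row points above a gradient threshold, and flag
# rows whose count exceeds the first edge-point threshold. Both
# thresholds are illustrative assumptions.

def rows_with_tail_features(gray, grad_thresh=50, row_thresh=3):
    hits = []
    for r, row in enumerate(gray):
        count = sum(1 for c in range(1, len(row))
                    if abs(row[c] - row[c - 1]) > grad_thresh)
        if count > row_thresh:
            hits.append(r)  # row shows enough edge points for a tail feature
    return hits
```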
In actual use, when a vehicle enters the platform for operation, a snapshot is captured every 30 seconds and fed as the input image to a target detection algorithm. Common target detection algorithms include R-CNN (Selective Search + CNN + SVM), SPP-net (ROI Pooling), Fast R-CNN (Selective Search + CNN + ROI), Faster R-CNN (RPN + CNN + ROI), and R-FCN.
Take Faster R-CNN (RPN + CNN + ROI) as an example. First, the image to be detected is input into a CNN to obtain a feature map. Second, the convolutional features are fed into the RPN to obtain candidate-box information, and a classifier judges whether the features extracted from each candidate box belong to a particular class. Finally, the position of each candidate box assigned to a class is further refined by a bounding-box regressor.
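The three stages can be illustrated with a deliberately tiny, self-contained toy that mirrors only the structure of Faster R-CNN (backbone feature extraction, region proposal, classification, box regression); every function below is a hand-rolled stand-in, not a learned network.

```python
# Toy two-stage detection flow mirroring Faster R-CNN's structure:
# backbone -> proposals (RPN stand-in) -> classify -> refine (regressor stand-in).

def backbone(img):
    """Backbone stand-in: 2x2 average pooling produces a coarse 'feature map'."""
    h, w = len(img) // 2, len(img[0]) // 2
    return [[sum(img[2 * y + dy][2 * x + dx] for dy in (0, 1) for dx in (0, 1)) / 4
             for x in range(w)] for y in range(h)]


def propose(feat, thresh=50):
    """RPN stand-in: one unit candidate box per strongly activated feature cell."""
    return [(x, y, x + 1, y + 1)
            for y, row in enumerate(feat)
            for x, v in enumerate(row) if v > thresh]


def classify(feat, box):
    """Classifier stand-in: label the box by its mean feature activation."""
    x0, y0, x1, y1 = box
    vals = [feat[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    return "vehicle" if sum(vals) / len(vals) > 80 else "background"


def refine(box):
    """Regressor stand-in: map the feature-map box back to input coordinates."""
    x0, y0, x1, y1 = box
    return (2 * x0, 2 * y0, 2 * x1, 2 * y1)
```

The real algorithm learns all four steps jointly; the toy only shows how a feature map flows from proposal to class label to refined box.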
In this way, recognition of personnel face information, vehicle door information, and cargo loading and unloading rate information can be achieved respectively.
In the embodiment of the application, the platform monitoring video of the loading and unloading site is acquired; the cargo loading and unloading rate of the vehicle in each platform parking space is obtained from that video; the cargo loading and unloading completion time of the vehicle in each platform parking space is determined from the rate; and vehicle operation in the loading and unloading site is controlled based on those completion times and the platform parking spaces associated with the vehicles. Congestion in the loading and unloading site is thereby reduced, and the loading and unloading efficiency and the utilization rate of each platform are improved.
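One straightforward way to turn the observed loading and unloading rate into a completion-time estimate is linear extrapolation of the most recent progress interval. The sketch below is our illustration under a constant-rate assumption; the function name and sample format are not from the patent.

```python
def estimate_completion_time(samples):
    """Estimate when loading finishes from (timestamp, fraction_loaded) samples.

    Assumes the rate observed over the last interval continues unchanged.
    Returns None until two samples exist or if no progress was made over
    the last interval (rate <= 0), since no finite estimate is possible.
    """
    if len(samples) < 2:
        return None
    (t0, f0), (t1, f1) = samples[-2], samples[-1]
    rate = (f1 - f0) / (t1 - t0)        # fraction loaded per second
    if rate <= 0:
        return None
    return t1 + (1.0 - f1) / rate       # time at which the fraction reaches 1.0
```

For instance, a vehicle that went from 0% to 25% loaded in the first 30 seconds would be estimated to finish at the 120-second mark.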
The methods provided herein are described above. The following describes the apparatus provided in the present application:
Referring to fig. 9, which is a schematic structural diagram of a dock monitoring apparatus provided in an embodiment of the present application, the dock monitoring apparatus may include:
the first acquisition unit is used for acquiring the platform monitoring video of the loading and unloading site;
the second acquisition unit is used for acquiring the cargo loading and unloading rate of the vehicle in each platform parking space based on the platform monitoring video;
the determination unit is used for determining the cargo loading and unloading completion time of the vehicle in each platform parking space based on the corresponding cargo loading and unloading rate; and
the control unit is used for controlling vehicle operation in the loading and unloading site based on the cargo loading and unloading completion time of the vehicle in each platform parking space and the platform parking space associated with the vehicle.
In one embodiment, the second acquisition unit acquiring the cargo loading and unloading rate of the vehicle in each platform parking space based on the platform monitoring video includes:
when a vehicle is detected in the platform monitoring video, identifying the in-out state of the vehicle; and
if the in-out state of the vehicle is the entering state, after the vehicle has parked in the platform parking space, extracting a corresponding monitoring video frame at preset time intervals, and acquiring the cargo loading and unloading rate of the vehicle based on the extracted monitoring video frames.
In one embodiment, the second acquisition unit identifying the in-out state of the vehicle includes:
when a vehicle tail is detected in the monitoring picture, tracking the vehicle tail, and determining that the in-out state of the vehicle is the entering state when the tail passes in sequence through a first, farther region and a second, nearer region in the monitoring picture; and
when a vehicle tail is detected in the monitoring picture, tracking the vehicle tail, and determining that the in-out state of the vehicle is the exiting state when the tail passes in sequence through the second region and then the first region in the monitoring picture.
In one embodiment, the control unit controlling vehicle operation in the loading and unloading site based on the cargo loading and unloading completion time of the vehicle in each platform parking space and the platform parking space associated with the vehicle includes:
displaying, through a display device disposed in a designated area at the entrance of the loading and unloading site, the cargo loading and unloading completion time of the vehicle in each platform parking space and the platform parking space associated with the vehicle.
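The entrance display described above could render one line per platform parking space, ordered by estimated completion time so that arriving drivers see which space frees up first. The line format and function name below are illustrative assumptions, not the patent's.

```python
import time


def entrance_display_lines(completion, now=None):
    """Render one display line per platform parking space, soonest-free first.

    `completion` maps a parking-space id to (plate, estimated completion
    time in epoch seconds); the line format is our own illustrative choice.
    """
    now = time.time() if now is None else now
    rows = sorted(completion.items(), key=lambda kv: kv[1][1])  # soonest first
    return ["dock {}: {} free in {:d} min".format(
                space, plate, max(0, round((t_done - now) / 60)))
            for space, (plate, t_done) in rows]
```

Feeding the output of the completion-time estimator into this renderer would give the entrance display its per-space countdown list.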
In one embodiment, after the first acquisition unit acquires the platform monitoring video of the loading and unloading site, the apparatus is further configured for:
carrying out face detection based on the acquired platform monitoring video;
when a face is detected, identifying the face;
and generating a face recognition record based on the face recognition result.
Correspondingly, the application also provides a hardware structure of the device shown in fig. 9. Referring to fig. 10, the hardware structure may include: a processor and a machine-readable storage medium having stored thereon machine-executable instructions executable by the processor; the processor is configured to execute machine-executable instructions to implement the methods disclosed in the above examples of the present application.
Based on the same application concept as the method, embodiments of the present application further provide a machine-readable storage medium, where several computer instructions are stored, and when the computer instructions are executed by a processor, the method disclosed in the above example of the present application can be implemented.
The machine-readable storage medium may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions and data. For example, the machine-readable storage medium may be a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (e.g., a hard disk drive), a solid state drive, any type of storage disk (e.g., an optical disk or a DVD), a similar storage medium, or a combination thereof.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Furthermore, these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (11)

1. A method of dock monitoring, comprising:
acquiring a platform monitoring video of a loading and unloading place;
acquiring the cargo handling rate of the vehicles in the parking spaces of each platform based on the platform monitoring video;
determining the cargo loading and unloading completion time of the vehicles in the platform parking spaces based on the cargo loading and unloading rate of the vehicles in the platform parking spaces;
and controlling the vehicle operation in the loading and unloading place based on the cargo loading and unloading completion time of the vehicle in each platform parking space and the platform parking space related to the vehicle.
2. The method of claim 1, wherein obtaining the cargo handling rate of the vehicle in each dock bay based on the dock surveillance video comprises:
when a vehicle is detected in the platform monitoring video, identifying the in-out state of the vehicle; and
if the in-out state of the vehicle is the entering state, after the vehicle has parked in the platform parking space, extracting a corresponding monitoring video frame at preset time intervals, and acquiring the cargo loading and unloading rate of the vehicle based on the extracted monitoring video frames.
3. The method of claim 2, wherein said identifying an ingress and egress state of the vehicle comprises:
when a vehicle tail is detected in the monitoring picture, tracking the vehicle tail, and determining that the in-out state of the vehicle is the entering state when the tail passes in sequence through a first, farther region and a second, nearer region in the monitoring picture; and
when a vehicle tail is detected in the monitoring picture, tracking the vehicle tail, and determining that the in-out state of the vehicle is the exiting state when the tail passes in sequence through the second region and then the first region in the monitoring picture.
4. The method of claim 1, wherein controlling operation of the vehicle in the loading and unloading location based on the time of completion of loading and unloading of the cargo of the vehicle in each dock space and the dock space associated with the vehicle comprises:
displaying, through a display device disposed in a designated area at an entrance of the loading and unloading site, the cargo loading and unloading completion time of the vehicle in each platform parking space and the platform parking space associated with the vehicle.
5. The method of claim 1, wherein after obtaining the dock surveillance video within the loading dock, further comprising:
carrying out face detection based on the acquired platform monitoring video;
when a face is detected, identifying the face;
and generating a face recognition record based on the face recognition result.
6. A dock monitoring device, comprising:
the first acquisition unit is used for acquiring a platform monitoring video of a loading and unloading site;
the second acquisition unit is used for acquiring the cargo loading and unloading rate of the vehicles in the parking spaces of each platform based on the platform monitoring video;
a determination unit for determining the cargo handling completion time of the vehicle in each platform parking space based on the cargo handling rate of the vehicle in each platform parking space;
and the control unit is used for controlling the vehicle operation in the loading and unloading place based on the cargo loading and unloading completion time of the vehicle in each platform parking space and the platform parking space related to the vehicle.
7. The apparatus according to claim 6, wherein the second acquiring unit acquires the cargo handling rate of the vehicle in each platform slot based on the platform surveillance video, and includes:
when a vehicle is detected in the platform monitoring video, identifying the in-out state of the vehicle; and
if the in-out state of the vehicle is the entering state, after the vehicle has parked in the platform parking space, extracting a corresponding monitoring video frame at preset time intervals, and acquiring the cargo loading and unloading rate of the vehicle based on the extracted monitoring video frames.
8. The apparatus of claim 7, wherein the second obtaining unit identifies an ingress and egress state of the vehicle, comprising:
when a vehicle tail is detected in the monitoring picture, tracking the vehicle tail, and determining that the in-out state of the vehicle is the entering state when the tail passes in sequence through a first, farther region and a second, nearer region in the monitoring picture; and
when a vehicle tail is detected in the monitoring picture, tracking the vehicle tail, and determining that the in-out state of the vehicle is the exiting state when the tail passes in sequence through the second region and then the first region in the monitoring picture.
9. The apparatus according to claim 6, wherein the control unit controls the operation of the vehicle in the loading/unloading place based on the cargo loading/unloading completion time of the vehicle in each of the platform slots and the platform slot associated with the vehicle, and includes:
displaying, through a display device disposed in a designated area at an entrance of the loading and unloading site, the cargo loading and unloading completion time of the vehicle in each platform parking space and the platform parking space associated with the vehicle.
10. The apparatus of claim 6, wherein after the first acquisition unit acquires the platform monitoring video of the loading and unloading site, the apparatus is further configured for:
carrying out face detection based on the acquired platform monitoring video;
when a face is detected, identifying the face;
and generating a face recognition record based on the face recognition result.
11. An electronic device, comprising:
a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor; the processor is configured to execute machine executable instructions to implement the method steps of any of claims 1-5.
CN202010162975.XA 2020-03-10 2020-03-10 Platform monitoring method and device and electronic equipment Pending CN113382199A (en)


Publications (1)

Publication Number Publication Date
CN113382199A true CN113382199A (en) 2021-09-10


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2013101248A4 (en) * 2012-09-21 2013-10-17 Ge Global Sourcing Llc Management system and method
CN110047295A (en) * 2019-05-27 2019-07-23 杭州亚美利嘉科技有限公司 Garden vehicle dispatch system and method
CN110443510A (en) * 2019-08-08 2019-11-12 圆通速递有限公司 A kind of vehicle dispatching method and system



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination