WO2019161663A1 - A port area monitoring method and system, and central control system (一种港区监控方法及系统、中控系统) - Google Patents

A port area monitoring method and system, and central control system (一种港区监控方法及系统、中控系统)

Info

Publication number
WO2019161663A1
WO2019161663A1 (application PCT/CN2018/105474; priority CN2018105474W)
Authority
WO
WIPO (PCT)
Prior art keywords
self
image
target object
global image
driving vehicle
Prior art date
Application number
PCT/CN2018/105474
Other languages
English (en)
French (fr)
Inventor
吴楠
Original Assignee
北京图森未来科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京图森未来科技有限公司 filed Critical 北京图森未来科技有限公司
Priority to EP18907348.9A priority Critical patent/EP3757866A4/en
Priority to AU2018410435A priority patent/AU2018410435B2/en
Publication of WO2019161663A1 publication Critical patent/WO2019161663A1/zh
Priority to US17/001,082 priority patent/US20210073539A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes
    • G06V20/182 - Network patterns, e.g. roads or rivers
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 - Control of position or course in two dimensions
    • G05D1/021 - Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 - Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/12 - Target-seeking control
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 - Image mosaicing, e.g. composing plane images from plane sub-images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 - Television systems
    • H04N7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 - Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30232 - Surveillance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 - Target detection

Definitions

  • The invention relates to the field of automatic driving, and in particular to a port area monitoring method, a port area monitoring system, and a central control system.
  • The present invention provides a port area monitoring method to solve the technical problem that the prior art cannot provide an intuitive and effective global view of target objects in a port area.
  • a method for monitoring a port area includes:
  • the tracking results and categories of the target object are displayed in a global image.
  • a port area monitoring system comprising a roadside camera and a central control system disposed in a port area, wherein:
  • a roadside camera for collecting images and transmitting them to the central control system;
  • a central control system for receiving the images acquired by each roadside camera; performing coordinate transformation and splicing on the received images to obtain a global image of the port area from a bird's-eye (top-down) perspective; determining the road area in the global image; performing object detection and object tracking on the road area to obtain the tracking results and categories of target objects; and displaying the tracking results and categories of the target objects in the global image.
  • a third aspect provides a central control system, where the system includes:
  • a communication unit configured to receive an image collected by each roadside camera
  • an image processing unit configured to perform coordinate conversion and splicing on the received images to obtain a global image of the port area from a bird's-eye (top-down) perspective;
  • a road area determining unit configured to determine a road area in the global image
  • a target detection and tracking unit configured to perform object detection and object tracking on a road area in the global image, to obtain a tracking result and a category of the target object
  • a display unit for displaying a tracking result and a category of the target object in a global image.
  • The technical solution of the invention deploys a large number of roadside cameras in the port area and captures images of the port area through them. The images collected by the roadside cameras are first coordinate-converted and spliced to obtain a bird's-eye-view global image of the port area. On one hand, this global picture lets staff grasp the overall situation of the port area; on the other hand, the tracking results and categories of the target objects in the road area of the global image can be displayed in real time, so staff can intuitively follow the movement of target objects of each category. The technical solution of the present invention therefore solves the technical problem that the prior art cannot provide an intuitive and effective global view of target objects in the port area.
  • FIG. 1 is a schematic structural diagram of a port area monitoring system according to an embodiment of the present invention.
  • FIG. 2 is a second schematic structural diagram of a port area monitoring system according to an embodiment of the present invention.
  • FIG. 3 is a schematic structural diagram of a central control system according to an embodiment of the present invention.
  • FIG. 4A is a schematic diagram of an image collected by a roadside camera according to an embodiment of the present invention.
  • 4B is a schematic diagram of grouping images according to an acquisition time according to an embodiment of the present invention.
  • 4C is a schematic diagram of a set of bird's-eye view images according to an embodiment of the present invention.
  • 4D is a schematic diagram of splicing a set of bird's-eye view images into a global image according to an embodiment of the present invention
  • 4E is a schematic diagram showing tracking results and categories of a target object in a global image according to an embodiment of the present invention.
  • FIG. 5 is a second schematic structural diagram of a central control system according to an embodiment of the present invention.
  • FIG. 6 is a third structural schematic diagram of a port area monitoring system according to an embodiment of the present invention.
  • FIG. 7 is a schematic diagram of communication between a first V2X device, a roadside V2X device, and a second V2X device according to an embodiment of the present invention.
  • FIG. 8 is a third schematic structural diagram of a central control system according to an embodiment of the present invention.
  • FIG. 9 is a flowchart of a method for monitoring a port area according to an embodiment of the present invention.
  • FIG. 10 is a flowchart of performing coordinate conversion and splicing on received images to obtain a bird's-eye-view global image of the port area according to an embodiment of the present invention.
  • FIG. 11 is a second flowchart of a method for monitoring a port area according to an embodiment of the present invention.
  • The application scenario of the technical solution of the present invention is not limited to port areas (including sea port areas, highway port areas, etc.); it can also be applied to other scenarios such as mining areas, cargo distribution centers, large warehouses, and parks. Porting the technical solution to other application scenarios requires no substantial changes: those skilled in the art need not work creatively or overcome specific technical problems. Due to limited space, this application does not describe those applications in detail; the following description of the technical solutions is based on the port area.
  • FIG. 1 is a schematic structural diagram of a port area monitoring system according to an embodiment of the present invention.
  • the system includes a roadside camera 1 and a central control system 2 disposed in a port area, wherein:
  • the roadside camera 1 is configured to collect images and send the images to the central control system 2;
  • the central control system 2 is configured to receive the images acquired by each roadside camera 1; perform coordinate transformation and splicing on the received images to obtain a bird's-eye-view global image of the port area; determine the road area in the global image; perform object detection and object tracking on the road area in the global image to obtain the tracking results and categories of target objects; and display the tracking results and categories of the target objects in the global image.
  • The roadside cameras 1 can be deployed on a full-coverage principle, so that the images they collect cover the geographical area of the entire port as much as possible. More flexible deployments are also possible, such as full coverage of only some core areas of the port; this application does not strictly limit this.
  • To give the roadside camera 1 a larger field of view, it can be mounted on equipment of a certain height in the port area, such as a tower crane, a tire crane, a bridge crane, a light pole, a gantry crane, or a front hoist, or on roadside poles of a certain height installed in the port area specifically for mounting the roadside cameras 1. For convenience, a roadside camera 1 disposed on a tower crane can be referred to as a tower crane CAM, one disposed on a light pole as a light pole CAM, and one disposed on a crane as a crane CAM.
  • The image acquisition clocks of all the roadside cameras 1 are synchronized, the camera parameters of each roadside camera 1 are the same, and the acquired images have the same size.
  • the structure of the central control system 2 can be as shown in FIG. 3, including a communication unit 21, an image processing unit 22, a road area determining unit 23, a target detection tracking unit 24, and a display unit 25, wherein:
  • the communication unit 21 is configured to receive an image collected by each roadside camera
  • the image processing unit 22 is configured to perform coordinate conversion and splicing on the received image to obtain a global image of the port area under God's perspective;
  • a road area determining unit 23 configured to determine a road area in the global image
  • the target detection and tracking unit 24 is configured to perform object detection and object tracking on the road area in the global image to obtain a tracking result and a category of the target object;
  • the display unit 25 is configured to display the tracking result and the category of the target object in the global image.
  • The central control system 2 can run on devices such as a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array) controller, a desktop computer, a mobile computer, a PAD, or a single-chip microcomputer.
  • The communication unit 21 can transmit and receive information wirelessly, for example via an antenna.
  • The image processing unit 22, the road area determining unit 23, and the target detection and tracking unit 24 can run on the processor (for example, a CPU (Central Processing Unit)) of a DSP, an FPGA controller, a desktop computer, a mobile computer, a PAD, a single-chip microcomputer, or similar device.
  • The display unit 25 can run on the display hardware (for example, driven by a GPU (Graphics Processing Unit)) of a DSP, an FPGA controller, a desktop computer, a mobile computer, a PAD, a single-chip microcomputer, or similar device.
  • The image processing unit 22 is specifically configured to: determine images with the same acquisition time among the received images as a group of images; perform coordinate conversion on each image in the group to obtain a group of bird's-eye view images; and splice the group of bird's-eye view images according to a preset splicing order to obtain the global image, the splicing order being obtained from the spatial positional relationship between the roadside cameras.
  • For example, suppose there are n roadside cameras 1 in the port area, numbered CAM1, CAM2, CAM3, ..., CAMn according to the adjacency of their spatial positions, and the image stitching order is set accordingly as CAM1 -> CAM2 -> CAM3 -> ... -> CAMn. With time t0 as the starting time, CAM1 sequentially collects image set 1, CAM2 sequentially collects image set 2, ..., and CAMn sequentially collects image set n, as shown in FIG. 4A; each image set contains k images. Images with the same acquisition time across the n image sets are determined as one group of images: in FIG. 4B, the images inside one dotted frame constitute a group. This yields k groups of images, and each group generates one global image, giving k global images in total.
  • Each image in each group of images is coordinate-converted to obtain a set of bird's-eye view images.
  • For example, four roadside cameras in the port area each capture one image at the same time; the four bird's-eye view images obtained from these four images form a set of bird's-eye view images, as shown in FIG. 4C.
  • FIG. 4D shows splicing a set of bird's-eye view images according to the preset splicing order to obtain a global image.
  • FIG. 4E shows the tracking results and categories of the target objects in a global image, where a dashed box indicates the tracking result of a vehicle.
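The grouping-by-timestamp and fixed stitching order described above can be sketched as follows. This is an illustrative sketch only: the frame-record shape, camera names, and the completeness check are assumptions, not details fixed by the patent.

```python
from collections import defaultdict

def group_by_timestamp(frames):
    """Group (camera_id, timestamp, image) records so that frames
    captured at the same instant form one group, as in FIG. 4B."""
    groups = defaultdict(dict)
    for cam_id, ts, img in frames:
        groups[ts][cam_id] = img
    # keep only complete groups (one frame per camera)
    cams = {cam_id for cam_id, _, _ in frames}
    return {ts: g for ts, g in groups.items() if set(g) == cams}

# two cameras, two acquisition times -> two groups of images
frames = [
    ("CAM1", 0, "img_1_t0"), ("CAM2", 0, "img_2_t0"),
    ("CAM1", 1, "img_1_t1"), ("CAM2", 1, "img_2_t1"),
]
groups = group_by_timestamp(frames)
stitch_order = ["CAM1", "CAM2"]  # derived from camera spatial positions
ordered = [groups[0][c] for c in stitch_order]
```

Each group would then be converted to bird's-eye view images and spliced in `stitch_order` to form one global image.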
  • To perform the coordinate conversion, an image is projected onto the ground plane to obtain the bird's-eye view image corresponding to that image.
  • the specific implementation can be as follows:
  • The conversion relationship between the imaging plane coordinate system of each roadside camera and the ground plane coordinate system is obtained in advance.
  • Specifically, the conversion relationship between the camera coordinate system of each roadside camera and the ground plane coordinate system is determined in advance, manually or by computer; from this relationship and the conversion relationship between the camera coordinate system and the imaging plane coordinate system of the roadside camera (both belong to the existing technology), the conversion relationship between the imaging plane coordinate system of the roadside camera and the ground plane coordinate system is obtained.
  • Each pixel point in an image captured by the roadside camera is then projected onto the ground plane according to the conversion relationship between the imaging plane coordinate system and the ground plane coordinate system, and the bird's-eye view image corresponding to the image is obtained.
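For a planar ground, the imaging-plane-to-ground-plane conversion amounts to applying a 3x3 planar homography to each pixel. A minimal sketch under that assumption follows; the matrix `H` stands in for the pre-calibrated conversion relationship and is not given by the patent.

```python
import numpy as np

def project_to_ground(pixels, H):
    """Map N pixel coordinates (N x 2 array) to ground-plane
    coordinates using a 3x3 homography H (imaging plane -> ground).
    H is an assumed calibration result, not specified in the patent."""
    pts = np.hstack([pixels, np.ones((len(pixels), 1))])  # homogeneous coords
    ground = (H @ pts.T).T
    return ground[:, :2] / ground[:, 2:3]  # de-homogenise

# with the identity homography, points map to themselves
H = np.eye(3)
pts = np.array([[100.0, 50.0], [0.0, 0.0]])
out = project_to_ground(pts, H)
```

In practice one would warp the whole image at once (e.g. with an image-warping routine) rather than loop over pixels; the per-point form above just makes the coordinate math explicit.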
  • The road area determining unit 23 may be implemented by, but not limited to, either of the following methods:
  • Method A1: superimpose the high-precision map corresponding to the port area on the global image to obtain the road area in the global image.
  • Method A2: perform semantic segmentation on the global image with a preset semantic segmentation algorithm to obtain the road area in the global image.
  • The high-precision map corresponding to the port area refers to an electronic map drawn by a map engine based on high-precision map data of the port area, in which all roads in the port area are marked (including road boundary lines, lane lines, road directions, speed limits, turning information, and so on).
  • In Method A1, the high-precision map corresponding to the port area is superimposed on the global image to obtain the road area of the global image. This can be implemented as follows: step 1) adjust the size of the global image to be consistent with the high-precision map (e.g., by stretching/scaling); step 2) manually calibrate several common reference points usable for superposition on both the high-precision map and the global image (for example, the four corner points of the high-precision map, or junction points of certain roads), and superimpose the high-precision map on the global image through these reference points; step 3) either manually draw the roads at the corresponding positions in the global image according to the roads on the high-precision map to obtain the road area in the global image, or, taking the image coordinate system of the global image as reference, project the road points constituting the roads on the high-precision map into the image coordinate system and take the regions of the global image coinciding with the resulting coordinate points as the road area.
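The projection variant of step 3 can be sketched as rasterising map road points into a mask over the global image. The `map_to_image` callable stands in for the calibrated map-to-image conversion and is an assumption of this sketch.

```python
import numpy as np

def road_mask_from_map(road_points, map_to_image, shape):
    """Rasterise high-precision-map road points into a boolean mask
    over the global image. `map_to_image` converts map coordinates
    to (u, v) pixel coordinates; points outside the image are dropped."""
    mask = np.zeros(shape, dtype=bool)
    for x, y in road_points:
        u, v = map_to_image(x, y)
        if 0 <= v < shape[0] and 0 <= u < shape[1]:
            mask[v, u] = True
    return mask

# trivial map->image conversion for illustration; the second point
# falls outside the 5x5 image and is ignored
mask = road_mask_from_map([(2.0, 3.0), (9.0, 9.0)],
                          lambda x, y: (int(x), int(y)),
                          (5, 5))
```

A real implementation would also connect the points into filled road polygons rather than marking isolated pixels.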
  • the preset semantic segmentation algorithm may be a pre-trained semantic segmentation model capable of semantically segmenting the input image.
  • the semantic segmentation model can be iteratively trained on the neural network model based on the pre-collected sample data.
  • The sample data includes a certain number of images containing roads, collected in advance in the port area, together with manual semantic annotations of those images. How to iteratively train the neural network model on the sample data to obtain a semantic segmentation model can be found in the existing technology and is not strictly limited here.
  • The target detection and tracking unit 24 may be implemented as follows: a preset object detection algorithm performs object detection on the road area in the global image to obtain a detection result (the detection result includes the two-dimensional frame and the category of each target object).
  • The category of a target object can be indicated by drawing its two-dimensional frame in a different color (for example, a green frame indicates that the target object in the frame is a vehicle, and a red frame indicates a pedestrian); it is also possible to mark the category as text near the two-dimensional frame, for example directly above or below it.
  • A preset object tracking algorithm then obtains the tracking results and categories for the current global image according to the detection result of the current global image and the object tracking result of the previous frame's global image.
  • the category of the target object may include a vehicle, a pedestrian, and the like.
  • The object detection algorithm may be an object detection model obtained by iteratively training a neural network model on training data (including a certain number of images containing target objects collected in advance in the port area, together with manual object-detection annotations).
  • the object tracking algorithm may be an object tracking model obtained by iteratively training the neural network model according to the training data.
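The colour/label display convention could be realised as a small annotation step before drawing. The specific colours below merely follow the green-vehicle/red-pedestrian example given earlier; the dictionary shape and fallback colour are assumptions of this sketch.

```python
# Assumed colour convention (RGB): green for vehicles, red for
# pedestrians; unknown categories fall back to white.
CATEGORY_COLOURS = {"vehicle": (0, 255, 0), "pedestrian": (255, 0, 0)}

def annotate(detections):
    """Attach a display colour and text label to each detection
    (box, category) so the UI can draw the 2-D frame and its label."""
    out = []
    for box, category in detections:
        colour = CATEGORY_COLOURS.get(category, (255, 255, 255))
        out.append({"box": box, "label": category, "colour": colour})
    return out

result = annotate([((10, 10, 40, 30), "vehicle"),
                   ((50, 12, 60, 40), "pedestrian")])
```

The resulting records map directly onto typical rectangle-and-text drawing calls in any imaging library.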
  • the central control system 2 may further include a motion trajectory prediction unit 26 and a path optimization unit 27, as shown in FIG. 5, wherein :
  • the motion trajectory prediction unit 26 is configured to predict a motion trajectory corresponding to each target object according to the tracking result and the category of the target object;
  • the path optimization unit 27 is configured to optimize a driving path of each autonomous driving vehicle according to a motion trajectory corresponding to each target object;
  • the communication unit 21 is further configured to transmit the optimized driving path of each of the self-driving vehicles to the corresponding autonomous driving vehicle.
  • In one implementation, the motion trajectory prediction unit 26 predicts the motion trajectory corresponding to each target object as follows: the attitude data of the target object is determined by analyzing its tracking result and category, and the attitude data is input into a preset motion model corresponding to the target object's category to obtain the motion trajectory corresponding to the target object.
  • If a target object is equipped with a positioning unit (such as a GPS positioning unit) and an inertial measurement unit (IMU), the target object can generate its attitude data from the measurement results of the positioning unit and the inertial measurement unit and send the attitude data to the motion trajectory prediction unit.
  • the motion trajectory prediction unit 26 predicts a motion trajectory corresponding to each target object, and the specific implementation may be as follows: receiving the attitude data sent by the target object, and inputting the posture data of the target object into a preset motion model corresponding to the target object category, Obtain a motion trajectory corresponding to the target object.
  • The automatic driving control device periodically or in real time sends the estimated travel trajectory of the self-driving vehicle in which it is located to the central control system 2 (the automatic driving control device predicts the estimated travel trajectory from the historical trajectory of the self-driving vehicle and the attitude information fed back by the vehicle's IMU sensor; how to estimate it can be found in the prior art, and this point is not the invention of the present technical solution).
  • the path optimization unit 27 is specifically configured to:
  • the estimated driving trajectory sent by each self-driving vehicle is compared with the motion trajectories corresponding to the target objects; if coincidence occurs (full or partial), the driving path of the self-driving vehicle is optimized so that the optimized driving path does not coincide with the motion trajectory of any target object; if no coincidence occurs, the driving path of the self-driving vehicle is not optimized.
  • For example, the estimated driving trajectory of a self-driving vehicle consists of a certain number of position points, as does the motion trajectory of each target object. If at least n position points (n is a preset natural number greater than or equal to 1, whose value can be set flexibly according to actual needs and is not strictly limited by this application) of the vehicle's estimated driving trajectory coincide with points of a target object's motion trajectory, the estimated travel trajectory of the self-driving vehicle is considered to coincide with the motion trajectory of the target object.
  • In some embodiments, the system described in FIG. 5 further includes roadside V2X (vehicle-to-everything) devices disposed in the port area and an automatic driving control device disposed on each self-driving vehicle.
  • The central control system 2 is provided with a first V2X device, and the automatic driving control device is provided with a second V2X device, as shown in FIG. 6, wherein:
  • the communication unit 21 is specifically configured to: send the optimized driving path of each self-driving vehicle to the first V2X device, and send, by the first V2X device, the optimized driving path of each self-driving vehicle to the roadside V2X device;
  • a roadside V2X device for broadcasting the optimized travel paths of self-driving vehicles received from the first V2X device, so that the second V2X device on each self-driving vehicle receives the optimized travel path corresponding to that vehicle.
  • the roadside V2X device can adopt the full coverage principle of the port area, that is, the roadside V2X device can realize communication between the self-driving vehicle and the central control system in all areas in the port area.
  • The first V2X device of the central control system packs the optimized driving path corresponding to a self-driving vehicle into a V2X communication message and broadcasts it; when a roadside V2X device receives the V2X communication message, it rebroadcasts it; the second V2X device then receives the V2X communication message corresponding to the self-driving vehicle in which it is located.
  • Alternatively, the communication unit 21 can package the optimized driving path of a self-driving vehicle into a TCP/UDP (Transmission Control Protocol / User Datagram Protocol) message sent to the first V2X device (for example, with the driving path as the payload of the TCP/UDP packet). The first V2X device parses the received TCP/UDP packet to obtain the optimized driving path, packs the parsed path into a V2X communication message, and broadcasts that message; when a roadside V2X device receives the V2X communication message, it rebroadcasts it; the second V2X device receives the V2X communication message corresponding to the self-driving vehicle in which it is located, parses it to obtain the optimized driving path for that vehicle, packages the path into a TCP/UDP message, and sends it to the vehicle's automatic driving control device, as shown in FIG. 7.
  • Both the TCP/UDP message and the V2X communication message carry the identity information of the corresponding self-driving vehicle, to declare which self-driving vehicle the optimized driving path in the message belongs to.
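A length-prefixed payload carrying the vehicle identity and the optimized path is one plausible shape for the TCP/UDP message described above. The JSON encoding and field names below are assumptions; the patent does not fix a wire format.

```python
import json
import struct

def pack_path_message(vehicle_id, path):
    """Serialise vehicle id + optimized path into a length-prefixed
    payload (4-byte big-endian length, then a JSON body). This is a
    hypothetical wire format, not the one used by the patent."""
    body = json.dumps({"vehicle_id": vehicle_id, "path": path}).encode()
    return struct.pack("!I", len(body)) + body

def unpack_path_message(payload):
    """Inverse of pack_path_message: read the length prefix, then
    decode the JSON body."""
    (length,) = struct.unpack("!I", payload[:4])
    return json.loads(payload[4:4 + length].decode())

msg = pack_path_message("truck-07", [[0, 0], [5, 2]])
decoded = unpack_path_message(msg)
```

The same body could then be carried as the payload of a UDP datagram or a TCP stream segment, with the vehicle id letting each second V2X device pick out its own message.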
  • The communication interface between the first V2X device and the communication unit 21 of the central control system 2 can be Ethernet, USB (Universal Serial Bus), or a serial port; likewise, the communication interface between the second V2X device and the automatic driving control device can be Ethernet, USB, or a serial port.
  • the second embodiment of the present invention further provides a central control system.
  • the structure of the central control system can be as shown in FIG. 3 or FIG. 5, and details are not described herein again.
  • the third embodiment of the present invention further provides a central control system.
  • FIG. 8 shows the structure of a central control system provided by an embodiment of the present application, including a processor 81 and at least one memory 82. The at least one memory 82 contains at least one machine-executable instruction, and the processor 81 executes the at least one machine-executable instruction to perform the operations described below.
  • The processor 81 executes the at least one machine-executable instruction to perform coordinate transformation and splicing of the received images to obtain a bird's-eye-view global image of the port area, including: determining images with the same acquisition time among the received images as a group of images; performing coordinate transformation on each image in the group to obtain a set of bird's-eye view images; and splicing the set of bird's-eye view images according to a preset stitching order, obtained from the spatial positional relationship between the roadside cameras, to obtain the global image.
  • the processor 81 executing the at least one machine executable instruction to perform determining the road area in the global image comprises: superimposing the high-precision map corresponding to the port area with the global image to obtain The road region in the global image; or the semantic segmentation of the global image using a preset semantic segmentation algorithm to obtain a road region in the global image.
  • The processor 81 executes the at least one machine-executable instruction to further perform: predicting the motion trajectory corresponding to each target object according to the tracking result and category of the target object; optimizing the driving path of each self-driving vehicle according to the motion trajectory corresponding to each target object; and transmitting the optimized driving path of each self-driving vehicle to the corresponding autonomous driving vehicle.
  • The processor 81 executes the at least one machine-executable instruction to perform the optimization of the driving path of each self-driving vehicle according to the motion trajectory corresponding to each target object, including: for each self-driving vehicle, comparing the estimated driving trajectory sent by the vehicle with the motion trajectory corresponding to each target object; if coincidence occurs, optimizing the driving path of the vehicle so that the optimized driving path does not coincide with the motion trajectory of any target object; if no coincidence occurs, not optimizing the driving path.
  • the fourth embodiment of the present invention provides a port area monitoring method, shown in FIG. 9, which can run in the aforementioned central control system 2 and includes:
  • Step 101: Receive images collected by the roadside cameras disposed in the port area;
  • Step 102: Perform coordinate transformation and stitching on the received images to obtain a global bird's-eye-view image of the port area;
  • Step 103: Determine the road area in the global image;
  • Step 104: Perform object detection and object tracking on the road area in the global image to obtain tracking results and categories of target objects;
  • Step 105: Display the tracking results and categories of the target objects in the global image.
  • step 102 can be specifically implemented by the process shown in FIG. 10:
  • Step 102A: Determine images with the same capture time among the received images as a group of images;
  • Step 102B: Perform coordinate transformation on each image in the group of images to obtain a group of bird's-eye-view images;
  • Step 102C: Stitch the group of bird's-eye-view images in a preset stitching order to obtain a global image, the stitching order being derived from the spatial positional relationship between the roadside cameras.
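Steps 102A to 102C can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the patented implementation: images are stand-in pixel grids, the per-camera bird's-eye transform is a placeholder function, and the global image is formed by pasting the transformed images side by side in the preset camera order (a real system would warp with calibrated transforms and blend overlapping regions).

```python
def group_by_capture_time(frames):
    """Group (camera_id, timestamp, image) records by identical capture time (step 102A)."""
    groups = {}
    for cam_id, ts, img in frames:
        groups.setdefault(ts, {})[cam_id] = img
    return groups

def to_birds_eye(img):
    """Placeholder for the calibrated image-plane -> ground-plane projection (step 102B)."""
    return img

def stitch(group, stitch_order):
    """Paste bird's-eye images side by side in the preset camera order (step 102C)."""
    views = [to_birds_eye(group[cam]) for cam in stitch_order]
    height = len(views[0])
    return [sum((v[row] for v in views), []) for row in range(height)]

# two cameras, two capture times; images are tiny 2x2 pixel grids
frames = [("CAM1", 0, [[0, 0], [0, 0]]), ("CAM2", 0, [[1, 1], [1, 1]]),
          ("CAM1", 1, [[0, 0], [0, 0]]), ("CAM2", 1, [[1, 1], [1, 1]])]
groups = group_by_capture_time(frames)              # k = 2 time groups
global_image = stitch(groups[0], ["CAM1", "CAM2"])  # one global image per group
```

Each time group yields exactly one global image, so k capture times produce k global images, as in the example of FIG. 4A-4D.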
  • step 103 may be specifically implemented as follows: superimposing a high-precision map corresponding to the port area on the global image to obtain the road area in the global image (see way A1 in the first embodiment, not repeated here); or performing semantic segmentation on the global image with a preset semantic segmentation algorithm to obtain the road area in the global image (see way A2 in the first embodiment, not repeated here).
  • the methods shown in FIG. 9 and FIG. 10 may further include steps 106 to 108; FIG. 11 shows the flow of FIG. 9 extended with steps 106 to 108.
  • Step 106: Predict a motion trajectory for each target object according to the tracking results and categories of the target objects;
  • Step 107: Optimize the driving path of each self-driving vehicle according to the motion trajectories of the target objects;
  • Step 108: Send the optimized driving path to the corresponding self-driving vehicle.
  • step 107 can be specifically implemented as follows:
  • for each self-driving vehicle, the estimated driving trajectory sent by that self-driving vehicle is compared with the motion trajectory of each target object; if they overlap, the driving path of the self-driving vehicle is optimized so that the optimized path does not overlap with any target object's motion trajectory; if they do not overlap, the driving path of the self-driving vehicle is not optimized.
  • step 108 may be implemented as follows: the optimized driving path is sent to the corresponding self-driving vehicle via V2X communication.
  • each functional unit in each embodiment of the present invention may be integrated into one processing module, or each unit may exist physically separately, or two or more units may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
  • the integrated modules, if implemented in the form of software functional modules and sold or used as stand-alone products, may also be stored in a computer readable storage medium.
  • embodiments of the present invention can be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or a combination of software and hardware. Moreover, the invention can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage and optical storage, etc.) including computer usable program code.
  • the computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
  • these computer program instructions can also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Automation & Control Theory (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

A port area monitoring method and system, and a central control system, intended to solve the technical problem that the prior art cannot provide an intuitive and effective global view of target objects in a port area. The port area monitoring method comprises: receiving images collected by roadside cameras disposed in the port area (101); performing coordinate transformation and stitching on the received images to obtain a global bird's-eye-view image of the port area (102); determining the road area in the global image (103); performing object detection and object tracking on the road area in the global image to obtain tracking results and categories of target objects (104); and displaying the tracking results and categories of the target objects in the global image (105).

Description

Port area monitoring method and system, and central control system
This application claims priority to Chinese Patent Application No. 201810157700.X, filed with the Chinese Patent Office on February 24, 2018 and entitled "Port area monitoring method and system, and central control system", the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the field of autonomous driving, and in particular to a port area monitoring method, a port area monitoring system and a central control system.
Background
With the development of autonomous driving technology, certain geographically large areas (e.g. seaport areas, highway port areas, mining areas, large warehouses, freight distribution centers, campuses) are being equipped with large numbers of self-driving vehicles. Ensuring that these vehicles drive safely within such an area requires a global view of the target objects in the area (e.g. self-driving vehicles, non-self-driving vehicles, pedestrians). Although surveillance cameras are installed in these areas, the cameras operate independently of one another and each shoots from a different angle, so staff must watch the screens of multiple cameras at the same time; this is not only inefficient, but the captured pictures also give no intuitive view of the target objects in the area.
Summary
In view of the above problems, the present invention provides a port area monitoring method to solve the technical problem that the prior art cannot provide an intuitive and effective global view of target objects in a port area.
In a first aspect, an embodiment of the present invention provides a port area monitoring method, comprising:
receiving images collected by roadside cameras disposed in the port area;
performing coordinate transformation and stitching on the received images to obtain a global bird's-eye-view image of the port area;
determining the road area in the global image;
performing object detection and object tracking on the road area in the global image to obtain tracking results and categories of target objects;
displaying the tracking results and categories of the target objects in the global image.
In a second aspect, an embodiment of the present invention provides a port area monitoring system comprising roadside cameras disposed in the port area and a central control system, wherein:
the roadside cameras are configured to collect images and send the images to the central control system;
the central control system is configured to receive the images collected by the roadside cameras; perform coordinate transformation and stitching on the received images to obtain a global bird's-eye-view image of the port area; determine the road area in the global image; perform object detection and object tracking on the road area in the global image to obtain tracking results and categories of target objects; and display the tracking results and categories of the target objects in the global image.
In a third aspect, an embodiment of the present invention provides a central control system, comprising:
a communication unit configured to receive the images collected by the roadside cameras;
an image processing unit configured to perform coordinate transformation and stitching on the received images to obtain a global bird's-eye-view image of the port area;
a road area determination unit configured to determine the road area in the global image;
an object detection and tracking unit configured to perform object detection and object tracking on the road area in the global image to obtain tracking results and categories of target objects;
a display unit configured to display the tracking results and categories of the target objects in the global image.
In the technical solution of the present invention, a large number of roadside cameras are disposed in the port area to capture the scene. First, the images collected by the roadside cameras are coordinate-transformed and stitched to obtain a global bird's-eye-view image of the port area; second, the road area in the global image is determined; finally, object detection and object tracking are performed on the global image to obtain the tracking results and categories of the target objects in the road area. On the one hand, a real-time global bird's-eye-view image of the whole port area is obtained; since this view looks down on the ground from above, the situation in the whole port area can be seen far more intuitively, and staff can grasp everything in the port area from a single screen. On the other hand, the tracking results and categories of the target objects in the road area are displayed in the global image in real time, so staff can see the movements of all categories of target objects at a glance. The technical solution therefore solves the problem that the prior art cannot provide an intuitive and effective global view of target objects in a port area.
Other features and advantages of the present invention will be set forth in the following description, and will in part become apparent from the description or be understood by practicing the invention. The objects and other advantages of the invention can be realized and obtained by the structures particularly pointed out in the written description, the claims and the drawings.
The technical solution of the present invention is described in further detail below with reference to the drawings and embodiments.
Brief Description of the Drawings
The drawings are provided for a further understanding of the present invention, constitute a part of the specification, and together with the embodiments serve to explain the invention without limiting it. Obviously, the drawings described below are only some embodiments of the invention, and a person of ordinary skill in the art can derive other drawings from them without creative effort. In the drawings:
FIG. 1 is a first schematic structural diagram of a port area monitoring system in an embodiment of the present invention;
FIG. 2 is a second schematic structural diagram of the port area monitoring system;
FIG. 3 is a first schematic structural diagram of the central control system;
FIG. 4A is a schematic diagram of images collected by roadside cameras;
FIG. 4B is a schematic diagram of grouping images by capture time;
FIG. 4C is a schematic diagram of a group of bird's-eye-view images;
FIG. 4D is a schematic diagram of stitching a group of bird's-eye-view images into one global image;
FIG. 4E is a schematic diagram of displaying tracking results and categories of target objects in a global image;
FIG. 5 is a second schematic structural diagram of the central control system;
FIG. 6 is a third schematic structural diagram of the port area monitoring system;
FIG. 7 is a schematic diagram of communication among the first V2X device, the roadside V2X device and the second V2X device;
FIG. 8 is a third schematic structural diagram of the central control system;
FIG. 9 is a first flowchart of the port area monitoring method;
FIG. 10 is a flowchart of performing coordinate transformation and stitching on the received images to obtain a global bird's-eye-view image of the port area;
FIG. 11 is a second flowchart of the port area monitoring method.
Detailed Description
To enable those skilled in the art to better understand the technical solutions of the present invention, the technical solutions in the embodiments are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art based on these embodiments without creative effort shall fall within the protection scope of the invention.
The application scenarios of the technical solution are not limited to port areas (including seaport areas and highway port areas); it can also be applied to other scenarios such as mining areas, freight distribution centers, large warehouses and campuses. Porting the solution to other scenarios requires no substantial changes, no creative effort by those skilled in the art, and no particular technical obstacles to overcome. For brevity, this application does not describe these other scenarios in detail; the following description takes a port area as the example.
Embodiment 1
Referring to FIG. 1, a schematic structural diagram of a port area monitoring system in an embodiment of the present invention: the system comprises roadside cameras 1 disposed in the port area and a central control system 2, wherein:
the roadside camera 1 is configured to collect images and send the images to the central control system 2;
the central control system 2 is configured to receive the images collected by each roadside camera 1; perform coordinate transformation and stitching on the received images to obtain a global bird's-eye-view image of the port area; determine the road area in the global image; perform object detection and object tracking on the road area in the global image to obtain tracking results and categories of target objects; and display the tracking results and categories of the target objects in the global image.
In the embodiments of the present invention, the roadside cameras 1 may follow a full-coverage principle, so that the set of images they collect covers the whole geographic extent of the port area; of course, those skilled in the art may also configure them flexibly according to actual needs, for example fully covering only some core areas of the port. This application does not impose strict limits.
In some embodiments, to give the images collected by the roadside cameras 1 a wider field of view, the cameras may be mounted on existing elevated equipment in the port area, such as tower cranes, tyre cranes, quay cranes, light poles, overhead cranes, reach stackers and mobile cranes, or on dedicated elevated roadside structures built in the port area for mounting the cameras. As shown in FIG. 2, a roadside camera mounted on a tower crane may be called a tower-crane CAM, one mounted on a light pole a light-pole CAM, and one mounted on an overhead crane an overhead-crane CAM.
In some embodiments, to facilitate stitching of the images captured by the roadside cameras 1, the image acquisition clocks of all roadside cameras 1 are synchronized, the camera parameters of the cameras are identical, and the collected images have the same size.
In some embodiments, the structure of the central control system 2 may be as shown in FIG. 3, comprising a communication unit 21, an image processing unit 22, a road area determination unit 23, an object detection and tracking unit 24 and a display unit 25, wherein:
the communication unit 21 is configured to receive the images collected by the roadside cameras;
the image processing unit 22 is configured to perform coordinate transformation and stitching on the received images to obtain a global bird's-eye-view image of the port area;
the road area determination unit 23 is configured to determine the road area in the global image;
the object detection and tracking unit 24 is configured to perform object detection and object tracking on the road area in the global image to obtain tracking results and categories of target objects;
the display unit 25 is configured to display the tracking results and categories of the target objects in the global image.
In some embodiments of the present invention, the central control system 2 may run on a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array) controller, a desktop computer, a mobile computer, a PAD, a single-chip microcomputer or similar devices.
In some embodiments, the communication unit 21 may send and receive information wirelessly, for example via an antenna. The image processing unit 22, road area determination unit 23 and object detection and tracking unit 24 may run on the processor (e.g. a CPU (Central Processing Unit)) of a DSP, FPGA controller, desktop computer, mobile computer, PAD or single-chip microcomputer; the display unit 25 may run on the display hardware (e.g. a GPU (Graphics Processing Unit)) of such a device.
In some embodiments of the present invention, the image processing unit 22 is specifically configured to: determine images with the same capture time among the received images as a group of images; perform coordinate transformation on each image in the group of images to obtain a group of bird's-eye-view images; and stitch the group of bird's-eye-view images in a preset stitching order to obtain one global image, the stitching order being derived from the spatial positional relationship between the roadside cameras.
As an example, suppose n roadside cameras 1 are disposed in the port area and are numbered CAM1, CAM2, CAM3, ..., CAMn according to their spatial adjacency, and the image stitching order is set to CAM1->CAM2->CAM3->...->CAMn according to the spatial positional relationship of the n roadside cameras 1. Starting at time t0, the images successively collected by CAM1 form image set 1, those collected by CAM2 form image set 2, ..., and those collected by CAMn form image set n, as shown in FIG. 4A, each image set containing k images. Images with the same capture time across the n image sets are determined as one group of images; as shown in FIG. 4B, the images within one dashed box form a group, yielding k groups, with each group producing one global image, i.e. k global images in total. Each image in each group is coordinate-transformed to obtain a group of bird's-eye-view images; FIG. 4C shows the bird's-eye-view images of four images captured at the same time by four roadside cameras of the port area, the four bird's-eye-view images forming one group. FIG. 4D shows a group of bird's-eye-view images stitched into one global image in the preset stitching order, and FIG. 4E shows the tracking results and categories of target objects in one global image, where the dashed boxes represent vehicle tracking results.
In one example, projecting an image onto the ground plane yields the bird's-eye-view image corresponding to that image. A specific implementation may be as follows:
first, a unified ground-plane coordinate system is established in advance;
second, for each roadside camera, the transformation between that camera's image-plane coordinate system and the ground-plane coordinate system is calibrated in advance. For example: the transformation between each roadside camera's camera coordinate system and the ground-plane coordinate system is calibrated in advance, manually or by computer; from this transformation (which is prior art) and the transformation between the camera's camera coordinate system and its image-plane coordinate system, the transformation between the camera's image-plane coordinate system and the ground-plane coordinate system is obtained;
finally, for an image captured by a roadside camera, every pixel of the image is projected into the ground-plane coordinate system according to the transformation between that camera's image-plane coordinate system and the ground-plane coordinate system, yielding the bird's-eye-view image corresponding to the image.
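For a planar ground scene, the image-plane-to-ground-plane transformation described above is commonly represented by a 3x3 homography. The sketch below illustrates that projection; the matrix values are invented for illustration (a pure translation), not a real calibration result.

```python
def apply_homography(H, u, v):
    """Map image pixel (u, v) to ground-plane coordinates with a 3x3 homography H."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w  # homogeneous -> Euclidean coordinates

def project_image(H, width, height):
    """Project every pixel of a width x height image into the ground plane,
    producing the point set that forms the bird's-eye-view image."""
    return {(u, v): apply_homography(H, u, v) for v in range(height) for u in range(width)}

# toy "calibration": shift the image 10 m east and 5 m north on the ground plane
H = [[1.0, 0.0, 10.0],
     [0.0, 1.0, 5.0],
     [0.0, 0.0, 1.0]]
ground = project_image(H, 2, 2)
```

In practice the homography would come from the camera calibration described in the text, and the projected points would be resampled onto a regular ground-plane pixel grid.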
In some embodiments of the present invention, the road area determination unit 23 may be implemented in, but not limited to, either of the following ways:
Way A1: superimpose the high-precision map corresponding to the port area on the global image to obtain the road area in the global image.
Way A2: perform semantic segmentation on the global image with a preset semantic segmentation algorithm to obtain the road area in the global image.
In Way A1, the high-precision map corresponding to the port area is an electronic map rendered by a map engine from the port area's high-precision map data, in which all roads of the port area are drawn (including road boundaries, lane lines, road directions, speed limits, turn information, etc.). Superimposing the high-precision map on the global image to obtain the road area of the global image may be implemented as follows: step 1) resize the global image to match the high-precision map (e.g. by stretching/scaling); step 2) manually mark several common reference points usable for alignment on the high-precision map and the global image (e.g. the four corners of the map, or certain road junctions), and superimpose the map on the global image via these reference points; step 3) manually draw the roads at the corresponding positions in the global image according to the roads on the high-precision map, to obtain the road area in the global image; alternatively, taking the image coordinate system of the global image as reference, project the road points that make up the roads on the high-precision map into that image coordinate system to obtain their coordinates, and label the pixels of the global image that coincide with those coordinates as road points, thereby obtaining the road area in the global image.
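The road-point labelling alternative in step 3 reduces to projecting each map road point into the global image's pixel grid and marking the coinciding pixels. A minimal sketch, assuming the map and global image are already registered so the projection is just a scale factor (a real system would use the reference-point alignment described above):

```python
def label_road_pixels(road_points, scale, width, height):
    """Project map road points (in metres) into image pixels and return the
    set of pixels labelled as road."""
    road_pixels = set()
    for x, y in road_points:
        u, v = int(round(x * scale)), int(round(y * scale))
        if 0 <= u < width and 0 <= v < height:  # keep only points inside the image
            road_pixels.add((u, v))
    return road_pixels

# map road points at 1 m spacing, global image at 1 pixel per metre
road = label_road_pixels([(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)], scale=1.0, width=4, height=4)
```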
In Way A2, the preset semantic segmentation algorithm may be a pre-trained semantic segmentation model capable of semantically segmenting an input image. The model may be obtained by iteratively training a neural network on pre-collected sample data. The sample data includes a number of road-containing images collected in the port area in advance, together with manual semantic annotations of those images. How to iteratively train a neural network on sample data to obtain a semantic segmentation model is existing technology; this application does not strictly limit it.
In some embodiments of the present invention, the object detection and tracking unit 24 may be implemented as follows: apply a preset object detection algorithm to the road area in the global image to obtain a detection result (the detection result includes the 2D bounding box and category of each target object; the category may be indicated by drawing the boxes in different colors (e.g. a green box meaning the object inside is a vehicle, a red box meaning a pedestrian), or by labelling the category near the box, e.g. text directly above or below the box); then apply a preset object tracking algorithm to the detection result of the current global image and the tracking result of the previous global image to obtain the tracking result and category for the current global image. In the embodiments, target object categories may include vehicles, pedestrians, and so on. The object detection algorithm may be an object detection model obtained by iteratively training a neural network on training data (a number of images containing target objects collected in the port area in advance, together with object detection annotations of those images); the object tracking algorithm may likewise be an object tracking model obtained by iteratively training a neural network on training data.
To further plan the driving paths of all self-driving vehicles in the port area globally and rationally, in some embodiments of the present invention the central control system 2 may further comprise a motion trajectory prediction unit 26 and a path optimization unit 27, as shown in FIG. 5, wherein:
the motion trajectory prediction unit 26 is configured to predict a motion trajectory for each target object according to the tracking results and categories of the target objects;
the path optimization unit 27 is configured to optimize the driving path of each self-driving vehicle according to the motion trajectories of the target objects;
the communication unit 21 is further configured to send each self-driving vehicle's optimized driving path to the corresponding self-driving vehicle.
In one example, the motion trajectory prediction unit 26 predicts each target object's motion trajectory as follows: determine the pose data of the target object by analyzing its tracking result and category; input the pose data into a preset motion model corresponding to the category of the target object to obtain the object's motion trajectory.
Of course, those skilled in the art may also predict a target object's motion trajectory by other alternative means. For example, the target object is equipped with a positioning unit (e.g. a GPS unit) and an inertial measurement unit (IMU), or other equipment capable of positioning and pose measurement; while moving, the target object generates pose data from the measurements of the positioning unit and the inertial measurement unit and sends the pose data to the motion trajectory prediction unit 26. The motion trajectory prediction unit 26 then predicts each target object's motion trajectory as follows: receive the pose data sent by the target object, and input the pose data into the preset motion model corresponding to the category of the target object to obtain the object's motion trajectory.
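As a deliberately simple stand-in for the category-specific motion model, the sketch below extrapolates a trajectory from pose data under a constant-velocity assumption. The per-category dispatch (different prediction horizons for vehicles and pedestrians) is an assumption made for illustration; the text does not prescribe any particular model.

```python
def predict_trajectory(pose, horizon, dt=1.0):
    """Constant-velocity motion model. pose = (x, y, vx, vy); returns the
    predicted position points over `horizon` time steps of length dt."""
    x, y, vx, vy = pose
    return [(x + vx * dt * k, y + vy * dt * k) for k in range(1, horizon + 1)]

MOTION_MODELS = {  # hypothetical per-category model dispatch
    "vehicle": lambda pose: predict_trajectory(pose, horizon=5),
    "pedestrian": lambda pose: predict_trajectory(pose, horizon=3),
}

# a vehicle at the origin moving east at 2 m/s
track = MOTION_MODELS["vehicle"]((0.0, 0.0, 2.0, 0.0))
```

The resulting list of position points is exactly the trajectory representation used by the overlap check described below in the text.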
In some embodiments of the present invention, the autonomous driving control device periodically or in real time synchronizes to the central control system 2 the estimated driving trajectory of the self-driving vehicle it controls (the device estimates this trajectory from the vehicle's historical driving trajectory and the pose information fed back by the IMU sensor on the vehicle; how the estimation is done is prior art and is not an inventive point of this technical solution). The path optimization unit 27 is specifically configured to:
for each self-driving vehicle, compare the estimated driving trajectory sent by that self-driving vehicle with the motion trajectory of each target object; if they overlap (fully or partially), optimize the driving path of the self-driving vehicle so that the optimized path does not overlap with any target object's motion trajectory; if they do not overlap, do not optimize the driving path of the self-driving vehicle.
In some embodiments, a self-driving vehicle's estimated driving trajectory consists of a number of position points, as does each target object's motion trajectory. If more than n position points (n being a preset natural number greater than or equal to 1, whose value can be set flexibly according to actual needs and is not strictly limited here) coincide between the vehicle's estimated trajectory and a target object's motion trajectory, the two trajectories are considered to overlap.
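The n-coincident-points rule above can be sketched directly: count the position points shared by the vehicle's estimated trajectory and each target object's trajectory, and flag a replan when the count exceeds the threshold. The grid quantisation used to decide that two points coincide is an assumption for illustration.

```python
def count_coincident(traj_a, traj_b, cell=1.0):
    """Count position points of traj_a that coincide with points of traj_b,
    quantised to a grid of size `cell` so nearby points count as coincident."""
    snap = lambda p: (round(p[0] / cell), round(p[1] / cell))
    cells_b = {snap(p) for p in traj_b}
    return sum(1 for p in traj_a if snap(p) in cells_b)

def needs_replan(vehicle_traj, object_trajs, n=1):
    """Trajectories overlap when more than n position points coincide."""
    return any(count_coincident(vehicle_traj, t) > n for t in object_trajs)

vehicle = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
pedestrian = [(2.0, 2.0), (2.0, 1.0), (2.0, 0.0)]  # crosses the vehicle path at one point
```

With n = 0 the single crossing point triggers a replan; with n = 1 it does not, showing how the threshold trades off sensitivity against false alarms.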
In some embodiments of the present invention, to improve communication success rate and quality, the system described with FIG. 5 further comprises roadside V2X (vehicle-to-everything) devices disposed in the port area and autonomous driving control devices mounted on the self-driving vehicles; the central control system 2 is provided with a first V2X device and each autonomous driving control device with a second V2X device, as shown in FIG. 6, wherein:
the communication unit 21 is specifically configured to send each self-driving vehicle's optimized driving path to the first V2X device, which sends each self-driving vehicle's optimized driving path to the roadside V2X devices;
the roadside V2X device is configured to broadcast the optimized driving paths received from the first V2X device, and the second V2X device on each self-driving vehicle receives the optimized driving path corresponding to that self-driving vehicle.
In some embodiments, the roadside V2X devices may follow the full-coverage principle, i.e. they enable communication between the central control system and self-driving vehicles anywhere in the port area. The first V2X device of the central control system packs a vehicle's optimized driving path into a V2X message and broadcasts it; when a roadside V2X device receives the V2X message, it rebroadcasts it; and the second V2X device receives the V2X message corresponding to its own self-driving vehicle.
The communication unit 21 may pack a self-driving vehicle's optimized driving path into a TCP/UDP (Transmission Control Protocol / User Datagram Protocol) packet and transmit it to the first V2X device (e.g. carrying the driving path as the TCP/UDP payload); the first V2X device parses the received TCP/UDP packet to recover the optimized driving path, packs the parsed path into a V2X message and broadcasts it; a roadside V2X device rebroadcasts the V2X message it receives; the second V2X device receives the V2X message for its corresponding self-driving vehicle, parses it to obtain the vehicle's optimized driving path, and packs the path into a TCP/UDP packet sent to the vehicle's autonomous driving control device, as shown in FIG. 7. Both the TCP/UDP packets and the V2X messages carry the identity of the self-driving vehicle, declaring which self-driving vehicle the contained optimized driving path belongs to. The first V2X device may communicate with the communication unit 21 of the central control system 2 over Ethernet, USB (Universal Serial Bus) or a serial port; the second V2X device may likewise communicate with the autonomous driving control device over Ethernet, USB or a serial port.
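The payload handling described above (a driving path carried in the packet body, tagged with the vehicle's identity so receivers can filter) can be sketched with a JSON payload. The field names are invented for illustration; the text does not specify the actual V2X or TCP/UDP message layout.

```python
import json

def pack_path(vehicle_id, path):
    """Pack an optimized driving path into a byte payload tagged with the
    vehicle identity, suitable as a TCP/UDP packet body."""
    return json.dumps({"vehicle_id": vehicle_id, "path": path}).encode("utf-8")

def unpack_path(payload):
    """Parse a received payload back into (vehicle_id, path)."""
    msg = json.loads(payload.decode("utf-8"))
    return msg["vehicle_id"], msg["path"]

payload = pack_path("truck-07", [[0.0, 0.0], [1.5, 0.2], [3.0, 0.4]])
vid, path = unpack_path(payload)  # a second V2X device would filter on vid
```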
Embodiment 2
Based on the same inventive concept as Embodiment 1, Embodiment 2 of the present invention further provides a central control system, whose structure may be as shown in FIG. 3 or FIG. 5 and is not repeated here.
Embodiment 3
Based on the same inventive concept as Embodiment 1, Embodiment 3 of the present invention further provides a central control system.
FIG. 8 shows the structure of the central control system provided by this embodiment of the application, comprising a processor 81 and at least one memory 82, the at least one memory 82 containing at least one machine-executable instruction, and the processor 81 executing the at least one machine-executable instruction to perform:
receiving the images collected by the roadside cameras; performing coordinate transformation and stitching on the received images to obtain a global bird's-eye-view image of the port area; determining the road area in the global image; performing object detection and object tracking on the road area in the global image to obtain tracking results and categories of target objects; displaying the tracking results and categories of the target objects in the global image.
In some embodiments, the processor 81 executes the at least one machine-executable instruction to perform coordinate transformation and stitching on the received images to obtain a global bird's-eye-view image of the port area, including: determining images with the same capture time among the received images as a group of images; performing coordinate transformation on each image in the group of images to obtain a group of bird's-eye-view images; stitching the group of bird's-eye-view images in a preset stitching order to obtain one global image, the stitching order being derived from the spatial positional relationship between the roadside cameras.
In some embodiments, the processor 81 executes the at least one machine-executable instruction to perform determining the road area in the global image, including: superimposing the high-precision map corresponding to the port area on the global image to obtain the road area in the global image; or performing semantic segmentation on the global image with a preset semantic segmentation algorithm to obtain the road area in the global image.
In some embodiments, the processor 81 executes the at least one machine-executable instruction to further perform: predicting a motion trajectory for each target object according to the tracking results and categories of the target objects; optimizing the driving path of each self-driving vehicle according to the motion trajectories of the target objects; sending each self-driving vehicle's optimized driving path to the corresponding self-driving vehicle.
In some embodiments, the processor 81 executes the at least one machine-executable instruction to perform optimizing the driving path of each self-driving vehicle according to the motion trajectories of the target objects, including: for each self-driving vehicle, comparing the estimated driving trajectory sent by that self-driving vehicle with the motion trajectory of each target object; if they overlap, optimizing the driving path of the self-driving vehicle so that the optimized path does not overlap with any target object's motion trajectory; if they do not overlap, not optimizing the driving path of the self-driving vehicle.
Embodiment 4
Based on the same inventive concept as Embodiment 1, Embodiment 4 of the present invention provides a port area monitoring method, whose flow is shown in FIG. 9. The port area monitoring method can run in the aforementioned central control system 2 and includes:
Step 101: Receive images collected by the roadside cameras disposed in the port area;
Step 102: Perform coordinate transformation and stitching on the received images to obtain a global bird's-eye-view image of the port area;
Step 103: Determine the road area in the global image;
Step 104: Perform object detection and object tracking on the road area in the global image to obtain tracking results and categories of target objects;
Step 105: Display the tracking results and categories of the target objects in the global image.
In some embodiments of the present invention, step 102 can be specifically implemented by the process shown in FIG. 10:
Step 102A: Determine images with the same capture time among the received images as a group of images;
Step 102B: Perform coordinate transformation on each image in the group of images to obtain a group of bird's-eye-view images;
Step 102C: Stitch the group of bird's-eye-view images in a preset stitching order to obtain one global image, the stitching order being derived from the spatial positional relationship between the roadside cameras.
In some embodiments of the present invention, step 103 may be specifically implemented as follows: superimpose the high-precision map corresponding to the port area on the global image to obtain the road area in the global image (see way A1 in Embodiment 1, not repeated here); or perform semantic segmentation on the global image with a preset semantic segmentation algorithm to obtain the road area in the global image (see way A2 in Embodiment 1, not repeated here).
The methods shown in FIG. 9 and FIG. 10 may further include steps 106 to 108; FIG. 11 shows the flow of FIG. 9 extended with steps 106 to 108, wherein:
Step 106: Predict a motion trajectory for each target object according to the tracking results and categories of the target objects;
Step 107: Optimize the driving path of each self-driving vehicle according to the motion trajectories of the target objects;
Step 108: Send the optimized driving path to the corresponding self-driving vehicle.
In some embodiments, step 107 can be specifically implemented as follows:
for each self-driving vehicle, compare the estimated driving trajectory sent by that self-driving vehicle with the motion trajectory of each target object; if they overlap, optimize the driving path of the self-driving vehicle so that the optimized path does not overlap with any target object's motion trajectory; if they do not overlap, do not optimize the driving path of the self-driving vehicle.
In some embodiments, step 108 can be specifically implemented as follows: send the optimized driving path to the corresponding self-driving vehicle via V2X communication.
The basic principles of the present invention have been described above with reference to specific embodiments. It should be noted, however, that a person of ordinary skill in the art will understand that all or any of the steps or components of the method and apparatus of the invention can be implemented in hardware, firmware, software or a combination thereof in any computing device (including a processor, a storage medium, etc.) or network of computing devices, which a person of ordinary skill in the art can achieve with basic programming skills after reading the description of the invention.
A person of ordinary skill in the art will understand that all or part of the steps of the above method embodiments can be completed by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, performs one of, or a combination of, the steps of the method embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist physically on its own, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If implemented as a software functional module and sold or used as an independent product, the integrated module may also be stored in a computer-readable storage medium.
Those skilled in the art will appreciate that embodiments of the present invention may be provided as a method, a system or a computer program product. Accordingly, the invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the invention may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems) and computer program products according to embodiments of the invention. It should be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, such that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
Although the above embodiments of the present invention have been described, those skilled in the art, once apprised of the basic inventive concept, can make additional changes and modifications to these embodiments. The appended claims are therefore intended to be interpreted as including the above embodiments and all changes and modifications falling within the scope of the invention.
Obviously, those skilled in the art can make various changes and variations to the present invention without departing from its spirit and scope. If such modifications and variations fall within the scope of the claims of the invention and their technical equivalents, the invention is intended to include them as well.

Claims (23)

  1. A port area monitoring method, characterized by comprising:
    receiving images collected by roadside cameras disposed in a port area;
    performing coordinate transformation and stitching on the received images to obtain a global bird's-eye-view image of the port area;
    determining a road area in the global image;
    performing object detection and object tracking on the road area in the global image to obtain tracking results and categories of target objects;
    displaying the tracking results and categories of the target objects in the global image.
  2. The method according to claim 1, characterized in that performing coordinate transformation and stitching on the received images to obtain a global bird's-eye-view image of the port area specifically comprises:
    determining images with the same capture time among the received images as a group of images;
    performing coordinate transformation on each image in the group of images to obtain a group of bird's-eye-view images;
    stitching the group of bird's-eye-view images in a preset stitching order to obtain one global image, the stitching order being derived from the spatial positional relationship between the roadside cameras.
  3. The method according to claim 1, characterized in that determining the road area in the global image specifically comprises:
    superimposing a high-precision map corresponding to the port area on the global image to obtain the road area in the global image;
    or, performing semantic segmentation on the global image with a preset semantic segmentation algorithm to obtain the road area in the global image.
  4. The method according to claim 1, characterized in that the method further comprises:
    predicting a motion trajectory for each target object according to the tracking results and categories of the target objects;
    optimizing the driving path of a self-driving vehicle according to the motion trajectories of the target objects;
    sending the optimized driving path to the corresponding self-driving vehicle.
  5. The method according to claim 4, characterized in that optimizing the driving path of a self-driving vehicle according to the motion trajectories of the target objects specifically comprises:
    for each self-driving vehicle, comparing the estimated driving trajectory sent by that self-driving vehicle with the motion trajectory of each target object; if they overlap, optimizing the driving path of the self-driving vehicle so that the optimized path does not overlap with any target object's motion trajectory; if they do not overlap, not optimizing the driving path of the self-driving vehicle.
  6. The method according to claim 4, characterized in that sending the optimized driving path to the corresponding self-driving vehicle specifically comprises:
    sending the optimized driving path to the corresponding self-driving vehicle via V2X communication.
  7. A port area monitoring system, characterized by comprising roadside cameras disposed in a port area and a central control system, wherein:
    the roadside cameras are configured to collect images and send the images to the central control system;
    the central control system is configured to receive the images collected by the roadside cameras; perform coordinate transformation and stitching on the received images to obtain a global bird's-eye-view image of the port area; determine a road area in the global image; perform object detection and object tracking on the road area in the global image to obtain tracking results and categories of target objects; and display the tracking results and categories of the target objects in the global image.
  8. The system according to claim 7, characterized in that the central control system comprises:
    a communication unit configured to receive the images collected by the roadside cameras;
    an image processing unit configured to perform coordinate transformation and stitching on the received images to obtain a global bird's-eye-view image of the port area;
    a road area determination unit configured to determine the road area in the global image;
    an object detection and tracking unit configured to perform object detection and object tracking on the road area in the global image to obtain tracking results and categories of target objects;
    a display unit configured to display the tracking results and categories of the target objects in the global image.
  9. The system according to claim 8, characterized in that the image processing unit is specifically configured to:
    determine images with the same capture time among the received images as a group of images;
    perform coordinate transformation on each image in the group of images to obtain a group of bird's-eye-view images;
    stitch the group of bird's-eye-view images in a preset stitching order to obtain one global image, the stitching order being derived from the spatial positional relationship between the roadside cameras.
  10. The system according to claim 8, characterized in that the road area determination unit is specifically configured to:
    superimpose a high-precision map corresponding to the port area on the global image to obtain the road area in the global image;
    or, perform semantic segmentation on the global image with a preset semantic segmentation algorithm to obtain the road area in the global image.
  11. The system according to claim 8, characterized in that the central control system further comprises a motion trajectory prediction unit and a path optimization unit, wherein:
    the motion trajectory prediction unit is configured to predict a motion trajectory for each target object according to the tracking results and categories of the target objects;
    the path optimization unit is configured to optimize the driving path of each self-driving vehicle according to the motion trajectories of the target objects;
    the communication unit is further configured to send each self-driving vehicle's optimized driving path to the corresponding self-driving vehicle.
  12. The system according to claim 11, characterized in that the path optimization unit is specifically configured to:
    for each self-driving vehicle, compare the estimated driving trajectory sent by that self-driving vehicle with the motion trajectory of each target object; if they overlap, optimize the driving path of the self-driving vehicle so that the optimized path does not overlap with any target object's motion trajectory; if they do not overlap, not optimize the driving path of the self-driving vehicle.
  13. The system according to claim 11, characterized in that the system further comprises roadside V2X devices disposed in the port area and autonomous driving control devices mounted on the self-driving vehicles; and the central control system is provided with a first V2X device and each autonomous driving control device is provided with a second V2X device;
    the communication unit is specifically configured to send each self-driving vehicle's optimized driving path to the first V2X device, which sends each self-driving vehicle's optimized driving path to the roadside V2X devices;
    the roadside V2X device is configured to broadcast the optimized driving paths received from the first V2X device, and the second V2X device on each self-driving vehicle receives the optimized driving path corresponding to that self-driving vehicle.
  14. A central control system, characterized by comprising:
    a communication unit configured to receive images collected by roadside cameras;
    an image processing unit configured to perform coordinate transformation and stitching on the received images to obtain a global bird's-eye-view image of a port area;
    a road area determination unit configured to determine a road area in the global image;
    an object detection and tracking unit configured to perform object detection and object tracking on the road area in the global image to obtain tracking results and categories of target objects;
    a display unit configured to display the tracking results and categories of the target objects in the global image.
  15. The central control system according to claim 14, characterized in that the image processing unit is specifically configured to:
    determine images with the same capture time among the received images as a group of images;
    perform coordinate transformation on each image in the group of images to obtain a group of bird's-eye-view images;
    stitch the group of bird's-eye-view images in a preset stitching order to obtain one global image, the stitching order being derived from the spatial positional relationship between the roadside cameras.
  16. The central control system according to claim 14, characterized in that the road area determination unit is specifically configured to:
    superimpose a high-precision map corresponding to the port area on the global image to obtain the road area in the global image;
    or, perform semantic segmentation on the global image with a preset semantic segmentation algorithm to obtain the road area in the global image.
  17. The central control system according to claim 14, characterized by further comprising a motion trajectory prediction unit and a path optimization unit, wherein:
    the motion trajectory prediction unit is configured to predict a motion trajectory for each target object according to the tracking results and categories of the target objects;
    the path optimization unit is configured to optimize the driving path of each self-driving vehicle according to the motion trajectories of the target objects;
    the communication unit is further configured to send each self-driving vehicle's optimized driving path to the corresponding self-driving vehicle.
  18. The central control system according to claim 17, characterized in that the path optimization unit is specifically configured to:
    for each self-driving vehicle, compare the estimated driving trajectory sent by that self-driving vehicle with the motion trajectory of each target object; if they overlap, optimize the driving path of the self-driving vehicle so that the optimized path does not overlap with any target object's motion trajectory; if they do not overlap, not optimize the driving path of the self-driving vehicle.
  19. A central control system, characterized by comprising a processor and at least one memory, the at least one memory containing at least one machine-executable instruction, and the processor executing the at least one machine-executable instruction to perform:
    receiving images collected by roadside cameras; performing coordinate transformation and stitching on the received images to obtain a global bird's-eye-view image of a port area; determining a road area in the global image; performing object detection and object tracking on the road area in the global image to obtain tracking results and categories of target objects; displaying the tracking results and categories of the target objects in the global image.
  20. The central control system according to claim 19, characterized in that the processor executes the at least one machine-executable instruction to perform coordinate transformation and stitching on the received images to obtain a global bird's-eye-view image of the port area, including:
    determining images with the same capture time among the received images as a group of images;
    performing coordinate transformation on each image in the group of images to obtain a group of bird's-eye-view images;
    stitching the group of bird's-eye-view images in a preset stitching order to obtain one global image, the stitching order being derived from the spatial positional relationship between the roadside cameras.
  21. The central control system according to claim 19, characterized in that the processor executes the at least one machine-executable instruction to perform determining the road area in the global image, including:
    superimposing a high-precision map corresponding to the port area on the global image to obtain the road area in the global image;
    or, performing semantic segmentation on the global image with a preset semantic segmentation algorithm to obtain the road area in the global image.
  22. The central control system according to claim 19, characterized in that the processor executes the at least one machine-executable instruction to further perform:
    predicting a motion trajectory for each target object according to the tracking results and categories of the target objects;
    optimizing the driving path of each self-driving vehicle according to the motion trajectories of the target objects;
    sending each self-driving vehicle's optimized driving path to the corresponding self-driving vehicle.
  23. The central control system according to claim 22, characterized in that the processor executes the at least one machine-executable instruction to perform optimizing the driving path of each self-driving vehicle according to the motion trajectories of the target objects, including:
    for each self-driving vehicle, comparing the estimated driving trajectory sent by that self-driving vehicle with the motion trajectory of each target object; if they overlap, optimizing the driving path of the self-driving vehicle so that the optimized path does not overlap with any target object's motion trajectory; if they do not overlap, not optimizing the driving path of the self-driving vehicle.
PCT/CN2018/105474 2018-02-24 2018-09-13 一种港区监控方法及系统、中控系统 WO2019161663A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP18907348.9A EP3757866A4 (en) 2018-02-24 2018-09-13 PORT AREA SURVEILLANCE PROCESS AND SYSTEM, AND CENTRAL CONTROL SYSTEM
AU2018410435A AU2018410435B2 (en) 2018-02-24 2018-09-13 Port area monitoring method and system, and central control system
US17/001,082 US20210073539A1 (en) 2018-02-24 2020-08-24 Port area monitoring method and system and central control system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810157700.XA CN110197097B (zh) 2018-02-24 2018-02-24 一种港区监控方法及系统、中控系统
CN201810157700.X 2018-02-24

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/001,082 Continuation US20210073539A1 (en) 2018-02-24 2020-08-24 Port area monitoring method and system and central control system

Publications (1)

Publication Number Publication Date
WO2019161663A1 true WO2019161663A1 (zh) 2019-08-29

Family

ID=67687914

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/105474 WO2019161663A1 (zh) 2018-02-24 2018-09-13 一种港区监控方法及系统、中控系统

Country Status (5)

Country Link
US (1) US20210073539A1 (zh)
EP (1) EP3757866A4 (zh)
CN (1) CN110197097B (zh)
AU (1) AU2018410435B2 (zh)
WO (1) WO2019161663A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114067556A (zh) * 2020-08-05 2022-02-18 北京万集科技股份有限公司 环境感知方法、装置、服务器和可读存储介质
JP7185740B1 (ja) 2021-08-30 2022-12-07 三菱電機インフォメーションシステムズ株式会社 領域特定装置、領域特定方法及び領域特定プログラム

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112866578B (zh) * 2021-02-03 2023-04-07 四川新视创伟超高清科技有限公司 基于8k视频画面全局到局部的双向可视化及目标跟踪系统及方法
EP4459563A2 (en) 2021-07-02 2024-11-06 Fujitsu Technology Solutions GmbH Ai based monitoring of race tracks
CN114598823B (zh) * 2022-03-11 2024-06-14 北京字跳网络技术有限公司 特效视频生成方法、装置、电子设备及存储介质
CN114820700B (zh) * 2022-04-06 2023-05-16 北京百度网讯科技有限公司 对象跟踪方法及装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140267734A1 (en) * 2013-03-14 2014-09-18 John Felix Hart, JR. System and Method for Monitoring Vehicle Traffic and Controlling Traffic Signals
CN105208323A (zh) * 2015-07-31 2015-12-30 深圳英飞拓科技股份有限公司 一种全景拼接画面监控方法及装置
CN105407278A (zh) * 2015-11-10 2016-03-16 北京天睿空间科技股份有限公司 一种全景视频交通态势监控系统及方法
CN106652448A (zh) * 2016-12-13 2017-05-10 山姆帮你(天津)信息科技有限公司 基于视频处理技术的公路交通状态监测系统
CN107122765A (zh) * 2017-05-22 2017-09-01 成都通甲优博科技有限责任公司 一种高速公路服务区全景监控方法及系统

Family Cites Families (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100373394C (zh) * 2005-10-28 2008-03-05 南京航空航天大学 基于仿生复眼的运动目标的检测方法
CN1897015A (zh) * 2006-05-18 2007-01-17 王海燕 基于机器视觉的车辆检测和跟踪方法及系统
CN102096803B (zh) * 2010-11-29 2013-11-13 吉林大学 基于机器视觉的行人安全状态识别系统
CN102164269A (zh) * 2011-01-21 2011-08-24 北京中星微电子有限公司 全景监控方法及装置
KR101338554B1 (ko) * 2012-06-12 2013-12-06 현대자동차주식회사 V2x 통신을 위한 전력 제어 장치 및 방법
CN103017753B (zh) * 2012-11-01 2015-07-15 中国兵器科学研究院 一种无人机航路规划方法及装置
CN103236160B (zh) * 2013-04-07 2015-03-18 水木路拓科技(北京)有限公司 基于视频图像处理技术的路网交通状态监测系统
CN103473659A (zh) * 2013-08-27 2013-12-25 西北工业大学 配送车辆端实时状态信息驱动的物流任务动态优化分配方法
US9407881B2 (en) * 2014-04-10 2016-08-02 Smartvue Corporation Systems and methods for automated cloud-based analytics for surveillance systems with unmanned aerial devices
CN103955920B (zh) * 2014-04-14 2017-04-12 桂林电子科技大学 基于三维点云分割的双目视觉障碍物检测方法
US9747505B2 (en) * 2014-07-07 2017-08-29 Here Global B.V. Lane level traffic
CN104410838A (zh) * 2014-12-15 2015-03-11 成都鼎智汇科技有限公司 一种分布式视频监视系统
CN104483970B (zh) * 2014-12-20 2017-06-27 徐嘉荫 一种基于卫星定位系统和移动通信网络的控制无人驾驶系统航行的方法
US9681046B2 (en) * 2015-06-30 2017-06-13 Gopro, Inc. Image stitching in a multi-camera array
EP3141926B1 (en) * 2015-09-10 2018-04-04 Continental Automotive GmbH Automated detection of hazardous drifting vehicles by vehicle sensors
WO2017045116A1 (en) * 2015-09-15 2017-03-23 SZ DJI Technology Co., Ltd. System and method for supporting smooth target following
US9910441B2 (en) * 2015-11-04 2018-03-06 Zoox, Inc. Adaptive autonomous vehicle planner logic
JP6520740B2 (ja) * 2016-02-01 2019-05-29 トヨタ自動車株式会社 物体検出方法、物体検出装置、およびプログラム
CN108292141B (zh) * 2016-03-01 2022-07-01 深圳市大疆创新科技有限公司 用于目标跟踪的方法和系统
JP6595401B2 (ja) * 2016-04-26 2019-10-23 株式会社Soken 表示制御装置
CN107343165A (zh) * 2016-04-29 2017-11-10 杭州海康威视数字技术股份有限公司 一种监控方法、设备及系统
CN105844964A (zh) * 2016-05-05 2016-08-10 深圳市元征科技股份有限公司 一种车辆安全驾驶预警方法及装置
EP3244344A1 (en) * 2016-05-13 2017-11-15 DOS Group S.A. Ground object tracking system
CN106441319B (zh) * 2016-09-23 2019-07-16 中国科学院合肥物质科学研究院 一种无人驾驶车辆车道级导航地图的生成系统及方法
CN107045782A (zh) * 2017-03-05 2017-08-15 赵莉莉 智能交通管控系统差异化调配路线的实现方法
CN106997466B (zh) * 2017-04-12 2021-05-04 百度在线网络技术(北京)有限公司 用于检测道路的方法和装置
CN107226087B (zh) * 2017-05-26 2019-03-26 西安电子科技大学 一种结构化道路自动驾驶运输车及控制方法
US20180307245A1 (en) * 2017-05-31 2018-10-25 Muhammad Zain Khawaja Autonomous Vehicle Corridor
CN107316006A (zh) * 2017-06-07 2017-11-03 北京京东尚科信息技术有限公司 一种道路障碍物检测的方法和系统
CN107341445A (zh) * 2017-06-07 2017-11-10 武汉大千信息技术有限公司 监控场景下行人目标的全景描述方法及系统

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140267734A1 (en) * 2013-03-14 2014-09-18 John Felix Hart, JR. System and Method for Monitoring Vehicle Traffic and Controlling Traffic Signals
CN105208323A (zh) * 2015-07-31 2015-12-30 深圳英飞拓科技股份有限公司 一种全景拼接画面监控方法及装置
CN105407278A (zh) * 2015-11-10 2016-03-16 北京天睿空间科技股份有限公司 一种全景视频交通态势监控系统及方法
CN106652448A (zh) * 2016-12-13 2017-05-10 山姆帮你(天津)信息科技有限公司 基于视频处理技术的公路交通状态监测系统
CN107122765A (zh) * 2017-05-22 2017-09-01 成都通甲优博科技有限责任公司 一种高速公路服务区全景监控方法及系统

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3757866A4

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114067556A (zh) * 2020-08-05 2022-02-18 北京万集科技股份有限公司 环境感知方法、装置、服务器和可读存储介质
CN114067556B (zh) * 2020-08-05 2023-03-14 北京万集科技股份有限公司 环境感知方法、装置、服务器和可读存储介质
JP7185740B1 (ja) 2021-08-30 2022-12-07 三菱電機インフォメーションシステムズ株式会社 領域特定装置、領域特定方法及び領域特定プログラム
JP2023034184A (ja) * 2021-08-30 2023-03-13 三菱電機インフォメーションシステムズ株式会社 領域特定装置、領域特定方法及び領域特定プログラム

Also Published As

Publication number Publication date
EP3757866A4 (en) 2021-11-10
CN110197097B (zh) 2024-04-19
EP3757866A1 (en) 2020-12-30
CN110197097A (zh) 2019-09-03
AU2018410435B2 (en) 2024-02-29
US20210073539A1 (en) 2021-03-11
AU2018410435A1 (en) 2020-10-15

Similar Documents

Publication Publication Date Title
WO2019161663A1 (zh) 一种港区监控方法及系统、中控系统
EP3967972A1 (en) Positioning method, apparatus, and device, and computer-readable storage medium
US11676307B2 (en) Online sensor calibration for autonomous vehicles
US11386672B2 (en) Need-sensitive image and location capture system and method
US11721225B2 (en) Techniques for sharing mapping data between an unmanned aerial vehicle and a ground vehicle
US20210131821A1 (en) Techniques for collaborative map construction between an unmanned aerial vehicle and a ground vehicle
CN111046762A (zh) 一种对象定位方法、装置电子设备及存储介质
CN104217439A (zh) 一种室内视觉定位系统及方法
CN106650705A (zh) 区域标注方法、装置和电子设备
EP3552388B1 (en) Feature recognition assisted super-resolution method
US20210003683A1 (en) Interactive sensor calibration for autonomous vehicles
JP7278414B2 (ja) 交通道路用のデジタル復元方法、装置及びシステム
CN111353453B (zh) 用于车辆的障碍物检测方法和装置
WO2022262327A1 (zh) 交通信号灯检测
Wang et al. Quadrotor-enabled autonomous parking occupancy detection
WO2022099482A1 (zh) 曝光控制方法、装置、可移动平台及计算机可读存储介质
US20220309693A1 (en) Adversarial Approach to Usage of Lidar Supervision to Image Depth Estimation
Kotze et al. Reconfigurable navigation of an Automatic Guided Vehicle utilising omnivision
CN117746426A (zh) 基于高精度地图的图像标签自动生成方法及系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18907348

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2018410435

Country of ref document: AU

WWE Wipo information: entry into national phase

Ref document number: 2018907348

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2018907348

Country of ref document: EP

Effective date: 20200924

ENP Entry into the national phase

Ref document number: 2018410435

Country of ref document: AU

Date of ref document: 20180913

Kind code of ref document: A