WO2019161663A1 - Port area monitoring method and system, and central control system - Google Patents
Port area monitoring method and system, and central control system
- Publication number
- WO2019161663A1 (application PCT/CN2018/105474)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- self-driving vehicle
- image
- target object
- global image
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims abstract description 33
- 238000012544 monitoring process Methods 0.000 title claims abstract description 20
- 238000001514 detection method Methods 0.000 claims abstract description 28
- 230000009466 transformation Effects 0.000 claims abstract description 13
- 230000033001 locomotion Effects 0.000 claims description 51
- 238000004891 communication Methods 0.000 claims description 34
- 238000012545 processing Methods 0.000 claims description 18
- 230000011218 segmentation Effects 0.000 claims description 14
- 238000006243 chemical reaction Methods 0.000 claims description 13
- 238000005457 optimization Methods 0.000 claims description 12
- 238000005516 engineering process Methods 0.000 claims description 5
- 238000010586 diagram Methods 0.000 description 16
- 238000004590 computer program Methods 0.000 description 6
- 238000003860 storage Methods 0.000 description 6
- 238000005259 measurement Methods 0.000 description 5
- 238000012986 modification Methods 0.000 description 5
- 230000004048 modification Effects 0.000 description 5
- 238000003384 imaging method Methods 0.000 description 4
- 238000003062 neural network model Methods 0.000 description 4
- 238000012549 training Methods 0.000 description 4
- 230000006870 function Effects 0.000 description 3
- 238000005065 mining Methods 0.000 description 2
- 238000004458 analytical method Methods 0.000 description 1
- 230000005540 biological transmission Effects 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000018109 developmental process Effects 0.000 description 1
- 238000009826 distribution Methods 0.000 description 1
- 238000002372 labelling Methods 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 230000008569 process Effects 0.000 description 1
- 230000001360 synchronised effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/182—Network patterns, e.g. roads or rivers
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/12—Target-seeking control
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Definitions
- the invention relates to the field of automatic driving, in particular to a port area monitoring method, a port area monitoring system and a central control system.
- the present invention provides a port area monitoring method to solve the technical problem that the prior art cannot provide an intuitive and effective global view of target objects in a port area.
- a method for monitoring a port area includes: receiving images collected by roadside cameras disposed in the port area; performing coordinate transformation and stitching on the received images to obtain a global image of the port area from a God's-eye view; determining the road area in the global image; performing object detection and object tracking on the road area in the global image to obtain the tracking result and category of the target object; and displaying the tracking result and category of the target object in the global image.
- a port area monitoring system comprising a roadside camera and a central control system disposed in a port area, wherein:
- a roadside camera for collecting images and transmitting the images to the central control system
- a central control system for receiving the images collected by each roadside camera; performing coordinate transformation and stitching on the received images to obtain a global image of the port area from a God's-eye view; determining the road area in the global image; performing object detection and object tracking on the road area in the global image to obtain the tracking result and category of the target object; and displaying the tracking result and category of the target object in the global image.
- a third aspect provides a central control system, where the system includes:
- a communication unit configured to receive an image collected by each roadside camera
- an image processing unit configured to perform coordinate transformation and stitching on the received images to obtain a global image of the port area from a God's-eye view;
- a road area determining unit configured to determine a road area in the global image
- a target detection and tracking unit configured to perform object detection and object tracking on a road area in the global image, to obtain a tracking result and a category of the target object
- a display unit for displaying a tracking result and a category of the target object in a global image.
- the technical solution of the invention deploys a large number of roadside cameras in the port area and captures images of the port area through these cameras; the images collected by the roadside cameras are first coordinate-transformed and stitched into a global image of the port area from a God's-eye view. On the one hand, this global picture lets staff grasp the overall situation of the port area; on the other hand, the tracking results and categories of the target objects in the road area of the global image can be displayed in real time, so that staff can intuitively follow the movement of target objects of each category. The technical solution of the present invention therefore solves the technical problem that the prior art cannot provide an intuitive and effective global view of target objects in a port area.
- FIG. 1 is a schematic structural diagram of a port area monitoring system according to an embodiment of the present invention.
- FIG. 2 is a second schematic structural diagram of a port area monitoring system according to an embodiment of the present invention.
- FIG. 3 is a schematic structural diagram of a central control system according to an embodiment of the present invention.
- FIG. 4A is a schematic diagram of an image collected by a roadside camera according to an embodiment of the present invention.
- FIG. 4B is a schematic diagram of grouping images according to acquisition time according to an embodiment of the present invention.
- FIG. 4C is a schematic diagram of a set of bird's-eye view images according to an embodiment of the present invention.
- FIG. 4D is a schematic diagram of stitching a set of bird's-eye view images into a global image according to an embodiment of the present invention.
- FIG. 4E is a schematic diagram showing tracking results and categories of a target object in a global image according to an embodiment of the present invention.
- FIG. 5 is a second schematic structural diagram of a central control system according to an embodiment of the present invention.
- FIG. 6 is a third structural schematic diagram of a port area monitoring system according to an embodiment of the present invention.
- FIG. 7 is a schematic diagram of communication between a first V2X device, a roadside V2X device, and a second V2X device according to an embodiment of the present invention
- FIG. 8 is a third schematic structural diagram of a central control system according to an embodiment of the present invention.
- FIG. 9 is a flowchart of a method for monitoring a port area according to an embodiment of the present invention.
- FIG. 10 is a flowchart of performing coordinate transformation and stitching on received images to obtain a global image of the port area from a God's-eye view according to an embodiment of the present invention.
- FIG. 11 is a second flowchart of a method for monitoring a port area according to an embodiment of the present invention.
- the application scenario of the technical solution of the present invention is not limited to port areas (including seaport areas, highway port areas, etc.); it can also be applied to other scenarios such as mining areas, cargo distribution centers, large warehouses, and industrial parks. Porting the technical solution to other application scenarios requires no substantial changes: those skilled in the art need not exercise creative work or overcome any particular technical difficulty. Due to limited space, this application does not describe each such application in detail; the following description of the technical solution is based on the port area.
- FIG. 1 is a schematic structural diagram of a port area monitoring system according to an embodiment of the present invention.
- the system includes a roadside camera 1 and a central control system 2 disposed in a port area, wherein:
- the roadside camera 1 is configured to collect images and send the images to the central control system 2;
- the central control system 2 is configured to receive the images collected by each roadside camera 1; perform coordinate transformation and stitching on the received images to obtain a global image of the port area from a God's-eye view; determine the road area in the global image; perform object detection and object tracking on the road area in the global image to obtain the tracking result and category of the target object; and display the tracking result and category of the target object in the global image.
- the roadside cameras 1 can be deployed following a full-coverage principle of the port area, i.e., the images collected by the roadside cameras 1 cover the geographical area of the entire port area as far as possible; alternatively, more flexible settings can be used, such as fully covering only certain core areas of the port area. This application does not strictly limit this.
- in order to give the images captured by the roadside cameras 1 a larger field of view, a roadside camera 1 can be disposed on equipment of a certain height in the port area, such as a tower crane, a tire crane, a bridge crane, a light pole, a gantry crane, a front hoist, or the like, or on a dedicated roadside device of a certain height built in the port area for mounting the roadside camera 1.
- the roadside camera 1 disposed on the tower crane can be referred to as a tower crane CAM
- the roadside camera disposed on the pole is referred to as a light pole CAM
- the roadside camera disposed on the crane is called Crane CAM.
- the image acquisition clocks of all the roadside cameras 1 are synchronized, the camera parameters of each roadside camera 1 are the same, and the acquired images have the same size.
- the structure of the central control system 2 can be as shown in FIG. 3, including a communication unit 21, an image processing unit 22, a road area determining unit 23, a target detection and tracking unit 24, and a display unit 25, wherein:
- the communication unit 21 is configured to receive an image collected by each roadside camera
- the image processing unit 22 is configured to perform coordinate transformation and stitching on the received images to obtain a global image of the port area from a God's-eye view;
- a road area determining unit 23 configured to determine a road area in the global image
- the target detection and tracking unit 24 is configured to perform object detection and object tracking on the road area in the global image to obtain a tracking result and a category of the target object;
- the display unit 25 is configured to display the tracking result and the category of the target object in the global image.
- the central control system 2 can run on devices such as a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array) controller, a desktop computer, a mobile computer, a PAD, a microcontroller, and the like.
- the communication unit 21 can transmit and receive information wirelessly, for example via an antenna.
- the image processing unit 22, the road area determining unit 23, and the target detection and tracking unit 24 can run on a processor (for example, a CPU (Central Processing Unit)) of a DSP, an FPGA controller, a desktop computer, a mobile computer, a PAD, a microcontroller, or the like.
- the display unit 25 can run on the display component (for example, one driven by a GPU (Graphics Processing Unit)) of a DSP, an FPGA controller, a desktop computer, a mobile computer, a PAD, a microcontroller, or the like.
- the image processing unit 22 is specifically configured to: determine images with the same acquisition time among the received images as a group of images; perform coordinate transformation on each image in the group of images to obtain a group of bird's-eye view images; and stitch the group of bird's-eye view images according to a preset stitching order to obtain a global image, the stitching order being derived from the spatial positional relationship between the roadside cameras.
- for example, suppose there are n roadside cameras 1 in the port area, numbered CAM1, CAM2, CAM3, ..., CAMn according to the adjacency of their spatial positions, and the image stitching order is set according to this spatial positional relationship as CAM1->CAM2->CAM3->...->CAMn. With time t0 as the starting time, CAM1 sequentially collects images as image set 1, CAM2 sequentially collects images as image set 2, ..., and CAMn sequentially collects images as image set n, as shown in FIG. 4B; each image set contains k images. Images with the same acquisition time across the n image sets are determined as one group of images; in FIG. 4B, the images within one dotted frame constitute one group, yielding k groups of images, and each group generates one global image, so k global images are obtained.
- Each image in each group of images is coordinate-converted to obtain a set of bird's-eye view images.
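- as a minimal sketch of this grouping-and-stitching flow (in Python with OpenCV; the function names, the pixel-overwrite compositing, and the per-camera homographies are illustrative assumptions, not details fixed by this description):

```python
from collections import defaultdict

import cv2
import numpy as np

def group_by_timestamp(image_sets):
    """image_sets maps a camera id ('CAM1'...'CAMn') to a list of
    (timestamp, image) pairs; clocks are assumed synchronized, so images
    sharing a timestamp form one group."""
    groups = defaultdict(dict)
    for cam_id, frames in image_sets.items():
        for ts, img in frames:
            groups[ts][cam_id] = img
    return groups  # {timestamp: {cam_id: image}}

def build_global_image(group, homographies, stitch_order, canvas_size):
    """Warp each image of one group to the ground plane and paste it onto a
    shared canvas following the preset order CAM1 -> CAM2 -> ... -> CAMn."""
    width, height = canvas_size
    canvas = np.zeros((height, width, 3), dtype=np.uint8)
    for cam_id in stitch_order:
        birdseye = cv2.warpPerspective(group[cam_id], homographies[cam_id],
                                       (width, height))
        mask = birdseye.sum(axis=2) > 0  # keep only valid warped pixels
        canvas[mask] = birdseye[mask]
    return canvas
```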
- for example, FIG. 4C shows the bird's-eye view images obtained from the four images captured at the same time by four roadside cameras in the port area; these four bird's-eye view images form one set of bird's-eye view images.
- FIG. 4D shows a set of bird's-eye view images stitched according to the preset stitching order into a global image.
- FIG. 4E shows the tracking result and category of a target object in the global image, where a broken-line box indicates the tracking result of a vehicle.
- an image is projected onto the ground plane to obtain a bird's-eye view image corresponding to the image.
- the specific implementation can be as follows:
- the conversion relationship between the imaging plane coordinate system of the roadside camera and the ground plane coordinate system is obtained in advance.
- the conversion relationship between the camera coordinate system of each roadside camera and the ground plane coordinate system is determined in advance, manually or by computer; then, according to this conversion relationship and the conversion relationship between the camera coordinate system of the roadside camera and the imaging plane coordinate system of the roadside camera (both belonging to the prior art), the conversion relationship between the imaging plane coordinate system of the roadside camera and the ground plane coordinate system is obtained;
- each pixel point in the image captured by the roadside camera is projected onto the ground plane according to the conversion relationship between the imaging plane coordinate system of the roadside camera and the ground plane coordinate system, and the bird's-eye view image corresponding to the image is thereby obtained.
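- in practice this imaging-plane-to-ground-plane conversion can be represented as a homography; the sketch below (Python/OpenCV) estimates it from manually calibrated point pairs, where the pixel coordinates, the ground coordinates, and the 10-pixels-per-metre scale are all hypothetical calibration values:

```python
import cv2
import numpy as np

# Hypothetical calibration: four reference marks seen in one roadside
# camera's imaging plane, and the same marks in ground-plane coordinates
# (metres), scaled to global-image pixels.
image_pts = np.array([[412, 310], [1508, 298], [1730, 940], [205, 955]],
                     dtype=np.float32)
ground_pts = np.array([[0, 0], [30, 0], [30, 20], [0, 20]],
                      dtype=np.float32) * 10.0  # assumed 10 px per metre

# Homography encoding the imaging-plane -> ground-plane conversion.
H, _ = cv2.findHomography(image_pts, ground_pts)

frame = cv2.imread("cam1_frame.jpg")  # one image captured by the camera
birdseye = cv2.warpPerspective(frame, H, (300, 200))  # bird's-eye view image
```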
- the road area determining unit 23 may be specifically implemented by, but not limited to, any of the following methods:
- Method A1: superimpose the high-precision map corresponding to the port area with the global image to obtain the road area in the global image.
- Method A2: perform semantic segmentation on the global image using a preset semantic segmentation algorithm to obtain the road area in the global image.
- the high-precision map corresponding to the port area refers to an electronic map drawn by a map engine based on the high-precision map data of the port area, in which all roads in the port area are marked (including road boundary lines, lane lines, road directions, speed limits, turning information, and the like).
- in Method A1, superimposing the high-precision map corresponding to the port area with the global image to obtain the road area in the global image can be implemented as follows: step 1) adjust the size of the global image to be consistent with the high-precision map (e.g., by stretching/scaling); step 2) manually calibrate several common reference points usable for superposition on the high-precision map and the global image (for example, the four corner points of the high-precision map, or junction points of certain roads), and superimpose the high-precision map and the global image through these reference points; step 3) manually draw the roads at the corresponding positions in the global image according to the roads on the high-precision map, to obtain the road area in the global image; or
- alternatively, taking the image coordinate system of the global image as a reference, project the road points constituting the roads on the high-precision map into the image coordinate system to obtain the coordinate points of the road points in the image coordinate system; the region of the global image that coincides with these coordinate points is the road area.
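- a minimal sketch of this projection step (Python; the 3x3 map-to-image transform `T` is assumed to have been estimated beforehand from the manually calibrated reference points):

```python
import numpy as np

def project_road_points(road_pts_map, T, image_shape):
    """Rasterize high-precision-map road points into a boolean road mask over
    the global image. road_pts_map: (N, 2) points in map coordinates;
    T: 3x3 homogeneous map->image transform; image_shape: (H, W, ...)."""
    ones = np.ones((len(road_pts_map), 1))
    proj = (T @ np.hstack([road_pts_map, ones]).T).T
    proj = proj[:, :2] / proj[:, 2:3]  # homogeneous -> pixel coordinates
    xy = np.round(proj).astype(int)
    keep = ((0 <= xy[:, 0]) & (xy[:, 0] < image_shape[1]) &
            (0 <= xy[:, 1]) & (xy[:, 1] < image_shape[0]))
    mask = np.zeros(image_shape[:2], dtype=bool)
    mask[xy[keep, 1], xy[keep, 0]] = True  # rows are y, columns are x
    return mask
```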
- the preset semantic segmentation algorithm may be a pre-trained semantic segmentation model capable of semantically segmenting the input image.
- the semantic segmentation model can be iteratively trained on the neural network model based on the pre-collected sample data.
- the sample data includes a certain number of images containing roads, collected in advance in the port area, together with the results of manually labeling those images semantically. How to iteratively train the neural network model on the sample data to obtain the semantic segmentation model can be found in the prior art; this is not strictly limited here.
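- for Method A2, any per-pixel segmentation network fits the role described here; the sketch below (Python/PyTorch) uses an off-the-shelf torchvision model purely as a stand-in for the port-trained network, and the road class id is an assumed label from that training:

```python
import torch
import torchvision

# Stand-in for the port-trained semantic segmentation model described above.
model = torchvision.models.segmentation.deeplabv3_resnet50(weights="DEFAULT")
model.eval()

def road_mask(global_image_tensor, road_class_id):
    """global_image_tensor: (1, 3, H, W) normalized image tensor; returns a
    boolean (H, W) mask of pixels assigned to the road class."""
    with torch.no_grad():
        logits = model(global_image_tensor)["out"]  # (1, C, H, W) class scores
    return logits.argmax(dim=1)[0] == road_class_id
```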
- the target detection and tracking unit 24 may be specifically implemented as follows: using a preset object detection algorithm to perform object detection on a road region in a global image to obtain a detection result (the detection result includes a two-dimensional frame and a category of the target object)
- the class of the target object can be represented by setting the two-dimensional frame of the target object to a different color (for example, a green frame indicates that the target object in the frame is a vehicle, and a red frame indicates that the target object in the frame is a pedestrian, etc.)
- it is also possible to mark the category of the target object in the vicinity of its two-dimensional frame, for example by writing the category as text directly above or below the two-dimensional frame;
- a preset object tracking algorithm obtains the tracking result and category of the target object in the current global image according to the detection result of the current global image and the object tracking result of the previous-frame global image.
- the category of the target object may include a vehicle, a pedestrian, and the like.
- the object detection algorithm may be an object detection model obtained by iteratively training a neural network model on training data (including a certain number of images containing target objects collected in advance in the port area, together with the corresponding object detection annotations).
- the object tracking algorithm may be an object tracking model obtained by iteratively training the neural network model according to the training data.
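- a minimal sketch of rendering the tracking results onto the global image (Python/OpenCV; the track record layout follows the green-vehicle/red-pedestrian example above, while everything else is an illustrative assumption):

```python
import cv2

# Colors per category in BGR order: green for vehicles, red for pedestrians.
CATEGORY_COLORS = {"vehicle": (0, 255, 0), "pedestrian": (0, 0, 255)}

def draw_tracking_results(global_image, tracks):
    """tracks: list of dicts like {'id': 3, 'category': 'vehicle',
    'box': (x1, y1, x2, y2)} produced by the detection/tracking stage."""
    for t in tracks:
        x1, y1, x2, y2 = t["box"]
        color = CATEGORY_COLORS.get(t["category"], (255, 255, 255))
        cv2.rectangle(global_image, (x1, y1), (x2, y2), color, 2)
        cv2.putText(global_image, f"{t['category']} #{t['id']}",
                    (x1, y1 - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 1)
    return global_image
```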
- the central control system 2 may further include a motion trajectory prediction unit 26 and a path optimization unit 27, as shown in FIG. 5, wherein:
- the motion trajectory prediction unit 26 is configured to predict a motion trajectory corresponding to each target object according to the tracking result and the category of the target object;
- the path optimization unit 27 is configured to optimize a driving path of each autonomous driving vehicle according to a motion trajectory corresponding to each target object;
- the communication unit 21 is further configured to transmit the optimized driving path of each of the self-driving vehicles to the corresponding autonomous driving vehicle.
- when the motion trajectory prediction unit 26 predicts the motion trajectory corresponding to each target object, a specific implementation may be as follows: determine the attitude data of the target object by analyzing its tracking result and category; and input the attitude data of the target object into a preset motion model corresponding to the category of the target object, to obtain the motion trajectory corresponding to the target object.
- alternatively, if the target object is provided with a positioning unit (such as a GPS positioning unit) and an inertial measurement unit (IMU), the target object can generate its attitude data from the measurement results of the positioning unit and the inertial measurement unit, and send the attitude data to the motion trajectory prediction unit 26.
- in that case, when the motion trajectory prediction unit 26 predicts the motion trajectory corresponding to each target object, a specific implementation may be as follows: receive the attitude data sent by the target object, and input the attitude data of the target object into a preset motion model corresponding to the category of the target object, to obtain the motion trajectory corresponding to the target object.
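- as an illustrative sketch only (the preset per-category motion model itself is not specified here), a constant-velocity model in Python can stand in for the prediction step, extrapolating future position points from the last two tracked positions:

```python
import numpy as np

def predict_trajectory(positions, timestamps, horizon_s=5.0, step_s=0.5):
    """Constant-velocity stand-in for a per-category motion model.
    positions: sequence of at least two (x, y) points; timestamps: matching
    times in seconds; returns future position points over the horizon."""
    p0, p1 = np.asarray(positions[-2]), np.asarray(positions[-1])
    velocity = (p1 - p0) / (timestamps[-1] - timestamps[-2])
    steps = np.arange(step_s, horizon_s + step_s, step_s)
    return [tuple(p1 + velocity * s) for s in steps]
```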
- in addition, the automatic driving control device periodically or in real time sends the estimated travel trajectory of the self-driving vehicle in which it is located to the central control system 2 (the automatic driving control device predicts the estimated travel trajectory based on the historical trajectory of the self-driving vehicle and the attitude information fed back by the IMU sensor of the self-driving vehicle; how to estimate the trajectory can be found in the prior art and is not an inventive point of the present technical solution).
- the path optimization unit 27 is specifically configured to:
- for each self-driving vehicle, the estimated travel trajectory sent by the self-driving vehicle is compared with the motion trajectory corresponding to each target object; if coincidence occurs (whether full or partial), the driving path of the self-driving vehicle is optimized so that the optimized driving path does not coincide with the motion trajectory corresponding to any target object; if no coincidence occurs, the driving path of the self-driving vehicle is not optimized.
- the estimated travel trajectory of a self-driving vehicle is composed of a certain number of position points, as is the motion trajectory of each target object. If n or more position points are shared between the estimated travel trajectory of the self-driving vehicle and the motion trajectory of a target object (n is a preset natural number greater than or equal to 1; its value can be set flexibly according to actual needs and is not strictly limited by this application), the estimated travel trajectory of the self-driving vehicle is considered to coincide with the motion trajectory of that target object.
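- a minimal sketch of this coincidence test (Python); the distance tolerance `tol` is an added assumption for deciding when two position points count as the same point:

```python
def trajectories_coincide(ego_traj, target_traj, n=1, tol=0.5):
    """Return True if at least n position points of the vehicle's estimated
    travel trajectory lie within tol of some point of a target object's
    motion trajectory (points are (x, y) tuples in shared units)."""
    hits = 0
    for ex, ey in ego_traj:
        if any((ex - tx) ** 2 + (ey - ty) ** 2 <= tol ** 2
               for tx, ty in target_traj):
            hits += 1
            if hits >= n:
                return True
    return False
```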
- the system described in FIG. 5 may further include roadside V2X (i.e., vehicle-to-everything) devices disposed in the port area and an automatic driving control device disposed on each self-driving vehicle;
- the central control system 2 is provided with a first V2X device;
- the automatic driving control device is provided with a second V2X device, as shown in FIG. 6, wherein:
- the communication unit 21 is specifically configured to: send the optimized driving path of each self-driving vehicle to the first V2X device, and send, by the first V2X device, the optimized driving path of each self-driving vehicle to the roadside V2X device;
- a roadside V2X device for broadcasting the optimized travel path of a self-driving vehicle received from the first V2X device, so that the second V2X device on the self-driving vehicle receives the optimized travel path corresponding to that self-driving vehicle.
- the roadside V2X devices can be deployed following the full-coverage principle of the port area, that is, the roadside V2X devices enable communication between the self-driving vehicles and the central control system in all areas of the port area.
- the first V2X device of the central control system packs the optimized driving path corresponding to a self-driving vehicle into a V2X communication message and broadcasts it; when a roadside V2X device receives the V2X communication message, it rebroadcasts the message; the second V2X device receives the V2X communication message corresponding to the self-driving vehicle in which it is located.
- in a specific implementation, the communication unit 21 can package the optimized driving path of a self-driving vehicle into a TCP/UDP (Transmission Control Protocol / User Datagram Protocol) packet and send it to the first V2X device (for example, with the driving path as the payload of the TCP/UDP packet).
- the first V2X device parses the received TCP/UDP packet to obtain the optimized driving path, packs the parsed driving path into a V2X communication message, and broadcasts the V2X communication message; when a roadside V2X device receives the V2X communication message, it rebroadcasts the message; the second V2X device receives the V2X communication message corresponding to the self-driving vehicle in which it is located, parses the message to obtain the optimized driving path corresponding to that self-driving vehicle, packages the driving path into a TCP/UDP packet, and sends it to the automatic driving control device of the self-driving vehicle, as shown in FIG. 7.
- both the TCP/UDP packet and the V2X communication message carry identity information of the self-driving vehicle, declaring which self-driving vehicle the optimized driving path in the packet or message corresponds to.
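- a minimal sketch of the communication-unit side of this exchange (Python, UDP only; the JSON payload layout, the vehicle id, and the documentation address 192.0.2.10 are illustrative assumptions, since neither the encoding nor the addressing is fixed here):

```python
import json
import socket

def send_optimized_path(vehicle_id, path_points, v2x_addr=("192.0.2.10", 9000)):
    """Send the optimized driving path as the payload of a UDP datagram to the
    first V2X device; the vehicle identity travels with the payload so the
    receiving side can match the path to its self-driving vehicle."""
    payload = json.dumps({"vehicle_id": vehicle_id,
                          "path": path_points}).encode("utf-8")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(payload, v2x_addr)
    finally:
        sock.close()

# Example: a three-point optimized path for a hypothetical truck "AGV-07".
send_optimized_path("AGV-07", [(120.5, 30.2), (121.0, 30.4), (121.5, 30.6)])
```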
- the communication interface between the first V2X device and the communication unit 21 of the central control system 2 can communicate via Ethernet, USB (Universal Serial Bus) or serial port; the communication interface between the second V2X device and the automatic driving control device can be Communicate via Ethernet, USB or serial port.
- the second embodiment of the present invention further provides a central control system.
- the structure of the central control system can be as shown in FIG. 3 or FIG. 5, and details are not described herein again.
- the third embodiment of the present invention further provides a central control system.
- FIG. 8 shows a structure of a central control system provided by an embodiment of the present application, including a processor 81 and at least one memory 82; the at least one memory 82 stores at least one machine-executable instruction, and the processor 81 executes the at least one machine-executable instruction to: receive images collected by each roadside camera; perform coordinate transformation and stitching on the received images to obtain a global image of the port area from a God's-eye view; determine the road area in the global image; perform object detection and object tracking on the road area in the global image to obtain the tracking result and category of the target object; and display the tracking result and category of the target object in the global image.
- the processor 81 executes the at least one machine-executable instruction to perform coordinate transformation and stitching on the received images to obtain a global image of the port area from a God's-eye view by: determining images with the same acquisition time among the received images as a group of images; performing coordinate transformation on each image in the group of images to obtain a group of bird's-eye view images; and stitching the group of bird's-eye view images according to a preset stitching order to obtain a global image, the stitching order being derived from the spatial positional relationship between the roadside cameras.
- the processor 81 executing the at least one machine executable instruction to perform determining the road area in the global image comprises: superimposing the high-precision map corresponding to the port area with the global image to obtain The road region in the global image; or the semantic segmentation of the global image using a preset semantic segmentation algorithm to obtain a road region in the global image.
- the processor 81 executes the at least one machine-executable instruction to further perform: predicting the motion trajectory corresponding to each target object according to the tracking result and category of the target object; optimizing the driving path of each self-driving vehicle according to the motion trajectory corresponding to each target object; and sending the optimized driving path of each self-driving vehicle to the corresponding self-driving vehicle.
- the processor 81 executes the at least one machine-executable instruction to perform the optimization of the driving path of each self-driving vehicle according to the motion trajectory corresponding to each target object by: for each self-driving vehicle, comparing the estimated travel trajectory sent by the self-driving vehicle with the motion trajectory corresponding to each target object; if coincidence occurs, optimizing the driving path of the self-driving vehicle so that the optimized driving path does not coincide with the motion trajectory corresponding to any target object; and if no coincidence occurs, not optimizing the driving path of the self-driving vehicle.
- the fourth embodiment of the present invention provides a port area monitoring method.
- the method is as shown in FIG. 9.
- the port area monitoring method can be run in the foregoing central control system 2, and the method includes:
- Step 101: Receive images collected by each roadside camera disposed in the port area;
- Step 102: Perform coordinate transformation and stitching on the received images to obtain a global image of the port area from a God's-eye view;
- Step 103: Determine the road area in the global image;
- Step 104: Perform object detection and object tracking on the road area in the global image to obtain the tracking result and category of the target object;
- Step 105: Display the tracking result and category of the target object in the global image.
- step 102 can be specifically implemented by the process shown in FIG. 10:
- Step 102A: Determine images with the same acquisition time among the received images as a group of images;
- Step 102B: Perform coordinate transformation on each image in the group of images to obtain a group of bird's-eye view images;
- Step 102C: Stitch the group of bird's-eye view images according to a preset stitching order to obtain a global image, the stitching order being derived from the spatial positional relationship between the roadside cameras.
- step 103 may be specifically implemented as follows: superimpose the high-precision map corresponding to the port area with the global image to obtain the road area in the global image (refer to Method A1 in the first embodiment, not repeated here); or use a preset semantic segmentation algorithm to perform semantic segmentation on the global image to obtain the road area in the global image (refer to Method A2 in the first embodiment, not repeated here).
- the method shown in FIG. 9 or FIG. 10 may further include steps 106 to 108; FIG. 11 shows the method flow of FIG. 9 with steps 106 to 108 added.
- Step 106: Predict the motion trajectory corresponding to each target object according to the tracking result and category of the target object;
- Step 107: Optimize the driving path of the self-driving vehicle according to the motion trajectory corresponding to each target object;
- Step 108: Send the optimized driving path to the corresponding self-driving vehicle.
- the step 107 can be specifically implemented as follows:
- for each self-driving vehicle, the estimated travel trajectory sent by the self-driving vehicle is compared with the motion trajectory corresponding to each target object; if coincidence occurs, the driving path of the self-driving vehicle is optimized so that the optimized driving path does not coincide with the motion trajectory corresponding to any target object; if no coincidence occurs, the driving path of the self-driving vehicle is not optimized.
- step 108 may be embodied as follows: the optimized travel path is transmitted to the corresponding autonomous vehicle by V2X communication technology.
- each functional unit in each embodiment of the present invention may be integrated into one processing module, or each unit may exist physically separately, or two or more units may be integrated into one module.
- the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
- the integrated modules, if implemented in the form of software functional modules and sold or used as stand-alone products, may also be stored in a computer readable storage medium.
- embodiments of the present invention can be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or a combination of software and hardware. Moreover, the invention can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage and optical storage, etc.) including computer usable program code.
- the computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising an instruction device that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
- these computer program instructions can also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Remote Sensing (AREA)
- Aviation & Aerospace Engineering (AREA)
- Radar, Positioning & Navigation (AREA)
- Automation & Control Theory (AREA)
- Signal Processing (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Traffic Control Systems (AREA)
- Image Analysis (AREA)
Claims (23)
- A port area monitoring method, characterized by comprising: receiving images collected by roadside cameras disposed in the port area; performing coordinate transformation and stitching on the received images to obtain a global image of the port area from a God's-eye view; determining the road area in the global image; performing object detection and object tracking on the road area in the global image to obtain the tracking result and category of the target object; and displaying the tracking result and category of the target object in the global image.
- The method according to claim 1, characterized in that performing coordinate transformation and stitching on the received images to obtain a global image of the port area from a God's-eye view specifically comprises: determining images with the same acquisition time among the received images as a group of images; performing coordinate transformation on each image in the group of images to obtain a group of bird's-eye view images; and stitching the group of bird's-eye view images according to a preset stitching order to obtain a global image, the stitching order being derived from the spatial positional relationship between the roadside cameras.
- The method according to claim 1, characterized in that determining the road area in the global image specifically comprises: superimposing the high-precision map corresponding to the port area with the global image to obtain the road area in the global image; or performing semantic segmentation on the global image using a preset semantic segmentation algorithm to obtain the road area in the global image.
- The method according to claim 1, characterized in that the method further comprises: predicting the motion trajectory corresponding to each target object according to the tracking result and category of the target object; optimizing the driving path of the self-driving vehicle according to the motion trajectory corresponding to each target object; and sending the optimized driving path to the corresponding self-driving vehicle.
- The method according to claim 4, characterized in that optimizing the driving path of the self-driving vehicle according to the motion trajectory corresponding to each target object specifically comprises: for each self-driving vehicle, comparing the estimated travel trajectory sent by the self-driving vehicle with the motion trajectory corresponding to each target object; if coincidence occurs, optimizing the driving path of the self-driving vehicle so that the optimized driving path does not coincide with the motion trajectory corresponding to any target object; and if no coincidence occurs, not optimizing the driving path of the self-driving vehicle.
- The method according to claim 4, characterized in that sending the optimized driving path to the corresponding self-driving vehicle specifically comprises: sending the optimized driving path to the corresponding self-driving vehicle via V2X communication technology.
- A port area monitoring system, characterized by comprising roadside cameras disposed in the port area and a central control system, wherein: the roadside cameras are configured to collect images and send the images to the central control system; and the central control system is configured to receive the images collected by each roadside camera, perform coordinate transformation and stitching on the received images to obtain a global image of the port area from a God's-eye view, determine the road area in the global image, perform object detection and object tracking on the road area in the global image to obtain the tracking result and category of the target object, and display the tracking result and category of the target object in the global image.
- The system according to claim 7, characterized in that the central control system comprises: a communication unit configured to receive the images collected by each roadside camera; an image processing unit configured to perform coordinate transformation and stitching on the received images to obtain a global image of the port area from a God's-eye view; a road area determining unit configured to determine the road area in the global image; a target detection and tracking unit configured to perform object detection and object tracking on the road area in the global image to obtain the tracking result and category of the target object; and a display unit configured to display the tracking result and category of the target object in the global image.
- The system according to claim 8, characterized in that the image processing unit is specifically configured to: determine images with the same acquisition time among the received images as a group of images; perform coordinate transformation on each image in the group of images to obtain a group of bird's-eye view images; and stitch the group of bird's-eye view images according to a preset stitching order to obtain a global image, the stitching order being derived from the spatial positional relationship between the roadside cameras.
- The system according to claim 8, characterized in that the road area determining unit is specifically configured to: superimpose the high-precision map corresponding to the port area with the global image to obtain the road area in the global image; or perform semantic segmentation on the global image using a preset semantic segmentation algorithm to obtain the road area in the global image.
- The system according to claim 8, characterized in that the central control system further comprises a motion trajectory prediction unit and a path optimization unit, wherein: the motion trajectory prediction unit is configured to predict the motion trajectory corresponding to each target object according to the tracking result and category of the target object; the path optimization unit is configured to optimize the driving path of each self-driving vehicle according to the motion trajectory corresponding to each target object; and the communication unit is further configured to send the optimized driving path of each self-driving vehicle to the corresponding self-driving vehicle.
- The system according to claim 11, characterized in that the path optimization unit is specifically configured to: for each self-driving vehicle, compare the estimated travel trajectory sent by the self-driving vehicle with the motion trajectory corresponding to each target object; if coincidence occurs, optimize the driving path of the self-driving vehicle so that the optimized driving path does not coincide with the motion trajectory corresponding to any target object; and if no coincidence occurs, not optimize the driving path of the self-driving vehicle.
- The system according to claim 11, characterized in that the system further comprises roadside V2X devices disposed in the port area and an automatic driving control device disposed on each self-driving vehicle; the central control system is provided with a first V2X device, and the automatic driving control device is provided with a second V2X device; the communication unit is specifically configured to send the optimized driving path of each self-driving vehicle to the first V2X device, and the first V2X device sends the optimized driving path of each self-driving vehicle to the roadside V2X devices; and the roadside V2X devices are configured to broadcast the optimized driving paths received from the first V2X device, so that the second V2X device on each self-driving vehicle receives the optimized driving path corresponding to that self-driving vehicle.
- A central control system, characterized by comprising: a communication unit configured to receive images collected by each roadside camera; an image processing unit configured to perform coordinate transformation and stitching on the received images to obtain a global image of the port area from a God's-eye view; a road area determining unit configured to determine the road area in the global image; a target detection and tracking unit configured to perform object detection and object tracking on the road area in the global image to obtain the tracking result and category of the target object; and a display unit configured to display the tracking result and category of the target object in the global image.
- The central control system according to claim 14, characterized in that the image processing unit is specifically configured to: determine images with the same acquisition time among the received images as a group of images; perform coordinate transformation on each image in the group of images to obtain a group of bird's-eye view images; and stitch the group of bird's-eye view images according to a preset stitching order to obtain a global image, the stitching order being derived from the spatial positional relationship between the roadside cameras.
- The central control system according to claim 14, characterized in that the road area determining unit is specifically configured to: superimpose the high-precision map corresponding to the port area with the global image to obtain the road area in the global image; or perform semantic segmentation on the global image using a preset semantic segmentation algorithm to obtain the road area in the global image.
- The central control system according to claim 14, characterized by further comprising a motion trajectory prediction unit and a path optimization unit, wherein: the motion trajectory prediction unit is configured to predict the motion trajectory corresponding to each target object according to the tracking result and category of the target object; the path optimization unit is configured to optimize the driving path of each self-driving vehicle according to the motion trajectory corresponding to each target object; and the communication unit is further configured to send the optimized driving path of each self-driving vehicle to the corresponding self-driving vehicle.
- The central control system according to claim 17, characterized in that the path optimization unit is specifically configured to: for each self-driving vehicle, compare the estimated travel trajectory sent by the self-driving vehicle with the motion trajectory corresponding to each target object; if coincidence occurs, optimize the driving path of the self-driving vehicle so that the optimized driving path does not coincide with the motion trajectory corresponding to any target object; and if no coincidence occurs, not optimize the driving path of the self-driving vehicle.
- A central control system, characterized by comprising a processor and at least one memory, the at least one memory storing at least one machine-executable instruction, the processor executing the at least one machine-executable instruction to: receive images collected by each roadside camera; perform coordinate transformation and stitching on the received images to obtain a global image of the port area from a God's-eye view; determine the road area in the global image; perform object detection and object tracking on the road area in the global image to obtain the tracking result and category of the target object; and display the tracking result and category of the target object in the global image.
- The central control system according to claim 19, characterized in that the processor executes the at least one machine-executable instruction to perform coordinate transformation and stitching on the received images to obtain a global image of the port area from a God's-eye view by: determining images with the same acquisition time among the received images as a group of images; performing coordinate transformation on each image in the group of images to obtain a group of bird's-eye view images; and stitching the group of bird's-eye view images according to a preset stitching order to obtain a global image, the stitching order being derived from the spatial positional relationship between the roadside cameras.
- The central control system according to claim 19, characterized in that the processor executes the at least one machine-executable instruction to determine the road area in the global image by: superimposing the high-precision map corresponding to the port area with the global image to obtain the road area in the global image; or performing semantic segmentation on the global image using a preset semantic segmentation algorithm to obtain the road area in the global image.
- The central control system according to claim 19, characterized in that the processor executes the at least one machine-executable instruction to further: predict the motion trajectory corresponding to each target object according to the tracking result and category of the target object; optimize the driving path of each self-driving vehicle according to the motion trajectory corresponding to each target object; and send the optimized driving path of each self-driving vehicle to the corresponding self-driving vehicle.
- The central control system according to claim 22, characterized in that the processor executes the at least one machine-executable instruction to optimize the driving path of each self-driving vehicle according to the motion trajectory corresponding to each target object by: for each self-driving vehicle, comparing the estimated travel trajectory sent by the self-driving vehicle with the motion trajectory corresponding to each target object; if coincidence occurs, optimizing the driving path of the self-driving vehicle so that the optimized driving path does not coincide with the motion trajectory corresponding to any target object; and if no coincidence occurs, not optimizing the driving path of the self-driving vehicle.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP18907348.9A EP3757866A4 (en) | 2018-02-24 | 2018-09-13 | PORT AREA SURVEILLANCE PROCESS AND SYSTEM, AND CENTRAL CONTROL SYSTEM |
AU2018410435A AU2018410435B2 (en) | 2018-02-24 | 2018-09-13 | Port area monitoring method and system, and central control system |
US17/001,082 US20210073539A1 (en) | 2018-02-24 | 2020-08-24 | Port area monitoring method and system and central control system |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810157700.XA CN110197097B (zh) | 2018-02-24 | 2018-02-24 | Port area monitoring method and system, and central control system |
CN201810157700.X | 2018-02-24 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/001,082 Continuation US20210073539A1 (en) | 2018-02-24 | 2020-08-24 | Port area monitoring method and system and central control system |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019161663A1 (zh) | 2019-08-29 |
Family
ID=67687914
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/105474 WO2019161663A1 (zh) | 2018-02-24 | 2018-09-13 | 一种港区监控方法及系统、中控系统 |
Country Status (5)
Country | Link |
---|---|
US (1) | US20210073539A1 (zh) |
EP (1) | EP3757866A4 (zh) |
CN (1) | CN110197097B (zh) |
AU (1) | AU2018410435B2 (zh) |
WO (1) | WO2019161663A1 (zh) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114067556A (zh) * | 2020-08-05 | 2022-02-18 | Beijing Wanji Technology Co., Ltd. | Environment perception method and apparatus, server, and readable storage medium |
JP7185740B1 (ja) | 2021-08-30 | 2022-12-07 | Mitsubishi Electric Information Systems Corporation | Area identification device, area identification method, and area identification program |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112866578B (zh) * | 2021-02-03 | 2023-04-07 | Sichuan Xinshichuangwei Ultra-HD Technology Co., Ltd. | Global-to-local bidirectional visualization and target tracking system and method based on 8K video frames |
EP4459563A2 (en) | 2021-07-02 | 2024-11-06 | Fujitsu Technology Solutions GmbH | Ai based monitoring of race tracks |
CN114598823B (zh) * | 2022-03-11 | 2024-06-14 | Beijing Zitiao Network Technology Co., Ltd. | Special-effect video generation method and apparatus, electronic device, and storage medium |
CN114820700B (zh) * | 2022-04-06 | 2023-05-16 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Object tracking method and apparatus |
Family Cites Families (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100373394C (zh) * | 2005-10-28 | 2008-03-05 | Nanjing University of Aeronautics and Astronautics | Moving target detection method based on bionic compound eye |
CN1897015A (zh) * | 2006-05-18 | 2007-01-17 | Wang Haiyan | Machine-vision-based vehicle detection and tracking method and system |
CN102096803B (zh) * | 2010-11-29 | 2013-11-13 | Jilin University | Machine-vision-based pedestrian safety state recognition system |
CN102164269A (zh) * | 2011-01-21 | 2011-08-24 | Beijing Vimicro Electronics Co., Ltd. | Panoramic monitoring method and device |
KR101338554B1 (ko) * | 2012-06-12 | 2013-12-06 | Hyundai Motor Company | Power control apparatus and method for V2X communication |
CN103017753B (zh) * | 2012-11-01 | 2015-07-15 | China Ordnance Science Research Institute | UAV route planning method and device |
CN103236160B (zh) * | 2013-04-07 | 2015-03-18 | Shuimu Lutuo Technology (Beijing) Co., Ltd. | Road network traffic state monitoring system based on video image processing technology |
CN103473659A (zh) * | 2013-08-27 | 2013-12-25 | Northwestern Polytechnical University | Dynamic optimized allocation method for logistics tasks driven by real-time status information from delivery vehicles |
US9407881B2 (en) * | 2014-04-10 | 2016-08-02 | Smartvue Corporation | Systems and methods for automated cloud-based analytics for surveillance systems with unmanned aerial devices |
CN103955920B (zh) * | 2014-04-14 | 2017-04-12 | Guilin University of Electronic Technology | Binocular vision obstacle detection method based on three-dimensional point cloud segmentation |
US9747505B2 (en) * | 2014-07-07 | 2017-08-29 | Here Global B.V. | Lane level traffic |
CN104410838A (zh) * | 2014-12-15 | 2015-03-11 | Chengdu Dingzhihui Technology Co., Ltd. | Distributed video surveillance system |
CN104483970B (zh) * | 2014-12-20 | 2017-06-27 | Xu Jiayin | Method for controlling navigation of an unmanned driving system based on a satellite positioning system and a mobile communication network |
US9681046B2 (en) * | 2015-06-30 | 2017-06-13 | Gopro, Inc. | Image stitching in a multi-camera array |
EP3141926B1 (en) * | 2015-09-10 | 2018-04-04 | Continental Automotive GmbH | Automated detection of hazardous drifting vehicles by vehicle sensors |
WO2017045116A1 (en) * | 2015-09-15 | 2017-03-23 | SZ DJI Technology Co., Ltd. | System and method for supporting smooth target following |
US9910441B2 (en) * | 2015-11-04 | 2018-03-06 | Zoox, Inc. | Adaptive autonomous vehicle planner logic |
JP6520740B2 (ja) * | 2016-02-01 | 2019-05-29 | Toyota Motor Corporation | Object detection method, object detection device, and program |
CN108292141B (zh) * | 2016-03-01 | 2022-07-01 | SZ DJI Technology Co., Ltd. | Method and system for target tracking |
JP6595401B2 (ja) * | 2016-04-26 | 2019-10-23 | Soken, Inc. | Display control device |
CN107343165A (zh) * | 2016-04-29 | 2017-11-10 | Hangzhou Hikvision Digital Technology Co., Ltd. | Monitoring method, device and system |
CN105844964A (zh) * | 2016-05-05 | 2016-08-10 | Shenzhen Launch Technology Co., Ltd. | Vehicle safe-driving early-warning method and device |
EP3244344A1 (en) * | 2016-05-13 | 2017-11-15 | DOS Group S.A. | Ground object tracking system |
CN106441319B (zh) * | 2016-09-23 | 2019-07-16 | Hefei Institutes of Physical Science, Chinese Academy of Sciences | System and method for generating a lane-level navigation map for an unmanned vehicle |
CN107045782A (zh) * | 2017-03-05 | 2017-08-15 | Zhao Lili | Implementation method for differentiated route allocation in an intelligent traffic control system |
CN106997466B (zh) * | 2017-04-12 | 2021-05-04 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and device for detecting roads |
CN107226087B (zh) * | 2017-05-26 | 2019-03-26 | Xidian University | Structured-road autonomous transport vehicle and control method |
US20180307245A1 (en) * | 2017-05-31 | 2018-10-25 | Muhammad Zain Khawaja | Autonomous Vehicle Corridor |
CN107316006A (zh) * | 2017-06-07 | 2017-11-03 | Beijing Jingdong Shangke Information Technology Co., Ltd. | Method and system for road obstacle detection |
CN107341445A (zh) * | 2017-06-07 | 2017-11-10 | Wuhan Daqian Information Technology Co., Ltd. | Panoramic description method and system for pedestrian targets in surveillance scenes |
2018
- 2018-02-24 CN CN201810157700.XA patent/CN110197097B/zh active Active
- 2018-09-13 WO PCT/CN2018/105474 patent/WO2019161663A1/zh active Application Filing
- 2018-09-13 AU AU2018410435A patent/AU2018410435B2/en active Active
- 2018-09-13 EP EP18907348.9A patent/EP3757866A4/en active Pending
2020
- 2020-08-24 US US17/001,082 patent/US20210073539A1/en not_active Abandoned
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140267734A1 (en) * | 2013-03-14 | 2014-09-18 | John Felix Hart, JR. | System and Method for Monitoring Vehicle Traffic and Controlling Traffic Signals |
CN105208323A (zh) * | 2015-07-31 | 2015-12-30 | Shenzhen Infinova Co., Ltd. | Panoramic stitched-image monitoring method and device |
CN105407278A (zh) * | 2015-11-10 | 2016-03-16 | Beijing Tianrui Space Technology Co., Ltd. | Panoramic video traffic situation monitoring system and method |
CN106652448A (zh) * | 2016-12-13 | 2017-05-10 | Sam Bangni (Tianjin) Information Technology Co., Ltd. | Highway traffic state monitoring system based on video processing technology |
CN107122765A (zh) * | 2017-05-22 | 2017-09-01 | Chengdu Topplusvision Technology Co., Ltd. | Expressway service area panoramic monitoring method and system |
Non-Patent Citations (1)
Title |
---|
See also references of EP3757866A4 |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114067556A (zh) * | 2020-08-05 | 2022-02-18 | Beijing Wanji Technology Co., Ltd. | Environment perception method and apparatus, server, and readable storage medium |
CN114067556B (zh) * | 2020-08-05 | 2023-03-14 | Beijing Wanji Technology Co., Ltd. | Environment perception method and apparatus, server, and readable storage medium |
JP7185740B1 (ja) | 2021-08-30 | 2022-12-07 | Mitsubishi Electric Information Systems Corporation | Area identification device, area identification method, and area identification program |
JP2023034184A (ja) | 2021-08-30 | 2023-03-13 | Mitsubishi Electric Information Systems Corporation | Area identification device, area identification method, and area identification program |
Also Published As
Publication number | Publication date |
---|---|
EP3757866A4 (en) | 2021-11-10 |
CN110197097B (zh) | 2024-04-19 |
EP3757866A1 (en) | 2020-12-30 |
CN110197097A (zh) | 2019-09-03 |
AU2018410435B2 (en) | 2024-02-29 |
US20210073539A1 (en) | 2021-03-11 |
AU2018410435A1 (en) | 2020-10-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019161663A1 (zh) | Port area monitoring method and system, and central control system | |
EP3967972A1 (en) | Positioning method, apparatus, and device, and computer-readable storage medium | |
US11676307B2 (en) | Online sensor calibration for autonomous vehicles | |
US11386672B2 (en) | Need-sensitive image and location capture system and method | |
US11721225B2 (en) | Techniques for sharing mapping data between an unmanned aerial vehicle and a ground vehicle | |
US20210131821A1 (en) | Techniques for collaborative map construction between an unmanned aerial vehicle and a ground vehicle | |
CN111046762A (zh) | Object positioning method and apparatus, electronic device, and storage medium | |
CN104217439A (zh) | Indoor visual positioning system and method | |
CN106650705A (zh) | Region labeling method and apparatus, and electronic device | |
EP3552388B1 (en) | Feature recognition assisted super-resolution method | |
US20210003683A1 (en) | Interactive sensor calibration for autonomous vehicles | |
JP7278414B2 (ja) | Digital reconstruction method, apparatus and system for traffic roads | |
CN111353453B (zh) | Obstacle detection method and apparatus for vehicles | |
WO2022262327A1 (zh) | Traffic light detection | |
Wang et al. | Quadrotor-enabled autonomous parking occupancy detection | |
WO2022099482A1 (zh) | Exposure control method and apparatus, movable platform, and computer-readable storage medium | |
US20220309693A1 (en) | Adversarial Approach to Usage of Lidar Supervision to Image Depth Estimation | |
Kotze et al. | Reconfigurable navigation of an Automatic Guided Vehicle utilising omnivision | |
CN117746426A (zh) | Automatic image label generation method and system based on high-precision maps | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18907348 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
WWE | Wipo information: entry into national phase |
Ref document number: 2018410435 Country of ref document: AU |
WWE | Wipo information: entry into national phase |
Ref document number: 2018907348 Country of ref document: EP |
ENP | Entry into the national phase |
Ref document number: 2018907348 Country of ref document: EP Effective date: 20200924 |
ENP | Entry into the national phase |
Ref document number: 2018410435 Country of ref document: AU Date of ref document: 20180913 Kind code of ref document: A |