CN112349087B - Visual data input method based on holographic perception of intersection information - Google Patents

Visual data input method based on holographic perception of intersection information

Info

Publication number
CN112349087B
CN112349087B (application CN201910727177.4A)
Authority
CN
China
Prior art keywords
intersection
motor vehicle
information
target object
pedestrian
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910727177.4A
Other languages
Chinese (zh)
Other versions
CN112349087A (en)
Inventor
姜廷顺
李萌
陆建
郭娅明
张吉辉
朱朝辉
王文斌
谭墍元
尹胜超
汪海涛
李姝
吕婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Beyond Traffic Science & Technology Co ltd
Original Assignee
Beijing Beyond Traffic Science & Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Beyond Traffic Science & Technology Co ltd filed Critical Beijing Beyond Traffic Science & Technology Co ltd
Priority to CN201910727177.4A priority Critical patent/CN112349087B/en
Publication of CN112349087A publication Critical patent/CN112349087A/en
Application granted granted Critical
Publication of CN112349087B publication Critical patent/CN112349087B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/01: Detecting movement of traffic to be counted or controlled
    • G08G1/0104: Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/017: Detecting movement of traffic to be counted or controlled, identifying vehicles
    • G08G1/0175: Identifying vehicles by photographing vehicles, e.g. when violating traffic rules
    • G08G1/052: Detecting movement of traffic with provision for determining speed or overspeed
    • G08G1/056: Detecting movement of traffic with provision for distinguishing direction of travel
    • G08G1/07: Controlling traffic signals
    • G08G1/08: Controlling traffic signals according to detected number or speed of vehicles
    • G08G1/123: Traffic control systems for road vehicles indicating the position of vehicles, e.g. scheduled vehicles; managing passenger vehicles circulating according to a fixed timetable, e.g. buses, trains, trams
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Abstract

The invention provides a visual data input method based on holographic perception of intersection information. The method first acquires multiple pieces of image information collected by multiple video devices at an intersection, the information including images of the intersection and of target objects such as motor vehicles, non-motor vehicles and pedestrians. To effectively extract the information related to traffic control at the intersection and avoid repeated and redundant images (such as green belts, buildings and the like), the target objects are first de-duplicated: targets appearing in several images are merged through virtual grids, restoring the useful objects at the intersection. The intersection and the target objects are then converted into an electronic map, from which redundant information irrelevant to traffic control is removed. The traffic information at the intersection can thus be displayed as a real-time electronic map that effectively restores the intersection information, solving the problem of repeated and redundant image information in the images collected by multiple cameras.

Description

Visual data input method based on holographic perception of intersection information
Technical Field
The invention relates to the technical field of intelligent traffic control, and in particular to a visual data input method based on holographic perception of intersection information.
Background
The intersection traffic-signal control system is key to improving intersection traffic efficiency, and current intersection signal control can be divided into fixed-time control and adaptive control. The fixed-time mode needs no detection equipment: traffic engineers only configure cycle, green-split and phase-difference parameter schemes for different time periods, and the signal controller automatically executes the pre-configured scheme at the specified time no matter how the traffic flow changes at that moment. The biggest characteristic of this mode is low cost; however, intersection efficiency is low, because green time is lost whenever motor vehicles wait at a red light while little or no motor-vehicle, non-motor-vehicle or pedestrian traffic is using the green light in the other direction, and when traffic volume increases the control parameters cannot adapt, so the intersection becomes congested. The adaptive mode requires detection equipment, but the coil, video-detection, radar-detection and similar devices generally adopted at home and abroad are essentially cross-section detectors; detection of non-motor vehicles and pedestrians likewise stops at the cross-section level, and the detection data are inaccurate, so the adaptive effect remains far from the goal being pursued.
Although a method previously granted to the inventor solves the cross-section perception problem, it cannot verify in real time whether the tracked-target data are accurate, and the tracking-detector data for each direction must be connected one-to-one to input terminals designated by the signal controller. Because that method cannot tell, from each piece of target data alone, from which direction and lane a motor vehicle comes, the target's specific position can be determined only after the vehicle data are associated with the detector's position; in other words, the physical output of each detector and the input of the signal controller must correspond one to one. The biggest problem arises when multiple detectors serve one intersection (the granted patent does not cover motor vehicles or non-motor vehicles inside the intersection itself, its advantages being convenient detection and low cost): overlapping detection ranges easily raise the target false-detection rate.
When cameras are used to recognize vehicle images, several cameras are usually installed at an intersection to avoid occlusion between vehicles, collecting image information from different angles. When image information is collected by multiple devices, however, the same information can appear repeatedly across devices.
Disclosure of Invention
In order to solve this technical problem, embodiments of the present invention provide a visual data input method based on holographic perception of intersection information, so as to solve the prior-art problem that image information acquired by multiple cameras contains repeated images and redundant image information.
Therefore, the invention provides a visual data input method based on holographic perception of intersection information, which comprises the following steps:
acquiring a plurality of pieces of image information at an intersection, collected by a plurality of video acquisition devices, wherein the image information comprises the intersection to be detected and position information of target objects, and the target object types comprise motor vehicles, non-motor vehicles and pedestrians;
dividing the intersection to be detected into a plurality of virtual grid areas;
identifying a virtual grid area where a target object in the plurality of image information is located;
calculating the center position of the target object according to the virtual grid area where the target object is located;
calculating the distance between the center positions of any two target objects of the same type;
judging whether the distance is smaller than a preset threshold value or not;
when the distance is smaller than a preset threshold value, combining the two target objects of the same type into the same target object;
converting the plurality of pieces of image information into a single electronic map of the intersection information according to the intersection to be detected and the merged target objects;
tracking the real-time position of a target object, and extracting the real-time parameter information of the target object;
and marking the real-time parameter information of the target object on the electronic map for real-time tracking display.
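As a non-authoritative illustration of the merge in steps S4-S7 above (not part of the claims), the same-type de-duplication can be sketched in Python; the type names and the specific threshold values are assumptions picked from the optional ranges given below.

```python
from math import hypot

# Per-type merge thresholds in metres; assumed midpoints of the
# optional ranges in this disclosure, not values it fixes.
THRESHOLDS = {"motor": 1.5, "non_motor": 0.5, "pedestrian": 0.15}

def deduplicate(detections, thresholds=THRESHOLDS):
    """Merge same-type detections from different cameras whose centre
    positions lie closer than the type's preset threshold."""
    merged = []  # entries: (kind, x, y, count_of_merged_detections)
    for kind, x, y in detections:
        for i, (mk, mx, my, n) in enumerate(merged):
            if mk == kind and hypot(x - mx, y - my) < thresholds[kind]:
                # Same physical object seen by another camera: average centres.
                merged[i] = (mk, (mx * n + x) / (n + 1),
                             (my * n + y) / (n + 1), n + 1)
                break
        else:
            merged.append((kind, x, y, 1))
    return [(k, x, y) for k, x, y, _ in merged]
```

Because each type carries its own threshold, two motor-vehicle detections 0.45 m apart collapse into one object, while two pedestrian detections at the same spacing stay separate.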
Optionally, the side lengths of the grids in different regions are set respectively, and the preset thresholds of different target object types are set respectively.
Optionally, the side length of the grid in the pedestrian crosswalk area and the pedestrian waiting area is set to 0.1-0.2 m; the side length in the motor-vehicle lane area inside the intersection is set to 1-2 m; and the side length in the non-motor-vehicle lane area is set to 0.3-1 m.
Optionally, when the type of the target object is a motor vehicle, the corresponding preset threshold is 1-2 m; when the type is a non-motor vehicle, the corresponding preset threshold is 0.3-1 m; and when the type is a pedestrian, the corresponding preset threshold is 0.1-0.2 m.
optionally, when the target object is an automobile, the extracted real-time parameter information includes one or more of the following information:
sequence number: assigned in the order in which motor vehicles from a given direction enter the intersection, starting from 00:00:00 of the day; its main function is to count in real time, from 00:00:00 to the current moment, the number of vehicles passing through the intersection and the number of stops;
Time: the time when the motor vehicle enters the intersection;
the type of the motor vehicle: large or small vehicle;
vehicle identification information: a license plate or electronic number plate;
the current position of the motor vehicle;
the current speed of the motor vehicle;
the current direction of the motor vehicle;
distance of the motor vehicle from the crossing exit position;
the time when the motor vehicle exits the intersection;
and the motor vehicle exits the exit lane in the exiting direction.
Optionally, the parameter information of the intersection includes one or more of the following information:
the intersection identification information, the phase area, the pedestrian crossing waiting area, the non-motor vehicle lane area, the intersection inner area and the exit lane number.
Optionally, when the target object is a pedestrian, the extracted real-time parameter information includes one or more of the following information:
the distance from the pedestrian to the curb line;
the distance from the pedestrian to the opposite curb line.
Optionally, when the target object is a non-motor vehicle, the extracted real-time parameter information includes one or more of the following information:
speed of the non-motor vehicle;
and the time at which the non-motor vehicle will exit the intersection along its current driving direction.
Optionally, part or all of the parameter information of the motor vehicles, the intersection, the non-motor vehicles and/or the pedestrians is marked on the electronic map.
Optionally, the intersection comprises an area surrounded by the intersection stop line and its extension line.
The invention provides a visual data input method based on holographic perception of intersection information. The method first acquires multiple pieces of image information collected by multiple video devices at an intersection, the information including images of the intersection and of target objects such as motor vehicles, non-motor vehicles and pedestrians. To effectively extract the information related to traffic control at the intersection and avoid repeated and redundant images (such as green belts, buildings and the like), the target objects are first de-duplicated: targets appearing in several images are merged through virtual grids, restoring the useful objects at the intersection. The intersection and the target objects are then converted into an electronic map, from which redundant information irrelevant to traffic control is removed, so that the traffic information at the intersection is displayed as a real-time electronic map that effectively restores the intersection information. This solves the problem of repeated and redundant image information in the images collected by multiple cameras, so that the effective traffic information of the intersection can be monitored more intuitively, improving the intuitiveness and accuracy of intersection traffic information and providing a prerequisite for effective intersection traffic control.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a flowchart of a visualized data input method based on holographic perception of intersection information in an embodiment of the invention.
FIG. 2 is a schematic diagram of an electronic map generated according to the method shown in FIG. 1.
Fig. 3 is a schematic structural diagram of a computer system based on the method shown in fig. 1.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some examples of the present invention, but not all embodiments. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention. In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The method in this embodiment is applied in the field of traffic control and is particularly suitable for collecting and displaying effective traffic information at intersections. At present, cameras covering multiple traffic directions are generally arranged at an intersection to collect traffic information, but that information is displayed separately in the traffic control centre, so the overall traffic condition of the intersection cannot be restored well. The information collected by the multiple cameras contains repetitions, and the images also contain irrelevant content such as green belts, trees and buildings, so the image information received by the traffic control centre contains much invalid content, which may even occupy most of the picture, and the actual traffic condition of the intersection is not reflected directly. Based on this, the present embodiment provides a visual data input method based on holographic perception of intersection information, used to better extract the effective information of an intersection and display it intuitively. As shown in fig. 1, the method includes the following steps:
s1, acquiring a plurality of image information in the intersection acquired by a plurality of video acquisition devices, wherein the image information comprises the position information of the intersection to be detected and the target object, and the type of the target object comprises motor vehicles, non-motor vehicles and pedestrians. The extracted information of the intersection to be detected comprises one or more of the following information: the intersection identification information, the phase area, the pedestrian crossing waiting area, the non-motor vehicle lane area, the intersection inner area and the exit lane number.
The intersection comprises the area enclosed by the intersection stop lines and their extension lines. Pedestrian crosswalks crossing the intersection lie inside this area; pedestrians and non-motor vehicles pass through the intersection via the crosswalks or non-motor-vehicle lanes (such as bicycle lanes), while motor vehicles pass through along their driving paths as indicated by the traffic lights. Cameras are arranged in multiple directions of the intersection, so multiple pieces of image information are acquired from different angles, and repeated images exist among them. The collected images include images of the intersection; of the motor vehicles, non-motor vehicles and pedestrians passing through it; and redundant images of street lamps, buildings, trees and the like around it. The same motor vehicles, non-motor vehicles or pedestrians in the multiple images must first be merged to restore the actual passing targets at the intersection; steps S2-S7 below perform this merging.
And S2, dividing the intersection to be detected into a plurality of virtual grid areas.
The grid is virtual: the intersection, i.e. the area enclosed by the stop lines and their extension lines, is restored from the intersection data in the multiple images, and that area is then divided into multiple virtual grid cells. The cells may all be the same size, or different sizes may be used for different regions. To identify pedestrians accurately, the cell area must be small, no larger than the projected area normally occupied by a person. In this embodiment, the cell side lengths are set per region and the preset thresholds are set per target-object type. Since pedestrians pass through the crosswalk area and the crossing waiting area, the cell side length there is set to 0.1-0.2 m, preferably 0.1 m; the smaller the cell, the more accurate the identification, but the larger the data volume. In the motor-vehicle lane area inside the intersection the cell is set to 1-2 m; since vehicles are generally longer than 2 m, a larger cell such as 1 m or 1.5 m suffices where motor vehicles pass. Where non-motor vehicles pass, such as a bicycle lane, bicycles are longer than 1 m, so the cell can be set to 0.3-1 m, e.g. 0.5 or 0.8 m. The rule is that the cell width should not exceed the length of the passing target's projection on the road, so that repeated targets can be removed by merging objects whose centres lie close together into the same object.
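A minimal sketch of the cell assignment described above, assuming axis-aligned road-surface coordinates in metres; the region names and the specific side lengths chosen from the stated ranges are illustrative, not from the patent's data model.

```python
# Cell side lengths per region, in metres, picked from the ranges above
# (0.1-0.2 m crosswalk, 1-2 m motor lane, 0.3-1 m non-motor lane).
CELL_SIZE = {"crosswalk": 0.1, "motor_lane": 1.0, "non_motor_lane": 0.5}

def grid_cell(region, x, y):
    """Map a road-surface coordinate (x, y) to its virtual grid cell (S3)."""
    s = CELL_SIZE[region]
    return (int(x // s), int(y // s))

def cell_center(region, cell):
    """Centre of a grid cell, usable as the object's centre position (S4)."""
    s = CELL_SIZE[region]
    i, j = cell
    return ((i + 0.5) * s, (j + 0.5) * s)
```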
And S3, identifying the virtual grid area where the target object in the image information is located.
After the intersection information is gridded, the area of the virtual grid where the target object is located can be obtained according to the position of the target object.
And S4, calculating the center position of the target object according to the virtual grid area where the target object is located.
After the virtual grid cell where the target object is located is obtained, the centre position is easily determined; it is the centre of the target object's projection on the road surface.
And S5, calculating the distance between the center positions of any two target objects of the same type.
When removing duplicates, duplicate data are removed per target type, i.e. separately for the motor vehicles, pedestrians and non-motor vehicles appearing in the images acquired by the different cameras. Once the centres of two same-type targets are determined, the distance between them is easily calculated, for example from their relative coordinates and the grid side lengths.
And S6, judging whether the distance is smaller than a preset threshold value.
If the distance between two target objects is smaller than the minimum radius of such an object, the two detections would have to overlap physically, which is impossible for distinct objects, so they can be judged to actually be the same target; motor vehicles, pedestrians and non-motor vehicles are merged in this way. Different target types differ in size, so their preset thresholds differ: for a motor vehicle the corresponding threshold is 1-2 m; for a non-motor vehicle, 0.3-1 m; for a pedestrian, 0.1-0.2 m. These values can also be set and adjusted appropriately based on experience. When the distance between targets is greater than the preset threshold, they are considered different objects; when it is smaller, step S7 is performed.
And S7, merging the two target objects of the same type into the same target object when the distance is smaller than a preset threshold value.
Here the distance between the two target objects is smaller than the reasonably set preset threshold, i.e. the two detections coincide, so they can be judged to actually be the same target object. Repeated information of motor vehicles, non-motor vehicles and pedestrians is merged in this way.
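Pairwise merging leaves one detail open: closeness can chain (detection A near B, B near C, while A and C are farther apart than the threshold). One way to resolve this, shown here as an implementation assumption rather than the patent's prescribed method, is to group transitively with a union-find structure and keep each group's centroid:

```python
def merge_groups(centers, threshold):
    """Group same-type detection centres lying within `threshold` of each
    other, treating closeness as transitive; return one centroid per group."""
    parent = list(range(len(centers)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            (x1, y1), (x2, y2) = centers[i], centers[j]
            if ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 < threshold:
                parent[find(i)] = find(j)  # union the two detections

    groups = {}
    for i, c in enumerate(centers):
        groups.setdefault(find(i), []).append(c)
    # Centroid of each group becomes the merged object's position.
    return [(sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
            for g in groups.values()]
```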
And S8, converting the image information into an electronic map of intersection information according to the intersection to be detected and the merged target object.
After the intersection information and the passing targets inside the intersection are extracted, an electronic map of the intersection information is generated from the intersection and the targets, as shown in fig. 2, to avoid the influence of the redundant data (background information such as street lamps, buildings and trees) in the camera images. In this way only the effective information of the intersection is retained, and the intersection's traffic information is represented more intuitively.
And S9, tracking the real-time position of the target object, and extracting the real-time parameter information of the target object.
Wherein, for the motor vehicle, the extracted real-time parameter information comprises one or more of the following information:
sequence number: assigned in the order in which motor vehicles from a given direction enter the intersection, starting from 00:00:00 of the day; its main function is to count in real time, from 00:00:00 to the current moment, the number of vehicles passing through the intersection and the number of stops;
time: the time when the motor vehicle enters the intersection;
the type of the motor vehicle: large or small vehicle;
Vehicle identification information: a license plate or electronic number plate;
the current position of the motor vehicle;
the current speed of the motor vehicle;
wherein, aiming at the pedestrian, the extracted real-time parameter information comprises one or more of the following information:
the distance from the pedestrian to the curb line;
the distance from the pedestrian to the opposite curb line.
Wherein, for the non-motor vehicle, the extracted real-time parameter information comprises one or more of the following information:
speed of the non-motor vehicle;
and the time at which the non-motor vehicle will exit the intersection along its current driving direction.
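The parameter items listed above for the three target types can be carried in a single per-target record. The following schema is illustrative only; the disclosure lists the information items but specifies no data structure, so all field names here are assumptions.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class TargetRecord:
    """Real-time parameters tracked per target (step S9); hypothetical schema."""
    seq: int                               # order of entry since 00:00:00
    kind: str                              # "motor", "non_motor" or "pedestrian"
    enter_time: float                      # time the target entered the intersection
    position: Tuple[float, float]          # current (x, y) on the road surface
    speed: Optional[float] = None          # motor and non-motor vehicles
    heading: Optional[str] = None          # current direction (motor vehicles)
    plate: Optional[str] = None            # license plate or electronic number plate
    dist_to_exit: Optional[float] = None   # motor: distance to the exit position
    dist_to_curb: Optional[float] = None   # pedestrian: distance to the curb line
```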
And S10, marking the real-time parameter information of the target object on the electronic map for real-time tracking display.
And marking part or all of the parameter information of the motor vehicle, the parameter information of the intersection, the parameter information of the non-motor vehicle and/or the parameter information of the pedestrian on an electronic map.
Based on the various data of all motor vehicles and non-motor vehicles tracked by video, radar and the like, from the intersection centre outward toward the upstream and downstream intersections, together with pedestrian data from the intersection's waiting areas and crosswalks, everything is displayed in real time on the intersection's electronic map, and the target data are output directly to the control module of the signal control system through a network interface. This upgrades the '1'/'0' condition data supplied through the signal controller's detector wiring terminals to a networked input mode carrying many kinds of fused data, raising intersection signal control to a new level.
The invention provides a visual data input method based on holographic perception of intersection information. The method first acquires multiple pieces of image information collected by multiple video devices at an intersection, the information including images of the intersection and of target objects such as motor vehicles, non-motor vehicles and pedestrians. To effectively extract the information related to traffic control at the intersection and avoid repeated and redundant images (such as green belts, buildings and the like), the target objects are first de-duplicated: targets appearing in several images are merged through virtual grids, restoring the useful objects at the intersection. The intersection and the target objects are then converted into an electronic map, from which redundant information irrelevant to traffic control is removed, so that the traffic information at the intersection is displayed as a real-time electronic map that effectively restores the intersection information. This solves the problem of repeated and redundant image information in the images collected by multiple cameras, so that the effective traffic information of the intersection can be monitored more intuitively, improving the intuitiveness and accuracy of intersection traffic information and providing a prerequisite for effective intersection traffic control.
The visual data input method provided by the invention accurately identifies vehicle positions in the intersection through the virtual grid cells, so that the same motor vehicles in the images shot by the intersection's multiple cameras are merged and the vehicles and their positions are identified accurately; tracking those positions yields the vehicles' real-time information and provides basic information for realizing intelligent traffic control.
Through video-tracking or radar-tracking technology, data such as the current accurate position, speed and distance to the stop line of every motor vehicle, and the current accurate position, speed and distance to the curb of every non-motor vehicle, within the intersection's set detection range, are obtained and sent to the data processing unit of the intersection signal controller through a network interface in a specified protocol format. At the same time, all state data of the controlled targets and the current light colours of the signal lamps are displayed accurately in real time on the intersection's plane electronic map. Through this map, the method can visually verify whether the actual positions of all motor vehicles, non-motor vehicles and pedestrians within the detection range at the current moment are consistent with the positions displayed on the map.
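The disclosure specifies only that target state is sent to the signal controller's data processing unit through a network interface in an agreed protocol format, without naming that format. As a purely hypothetical framing of the idea (every field name here is an assumption), the state could be serialized as JSON:

```python
import json
import time

def controller_message(intersection_id, records):
    """Package tracked-target state for the signal controller's networked
    input (hypothetical JSON framing; the patent fixes no wire format)."""
    return json.dumps({
        "intersection": intersection_id,
        "timestamp": time.time(),
        "targets": [
            {"kind": k, "x": x, "y": y, "speed": v}
            for (k, x, y, v) in records
        ],
    })
```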
Furthermore, the present invention provides an electronic device for traffic intersection intelligent control, the device comprising:
a processor; and
a memory communicatively coupled to the processor and storing computer-readable instructions executable by the processor; when the computer-readable instructions are executed, the processor performs the visual data input method described above.
Finally, the present invention provides a storage medium storing computer instructions for causing a computer to perform a method for visualized data input based on holographic perception of intersection information according to the above.
Specifically, fig. 3 shows a schematic structural diagram of a computer system 600 suitable for implementing the visualized data input method or processor based on holographic perception of intersection information according to the embodiment of the invention, and the system shown in fig. 3 implements corresponding functions of an electronic device and a processor.
As shown in fig. 3, the computer system 600 includes a Central Processing Unit (CPU)601 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the system 600 are also stored. The CPU 601, ROM 602, and RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a display such as a Cathode Ray Tube (CRT) or a Liquid Crystal Display (LCD), and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card, a modem, or the like. The communication section 609 performs communication processing via a network such as the internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as necessary, so that a computer program read therefrom is installed into the storage section 608 as necessary.
In particular, according to embodiments of the present disclosure, the process described above with reference to fig. 1 may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing the method of fig. 1. In such embodiments, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It should be understood that the above embodiments are only examples given to clearly illustrate the present invention and are not intended to limit it. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to list all embodiments exhaustively here. Obvious variations or modifications derived therefrom remain within the scope of the invention.

Claims (10)

1. A visualized data input method based on holographic perception of intersection information is characterized by comprising:
acquiring a plurality of items of image information of an intersection captured by a plurality of video acquisition devices, wherein the image information comprises the intersection to be detected and position information of target objects, and the target object types comprise motor vehicles, non-motor vehicles and pedestrians;
dividing the intersection to be detected into a plurality of virtual grid areas;
identifying a virtual grid area where a target object in the plurality of image information is located;
calculating the center position of the target object according to the virtual grid area where the target object is located;
calculating the distance between the center positions of any two target objects of the same type;
judging whether the distance is smaller than a preset threshold value or not;
when the distance is smaller than a preset threshold value, combining the two target objects of the same type into the same target object;
converting the plurality of image information into an electronic map of intersection information according to the intersection to be detected and the combined target object;
tracking the real-time position of a target object, and extracting the real-time parameter information of the target object;
and marking the real-time parameter information of the target object on the electronic map for real-time tracking display.
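The grid, center-position and merging steps of claim 1 can be sketched as follows. This is a minimal illustration: the grid side lengths and merge thresholds reuse mid-range values from claims 3-4, while the function names and the choice of the grid-cell center as the "center position" are our own assumptions, not the patent's stated implementation.

```python
import math

# Grid side length and merge threshold per target type (values chosen
# from within the ranges of claims 3-4).
GRID_SIDE = {"motor": 1.5, "non-motor": 0.5, "pedestrian": 0.15}
MERGE_THRESHOLD = {"motor": 1.5, "non-motor": 0.5, "pedestrian": 0.15}

def grid_cell(x, y, side):
    """Virtual grid cell containing the point (x, y)."""
    return (int(x // side), int(y // side))

def cell_center(cell, side):
    """Center position of a virtual grid cell."""
    i, j = cell
    return ((i + 0.5) * side, (j + 0.5) * side)

def merge_detections(detections):
    """Merge any two same-type detections whose center positions are
    closer than the preset threshold into a single target object.
    `detections` is a list of (type, x, y) tuples."""
    centers = []
    for t, x, y in detections:
        side = GRID_SIDE[t]
        centers.append((t, cell_center(grid_cell(x, y, side), side)))
    merged = []
    for t, c in centers:
        duplicate = any(
            t == t2 and math.dist(c, c2) < MERGE_THRESHOLD[t]
            for t2, c2 in merged)
        if not duplicate:
            merged.append((t, c))
    return merged
```

For example, two pedestrian detections a few centimeters apart (the same person seen by two cameras) fall below the pedestrian threshold and collapse into one target, while a motor vehicle elsewhere in the intersection is kept as its own target.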
2. The method of claim 1, wherein the grid side lengths of the different regions are set separately, and the preset thresholds for the different target object types are set separately.
3. The method according to claim 1, wherein the grid side length is set to 0.1-0.2 m in the pedestrian crossing area and the pedestrian crossing waiting area, to 1-2 m in the motor vehicle lane areas within the intersection, and to 0.3-1 m in the non-motor vehicle lane areas.
4. The method according to claim 3, wherein, when the target object type is a motor vehicle, the corresponding preset threshold is 1-2 m; when the target object type is a non-motor vehicle, the corresponding preset threshold is 0.3-1 m; and when the target object type is a pedestrian, the corresponding preset threshold is 0.1-0.2 m.
5. The method according to any one of claims 1 to 4, wherein, when the target object is a motor vehicle, the extracted real-time parameter information comprises one or more of the following:
sequence number: the order in which motor vehicles from a given direction enter the intersection, counted from 00:00:00;
time: the time when the motor vehicle enters the intersection;
motor vehicle type: a large or small vehicle;
vehicle identification information: a license plate or electronic number plate;
the current position of the motor vehicle;
the current speed of the motor vehicle;
the current direction of the motor vehicle;
distance of the motor vehicle from the crossing exit position;
the time when the motor vehicle exits the intersection;
and the exit lane through which the motor vehicle leaves the intersection in its direction of travel.
6. The method of any one of claims 1-4, wherein the parameter information of the intersection comprises one or more of the following:
the intersection identification information, the phase area, the pedestrian crossing waiting area, the non-motor vehicle lane area, the intersection inner area and the exit lane number.
7. The method according to any one of claims 1 to 4, wherein when the target object is a pedestrian, the extracted real-time parameter information comprises one or more of the following information:
the distance from the pedestrian to the curb line;
the distance from the pedestrian to the opposite curb line.
8. The method according to any one of claims 1 to 4, wherein, when the target object is a non-motor vehicle, the extracted real-time parameter information comprises one or more of the following:
speed of the non-motor vehicle;
and the time at which the non-motor vehicle exits the intersection in its current direction of travel.
9. The method according to any one of claims 1 to 4, wherein part or all of the parameter information of the motor vehicle, the parameter information of the intersection, the parameter information of the non-motor vehicle and/or the parameter information of the pedestrian is marked on the electronic map.
10. The method of any one of claims 1-4, wherein the intersection comprises the area enclosed by the intersection stop lines and their extension lines.
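The final steps of claim 1 and the marking of claim 9 — tracking targets and marking their real-time parameters on the electronic map for display — can be sketched minimally as below. The class and field names are illustrative assumptions; the patent does not specify a data structure for the map.

```python
# Minimal sketch of the tracking-display step: on each update cycle,
# each tracked target's latest real-time parameters are marked on the
# electronic map, keyed by target id.
class ElectronicMap:
    def __init__(self, intersection_id):
        self.intersection_id = intersection_id
        self.marks = {}  # target id -> latest parameter record

    def mark(self, target_id, params):
        """Mark (or refresh) a target's real-time parameters on the map."""
        self.marks[target_id] = params

emap = ElectronicMap("X-101")
emap.mark("V001", {"type": "motor", "pos": (12.4, 3.1), "speed_mps": 8.6})
emap.mark("P007", {"type": "pedestrian", "dist_to_curb_m": 2.3})
```

Re-marking under the same target id on each cycle is what makes the display a real-time track rather than an accumulating history.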
CN201910727177.4A 2019-08-07 2019-08-07 Visual data input method based on holographic perception of intersection information Active CN112349087B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910727177.4A CN112349087B (en) 2019-08-07 2019-08-07 Visual data input method based on holographic perception of intersection information


Publications (2)

Publication Number Publication Date
CN112349087A CN112349087A (en) 2021-02-09
CN112349087B true CN112349087B (en) 2021-10-15

Family

ID=74367270

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910727177.4A Active CN112349087B (en) 2019-08-07 2019-08-07 Visual data input method based on holographic perception of intersection information

Country Status (1)

Country Link
CN (1) CN112349087B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113593271B (en) * 2021-07-09 2022-06-03 青岛开元科润电子有限公司 Traffic signal control system
CN113470009A (en) * 2021-07-26 2021-10-01 浙江大华技术股份有限公司 Illegal umbrella opening detection and identification method and device, electronic equipment and storage medium
CN115346374B (en) * 2022-08-30 2023-08-22 北京星云互联科技有限公司 Intersection holographic perception method and device, edge computing equipment and storage medium

Citations (5)

Publication number Priority date Publication date Assignee Title
JP2000113374A (en) * 1998-09-30 2000-04-21 Nippon Signal Co Ltd:The Device for extracting vehicle
CN102768799A (en) * 2011-12-21 2012-11-07 湖南工业大学 Method for detecting red light running of vehicle at night
CN106297330B (en) * 2016-08-29 2019-02-12 安徽科力信息产业有限责任公司 Reduce the method and system that pedestrian's street crossing influences plane perceptual signal control efficiency
CN109509345A (en) * 2017-09-15 2019-03-22 富士通株式会社 Vehicle detection apparatus and method
CN109935080A (en) * 2019-04-10 2019-06-25 武汉大学 The monitoring system and method that a kind of vehicle flowrate on traffic route calculates in real time


Non-Patent Citations (1)

Title
A survey of person re-identification research; Song Wanru et al.; CAAI Transactions on Intelligent Systems; 2017-11-09 (No. 06); full text *


Similar Documents

Publication Publication Date Title
US9224049B2 (en) Detection of static object on thoroughfare crossings
CN106652465B (en) Method and system for identifying abnormal driving behaviors on road
KR102197946B1 (en) object recognition and counting method using deep learning artificial intelligence technology
CN106647776B (en) Method and device for judging lane changing trend of vehicle and computer storage medium
CN112349087B (en) Visual data input method based on holographic perception of intersection information
US10212397B2 (en) Abandoned object detection apparatus and method and system
CN111369831A (en) Road driving danger early warning method, device and equipment
CN105493502A (en) Video monitoring method, video monitoring system, and computer program product
CN110718061B (en) Traffic intersection vehicle flow statistical method and device, storage medium and electronic equipment
JP2023527265A (en) Method and device for detecting traffic abnormality, electronic device, storage medium and computer program
CN110738150B (en) Camera linkage snapshot method and device and computer storage medium
CN104282154A (en) Vehicle overload monitoring system and method
CN110942038A (en) Traffic scene recognition method, device, medium and electronic equipment based on vision
CN106446807A (en) Well lid theft detection method
CN113822285A (en) Vehicle illegal parking identification method for complex application scene
WO2021008039A1 (en) Systems and methods for object monitoring
WO2018068312A1 (en) Device and method for detecting abnormal traffic event
CN104463913A (en) Intelligent illegal parking detection device and method
WO2020210960A1 (en) Method and system for reconstructing digital panorama of traffic route
CN110660225A (en) Red light running behavior detection method, device and equipment
CN113538968B (en) Method and apparatus for outputting information
Panda et al. Application of Image Processing In Road Traffic Control
CN212570057U (en) Road driving danger early warning device and equipment
US11314974B2 (en) Detecting debris in a vehicle path
CN114333409A (en) Target tracking method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant