WO2018153211A1 - Method and apparatus for obtaining traffic road condition information, and computer storage medium - Google Patents

Method and apparatus for obtaining traffic road condition information, and computer storage medium

Info

Publication number
WO2018153211A1
Authority
WO
WIPO (PCT)
Prior art keywords
moving object
moving
video image
information
road
Prior art date
Application number
PCT/CN2018/074248
Other languages
English (en)
Chinese (zh)
Inventor
李洁
张恒生
贾霞
钱煜明
杨勇
沙文
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司
Publication of WO2018153211A1

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/01 - Detecting movement of traffic to be counted or controlled
    • G08G1/0104 - Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0137 - Measuring and analyzing of parameters relative to traffic conditions for specific applications
    • G08G1/0141 - Measuring and analyzing of parameters relative to traffic conditions for specific applications for traffic information dissemination
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/01 - Detecting movement of traffic to be counted or controlled
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/09 - Arrangements for giving variable traffic instructions
    • G08G1/0962 - Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0967 - Systems involving transmission of highway information, e.g. weather, speed limits

Definitions

  • Embodiments of the present invention relate to, but are not limited to, the field of information technology, and in particular, to a method and apparatus for acquiring traffic condition information, and a computer storage medium.
  • The transportation system has become increasingly complex.
  • With accurate road condition information, users can better plan their itineraries, avoid unnecessary traffic congestion, help ensure the continuity and integrity of road traffic operation, and realize intelligent management of the traffic network, creating a more efficient and orderly transportation environment.
  • The Internet of Vehicles provides a new direction for the traffic network.
  • However, the practical application and promotion of the Internet of Vehicles are still limited to a considerable extent.
  • A vehicle network built on the satellite positioning and wireless communication capabilities of mobile intelligent terminals, which enables more users to participate in an accurate road condition sensing and sharing system, is therefore an inevitable trend.
  • Embodiments of the present invention are directed to providing a method and apparatus for acquiring traffic road condition information, and a computer storage medium for accurately obtaining road condition information.
  • An embodiment of the present invention provides a method for acquiring traffic road condition information, including: acquiring a video image of road video surveillance; detecting an object entering a preset monitoring area in the video image; and feeding back the detected information in real time.
  • the process of detecting an object entering a preset monitoring area in the video image comprises: performing feature recognition on an object entering a preset monitoring area in the video image.
  • the performing feature recognition on the object entering the preset monitoring area in the video image comprises: detecting feature values of the moving objects in two images; and matching the feature values of the moving objects pairwise, identifying the moving objects in the two images whose feature values are closest (i.e., whose feature matching function peaks) as the same object.
  • the method further comprises: extracting one or more feature values that uniquely represent the target moving object to represent the target moving object.
  • the method further includes: updating the road background in real time, and separating the moving object from the road background.
  • the detecting the object entering the preset monitoring area in the video image comprises: performing shadow elimination on the moving object.
  • the detecting the object entering the preset monitoring area in the video image further includes: if mutual occlusion occurs between different moving objects, segmenting the moving objects that are mutually occluded.
  • the detecting the object entering the preset monitoring area in the video image further includes: when the distance between different moving objects in the traveling direction is much greater than the distance a moving object moves between two consecutive frames, determining that the detections whose positions in the two consecutive frames are closest belong to the same moving object.
  • the detecting the object entering the preset monitoring area in the video image comprises: detecting a moving object entering a preset monitoring area in the video image, and acquiring running state information of the moving object; and forming a moving object time-displacement map and/or a moving object trajectory graph according to the running state information of the moving object.
  • the obtaining the video image of the road video surveillance comprises: receiving location information of the specified moving object and a request for the traffic condition information of interest.
  • the real-time feedback of the detected information includes: calculating a traffic condition of the specified moving object, and comparing it with the preset parameter to determine whether the road ahead of the specified moving object's location is congested; and sending the congestion condition to the specified moving object.
  • the embodiment of the present invention further provides an apparatus for acquiring traffic road condition information, which is applied to a vehicle network cloud platform, comprising: an acquisition module configured to acquire a video image of road video surveillance; a processing module configured to detect an object entering a preset monitoring area in the video image; and a feedback module configured to feed back the detected information in real time.
  • the processing module is configured to perform feature recognition on an object entering a preset monitoring area in the video image.
  • the processing module is configured to detect feature values of the moving objects in two images, match the feature values of the moving objects pairwise, and identify the moving objects in the two images whose feature values are closest as the same object.
  • the processing module is configured to extract one or more features that uniquely represent the target moving object to represent the target moving object.
  • the processing module is configured to update the road background in real time by using a video image background update technology to separate the moving object from the road background.
  • the processing module is further configured to perform shadow elimination on the moving object, and if there is mutual occlusion between different moving objects, the moving objects that are mutually occluded are segmented.
  • the processing module is configured to determine, when the distance between different moving objects in the traveling direction is far greater than the distance a moving object moves between two consecutive frames, that the detections whose positions in the two consecutive frames are closest belong to the same moving object.
  • the processing module is configured to detect a moving object that enters a preset monitoring area in the video image, acquire running state information of the moving object, and form a moving object time-displacement map and/or a moving object trajectory graph according to the running state information of the moving object.
  • the acquiring module is configured to acquire a video image of the related road video surveillance after receiving the location information sent by the specified moving object and the request for the traffic condition information of interest.
  • the processing module is configured to calculate the traffic condition in which the specified moving object is located, compare it with a preset parameter to determine whether the road ahead of the specified moving object is congested, and send the traffic information to the specified moving object.
  • the embodiment of the present invention further provides a system for acquiring traffic condition information, including a moving object and a device for acquiring traffic condition information according to an embodiment of the present invention.
  • An embodiment of the present invention further provides an apparatus for acquiring traffic condition information, including a processor and a memory for storing a computer program capable of running on the processor, the processor being configured to perform the steps of the method of the embodiments of the present invention when running the computer program.
  • the embodiment of the present invention further provides a computer storage medium on which a computer program is stored, and when the computer program is executed by the processor, the steps of the method in the embodiment of the present invention are implemented.
  • the method and device for acquiring traffic road condition information and the computer storage medium provided by the embodiments of the present invention can accurately obtain the road condition information.
  • FIG. 1 is a flowchart of a method for acquiring traffic road condition information according to Embodiment 1 of the present invention
  • FIG. 2 is a flowchart of a method for obtaining traffic road condition information according to Embodiment 2 of the present invention
  • FIG. 3 is a flowchart of a method for obtaining traffic road condition information according to Embodiment 3 of the present invention.
  • FIG. 4 is a schematic diagram of setting a tracking area according to Embodiment 3 of the present invention.
  • FIG. 5 is a schematic diagram of an apparatus for acquiring traffic road condition information according to an embodiment of the present invention.
  • FIG. 1 is a flowchart of a method for obtaining traffic road condition information according to an embodiment of the present invention. As shown in FIG. 1 , the method in this embodiment includes:
  • Step 11: Obtain a video image of road video surveillance.
  • Step 12: Detect an object entering a preset monitoring area in the video image.
  • Step 13: Feed back the detected information in real time.
  • In this way, specific road condition information on the road can be fed back in real time, for example pedestrian situations, road emergencies, traffic accidents, vehicle lane changes, the specific degree of congestion, whether there are emergency vehicles such as ambulances, fire engines or rescue vehicles ahead or behind, and other comprehensive surrounding traffic information, including conditions that the user cannot see directly.
  • the process of detecting an object entering a preset monitoring area in the video image includes: performing feature recognition on an object entering a preset monitoring area in the video image.
  • the method further comprises: extracting one or more feature values that uniquely represent the target moving object to represent the target moving object.
  • the method further includes: real-time updating the road background by using a video image background update technology, and separating the moving object from the road background.
  • the detecting the object entering the preset monitoring area in the video image further includes: performing shadow elimination on the moving object, and, if mutual occlusion occurs between different vehicles, segmenting the vehicles that are occluded from each other.
  • the detecting the object entering the preset monitoring area in the video image further includes: when the distance between different moving objects in the traveling direction is much greater than the distance a moving object moves between two consecutive frames, determining that the detections whose positions in the two consecutive frames are closest belong to the same moving object.
  • the detecting the object entering the preset monitoring area in the video image comprises: detecting a moving object entering a preset monitoring area in the video image, and acquiring running state information of the moving object; and forming a moving object time-displacement map and/or a moving object trajectory graph according to the running state information of the moving object.
  • the obtaining the video image of the road video surveillance comprises: receiving location information of the specified moving object and a request for the traffic condition information of interest.
  • the real-time feedback of the detected information includes: calculating a traffic condition of the specified moving object, and comparing it with the preset parameter to determine whether the road ahead of the specified moving object's location is congested; and sending the congestion condition to the specified moving object.
  • the method of the embodiment can accurately obtain the road condition information by detecting the object in the preset monitoring area in the video image.
  • The method for obtaining traffic road condition information in this embodiment can provide good assisted-driving information sharing and, as shown in FIG. 2, includes the following steps:
  • Step 101: The in-vehicle mobile terminal acquires the accurate location information of the vehicle in real time, and acquires the traffic condition information demands that the driver independently selects and pays attention to;
  • Step 102: The vehicle-mounted mobile terminal sends the real-time location information of the vehicle, together with the traffic condition information demands independently selected by the driver, to the vehicle network cloud platform;
  • Step 103: The vehicle network cloud platform performs data processing and analysis according to information such as traffic flow, vehicle speed, vehicle distance, occupancy rate, travel time, and real-time road condition video;
  • Step 104: The vehicle network cloud platform fuses the data processed in step 103 with other important influencing information (for example, information issued by the traffic control department, such as temporary traffic control or road closures), and obtains comprehensive information on a series of traffic conditions invisible to the user, such as pedestrian situations, road emergencies, traffic accidents, vehicle lane changes, specific levels of congestion, whether there are emergency vehicles such as ambulances, fire engines or rescue vehicles, and other surrounding traffic information;
  • Step 105: Using the comprehensive traffic road condition information, the vehicle network cloud platform calculates the traffic condition around the current vehicle and compares it with the set parameters to determine whether the road ahead of the vehicle's current location is congested. If it is congested, go to step 106; if not, return to step 101 (a minimal sketch of such a threshold comparison is given after step 106);
  • Step 106: The vehicles behind receive the prompt information, so that they can take lane-change or detour measures in advance.
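  • The comparison in step 105 is not specified beyond "compare with the set parameters". The following minimal Python sketch shows one plausible reading, in which the cloud platform compares the average speed and lane occupancy of the road section ahead against hypothetical thresholds; the threshold values and field names are assumptions, not part of this disclosure.

```python
# Hypothetical congestion check for step 105: compare measured section
# statistics against preset parameters. Threshold values are illustrative only.

SPEED_THRESHOLD_KMH = 20.0   # assumed: below this average speed -> congested
OCCUPANCY_THRESHOLD = 0.6    # assumed: above this lane occupancy -> congested

def is_congested(avg_speed_kmh: float, occupancy: float) -> bool:
    """Return True if the road section ahead is considered congested."""
    return avg_speed_kmh < SPEED_THRESHOLD_KMH or occupancy > OCCUPANCY_THRESHOLD

def handle_vehicle_request(section_stats: dict) -> dict:
    """Steps 105-106: decide, and if congested build the prompt message."""
    congested = is_congested(section_stats["avg_speed_kmh"],
                             section_stats["occupancy"])
    if congested:
        return {"congested": True,
                "advice": "Road ahead congested; consider changing lanes or detouring."}
    return {"congested": False, "advice": None}

# Example with made-up measurements:
print(handle_vehicle_request({"avg_speed_kmh": 12.5, "occupancy": 0.75}))
```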
  • This embodiment provides a method for acquiring traffic road condition information that uses road video monitoring to track moving targets, obtains isolated data for each independent motion unit in the road video monitoring, and reveals the relationships and connections between the independent motion units in the image sequence.
  • a method for tracking a moving vehicle in road video surveillance includes the following steps:
  • Step 201: Acquire a video image sequence of road video surveillance.
  • Step 202: Set the tracking area; the setting mainly considers the starting line, the ending line, and the constraint lines on both sides of the area.
  • Considering that the running speed of vehicles on the road is generally between 0 and 180 km/h and that the video frame rate of the camera is between 10 and 25 frames per second, the actual distance between the start line and the end line is required to be 15 m.
  • Urban roads usually have running speeds below 100 km/h, in which case the actual distance between the starting line and the ending line should be greater than 8.3 m (a short calculation reproducing these figures is given below).
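  • As a sanity check on these distances, one plausible reading (an assumption, not stated in the text) is that the region between the start and end lines must contain a vehicle in at least about three consecutive frames at the lowest frame rate of 10 frames per second:

```python
# Distance a vehicle travels between consecutive frames, and the region length
# needed to observe it in at least `min_frames` frames (assumed to be 3).

def region_length_m(speed_kmh: float, fps: float, min_frames: int = 3) -> float:
    metres_per_frame = (speed_kmh / 3.6) / fps
    return min_frames * metres_per_frame

print(round(region_length_m(180, 10), 1))  # 15.0 m  (maximum speed case)
print(round(region_length_m(100, 10), 1))  # 8.3 m   (urban road case)
```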
  • The constraint lines on both sides mainly remove the influence of interfering factors: sidewalks, buildings and the like are usually captured within the camera's field of view, and constraint lines are set to exclude their influence (see the region-of-interest sketch below).
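  • The text does not say how the constraint lines are applied. A common implementation, sketched here with OpenCV as an assumed dependency, builds a polygonal region-of-interest mask from the start line, end line, and the two side constraint lines, and blanks out pixels outside it before detection.

```python
import numpy as np
import cv2

def make_tracking_mask(frame_shape, corners):
    """Build a binary mask for the tracking area.

    corners: four (x, y) points, e.g. the intersections of the start/end lines
    with the two side constraint lines, given in clockwise order (illustrative).
    """
    mask = np.zeros(frame_shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, [np.array(corners, dtype=np.int32)], 255)
    return mask

def apply_tracking_mask(frame, mask):
    """Keep only pixels inside the tracking area; everything else becomes black."""
    return cv2.bitwise_and(frame, frame, mask=mask)

# Example with an arbitrary quadrilateral on a 720x1280 frame:
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
mask = make_tracking_mask(frame.shape, [(300, 200), (980, 200), (1100, 700), (180, 700)])
roi = apply_tracking_mask(frame, mask)
```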
  • FIG. 4 is a schematic diagram of setting a tracking area.
  • Step 203: Data collection, using video image background update technology to update the road background in real time and obtain the corresponding video image background data. This step further comprises:
  • Step 2031: Background update. Moving target detection needs to effectively separate the moving target from the background; since the background image changes with weather and the alternation of day and night, the background must be updated in real time.
  • Step 2032: Background generation. The background generally does not change significantly over a short time, so the background at a certain moment is taken as the background for motion detection of tracked vehicles in the tracking area over a short period (a minimal running-average background sketch follows).
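  • Steps 2031-2032 do not name a specific background-update technique. A minimal sketch of one common choice, a running-average background model implemented with OpenCV (an assumed dependency, not named in the text), is:

```python
import cv2
import numpy as np

ALPHA = 0.01  # assumed learning rate: how quickly the background adapts

def update_background(background, frame):
    """Step 2031: blend the current frame into a float32 background image."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    if background is None:
        return gray                  # step 2032: take the first frame as background
    cv2.accumulateWeighted(gray, background, ALPHA)
    return background

# Example usage on a video file (the path is illustrative):
cap = cv2.VideoCapture("road.avi")
background = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    background = update_background(background, frame)
cap.release()
```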
  • Step 204: Detect moving objects and segment the moving targets from the background.
  • Optional methods include, but are not limited to, at least one of the following: moving target detection based on Kalman filtering, moving target detection based on background difference, moving target detection based on statistical probability, moving target detection based on inter-frame difference, moving target detection based on edge detection, and motion-based detection methods (a minimal background-difference sketch follows).
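  • Of the listed options, background difference is the simplest to illustrate. The sketch below (OpenCV 4.x assumed; the threshold and minimum blob area are arbitrary choices) differences the current frame against the background from step 203 and returns candidate moving objects as bounding boxes.

```python
import cv2
import numpy as np

DIFF_THRESHOLD = 30   # assumed grey-level difference marking "moving" pixels
MIN_AREA = 400        # assumed minimum blob area in pixels

def detect_moving_objects(frame, background):
    """Background-difference detection: return bounding boxes of moving blobs."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    diff = cv2.absdiff(gray, background)
    _, fg = cv2.threshold(diff.astype(np.uint8), DIFF_THRESHOLD, 255, cv2.THRESH_BINARY)
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= MIN_AREA]
```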
  • Step 205: Shadow elimination. The presence of vehicle shadows makes shadow areas easy to misidentify as motion areas, and when vehicles are close to each other their shadows may cause adjacent vehicles to be merged together; the shadow of a moving vehicle also moves along with it.
  • When the target is extracted, the shadow should not be included in the target; the target is the vehicle itself.
  • Optional methods include, but are not limited to, at least one of the following: model-based vehicle shadow detection, feature-point-based vehicle shadow detection, color-space-based shadow detection, and vehicle-contour-based detection (a minimal color-space sketch follows).
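  • A minimal color-space sketch of shadow removal (one of the listed options, not necessarily the one intended here) relies on the common observation that shadow pixels are darker than the background while keeping similar hue and saturation; the specific ratios below are assumptions.

```python
import cv2
import numpy as np

def remove_shadow_pixels(frame_bgr, background_bgr, fg_mask):
    """Drop foreground pixels that look like shadow rather than vehicle.

    A pixel is treated as shadow if it is darker than the background by a
    bounded factor while its hue and saturation stay close to the background.
    """
    hsv_f = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv_b = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    v_ratio = (hsv_f[..., 2] + 1.0) / (hsv_b[..., 2] + 1.0)
    sat_diff = np.abs(hsv_f[..., 1] - hsv_b[..., 1])
    hue_diff = np.abs(hsv_f[..., 0] - hsv_b[..., 0])
    # Assumed thresholds: 40-90 % of the background brightness, similar colour.
    shadow = (v_ratio > 0.4) & (v_ratio < 0.9) & (sat_diff < 40) & (hue_diff < 20)
    cleaned = fg_mask.copy()
    cleaned[shadow] = 0
    return cleaned
```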
  • Step 206: Occlusion identification. Because the position, angle, and height of the fixed camera relative to the monitored road section are not always ideal, mutual occlusion between different vehicles inevitably occurs in the captured images.
  • Mutual occlusion has the greatest impact on the accuracy and scope of the road traffic information video acquisition system.
  • If occlusion occurs, step 207 is performed; if no occlusion occurs, step 208 is performed;
  • Step 207: Perform occlusion segmentation.
  • Optional methods include, but are not limited to, at least one of the following: contour-feature segmentation, statistical prediction, morphological-model-based occlusion elimination, and the like (an illustrative watershed-based sketch follows).
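  • None of the listed occlusion-segmentation methods is detailed in the text. As an illustration only, the sketch below splits a merged foreground blob into separate vehicles using a distance transform and the watershed algorithm (OpenCV assumed); it is a simple stand-in for a morphological approach, not necessarily the method intended here.

```python
import cv2
import numpy as np

def split_merged_blob(frame_bgr, blob_mask):
    """Split mutually occluding vehicles detected as a single blob.

    blob_mask: uint8 mask (255 = foreground) containing the merged blob.
    Returns a label image in which each separated vehicle has its own label
    and watershed boundaries are marked with -1.
    """
    dist = cv2.distanceTransform(blob_mask, cv2.DIST_L2, 5)
    # Assumed: points farther than 60 % of the peak distance are sure vehicle cores.
    _, cores = cv2.threshold(dist, 0.6 * dist.max(), 255, cv2.THRESH_BINARY)
    cores = cores.astype(np.uint8)
    _, markers = cv2.connectedComponents(cores)
    markers = markers + 1                     # background label becomes 1, cores 2..n
    unknown = cv2.subtract(blob_mask, cores)  # pixels to be assigned by watershed
    markers[unknown == 255] = 0
    markers = cv2.watershed(frame_bgr, markers)
    return markers
```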
  • Step 208: Vehicle feature extraction. Whether detections at different positions in different frames of the video image are the same vehicle is determined according to main feature information such as the vehicle's position, model, color, and shape.
  • Step 209: Extract one or more features that uniquely represent the target vehicle, and use them to represent the moving target vehicle for vehicle tracking (a minimal feature-vector sketch follows).
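  • The text lists position, model, color, and shape as the main features but does not fix a representation. A minimal per-vehicle feature record (centroid, bounding-box size, aspect ratio, and a colour histogram; the exact composition is an assumption) could look like this:

```python
import cv2

def extract_vehicle_features(frame_bgr, box):
    """Build a simple feature record for one detected vehicle.

    box: (x, y, w, h) bounding rectangle from the detection step.
    """
    x, y, w, h = box
    patch = frame_bgr[y:y + h, x:x + w]
    hist = cv2.calcHist([patch], [0, 1, 2], None, [8, 8, 8],
                        [0, 256, 0, 256, 0, 256])
    hist = cv2.normalize(hist, hist).flatten()
    return {
        "centroid": (x + w / 2.0, y + h / 2.0),   # position cue
        "size": (w, h),                            # rough shape/model cue
        "aspect_ratio": w / float(h),
        "color_hist": hist,                        # colour signature
    }
```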
  • Step 210: Motion detection. During tracking it is possible to encounter a complicated road background, such as low-visibility weather (heavy fog, heavy rain), lighting conditions that change within a short time (lightning, glare, etc.), or full occlusion between a large vehicle and a car, causing the detection result for a moving vehicle to be distorted or the moving vehicle to be lost, so motion detection is required. When external conditions are good, motion detection of moving vehicles works well; if the motion detection effect is poor, step 212 is performed.
  • Motion detection detects the motion of vehicles in the surveillance video.
  • In general the background of the video is static while the vehicles are moving, so vehicles can be locked onto by this property without tracking roadside trees, buildings, and the like.
  • Under a complex road background, such as low-visibility weather (heavy fog, heavy rain) or lighting that changes within a short period (lightning, glare, etc.), the algorithm is greatly disturbed when locking onto moving objects; however, a background that changes over a short time does not belong to a moving vehicle, so motion detection is necessary for vehicle tracking.
  • Step 211: Feature matching tracking. The change in the target's information is calculated from a correlation function of the moving object between two images; the matching degree of the targets in the two images is highest when the correlation function peaks, which is the best match. Then go to step 213.
  • The dynamic video is divided into static pictures, one per frame; the feature values of the vehicles in two static pictures are compared and matched pairwise, and if the feature values are closest (the correlation function peaks), that is the best match; the best-matching vehicles in the two static pictures are considered to be the same vehicle in the dynamic video, achieving dynamic target tracking (a minimal histogram-correlation matching sketch follows).
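  • A minimal sketch of the pairwise matching described above, using the correlation between the colour histograms from the step 208/209 feature records as the "feature function" (OpenCV assumed; the greedy best-match assignment and the cutoff value are implementation choices, not specified in the text):

```python
import cv2

def match_vehicles(prev_features, curr_features, min_corr=0.7):
    """Greedy pairwise matching of vehicles between two frames.

    prev_features / curr_features: lists of feature records as produced in
    step 209, each containing a "color_hist". min_corr is an assumed cutoff.
    Returns (prev_index, curr_index) pairs judged to be the same vehicle.
    """
    pairs = []
    for i, prev in enumerate(prev_features):
        for j, curr in enumerate(curr_features):
            corr = cv2.compareHist(prev["color_hist"], curr["color_hist"],
                                   cv2.HISTCMP_CORREL)
            pairs.append((corr, i, j))
    pairs.sort(reverse=True)                      # best correlations first
    matches, used_prev, used_curr = [], set(), set()
    for corr, i, j in pairs:
        if corr < min_corr:
            break
        if i not in used_prev and j not in used_curr:
            matches.append((i, j))
            used_prev.add(i)
            used_curr.add(j)
    return matches
```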
  • Step 212: Vehicle trajectory prediction tracking. Optional methods include, but are not limited to, Kalman filtering, the GM(1,1) grey model, and the like (a minimal constant-velocity Kalman prediction sketch follows).
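  • A minimal constant-velocity Kalman filter for step 212, predicting a vehicle's next centroid when feature matching is unreliable (cv2.KalmanFilter from OpenCV; the state layout and noise values are assumptions):

```python
import cv2
import numpy as np

def make_centroid_kalman(dt=0.1):
    """State = [x, y, vx, vy]; measurement = [x, y] (centroid in pixels)."""
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, dt, 0],
                                    [0, 1, 0, dt],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], dtype=np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], dtype=np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2      # assumed
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1  # assumed
    return kf

observations = [(100, 400), (110, 398), (121, 395)]  # tracked centroids so far
kf = make_centroid_kalman()
# Initialise the state with the first observation (velocity unknown, so zero).
kf.statePost = np.array([[observations[0][0]], [observations[0][1]], [0], [0]],
                        dtype=np.float32)
kf.errorCovPost = np.eye(4, dtype=np.float32)
for cx, cy in observations[1:]:
    kf.predict()
    kf.correct(np.array([[cx], [cy]], dtype=np.float32))
prediction = kf.predict()
print(prediction[:2].ravel())  # predicted centroid for the next frame
```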
  • Step 213: Matching information. The distance between different vehicles in the direction of travel on the road is much larger than the distance a vehicle moves between two consecutive frames, so in the traveling direction the detections that are closest between the two frames are considered to be the same vehicle; matching and tracking of dynamic vehicles is completed on this basis.
  • Step 214: Detect whether there is a new moving vehicle in the tracking area; if yes, perform step 215; if not, perform step 216.
  • Step 215: Assign a vehicle number to the new moving vehicle in the tracking area, and then perform step 216.
  • Step 216: Perform continuous-frame processing on the moving vehicles entering the tracking area according to the target vehicle feature extraction results, and obtain tracking information for continuous trajectory analysis of the target vehicles.
  • Step 217: The state of the vehicles in the tracking area is constantly changing, so the contents recorded in the tracking information table are dynamically updated as the target vehicles change.
  • Step 218: Store tracking information. Vehicles in the video image are located and tracked to obtain discrete vehicle running-state information, mainly including displacement-time information, speed information, size information, flow information, density information, occupancy information, and similar traffic information, from which a vehicle time-displacement map, a vehicle trajectory graph, and the like are formed (a minimal sketch deriving speed and a time-displacement record from tracked centroids follows).
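  • The running-state quantities in step 218 can be derived directly from the per-frame centroids produced by tracking. The sketch below computes per-frame speed and accumulates a time-displacement record of the kind used for the vehicle time-displacement map; the pixel-to-metre scale and frame rate are assumed calibration inputs.

```python
import math

METRES_PER_PIXEL = 0.05   # assumed calibration of the camera view
FPS = 25.0                # assumed video frame rate

def running_state(track):
    """track: list of (frame_index, cx, cy) centroids for one vehicle.

    Returns a time-displacement record of (time_s, displacement_m, speed_kmh)."""
    records = []
    t0 = track[0][0]
    prev = track[0]
    total_m = 0.0
    for frame_idx, cx, cy in track[1:]:
        step_m = math.hypot(cx - prev[1], cy - prev[2]) * METRES_PER_PIXEL
        dt = (frame_idx - prev[0]) / FPS
        total_m += step_m
        speed_kmh = (step_m / dt) * 3.6 if dt > 0 else 0.0
        records.append(((frame_idx - t0) / FPS, round(total_m, 2), round(speed_kmh, 1)))
        prev = (frame_idx, cx, cy)
    return records

# Example: a vehicle moving roughly 12 pixels per frame.
print(running_state([(0, 100, 400), (1, 112, 399), (2, 124, 398)]))
```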
  • Step 219: Process the dynamically updated tracking information in real time, and return to step 209 so that motion detection can be performed more effectively and accurately.
  • Step 220: Output the tracking information parameters stored in step 218 in real time for accurate road condition sensing calculation.
  • FIG. 5 is a schematic diagram of an apparatus for acquiring traffic condition information according to an embodiment of the present invention. As shown in FIG. 5, the apparatus 400 of this embodiment includes:
  • the obtaining module 401 is configured to obtain a video image of road video surveillance
  • the processing module 402 is configured to detect an object that enters a preset monitoring area in the video image
  • the feedback module 403 is configured to feed back the detected information in real time.
  • the processing module 402 is configured to update the road background in real time using a video image background update technique to separate moving objects from the road background.
  • the processing module 402 is further configured to perform shadow elimination on the moving object. If mutual occlusion occurs between different moving objects, the moving objects that are occluded are divided.
  • the processing module 402 is configured to detect an object entering a preset monitoring area in the video image, including detecting one or more of the following: the motion state of the object, the occupancy of objects on the road, traffic flow, vehicle speed and vehicle distance, and feature recognition of objects entering the preset monitoring area in the video image.
  • the processing module 402 is configured to perform feature recognition on an object that enters a preset monitoring area in the video image.
  • the processing module 402 is configured to detect feature values of the moving objects in two images, match the feature values of the moving objects pairwise, and identify the moving objects in the two images whose feature values are closest as the same object.
  • the processing module 402 is configured to extract one or more features that uniquely represent the target moving object to represent the target moving object.
  • the processing module 402 is configured to determine, when the distance between different moving objects in the traveling direction is far greater than the distance a moving object moves between two consecutive frames, that the detections whose positions in the two consecutive frames are closest belong to the same moving object.
  • the processing module 402 is configured to detect a moving object that enters a preset monitoring area in the video image, acquire running state information of the moving object, and form a time-displacement map of the moving object and/or a trajectory graph of the moving object according to the running state information of the moving object.
  • the obtaining module 401 is configured to acquire a video image of the related road video surveillance after receiving the location information of the specified moving object and the request for the traffic condition information.
  • the processing module 402 is configured to calculate the traffic condition in which the specified moving object is located, compare it with a preset parameter to determine whether the road ahead of the specified moving object is congested, and send the traffic information to the specified moving object.
  • In practical applications, the obtaining module 401, the processing module 402, and the feedback module 403 in the device for acquiring traffic condition information may be implemented by a central processing unit (CPU), a digital signal processor (DSP), a micro control unit (MCU), or a field-programmable gate array (FPGA).
  • When the device for obtaining traffic condition information provided by the foregoing embodiment acquires traffic condition information, the division into the above program modules is used only as an example; in actual applications, the above processing may be allocated to different program modules as needed, that is, the internal structure of the device may be divided into different program modules to complete all or part of the processing described above.
  • The device for obtaining traffic condition information provided by the foregoing embodiment belongs to the same concept as the method embodiment for obtaining traffic road condition information; its specific implementation process is described in detail in the method embodiment and is not repeated here.
  • An embodiment of the present invention further provides an apparatus for acquiring traffic condition information, including a processor and a memory for storing a computer program capable of running on the processor, the processor being configured, when running the computer program, to perform: acquiring a video image of road video surveillance; detecting an object entering a preset monitoring area in the video image; and feeding back the detected information in real time.
  • the processor configured to run the computer program, performs: performing feature recognition on an object entering a preset monitoring area in the video image.
  • the processor is configured to, when the computer program is executed, perform: detecting feature values of moving objects in two images; and matching the feature values of the moving objects pairwise, identifying the moving objects in the two images whose feature values are closest as the same object.
  • the processor configured to run the computer program, performs: extracting one or more feature values that uniquely represent the target moving object to represent the target moving object.
  • the processor configured to run the computer program, performs: real-time updating of the road background after acquiring the video image of the road video surveillance, and separating the moving object from the road background.
  • the processor configured to run the computer program, performs: performing shadow removal on the moving object during detection of an object entering a preset monitoring area in the video image.
  • the processor configured to execute the computer program, performs: performing segmentation processing on the mutually occluded moving objects if mutual occlusion occurs between different moving objects.
  • the processor configured to run the computer program, performs: in the process of detecting an object entering the preset monitoring area in the video image, when the distance between different moving objects in the traveling direction is much larger than the distance a moving object moves between two consecutive frames, determining that the detections whose positions in the two consecutive frames are closest belong to the same moving object.
  • the processor is configured to, when the computer program is executed, perform: detecting a moving object that enters a preset monitoring area in the video image, and acquiring running state information of the moving object; and forming a moving object time-displacement map and/or a moving object trajectory graph according to the running state information of the moving object.
  • the processor configured to run the computer program, performs: obtaining location information of a specified moving object and a request for attention to traffic condition information before acquiring a video image of the road video surveillance.
  • the processor is configured to, when the computer program is executed, perform: calculating a traffic condition of the specified moving object, and comparing it with a preset parameter to determine whether the road ahead of the specified moving object's location is congested; and sending the congestion situation to the specified moving object.
  • the memory can be either volatile memory or non-volatile memory, and can include both volatile and nonvolatile memory.
  • the non-volatile memory may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Ferromagnetic Random Access Memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be disk storage or tape storage.
  • the volatile memory can be a random access memory (RAM) that acts as an external cache.
  • RAM: Random Access Memory
  • SRAM: Static Random Access Memory
  • SSRAM: Synchronous Static Random Access Memory
  • DRAM: Dynamic Random Access Memory
  • SDRAM: Synchronous Dynamic Random Access Memory
  • DDRSDRAM: Double Data Rate Synchronous Dynamic Random Access Memory
  • ESDRAM: Enhanced Synchronous Dynamic Random Access Memory
  • SLDRAM: SyncLink Dynamic Random Access Memory
  • DRRAM: Direct Rambus Random Access Memory
  • the method disclosed in the foregoing embodiments of the present invention may be applied to a processor or implemented by a processor.
  • the processor may be an integrated circuit chip with signal processing capabilities.
  • each step of the above method may be completed by an integrated logic circuit of hardware in a processor or an instruction in a form of software.
  • the above described processors may be general purpose processors, DSPs, or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, and the like.
  • the processor may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present invention.
  • a general purpose processor can be a microprocessor or any conventional processor or the like.
  • the steps of the method disclosed in the embodiment of the present invention may be directly implemented as a hardware decoding processor, or may be performed by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in a storage medium, the storage medium being located in the memory, the processor reading the information in the memory, and completing the steps of the foregoing methods in combination with the hardware thereof.
  • The device for in-vehicle information processing may be implemented by one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), FPGAs, general-purpose processors, controllers, MCUs, microprocessors, or other electronic components, for performing the foregoing method.
  • ASIC: Application Specific Integrated Circuit
  • DSP: Digital Signal Processor
  • PLD: Programmable Logic Device
  • CPLD: Complex Programmable Logic Device
  • FPGA: Field-Programmable Gate Array
  • MCU: Micro Control Unit
  • embodiments of the present invention also provide a computer storage medium, such as a memory including a computer program executable by a processor of an in-vehicle information processing device to perform the steps of the foregoing method.
  • the computer storage medium may be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, Flash Memory, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
  • the computer storage medium provided by the embodiment of the present invention has a computer program stored thereon, and the computer program is executed by the processor to: acquire a video image of road video surveillance; detect an object entering a preset monitoring area in the video image; and feed back the detected information in real time.
  • the computer program is executed by the processor to perform feature recognition on an object entering a preset monitoring area in the video image.
  • the computer program is executed by the processor to: detect feature values of the moving objects in two images; and match the feature values of the moving objects pairwise, identifying the moving objects in the two images whose feature values are closest as the same object.
  • the computer program is executed by the processor to: extract one or more feature values that uniquely represent the target moving object to represent the target moving object.
  • when the computer program is executed by the processor, after the video image of the road video surveillance is obtained, the road background is updated in real time and the moving object is separated from the road background.
  • the computer program is executed by the processor to perform shadow removal on the moving object during the detection of an object entering the preset monitoring area in the video image.
  • the computer program is executed by the processor to perform segmentation processing on the mutually occluded moving objects if mutual occlusion between different moving objects occurs.
  • when the computer program is executed by the processor, in the process of detecting an object entering the preset monitoring area in the video image, when the distance between different moving objects in the traveling direction is much larger than the distance a moving object moves between two consecutive frames, the detections whose positions in the two consecutive frames are closest are determined to be the same moving object.
  • the moving object entering the preset monitoring area in the video image is detected, and the running state information of the moving object is acquired; a time-displacement map of the moving object and/or a trajectory graph of the moving object is formed according to the running state information of the moving object.
  • the location information and the traffic condition information request for the specified moving object are received before the video image of the road video surveillance is acquired.
  • the traffic condition of the specified moving object is calculated and compared with the preset parameter to determine whether the road ahead of the specified moving object's location is congested; the congestion situation is sent to the specified moving object.
  • The embodiments of the present invention use the vehicle network to realize accurate sensing and sharing of road condition information by collecting and analyzing real-time road condition data.
  • Comprehensive traffic information that the user cannot see directly, such as pedestrian situations, road emergencies, traffic accidents, vehicle lane changes, specific levels of congestion, whether there are emergency vehicles such as ambulances, fire engines or rescue vehicles ahead or behind, and other surrounding traffic information, is used for early warning; early warnings are given for congested road sections to alleviate the spread of traffic congestion and achieve traffic guidance.
  • the disclosed apparatus and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • The division of units is only a logical function division; in actual implementation there may be other division manners, for example: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the coupling, direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
  • the units described above as separate components may or may not be physically separated, and the components displayed as the unit may or may not be physical units, that is, may be located in one place or distributed to multiple network units; Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may be separately used as one unit, or two or more units may be integrated into one unit;
  • the unit can be implemented in the form of hardware or in the form of hardware plus software functional units.
  • the foregoing program may be stored in a computer-readable storage medium; when the program is executed, the steps of the foregoing method embodiments are performed.
  • the foregoing storage medium includes media that can store program code, such as a removable storage device, a ROM, a RAM, a magnetic disk, or an optical disk.
  • the above-described integrated unit of the present invention may be stored in a computer readable storage medium if it is implemented in the form of a software function module and sold or used as a standalone product.
  • the technical solutions of the embodiments of the present invention may, in essence, be embodied in the form of a software product stored in a storage medium and including a plurality of instructions, such that a computer device (which may be a personal computer, a server, a network device, or the like) performs all or part of the methods described in the various embodiments of the present invention.
  • the foregoing storage medium includes various media that can store program codes, such as a mobile storage device, a ROM, a RAM, a magnetic disk, or an optical disk.
  • The technical solutions of the embodiments of the invention use the vehicle network to realize accurate sensing and sharing of road condition information by collecting and analyzing real-time road condition data. Comprehensive traffic information that the user cannot see directly, such as pedestrian situations, road emergencies, traffic accidents, vehicle lane changes, specific degrees of congestion, whether there are emergency vehicles such as ambulances, fire engines or rescue vehicles ahead or behind, and other surrounding traffic information, is used for early warning; at the same time, early warnings are given for congested road sections to alleviate the spread of traffic congestion and achieve traffic guidance.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Atmospheric Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a method and apparatus for obtaining traffic road condition information, and a computer storage medium. The method comprises: obtaining a video image from video surveillance of a road (11); detecting an object entering a preset monitored region of the video image (12); and feeding back detected information in real time (13).
PCT/CN2018/074248 2017-02-22 2018-01-26 Procédé et appareil pour obtenir des informations de condition de circulation routière, et support de stockage informatique WO2018153211A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710097475.0 2017-02-22
CN201710097475.0A CN108460968A (zh) 2017-02-22 2017-02-22 一种基于车联网获取交通路况信息的方法及装置

Publications (1)

Publication Number Publication Date
WO2018153211A1 true WO2018153211A1 (fr) 2018-08-30

Family

ID=63220127

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/074248 WO2018153211A1 (fr) 2017-02-22 2018-01-26 Procédé et appareil pour obtenir des informations de condition de circulation routière, et support de stockage informatique

Country Status (2)

Country Link
CN (1) CN108460968A (fr)
WO (1) WO2018153211A1 (fr)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110321395A (zh) * 2018-12-18 2019-10-11 济南百航信息技术有限公司 一种移动终端的高精度智能定位轨迹方法
CN110796360A (zh) * 2019-10-24 2020-02-14 吉林化工学院 一种固定型交通检测源多尺度数据融合方法
CN110889371A (zh) * 2019-11-26 2020-03-17 浙江大华技术股份有限公司 一种渣土车抛洒检测的方法及装置
CN111126261A (zh) * 2019-12-23 2020-05-08 珠海深圳清华大学研究院创新中心 视频数据分析方法及装置、树莓派装置及可读存储介质
CN111497847A (zh) * 2020-04-23 2020-08-07 江苏黑麦数据科技有限公司 车辆的控制方法和装置
CN111611938A (zh) * 2020-05-22 2020-09-01 浙江大华技术股份有限公司 一种逆行方向确定方法及装置
CN111695627A (zh) * 2020-06-11 2020-09-22 腾讯科技(深圳)有限公司 路况检测方法、装置、电子设备及可读存储介质
CN112069944A (zh) * 2020-08-25 2020-12-11 青岛海信网络科技股份有限公司 一种道路拥堵等级确定方法
CN112528729A (zh) * 2020-10-19 2021-03-19 浙江大华技术股份有限公司 基于视频的飞机靠桥事件检测方法和装置
CN113177440A (zh) * 2021-04-09 2021-07-27 深圳市商汤科技有限公司 图像同步方法、装置、电子设备和计算机存储介质
CN113334384A (zh) * 2018-12-05 2021-09-03 北京百度网讯科技有限公司 移动机器人控制方法、装置、设备及存储介质
CN113468974A (zh) * 2021-06-08 2021-10-01 深圳依时货拉拉科技有限公司 一种车流量统计方法、计算机可读存储介质及移动终端
CN114495520A (zh) * 2021-12-30 2022-05-13 北京万集科技股份有限公司 一种车辆的计数方法、装置、终端和存储介质
CN115762132A (zh) * 2022-10-18 2023-03-07 浙江省机电设计研究院有限公司 一种道路交通状态信息智能采集装置

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI723299B (zh) * 2018-11-20 2021-04-01 遠創智慧股份有限公司 路況監測方法與系統
CN110287885A (zh) * 2019-06-26 2019-09-27 长安大学 一种约束拐点线高速公路能见度检测方法

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07210795A (ja) * 1994-01-24 1995-08-11 Babcock Hitachi Kk 画像式交通流計測方法と装置
US20040131233A1 (en) * 2002-06-17 2004-07-08 Dorin Comaniciu System and method for vehicle detection and tracking
CN1556506A (zh) * 2003-12-30 2004-12-22 上海交通大学 视频监控系统的智能化报警处理方法
CN1909012A (zh) * 2005-08-05 2007-02-07 同济大学 一种用于交通信息实时采集的视频图像处理方法及系统
CN101510356A (zh) * 2009-02-24 2009-08-19 上海高德威智能交通系统有限公司 视频检测系统及其数据处理装置、视频检测方法
CN103514740A (zh) * 2012-06-15 2014-01-15 永泰软件有限公司 基于高清视频的交通拥堵监测方法及系统
CN103903434A (zh) * 2012-12-28 2014-07-02 重庆凯泽科技有限公司 基于图像处理的智能交通系统
CN104504913A (zh) * 2014-12-25 2015-04-08 珠海高凌环境科技有限公司 视频车流检测方法及装置
CN104660956A (zh) * 2013-11-19 2015-05-27 苏州希格玛科技有限公司 基于射频识别和智能视频分析技术的车辆监视器及智能视频分析方法
CN106327880A (zh) * 2016-09-09 2017-01-11 成都通甲优博科技有限责任公司 一种基于监控视频的车速识别方法及其系统

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005215820A (ja) * 2004-01-28 2005-08-11 Yokogawa Electric Corp 状況認識監視システム
CN102289948B (zh) * 2011-09-02 2013-06-05 浙江大学 高速公路场景下一种多特征融合的多车辆视频跟踪方法
CN104240500A (zh) * 2014-08-25 2014-12-24 奇瑞汽车股份有限公司 一种路况信息预测方法及系统
CN104881998A (zh) * 2015-06-14 2015-09-02 蒋长叙 基于车联网的交通信息智能采集与处理系统
CN205003867U (zh) * 2015-07-16 2016-01-27 中国移动通信集团公司 一种道路危险预警装置
CN106791277A (zh) * 2016-12-27 2017-05-31 重庆峰创科技有限公司 一种视频监控中的车辆追踪方法
CN106530719A (zh) * 2016-12-27 2017-03-22 重庆峰创科技有限公司 一种基于车联网的路况信息精准感知与共享方法

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07210795A (ja) * 1994-01-24 1995-08-11 Babcock Hitachi Kk 画像式交通流計測方法と装置
US20040131233A1 (en) * 2002-06-17 2004-07-08 Dorin Comaniciu System and method for vehicle detection and tracking
CN1556506A (zh) * 2003-12-30 2004-12-22 上海交通大学 视频监控系统的智能化报警处理方法
CN1909012A (zh) * 2005-08-05 2007-02-07 同济大学 一种用于交通信息实时采集的视频图像处理方法及系统
CN101510356A (zh) * 2009-02-24 2009-08-19 上海高德威智能交通系统有限公司 视频检测系统及其数据处理装置、视频检测方法
CN103514740A (zh) * 2012-06-15 2014-01-15 永泰软件有限公司 基于高清视频的交通拥堵监测方法及系统
CN103903434A (zh) * 2012-12-28 2014-07-02 重庆凯泽科技有限公司 基于图像处理的智能交通系统
CN104660956A (zh) * 2013-11-19 2015-05-27 苏州希格玛科技有限公司 基于射频识别和智能视频分析技术的车辆监视器及智能视频分析方法
CN104504913A (zh) * 2014-12-25 2015-04-08 珠海高凌环境科技有限公司 视频车流检测方法及装置
CN106327880A (zh) * 2016-09-09 2017-01-11 成都通甲优博科技有限责任公司 一种基于监控视频的车速识别方法及其系统

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113334384A (zh) * 2018-12-05 2021-09-03 北京百度网讯科技有限公司 移动机器人控制方法、装置、设备及存储介质
CN113334384B (zh) * 2018-12-05 2024-03-01 北京百度网讯科技有限公司 移动机器人控制方法、装置、设备及存储介质
CN110321395A (zh) * 2018-12-18 2019-10-11 济南百航信息技术有限公司 一种移动终端的高精度智能定位轨迹方法
CN110321395B (zh) * 2018-12-18 2022-11-15 济南百航信息技术有限公司 一种移动终端的高精度智能定位轨迹方法
CN110796360A (zh) * 2019-10-24 2020-02-14 吉林化工学院 一种固定型交通检测源多尺度数据融合方法
CN110889371A (zh) * 2019-11-26 2020-03-17 浙江大华技术股份有限公司 一种渣土车抛洒检测的方法及装置
CN111126261B (zh) * 2019-12-23 2023-05-26 珠海深圳清华大学研究院创新中心 视频数据分析方法及装置、树莓派装置及可读存储介质
CN111126261A (zh) * 2019-12-23 2020-05-08 珠海深圳清华大学研究院创新中心 视频数据分析方法及装置、树莓派装置及可读存储介质
CN111497847A (zh) * 2020-04-23 2020-08-07 江苏黑麦数据科技有限公司 车辆的控制方法和装置
CN111611938B (zh) * 2020-05-22 2023-08-29 浙江大华技术股份有限公司 一种逆行方向确定方法及装置
CN111611938A (zh) * 2020-05-22 2020-09-01 浙江大华技术股份有限公司 一种逆行方向确定方法及装置
CN111695627A (zh) * 2020-06-11 2020-09-22 腾讯科技(深圳)有限公司 路况检测方法、装置、电子设备及可读存储介质
CN112069944A (zh) * 2020-08-25 2020-12-11 青岛海信网络科技股份有限公司 一种道路拥堵等级确定方法
CN112069944B (zh) * 2020-08-25 2024-04-05 青岛海信网络科技股份有限公司 一种道路拥堵等级确定方法
CN112528729A (zh) * 2020-10-19 2021-03-19 浙江大华技术股份有限公司 基于视频的飞机靠桥事件检测方法和装置
CN113177440A (zh) * 2021-04-09 2021-07-27 深圳市商汤科技有限公司 图像同步方法、装置、电子设备和计算机存储介质
CN113468974A (zh) * 2021-06-08 2021-10-01 深圳依时货拉拉科技有限公司 一种车流量统计方法、计算机可读存储介质及移动终端
CN113468974B (zh) * 2021-06-08 2024-04-19 深圳依时货拉拉科技有限公司 一种车流量统计方法、计算机可读存储介质及移动终端
CN114495520A (zh) * 2021-12-30 2022-05-13 北京万集科技股份有限公司 一种车辆的计数方法、装置、终端和存储介质
CN114495520B (zh) * 2021-12-30 2023-10-03 北京万集科技股份有限公司 一种车辆的计数方法、装置、终端和存储介质
CN115762132A (zh) * 2022-10-18 2023-03-07 浙江省机电设计研究院有限公司 一种道路交通状态信息智能采集装置

Also Published As

Publication number Publication date
CN108460968A (zh) 2018-08-28

Similar Documents

Publication Publication Date Title
WO2018153211A1 (fr) Procédé et appareil pour obtenir des informations de condition de circulation routière, et support de stockage informatique
CN106485233B (zh) 可行驶区域检测方法、装置和电子设备
US10212397B2 (en) Abandoned object detection apparatus and method and system
WO2016129403A1 (fr) Dispositif de détection d'objet
US10864906B2 (en) Method of switching vehicle drive mode from automatic drive mode to manual drive mode depending on accuracy of detecting object
US9180814B2 (en) Vehicle rear left and right side warning apparatus, vehicle rear left and right side warning method, and three-dimensional object detecting device
CN106611512B (zh) 前车起步的处理方法、装置和系统
CN113692587A (zh) 使用视觉图像估计对象属性
CN114375467B (zh) 用于检测紧急车辆的系统和方法
US20080137908A1 (en) Detecting and recognizing traffic signs
US20170278386A1 (en) Method and apparatus for collecting traffic information from big data of outside image of vehicle
US8848980B2 (en) Front vehicle detecting method and front vehicle detecting apparatus
EP3530521B1 (fr) Procédé et appareil d'assistance au conducteur
JP2002083297A (ja) 物体認識方法および物体認識装置
CN108491782A (zh) 一种基于行车图像采集的车辆识别方法
KR102051397B1 (ko) 안전운전 지원 장치 및 방법
US20170344835A1 (en) Method and system for categorization of a scene
JP5522475B2 (ja) ナビゲーション装置
US20190244041A1 (en) Traffic signal recognition device
CN111105644A (zh) 一种车辆盲区监测、行驶控制方法、装置及车路协同系统
CN113808418A (zh) 路况信息显示系统、方法、车辆、计算机设备和存储介质
CN111753579A (zh) 指定代步工具的检测方法及装置
CN107463886B (zh) 一种双闪识别以及车辆避障的方法和系统
JP2019207655A (ja) 検知装置及び検知システム
CN109344776B (zh) 数据处理方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18758242

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18758242

Country of ref document: EP

Kind code of ref document: A1