WO2021014464A1 - System, multi-utility device and method for monitoring vehicles for road safety - Google Patents

System, multi-utility device and method for monitoring vehicles for road safety

Info

Publication number
WO2021014464A1
WO2021014464A1 (PCT/IN2020/050624)
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
data
lane
detection
vehicles
Prior art date
Application number
PCT/IN2020/050624
Other languages
English (en)
Inventor
Alexander Valiyaveettil JOHN
Original Assignee
John Alexander Valiyaveettil
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by John Alexander Valiyaveettil filed Critical John Alexander Valiyaveettil
Publication of WO2021014464A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174 Facial expression recognition

Definitions

  • the present invention relates to network systems for managing motor vehicles. More particularly, the invention relates to systems, multi-utility devices and methods to monitor vehicles for road safety.
  • inventive concepts presented herein are illustrated in a number of different embodiments, each showing one or more concepts, though it should be understood that, in general, the concepts are not mutually exclusive and may be used in combination even when not so illustrated.
  • the method includes the steps of acquiring multiple data from at least one primary image capturing device and a secondary image capturing device mounted on a vehicle, the acquired data including object image data and related metadata; analyzing each acquired data by performing object-detection using a machine learning (ML) model on the data, each object including vehicles, lane markings, street lights, potholes, etc.; and determining the type of the analyzed object by collecting the data related to all the objects and further filtering the data as per requirements, wherein the type includes lane crossing, vehicle analysis, scenery identification, a black box event, and behavior analysis.
  • a processor encoded with instructions enables the processor to act on the acquired data captured by the primary and the secondary image capturing devices to perform one or more functions of surveillance including speed detection, lane detection and lane crossing, vehicle analysis, object detection, scenery identification, behavior analysis, etc.
  • a mobile surveillance system for road safety in a vehicle.
  • the system includes a first camera having a front field of view of the vehicle and a second camera having a back field of view of the vehicle, wherein the first and the second cameras are arranged to view the roads and the objects in their respective views, and a third camera, configured to be mounted at an in-cabin side of a windshield of the vehicle, monitors the driver's and fellow travelers' behavior.
  • a processor including a memory is coupled to the first, second and third cameras for acquiring multiple data from the respective cameras, the acquired data including image data and related metadata, and for analyzing each acquired data by performing object-detection using an ML model on the data, each object including vehicles, lane markings, street lights, potholes, etc.
  • the system detects lane markers and other vehicles ahead of the equipped vehicle on a road and identifies whether any vehicle has crossed the lane in its FOV (Field of View).
  • capturing other-vehicle data retrieves the vehicle information including license plate number, vehicle type, make, model, color and relative speed.
  • responsive at least in part to processing by the processor of the acquired data captured by the first and second cameras, the system captures data of objects defining the scene, including other vehicles, potholes, street lights, etc., and provides information with respect to the captured data, including street lights on/off, traffic light color, potholes and accidents; if the captured data reveals an accident scene, a Black-Box event is initiated in which a 30-second video feed is created covering 15 seconds before and after the event, which is saved and marked to be sent to the cloud in offline mode; further, responsive at least in part to processing by the processor of the acquired data captured by the third camera, the system locates the driver and determines the behavior of the driver, including drowsy, sad, excited, etc., to save and send for further processing.
  • a surveillance device mounted in a vehicle for road safety.
  • the device includes at least one image capturing device mounted on the vehicle for capturing a plurality of images within the field of view, and a processor including at least one memory, a GPS module, a database, and a machine learning model, the processor being configured for acquiring multiple data from the image capturing device, the acquired data including image data and related metadata, and analyzing each acquired data by performing object-detection using a machine learning (ML) model on the data, each object including vehicles, lane markings, street lights, potholes, etc.
  • the processor is encoded with instructions enabling the processor to act on the acquired data captured by the first and second cameras to perform one or more functions of surveillance including speed detection, lane detection and lane crossing, vehicle analysis, object detection, scenery identification, behavior analysis, etc.
  • FIG. 1 is a perspective view of system and multi-utility device for monitoring vehicles on road in accordance with an embodiment of the invention.
  • FIG. 2 shows exemplary scenarios in which the surround view camera system facilitates detection and tracking of objects according to one or more embodiments.
  • FIG. 3 is a flowchart depicting a surveillance method for road safety in accordance with an embodiment of the invention.
  • FIG. 4 is a flowchart depicting a process flow at a server-side model of the present system in accordance with an embodiment of the present invention.
  • FIG. 5 shows a system block diagram of performing the method of FIG. 3, according to one embodiment of the present invention.
  • spatially relative terms, such as "detection" or "capture" and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of the structure in use or operation in addition to the orientation depicted in the figures.
  • FIG. 1 depicts an exemplary embodiment of a surround view camera system 100 according to one or more embodiments.
  • the vehicle 110 shown in FIG. 1 is an automobile.
  • the surround view camera system includes two outdoor cameras 120a and 120b and one indoor camera 130 in the exemplary embodiment shown in FIG. 1.
  • camera 120a captures images on the front side of the vehicle 110, and camera 120b captures images on the rear side of the vehicle 110.
  • camera 130 captures images of the interior of the vehicle 110.
  • fewer or more cameras may be used and can be arranged in other parts of the vehicle 110.
  • the images from the different cameras 120 (a, b) and 130 are sent to the processing system (not shown in figure) of the surround view camera system 100 for processing.
  • the cameras may have an inbuilt processing system capable of processing the captured images locally.
  • the communication between the cameras and the processing system may be over wires that are routed around the vehicle 110 or may be wireless.
  • the processing system may include an application specific integrated circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and memory that executes one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.
  • a controller (not shown in figure) of the vehicle 110 may be separate from and coupled to the processing system, or, in alternate embodiments, the functionality described for the processing system may be performed by components of the controller.
  • the controller can include or communicate with systems such as those that perform ACC, AEB, FCW, and other safety and autonomous driving functions.
  • additional known sensors (e.g., radar, lidar, ultrasonic sensors) may also be used.
  • the cameras (120, 130) may include extreme wide-angle lenses such that the images obtained by the cameras are distorted (i.e., fisheye images).
  • the extreme wide-angle lenses have an ultra-wide field of view and, thus, provide images that facilitate 360 degree coverage around the vehicle 110 with the cameras shown in FIG. 1.
  • the raw images obtained with the extreme wide-angle lenses also require pre-processing of the images to unwarp the image distortion or fisheye effect.
  • the pre-processing may also include image enhancement and virtual camera view synthesis.
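  • as a minimal, non-limiting sketch of this unwarping step, the fisheye distortion may be removed with OpenCV's fisheye module; the intrinsic matrix K and distortion coefficients D below are placeholder values standing in for a one-time camera calibration, which the description does not specify:

```python
# Minimal sketch of the fisheye unwarping pre-processing step.
# K and D are placeholder calibration values (assumptions), normally
# obtained once via cv2.fisheye.calibrate on a checkerboard sequence.
import cv2
import numpy as np

K = np.array([[420.0, 0.0, 640.0],
              [0.0, 420.0, 360.0],
              [0.0, 0.0, 1.0]])          # placeholder camera intrinsics
D = np.array([-0.05, 0.01, 0.0, 0.0])    # placeholder fisheye coefficients

def unwarp(frame):
    """Remove fisheye distortion from one raw wide-angle frame."""
    h, w = frame.shape[:2]
    # The maps depend only on the calibration and frame size, so in a real
    # pipeline they would be precomputed once and reused per frame.
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
    return cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)
```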
  • FIG. 2 shows exemplary scenario in which the surround view camera system facilitates detection and tracking of objects according to one or more embodiments.
  • the present invention provides a device or a system 200 (which may be a camera system or a separate hardware unit including the camera) which is capable of processing within the device or through a system via a cloud server.
  • the device is primarily mounted on a vehicle 210, stationary or moving, and has the capability of communicating with one or more other devices having a similar configuration.
  • the device may include a camera 220 capable of monitoring and tracking the activities of other vehicles on the road. The activities may include, but are not limited to, a vehicle 210 cutting the lane or being parked in a no-parking area 230.
  • camera 220 is part of the system or device or both and not the entire system or device.
  • each of the devices and their sub-elements is configured to perform one or more functions, which may include, but are not limited to, speed detection of the vehicle the device is deployed in, speed detection of all vehicles in its line of sight, and number plate detection of all vehicles in its line of sight.
  • the device is configured to identify the behavior of the vehicle it is deployed in, as well as of all vehicles, people, animals and objects in its line of sight.
  • the present invention is directed to a system, having a device and performing various function of surveillance of one or more vehicles by performing a unique method.
  • the device, which is mounted on a first moving vehicle, identifies information about one or more second moving vehicles.
  • the identified information is processed locally or passed to a centralized database to identify potential problems with the second vehicle or its driver over the cloud.
  • the first vehicle may be a police car or other patrol unit, and the second vehicles can be in any road lane visible from the camera mounted to the device, including the same lane as that occupied by the first vehicle, other driving lanes, and/or even parking areas.
  • the present invention also provides a device, either as a part of the system for processing or as an independent device, which monitors the driver's behavior and alerts the driver in the event of any danger. It also identifies the driver information from a preset database and may automatically select languages for auditory warnings.
  • the device may act as a hub which converts the car into a smart car. The device is accessed via a conversational interface or a physical touch interface on the device or on the companion app. The user can either use the device or the companion app to set destinations for driving instructions, choose a playlist, view the status of car health and reminders, etc.
  • FIG. 3 is a flowchart (300) depicting a surveillance method for road safety in accordance with an embodiment of the invention.
  • the method acquires multiple data from one or more primary image capturing devices and a secondary image capturing device, the acquired data including object image data and related metadata.
  • the method analyzes each acquired data by performing object-detection using a machine learning (ML) model on the data, each object including vehicles, lane markings, streetlights, potholes, etc.
  • the method determines the type of the analyzed object by collecting the data related to all the objects and further filtering the data as per requirements; the type may be or may include lane crossing, vehicle analysis, scenery identification, a black box event, and behavior analysis.
  • the method processes, by a processor encoded with instructions, the acquired data captured by the primary and the secondary image capturing devices to perform one or more functions of surveillance including speed detection, lane detection and lane crossing, vehicle analysis, object detection, scenery identification, behavior analysis, etc.
  • the surveillance of speed detection is performed by calculating the speed of the mounted vehicle using GPS data and calculating the relative speed of other vehicles in the FOV (Field of View) from the image capturing device of the mounted vehicle.
  • the relative speed is calculated by averaging values including the focal distance of the camera, the camera-to-road distance, the frame width, the width of the road and the digital distance for the width of the road; mapping the calculated pixels per metre (ppm) by dividing the distance of the road in pixels by the distance in metres, where the distance in pixels gives the pixel distance travelled by the vehicle from one frame to the next; converting the distance in pixels to distance in metres; and calculating the absolute speed of vehicles in the FOV by adding the speed of the mounted vehicle and the relative speed of the vehicle, as sketched below.
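  • a minimal, non-limiting sketch of this speed-determination step follows; the ppm value, frame rate and sample numbers are assumptions for illustration only:

```python
# Minimal sketch of the speed-determination step described above.
# In the described method, ppm (pixels per metre) is derived from averaged
# camera and road measurements; here it is passed in as a given value.

def absolute_speed_kmh(pixel_displacement, ppm, fps, self_speed_kmh):
    """Estimate another vehicle's absolute speed from its pixel motion.

    pixel_displacement: centroid movement (pixels) from one frame to the next
    ppm:                pixels-per-metre mapping for the road region
    fps:                frames per second of the capture device
    self_speed_kmh:     GPS-derived speed of the mounted vehicle (km/h)
    """
    metres_per_frame = pixel_displacement / ppm    # pixels -> metres
    metres_per_second = metres_per_frame * fps     # per-frame -> per-second
    relative_kmh = metres_per_second * 3.6         # m/s -> km/h
    # Absolute speed = mounted vehicle's own speed + relative speed in FOV.
    return self_speed_kmh + relative_kmh

# Example: 8 px/frame at 12 ppm and 30 fps, equipped vehicle at 50 km/h
# gives roughly 50 + 72 = 122 km/h.
print(absolute_speed_kmh(8, 12, 30, 50))
```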
  • the system detects lane markers and other vehicles ahead of the equipped vehicle on a road and identifies whether any vehicle has crossed the lane in its FOV (Field of View).
  • the surveillance of lane detection and lane crossing is performed by retrieving lane information including lane markings and bounding boxes from the lane detection model, then grouping the retrieved lanes to determine the available lanes and boundaries by using the centroid information of the lane markings and the distance (horizontal and vertical) between the top and bottom points of different markings.
  • for each tracked vehicle, the method finds the lane on which the vehicle majorly lies by finding the overlap between the lane boundaries and the lower edge of the vehicle's bounding box, and determines lane-crossing violations: if the bottom corners of the vehicle's bounding box lie in different lanes beyond a set threshold, the vehicle can be said to have crossed the lane and is marked as lane-violated (a sketch of this check follows).
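  • a minimal sketch of the lane-violation check follows, assuming the grouped lane boundaries are represented as sorted x-positions at the level of the vehicle's lower bounding-box edge; the interval representation and threshold value are assumptions for illustration:

```python
# Minimal sketch of the bottom-corner lane-violation check.
from bisect import bisect_right

def lane_index(x, boundaries):
    """Return which lane the x coordinate falls in, given sorted lane
    boundary x-positions, e.g. [0, 320, 640, 960] for three lanes."""
    return bisect_right(boundaries, x) - 1

def is_lane_violation(bbox, boundaries, overlap_threshold=0.2):
    """Flag a vehicle whose bottom corners lie in different lanes.

    bbox: (x1, y1, x2, y2) top-left / bottom-right corners.
    A vehicle straddling a boundary by more than overlap_threshold of its
    own width is marked lane-violated.
    """
    x1, _, x2, _ = bbox
    left_lane = lane_index(x1, boundaries)
    right_lane = lane_index(x2, boundaries)
    if left_lane == right_lane:
        return False                          # fully inside one lane
    # Fraction of the vehicle's width crossing into the neighbouring lane.
    crossing = min(boundaries[right_lane] - x1, x2 - boundaries[right_lane])
    return crossing / max(x2 - x1, 1) > overlap_threshold

boundaries = [0, 320, 640, 960]               # three lanes, 960-px frame
print(is_lane_violation((250, 400, 400, 520), boundaries))  # True: straddles 320
```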
  • the processing of lane crossing of another vehicle is performed by receiving the vehicle coordinates of detected vehicles along with lane-marking data for the frame, checking from the received coordinates whether each detected vehicle is in the FOV (Field of View) and driving in the lane, and categorizing the vehicle into a plurality of layers: vehicle is driving in the correct lane, vehicle has crossed the lane, vehicle is driving in the opposite direction, and cannot predict.
  • the 'driving in correct lane' and 'crossed the lane' results are retrieved by using the lane-violation detection algorithm; if 'driving in opposite direction' is detected, the vehicle is tracked, or a tracker is initialized if no previous tracker exists for the particular vehicle; if the layer is unable to identify the vehicle, it is marked with the result 'cannot predict', which marks the vehicle as normal so that it can be re-processed for changes in the next frame. Further, responsive at least in part to processing by the processor of the acquired data captured by the primary image capturing device, the method captures other-vehicle data and retrieves the vehicle information including license plate number, vehicle type, make, model, color and relative speed.
  • the surveillance of vehicle analysis is performed by calculating the centroid of each detected vehicle using bounding-box data: ((x1 + x2)/2, (y1 + y2)/2), where (x1, y1) and (x2, y2) are the top-left and bottom-right coordinates of the vehicle's location (bounding box) in the image, and comparing the Euclidean distance of the current centroids with the centroids calculated in the previous frame by matching previous centroids with current centroids.
  • if the minimum distance is greater than the set threshold, a new identifier is assigned to the vehicle (unique vehicle); otherwise the previous vehicle identifier is assigned to the current centroid, wherein each previous vehicle centroid is matched to only one vehicle centroid in the current frame, and for each unmatched vehicle from the previous frame the disappearance count increases by 1 for each consecutive non-match; if it increases above a set threshold, the identifier is deleted (a tracker sketch follows).
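  • a minimal sketch of this centroid-matching tracker follows; the distance and disappearance thresholds are assumptions for illustration, and a greedy nearest-centroid match stands in for whatever assignment strategy an implementation might use:

```python
# Minimal sketch of centroid-based vehicle tracking across frames.
import math

class CentroidTracker:
    def __init__(self, max_distance=60.0, max_disappeared=10):
        self.next_id = 0
        self.centroids = {}      # vehicle-id -> (cx, cy)
        self.disappeared = {}    # vehicle-id -> consecutive miss count
        self.max_distance = max_distance
        self.max_disappeared = max_disappeared

    def update(self, boxes):
        """boxes: list of (x1, y1, x2, y2) detections for this frame."""
        current = [((x1 + x2) / 2, (y1 + y2) / 2) for x1, y1, x2, y2 in boxes]
        matched = set()
        assignments = {}
        for c in current:
            # Greedily match to the nearest unclaimed previous centroid.
            best_id, best_d = None, self.max_distance
            for vid, p in self.centroids.items():
                if vid in matched:
                    continue
                d = math.dist(c, p)            # Euclidean distance
                if d < best_d:
                    best_id, best_d = vid, d
            if best_id is None:                # too far from all: new vehicle
                best_id = self.next_id
                self.next_id += 1
            matched.add(best_id)
            assignments[best_id] = c
        # Unmatched previous vehicles accumulate a disappearance count and
        # are deleted once it exceeds the threshold.
        for vid in list(self.centroids):
            if vid not in matched:
                self.disappeared[vid] = self.disappeared.get(vid, 0) + 1
                if self.disappeared[vid] > self.max_disappeared:
                    del self.centroids[vid]
                    del self.disappeared[vid]
                    continue
                assignments.setdefault(vid, self.centroids[vid])
        self.centroids = dict(assignments)
        self.disappeared.update({vid: 0 for vid in matched})
        return assignments

tracker = CentroidTracker()
print(tracker.update([(10, 10, 50, 50)]))      # vehicle gets id 0
print(tracker.update([(14, 12, 54, 52)]))      # same id 0 follows it
```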
  • the processing of capturing other-vehicle data is performed by detecting one or more vehicle-data coordinates and the frame; checking whether any preceding vehicle information exists for the particular detected vehicle; performing number-plate recognition on the vehicle area using an ML model and retrieving the license plate number and metadata to check types of violations, including whether the LP Number is incorrect, the LP Number is in a regional language, the vehicle registration has expired, or the vehicle insurance has expired; performing vehicle classification on the vehicle using an ML model and mapping the results; and, if no tracker is assigned to the vehicle, assigning a tracker, updating the vehicle tracker for the vehicle and determining its relative speed using the 'Speed-determination algorithm'.
  • the violations 'LP Number Incorrect' and 'LP Number in regional language' are checked by using the LP metadata, and the violations 'Vehicle Registration Expired' and 'Vehicle Insurance Expired' are checked by matching the LP Number against the existing Vehicle Database, as sketched below.
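  • a minimal sketch of the database-matching check follows; the in-memory dictionary stands in for the existing Vehicle Database, and its field names and sample record are assumptions for illustration:

```python
# Minimal sketch of document-expiry violation checks against a vehicle DB.
from datetime import date

VEHICLE_DB = {  # hypothetical record; real data would come from the database
    "KL07AB1234": {"registration_valid_until": date(2021, 3, 31),
                   "insurance_valid_until": date(2020, 6, 30)},
}

def check_document_violations(lp_number, today=None):
    today = today or date.today()
    record = VEHICLE_DB.get(lp_number)
    if record is None:
        # No DB match; the 'LP Number Incorrect' / regional-language checks
        # are handled separately via the LP metadata in the described method.
        return []
    violations = []
    if record["registration_valid_until"] < today:
        violations.append("Vehicle Registration Expired")
    if record["insurance_valid_until"] < today:
        violations.append("Vehicle Insurance Expired")
    return violations

print(check_document_violations("KL07AB1234", today=date(2021, 1, 1)))
```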
  • the processor also performs object detection on the road in its FOV (Field of View).
  • the object detection model may be trained on four labels: no-helmet, helmet, car-front and motorcyclist.
  • the model uses the overlap threshold (overlap between the bounding boxes) to find such violations. For example, using the "car-front" label and vehicle-speed information, the method identifies a vehicle driving in the opposite direction. Further, the model may be optimized for faster execution.
  • the process of scenery identification is performed by defining the scene based on the received object data, including object type and coordinates, the scene type including vehicles, potholes, street lights, etc., and determining the scene type of the object in order to trigger the corresponding event: turning on the headlights if the street light is not on, raising an alert if a pothole is detected, raising an event alert if a traffic light is detected in the view, raising an alert if a person is detected in front of the car at less than a specific threshold while the self-speed of the car is not zero, and activating a black-box event if the scene type is an accident.
  • the processor provides information with respect to the captured data including streetlights on/off, traffic light color, potholes, and accidents. Further, if the captured data reveals an accident scene, a Black-Box event is initiated in which a 30-second video feed is created, covering 15 seconds before and after the event, which is saved and marked to be sent to the cloud in offline mode.
  • the step of activating the black box includes creating a 30-second video feed which includes all the frames from 15 seconds prior to the event trigger and 15 seconds after the event trigger; the video created is stored in the database and also sent to the cloud network (a buffer sketch follows).
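  • a minimal sketch of such a black-box buffer follows, assuming a fixed frame rate; encoding, storage and cloud-upload details are omitted:

```python
# Minimal sketch of the 30-second black-box event buffer: a rolling window
# of the last 15 seconds of frames, plus 15 more seconds recorded after the
# trigger. FPS is an assumption for illustration.
from collections import deque

FPS = 30
WINDOW = 15 * FPS            # 15 seconds of frames

class BlackBox:
    def __init__(self):
        self.before = deque(maxlen=WINDOW)   # rolling pre-event window
        self.after = []
        self.recording = False

    def add_frame(self, frame):
        if self.recording:
            self.after.append(frame)
            if len(self.after) >= WINDOW:    # 15 s after the trigger done
                self._save()
        else:
            self.before.append(frame)        # oldest frames fall off

    def trigger(self):
        """Call when an accident scene is detected."""
        self.recording = True

    def _save(self):
        clip = list(self.before) + self.after    # up to 15 s before + 15 s after
        # Persist locally and mark for cloud upload in offline mode
        # (encoding/upload omitted in this sketch).
        print(f"saving {len(clip) / FPS:.0f}-second black-box clip")
        self.before.clear()
        self.after = []
        self.recording = False
```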
  • responsive at least in part to processing by the processor of the acquired data captured by the secondary image capturing device, the system locates the driver and determines the behavior of the driver, including drowsy, sad, excited, etc., to save and send for further processing.
  • the surveillance of behavioral analysis is performed by a person-detection model, a face-detection model, an activity-detection model and a behavior classification model.
  • the person-detection model detects the full body in the given frame, giving the bounding-box data of the detected human body; the face-detection model gives the facial coordinates of the people in view; the face data and full-body data are related by checking the bounding-box overlap between the two detections; and the activity-detection model, which is based on facial features (eye blinking), classifies the person as Active, Inactive or Hyperactive and raises event-based alerts on an Inactive driver classification. For example, if there is no eye blinking for a threshold period of time, the person is probably inactive; if there is head movement above a particular threshold, the person is considered hyperactive and might be looking here and there.
  • the behavior classification model is applied on detected faces and classifies each face as drowsy, happy, sad or excited. Further, the behavior of the driver is determined by detecting the face of the person using a face-detection model, performing behavior classification (drowsy, happy, sad, excited or sleeping) using a behavior classification ML model, and performing the respective tasks based on the event set; a sketch of the blink-based activity check follows.
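  • a minimal sketch of the blink-based activity classification follows; the per-frame eye-state input, the head-motion measure and the thresholds are assumptions for illustration, as the description does not fix them:

```python
# Minimal sketch of blink/head-motion activity classification.
def classify_activity(eye_open_history, head_motion, fps=30,
                      inactive_secs=3.0, hyperactive_motion=25.0):
    """Classify the driver as Active / Inactive / Hyperactive.

    eye_open_history: recent per-frame booleans (True = eyes open)
    head_motion:      average head movement (pixels/frame) over the window
    """
    window = int(inactive_secs * fps)
    recent = eye_open_history[-window:]
    # No blink (eye state unchanged) over the whole window suggests an
    # inactive, possibly drowsy driver; an alert is raised upstream.
    if len(recent) == window and len(set(recent)) == 1:
        return "Inactive"
    if head_motion > hyperactive_motion:
        return "Hyperactive"   # looking here and there
    return "Active"

# Example: eyes continuously open for 3 s at 30 fps -> "Inactive".
print(classify_activity([True] * 90, head_motion=3.0))
```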
  • FIG. 4 shows the flow (400) of segregation of the data received at the primary and secondary image capturing devices in order to perform the operation of the method of FIG. 3.
  • data from the image capturing devices (420, 430) is passed to the object detection layer (440).
  • the object detection layer is further classified into one or more layers, namely a vehicle analysis layer (442), a lane crossing layer (444), a scenery identification layer (446), and a Black-box layer (448).
  • the lane crossing layer takes lane-marking and vehicle data and identifies whether any vehicle has crossed the lane in its FOV (Field of View), and further saves the data and sends alerts.
  • the vehicle analysis layer takes the vehicle data and finds the vehicle information: license plate number, vehicle type, make, model, color, and relative speed.
  • the scenery identification layer takes data of objects defining the scene (vehicles, potholes, street lights, etc.) and gives information regarding the scene: street lights on/off, traffic light color, potholes in the scene, accidents, etc.
  • the black-box layer records a 30-second video feed covering 15 seconds before and after an event and sends it to the cloud in offline mode if any accident event is detected, and the behavior analysis layer locates the driver and processes the driver and his/her surroundings to determine behavior such as drowsy, sad, excited, etc.
  • FIG. 5 shows a system block diagram (500) of the present invention implementing the method of FIG. 3, according to one embodiment of the present invention.
  • the system includes a server (510), one or more image capturing devices disposed on one or more vehicles (560), and an Artificial Intelligence/Machine Learning Engine (540), which are interconnected over a network (550).
  • the server (510) may include a processor (520) and a database (530).
  • the database 530 may be a distributed database as part of a blockchain network that enables the creation of data models for processing information faster and more accurately. Further, a blockchain network ensures security and easy access to a defaulter's past data in case of repeated violations.
  • the distributed or decentralized database enables faster processing times and easy access to the required information.
  • the image capturing device, which may be a camera mounted on a stationary or a moving vehicle, is connected on the edge or via a cloud server and is capable of accessing the AI system in order to perform mobile surveillance for ensuring road safety.
  • the camera is capable of processing the data individually or in conjunction with other cameras of the mounted or deployed vehicle, or of other vehicles, to form a virtual grid (VG) for processing.
  • the vehicles would be able to pool information about their speed, location, lane compliance, detected potholes, traffic conditions, etc., and also identify and report, throughout the network, the vehicles which are violating traffic laws, and could also seek help from the connected vehicles in case of emergencies or accidents; a sketch of such a pooled status message follows.
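  • a minimal sketch of the kind of status message a device might pool with its virtual-grid peers follows; the schema and field names are assumptions for illustration only, as the description does not fix a wire format:

```python
# Minimal sketch of a pooled virtual-grid status message (hypothetical schema).
import json
import time

def build_grid_message(device_id, gps, speed_kmh, lane_ok,
                       potholes, violations):
    return json.dumps({
        "device_id": device_id,
        "timestamp": time.time(),
        "location": gps,                   # (lat, lon) of the vehicle
        "speed_kmh": speed_kmh,
        "lane_compliant": lane_ok,
        "potholes_detected": potholes,     # list of (lat, lon) points
        "reported_violations": violations  # e.g. flagged license plates
    })

msg = build_grid_message("VG-0042", (9.93, 76.27), 48.0, True, [], [])
print(msg)
```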
  • the device will also act as an auditory and/or visual companion for the driver, assisting them to drive safely. It will give directions, play music/radio, take calls, show the status of car health, make distress/emergency calls, and serve as a 'black box' repository for insurance companies by permanently storing the important events with respect to the vehicle it is deployed in and the other vehicles in its line of sight.
  • the system works like a hive of a plurality of devices to form a Device Grid.
  • the grid can identify a flagged vehicle using the vehicle license plates, model, type, color, conditions and other details and inform the authorities around it. This could enable the authorities to find, track and apprehend criminals on the run, traffic rules violators, stolen vehicles and defaulters.
  • the grid can identify an accident or dangerous situation and alert all following and surrounding vehicles of the accident or dangerous situation. This could help the other drivers drive safely and prepare themselves to avoid the accident or dangerous situation, saving human lives and preventing destruction of property.
  • the device grid can also enable a consumer to interact with other vehicles in their "trip" to help them stick together and follow each other to avoid any mishaps.
  • the device grid can enable a consumer to automate his parking by automatically opening or closing the garage door when the vehicle approaches the house.
  • the device grid can enable a consumer to automate his house by starting or stopping appliances when they arrive or leave the house.
  • for example, the air conditioner can be turned on in a hot environment when the vehicle is a certain number of minutes away from the house, so the customer can arrive home to an already cooled house.
  • the device grid can broadcast critical or emergency messages to the connected devices and also enable the drivers to coordinate with and assist emergency personnel in performing their duties effectively.
  • an ambulance can broadcast its route and all vehicles on that route will be informed and can track the location of the ambulance. This could allow the vehicles to clear a path and make way for the ambulance to drive faster and reach its destination sooner and hopefully save lives.
  • the device can process the information locally without depending on the network.
  • this enables the device to be much faster and provide near real-time information to the driver and assist the driver with enhanced information.
  • the system can track when the driver is feeling drowsy or sleepy, or is not paying attention to the road, texting, checking the phone, etc., and alert the driver of any oncoming vehicle, pedestrian detection, lane deviation, probability of a collision, red light violation, curve speed warning, stop sign assistance, speed limit warning, oversized vehicle warning, tailgating, etc. This enables a normal car to function like an autonomous car, where the car can read the information around it and perform several functions automatically to enhance and safeguard the vehicle.
  • the device also captures the GPS location and G-sensor data to analyze the user's driving behavior.
  • the device can enable the driver to improve upon his mistakes and driving behavior.
  • the device can help the driver with relevant information to improve and extend the car's life, mileage, tyre conditions, etc.
  • the device includes an audio system which allows the driver to be notified of any violations or warnings or assistance.
  • the device has a microphone which allows the driver or passengers to communicate with the device and gain information without being distracted from the road ahead.
  • the audio system of the device can recognize the user and personalize the information being provided for a more effective result.
  • the device can communicate in multiple languages, adhere to various numerical configurations, personal settings and preferences to enhance the user experience.
  • the server (510) may be a management server, a web server, or any other electronic device or computing system capable of sending and receiving data.
  • the server may be a laptop computer, tablet computer, netbook computer, personal computer (PC), desktop computer, personal digital assistant (PDA), smart phone, or any programmable electronic device capable of communicating with other client computing devices and/or other servers via the network.
  • server may represent a server computing system utilizing multiple computers as a server system, such as in a cloud computing environment.
  • Server may be an enterprise server capable of providing any number of a variety of services to a large number of users.
  • the server may include software and/or algorithms to achieve the operations for processing, communicating, delivering, gathering, uploading, maintaining, and/or generally managing data, etc.
  • Such operations and techniques may be achieved by any suitable hardware, component, device, application specific integrated circuit (ASIC), additional software, field programmable gate array (FPGA), server, processor, algorithm, erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or any other suitable object that is operable to facilitate such operations.
  • the server allows a user to avail of the services provided by the server.
  • the server may accept any of the enterprise services to provide services to users attempting to access the server.
  • the nature of the services represented by the enterprise services depends upon the services provided by the server.
  • for example, the server may be an online retailer server, and the enterprise services may include consumer insights analysis, which may be useful in association with stores.
  • Network (550) may be a local area network (LAN), a wide area network (WAN) such as the Internet, a cellular data network, any combination thereof, or any combination of connections and protocols that will support communication between image capturing device mounted on the vehicle, and the server, in accordance with embodiments of the invention.
  • Network may include wired, wireless, or fiber optic connections.
  • Computing system may include additional computing devices, servers, computers, or other devices not shown.
  • the system further includes an Artificial Intelligence/Machine Learning (AI/ML) Engine (540).
  • the role of AI/ML is to capture the essence of a stimulus through image processing/analysis, NLP/NLU of free responses, and the attributes. As the stimulus is experienced by users, these three pillars provide clarity into the truth underlying the stimulus.
  • the AI/ML engine interacts with the server for facilitating a feedback of the target profiles in a network environment.
  • the feedback of the target profile provided by the AI/ML engine may include learning from previous interactions with the server and suggesting a plurality of parameters which may be useful in determining the objective.
  • Various embodiments disclosed herein provide numerous advantages by providing a method and system for providing data insights based on artificial intelligence.
  • the present invention uses AI/ML engine to determine data insights, both simple and complex, based on artificial intelligence.
  • the present invention combines both an analytics tool and data science in order to provide data insights to an end user based on learnings from previous data processing.
  • the present invention is operational at all times of day and further provides the data insights in question-and-answer format, making them easier to consume. The present invention reduces the time spent by management during decision making and helps procure data at the right time.
  • the multi-utility device, system and method of the present invention create a safer driving environment for everybody through a device grid comprising all devices communicating with each other, which informs all stakeholders of risky on-road behavior, accidents and other imminent danger.
  • the device can also identify and map out the infrastructure of a city.
  • the device can identify potholes, broken posts, non-functioning streetlights, and broken or non-functioning traffic lights, and work in conjunction with security and authority personnel to make the city safer and smarter.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

A surveillance method for road safety is disclosed. In one embodiment, the method comprises acquiring multiple data from at least one primary image capturing device and a secondary image capturing device, the image capturing devices being mounted on a vehicle. The acquired data include object image data and related metadata. The method further comprises analyzing each acquired data by performing object detection on the data using a machine learning (ML) model, each object including vehicles, lane markings, street lights, potholes and the like; and determining the type of the analyzed object by collecting the data related to all the objects and further filtering the data as required, the type including lane crossing, vehicle analysis, scenery identification, a black-box event, and behavior analysis. In response to processing by a processor encoded with instructions, the method further comprises enabling the processor to process the acquired data to perform one or more surveillance functions including speed detection, lane detection and lane crossing, vehicle analysis, object detection, scenery identification and behavior analysis.
PCT/IN2020/050624 2019-07-19 2020-07-19 System, multi-utility device and method for monitoring vehicles for road safety WO2021014464A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN201921002360 2019-07-19
IN201921002360 2019-07-19

Publications (1)

Publication Number Publication Date
WO2021014464A1 (fr)

Family

ID=74193141

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IN2020/050624 WO2021014464A1 (fr) 2019-07-19 2020-07-19 System, multi-utility device and method for monitoring vehicles for road safety

Country Status (1)

Country Link
WO (1) WO2021014464A1 (fr)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7460149B1 (en) * 2007-05-28 2008-12-02 Kd Secure, Llc Video data storage, search, and retrieval using meta-data and attribute data in a video surveillance system
WO2016083553A1 (fr) * 2014-11-27 2016-06-02 Kapsch Trafficcom Ab Procédé de commande d'un système de surveillance de trafic
EP3366522A1 (fr) * 2014-12-15 2018-08-29 Ricoh Company Ltd. Système de surveillance et véhicule pouvant être équipé d'un système de surveillance

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114120658A (zh) * 2021-10-27 2022-03-01 北京云汉通航科技有限公司 Expressway intelligent inspection robot system and inspection method
CN114120658B (zh) * 2021-10-27 2022-06-24 北京云汉通航科技有限公司 Expressway intelligent inspection robot system and inspection method
CN114022826A (zh) * 2022-01-05 2022-02-08 石家庄学院 Blockchain-based rail track detection method and system
CN114596704A (zh) * 2022-03-14 2022-06-07 阿波罗智联(北京)科技有限公司 Traffic event processing method, apparatus, device and storage medium
CN114596704B (zh) * 2022-03-14 2023-06-20 阿波罗智联(北京)科技有限公司 Traffic event processing method, apparatus, device and storage medium
EP4312197A1 (fr) * 2022-07-27 2024-01-31 Bayerische Motoren Werke Aktiengesellschaft Vehicle, apparatus, computer program and method for monitoring an environment of a vehicle
WO2024022633A1 (fr) * 2022-07-27 2024-02-01 Bayerische Motoren Werke Aktiengesellschaft Vehicle, apparatus, computer program and method for monitoring the environment of a vehicle
CN116311949A (zh) * 2023-05-17 2023-06-23 山东新众通信息科技有限公司 Urban traffic intelligent control system
CN116386336A (zh) * 2023-05-29 2023-07-04 四川国蓝中天环境科技集团有限公司 Robust road-network traffic flow computation method and system based on checkpoint license plate data
CN116386336B (zh) * 2023-05-29 2023-08-08 四川国蓝中天环境科技集团有限公司 Robust road-network traffic flow computation method and system based on checkpoint license plate data
CN116797436A (zh) * 2023-08-29 2023-09-22 北京道仪数慧科技有限公司 Processing system for road defect inspection using buses
CN116797436B (zh) * 2023-08-29 2023-10-31 北京道仪数慧科技有限公司 Processing system for road defect inspection using buses

Similar Documents

Publication Publication Date Title
WO2021014464A1 (fr) System, multi-utility device and method for monitoring vehicles for road safety
US11763669B2 (en) Technology for real-time detection and mitigation of remote vehicle anomalous behavior
US20220262239A1 (en) Determining causation of traffic events and encouraging good driving behavior
US11062414B1 (en) System and method for autonomous vehicle ride sharing using facial recognition
Bila et al. Vehicles of the future: A survey of research on safety issues
US11042619B2 (en) Vehicle occupant tracking and trust
KR102189569B1 (ko) Multifunctional smart object-recognition accident-prevention sign board and accident-prevention method using the same
CN111402612A (zh) Traffic event notification method and apparatus
CN112712717B (zh) Information fusion method, apparatus and device
Loce et al. Computer vision in roadway transportation systems: a survey
CN109345829B (zh) Monitoring method, apparatus, device and storage medium for an unmanned vehicle
KR102122859B1 (ko) Multi-target tracking method for a traffic video surveillance system
CN111699519A (zh) System, device and method for detecting anomalous traffic events in a geographic location
KR102174556B1 (ko) Video surveillance apparatus for managing traffic information using license plate recognition and artificial intelligence
KR102282800B1 (ko) Multi-target tracking method using lidar and a video camera
US20230138112A1 (en) Artificial intelligence methods and systems for remote monitoring and control of autonomous vehicles
Desai et al. Accident detection using ml and ai techniques
KR101498582B1 (ko) System and method for providing traffic accident data
Thevendran et al. Deep Learning & Computer Vision for IoT based Intelligent Driver Assistant System
US20230048304A1 (en) Environmentally aware prediction of human behaviors
US11830357B1 (en) Road user vulnerability state classification and reporting system and method
Fowdur et al. A mobile application for real-time detection of road traffic violations
Uchiyama et al. Risky Traffic Situation Detection and Classification Using Smartphones
TWI786725B (zh) Method and system for detecting and handling traffic violations
KR102483250B1 (ko) Video AI-based integrated safety management device for efficient control of marine city infrastructure

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20844295

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20844295

Country of ref document: EP

Kind code of ref document: A1