WO2023227351A1 - Method for operating a camera system of a track-bound vehicle, and system for a track-bound vehicle using artificial intelligence - Google Patents

Method for operating a camera system of a track-bound vehicle, and system for a track-bound vehicle using artificial intelligence

Info

Publication number
WO2023227351A1
WO2023227351A1 (PCT/EP2023/061931)
Authority
WO
WIPO (PCT)
Prior art keywords
track-bound vehicle
situation
camera
recording
Prior art date
Application number
PCT/EP2023/061931
Other languages
German (de)
English (en)
Inventor
Keno Buss
Philip LEUPOLD
Original Assignee
Siemens Mobility GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Siemens Mobility GmbH
Publication of WO2023227351A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/59Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition

Definitions

  • the invention relates to a method for operating a camera system of a track-bound vehicle, which comprises a plurality of camera devices.
  • the invention further relates to a system for a track-bound vehicle, which includes a plurality of camera devices.
  • DE 10 2020 207 068 A1 describes a method for triggering a function in a means of transport.
  • An operating object is captured using a camera and corresponding image data is generated.
  • Based on the image data a position of the actuation object relative to a virtual button is determined and a control signal is generated based on the determined relative position using artificial intelligence.
  • a detection area within and/or in the surroundings of the vehicle is detected by means of at least one camera device of the plurality of camera devices. Image information, which represents the detection area, is generated based on the detection using the at least one camera device. A recording situation, which represents a situation intended for recording the image information within the detection area, is recognized based on the image information using the at least one camera device.
  • a land-based facility uses an artificial neural network to create a model that is set up to recognize the recording situation.
  • a computer program representing the model is executed on the camera device to recognize the recording situation.
  • the invention is based on the finding that in previous systems for track-bound vehicles, image information that is generated using camera devices is sent to a central video server device regardless of their content.
  • the video server device is usually used to record or at least temporarily store the image information. This leads to large amounts of data being transmitted on the communication networks between the camera devices and the video server devices. In addition, these amounts of data must be stored on the video server devices, or at least buffered until they are forwarded, for example to a land-based device. It is particularly important here that only a small fraction of the captured image information shows situations that are relevant for storage: such recording situations rarely occur during operation.
  • the solution according to the invention solves this problem by recognizing situations worth recording directly on the camera device.
  • the camera device itself can already decide how the image information is to be handled, for example whether it should be transmitted to a central video server device or not.
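This on-device filtering decision can be sketched in a few lines. This is a minimal illustration, not the patented implementation; `classify` stands in for the deployed model and `send_to_video_server` for the vehicle's communication path (both names are assumptions):

```python
def handle_frame(frame, classify, send_to_video_server):
    """Edge-side filtering: a frame is forwarded to the central video
    server only if the on-camera classifier recognizes a recording
    situation; otherwise it is discarded on the camera device."""
    if classify(frame):               # True -> recording situation recognized
        send_to_video_server(frame)   # transmit over the vehicle network
        return True
    return False                      # nothing leaves the camera device
```

In this sketch, the decision of whether image data ever reaches the communication network is made where the data originates, which is exactly the data-volume saving the description argues for.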
  • the method according to the invention is based on the further finding that track-bound vehicles have a large number of camera devices that capture a detection area within and/or in the surroundings of the vehicle.
  • large amounts of data representing the image information have to be handled, for example by transmitting them to a central video server device and storing them by the video server device.
  • the application of the method according to the invention in a track-bound vehicle is particularly expedient and advantageous, since unnecessary transmission of image information can be prevented.
  • the camera device is preferably designed to detect light in the visible range and/or light in the infrared range.
  • Several camera devices can capture a common detection area: This can be the sum of the areas that are each optically captured by the individual camera devices. Alternatively, it can be an overlapping area that is captured by several camera devices at the same time.
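The two notions of a common detection area distinguished above (the sum of the individually covered areas versus the area captured by several cameras at the same time) can be illustrated for axis-aligned rectangular coverage zones. The rectangle representation is an illustrative assumption, not how the patent models coverage:

```python
def rect_area(r):
    # r = (x1, y1, x2, y2) with x1 < x2 and y1 < y2
    return (r[2] - r[0]) * (r[3] - r[1])

def overlap(a, b):
    """Area captured by both cameras simultaneously (the overlapping area)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return w * h if w > 0 and h > 0 else 0

def combined(a, b):
    """Area captured by at least one camera (inclusion-exclusion)."""
    return rect_area(a) + rect_area(b) - overlap(a, b)
```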
  • the image information is generated using the camera device.
  • the image information is available, for example, in the form of image data.
  • the image data represents the image information.
  • the person skilled in the art preferably understands the phrase “situation within the detection range” to mean that the situation occurs within the detection range of the camera device and the image information generated by the camera device represents the captured situation.
  • the camera device preferably stores the image information generated for a predetermined time interval in the past (starting from the time the recording situation was recognized). In this way, when a recording situation is recognized, image information is also available for the situations that were immediately preceding the recording situation and may have been the cause of the recording situation.
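Storing image information for a predetermined past interval is naturally done with a ring buffer that always holds the most recent frames. A minimal sketch under illustrative assumptions (frame rate, interval length, and all names are chosen for the example):

```python
from collections import deque

class PreEventBuffer:
    """Keeps the most recent frames so that, when a recording situation
    is recognized, footage from the immediately preceding interval is
    still available (illustrative sketch)."""

    def __init__(self, fps=25, seconds=10):
        # deque with maxlen drops the oldest frame automatically
        self.frames = deque(maxlen=fps * seconds)

    def push(self, frame):
        self.frames.append(frame)

    def flush(self):
        """Return the buffered pre-event frames for transmission/storage."""
        out = list(self.frames)
        self.frames.clear()
        return out
```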
  • the camera device recognizes the recording situation based on the image information. To recognize the recording situation, the camera device executes a computer program that represents a model. The model is created with the help of an artificial neural network.
  • the computer program is installed for use on the camera device (English: “deployment”).
  • the track-bound vehicle is a rail vehicle.
  • the rail vehicle is, for example, a multiple unit.
  • the method according to the invention is particularly useful in rail vehicles, since these generally have a large number of interior spaces and surrounding areas that can be monitored using camera devices. Especially when there are many interior spaces and surrounding areas to be monitored, the potential for savings in the transmission of image information on the rail vehicle's communication network and in storage is particularly high.
  • the image information which represents the recording situation is transmitted to a video server device via a communication network of the track-bound vehicle and stored by means of a storage device of the video server device.
  • the person skilled in the art understands the formulation according to which "the image information which represents the recording situation is transmitted to a video server device" to mean that only the image information which represents the recording situation is transmitted to the video server device. In other words: if a recording situation is recognized based on the captured image information using the at least one camera device, this image information is transmitted to the central video server device.
  • This embodiment is particularly advantageous for the operation of track-bound vehicles: operators of track-bound vehicles often have to handle large fleets and the associated extensive amounts of data that represent the captured image information. If, according to the embodiment, only the image information representing the recording situation is now transmitted on the communication networks of the vehicles and via communication connections to the land side and stored on appropriate video server devices, the potential for saving the amount of data to be handled is considerable.
  • the image information is "stored by means of a storage device of the video server device", which preferably means that the image information is at least temporarily stored.
  • the image information is optionally transmitted to a land-based device (e.g. an operations control center) via a communication link between the track-bound vehicle and the land-based facility.
  • the video server device is preferably a central video server device of the track-bound vehicle, to which several camera devices are assigned.
  • the camera devices send the generated image information via the communication network to the central video server device.
  • several central video server devices can be provided within the track-bound vehicle. For example, one of the several video server devices is active during operation, while another of the several video server devices is in sleep mode.
  • the video server device is a central, land-based video server device to which several track-bound vehicles are assigned.
  • the camera devices send the generated image information to the landside video server device via the vehicle's communication network and via a communication connection to the landside.
  • Several video server devices can also be provided on land for redundancy.
  • the video server device can also be a distributed server device, which has a vehicle-side part and a land-side part.
  • the communication network is preferably an Ethernet network. Further preferably, the communication network is an operator network of the track-bound vehicle (in contrast to a control network).
  • a warning message is triggered if a dangerous situation is recognized as a recording situation.
  • the warning message is preferably triggered by the video server device.
  • the video server device receives a detection message from the camera device that has recognized the recording situation as a dangerous situation. Based on the detection message, the video server device sends a warning message to a suitable output device for outputting the warning message.
  • the warning message is issued by means of an output device that can be perceived by a vehicle driver.
  • the warning message is issued by means of an output device that can be perceived by a companion of the vehicle.
  • if the vehicle driver notices the warning message, the vehicle driver has the opportunity to prevent effects on the safety of the operation of the track-bound vehicle by actively intervening in the operation of the vehicle (e.g. braking or stopping).
  • the intervention in the operation can be carried out alternatively or additionally (partly) automatically. For example, intermediate doors that separate different interior areas of the vehicle can be closed.
  • the video server device receives operating information which indicates whether the track-bound vehicle is in driving operation. Based on this information, the video server device can decide whether the warning message is sent to the output device that can be perceived by the vehicle driver.
  • the output device that can be perceived by the vehicle driver is preferably an output device arranged in a driver's desk of the vehicle driver.
  • the output device that can be perceived by the companion of the track-bound vehicle is preferably integrated in a portable terminal that the companion carries with him.
  • the warning message is alternatively or additionally issued by means of an output device of an operations control center that can be perceived by an operator of the vehicle.
  • the warning message is received by a data processing device of an executive authority.
  • the video server device sends the warning message to the data processing device of the executive authority based on the detection message and depending on a configuration.
  • the warning message can be issued using an output device in the authority's data processing device.
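The routing of the warning message described in the preceding bullets — driver's desk only during driving operation, companion terminal and control center in general, authority forwarding depending on configuration — can be sketched as a simple decision function. All target names and the parameterization are illustrative assumptions:

```python
def route_warning(detection_msg, in_driving_operation, notify_authority):
    """Sketch of the video server's routing decision for a warning
    message triggered by a detection message (detection_msg).
    Returns the output targets in notification order."""
    targets = ["companion_terminal", "control_center"]
    if in_driving_operation:
        # the driver's-desk output is only useful while driving
        targets.insert(0, "driver_desk")
    if notify_authority:
        # forwarding to an executive authority depends on configuration
        targets.append("authority_server")
    return targets
```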
  • the recording situation includes one or more rioting persons and/or a physical altercation between at least two persons. Both situations represent a danger to the surroundings of the rioting person(s) (e.g. through damage to property) and/or to the persons involved in the altercation as well as to other persons, for example passengers of the track-bound vehicle in the area; they are therefore dangerous situations worth recording.
  • the recording situation includes the appearance of a person wanted by the authorities.
  • the person wanted by the authorities can preferably be recognized based on their face using the camera device.
  • the recording situation includes a person in need of help.
  • the camera device can be used to detect that a person inside the vehicle has a medical emergency and the person is therefore in need of help.
  • the recording situation includes the occupancy of at least one seat of a plurality of seats, in particular the occupancy of a seat by a piece of luggage and/or an animal.
  • the method according to the invention can be used particularly advantageously for determining the occupancy of seats in the track-bound vehicle and for corresponding documentation.
  • the recording situation includes a weapon located inside the vehicle. Since the weapon can pose a danger to people inside the vehicle, it is a dangerous situation worth recording.
  • a recording situation is recognized using a front or rear camera device.
  • the plurality of camera devices is part of a video surveillance system for monitoring an interior of the track-bound vehicle.
  • the video surveillance system is in particular a so-called CCTV system (CCTV: Closed Circuit Television).
  • the artificial neural network has one or more layers of neurons that are not input neurons or output neurons.
  • the layers of neurons that do not belong to the input layer or the output layer are often referred to as hidden layers.
  • the hidden layers are preferably changed during training and learning of the artificial neural network.
  • Machine learning that involves an artificial neural network with several hidden layers is often referred to as deep learning.
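The structure described above — input layer, several hidden layers adjusted during training, output layer — amounts to repeated dense transformations with a nonlinearity. A minimal, dependency-free forward pass, purely illustrative of the "deep" structure and not the patented model:

```python
def relu(v):
    # elementwise rectified linear activation
    return [max(0.0, x) for x in v]

def dense(v, weights, biases):
    # one fully connected layer: weights is a list of rows, one per output neuron
    return [sum(w * x for w, x in zip(row, v)) + b
            for row, b in zip(weights, biases)]

def forward(x, layers):
    """Forward pass through the hidden layers (the parts changed during
    training); layers is a list of (weights, biases) pairs."""
    for weights, biases in layers:
        x = relu(dense(x, weights, biases))
    return x
```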
  • the artificial neural network is trained using training data, the training taking place in a secure state in which unwanted manipulation of the training is excluded.
  • Training data includes data relating to recording situations and data relating to other situations.
  • the data relating to a recording situation represents a “desired” recording situation.
  • the term “undesired” must be understood against this background: it refers to a situation that is not a recording situation provided for training purposes.
  • An undesirable manipulation of the training includes, for example, training data that teaches the artificial neural network to incorrectly assume a recording situation in certain situations, or to incorrectly fail to recognize a recording situation in certain situations. In other words: the manipulation could cause a deliberate misdirection during training so that real recording situations are not recognized as such in operation.
  • An undesirable manipulation of the training occurs, for example, if training data for the training of the artificial neural network is introduced as a result of a data attack, which allows dangerous behavior by people in the vehicle that is not recognized as such by the camera device during operation.
  • This embodiment is based, among other things, on the knowledge that manipulation of the recognition of recording situations can already be prepared during training.
  • training data can be introduced during training, which trains the artificial neural network in such a way that certain situations when using the camera device are not recognized as such.
  • so-called adversarial attacks can be prepared by injected training data so that the detection device is more susceptible to an adversarial attack during use.
  • the secured state is achieved, for example, by only using tested training data for training.
  • the training data is collected during the commissioning and/or testing phase when it can be ensured that the vehicle is not connected to the landside and that no unauthorized personnel who initiate an attack are present or have data access to the training.
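One common way to operationalize "only tested training data" is to admit only samples whose hashes appear on an allowlist established during the commissioning/testing phase. This is an illustrative sketch of that idea, not the mechanism claimed in the application:

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def filter_trusted(samples, allowlist):
    """Admit only training samples whose hash is on the allowlist,
    so injected samples (e.g. from a data attack) never reach training."""
    return [s for s in samples if sha256(s) in allowlist]
```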
  • the invention further relates to a computer program comprising commands which, when the program is executed by a computing device of a track-bound vehicle and/or a land-based device, cause it to carry out the method described above.
  • the invention further relates to a computer program product with a computer program of this type.
  • the invention further relates to a computer-readable storage medium, comprising commands which, when executed by a computing device of a track-bound vehicle and/or a land-based device, cause it to carry out the method described above.
  • the system according to the invention for a track-bound vehicle comprises a plurality of camera devices.
  • the camera devices are each set up to capture a detection area within and/or in the surroundings of the track-bound vehicle.
  • the camera devices are also each set up to generate image information, which represents the detection area, based on the detection.
  • the camera devices are also each set up to recognize a recording situation, which represents a situation intended for recording the image information within the detection area.
  • the camera system also includes a land-based device which is designed to use an artificial neural network to form a model that is set up to recognize the recording situation.
  • the camera system further comprises a computing device of the camera device, which is designed to execute a computer program representing the model for recognizing the situation.
  • the task mentioned at the beginning is further solved by a track-bound vehicle with a plurality of camera devices.
  • the camera facilities are each set up to capture a detection area within and/or in the surroundings of the track-bound vehicle.
  • the camera devices are also each set up to generate image information, which represents the detection area, based on the detection.
  • the camera devices are also each set up to recognize a recording situation, which represents a situation intended for recording the image information within the detection area.
  • a computing device of the camera device is designed to execute a computer program for recognizing the situation, the computer program representing a model which is formed using an artificial neural network and which is set up to recognize the recording situation.
  • Figure 1 shows schematically the structure of a system according to the invention
  • Figure 2 shows a schematic example of a situation for the use of the method and system according to the invention
  • FIG. 3 shows schematically the process of a method according to the invention
  • Figure 1 shows a schematic view of a system 1 with a track-bound vehicle 3, a land-side facility 14 and a land-side facility 105.
  • Figure 2 shows an example of a situation for using the system 1.
  • the track-bound vehicle 3 is a rail vehicle 4.
  • the landside facility 14 is part of an operations control center.
  • the track-bound vehicle 3 has a communication network 7, which is preferably designed as an Ethernet network.
  • a camera device 8, a camera device 9 and a video server device 10, among other things, are connected to the communication network 7 for data purposes.
  • a communications gateway 11 is connected to the communications network 7.
  • two camera devices 8 and 9 are present on the track-bound vehicle 3.
  • This configuration serves for a simple illustration of the invention and can readily be transferred to a constellation with more than two camera devices 8 and 9.
  • the camera devices 8 and 9 are set up and aligned to capture a detection area 5 within the track-bound vehicle 3 .
  • the camera system 6 is in particular a video surveillance system, for example a so-called closed-circuit television system (CCTV system).
  • the communication gateway 11 is, for example, a so-called mobile communication gateway (MCG).
  • the land-side device 14 has a communication network 17, which is designed as an Ethernet network.
  • a video server device 20 and an output device 22 are connected for data purposes to the communication network 17.
  • a ground communication gateway 21 is connected to the communication network 17, which is connected to a wireless communication interface 23.
  • the communication devices 15 and 25 together form a communication connection 30 for transmitting data between the track-bound vehicle 3 and the land-side facility 14, i.e. from the track-bound vehicle 3 to the land-side facility 14 and from the land-side facility 14 to the track-bound vehicle 3.
  • a communication connection 230 between the track-bound vehicle 3 and the land-based device 105.
  • a communication connection 130 between the land-side device 14 and the land-side device 105, which runs, for example, via the Internet 32.
  • Figure 3 shows a schematic flow diagram which represents an exemplary embodiment of the method according to the invention for operating the camera system 6:
  • an artificial neural network 106 is created on a land-based device 105.
  • the artificial neural network 106 has several layers of neurons that are not input neurons or output neurons.
  • the artificial neural network 106 forms a model which is intended to recognize, based on image information, a recording situation, i.e. a situation intended for recording the image information.
  • the model therefore acts as a classifier, which assigns input values (image information) to the class “recording situation” or the class “no recording situation”.
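The classifier's final decision reduces to thresholding a score for the two classes. A minimal stand-in for the trained network's output stage, where the features, weights and bias are assumed to come from the land-side training described above (all values here are illustrative):

```python
import math

def classify(features, weights, bias, threshold=0.5):
    """Binary classification of an image-feature vector into
    'recording situation' vs. 'no recording situation'."""
    z = sum(f * w for f, w in zip(features, weights)) + bias
    score = 1.0 / (1.0 + math.exp(-z))   # sigmoid output of the final neuron
    return "recording situation" if score >= threshold else "no recording situation"
```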
  • In a method step B1, the artificial neural network 106 is trained using training data.
  • the training takes place in a secure state in which unwanted manipulation of the training is impossible.
  • the secured state is achieved, for example, by only using tested training data for training.
  • This training data is collected during the commissioning and/or testing phase when it can be ensured that the vehicle is not connected to the landside and that no unauthorized personnel who initiate an attack are present or have data access to the training.
  • a model is formed with the help of the artificial neural network (method step B2), which detects whether a recording situation exists within the recorded detection area 5.
  • a computer program representing the model is installed on the camera device 8 and camera device 9 in a method step C and used in the further method, in particular in the operation of the track-bound vehicle 3.
  • the computer program is stored by means of a storage device 18 or 19 of the camera device 8 or 9 and executed by means of a computing device 118 or 119 of the camera device 8 or 9.
  • the detection area 5 within the track-bound vehicle 3 is captured in a method step D by means of the camera devices 8 and 9.
  • the camera devices 8 and 9 each generate image information in a method step E, which represents the detection area 5.
  • a recording situation 47 which represents a situation intended for recording the image information within the detection area 5, is recognized based on the image information using the camera devices 8 and 9:
  • the image information generated by the camera devices 8 and 9 is in each case processed during the execution of the computer program by means of the computing device 118 or 119 and classified as a recording situation 47.
  • the computer program is executed by means of the computing devices 118 and 119 in a method step F1.
  • the image information (which represents the recording situation) is transmitted to the video server device 10 via the communication network 7 in a method step G and stored in a method step H by means of a storage device 12 of the video server device 10.
  • a warning message is triggered in a method step J.
  • the warning message is preferably triggered by the video server device 10.
  • the video server device 10 receives a detection message from the camera device 8 and/or 9 that has recognized the recording situation as a dangerous situation. Based on the detection message, the video server device 10 sends a warning message to a suitable output device for outputting the warning message.
  • a warning message 134 is transmitted in a method step K1 to an output device 133 arranged in a driver's desk of the vehicle driver and output by it in a method step K2.
  • the output device 133 is connected to the communication network 7 for data purposes.
  • the warning message is transmitted in a method step KK1 to an output device 50 that can be perceived by a companion 44 of the track-bound vehicle 3 and output by this in a method step KK2.
  • the warning message is transmitted in a method step KKK1 to the output device 22 that can be perceived by an operator of the track-bound vehicle 3 and output in a method step KKK2.
  • the warning message is transmitted from the video server device 10 via the communication connection 30 to the landside device 14 and there to the output device 22.
  • the warning message can be received in a method step L by a data processing device of an executive authority.
  • the warning message arrives, for example, from a video server device 20 of the operations control center via the Internet 32 to a server of the authority.
  • the video server device 20 receives the warning message in advance, for example via the communication connection 30 from the video server device 10.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to a method for operating a camera system (6) of a track-bound vehicle (3) comprising a plurality of camera devices (8, 9). The invention further relates to a system (1) for a track-bound vehicle (3) comprising a plurality of camera devices (8, 9). A detection area (5) within and/or in the surroundings of the track-bound vehicle (3) is detected (D) by means of at least one camera device (8, 9) of the plurality of camera devices (8, 9). Image information representing the detection area (5) is generated (E) based on the detection by means of the at least one camera device (8, 9), and a recording situation (47), which represents a situation within the detection area (5) intended for recording the image information, is recognized (F) by means of the at least one camera device (8, 9) based on the image information. A land-side device (105) uses an artificial neural network (106) to form a model that is set up to recognize the recording situation (47), and a computer program representing the model is executed (F1) by means of the camera device (8, 9) in order to recognize the recording situation (47).
PCT/EP2023/061931 2022-05-25 2023-05-05 Method for operating a camera system of a track-bound vehicle, and system for a track-bound vehicle using artificial intelligence WO2023227351A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102022205244.7 2022-05-25
DE102022205244 2022-05-25

Publications (1)

Publication Number Publication Date
WO2023227351A1 (fr) 2023-11-30

Family

ID=86378510

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2023/061931 WO2023227351A1 (fr) 2022-05-25 2023-05-05 Method for operating a camera system of a track-bound vehicle, and system for a track-bound vehicle using artificial intelligence

Country Status (1)

Country Link
WO (1) WO2023227351A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019042689A1 (fr) * 2017-08-29 2019-03-07 Siemens Aktiengesellschaft Recognition of persons in areas with limited data transmission and data processing
DE102020207068A1 (de) 2020-06-05 2021-08-12 Siemens Mobility GmbH Method and system for triggering a function, and means of transport
US20210309183A1 (en) * 2020-04-03 2021-10-07 Micron Technology, Inc. Intelligent Detection and Alerting of Potential Intruders


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
BROWNLEE JASON: "A Gentle Introduction to Deep Learning for Face Recognition", 31 May 2019 (2019-05-31), pages 1 - 21, XP093050377, Retrieved from the Internet <URL:https://machinelearningmastery.com/introduction-to-deep-learning-for-face-recognition/> [retrieved on 20230530] *


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23723579

Country of ref document: EP

Kind code of ref document: A1