CN117698628A - Method and device for controlling danger inside and outside vehicle, vehicle and storage medium - Google Patents


Info

Publication number
CN117698628A
Authority
CN
China
Prior art keywords
vehicle
scene information
preset
scene
risk
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410077648.2A
Other languages
Chinese (zh)
Inventor
廖玉竹
余晓雪
王友兰
夏勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chery Automobile Co Ltd
Original Assignee
Chery Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chery Automobile Co Ltd filed Critical Chery Automobile Co Ltd
Priority to CN202410077648.2A priority Critical patent/CN117698628A/en
Publication of CN117698628A publication Critical patent/CN117698628A/en
Pending legal-status Critical Current


Landscapes

  • Traffic Control Systems (AREA)

Abstract

The application relates to a method and a device for controlling risks inside and outside a vehicle, a vehicle, and a storage medium. The method includes: acquiring state information of the vehicle, in-vehicle scene information, and out-of-vehicle scene information; performing fusion analysis on the in-vehicle scene information and the out-of-vehicle scene information based on a preset scene model database to obtain a plurality of first risk coefficients corresponding to the in-vehicle scene information and a plurality of second risk coefficients corresponding to the out-of-vehicle scene information; determining a first risk assessment level of the in-vehicle scene information from the first risk coefficients and a second risk assessment level of the out-of-vehicle scene information from the second risk coefficients; determining a control instruction for the vehicle from the state information and the assessment levels; and controlling the vehicle to execute the corresponding action according to the control instruction. This solves the problems of low in-vehicle hazard identification accuracy, untimely hazard intervention, and accidents caused by neglected out-of-vehicle hazard monitoring, ensuring the safety of children inside the vehicle and avoiding the risk of collision with children outside it.

Description

Method and device for controlling danger inside and outside vehicle, vehicle and storage medium
Technical Field
The present disclosure relates to the field of vehicle technologies, and in particular, to a method and apparatus for controlling risks inside and outside a vehicle, a vehicle, and a storage medium.
Background
In recent years, vehicle usage has risen sharply, and child safety accidents have become painfully frequent: children falling from or being caught by a door, falling out of a window, going unnoticed in a vehicle's blind zone, or suffering hypoxia after being left behind in a vehicle. Child protection measures for these scenarios remain limited.
In the related art, in-vehicle monitoring relies mainly on camera-based visual reminders, which suffer from low recognition accuracy, untimely reminders, and weak enforcement of intervention policies; out-of-vehicle monitoring is scarce and cannot effectively protect children. These problems remain to be solved.
Disclosure of Invention
The application provides a method and device for controlling danger inside and outside a vehicle, a vehicle, and a storage medium, to solve the problems of low in-vehicle hazard identification accuracy, untimely hazard intervention, and accidents caused by neglected out-of-vehicle hazard monitoring, thereby ensuring the safety of children inside the vehicle and avoiding the risk of collision with children outside it.
An embodiment of a first aspect of the present application provides a method for controlling risk inside and outside a vehicle, including the following steps:
Acquiring current state information, in-vehicle scene information and out-of-vehicle scene information of a vehicle;
based on a preset scene model database, respectively carrying out fusion analysis on the in-vehicle scene information and the out-vehicle scene information to obtain a plurality of first danger coefficients corresponding to the in-vehicle scene information and a plurality of second danger coefficients corresponding to the out-vehicle scene information;
determining a first risk assessment level of the scene information in the vehicle according to the first risk coefficients, obtaining a second risk assessment level of the scene information outside the vehicle according to the second risk coefficients, determining a control instruction of the vehicle according to the first risk assessment level and/or the second risk assessment level and the current state information, and controlling the vehicle to execute corresponding actions according to the control instruction.
Optionally, in some embodiments, before performing fusion analysis on the in-vehicle scene information and the out-vehicle scene information based on the preset scene model database, the method further includes: collecting a plurality of accident scene information and a plurality of human behavior characteristics corresponding to each accident scene information; based on the accident scene information, a preset simulation strategy is utilized to obtain a risk coefficient corresponding to each human behavior characteristic; and generating the preset scene model database according to the accident scene information and the risk coefficient corresponding to the human behavior characteristic.
Optionally, in some embodiments, based on the preset scene model database, fusion analysis is performed on the in-vehicle scene information and the out-of-vehicle scene information to obtain a plurality of first risk coefficients corresponding to the in-vehicle scene information and a plurality of second risk coefficients corresponding to the out-of-vehicle scene information, where the fusion analysis includes: comparing the in-vehicle scene information with the accident scene information in the preset scene model database to obtain a plurality of first target individual behavior characteristics; obtaining a plurality of first risk coefficients corresponding to the in-vehicle scene information according to the plurality of first target individual behavior characteristics; comparing the out-of-vehicle scene information with the accident scene information in the preset scene model database to obtain a plurality of second target individual behavior characteristics; and obtaining a plurality of second risk coefficients corresponding to the out-of-vehicle scene information according to the plurality of second target individual behavior characteristics.
Optionally, in some embodiments, the acquiring the in-vehicle scene information and the out-of-vehicle scene information of the vehicle includes: acquiring a video stream in a vehicle and a video stream outside the vehicle through a preset camera, and acquiring a human body lattice in the vehicle and a human body lattice outside the vehicle through a preset laser radar; the in-car scene information is obtained based on the in-car video stream and the in-car human body lattice, and the out-car scene information is obtained based on the out-car video stream and the out-car human body lattice.
Optionally, in some embodiments, before acquiring the in-vehicle scene information and the out-of-vehicle scene information, the method further includes: acquiring an in-vehicle riding condition and an out-vehicle environment image, judging whether the in-vehicle riding condition meets a first preset intervention condition or not, and judging whether the out-vehicle environment image meets a second preset intervention condition or not; and if the in-vehicle riding condition does not meet the first preset intervention condition and/or the out-vehicle environment image does not meet the second preset intervention condition, adding the acquired in-vehicle scene information and/or the out-vehicle scene information to the preset scene model database.
An embodiment of a second aspect of the present application provides an in-vehicle and out-of-vehicle hazard control device, including:
the acquisition module is used for acquiring the current state information, the in-vehicle scene information and the out-of-vehicle scene information of the vehicle;
the analysis module is used for respectively carrying out fusion analysis on the in-vehicle scene information and the out-vehicle scene information based on a preset scene model database to obtain a plurality of first danger coefficients corresponding to the in-vehicle scene information and a plurality of second danger coefficients corresponding to the out-vehicle scene information;
the control module is used for determining a first risk assessment level of the scene information in the vehicle according to the first risk coefficients, obtaining a second risk assessment level of the scene information outside the vehicle according to the second risk coefficients, determining a control instruction of the vehicle according to the first risk assessment level and/or the second risk assessment level and the current state information, and controlling the vehicle to execute corresponding actions according to the control instruction.
Optionally, in some embodiments, before performing fusion analysis on the in-vehicle scene information and the out-vehicle scene information based on the preset scene model database, the analysis module is further configured to: collecting a plurality of accident scene information and a plurality of human behavior characteristics corresponding to each accident scene information; based on the accident scene information, a preset simulation strategy is utilized to obtain a risk coefficient corresponding to each human behavior characteristic; and generating the preset scene model database according to the accident scene information and the risk coefficient corresponding to the human behavior characteristic.
Optionally, in some embodiments, the analysis module is specifically configured to: compare the in-vehicle scene information with the accident scene information in the preset scene model database to obtain a plurality of first target individual behavior characteristics; obtain a plurality of first risk coefficients corresponding to the in-vehicle scene information according to the plurality of first target individual behavior characteristics; compare the out-of-vehicle scene information with the accident scene information in the preset scene model database to obtain a plurality of second target individual behavior characteristics; and obtain a plurality of second risk coefficients corresponding to the out-of-vehicle scene information according to the plurality of second target individual behavior characteristics.
Optionally, in some embodiments, the acquiring module is specifically configured to: acquiring a video stream in a vehicle and a video stream outside the vehicle through a preset camera, and acquiring a human body lattice in the vehicle and a human body lattice outside the vehicle through a preset laser radar; the in-car scene information is obtained based on the in-car video stream and the in-car human body lattice, and the out-car scene information is obtained based on the out-car video stream and the out-car human body lattice.
Optionally, in some embodiments, before acquiring the in-vehicle scene information and the out-of-vehicle scene information, the acquiring module is further configured to: acquiring an in-vehicle riding condition and an out-vehicle environment image, judging whether the in-vehicle riding condition meets a first preset intervention condition or not, and judging whether the out-vehicle environment image meets a second preset intervention condition or not; and adding the acquired in-vehicle scene information and/or the acquired out-of-vehicle scene information to the preset scene model database under the condition that the in-vehicle riding condition does not meet the first preset intervention condition and/or the out-of-vehicle environment image does not meet the second preset intervention condition.
An embodiment of a third aspect of the present application provides a vehicle, including: the system comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor executes the program to realize the method for controlling the risk inside and outside the vehicle according to the embodiment.
An embodiment of the fourth aspect of the present application provides a computer-readable storage medium having stored thereon a computer program that is executed by a processor for implementing the in-vehicle and out-of-vehicle hazard control method as described in the above embodiment.
In this way, a scene model database is established; fusion analysis is performed on the acquired in-vehicle and out-of-vehicle scene information to obtain a plurality of first risk coefficients corresponding to the in-vehicle scene information and a plurality of second risk coefficients corresponding to the out-of-vehicle scene information; a first risk assessment level of the in-vehicle scene information is determined from the first risk coefficients and a second risk assessment level of the out-of-vehicle scene information from the second risk coefficients; and a control instruction for the vehicle is determined based on the current state information of the vehicle and the first and second risk assessment levels, so that the vehicle is controlled to execute the corresponding action according to the control instruction. This solves the problems of low in-vehicle hazard identification accuracy, untimely hazard intervention, and accidents caused by neglected out-of-vehicle hazard monitoring, ensuring the safety of children inside the vehicle and avoiding the risk of collision with children outside it.
Additional aspects and advantages of the application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a flow chart of a method for controlling risk inside and outside a vehicle according to an embodiment of the present application;
FIG. 2 is a schematic diagram of acquiring images of a person and a face according to one embodiment of the present application;
FIG. 3 is a schematic diagram of a hardware installation location according to one embodiment of the present application;
FIG. 4 is a schematic diagram of an image acquisition and analysis unit according to one embodiment of the present application;
FIG. 5 is a schematic diagram of software and hardware of a vehicle network interaction method for in-vehicle and out-of-vehicle child risk assessment according to one embodiment of the present application;
FIG. 6 is a block schematic diagram of an in-vehicle and out-of-vehicle hazard control apparatus provided in accordance with an embodiment of the present application;
fig. 7 is a block schematic diagram of a vehicle provided according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein the same or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below by referring to the drawings are exemplary and intended for the purpose of explaining the present application and are not to be construed as limiting the present application.
The following describes the method, apparatus, vehicle, and storage medium for controlling danger inside and outside a vehicle of the embodiments of the present application with reference to the accompanying drawings. Aiming at the problems of low in-vehicle hazard identification accuracy, untimely hazard intervention, and accidents caused by neglected out-of-vehicle hazard monitoring, the application provides a method for controlling danger inside and outside a vehicle, in which: current state information of the vehicle, in-vehicle scene information, and out-of-vehicle scene information are acquired; fusion analysis is performed on the in-vehicle and out-of-vehicle scene information based on a preset scene model database to obtain a plurality of first risk coefficients corresponding to the in-vehicle scene information and a plurality of second risk coefficients corresponding to the out-of-vehicle scene information; a first risk assessment level of the in-vehicle scene information is determined from the first risk coefficients and a second risk assessment level of the out-of-vehicle scene information from the second risk coefficients; a control instruction for the vehicle is determined according to the first and/or second risk assessment level and the current state information; and the vehicle is controlled to execute the corresponding action according to the control instruction. This solves the problems of low in-vehicle hazard identification accuracy, untimely hazard intervention, and accidents caused by neglected out-of-vehicle hazard monitoring, ensuring the safety of children inside the vehicle and avoiding the risk of collision with children outside it.
Specifically, fig. 1 is a schematic flow chart of a method for controlling risk inside and outside a vehicle according to an embodiment of the present application.
As shown in fig. 1, the method for controlling the danger inside and outside the vehicle comprises the following steps:
in step S101, current state information, in-vehicle scene information, and out-of-vehicle scene information of the vehicle are acquired.
It can be understood that the method for controlling danger inside and outside the vehicle monitors and controls dangerous scenes inside and outside the vehicle in real time. To achieve accurate control of these dangers, the embodiment of the present application first acquires the current state information of the vehicle, which is later used to determine the control instruction.
Specifically, the current state information of the vehicle in the embodiment of the present application may include: door lock open/closed state, window open/closed state, air-conditioning state, vehicle speed, steering state, and the like. These items of state information may be obtained through the corresponding sensors.
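The state items listed above can be grouped into one record for the later control-instruction logic. The sketch below is a minimal illustration; the field names, the `sensor_bus` dictionary, and `read_vehicle_state` are assumptions for exposition and are not defined by the patent.

```python
from dataclasses import dataclass

@dataclass
class VehicleState:
    """Snapshot of the current vehicle state (field names are illustrative)."""
    doors_locked: bool         # door lock open/closed state
    windows_open: dict         # window open/closed state per window, e.g. {"FL": False}
    ac_on: bool                # air-conditioning state
    speed_kmh: float           # vehicle speed
    steering_angle_deg: float  # steering state

def read_vehicle_state(sensor_bus: dict) -> VehicleState:
    # In a real vehicle these values would come from the corresponding sensors
    # over the vehicle bus; here a plain dictionary stands in for that bus.
    return VehicleState(
        doors_locked=sensor_bus["doors_locked"],
        windows_open=sensor_bus["windows_open"],
        ac_on=sensor_bus["ac_on"],
        speed_kmh=sensor_bus["speed_kmh"],
        steering_angle_deg=sensor_bus["steering_angle_deg"],
    )
```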
Optionally, in some embodiments, acquiring in-vehicle scene information and out-of-vehicle scene information of the vehicle includes: acquiring a video stream in a vehicle and a video stream outside the vehicle through a preset camera, and acquiring a human body lattice in the vehicle and a human body lattice outside the vehicle through a preset laser radar; the method comprises the steps of obtaining in-car scene information based on in-car video streams and in-car human body lattices, and obtaining out-car scene information based on out-car video streams and out-of-car human body lattices.
It can be appreciated that, with the development of whole-vehicle technology, vehicles now offer in-vehicle and out-of-vehicle monitoring as well as remote networking. In-vehicle safety monitoring has added cameras such as the DMS (Driver Monitoring System) and OMS (Occupant Monitoring System); with the development of automatic driving, many vehicles are equipped with laser radars, millimeter-wave radars, long- and short-focus cameras, 360-degree surround-view cameras, and the like for monitoring the vehicle's surroundings, covering an area of up to roughly 2.5 football fields; and vehicle networking has evolved from basic remote start to remote control, status queries, and more. The present application can therefore acquire in-vehicle and out-of-vehicle scene information using these cameras, laser radars, and other devices.
Specifically, the in-vehicle scene information and the out-vehicle scene information in the embodiments of the present application refer to the height of a human body, the position of the human body, and the behavior characteristics of the human body (such as facial expression, action trend, and the like). In order to accurately acquire in-vehicle scene information and out-of-vehicle scene information in real time, the embodiment of the application acquires in-vehicle video stream and out-of-vehicle video stream by using the camera and acquires a human body lattice by using the laser radar, as shown in fig. 2, fig. 2 (a) shows a schematic diagram of acquiring a human body lattice by using the in-vehicle laser radar in the embodiment of the application, fig. 2 (b) shows a schematic diagram of acquiring a human face feature lattice by using the in-vehicle laser radar in the embodiment of the application, and fig. 2 (c) shows a schematic diagram of acquiring a human body position by using the out-of-vehicle camera in the embodiment of the application.
Further, the embodiment of the present application can perceive the structure (size, emotion, position, and the like) of the captured human body/face. The acquisition of in-vehicle and out-of-vehicle scene information may involve three product models: human body/face detection, human body/face limb key points, and human body/face motion amplitude and trend, combined with business strategies such as human-body tracking and face snapshot to build a complete information acquisition channel. It may further include auxiliary recognition of the human skeletal-muscle structure, facial skeletal-muscle structure, and the like.
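The camera video stream and the lidar body lattice described above can be fused into one scene-information record. A minimal sketch, assuming simplified stand-in structures (per-frame detection dicts and `(x, y, z)` lattice points); the real perception outputs and the `build_scene_info` helper are not specified by the patent.

```python
def build_scene_info(video_frames, body_lattice):
    """Fuse camera frames and a lidar human-body point lattice into one
    scene-information record (illustrative).

    video_frames: list of per-frame detections for the person.
    body_lattice: list of (x, y, z) lidar points belonging to the person.
    """
    if not body_lattice:
        return {"person_present": False}
    xs = [p[0] for p in body_lattice]
    ys = [p[1] for p in body_lattice]
    zs = [p[2] for p in body_lattice]
    return {
        "person_present": True,
        # Body height estimated as the vertical extent of the lattice.
        "height_m": max(zs) - min(zs),
        # Body position as the lattice centroid in the x-y plane.
        "position_m": (sum(xs) / len(xs), sum(ys) / len(ys)),
        # Frames available for behavior-trend analysis (expression, motion).
        "num_frames": len(video_frames),
    }
```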
It should be noted that the vehicle in the embodiment of the present application includes cameras and laser radars for both the front and rear rows. To improve the coverage and accuracy of recognition, the present application proposes the hardware installation layout shown in fig. 3, where numeral (1) denotes a laser radar and numeral (2) denotes a camera. In this embodiment, cameras and laser radars are installed at the interior rear-view mirror position (front row) and at the top of the front seat backs (second row), so as to better capture in-vehicle information such as body height, position, posture, action, and expression. Outside the vehicle, coverage of the entire periphery must be considered; to ensure recognition accuracy, as shown in fig. 3, cameras and laser radars are installed at 3-4 evenly spaced positions on the front, on each side, and at the rear of the vehicle, achieving 360-degree coverage around the vehicle.
Meanwhile, there are also requirements on the mounting heights of the cameras and laser radars. To give the cameras a better field of view, they should be mounted at relatively high positions on the vehicle, such as the tailgate, the exterior rear-view mirrors, the front grille, and the fenders; the laser radars have lower height requirements than the cameras and are arranged at the front and rear of the body and around the body, below the camera positions.
In addition, as shown in fig. 3, the vehicle of the embodiment of the present application is configured with a DMC (internet-of-vehicles controller), which includes an image recognition module and uses internet-of-vehicles technology to output the acquired image information.
It should be noted that the above arrangements of the camera and the lidar are merely exemplary, and are not limited herein, and those skilled in the art may set the arrangements according to actual requirements during actual implementation.
Thus, by installing cameras and laser radars all around the vehicle, the present application performs dynamic human-body tracking and recognition and acquires in-vehicle and out-of-vehicle scene information in real time, greatly improving recognition accuracy and efficiency.
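The placement described above can be captured in a small configuration table. This is purely an illustrative encoding of the layout in fig. 3 (zone names and the choice of 4 exterior positions per zone are assumptions; the patent states 3-4):

```python
# Illustrative sensor-placement table: interior units at the rear-view mirror
# (front row) and the front-seat-back tops (second row); exterior units at
# evenly spaced points on the front, both sides, and rear for 360° coverage.
SENSOR_LAYOUT = {
    "interior": [
        {"location": "rear_view_mirror", "row": "front", "camera": True, "lidar": True},
        {"location": "front_seatback_top", "row": "second", "camera": True, "lidar": True},
    ],
    "exterior": [
        {"zone": zone, "positions": 4, "camera": True, "lidar": True}
        for zone in ("front", "left", "right", "rear")
    ],
}

def exterior_unit_count(layout):
    # Total number of exterior camera/lidar mounting points.
    return sum(zone["positions"] for zone in layout["exterior"])
```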
In step S102, fusion analysis is performed on the in-vehicle scene information and the out-of-vehicle scene information based on a preset scene model database, so as to obtain a plurality of first risk coefficients corresponding to the in-vehicle scene information and a plurality of second risk coefficients corresponding to the out-of-vehicle scene information.
Specifically, after human body/face tracking and behavior detection, in-vehicle and out-of-vehicle scene information is acquired; to judge whether the acquired scene information represents a dangerous scene, a scene model database is built in advance.
Optionally, in some embodiments, before the fusion analysis is performed on the in-vehicle scene information and the out-vehicle scene information based on the preset scene model database, the method further includes: collecting a plurality of accident scene information and a plurality of human behavior characteristics corresponding to each accident scene information; based on each accident scene information, a preset simulation strategy is utilized to obtain a risk coefficient corresponding to each human behavior characteristic; and generating a preset scene model database according to each accident scene information and the risk coefficient corresponding to each human behavior characteristic.
The accident scene information in the embodiment of the application refers to child safety accident scene information, each child safety accident scene information comprises a plurality of human body behavior features, and the human body behavior features in the embodiment of the application comprise human body behaviors, human body actions, limb amplitudes, moving speeds, postures, facial emotions and the like.
It can be appreciated that the scene model database of the embodiment of the present application covers common child safety accidents as comprehensively as possible. The scene information in the database is obtained by extracting typical scenes from a large experience library, which is built up through multiple channels (daily usage experience, public safety incidents, safety early warnings, user feedback, and the like) gathered by experience teams inside and outside the corporate group. After a scene is established, multiple simulations of it are run, and the risk coefficient corresponding to each human behavior feature is obtained through observation and estimation. The scene model database is thus obtained through extensive data extraction, simulation, estimation, and verification.
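The database construction just described (accident scenes, their behavior features, and simulation-derived risk coefficients) can be sketched as follows. The data shapes and the `simulate` callable are assumptions standing in for the preset simulation strategy, which the patent does not specify concretely.

```python
def build_scene_model_database(accident_scenes, simulate):
    """Build the preset scene model database (illustrative).

    accident_scenes: mapping scene_id -> list of human-behavior features
        observed in that accident scene.
    simulate: callable (scene_id, feature) -> risk coefficient in [0, 1],
        standing in for the preset simulation strategy.
    Returns: mapping scene_id -> {feature: risk_coefficient}.
    """
    database = {}
    for scene_id, features in accident_scenes.items():
        # One simulation-derived risk coefficient per behavior feature.
        database[scene_id] = {f: simulate(scene_id, f) for f in features}
    return database
```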
Optionally, in some embodiments, before acquiring the in-vehicle scene information and the out-of-vehicle scene information, the method further includes: acquiring an in-vehicle riding condition and an out-vehicle environment image, judging whether the in-vehicle riding condition meets a first preset intervention condition or not, and judging whether the out-vehicle environment image meets a second preset intervention condition or not; if the in-vehicle riding condition does not meet the first preset intervention condition and/or the in-vehicle external environment image does not meet the second preset intervention condition, the acquired in-vehicle scene information and/or out-of-vehicle scene information is added to a preset scene model database.
The in-vehicle riding conditions include whether a driver is present, whether a child is present, and whether the child is accompanied by an adult; the out-of-vehicle environment image of the embodiment of the present application indicates whether a child is within the blind-zone range.
It will be appreciated that in some cases there is no need to monitor for hazards even if a child is inside or outside the vehicle. The present application therefore sets a first preset intervention condition: before in-vehicle scene information is acquired, the occupant riding condition is obtained first, and no hazard monitoring is needed when the child is accompanied by an adult. Likewise, a second preset intervention condition is set: before out-of-vehicle scene information is acquired, the environment image of the vehicle's blind zone is obtained first, and no hazard monitoring is needed when no image or outline of a child is captured. That is, when the in-vehicle riding condition does not satisfy the first preset intervention condition and/or the out-of-vehicle environment image does not satisfy the second preset intervention condition, the current scene is determined to involve dangerous behavior, and the acquired dangerous scene is added to the base scene database, continuously enriching the database model and improving the comprehensive analysis capability of the system.
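The two gating checks above can be sketched as simple predicates. The flag names (`child_present`, `adult_present`, `child_in_blind_zone`) and the exact predicate logic are assumptions for illustration; the patent only names the conditions, not their encoding.

```python
def needs_monitoring(riding_condition, exterior_image):
    """Decide whether hazard monitoring should run, per the two preset
    intervention conditions described above (illustrative predicates).

    riding_condition: dict with "child_present" and "adult_present" flags.
    exterior_image: dict with a "child_in_blind_zone" flag.
    Returns (monitor_interior, monitor_exterior).
    """
    # First condition: an unaccompanied child in the vehicle requires monitoring;
    # a child accompanied by an adult does not.
    monitor_interior = riding_condition["child_present"] and not riding_condition["adult_present"]
    # Second condition: a child captured in the blind zone requires monitoring.
    monitor_exterior = exterior_image["child_in_blind_zone"]
    return monitor_interior, monitor_exterior
```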
Next, the dangerous behavior analysis algorithm of the fusion scenario of the embodiment of the present application will be specifically described by way of examples.
Optionally, in some embodiments, based on a preset scene model database, respectively performing fusion analysis on in-vehicle scene information and out-of-vehicle scene information to obtain a plurality of first risk coefficients corresponding to the in-vehicle scene information and a plurality of second risk coefficients corresponding to the out-of-vehicle scene information, where the method includes: comparing the scene information in the vehicle with a plurality of accident scene information in a preset scene model database to obtain a plurality of first target personal behavior characteristics; obtaining a plurality of first risk coefficients corresponding to the in-vehicle scene information according to the behavior characteristics of the plurality of first target individuals; comparing the scene information outside the vehicle with a plurality of accident scene information in a preset scene model database to obtain a plurality of second target individual behavior characteristics; and obtaining a plurality of second risk coefficients corresponding to the external scene information according to the second target individual behavior characteristics.
Building on the above embodiment, the scene model database in the embodiment of the present application contains a plurality of accident scene information, and each accident scene information contains a plurality of human behavior characteristics with their corresponding risk coefficients. The embodiment of the present application can therefore compare the acquired in-vehicle scene information with the plurality of accident scene information in the preset scene model database to obtain the human behavior characteristics that match the in-vehicle scene information, namely the plurality of first target human behavior characteristics, and from these obtain the corresponding risk coefficients, namely the plurality of first risk coefficients. Similarly, the acquired out-of-vehicle scene information can be compared with the plurality of accident scene information in the preset scene model database to obtain the human behavior characteristics that match the out-of-vehicle scene information, namely the plurality of second target human behavior characteristics, and from these the plurality of second risk coefficients.
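A minimal sketch of this database comparison, assuming the preset database is a list of behavior-characteristic entries with attached risk coefficients; the feature names, coefficient values, and matching rule are illustrative assumptions only:

```python
# Hypothetical preset scene model database: each entry pairs a human
# behavior characteristic with its simulated risk coefficient.
ACCIDENT_SCENE_DB = [
    {"feature": "body_against_window", "risk_coefficient": 30},
    {"feature": "operating_window_switch", "risk_coefficient": 80},
    {"feature": "approaching_rear_blind_zone", "risk_coefficient": 60},
]


def match_scene(observed_features):
    """Return the target behavior characteristics and risk coefficients
    from the database that match the behaviors detected in the scene."""
    matched = [entry for entry in ACCIDENT_SCENE_DB
               if entry["feature"] in observed_features]
    features = [m["feature"] for m in matched]
    coefficients = [m["risk_coefficient"] for m in matched]
    return features, coefficients


# Two behaviors detected in the in-vehicle scene information:
feats, coeffs = match_scene({"body_against_window", "operating_window_switch"})
```

The same `match_scene` call would be run once on the in-vehicle scene information (first target characteristics/coefficients) and once on the out-of-vehicle scene information (second target characteristics/coefficients).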
In some embodiments, the image acquisition and analysis process of the embodiment of the present application may be as shown in fig. 4, which is a schematic diagram of the image acquisition and analysis unit in a specific embodiment of the present application. The embodiment uses cameras to acquire video streams inside and outside the vehicle and a laser radar to acquire human-body point lattices inside and outside the vehicle, and performs human-body/face tracking and behavior detection based on the acquired video streams and point lattices, mainly covering the following aspects: human-body detection and target tracking of children inside and outside the vehicle; the child's skeleton, posture, and position relative to the safe area; real-time derivation of the child's behavior and actions from limb amplitude and movement speed; and face detection and emotion recognition of children in the vehicle.
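The per-child tracking output described above could be represented as a small data structure combining camera-derived posture with lidar-derived motion. The class fields, thresholds, and behavior labels below are illustrative assumptions, not the patent's concrete detection pipeline:

```python
from dataclasses import dataclass


@dataclass
class TrackedChild:
    """One tracked child, fusing camera and lidar observations."""
    track_id: int
    posture: str            # from skeleton/pose estimation on the video stream
    limb_amplitude: float   # limb-motion amplitude, normalized 0..1
    speed_mps: float        # movement speed derived from the lidar point lattice
    in_safe_area: bool      # position relative to the defined safe area


def detect_behavior(child: TrackedChild) -> str:
    """Combine limb amplitude and speed into a coarse real-time behavior label."""
    if child.limb_amplitude > 0.5 and child.speed_mps > 1.0:
        return "vigorous_movement"
    if not child.in_safe_area:
        return "leaving_safe_area"
    return "calm"
```

Labels like these would then be matched against the accident scene database to retrieve the corresponding risk coefficients.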
Further, based on the detected scene information inside and outside the vehicle, the dangerous-behavior analysis algorithm for fused scenes in the embodiment of the present application is used to judge whether the current scene is a dangerous scene. The analysis algorithm mainly covers the following aspects: the coincidence coefficient between the child's current posture and movement trend and known dangerous scenes, analyzed progressively and confirmed across multiple dimensions; the current vehicle state (door, window, lock state, real-time vehicle speed, in-vehicle temperature, air-conditioning state, etc.); and the in-vehicle personnel situation (whether an adult accompanies the child, whether the child's seat belt is fastened, and whether the vehicle is near a dangerous location).
For example, suppose the state information of the vehicle obtained in the embodiment of the present application indicates that the window lock is in the open state, a child is in the vehicle, and no accompanying parent is present, while the in-vehicle scene information obtained with the camera and laser radar shows that the child is positioned close to the window and is making a motion to operate the window switch. Comparing the current in-vehicle scene information with the preset basic model database yields the target human behavior characteristics and their risk coefficients: the child's body pressed against the window scores 30 points, operating the window switch scores 80 points, and the superposed risk coefficient is 110 points.
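The arithmetic of this worked example can be written out directly, with the matched coefficients superposed and compared against an assumed intervention threshold of 100 points (the threshold is an illustration, not a value from the patent):

```python
def superpose(coefficients, threshold=100):
    """Superpose matched risk coefficients and check them against an
    (assumed) intervention threshold."""
    total = sum(coefficients)
    return total, total >= threshold


# 30 points (body against window) + 80 points (operating the window switch)
total, intervene = superpose([30, 80])
```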
In step S103, a first risk assessment level of the in-vehicle scene information is determined according to the plurality of first risk coefficients, a second risk assessment level of the out-of-vehicle scene information is obtained according to the plurality of second risk coefficients, a control command of the vehicle is determined according to the first risk assessment level and/or the second risk assessment level and the current state information, and the vehicle is controlled to execute a corresponding action according to the control command.
It can be understood that in step S102, in the embodiment of the present application, fusion analysis is performed on in-vehicle scene information and out-of-vehicle scene information based on a preset scene model database, so as to obtain a plurality of risk coefficients, and further, the embodiment of the present application may use the risk coefficients as a criterion for determining whether safety assistance is required.
Specifically, the embodiment of the present application combines the current vehicle condition of the whole vehicle, that is, the current state information of the vehicle, with the in-vehicle personnel situation, and superposes the risk coefficients of each person's behavior characteristics to obtain a risk evaluation grade. The judgment proceeds across multiple dimensions, and once the overall result reaches the early-warning or intervention grade, effective measures are taken for safety assistance by means of the interaction assistance function of the Internet of Vehicles.
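One way this multi-dimensional superposition could look is sketched below; the state bonuses and grade cut-offs are chosen purely for illustration and are not values specified by the patent:

```python
def risk_grade(total_coefficient, window_unlocked=False, adult_present=True):
    """Turn a superposed behavior-risk coefficient plus vehicle-state and
    personnel factors into a risk evaluation grade (assumed thresholds)."""
    score = total_coefficient
    if window_unlocked:
        score += 20          # vehicle state raises the superposed score
    if not adult_present:
        score += 20          # an unaccompanied child raises it further
    if score >= 140:
        return 1             # level-1: emergency intervention
    if score >= 100:
        return 2             # level-2: active intervention
    if score >= 60:
        return 3             # level-3: early warning
    return 0                 # no safety assistance required
```

For the worked example above (110 points, window lock open, no accompanying adult), this sketch would escalate the grade to level 1.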
In some cases, if the in-vehicle scene information is analyzed to be a dangerous scene, the embodiment of the present application may determine the control instruction of the vehicle according to the first risk assessment level and the current state information; in other cases, if the out-of-vehicle scene information is analyzed to be a dangerous scene, the control instruction may be determined according to the second risk assessment level and the current state information; and if both the in-vehicle scene information and the out-of-vehicle scene information are analyzed to be dangerous scenes, the control instruction of the vehicle may be determined according to the first risk assessment level, the second risk assessment level, and the current state information.
The control instruction of the vehicle in the embodiment of the present application may be determined according to the risk assessment level. For example, when the risk assessment level is the level-3 early warning, the control instruction may be "play a voice prompt through the vehicle speaker", "push the warning information via the mobile-phone APP (Application)", and the like; when the risk assessment level is the level-2 early warning, the control instruction may be an active intervention measure such as locking the doors, locking the windows (or locking them while leaving a gap), turning on the air conditioner, or active braking; and when the risk assessment level is the level-1 early warning, the control instruction may, on top of the level-2 measures, include emergency actions such as calling the owner's phone and dialing 110 to alert the police.
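The level-to-measure mapping described above can be sketched as a simple lookup. The instruction strings paraphrase the text and are not a real vehicle-control API:

```python
# Assumed mapping from risk assessment level to control instructions.
CONTROL_STRATEGY = {
    3: ["voice_prompt_via_speaker", "push_warning_to_phone_app"],
    2: ["lock_doors", "lock_windows_with_gap", "turn_on_air_conditioner",
        "active_braking"],
}
# Level 1 applies the level-2 measures plus emergency actions.
CONTROL_STRATEGY[1] = CONTROL_STRATEGY[2] + ["call_owner", "dial_110_police"]


def control_instructions(level):
    """Return the control instructions for a risk assessment level
    (empty list when no assistance is required)."""
    return CONTROL_STRATEGY.get(level, [])
```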
For example, Table 1 is a typical safety-accident scene and interaction-strategy table according to an embodiment of the present application. The embodiment describes and summarizes the human-body characteristics of specific scenes and sets different Internet-of-Vehicles interaction strategies for early warning or taking measures to ensure child safety.
TABLE 1
It should be noted that the above embodiments are merely exemplary and do not limit the present application. In actual implementation, those skilled in the art can design control strategies for different scenes and different levels through scene fusion analysis.
Therefore, the present application utilizes a fused-scene analysis technique: through comprehensive judgments such as tracking recognition and behavior prediction, together with layer-by-layer superposition of risk coefficients, it ensures the accuracy of the judgment and the effectiveness of the reminding or intervention executed. Effective reminding (including remote reminding) is completed by combining the Internet of Vehicles with the whole-vehicle control strategy, and active intervention measures are taken when necessary.
To help those skilled in the art further understand the method for controlling in-vehicle and out-of-vehicle hazards according to the embodiments of the present application, the following examples illustrate the implementation strategy of the method and the software and hardware involved in the Internet-of-Vehicles interaction method.
Fig. 5 is a schematic software and hardware diagram of an Internet-of-Vehicles interaction method for in-vehicle and out-of-vehicle risk assessment according to an embodiment of the present application. As shown in fig. 5, the hardware involved may include: front- and rear-row in-vehicle cameras, front and rear laser radars, a 360° surround-view camera outside the vehicle, and a 360° surround laser radar outside the vehicle. The software involved may include: an image recognition and comprehensive scene judgment system and an IHU (Infotainment Head Unit) intelligent connected control system.
In some embodiments, the application scene is inside the vehicle. The embodiment of the present application first obtains the occupant seating situation: whether there are children, whether an adult accompanies them, whether only the driver is present, whether only children are present, and so on. It then obtains information such as the age, height, and weight of the child in the vehicle, so that behaviors and actions can be distinguished and judged. Relying on the whole-vehicle cameras (one each for the front and rear rows), the real-time state of the child and the child's size relative to objects in the in-vehicle environment are captured continuously, and the child's contour, actions, and movement are scanned in combination with the laser radar. Using the image recognition unit and the information obtained from the camera and laser radar, real-time recognition and judgment are performed based on the comprehensive scene judgment system; when the system detects a match with a specific scene in the data model base, necessary reminding and vehicle control are performed through the interaction measures of the intelligent connected control system, including effective measures such as in-vehicle prompt tones, mobile-phone APP information reminders, and hardware control of the doors, windows, air conditioner, and so on. After the danger reminding is completed, the vehicle records and learns the handling process and result according to how the owner dealt with the situation, perfecting and enriching the data model so that more accurate and timely judgments can be made later.
In some embodiments, the application scene is outside the vehicle. Under the whole-vehicle automated-driving architecture, the embodiment of the present application first obtains an environment image of the vehicle's blind zone based on hardware such as the laser radar, millimeter-wave radar, and 360° surround-view camera. When the image and contour of a child are captured, a key-warning state is entered and key information such as the child's actions, movement, and position relative to the vehicle is specially analyzed and processed. When the acquired information matches a risk identified by the comprehensive scene judgment system, a specific interaction measure of the intelligent connected control system is adopted in time to remind the user to pay attention to the surrounding environment, and the 360° panorama is automatically opened so that the user can quickly confirm the surroundings; if necessary, active braking and avoidance operations are taken to ensure the safety of children around the vehicle. After the danger reminding or braking avoidance is completed, the vehicle records and learns the handling process and result according to how the owner dealt with the situation, perfecting and enriching the data model so as to make more accurate and timely judgments later.
In summary, the present application acquires in-vehicle and out-of-vehicle scene information through the whole-vehicle hardware, collects the actual forms of child danger in the basic scene model database, and performs scene fusion analysis on the in-vehicle scene information and the out-of-vehicle scene information (including position, size, emotion, motion tracking, movement-trend analysis, etc.), thereby accurately judging the behavioral risk of persons in the vehicle, especially children. A comprehensive risk evaluation grade is then given; this output signal serves as the basis for the next action, cooperating with the Internet-of-Vehicles interaction to issue safety warnings or with the whole-vehicle control system to perform vehicle-control intervention, so as to ensure the safety of children in the vehicle and to avoid collision danger for children outside the vehicle.
According to the in-vehicle and out-of-vehicle hazard control method provided by the embodiment of the present application, a scene model database is established, and fusion analysis is performed on the acquired in-vehicle scene information and out-of-vehicle scene information, respectively, to obtain a plurality of first risk coefficients corresponding to the in-vehicle scene information and a plurality of second risk coefficients corresponding to the out-of-vehicle scene information. A first risk evaluation level of the in-vehicle scene information is then determined according to the plurality of first risk coefficients, a second risk evaluation level of the out-of-vehicle scene information is determined according to the plurality of second risk coefficients, and a control instruction of the vehicle is determined based on the current state information of the vehicle together with the first and second risk evaluation levels, so that the vehicle is controlled to execute the corresponding action according to the control instruction. In this way, through technologies such as scene construction, image acquisition, and scene fusion analysis, child-danger reminding or active safety measures are carried out to ensure child safety. This solves the problems of low in-vehicle hazard-identification accuracy, untimely hazard intervention, and accidents caused by neglected out-of-vehicle hazard monitoring, ensures the safety of children in the vehicle as far as possible, avoids the risk of collision with children outside the vehicle, and safeguards every family's travel.
Next, an in-vehicle and out-of-vehicle hazard control device according to an embodiment of the present application will be described with reference to the accompanying drawings.
Fig. 6 is a block schematic diagram of an in-vehicle and out-of-vehicle hazard control device according to an embodiment of the present application.
As shown in fig. 6, the in-vehicle and out-of-vehicle hazard control device 10 includes: an acquisition module 100, an analysis module 200 and a control module 300.
Specifically, the acquiring module 100 is configured to acquire current state information, in-vehicle scene information, and out-of-vehicle scene information of the vehicle; the analysis module 200 is configured to perform fusion analysis on the in-vehicle scene information and the out-vehicle scene information based on a preset scene model database, so as to obtain a plurality of first risk coefficients corresponding to the in-vehicle scene information and a plurality of second risk coefficients corresponding to the out-vehicle scene information; the control module 300 is configured to determine a first risk assessment level of the in-vehicle scene information according to the plurality of first risk coefficients, obtain a second risk assessment level of the out-of-vehicle scene information according to the plurality of second risk coefficients, determine a control instruction of the vehicle according to the first risk assessment level and/or the second risk assessment level and the current state information, and control the vehicle to execute a corresponding action according to the control instruction.
Optionally, in some embodiments, before performing fusion analysis on the in-vehicle scene information and the out-vehicle scene information based on the preset scene model database, the analysis module 200 is further configured to: collecting a plurality of accident scene information and a plurality of human behavior characteristics corresponding to each accident scene information; based on each accident scene information, a preset simulation strategy is utilized to obtain a risk coefficient corresponding to each human behavior characteristic; and generating a preset scene model database according to each accident scene information and the risk coefficient corresponding to each human behavior characteristic.
Optionally, in some embodiments, the analysis module 200 is specifically configured to: compare the in-vehicle scene information with the plurality of accident scene information in the preset scene model database to obtain a plurality of first target human behavior characteristics; obtain the plurality of first risk coefficients corresponding to the in-vehicle scene information according to the plurality of first target human behavior characteristics; compare the out-of-vehicle scene information with the plurality of accident scene information in the preset scene model database to obtain a plurality of second target human behavior characteristics; and obtain the plurality of second risk coefficients corresponding to the out-of-vehicle scene information according to the plurality of second target human behavior characteristics.
Optionally, in some embodiments, the obtaining module 100 is specifically configured to: acquire an in-vehicle video stream and an out-of-vehicle video stream through a preset camera, and acquire an in-vehicle human-body lattice and an out-of-vehicle human-body lattice through a preset laser radar; and obtain the in-vehicle scene information based on the in-vehicle video stream and the in-vehicle human-body lattice, and the out-of-vehicle scene information based on the out-of-vehicle video stream and the out-of-vehicle human-body lattice.
Optionally, in some embodiments, before acquiring the in-vehicle scene information and the out-of-vehicle scene information, the acquiring module 100 is further configured to: acquiring an in-vehicle riding condition and an out-vehicle environment image, judging whether the in-vehicle riding condition meets a first preset intervention condition or not, and judging whether the out-vehicle environment image meets a second preset intervention condition or not; and adding the acquired in-vehicle scene information and/or out-of-vehicle scene information to a preset scene model database under the condition that the in-vehicle riding condition does not meet the first preset intervention condition and/or the out-of-vehicle environment image does not meet the second preset intervention condition.
It should be noted that the foregoing explanation of the embodiment of the method for controlling the risk inside and outside the vehicle is also applicable to the apparatus for controlling the risk inside and outside the vehicle of this embodiment, and will not be repeated here.
According to the in-vehicle and out-of-vehicle hazard control device provided by the embodiment of the present application, a scene model database is established, and fusion analysis is performed on the acquired in-vehicle scene information and out-of-vehicle scene information, respectively, to obtain a plurality of first risk coefficients corresponding to the in-vehicle scene information and a plurality of second risk coefficients corresponding to the out-of-vehicle scene information. A first risk evaluation level of the in-vehicle scene information is then determined according to the plurality of first risk coefficients, a second risk evaluation level of the out-of-vehicle scene information is determined according to the plurality of second risk coefficients, and a control instruction of the vehicle is determined based on the current state information of the vehicle together with the first and second risk evaluation levels, so that the vehicle is controlled to execute the corresponding action according to the control instruction. In this way, through technologies such as scene construction, image acquisition, and scene fusion analysis, child-danger reminding or active safety measures are carried out to ensure child safety. This solves the problems of low in-vehicle hazard-identification accuracy, untimely hazard intervention, and accidents caused by neglected out-of-vehicle hazard monitoring, ensures the safety of children in the vehicle as far as possible, avoids the risk of collision with children outside the vehicle, and safeguards every family's travel.
Fig. 7 is a schematic structural diagram of a vehicle according to an embodiment of the present application. The vehicle may include:
a memory 701, a processor 702, and a computer program stored in the memory 701 and executable on the processor 702.
The processor 702 implements the in-vehicle and out-of-vehicle hazard control method provided in the above embodiments when executing the program.
Further, the vehicle further includes:
a communication interface 703 for communication between the memory 701 and the processor 702.
Memory 701 for storing a computer program executable on processor 702.
The memory 701 may include a high-speed RAM (Random Access Memory) and may also include a non-volatile memory, such as at least one disk memory.
If the memory 701, the processor 702, and the communication interface 703 are implemented independently, they may be connected to one another through a bus and communicate with one another. The bus may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 7, but this does not mean that there is only one bus or one type of bus.
Alternatively, in a specific implementation, if the memory 701, the processor 702, and the communication interface 703 are integrated on a chip, the memory 701, the processor 702, and the communication interface 703 may communicate with each other through internal interfaces.
The processor 702 may be a CPU (Central Processing Unit ) or ASIC (Application Specific Integrated Circuit, application specific integrated circuit) or one or more integrated circuits configured to implement embodiments of the present application.
The embodiment of the application also provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements the in-vehicle and out-of-vehicle hazard control method as above.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or N embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "N" is at least two, such as two, three, etc., unless explicitly defined otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and additional implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order from that shown or discussed, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present application.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the N steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. As with another embodiment, if implemented in hardware, may be implemented with a combination of any one or more of the following techniques, as is well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application specific integrated circuits having suitable combinational logic gates, programmable gate arrays, field programmable gate arrays, and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, includes one or a combination of the steps of the method embodiments.
Although embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the application, and that variations, modifications, alternatives, and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the application.

Claims (10)

1. A method for controlling danger inside and outside a vehicle, characterized by comprising the following steps:
acquiring current state information, in-vehicle scene information and out-of-vehicle scene information of a vehicle;
based on a preset scene model database, respectively carrying out fusion analysis on the in-vehicle scene information and the out-vehicle scene information to obtain a plurality of first danger coefficients corresponding to the in-vehicle scene information and a plurality of second danger coefficients corresponding to the out-vehicle scene information; and
determining a first risk assessment level of the scene information in the vehicle according to the first risk coefficients, obtaining a second risk assessment level of the scene information outside the vehicle according to the second risk coefficients, determining a control instruction of the vehicle according to the first risk assessment level and/or the second risk assessment level and the current state information, and controlling the vehicle to execute corresponding actions according to the control instruction.
2. The method according to claim 1, further comprising, before performing fusion analysis on the in-vehicle scene information and the out-of-vehicle scene information, respectively, based on the preset scene model database:
collecting a plurality of accident scene information and a plurality of human behavior characteristics corresponding to each accident scene information;
based on the accident scene information, a preset simulation strategy is utilized to obtain a risk coefficient corresponding to each human behavior characteristic;
and generating the preset scene model database according to the accident scene information and the risk coefficient corresponding to the human behavior characteristic.
3. The method according to claim 2, wherein the performing fusion analysis on the in-vehicle scene information and the out-of-vehicle scene information based on the preset scene model database to obtain a plurality of first risk coefficients corresponding to the in-vehicle scene information and a plurality of second risk coefficients corresponding to the out-of-vehicle scene information comprises:
comparing the in-vehicle scene information with the plurality of accident scene information in the preset scene model database to obtain a plurality of first target human behavior characteristics;
obtaining the plurality of first risk coefficients corresponding to the in-vehicle scene information according to the plurality of first target human behavior characteristics;
comparing the out-of-vehicle scene information with the plurality of accident scene information in the preset scene model database to obtain a plurality of second target human behavior characteristics;
and obtaining the plurality of second risk coefficients corresponding to the out-of-vehicle scene information according to the plurality of second target human behavior characteristics.
4. The method of claim 1, wherein the acquiring the in-vehicle scene information and the out-of-vehicle scene information of the vehicle comprises:
acquiring a video stream in a vehicle and a video stream outside the vehicle through a preset camera, and acquiring a human body lattice in the vehicle and a human body lattice outside the vehicle through a preset laser radar;
the in-car scene information is obtained based on the in-car video stream and the in-car human body lattice, and the out-car scene information is obtained based on the out-car video stream and the out-car human body lattice.
5. The method of claim 1, further comprising, prior to acquiring the in-vehicle scene information and the out-of-vehicle scene information:
acquiring an in-vehicle riding condition and an out-of-vehicle environment image, judging whether the in-vehicle riding condition meets a first preset intervention condition, and judging whether the out-of-vehicle environment image meets a second preset intervention condition;
and if the in-vehicle riding condition does not meet the first preset intervention condition and/or the out-vehicle environment image does not meet the second preset intervention condition, adding the acquired in-vehicle scene information and/or the out-vehicle scene information to the preset scene model database.
6. An apparatus for controlling danger inside and outside a vehicle, comprising:
an acquisition module configured to acquire current state information, in-vehicle scene information, and out-of-vehicle scene information of the vehicle;
an analysis module configured to perform fusion analysis on the in-vehicle scene information and the out-of-vehicle scene information, respectively, based on a preset scene model database, to obtain a plurality of first risk coefficients corresponding to the in-vehicle scene information and a plurality of second risk coefficients corresponding to the out-of-vehicle scene information; and
a control module configured to determine a first risk assessment level of the in-vehicle scene information according to the plurality of first risk coefficients, obtain a second risk assessment level of the out-of-vehicle scene information according to the plurality of second risk coefficients, determine a control instruction of the vehicle according to the first risk assessment level and/or the second risk assessment level and the current state information, and control the vehicle to execute a corresponding action according to the control instruction.
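The control module's mapping from coefficients to a level and then to an instruction can be sketched as below. The thresholds, the three-level scale, and the instruction names are all assumptions; the patent does not specify them.

```python
# Illustrative control logic for claim 6: aggregate risk coefficients into an
# assessment level, then pick a control instruction from the level and the
# vehicle state. Thresholds and instruction names are hypothetical.

def risk_level(coefficients):
    """Aggregate a list of risk coefficients into a level 0-2 (assumed scale)."""
    if not coefficients:
        return 0
    peak = max(coefficients)
    if peak >= 0.8:
        return 2   # high risk: immediate intervention
    if peak >= 0.5:
        return 1   # medium risk: warn occupants
    return 0       # low risk: no action

def control_instruction(in_level, out_level, vehicle_parked):
    """Combine in-vehicle and out-of-vehicle levels with the state information."""
    level = max(in_level, out_level)
    if level == 2:
        return "open_windows_and_alert" if vehicle_parked else "emergency_brake_warning"
    if level == 1:
        return "sound_warning"
    return "none"
```

Taking the maximum over both levels reflects the claim's "first risk assessment level and/or second risk assessment level": whichever side reports the higher danger drives the response.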
7. The apparatus of claim 6, wherein the analysis module is further configured to, prior to performing fusion analysis on the in-vehicle scene information and the out-of-vehicle scene information, respectively, based on the preset scene model database:
collect a plurality of pieces of accident scene information and a plurality of human behavior characteristics corresponding to each piece of accident scene information;
obtain, based on the accident scene information and a preset simulation strategy, a risk coefficient corresponding to each human behavior characteristic;
and generate the preset scene model database according to the accident scene information and the risk coefficients corresponding to the human behavior characteristics.
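Database generation per claim 7 can be sketched as follows. The "preset simulation strategy" is not disclosed, so a deterministic Monte Carlo stand-in is used here; scene names, feature labels, and the hazard rate are placeholder assumptions.

```python
# Hedged sketch of claim 7: assemble the preset scene model database from
# collected accident scenes, deriving one risk coefficient per human behavior
# feature via a placeholder simulation strategy.
import random

def simulate_risk(feature, trials=1000, hazard_rate=0.4):
    """Placeholder simulation: fraction of trials in which the behavior leads
    to a randomly drawn dangerous outcome. Seeded per feature for determinism."""
    rng = random.Random(sum(map(ord, feature)))
    return sum(rng.random() < hazard_rate for _ in range(trials)) / trials

def build_scene_model_database(accident_scenes):
    """accident_scenes: {scene description: [human behavior features]}."""
    return [
        {"scene": scene,
         "feature_risk": {f: simulate_risk(f) for f in features}}
        for scene, features in accident_scenes.items()
    ]

db = build_scene_model_database({
    "child left in cabin": ["unattended_child", "window_closed"],
    "child near rear wheel": ["crouching_child"],
})
```

The resulting records have the per-scene, per-feature shape that the comparison step of claims 3 and 8 looks up against.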
8. The apparatus of claim 7, wherein the analysis module is specifically configured to:
compare the in-vehicle scene information with the accident scene information in the preset scene model database to obtain a plurality of first target human behavior characteristics;
obtain the plurality of first risk coefficients corresponding to the in-vehicle scene information according to the plurality of first target human behavior characteristics;
compare the out-of-vehicle scene information with the accident scene information in the preset scene model database to obtain a plurality of second target human behavior characteristics;
and obtain the plurality of second risk coefficients corresponding to the out-of-vehicle scene information according to the plurality of second target human behavior characteristics.
9. A vehicle, characterized by comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the program to implement the method for controlling danger inside and outside a vehicle according to any one of claims 1-5.
10. A computer-readable storage medium having a computer program stored thereon, characterized in that the program is executed by a processor to implement the method for controlling danger inside and outside a vehicle according to any one of claims 1-5.
Priority Applications (1)

Application Number: CN202410077648.2A; Priority Date: 2024-01-18; Filing Date: 2024-01-18; Title: Method and device for controlling danger inside and outside vehicle, vehicle and storage medium

Publications (1)

Publication Number: CN117698628A; Publication Date: 2024-03-15

Family

ID=90155514


Similar Documents

Publication Publication Date Title
JP6888950B2 (en) Image processing device, external world recognition device
JP6607164B2 (en) Vehicle safe driving system
US20120050021A1 (en) Method and Apparatus for In-Vehicle Presence Detection and Driver Alerting
US20150365810A1 (en) Vehicular emergency report apparatus and emergency report system
EP4027307A1 (en) Method and device for protecting child inside vehicle, computer device, computer-readable storage medium, and vehicle
WO2009084042A1 (en) Driver state monitoring system
CN106467057A (en) The method of lane departure warning, apparatus and system
CN112489425A (en) Vehicle anti-collision early warning method and device, vehicle-mounted terminal equipment and storage medium
CN106740458A (en) A kind of enabling safety prompting system and method for vehicle
CN109409259A (en) Drive monitoring method, device, equipment and computer-readable medium
EP4100284A1 (en) Artificial intelligence-enabled alarm for detecting passengers locked in vehicle
CN113838265A (en) Fatigue driving early warning method and device and electronic equipment
CN115743031A (en) System and method for deterrence of intruders
CN110712651A (en) Method and device for in-vehicle monitoring
DE102013013226A1 (en) Motor vehicle with a detection of an exit intention
US11132585B2 (en) System and method for detecting abnormal passenger behavior in autonomous vehicles
CN117698628A (en) Method and device for controlling danger inside and outside vehicle, vehicle and storage medium
Von Jan et al. Don't sleep and drive-VW's fatigue detection technology
CN111791823A (en) Intelligent child safety monitoring seat system based on image recognition
Baltaxe et al. Marker-less vision-based detection of improper seat belt routing
WO2018066456A1 (en) Vehicle-mounted storage device
CN115760970A (en) System and method for capturing images of the surroundings of a vehicle for insurance claim processing
CN109624904B (en) Vehicle active defense method and system and vehicle
CN113911131A (en) Responsibility sensitive safety model calibration method for human-vehicle conflict in automatic driving environment
CN113561983A (en) Intelligent security detection system for personnel in vehicle and detection method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination