CN114926971A - In-vehicle scene detection method and device, electronic equipment and storage medium

Info

Publication number
CN114926971A
Authority
CN
China
Prior art keywords
vehicle
alarm
data set
module
perception
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210669286.7A
Other languages
Chinese (zh)
Inventor
蔡世民
谭明伟
徐刚
韩贤贤
冷长峰
高如杉
陈汉尧
李鹤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
FAW Group Corp
Original Assignee
FAW Group Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by FAW Group Corp
Priority to CN202210669286.7A
Publication of CN114926971A
Legal status: Pending

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B 21/18 Status alarms
    • G08B 21/24 Reminder alarms, e.g. anti-loss alarms
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R 21/00 Arrangements or fittings on vehicles for protecting or preventing injuries to occupants or pedestrians in case of accidents or other traffic risks
    • B60R 21/01 Electrical circuits for triggering passive safety arrangements, e.g. airbags, safety belt tighteners, in case of vehicle accidents or impending vehicle accidents
    • B60R 21/015 Electrical circuits for triggering passive safety arrangements including means for detecting the presence or position of passengers, passenger seats or child seats, and the related safety parameters therefor, e.g. speed or timing of airbag inflation in relation to occupant position or seat belt use
    • B60R 21/01512 Passenger detection systems
    • B60R 21/0153 Passenger detection systems using field detection presence sensors
    • B60R 21/01538 Passenger detection systems using field detection presence sensors for image processing, e.g. cameras or sensor arrays
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V 20/60 Type of objects
    • G06V 20/64 Three-dimensional objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mechanical Engineering (AREA)
  • Evolutionary Computation (AREA)
  • Emergency Management (AREA)
  • Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Traffic Control Systems (AREA)

Abstract

The embodiment of the invention discloses an in-vehicle scene detection method and device, an electronic device and a storage medium. The in-vehicle scene detection method comprises the following steps: acquiring an image data set and a perception data set generated by a visual perception module and a radar perception module detecting an in-vehicle scene; identifying an object to be alarmed according to the image data set and the perception data set; and generating object leaving prompt information according to an alarm object list and the object to be alarmed. By fusing visual and radar perception, the embodiment of the invention improves the reliability of the data, compensates for the shortcomings of a single sensor in detecting the object to be alarmed, covers scenes and functions that no single sensor can cover alone, improves the accuracy of target detection and reduces the probability of losing objects. Because the object leaving prompt information is generated based on a configurable alarm object list, the left objects are easy to configure, the prompts are better adapted to the user and to the user's actual life, and the user experience is improved.

Description

In-vehicle scene detection method and device, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of computer application, in particular to a method and a device for detecting a scene in a vehicle, electronic equipment and a storage medium.
Background
Automobiles have entered more and more households. While they bring great convenience, they also create new problems: valuables left in a vehicle may be stolen by lawbreakers, causing property loss to the user; worse, an infant forgotten in a vehicle by its parents faces a serious threat to life once high-temperature weather is encountered.
As society pays increasing attention to the problem of articles left in automobiles, more and more automobile manufacturers configure vehicles with functions such as driver fatigue detection, attention detection, emotion detection, child-forgotten reminding, left-object detection, health monitoring, gesture recognition and face recognition. Once a child or a valuable item is left in the automobile, the automobile can actively raise an alarm.
However, current left-object detection is simple: it relies on a single-function sensor configured in the automobile. Detection accuracy is therefore poor and false alarms are frequent, so users cannot really rely on the left-object detection function; worse, false left-object alarms can become a major disturbance to the user's normal life, greatly degrading the experience of using the automobile.
Disclosure of Invention
The invention provides an in-vehicle scene detection method and device, an electronic device and a storage medium, so as to fuse visual and radar perception, optimize in-vehicle scene detection, and improve the accuracy of detecting in-vehicle objects to be alarmed.
In a first aspect, an embodiment of the present invention provides a method for detecting an in-vehicle scene, where the method includes:
acquiring an image data set and a perception data set generated by a visual perception module and a radar perception module detecting an in-vehicle scene;
identifying an object to be alarmed according to the image data set and the perception data set;
and generating object leaving prompt information according to the alarm object list and the object to be alarmed.
In a second aspect, an embodiment of the present invention provides an in-vehicle scene detection apparatus, where the apparatus includes:
the data acquisition module is used for acquiring an image data set and a perception data set generated by the vision perception module and the radar perception module when the scene in the vehicle is detected;
the object detection module is used for identifying an object to be alarmed according to the image data set and the perception data set;
and the leaving alarm module is used for generating object leaving prompt information according to the alarm object list and the object to be alarmed.
In a third aspect, an embodiment of the present invention provides an in-vehicle scene detection electronic device, where the electronic device includes:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor, the computer program being executable by the at least one processor to enable the at least one processor to perform the in-vehicle scene detection method of any of the embodiments of the invention.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium for detecting an in-vehicle scene, where the computer-readable storage medium stores computer instructions, and the computer instructions are configured to, when executed, cause a processor to implement the in-vehicle scene detection method according to any embodiment of the present invention.
According to the technical scheme of the embodiment of the invention, the in-vehicle scene is detected by the visual perception module and the radar perception module, and the correspondingly generated image data set and perception data set are acquired, realizing the fusion of visual and radar data and compensating for the shortcomings of a single sensor in detecting objects to be alarmed. The object to be alarmed is identified in the image data set and the perception data set respectively, which improves detection accuracy and reduces the probability of losing objects. The object leaving prompt information is generated based on the alarm object list, which makes left objects easy to configure, better adapts the left-object prompts to the user, fits the user's actual life, and improves the user experience.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present invention, nor do they necessarily limit the scope of the invention. Other features of the present invention will become apparent from the following description.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a flowchart of an in-vehicle scene detection method according to an embodiment of the present invention;
fig. 2 is a flowchart of an in-vehicle scene detection method according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of an in-vehicle scene detection apparatus according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of an in-vehicle scene detection apparatus according to a fourth embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device for implementing a method for detecting an in-vehicle scene according to an embodiment of the present invention.
Detailed Description
In order to make those skilled in the art better understand the technical solutions of the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in other sequences than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example one
Fig. 1 is a flowchart of an in-vehicle scene detection method according to an embodiment of the present invention, where the embodiment is applicable to a situation of detecting an in-vehicle object in a full scene, and the method may be executed by an in-vehicle scene detection device, which may be implemented in a form of hardware and/or software, and may be configured in a vehicle or a remote server. As shown in fig. 1, the method includes:
s110, an image data set and a perception data set generated by the vision perception module and the radar perception module are obtained.
The visual perception module may be a device that collects in-vehicle scene objects visually; it can receive videos, images and the like, and is used to identify the appearance of objects, the space they occupy, and changes in both. Visual perception is highly accurate for the shape and category of an object. The radar perception module may be a device that collects in-vehicle scene objects by radar; radar perception is a wireless sensing technique that extracts the orientation, shape, motion characteristics and motion track of a target by analyzing the characteristics of the received target echoes, and can further distinguish the target from the environment. Radar accurately perceives the distance of an object and is unaffected by lighting conditions. The image data set may be a data set formed by integrating the data acquired by the visual perception module; it may include images, videos and the like, and different sensor types may yield different image data sets. The perception data set may be a data set formed from the data collected by the radar perception module; it may include information about in-vehicle objects, their positions and the like, and different sensor types may yield different perception data sets. The moments at which sensor data are acquired may include engine shut-off, door opening and the like.
Specifically, objects in the in-vehicle scene can be collected by the visual perception module and the radar perception module respectively, and the image data set and perception data set generated from the collected information can be acquired. There may be one or more visual perception modules, installed at any position of the vehicle where the view is not blocked; a visual perception module may be a vehicle-mounted camera. The visual perception module collects images and videos; specifically, a camera can monitor all articles in the in-vehicle scene based on image recognition technology and generate the image data set. There may be one or more radar perception modules, installed at any position of the vehicle that does not significantly attenuate radar waves; the radar perception module may include a millimeter-wave radar, a UWB radar and the like. Radar data collection may include using a radar sensor to emit radio waves and judging the condition of living bodies in the in-vehicle scene from the reflected waves, generating the perception data set. It can be appreciated that the visual perception module and the radar perception module can collect data on items in the in-vehicle scene, which may include all items in the vehicle: items configured in the vehicle, items left by the user, passengers detained in the vehicle, and so on.
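To make the two data sets concrete, the following Python sketch shows one plausible shape for them; it is illustrative only, and the sensor interfaces (`cam.grab()`, `radar.scan()`) and all field names are assumptions, not part of the disclosure.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ImageFrame:
    timestamp_s: float                       # capture time in seconds
    jpeg_bytes: bytes                        # encoded frame from an in-cabin camera

@dataclass
class RadarPoint:
    timestamp_s: float
    position_m: Tuple[float, float, float]   # (x, y, z) in the in-vehicle coordinate system
    velocity_mps: float                      # radial velocity; non-zero for moving/breathing bodies

@dataclass
class ImageDataSet:
    frames: List[ImageFrame] = field(default_factory=list)

@dataclass
class PerceptionDataSet:
    points: List[RadarPoint] = field(default_factory=list)

def acquire_datasets(cameras, radars, n_samples: int = 10):
    """Poll every mounted camera and radar once per sample and pool the results."""
    images, perception = ImageDataSet(), PerceptionDataSet()
    for _ in range(n_samples):
        for cam in cameras:
            images.frames.append(cam.grab())      # hypothetical camera API
        for radar in radars:
            perception.points.extend(radar.scan())  # hypothetical radar API
    return images, perception
```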
And S120, identifying the object to be alarmed according to the image data set and the perception data set.
The object to be alarmed may be a marked object that requires an alarm. It may be static or moving, and may be a living body or a non-living body: living bodies include people and animals with vital signs; non-living bodies include articles such as a mobile phone, a throw pillow, a tablet computer or jewelry. An alarm condition is satisfied when an object to be alarmed appears in the image data set or in the perception data set. Ways of identifying the object to be alarmed may include image detection, three-dimensional point cloud detection, deep neural network learning and the like.
Specifically, the object to be alarmed can be identified in the acquired image data set and perception data set, for example through image detection, three-dimensional point cloud detection or deep neural network learning. There may be one or more objects to be alarmed, and different identification methods can be adopted for different data sets. The object to be alarmed is identified separately in the image data set and the perception data set, and each data set is analyzed for whether it contains an object to be alarmed.
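A minimal sketch of this per-modality identification step is shown below; `image_detector` and `point_cloud_detector` are hypothetical stand-ins for the image-detection and three-dimensional point cloud methods named above (for example a CNN detector and a point-cloud classifier), not the patent's concrete algorithms.

```python
def identify_candidates(image_dataset, perception_dataset,
                        image_detector, point_cloud_detector):
    """Run each modality through its own detector.

    Both detectors are assumed to return (label, confidence, position)
    triples in the in-vehicle coordinate system.
    """
    image_hits = []
    for frame in image_dataset.frames:           # per-frame image detection
        image_hits.extend(image_detector(frame))
    radar_hits = point_cloud_detector(perception_dataset.points)
    return image_hits, radar_hits                # kept separate for later fusion
```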
And S130, generating object leaving prompt information according to the alarm object list and the object to be alarmed.
The alarm object list may be a list of items for which alarms are raised. The list may include one or more objects, which may be set by the user or the automobile manufacturer, and the list may take different forms, including a text list or an image list. The object leaving prompt information can be generated when a living or non-living object is left in the vehicle; it may include different types of prompts, such as visual or auditory prompts, or remote prompts through the cloud. For example, the user may be prompted by double-flashing the vehicle lights, honking, or raising an alarm on the user terminal.
In an exemplary embodiment, objects to be alarmed may be identified in the image data set and the perception data set by image detection, three-dimensional point cloud detection and the like, and the identified objects are compared with the alarm object list set by the user or the automobile manufacturer to determine whether an object to be alarmed appears in the in-vehicle scene. When such an object appears, object leaving prompt information is generated and an alarm is raised based on it, prompting the user by double-flashing the vehicle lights, honking, alarming on the user terminal and so on.
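As an illustration, the comparison against the alarm object list might look like the sketch below; the list contents and the message wording are hypothetical examples, since the disclosure leaves both to the user or the manufacturer.

```python
# Example alarm object list; in practice configured by the user or the OEM.
ALARM_OBJECT_LIST = {"child", "pet", "mobile_phone", "wallet", "tablet"}

def build_leave_behind_prompts(detections):
    """Keep only detections whose label appears in the alarm object list."""
    prompts = []
    for label, confidence, position in detections:
        if label in ALARM_OBJECT_LIST:
            prompts.append(f"Left-object reminder: {label} "
                           f"(confidence {confidence:.2f}) at {position}")
    return prompts
```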
According to the embodiment of the invention, the in-vehicle scene is detected by the visual perception module and the radar perception module, the correspondingly generated image data set and perception data set are acquired, and the object to be alarmed is identified from both. This fuses visual and radar perception, exploits the respective strengths and physical characteristics of the two data sets, compensates for the shortcomings of a single sensor in detecting objects to be alarmed, and maximizes function and performance, so that in-vehicle objects to be alarmed are identified more precisely and detection accuracy improves. Object leaving prompt information is then generated from the alarm object list and the object to be alarmed, helping the user find left objects in time and reducing the probability of losing them.
Example two
Fig. 2 is a flowchart of an in-vehicle scene detection method according to a second embodiment of the present invention. The technical scheme of this embodiment is further detailed on the basis of the scheme above and specifically includes the following steps:
and S210, monitoring trigger information of scene detection in the vehicle.
In the embodiment of the invention, a user can input trigger information to the vehicle to make it enter the in-vehicle scene detection mode, whereupon the in-vehicle sensors execute the in-vehicle scene detection function. Monitoring may be the process of detecting and receiving the user's trigger information; the vehicle may monitor the trigger information in one or more ways, and the trigger information may be of many types, for example a vehicle locking instruction, a door opening instruction or a vehicle power-off instruction. Depending on the form of the trigger information, the user's trigger instruction can be monitored in different ways.
Further, the trigger information includes at least one of: a vehicle locking instruction, a vehicle door opening instruction and a vehicle power-off instruction. Specifically, detection of the in-vehicle scene is started according to the user's trigger information. For example, when the trigger instruction is a vehicle locking instruction, it can be monitored through whether the doors can still be opened; when it is a door opening instruction, through the opening and closing of the doors; and when it is a vehicle power-off instruction, by judging whether the vehicle has cut off the power supply to certain functions and can no longer continue driving.
S220, when the vehicle speed is determined to be 0, starting the visual perception module to collect the image data set of the in-vehicle scene and the radar perception module to collect the perception data set of the in-vehicle scene.
Functions such as driver fatigue detection, attention detection and emotion monitoring are started when the vehicle speed is greater than 0, whereas some in-vehicle detection functions, such as child-forgotten reminding and left-object detection, can only start when the vehicle speed is 0. The vehicle speed signal can be collected by a vehicle speed acquisition module, and the signal may be collected continuously or at discrete intervals.
Specifically, after the vehicle speed acquisition module detects that the vehicle speed signal is 0, one or more visual perception modules and radar perception modules can be started. The started modules can be installed at any position of the vehicle that does not impair sensor detection, and the types of visual perception modules and radar perception modules may be the same or different. The visual perception module identifies in-vehicle scene objects by image perception and generates the image data set; the radar perception module identifies in-vehicle scene objects by radar perception and generates the perception data set. For example, several vision sensors and radar sensors may be activated to collect information from the in-vehicle scene; the collected information may include images of in-vehicle objects, biological or non-biological information, and the like. The types of data in the image data set and the radar data set may be the same or different, and different sensors may generate different types of data sets.
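Steps S210 and S220 together act as a gate on detection; a compact sketch of that gating is given below, reusing `acquire_datasets` from the earlier sketch. The trigger names and the module `start()` method are assumptions for illustration.

```python
TRIGGERS = {"LOCK", "DOOR_OPEN", "POWER_OFF"}   # the three trigger instructions above

def maybe_start_detection(trigger: str, vehicle_speed_kph: float,
                          vision_modules, radar_modules):
    """Start in-cabin collection only for a known trigger and a stationary vehicle."""
    if trigger not in TRIGGERS or vehicle_speed_kph != 0:
        return None                      # e.g. left-object detection needs speed == 0
    for module in vision_modules + radar_modules:
        module.start()                   # hypothetical module API
    return acquire_datasets(vision_modules, radar_modules)
```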
S230, at least one preconfigured object is identified in the image dataset to form a first item list.
The preconfigured object may be a recognizable object preset by the vehicle manufacturer according to experience; it is an object for which an alarm prompt is needed and may include living bodies and valuables, where living bodies include people, animals and the like, and valuables include mobile phones, purses and the like. The preconfigured objects may also be adjusted by the user, as desired, on top of the manufacturer's configuration. Preconfigured objects are identified in the image data set collected by the visual perception module to determine whether an alarm is required. It will be appreciated that the image data set may contain one or more preconfigured objects of the same or different types. The first item list may be the list of preconfigured objects identified in the image data set; depending on the acquisition sensor, it may hold different types of data, including images, videos, and the position of each image target in the in-vehicle coordinate system, where the in-vehicle coordinate system is used to locate the preconfigured objects.
Specifically, the data collected by the visual perception module may be read and the preconfigured objects extracted from it; after the image data set is generated, the identified preconfigured objects are assembled into the first item list. Specifically, objects to be alarmed can be identified from the image data set by image detection, deep neural network learning and the like.
S240, at least one pre-configured object is identified in the perception dataset to form a second item list.
Preconfigured objects can likewise be identified in the perception data set acquired by the radar perception module to judge whether to alarm. It will be appreciated that the perception data set may include one or more preconfigured objects of the same or different types. The second item list may be the list of preconfigured objects identified in the perception data set; depending on the radar sensor, it may hold different types of data, including distance data and the position of each item in the in-vehicle coordinate system.
In an exemplary embodiment, the data collected by the radar perception module may be read and the preconfigured objects extracted from the perception data set; after the perception data set is generated, the identified preconfigured objects are assembled into the second item list. Specifically, objects to be alarmed can be identified from the perception data set by three-dimensional point cloud detection, deep neural network learning and the like.
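The disclosure does not fix a point-cloud algorithm, so the following is only a toy heuristic for the radar side: grid the returns into cells and flag cells whose points show the small radial motion typical of breathing. A production system would use a trained three-dimensional detector, as the text suggests.

```python
def detect_living_bodies(points, cell_m=0.3, motion_eps=0.02, min_points=5):
    """Flag grid cells of radar returns that show breathing-scale micro-motion."""
    cells = {}
    for p in points:
        key = tuple(int(c // cell_m) for c in p.position_m)
        cells.setdefault(key, []).append(p)
    hits = []
    for pts in cells.values():
        if len(pts) < min_points:
            continue                                     # too sparse to judge
        if any(abs(p.velocity_mps) > motion_eps for p in pts):
            centroid = tuple(sum(c) / len(pts)
                             for c in zip(*(p.position_m for p in pts)))
            confidence = min(1.0, len(pts) / 50)         # crude density-based score
            hits.append(("living_body", confidence, centroid))
    return hits
```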
And S250, merging the first item list and the second item list into a third item list, and taking the pre-configured object in the third item list as an object to be alarmed.
The third item list may be the list of objects used for alarming. It is generated by merging the first item list of preconfigured objects identified in the image data set with the second item list of preconfigured objects identified in the perception data set, and may hold different types of data, including images, videos, and the position of each target in the in-vehicle coordinate system.
Specifically, the first item list corresponding to the image data set and the second item list corresponding to the perception data set are merged into the third item list, and preconfigured objects from both lists may be added to it. In an exemplary embodiment, to reduce the false alarm rate, only the preconfigured items that appear in both the first item list and the second item list may be added to the third item list. In yet another embodiment, to reduce the probability of missed alarms, all preconfigured items in the first and second item lists may be added to the third item list; it is understood that, for a preconfigured item appearing in both lists, the entry from either list may be added.
Further, the third item list also includes a confidence for each preconfigured object, where the confidence is the higher of the item's confidences in the first item list and the second item list.
The confidence may be the reliability with which the visual perception module or the radar perception module detects each preconfigured object, that is, the probability of repeatedly detecting the object correctly. The confidence may be produced by a deep neural network or by comparison of grey-scale maps. Since the visual and radar modules may detect a given object with the same or different confidence, the item confidences in the correspondingly generated first and second item lists may also be the same or different. By comparing the confidences of a preconfigured object in the first and second item lists, the detection information from the list with the higher confidence is extracted into the third item list.
Specifically, the variable parameters used for confidence comparison are extracted from the first and second item lists, the confidences of corresponding preconfigured objects in the two lists are compared, and the data with the higher item confidence is stored in the third item list. The confidences of objects in the first and second item lists are compared in one-to-one correspondence. The first item list detects each image target in the in-vehicle scene from the video images provided by the vision sensor and records the confidence of each image target together with its position in the in-vehicle coordinate system; the second item list detects each biological target of the in-vehicle environment from the point cloud provided by the radar sensor and records the confidence of each biological target together with its position in the in-vehicle coordinate system.
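A sketch of the merge, covering the higher-confidence rule and both merge strategies discussed above, could look like this (the dictionary layout `label -> (confidence, position)` is an assumption):

```python
def merge_item_lists(first_list, second_list, strategy="union"):
    """Merge vision (first) and radar (second) item lists into the third list.

    strategy="intersection": only items seen by both sensors (fewer false alarms).
    strategy="union": every item seen by either sensor (fewer missed alarms).
    For items seen by both, the higher-confidence entry wins.
    """
    if strategy == "intersection":
        labels = set(first_list) & set(second_list)
    else:
        labels = set(first_list) | set(second_list)
    third_list = {}
    for label in labels:
        a, b = first_list.get(label), second_list.get(label)
        if a and b:
            third_list[label] = a if a[0] >= b[0] else b   # keep higher confidence
        else:
            third_list[label] = a or b
    return third_list
```

For example, merging `{"child": (0.95, pos1)}` from vision with `{"child": (0.88, pos2), "pet": (0.70, pos3)}` from radar under the union strategy keeps the higher-confidence vision entry for the child and adds the pet.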
S260, searching the alarm object list for the emergency degree matched with the object to be alarmed, generating object leaving prompt information corresponding to the object to be alarmed according to the emergency degree, and alarming.
Different emergency degrees are configured with different alarm modes. The emergency degree may include at least two levels, each adopting a different alarm mode, such as vehicle-end alarm and cloud alarm. The alarm object list may be a list of recognizable objects preset by the vehicle manufacturer according to experience, that is, the objects that the visual perception module and the radar perception module can collect into the corresponding data sets; it may include living bodies, valuables and the like, where living bodies include people, animals and the like, and valuables include mobile phones, purses and the like.
Furthermore, the emergency degree at least comprises vehicle-end alarm and cloud alarm.
Specifically, after the third item list is generated according to the item confidences, the emergency degree matched with each object to be alarmed in the third item list is looked up in the alarm object list, the corresponding object leaving information is generated according to the emergency degree, and the corresponding type of alarm is triggered. The emergency degree can be divided into levels, with a different alarm mode for each, such as vehicle-end alarm and cloud alarm. The vehicle-end alarm may be the first emergency level, warning the driver and passengers visually or audibly, for example by double-flashing the vehicle lights or honking. The cloud alarm sends the alarm signals for child-forgotten reminders and left-object reminders that require cloud alarming to the cloud background; the alarm signal may include the emergency degree decision result, pictures of the items to be detected and other data, and the cloud background can back up the data and promptly push the alarm to the owner's mobile terminal.
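One way to express the two-level dispatch is sketched below; the emergency-degree table, the vehicle-end methods (`flash_hazards`, `honk`) and the cloud `push` call are all illustrative assumptions, not APIs from the disclosure.

```python
# Hypothetical emergency-degree table: living bodies escalate to the cloud,
# ordinary valuables trigger a vehicle-end alert first.
EMERGENCY_DEGREE = {"child": "cloud", "pet": "cloud",
                    "mobile_phone": "vehicle", "wallet": "vehicle"}

def dispatch_alarms(third_list, vehicle, cloud):
    for label, (confidence, position) in third_list.items():
        level = EMERGENCY_DEGREE.get(label)
        if level == "vehicle":
            vehicle.flash_hazards()      # visual warning at the vehicle end
            vehicle.honk()               # audible warning at the vehicle end
        elif level == "cloud":
            cloud.push(message=f"{label} detected in cabin",
                       evidence={"confidence": confidence,
                                 "position": position})
```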
And S270, acquiring vehicle control information corresponding to the object leaving prompt information, and executing processing operation according to the vehicle control information.
The vehicle control information may be a control request for the vehicle generated from the different kinds of object leaving information; for example, a child left in the vehicle may generate a request to turn on the air conditioner or open a window. The processing operation may be the operation the vehicle owner performs after receiving the vehicle control information; it is the control applied to the vehicle once the owner has judged the correctness and emergency degree of the child-forgotten and left-object alarm signals, and may include unlocking the vehicle, remotely starting the engine, turning on the air conditioner and so on. The user can select different processing operations as needed.
Further, the processing operation includes at least one of: unlocking the vehicle, remotely starting the engine and turning on the air conditioner.
Specifically, when the visual perception module and the radar perception module detect an object or a child left in the vehicle, different types of alarm information can be generated, and a cloud alarm produces the corresponding object leaving prompt information. After the cloud alarm, the vehicle control information corresponding to each type of object leaving prompt can be acquired through the user's mobile terminal, and the user can carry out processing operations such as unlocking the vehicle, remotely starting the engine or turning on the air conditioner according to the acquired vehicle control information.
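The owner's response path can be summarized as a small command whitelist; the operation names and the vehicle methods below are hypothetical, chosen only to mirror the three processing operations listed above.

```python
ALLOWED_OPERATIONS = {"unlock", "remote_start_engine", "turn_on_air_conditioner"}

def handle_owner_command(command: str, vehicle):
    """Execute a processing operation selected by the owner on the mobile terminal."""
    if command not in ALLOWED_OPERATIONS:
        raise ValueError(f"unsupported operation: {command}")
    getattr(vehicle, command)()          # e.g. vehicle.unlock()
```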
According to the embodiment of the invention, the in-vehicle scene is detected by starting the visual perception module and the radar perception module, the correspondingly generated image data set and perception data set are acquired, and the object to be alarmed is identified from them; a more accurate list of objects to be alarmed is extracted by comparing confidences, realizing the fusion of visual and radar data and improving identification accuracy. Alarms are raised in different modes according to the emergency degree of each object to be alarmed, so that the user discovers left objects and forgotten children more promptly, and the vehicle owner can carry out processing operations through the vehicle control information, avoiding the danger of high temperature to a child detained in the vehicle and reducing accidents.
Further, generating and raising object leaving prompt information corresponding to the object to be alarmed according to the emergency degree includes:
when the emergency degree is vehicle-end alarm, generating object leaving prompt information and sending it to the vehicle alarm module, so as to trigger the vehicle alarm module to alarm; and
when the emergency degree is cloud alarm, generating object leaving prompt information and uploading it to the cloud background, so that the cloud background pushes the object leaving information to the user.
Specifically, the emergency degree may include at least two levels, each adopting a different alarm mode, such as vehicle-end alarm and cloud alarm. The emergency degree of an object to be alarmed may be preset by the vehicle manufacturer according to experience for each recognizable object that the visual perception module and the radar perception module can collect into the corresponding data sets. Illustratively, when the emergency degree is vehicle-end alarm, the generated object leaving prompt information can be sent to the vehicle alarm module, and the vehicle gives different types of alarms according to the object leaving information, warning the driver and passengers visually and audibly, specifically by double-flashing the vehicle lights, honking and the like. When the emergency degree is cloud alarm, the generated object leaving prompt information can be uploaded to the cloud background; the uploaded information may include photos of the child or left article in the vehicle, health status and the like, so that the user can judge the correctness and emergency degree of the alarm. The cloud background can push the object leaving information to the user's mobile terminal, allowing the user to promptly discover the child or left object.
Example three
Fig. 3 is a schematic structural diagram of an in-vehicle scene detection apparatus according to a third embodiment of the present invention. As shown in fig. 3, the apparatus includes: a data acquisition module 31, an object detection module 32 and a leaving alarm module 33.
The data acquisition module 31 is configured to acquire an image data set and a sensing data set generated by the vision sensing module and the radar sensing module when the scene in the vehicle is detected.
And the object detection module 32 is used for identifying the object to be alarmed according to the image data set and the perception data set.
And the leaving alarm module 33 is used for generating object leaving prompt information according to the alarm object list and the object to be alarmed.
According to the embodiment of the invention, the data acquisition module 31 acquires the image data set and the perception data set generated by the visual perception module and the radar perception module detecting the in-vehicle scene, and the object detection module 32 identifies the object to be alarmed according to the image data set and the perception data set, realizing fused visual and radar perception. This exploits the respective strengths and physical characteristics of the two data sets, compensates for the shortcomings of a single sensor in detecting objects to be alarmed, maximizes function and performance, improves detection accuracy and reduces the probability of losing objects. The leaving alarm module 33 generates the object leaving prompt information according to the alarm object list and the object to be alarmed, which makes left objects easy to configure, better adapts the left-object prompts to the user's actual life, and improves the user experience.
Further, on the basis of the above embodiment of the invention, the data acquisition module 31 includes:
the trigger information monitoring unit is used for monitoring trigger information of scene detection in the vehicle, wherein the trigger information comprises at least one of the following information: the method comprises a vehicle locking instruction, a vehicle door opening instruction and a vehicle power-off instruction.
And the information acquisition starting unit is used for starting the visual perception module to acquire the image data set of the scene in the vehicle and the perception data set of the scene in the vehicle acquired by the radar perception module under the condition that the vehicle speed is determined to be 0.
Further, on the basis of the above-mentioned embodiment of the invention, the object detection module 32 includes:
a first data acquisition unit for identifying at least one preconfigured object in the image dataset to form a first item list.
A second data acquisition unit for identifying at least one preconfigured object in the perception data set to form a second item list.
And the data integration unit is used for merging the first item list and the second item list into a third item list and taking the pre-configured object in the third item list as the object to be alarmed. The third item list also includes a confidence for each preconfigured object, the confidence being a higher item confidence in the first item list and the second item list.
Further, on the basis of the above embodiment of the present invention, the leaving alarm module 33 includes:
and the list matching unit is used for searching the emergency degree matched with the object to be alarmed in the alarm object list.
The object alarm unit is used for generating object leaving prompt information corresponding to the object to be alarmed according to the emergency degree and alarming; wherein, the emergency degree at least comprises vehicle end alarm and cloud alarm.
Further, on the basis of the above embodiment of the present invention, the in-vehicle scene detection apparatus further includes:
and the vehicle-end alarm unit is used for generating object leaving prompt information and sending the object leaving prompt information to the vehicle alarm module to trigger the vehicle alarm module to alarm under the condition that the emergency degree is the vehicle-end alarm.
And the cloud alarm unit is used for generating object leaving prompt information under the condition that the emergency degree is cloud alarm, and uploading the object leaving prompt information to the cloud background so that the cloud background can push the object leaving information for the user.
Further, on the basis of the above embodiment of the present invention, the in-vehicle scene detection apparatus further includes:
the vehicle control unit is used for acquiring vehicle control information corresponding to the object leaving prompt information and executing processing operation according to the vehicle control information, wherein the processing operation comprises at least one of the following operations: unlocking the vehicle, remotely starting the engine and starting the air conditioner.
The in-vehicle scene detection device provided by the embodiment of the invention can execute the in-vehicle scene detection method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
Example four
Fig. 4 is a schematic structural diagram of an in-vehicle scene detection apparatus according to a fourth embodiment of the present invention. In an exemplary embodiment, the apparatus may include: a vehicle speed acquisition module 40, an intelligent entering module 41, a visual perception module 42, a radar perception module 43, a perception fusion module 44, a monitoring decision module 45, a vehicle-end alarm module 46, a cloud alarm module 47, a cloud background 48 and a vehicle owner mobile terminal 49.
The vehicle speed acquisition module 40 multiplexes a vehicle speed sensor of the vehicle and acquires the vehicle speed signal, which is used for judging the entry conditions of the functions. Some functions, such as driver fatigue detection, attention detection and emotion detection, are available only when the vehicle speed is greater than 0; others, such as child-forgotten reminding and left-object detection, are available only when the vehicle speed is 0.
The intelligent entering module 41 multiplexes the intelligent entry function of the vehicle and judges the entry conditions of the functions; functions such as child-forgotten reminding and left-object reminding start to work only after the vehicle is locked.
The visual perception module 42 perceives the passenger compartment and the passengers through the camera, monitoring from the visual dimension.
The radar perception module 43 perceives living bodies in the passenger compartment through radar, monitoring from the radar dimension.
The perception fusion module 44 performs data fusion on the visual perception result and the radar perception result, compensating for the shortcomings in the detection principles of the individual sensors so that function and performance reach their optimum.
And the monitoring decision module 45 is a module for receiving the vehicle information and the perception result of the perception fusion module, firstly judging which functions meet the entering conditions, and further performing function alarm decision according to the fusion perception result.
And the vehicle-end alarm module 46 is used for giving alarms such as vehicle-end vision, auditory sense and the like according to the decision result of the monitoring decision module and warning a driver or passengers.
The cloud alarm module 47 sends, according to the decision result of the monitoring decision module, the alarm signals that require cloud alarming for child-forgotten reminders and left-object reminders to the cloud background, together with data such as pictures and videos (for example, photos of the child or the left article in the vehicle), so that the owner can judge the correctness and emergency degree of the alarm.
The cloud background 48 receives the alarm and the data such as pictures and videos sent by the cloud alarm module, for example photos of the child or lost article in the vehicle; it is the remote facility with which the owner judges the correctness and emergency degree of the alarm, and it backs up the data and pushes it to the owner in time.
The vehicle owner mobile terminal 49 is a mobile terminal, such as a mobile phone or an iPad, that receives the information pushed by the background, warns the owner and carries out the next processing step. Through the mobile terminal, the owner can unlock the vehicle, remotely start the engine, turn on the air conditioner, and open the doors and windows to cool the cabin, avoiding the tragedy of an infant dying of heat when detained in the vehicle on a scorching summer day.
The in-vehicle scene detection device provided by the embodiment of the invention can execute the in-vehicle scene detection method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
Example five
FIG. 5 illustrates a schematic diagram of an electronic device 50 that may be used to implement an embodiment of the invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 5, the electronic device 50 includes at least one processor 51 and a memory communicatively connected to the at least one processor 51, such as a read-only memory (ROM) 52 and a random access memory (RAM) 53. The memory stores a computer program executable by the at least one processor, and the processor 51 can perform various suitable actions and processes according to the computer program stored in the ROM 52 or loaded from a storage unit 58 into the RAM 53. The RAM 53 can also store various programs and data necessary for the operation of the electronic device 50. The processor 51, the ROM 52 and the RAM 53 are connected to each other via a bus 54. An input/output (I/O) interface 55 is also connected to the bus 54.
A plurality of components in the electronic apparatus 50 are connected to the I/O interface 55, including: an input unit 56 such as a keyboard, a mouse, or the like; an output unit 57 such as various types of displays, speakers, and the like; a storage unit 58 such as a magnetic disk, an optical disk, or the like; and a communication unit 59 such as a network card, modem, wireless communication transceiver, etc. The communication unit 59 allows the electronic device 50 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The processor 51 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the processor 51 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, or the like. The processor 51 performs the various methods and processes described above, such as an in-vehicle scene detection method.
In some embodiments, an in-vehicle scene detection method may be implemented as a computer program tangibly embodied in a computer-readable storage medium, such as storage unit 58. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 50 via the ROM 52 and/or the communication unit 59. When the computer program is loaded into the RAM 53 and executed by the processor 51, one or more steps of an in-vehicle scene detection method as described above may be performed. Alternatively, in other embodiments, the processor 51 may be configured to perform an in-vehicle scene detection method by any other suitable means (e.g., by way of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for implementing the methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be performed. A computer program can execute entirely on a machine, partly on a machine, as a stand-alone software package partly on a machine and partly on a remote machine or entirely on a remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), a middleware component (e.g., an application server), or a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include Local Area Networks (LANs), Wide Area Networks (WANs), blockchain networks, and the Internet.
The computing system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server (also called a cloud computing server or cloud host), a host product in a cloud computing service system that overcomes the drawbacks of high management difficulty and weak service scalability found in traditional physical hosts and VPS (Virtual Private Server) services.
It should be understood that the various forms of flow shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present invention may be executed in parallel, sequentially, or in different orders; the order is not limited herein as long as the desired result of the technical solution of the present invention can be achieved.
The above-described embodiments should not be construed as limiting the scope of the invention. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. An in-vehicle scene detection method is characterized by comprising the following steps:
acquiring an image data set and a perception data set generated by a vision perception module and a radar perception module when detecting an in-vehicle scene;
identifying an object to be alarmed according to the image data set and the perception data set;
and generating object left-behind prompt information according to an alarm object list and the object to be alarmed.
2. The method of claim 1, wherein acquiring the image data set and the perception data set generated by the vision perception module and the radar perception module when detecting the in-vehicle scene comprises:
monitoring trigger information for the in-vehicle scene detection, wherein the trigger information comprises at least one of the following: a vehicle locking instruction, a vehicle door opening instruction, and a vehicle power-off instruction;
and, in the case that the vehicle speed is determined to be 0, starting the vision perception module to collect the image data set of the in-vehicle scene and the radar perception module to collect the perception data set of the in-vehicle scene.
3. The method of claim 1, wherein identifying the object to be alarmed according to the image data set and the perception data set comprises:
identifying at least one preconfigured object in the image data set to form a first item list;
identifying at least one of the preconfigured objects in the perception data set to form a second item list;
and merging the first item list and the second item list into a third item list, and taking the preconfigured object in the third item list as the object to be alarmed.
4. The method of claim 3, wherein the third item list further comprises a confidence level for each of the preconfigured objects, the confidence level being the higher of the object's confidence levels in the first item list and the second item list.
5. The method according to claim 1, wherein generating the object left-behind prompt information according to the alarm object list and the object to be alarmed comprises:
searching the alarm object list for the emergency degree matching the object to be alarmed;
generating, according to the emergency degree, object left-behind prompt information corresponding to the object to be alarmed, and raising an alarm;
wherein the emergency degree at least comprises a vehicle-end alarm and a cloud alarm.
6. The method according to claim 5, wherein generating, according to the emergency degree, the object left-behind prompt information corresponding to the object to be alarmed and raising an alarm comprises:
in the case that the emergency degree is the vehicle-end alarm, generating the object left-behind prompt information and sending it to a vehicle alarm module to trigger the vehicle alarm module to raise an alarm;
and, in the case that the emergency degree is the cloud alarm, generating the object left-behind prompt information and uploading it to a cloud background, wherein the cloud background pushes the object left-behind information to the user.
7. The method of claim 1, further comprising: acquiring vehicle control information corresponding to the object left-behind prompt information, and executing a processing operation according to the vehicle control information, wherein the processing operation comprises at least one of the following: unlocking the vehicle, remotely starting the engine, and starting the air conditioner.
8. An in-vehicle scene detection device, comprising:
the data acquisition module is used for acquiring an image data set and a perception data set generated by the vision perception module and the radar perception module when detecting an in-vehicle scene;
the object detection module is used for identifying an object to be alarmed according to the image data set and the perception data set;
and the left-behind alarm module is used for generating object left-behind prompt information according to an alarm object list and the object to be alarmed.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor, the computer program, when executed by the at least one processor, enabling the at least one processor to perform the in-vehicle scene detection method of any one of claims 1-7.
10. A computer-readable storage medium storing computer instructions which, when executed, cause a processor to perform the in-vehicle scene detection method of any one of claims 1-7.
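
For illustration, a minimal Python sketch of the trigger-gated data acquisition described in claims 1 and 2 follows. The trigger names and the module interfaces (collect()) are assumptions made for this sketch, not the patent's actual implementation:

    # Hypothetical trigger gate: detection runs only for a known trigger
    # and only when the vehicle is stationary (speed == 0), per claim 2.
    TRIGGERS = {"vehicle_lock", "door_open", "power_off"}

    def acquire_data_sets(trigger, vehicle_speed, vision_module, radar_module):
        if trigger not in TRIGGERS or vehicle_speed != 0:
            return None  # no detection run
        image_data_set = vision_module.collect()       # cabin camera frames
        perception_data_set = radar_module.collect()   # in-cabin radar returns
        return image_data_set, perception_data_set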
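
The merging step of claims 3 and 4 amounts to a per-object maximum over the two sensors' confidence scores. A sketch under the assumption that each item list maps a preconfigured object name to a confidence in [0, 1]:

    def merge_item_lists(first_item_list, second_item_list):
        # Third item list: union of both lists, keeping the higher
        # confidence when an object appears in both (claim 4).
        third_item_list = dict(first_item_list)
        for obj, confidence in second_item_list.items():
            if confidence > third_item_list.get(obj, 0.0):
                third_item_list[obj] = confidence
        return third_item_list  # the objects to be alarmed

For example, merging {"phone": 0.9} from the camera with {"phone": 0.6, "child": 0.8} from the radar yields {"phone": 0.9, "child": 0.8}.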
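
Claims 5 and 6 route each object to be alarmed according to its emergency degree. In the sketch below, the contents of the alarm object list and the vehicle-alarm and cloud-background interfaces are illustrative assumptions:

    # Hypothetical alarm object list mapping objects to emergency degrees.
    ALARM_OBJECT_LIST = {
        "child": "cloud_alarm",        # urgent: notify the user remotely
        "pet": "cloud_alarm",
        "phone": "vehicle_end_alarm",  # local reminder at the vehicle end
    }

    def dispatch_alarm(obj, vehicle_alarm_module, cloud_background):
        emergency_degree = ALARM_OBJECT_LIST.get(obj)
        prompt = f"Object left behind in vehicle: {obj}"
        if emergency_degree == "vehicle_end_alarm":
            vehicle_alarm_module.alarm(prompt)      # claim 6, vehicle-end path
        elif emergency_degree == "cloud_alarm":
            cloud_background.push_to_user(prompt)   # claim 6, cloud path

Per claim 7, the cloud path could additionally accept vehicle control information back from the user (unlocking, remote engine start, air conditioning), but that return channel is not sketched here.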
CN202210669286.7A 2022-06-14 2022-06-14 In-vehicle scene detection method and device, electronic equipment and storage medium Pending CN114926971A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210669286.7A CN114926971A (en) 2022-06-14 2022-06-14 In-vehicle scene detection method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114926971A 2022-08-19

Family

ID=82814125

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210669286.7A Pending CN114926971A (en) 2022-06-14 2022-06-14 In-vehicle scene detection method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114926971A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114572141A (en) * 2020-12-01 2022-06-03 采埃孚汽车科技(上海)有限公司 Occupant monitoring system, occupant leaving reminding method and related equipment
CN113715766A (en) * 2021-08-17 2021-11-30 厦门星图安达科技有限公司 Method for detecting people in vehicle
CN114332941A (en) * 2021-12-31 2022-04-12 上海商汤临港智能科技有限公司 Alarm prompting method and device based on riding object detection and electronic equipment

Similar Documents

Publication Title
US20210274130A1 (en) Vehicle monitoring
US9955326B2 (en) Responding to in-vehicle environmental conditions
US20200342230A1 (en) Event notification system
EP3965082B1 (en) Vehicle monitoring system and vehicle monitoring method
US10997430B1 (en) Dangerous driver detection and response system
US10657383B1 (en) Computer vision to enable services
CN207427199U (en) Interior legacy detecting system
US12046118B2 (en) Intra-vehicle situational awareness featuring child presence
CN117056558A (en) Distributed video storage and search using edge computation
WO2021189641A1 (en) Left-behind subject detection
WO2019095887A1 (en) Method and system for realizing universal anti-forgotten sensing device for in-vehicle passengers
US20190272755A1 (en) Intelligent vehicle and method for using intelligent vehicle
CN111599140A (en) Vehicle rear-row living body monitoring system and method
CN108734056A (en) Vehicle environmental detection device and detection method
CN112319367A (en) Monitoring and protecting system and equipment for people staying in vehicle
US11845390B2 (en) Cabin monitoring system
CN114926971A (en) In-vehicle scene detection method and device, electronic equipment and storage medium
US20220335725A1 (en) Monitoring presence or absence of an object using local region matching
KR20170038232A (en) Apparatus and method for providing object reminder in vehicle
CN117022169A (en) Vehicle early warning method, device, equipment and storage medium
CN111753581A (en) Target detection method and device
US20230081909A1 (en) Training an object classifier with a known object in images of unknown objects
CN114809833B (en) Control method for opening vehicle door, vehicle door control device and vehicle door control system
CN111931734A (en) Method and device for identifying lost object, vehicle-mounted terminal and storage medium
CN116729212A (en) In-vehicle living matter monitoring method and device, storage medium and vehicle

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination