CN112896176A - Vehicle use environment perception method, system, computer device and storage medium


Info

Publication number: CN112896176A (application CN202110095111.5A; granted as CN112896176B)
Authority: CN (China)
Original language: Chinese (zh)
Inventor: 杨立峰
Original assignee: GAC NIO New Energy Automobile Technology Co Ltd
Current assignee: Hycan Automobile Technology Co Ltd
Prior art keywords: vehicle, state data, information, motion state, travel
Legal status: Granted; currently active. (The legal status and assignee listings are assumptions made by Google, not legal conclusions; Google has not performed a legal analysis.)

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00: Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit, e.g. by using mathematical models
    • B60W40/08: ... related to drivers or passengers
    • B60W40/10: ... related to vehicle motion
    • B60W2540/00: Input parameters relating to occupants
    • B60W2540/049: Number of occupants
    • B60W2554/00: Input parameters relating to objects
    • B60W2554/40: Dynamic objects, e.g. animals, windblown objects
    • B60W2554/404: Characteristics
    • B60W2554/4041: Position
    • B60W2556/00: Input parameters relating to data

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Mathematical Physics (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Navigation (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application relates to a vehicle use environment perception method, system, computer device and storage medium. The method comprises the following steps: acquiring vehicle state data through vehicle sensors mounted on a vehicle, the vehicle state data comprising motion state data, vehicle body state data and position information data of the vehicle; determining the motion state of the vehicle from the motion state data and the driver and passenger information in the vehicle from the vehicle body state data; when it is determined from the motion state data and the vehicle body state data that the travel state of the vehicle has changed, determining the current position of the vehicle from the position information data; determining the current travel of the vehicle from the motion state of the vehicle, the driver and passenger information in the vehicle, the current position of the vehicle and historical travel information stored in a database; and obtaining a context perception result of the vehicle use environment from the current travel of the vehicle. The method addresses the technical problem that conventional methods cope poorly with changes in the external environment.

Description

Vehicle use environment perception method, system, computer device and storage medium
Technical Field
The present application relates to the field of vehicle technologies, and in particular, to a method, a system, a computer device, and a storage medium for sensing a vehicle usage environment.
Background
With the development of society, vehicles have become the primary means of transport for activities such as commuting, shopping and travel. To provide personalized intelligent travel services to different drivers or passengers, an on-board system needs to be able to perceive the vehicle's use environment. However, because the vehicle may stop at any time for a passenger to board or alight, the use environment of the vehicle changes constantly, and current methods for sensing the vehicle use environment cope poorly with such external environment changes.
Disclosure of Invention
Based on this, it is necessary to provide a vehicle use environment sensing method, system, computer device and storage medium that solve the technical problem that existing vehicle use environment sensing methods cope poorly with external environment changes.
A vehicle use environment awareness method, the method comprising:
acquiring vehicle state data through a vehicle sensor mounted on a vehicle; the vehicle state data comprises motion state data, body state data and position information data of the vehicle;
determining the motion state of the vehicle and the information of drivers and passengers in the vehicle according to the motion state data and the vehicle body state data respectively;
when the travel state of the vehicle is determined to be changed according to the motion state data and the vehicle body state data, determining the current position of the vehicle according to the position information data;
determining the current travel of the vehicle according to the motion state of the vehicle, the information of drivers and passengers in the vehicle, the current position of the vehicle and historical travel information stored in a database;
and obtaining a context perception result of the vehicle using environment according to the current travel of the vehicle and historical travel information pre-stored in a database.
In one embodiment, the determining the driver and passenger information in the vehicle according to the vehicle body state data includes:
and obtaining the type, number and activities of the drivers and passengers in the vehicle, as the driver and passenger information, according to the vehicle body state data and a preset mapping relationship among vehicle body state data, driver and passenger activities and driver and passenger types.
In one embodiment, the determining the current travel of the vehicle according to the motion state of the vehicle, the driver and passenger information in the vehicle and the current position of the vehicle includes:
determining a current travel landmark of the vehicle according to the current position of the vehicle;
and determining the current travel of the vehicle according to the motion state of the vehicle, the information of drivers and passengers in the vehicle and the current travel landmark of the vehicle.
In one embodiment, the determining the current travel landmark of the vehicle according to the current position of the vehicle includes:
acquiring a plurality of sub-regions corresponding to a plurality of preset landmarks, and, if the current position is detected to be located in any one of the sub-regions, taking the landmark corresponding to the sub-region where the current position is located as the current travel landmark of the vehicle;
and, if the current position is detected not to be located in any of the preset landmark sub-regions, taking the landmark corresponding to the sub-region nearest to the current position as the current travel landmark of the vehicle.
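The landmark determination described in this embodiment can be sketched as follows. This is a minimal illustration assuming circular sub-regions around landmark centres; the function names, the dict layout and the radius representation are illustrative assumptions, not specified by the patent:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def current_travel_landmark(position, landmarks):
    """Return the landmark whose sub-region contains `position`;
    otherwise fall back to the nearest landmark, as in the embodiment.

    `position` is a (lat, lon) pair; `landmarks` is a list of dicts
    with keys "name", "lat", "lon", "radius_m".
    """
    best, best_d = None, float("inf")
    for lm in landmarks:
        d = haversine_m(position[0], position[1], lm["lat"], lm["lon"])
        if d <= lm["radius_m"]:      # inside this landmark's sub-region
            return lm["name"]
        if d < best_d:               # remember the nearest as fallback
            best, best_d = lm["name"], d
    return best
```

In practice the sub-regions would come from the landmark discovery step (figs. 7 and 8); here they are simply given.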
In one embodiment, the process of determining that the travel state of the vehicle has changed includes:
and when the change of the motion state of the vehicle is detected according to the motion state data or the change of the driver and the passenger in the vehicle is detected according to the vehicle body state data, determining that the travel state of the vehicle is changed.
In one embodiment, the method further comprises:
if it is detected from the motion state data that the vehicle has remained stopped for a set time and then changes from the stopped state to the moving state, determining that the vehicle is in the travel start state;
and if it is detected from the motion state data that the vehicle has changed from the moving state to the stopped state and remains stopped for the set time, determining that the vehicle is in the travel end state.
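The travel start/end logic of this embodiment amounts to a debounced state machine over a stream of motion samples. A minimal sketch follows; the class name and the default hold time are assumptions (the patent leaves the "set time" unspecified):

```python
from enum import Enum

class TripState(Enum):
    NONE = 0
    STARTED = 1
    ENDED = 2

class TripStateDetector:
    """Consumes (timestamp_s, moving) samples. A stop held for at least
    `hold_s` followed by motion marks a travel start; motion followed by
    a stop held for `hold_s` marks a travel end (emitted once)."""

    def __init__(self, hold_s=300.0):
        self.hold_s = hold_s
        self.moving = None   # last observed motion flag
        self.since = None    # time the current flag was first seen
        self.ended = True    # no active trip yet

    def update(self, t, moving):
        if self.moving is None:               # first sample: just record
            self.moving, self.since = moving, t
            return TripState.NONE
        if moving != self.moving:             # motion flag flipped
            held = t - self.since
            was_moving = self.moving
            self.moving, self.since = moving, t
            if moving and not was_moving and held >= self.hold_s:
                self.ended = False
                return TripState.STARTED      # long stop, then motion
            return TripState.NONE
        if not moving and not self.ended and t - self.since >= self.hold_s:
            self.ended = True
            return TripState.ENDED            # stop persisted long enough
        return TripState.NONE
```

Feeding samples from the Vehicle State Module into `update` yields at most one STARTED and one ENDED event per trip.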
In one embodiment, the determining the current travel of the vehicle according to the motion state of the vehicle, the driver and passenger information in the vehicle and the current position of the vehicle includes:
acquiring time information, and determining the current travel of the vehicle according to the time information, the motion state of the vehicle, the information of drivers and passengers in the vehicle and the current position of the vehicle.
A vehicle use environment awareness system, the system comprising:
the data acquisition module is used for acquiring vehicle state data through a vehicle sensor arranged on a vehicle; the vehicle state data comprises motion state data, body state data and position information data of the vehicle;
the motion state sensing module is used for determining the motion state of the vehicle according to the motion state data;
the passenger sensing module is used for determining driver and passenger information in the vehicle according to the vehicle body state data;
the landmark sensing module is used for acquiring the current position of the vehicle according to the position information data when the travel state of the vehicle is judged to be changed according to the motion state data and the vehicle body state data;
the travel module is used for determining the current travel of the vehicle according to the motion state of the vehicle, the information of drivers and passengers in the vehicle and the current position of the vehicle;
and the context perception module is used for obtaining a context perception result of the vehicle using environment according to the current travel of the vehicle and historical travel information stored in a database.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring vehicle state data through a vehicle sensor mounted on a vehicle; the vehicle state data comprises motion state data, body state data and position information data of the vehicle;
determining the motion state of the vehicle and the information of drivers and passengers in the vehicle according to the motion state data and the vehicle body state data respectively;
when the travel state of the vehicle is determined to be changed according to the motion state data and the vehicle body state data, determining the current position of the vehicle according to the position information data;
determining the current travel of the vehicle according to the motion state of the vehicle, the information of drivers and passengers in the vehicle, the current position of the vehicle and historical travel information stored in a database;
and obtaining a context perception result of the vehicle using environment according to the current travel of the vehicle and historical travel information pre-stored in a database.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring vehicle state data through a vehicle sensor mounted on a vehicle; the vehicle state data comprises motion state data, body state data and position information data of the vehicle;
determining the motion state of the vehicle and the information of drivers and passengers in the vehicle according to the motion state data and the vehicle body state data respectively;
when the travel state of the vehicle is determined to be changed according to the motion state data and the vehicle body state data, determining the current position of the vehicle according to the position information data;
determining the current travel of the vehicle according to the motion state of the vehicle, the information of drivers and passengers in the vehicle, the current position of the vehicle and historical travel information stored in a database;
and obtaining a context perception result of the vehicle using environment according to the current travel of the vehicle and historical travel information pre-stored in a database.
With the vehicle use environment sensing method, system, computer device and storage medium described above, vehicle state data is acquired through vehicle sensors mounted on the vehicle, the vehicle state data comprising motion state data, vehicle body state data and position information data of the vehicle; the motion state of the vehicle and the driver and passenger information in the vehicle are determined from the motion state data and the vehicle body state data respectively; when it is determined from the motion state data and the vehicle body state data that the travel state of the vehicle has changed, the current position of the vehicle is determined from the position information data; the current travel of the vehicle is determined from the motion state of the vehicle, the driver and passenger information, the current position of the vehicle and historical travel information stored in a database; and a context perception result of the vehicle use environment is obtained from the current travel of the vehicle. By fusing data from different sensors to sense the vehicle's use environment, the method can quickly detect any change in the vehicle's context and improves the ability to perceive the use environment, thereby solving the technical problem that conventional methods cope poorly with changes in the external environment.
Drawings
FIG. 1 is a block diagram of an architecture of a vehicle context awareness system in one embodiment;
FIG. 2 is a schematic diagram of a vehicle implementing a vehicle context awareness method in one embodiment;
FIG. 3 is a schematic flow chart of a vehicle usage context awareness method in one embodiment;
FIG. 4 is a schematic diagram of the internal architecture of a passenger module in one embodiment;
FIG. 5 is a schematic illustration of a daily routine in one embodiment;
FIG. 6 is a schematic diagram of a vehicle occupant classification method according to one embodiment;
FIG. 7 is a schematic illustration of a landmark made up of three centroids in one embodiment;
FIGS. 8(a) and 8(b) are schematic diagrams of a landmark discovery algorithm in one embodiment;
FIGS. 8(c) and 8(d) are further schematic diagrams of a landmark finding algorithm in one embodiment;
FIG. 9 is a schematic illustration of daily routine and vehicle perception levels in one embodiment;
FIG. 10 is a diagram of an in-vehicle context information model in one embodiment;
FIG. 11 is a flowchart illustrating a method for sensing a vehicle usage environment according to another embodiment;
FIG. 12 is a block diagram of an exemplary embodiment of a vehicle context-aware system;
FIG. 13 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Referring to fig. 1, which shows the architecture of a vehicle context awareness system. The system may be implemented as part of a vehicle device, for example installed in a vehicle terminal as a vehicle application. The system can be divided into four parts: system input, context sensing, context processing and system output. The system input consists of vehicle position information data, engine state data, vehicle speed and vehicle acceleration, which can be obtained respectively from the vehicle GPS (Global Positioning System) unit, the on-board OBD-II (On-Board Diagnostics II) interface, the odometer and the Inertial Measurement Unit (IMU); the system output is vehicle context awareness information. The context sensing part mainly comprises five sensing modules: a Vehicle State Module (VSM), a passenger module (OM), a Landmark Module (LM), an Intelligent Selection Module (ISM) and a journey module (IM). Each sensing module senses specific characteristics of the vehicle's dynamic operating environment by processing data from various sensors or from other modules, and the modules can be associated through a CEP (Complex Event Processing) framework. The context processing part comprises an in-vehicle context handling module (IVCH), which processes the context information sensed by each sensing module of the context sensing part, so that the system entity can perceive the in-vehicle context. When any sensing module detects a context change, a vehicle context change event is created to represent the new dynamic context of the vehicle, so the system continuously updates the sensed context information.
In a context-aware system, the on-board sensor data input to the system can be regarded as low-level events. Each low-level event may represent an activity of the vehicle or of the drivers and passengers, and there are complex mapping relationships between low-level events: for example, when the vehicle engine stops, a door may be opened, a passenger gets on or off, and a seat belt is correspondingly fastened or released. Individual low-level events are therefore monitored through complex event processing (CEP) to sense the activities of the vehicle or occupants, and a key point of CEP is the continuous identification of complex event patterns in the event stream flowing through the system. The activities of the vehicle or occupants represent different characteristics of the vehicle's dynamic context. As indicated by the flow segments between the sensing modules in fig. 1, each sensing module processes information from other sensing modules as events, with the aim of perceiving more complex contextual information; in turn, the new information it produces may be processed by other sensing modules as events to obtain richer information. In this way the context sensing modules operate cooperatively and form a CEP-based multi-level hierarchy to detect the dynamic context of the vehicle.
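As a rough illustration of this CEP layering, the following sketch composes low-level events into one composite event. The event names, pattern and time window are illustrative assumptions; the patent does not define a concrete event schema:

```python
class ComplexEventDetector:
    """Minimal CEP-style sketch: watch a stream of low-level events and
    emit a composite 'passenger_boarded' event when door-open, seat-loaded
    and belt-fastened are observed in order within a time window."""

    PATTERN = ("door_open", "seat_loaded", "belt_fastened")

    def __init__(self, window_s=60.0):
        self.window_s = window_s
        self.matched = []   # (name, t) of pattern steps matched so far

    def feed(self, name, t):
        # Discard a partial match whose window has expired.
        if self.matched and t - self.matched[0][1] > self.window_s:
            self.matched = []
        # Advance the pattern if this event is the next expected step.
        if name == self.PATTERN[len(self.matched)]:
            self.matched.append((name, t))
            if len(self.matched) == len(self.PATTERN):
                self.matched = []
                return "passenger_boarded"   # composite event
        return None
```

A real CEP engine would track many such patterns concurrently; the composite events emitted here would themselves feed higher-level modules, mirroring the multi-level hierarchy described above.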
Referring to fig. 2, to implement the vehicle use environment sensing method provided by the present application, corresponding sensors are arranged at various positions on the vehicle body: the doors (driver door, co-driver door, left rear door and right rear door), the boot door, the windows (driver window, co-driver window, left rear window and right rear window), the seats (driver seat, co-driver seat, left rear seat, right rear seat and centre rear seat) and the seat belts of those seats. These sensors acquire the state of the doors and windows, the load on each seat and whether each seat belt is fastened. A vehicle-mounted application installed in the vehicle fuses and analyses the data from the different vehicle sensors together with data from other external data sources, senses the usage situation inside the vehicle, and infers the passengers, destination, travel purpose and so on of the vehicle's current trip; these inference results serve as the context of the vehicle.
In one embodiment, as shown in fig. 3, a method for sensing a vehicle using environment is provided, which is described by taking an example of an in-vehicle terminal applied to a vehicle, and includes the following steps:
step S302, vehicle state data is obtained through vehicle sensors installed on a vehicle; the vehicle state data includes motion state data, body state data, and position information data of the vehicle.
The motion state data of the vehicle may include engine state data of the vehicle, speed data of the vehicle, and acceleration data of the vehicle, among others.
The vehicle body state data may include vehicle door state data, vehicle window state data, seat load data, seatbelt state data, and trunk door state data.
In a specific implementation, the data of the sensors mounted on the doors and windows can be read through the On-Board Diagnostics II (OBD-II) interface or a sensor-reading interface exposed by the vehicle; speed and acceleration data are calculated from the data provided by the vehicle's odometer and Inertial Measurement Unit (IMU), and position information data is obtained from the on-board GPS unit. The vehicle state data so read is then sent to the vehicle-mounted terminal installed in the vehicle, specifically to the vehicle-mounted application installed on that terminal; after acquiring the motion state data, vehicle body state data and position information data of the vehicle, the application analyses this vehicle state data further.
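The three groups of vehicle state data described in step S302 might be carried in a container like the following. This is only a sketch of the data grouping; the class and field names are assumptions, and real values would come from the OBD-II, odometer, IMU and GPS readers mentioned above:

```python
from dataclasses import dataclass

@dataclass
class MotionStateData:
    engine_on: bool
    speed_mps: float       # from the odometer
    accel_mps2: float      # from the IMU

@dataclass
class BodyStateData:
    doors_open: dict       # e.g. {"driver": False, "co_driver": True, ...}
    windows_open: dict
    seat_loaded: dict      # seat load sensors: True if occupied
    belts_fastened: dict
    boot_open: bool

@dataclass
class PositionData:
    lat: float
    lon: float
    fix_time_s: float      # timestamp of the GPS fix

@dataclass
class VehicleStateData:
    """One snapshot of the vehicle state, as acquired in step S302."""
    motion: MotionStateData
    body: BodyStateData
    position: PositionData
```

Each sensing module would then consume the slice of this snapshot it needs (the VSM the `motion` part, the OM the `body` part, the LM the `position` part).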
And step S304, determining the motion state of the vehicle and the information of the driver and the passenger in the vehicle according to the motion state data and the vehicle body state data respectively.
The motion state of the vehicle may include a stop state and a moving state, among others.
The driver and passenger information may include the number of drivers and passengers, the types of drivers and passengers, and the activities of drivers and passengers.
The driver and passenger types may include, for example, children, independent persons, persons needing help and disabled persons.
The driver and passenger activities may include boarding, alighting and other activities.
In a specific implementation, the motion state of the vehicle may be determined by the Vehicle State Module (VSM) in fig. 1. Specifically, the motion state is sensed from the vehicle's current speed and acceleration, i.e. whether the vehicle is stopped or moving, so determining the motion state can be regarded as a classification task. Because the data returned by vehicle sensors is usually noisy and inaccurate, this problem can be handled with fuzzy logic: the sensing of the motion state can be implemented by a Fuzzy Rule-Based Classification System (FRBCS) to improve the reliability of the detection result. In addition, if a change in the motion state is sensed from the motion state data, a new vehicle motion state event can be created and processed by the other sensing modules.
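A toy illustration of fuzzy motion-state classification in the spirit of an FRBCS follows. The membership breakpoints and the two rules are invented for illustration; the patent gives no concrete rule base:

```python
def classify_motion_state(speed_mps, accel_mps2):
    """Toy fuzzy rule-based classifier for 'stopped' vs 'moving'.

    Fuzzifies speed and acceleration into simple linear membership
    degrees, fires two rules, and picks the stronger conclusion. The
    breakpoints (0.5 m/s, 2 m/s, 0.3 m/s^2) are illustrative only.
    """
    # Fuzzify: degree to which speed is "near zero" / "significant".
    speed_zero = max(0.0, 1.0 - speed_mps / 0.5)   # full at 0, gone at 0.5 m/s
    speed_high = min(1.0, speed_mps / 2.0)         # saturates at 2 m/s
    accel_zero = max(0.0, 1.0 - abs(accel_mps2) / 0.3)

    # Rule 1: speed near zero AND accel near zero -> stopped.
    stopped = min(speed_zero, accel_zero)
    # Rule 2: speed significant -> moving.
    moving = speed_high

    return "stopped" if stopped >= moving else "moving"
```

The point of the fuzzy formulation is robustness to sensor noise: a small non-zero speed reading with negligible acceleration still classifies as "stopped" rather than flickering between states.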
The driver and passenger information in the vehicle can be determined by the passenger module (OM) in fig. 1: the number, types and activities of the drivers and passengers are determined, as the driver and passenger information, from the vehicle body state data and the preset mapping relationship between vehicle body state data and driver and passenger activities. More specifically, to facilitate detection, the vehicle can be divided among a driver-seat event processing agent, a co-driver-seat event processing agent and a rear-row event processing agent, which respectively detect the occupancy of the driver seat, the co-driver seat and the rear seats; from this, activities such as entering or leaving the vehicle are detected, and the number of drivers and passengers is counted from the number of occupied seats.
Referring to fig. 4, which shows the internal architecture of the passenger module (OM): driver-seat occupancy may be determined from the vehicle engine state, the driver door state and the driver seat belt state; co-driver-seat occupancy from the co-driver door, seat belt and window states; right rear seat occupancy from the right rear window, door and seat belt states; left rear seat occupancy from the left rear window, door and seat belt states; and centre rear seat occupancy from the right/left rear window states, the right/left rear door states and the centre rear seat belt state. Further, once seat occupancy is determined, the boot door state and the time a passenger takes to board or alight can be combined to judge whether the passenger is an independent person or a person needing help (for example, carrying luggage or using a wheelchair), and the passenger type is determined accordingly. For example, if a passenger opens the boot door before boarding, the passenger can be classified as a person needing help; likewise, if the boarding or alighting time exceeds the corresponding threshold, the passenger is judged to be a person needing help.
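The per-seat signal groupings and the passenger-type heuristic described above can be sketched as follows. The signal names, the exact boolean combinations and the 30 s default threshold are illustrative assumptions; the patent describes the groupings only qualitatively:

```python
def infer_occupancy(body):
    """Infer per-seat occupancy from body state signals, following the
    per-seat groupings above. `body` maps signal names to booleans;
    the key names are illustrative."""
    return {
        "driver": body["engine_on"] or body["driver_belt"],
        "co_driver": body["co_driver_belt"] or body["co_driver_window_used"],
        "rear_left": body["rear_left_belt"],
        "rear_right": body["rear_right_belt"],
        "rear_centre": body["rear_centre_belt"],
    }

def classify_passenger(boarding_s, opened_boot_first, threshold_s=30.0):
    """Heuristic passenger typing: opening the boot before boarding, or
    slow boarding, suggests a person needing help (luggage, wheelchair)."""
    if opened_boot_first or boarding_s > threshold_s:
        return "needs-help"
    return "independent"
```

The occupant count of step S304 then falls out as the number of occupied seats, e.g. `sum(infer_occupancy(body).values())`.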
And step S306, when the travel state of the vehicle is determined to be changed according to the motion state data and the vehicle body state data, determining the current position of the vehicle according to the position information data.
The travel state of the vehicle may be divided into a travel start state and a travel end state.
In a specific implementation, the travel state of the vehicle can be detected by the sub-journey module (ISM) shown in fig. 1, i.e. the moments at which the vehicle starts a new trip or ends a trip, and the current position and current travel landmark of the vehicle are determined by the Landmark Module (LM). If, when the travel state changes, position information data cannot be obtained from the GPS unit because of signal interruption or for other reasons, the current position can be determined from the most recently available position information data, and the current travel landmark determined from that. A trip represents a user's daily driving journey from one place to another in order to achieve a goal (e.g. commuting, school runs, shopping); a trip can be regarded as a route from a given origin to a given destination. The driver may travel with other passengers from the origin to the destination, so each trip may include intermediate stops where passengers board or alight, and each trip may involve three kinds of landmarks: a start landmark, a destination landmark and intermediate landmarks. A trip can therefore be defined as a set of interrelated sub-trips, where each sub-trip is a route between two landmarks along which the set of occupants does not change, and the destination landmark of one sub-trip is the start landmark of the next.
Referring to fig. 5, a daily trip diagram is shown, including landmark and passenger conditions. The home, the shopping mall, and the company, shown in shaded ovals, represent origin/destination landmarks, while the family member's workplace and the school, shown in white ovals, represent intermediate landmarks. A new trip is considered to start when the driver restarts the vehicle engine after the vehicle has been stopped for a period of time, and the trip is considered to end when the driver shuts off the vehicle engine and does not restart the vehicle for a period of time. The trips in fig. 5 include the driver's trip from home to the company in the morning (trip 1), the trip from the company to the shopping mall after work (trip 2), and the trip from the shopping mall back home (trip 3). Since a person gets on or off the vehicle in the middle of trips 1 and 3, each of these trips includes one intermediate landmark: trip 1 comprises sub-trip 1, in which the driver takes a family member from home to school, and sub-trip 2, in which the driver continues from the school to the company; trip 3 comprises sub-trip 1, in which the driver picks up the family member on the way, and sub-trip 2, in which the driver returns home together with the family member.
Considering that each sub-trip is defined by two landmarks and that the passengers between those two landmarks do not change, sensing the context information of the vehicle by detecting the sequence of sub-trips within each trip makes it possible to determine not only the origin and destination of the vehicle's travel but also the driver and passenger information.
And step S308, determining the current travel of the vehicle according to the motion state of the vehicle, the information of the driver and the passengers in the vehicle and the current position of the vehicle.
Where a trip may represent route information from one location to another.
In a specific implementation, the current trip landmark may be determined based on the current position of the vehicle by the Landmark Module (LM) shown in fig. 1, and context fusion and analysis processing is then performed on the motion state of the vehicle, the driver and passenger information in the vehicle, and the current trip landmark of the vehicle by the trip module (IM) to obtain the current trip of the vehicle. Here, a landmark is represented by an area rather than a point: for example, where the destination is a parking lot, the driver cannot always park the vehicle in precisely the same parking space but may park in different spaces of the lot, so the landmark representing the destination should be the entire parking lot.
Further, in an embodiment, the step S308 further includes: and acquiring time information, and determining the current travel of the vehicle according to the time information, the motion state of the vehicle, the information of drivers and passengers in the vehicle and the current position of the vehicle.
In this embodiment, the current travel of the vehicle is determined by adding the time information, and the accuracy of determining the current travel can be improved.
Step S310, obtaining a context perception result of the vehicle using environment according to the current travel of the vehicle and historical travel information pre-stored in a database.
In a specific implementation, after the current trip of the vehicle is determined, context information related to the destination of the current trip, for example whether people are picked up or dropped off at an intermediate landmark of the trip, may be obtained from the historical trip information corresponding to the current trip stored in the database. Further, in addition to sensing context information along the vehicle's route from one landmark to another based on the current trip and the historical trip information pre-stored in the database, the manner in which the vehicle reaches the destination may also be sensed; for example, if a road between two landmarks causes the driver to perform many acceleration and stopping operations, this may indicate that traffic on the road is dense or that there are many traffic lights along it. Accordingly, the Fuzzy Rule Based Classification System (FRBCS) used in the Vehicle State Module (VSM) can detect motions such as speed, acceleration, stopping, and movement to recognize these road states.
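The road-state idea can be sketched as a minimal fuzzy-rule classifier; the membership functions, thresholds, and single rule below are illustrative assumptions, not the FRBCS actually used in the Vehicle State Module.

```python
# Minimal fuzzy-rule sketch: many stop/acceleration events per kilometre
# between two landmarks suggest dense traffic or many traffic lights.
# All thresholds are illustrative assumptions.
def membership_high(x: float, low: float, high: float) -> float:
    """Piecewise-linear 'high' membership: 0 below `low`, 1 above `high`."""
    if x <= low:
        return 0.0
    if x >= high:
        return 1.0
    return (x - low) / (high - low)

def road_state(stops_per_km: float, accel_events_per_km: float) -> str:
    # Rule: IF stops is high AND accelerations is high
    #       THEN the road is congested or heavily signalised.
    degree = min(membership_high(stops_per_km, 0.5, 3.0),
                 membership_high(accel_events_per_km, 1.0, 5.0))
    return "congested-or-signalised" if degree >= 0.5 else "free-flowing"
```

A production FRBCS would combine several such rules and defuzzify their aggregate output instead of thresholding a single rule.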
The database may be provided in the trip module (IM) of fig. 1. The trip module continuously reads the trip-end event stream and the event stream of each sub-trip, where the events of each sub-trip include all trip events related to that trip. Each time a new trip-end event is received, a new trip event representing the trip that has just ended is created; this type of event includes an identifier and the various parts of the trip. When a new trip event is created, the trip module checks whether it already exists in the database by comparing it with the stored trip events; if no trip identical to the newly created trip event is found in the database, the new trip event is added to the database as a new instance, so that the database comes to contain all the different types of routes. Two trips are judged to be the same when their origins are the same, their destinations are the same, and their driver information and passenger information are the same.
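The trip-deduplication rule just described can be sketched as follows; the `Trip` structure and the in-memory list standing in for the database are assumptions for illustration.

```python
# Sketch of the trip-deduplication check: two trips are the same route
# instance when origin, destination, driver, and passengers all match.
from dataclasses import dataclass

@dataclass(frozen=True)
class Trip:
    origin: str
    destination: str
    driver: str
    passengers: frozenset

def add_trip(database: list, new_trip: Trip) -> bool:
    """Append the trip only if no equivalent trip is stored; return True if added."""
    for stored in database:
        if (stored.origin == new_trip.origin
                and stored.destination == new_trip.destination
                and stored.driver == new_trip.driver
                and stored.passengers == new_trip.passengers):
            return False  # same route already known
    database.append(new_trip)
    return True
```

The frozen dataclass keeps trips hashable and comparable field by field, which is all the equality test above relies on.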
Further, after the context-aware result of the vehicle usage environment is obtained, it can be combined with other on-board systems (e.g., a traffic information system) to inform the driver in advance of traffic problems on the route to the next destination. Multiple functions can also be provided according to the passenger types, such as adaptive music playing and safety guarantees for the passengers in the vehicle. For example, if the sensing result indicates that a passenger in the vehicle is a child, children's songs can be played automatically by the in-vehicle music player, and if the sensing result indicates that the window beside the child is fully open, an early warning can be issued or the window closed automatically, avoiding a potential danger.
In the vehicle using environment sensing method, vehicle state data is acquired through a vehicle sensor arranged on a vehicle; the vehicle state data comprises motion state data, vehicle body state data and position information data of the vehicle; determining the motion state of the vehicle and the information of drivers and passengers in the vehicle according to the motion state data and the vehicle body state data respectively; when the travel state of the vehicle is judged to be changed according to the motion state data and the vehicle body state data, determining the current position of the vehicle according to the position information data; determining the current travel of the vehicle according to the motion state of the vehicle, the information of drivers and passengers in the vehicle, the current position of the vehicle and historical travel information stored in a database; and obtaining a context perception result of the vehicle using environment according to the current travel of the vehicle. The method utilizes different sensors, fuses data of the different sensors, senses the using environment of the vehicle, can quickly detect any change of the context of the vehicle, and improves the sensing capability of the using environment of the vehicle, thereby solving the technical problem of poor capability of coping with the change of the external environment in the traditional method.
In one embodiment, the determining the driver and passenger information in the vehicle in step S304 includes: and obtaining the types, the number and the activities of the drivers and the passengers in the vehicle as the information of the drivers and the passengers in the vehicle according to the vehicle body state data and the preset mapping relationship among the vehicle body state data, the activities of the drivers and the passengers and the types of the drivers and the passengers.
In a specific implementation, as shown in table 1 below, the mapping relationships between the vehicle body state data, the occupant activities, and the occupant types can be represented for the primary driver module, the front passenger module, the rear-row module, and so on. For example, if the vehicle body state data is primary driver door opened → primary driver seat belt fastened → primary driver door closed, this indicates that the driver has entered the vehicle and the primary driver seat is occupied; since the driver is generally a healthy independent person, the default driver type is an independent person. If the vehicle body state data is right rear door opened → right rear seat belt fastened → boarding time greater than the boarding time threshold → right rear door closed, this indicates that a child has entered the vehicle. It should be noted that the table is only an example of the present embodiment; the relationship between specific vehicle body state data and the associated events may be set as required, which is not limited in the present application.
(Table 1 maps vehicle body state data sequences to the corresponding occupant activities and occupant types; the original is provided as an image and is not reproduced here.)

TABLE 1
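In the spirit of Table 1, an event-pattern matcher can map an ordered body-state event sequence to an occupant type; the event names, the threshold value, and the two patterns below are assumptions for the sketch, not the patent's full mapping table.

```python
# Illustrative event-pattern matcher: an ordered body-state event sequence
# plus the boarding time is mapped to an occupant type (cf. Table 1).
BOARDING_TIME_THRESHOLD = 20.0  # seconds; illustrative value

def classify_occupant(events: list, boarding_time: float) -> str:
    # Pattern: driver door open -> driver belt fastened -> driver door closed
    if events == ["driver_door_open", "driver_belt_fastened", "driver_door_closed"]:
        return "independent-person"  # default type for the driver seat
    # Pattern: rear-right door cycled with belt fastened and slow boarding
    if (events == ["rear_right_door_open", "rear_right_belt_fastened",
                   "rear_right_door_closed"]
            and boarding_time > BOARDING_TIME_THRESHOLD):
        return "child"  # slow boarding on a rear seat suggests a dependent child
    return "unknown"
```

A CEP engine would express these as declarative patterns over the live event stream; the list comparison here is only a stand-in for that matching step.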
By fusing different body state data to form event patterns and inferring the passenger conditions in the vehicle accordingly, the pattern-based method can discover different types of passengers, so that the system can accurately sense the characteristics of the passengers in the vehicle. Passenger classification can be realized by the vehicle passenger classification method shown in fig. 6: when a seat is occupied, the occupant is classified as goods or a person; persons are classified as independent persons and dependent persons (i.e., persons needing help); and dependent persons are classified as adults who cannot be independent and children. Elements further to the right in fig. 6 indicate greater vulnerability; this prior knowledge can be used to suggest priorities, so that when a traffic accident occurs, assistance can be given in priority order. In addition, in the tree structure shown in fig. 6, the similarity between two classifications can be evaluated by a distance. If the detail level of an element in fig. 6 is denoted detail-degree, and the distance is referred to as the Semantic-Hierarchical Similarity (SHS) for the current scope, the similarity between elements A and B can be defined by the following relation:
SHS(A, B) = 2 × detail-degree(CCA(A, B)) / (detail-degree(A) + detail-degree(B))
Here, CCA (Closest Common Ancestor) denotes the classification element that is the nearest common ancestor of the two elements A and B being compared. For example, the CCA of goods and person is the occupancy situation, the CCA of independent and dependent persons is person, and the CCA of children and adults who cannot be independent is dependent person. The more similar two elements are, the higher their SHS value; therefore, by calculating the SHS between an inferred value and the actual value, the accuracy of the system can be effectively evaluated.
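A small sketch of SHS on the classification tree of fig. 6, assuming detail-degree is the depth of an element in the tree and using a Wu-Palmer-style ratio of detail-degrees (an assumption for the sketch; the node names below paraphrase the figure).

```python
# Semantic-Hierarchical Similarity on the occupant classification tree.
# PARENT encodes the tree of fig. 6 as child -> parent; "occupant" is the root.
PARENT = {
    "goods": "occupant", "person": "occupant",
    "independent": "person", "dependent": "person",
    "adult": "dependent", "child": "dependent",
}

def depth(node: str) -> int:
    """detail-degree as tree depth: 0 at the root, +1 per level down."""
    d = 0
    while node in PARENT:
        node, d = PARENT[node], d + 1
    return d

def cca(a: str, b: str) -> str:
    """Closest Common Ancestor of a and b."""
    ancestors, x = set(), a
    while True:
        ancestors.add(x)
        if x not in PARENT:
            break
        x = PARENT[x]
    while b not in ancestors:
        b = PARENT[b]
    return b

def shs(a: str, b: str) -> float:
    return 2 * depth(cca(a, b)) / (depth(a) + depth(b))
```

With this form, identical elements score 1.0 and elements whose only common ancestor is the root score 0, matching the stated property that more similar elements have higher SHS.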
It will be appreciated that conventional methods of vehicle occupant detection, such as those based on intrusive sensors mounted on the vehicle seats, can detect different types of vehicle occupants but rely on sensors temporarily installed on the seats; optical methods using cameras mounted in the vehicle interior or in the road infrastructure can also achieve occupant detection. However, the devices required by these methods are complex to install and costly. The present application therefore proposes a CEP (Complex Event Processing) method that implements passenger detection by fusing data from multiple simple sensors of the vehicle, avoiding intrusiveness; by applying the method within a complete context-aware architecture, the mechanism is enriched, overcoming the accuracy and cost problems of previous methods while detecting the changes in, and the detailed range of, the passengers on all seats of the target vehicle. In addition, detecting vehicle passengers through an RFID (Radio Frequency Identification) reader and wearable cards, through image recognition by a camera, or through a life-detection radar may be used as alternative passenger detection methods.
In this embodiment, the type, number, and activities of the drivers and passengers in the vehicle are obtained from the vehicle body state data and the preset mapping relationship among the vehicle body state data, the driver and passenger activities, and the driver and passenger types, and are used as the driver and passenger information in the vehicle, so that the current trip of the vehicle can be further determined according to this information. In addition, the pattern-based detection method provided by this embodiment can infer not only the number of passengers in the vehicle but also the different passenger types at a finer granularity, thereby expanding the application scenarios of driver and passenger sensing functions.
In an embodiment, the step S308 specifically includes: determining the current trip landmark of the vehicle according to the current position of the vehicle; and determining the current trip of the vehicle according to the motion state of the vehicle, the driver and passenger information in the vehicle, and the current trip landmark of the vehicle.
Further, in one embodiment, the step of determining a current travel landmark of the vehicle based on the current position of the vehicle includes: acquiring a plurality of sub-regions corresponding to a plurality of preset landmark regions, and if the current position is detected to be located in any sub-region, taking the landmark corresponding to the sub-region where the current position is located as the current travel landmark of the vehicle; and if the current position is detected not to be located in any preset landmark area, taking the landmark corresponding to the sub-area adjacent to the current position as the travel landmark of the vehicle.
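The sub-region lookup in the step above can be sketched as follows; representing each landmark's sub-regions as circles with a center and radius, and falling back to the nearest sub-region's landmark, are assumptions for illustration.

```python
# Sketch of landmark lookup by sub-region: a position inside a sub-region
# takes that sub-region's landmark; otherwise the landmark of the nearest
# sub-region is used as a fallback.
from math import dist  # Euclidean distance between two points (Python 3.8+)

def current_landmark(position, subregions):
    """subregions: list of (center, radius, landmark_name) tuples."""
    for center, radius, name in subregions:
        if dist(position, center) <= radius:
            return name
    # Not inside any preset landmark area: use the nearest sub-region's landmark.
    return min(subregions, key=lambda s: dist(position, s[0]))[2]
```

Real landmark areas would be irregular polygons or clusters rather than circles; the circle test only stands in for the point-in-area check.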
In a specific implementation, the landmarks in the inferred trips can be found by a new density-based clustering algorithm. Density-based clustering is a clustering method commonly used for place discovery on position data and is based on the local neighborhood density of points. There are two key parameters in this type of clustering: Eps (the radius of the circle) and MinPts (the minimum number of points within the circle for it to be considered a cluster). These two parameters are used to define a density-based neighborhood N of a point p, as shown below,
N(p) = { q ∈ S | dist(p, q) ≤ Eps }
where S is the set of available points and dist(p, q) is the straight-line distance from p to q. Furthermore, if a point o in S is contained in both N(p) and N(q), the density-based neighborhood N(p) is density-joinable with the density-based neighborhood N(q).
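The two definitions above translate directly into code; this is a literal sketch, assuming 2-D points and Euclidean distance.

```python
# Density neighborhood N(p) and the density-joinable test over a point set S.
from math import dist  # Euclidean distance between two points (Python 3.8+)

def neighborhood(p, points, eps):
    """N(p) = { q in S | dist(p, q) <= Eps }."""
    return [q for q in points if dist(p, q) <= eps]

def density_joinable(p, q, points, eps):
    """N(p) is density-joinable with N(q) when some point o lies in both."""
    np_, nq = neighborhood(p, points, eps), neighborhood(q, points, eps)
    return any(o in nq for o in np_)
```

Note that density-joinability is transitive in effect: chains of overlapping neighborhoods are what lets separate dense regions merge into one landmark.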
Based on this, a density-based clustering algorithm called the Density-and-Join algorithm (DJ-Cluster) has been proposed. This algorithm can not only detect dense clusters in a data set but also merge them when certain conditions are met; however, since it is an off-line algorithm, it requires at least one prior scan of the entire data set in order to run. The present embodiment describes a landmark-finding algorithm that improves the DJ-Cluster algorithm so that it can operate as an online algorithm, finding density clusters while position readings are being received.
The definition of the centroid set based on the above concept of density neighborhood is as follows:
C = { c ∈ S | |N(c)| ≥ MinPts }

centroid(q) = the first c ∈ C found such that q ∈ N(c)
Thus, each point q in N(c) (c ∈ C) has a centroid c, whereby each point in S can be associated with at most one centroid, even though it may lie in the neighborhood of more than one centroid; each point in S is therefore associated with one centroid or with none. Based on this centroid concept, the present embodiment defines a landmark as a set of centroids in which the neighborhood of each centroid is density-joinable with at least one neighborhood of another centroid within the same landmark. Fig. 7 is a schematic diagram of a landmark composed of three centroids, c1, c2, and c3, with the points located within two neighborhoods shown in gray. In the execution of the landmark-finding algorithm of this embodiment, the current position of the vehicle, determined from the GPS data read by the vehicle's GPS unit, is used as input, and the trip landmark corresponding to the current position is returned (if one can be found).
More specifically, the step of determining the current trip landmark from the current position of the vehicle includes: detecting whether the current position (denoted as an input point p) is close to any existing centroid and, if so, associating the current position with the first centroid found, i.e., taking the landmark of the first centroid found as the trip landmark corresponding to the current position. If the point p is located in two neighborhoods, the landmarks of the two neighborhoods can be merged to obtain a new landmark containing the centroids of both. Figs. 8(a) and 8(b) illustrate the start and result of this step: at the beginning (fig. 8(a)), the input point p lies within two existing neighborhoods with centroids c1 and c2, each located in a different landmark (Landmark_1 and Landmark_2); after merging the two landmarks, a new landmark (Landmark_1_2) containing both centroids is created.
If the input point p is not contained in any previous neighborhood, then p together with the points in its neighborhood is taken as a new centroid, and its new landmark is merged with any surrounding landmark at least one of whose neighborhoods is density-joinable with N(p). As shown in figs. 8(c) and 8(d): in fig. 8(c), given that the density of the input point p, i.e., N(p), is sufficiently large, p can be considered a centroid, and a new landmark can be created and associated with N(p); in addition, since N(p) and another landmark (Landmark_1) both contain the position q, N(p) is density-joinable with a neighborhood of that landmark, so the two landmarks can be merged and a new landmark (merged Landmark) created, as shown in fig. 8(d).
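The online behaviour described above can be sketched as a single per-reading procedure; the data structures, the fixed Eps/MinPts values, and the reduction of landmark merging to a union of centroid sets are simplifying assumptions, not the patent's full algorithm.

```python
# Online sketch of the landmark-finding step: each incoming GPS reading is
# attached to the first nearby centroid, or promoted to a new centroid when
# its neighborhood in the history is dense enough.
from math import dist

EPS, MIN_PTS = 1.0, 3  # illustrative clustering parameters

def process_point(p, history, centroids, landmarks):
    """history: past points; centroids: list of points; landmarks: list of
    sets of centroids. Returns the landmark for p, or None if none found."""
    history.append(p)
    # 1. Near an existing centroid? Associate p with the first one found.
    for c in centroids:
        if dist(p, c) <= EPS:
            return next(lm for lm in landmarks if c in lm)
    # 2. Dense neighborhood? Promote p to a centroid with its own landmark.
    if len([q for q in history if dist(p, q) <= EPS]) >= MIN_PTS:
        centroids.append(p)
        landmarks.append({p})
        return landmarks[-1]
    return None  # no landmark found for this reading yet
```

Because the history accumulates as readings arrive, no prior scan of the full data set is needed, which is the online property the embodiment emphasizes.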
The landmark module (LM) performs the above landmark-finding steps for different Eps (circle radius) values, forming a hierarchy of landmarks, so that it can detect landmarks at different granularities. The landmark module preferentially returns landmarks detected with the lower Eps values and, if any is found, returns it as output for the input point. Further, each landmark may be automatically tagged with an identifier to facilitate naming the individual landmarks, and the output of the landmark-finding algorithm is reflected as landmark events in the system event stream.
It is understood that conventional location-detection methods based on clustering techniques, such as the partition clustering method (applying the K-means algorithm to tracked GPS positions) and the time-based clustering method, can perform physical location detection but are not suitable for detecting two different clusters at the same physical location. Density-based clustering for location detection overcomes the problems of the partition and time-based clustering methods, but it needs the whole available GPS position data set in advance in order to detect the clusters; that is, conventional density-based clustering is essentially an off-line algorithm. The present application improves the conventional density-based clustering algorithm and provides an algorithm that can find the different clusters while reading GPS positioning information, so that not all positions need to be collected before the algorithm is run.
The landmark-finding algorithm provided by this embodiment can detect the different clusters while receiving GPS positions and can be regarded as an online real-time algorithm, overcoming the limitation that conventional clustering algorithms need the whole GPS data set as input before running; the algorithm can therefore run as part of an on-board system, realizing real-time online landmark detection. Each detected cluster is automatically marked as a landmark of a certain trip, and other in-vehicle application functions, such as system functions based on geographical location, can be enhanced through the landmark-finding algorithm.
In one embodiment, the process of determining that the trip state of the vehicle has changed includes: when the change of the motion state of the vehicle is detected according to the motion state data or the change of the driver and the passenger in the vehicle is detected according to the vehicle body state data, the change of the travel state of the vehicle is judged.
Further, in one embodiment, the method further comprises: if it is detected from the motion state data that the vehicle has remained stopped for the set time and then changes from the stopped state to the moving state, determining that the vehicle is in the trip start state; and if it is detected from the motion state data that the vehicle changes from the moving state to the stopped state and remains stopped for the set time, determining that the vehicle is in the trip end state.
In a particular implementation, the sub-trip module (ISM) is used to infer the moments at which a trip starts and ends, i.e., to detect when the vehicle stops and when it is running, in addition to sensing the sequence of sub-trip segments covered by the vehicle. A new trip is considered to have started when the engine is started after a period of inactivity, and to have ended when the engine remains off for a period of time after being shut down; the patterns defining the start and end of a trip can be expressed as:
after a preset time → the vehicle engine is started,
the vehicle engine is turned off and the hold time is greater than or equal to the preset time.
The first pattern is used to detect the start of a trip and the second to detect its end; the preset times of the two patterns may be the same or different.
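The two patterns above can be sketched as a pass over a chronological engine-event stream; the `(name, timestamp)` event tuples and the fixed preset value are illustrative assumptions.

```python
# Sketch of the trip start/end patterns over engine events:
# - an engine start at least PRESET seconds after the last shut-off starts a trip;
# - an engine shut-off with no restart within PRESET seconds ends the trip.
PRESET = 60.0  # seconds; illustrative threshold

def trip_markers(events):
    """events: chronological (name, t) pairs, names 'engine_on'/'engine_off'.
    Returns ('trip_start'/'trip_end', t) markers."""
    markers, last_off = [], None
    for i, (name, t) in enumerate(events):
        if name == "engine_on":
            # Start pattern: the engine was off for at least PRESET beforehand.
            if last_off is None or t - last_off >= PRESET:
                markers.append(("trip_start", t))
        elif name == "engine_off":
            last_off = t
            # End pattern: no restart within PRESET after this shut-off.
            nxt = next((u for n, u in events[i + 1:] if n == "engine_on"), None)
            if nxt is None or nxt - t >= PRESET:
                markers.append(("trip_end", t))
    return markers
```

A brief stop with a quick restart (shorter than the preset time) thus produces no markers, so the trip continues across it.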
Considering that when a change occurs in the passengers in the vehicle, a new trip will start, the following pattern is defined:
motion state of the vehicle (moving) → motion state of the vehicle (stopping),
any vehicle door (open).
This pattern reflects the activity of the vehicle stopping after moving and any door being opened, which indicates that the passengers will change; the current trip therefore ends and a new trip starts. When this pattern is triggered, a trip event is generated, representing the trip that has just ended.
In this embodiment, whether the motion state of the vehicle changes is detected through the motion state data, and whether the driver and the passenger in the vehicle change is detected according to the vehicle body state data, so that whether the travel state of the vehicle changes is judged according to the detection result, and whether the current position of the vehicle needs to be determined can be further determined.
In one embodiment, referring to FIG. 9, a diagram of daily trips and vehicle awareness levels illustrates the in-vehicle context-awareness work objectives. The physical-world part of the diagram shows a trip representing the morning occupancy of a father and son; the system's awareness level for this trip is shown in the vehicle awareness portion of FIG. 2, where the system is able to discover the three trip landmarks and the passenger conditions, and can therefore provide many useful functions. For example, in an emergency such as a traffic accident, the system of each vehicle involved will be able to inform the emergency center of the status of its passengers at the moment of the accident, so that the center can manage its resources (e.g., ambulances) more efficiently. The output of the in-vehicle context awareness system is the raw data in FIG. 2, and the context awareness of the current work can then be enriched by using more data sources; for example, the three landmarks may be labeled with more representative names by some location-based service. The system will thus be able to provide a more powerful level of functionality: for example, if a minor is detected in the vehicle, the doors adjacent to the minor's seat are automatically locked to avoid a potentially dangerous situation.
The goal of the in-vehicle context awareness system is to understand the overall situation of the vehicle, so defining a formal representation for this information becomes critical. Referring to FIG. 10, a diagram of the in-vehicle context information model depicts the information hierarchy used to model the vehicle context. The context of the vehicle is composed mainly of a combination of a static context and a dynamic context. The in-vehicle dynamic context includes four dimensions: location, activity, time, and identity, where the location attribute relates to the origin and destination of the vehicle, the activity attribute can be regarded as a dual feature defining both the current motion state of the vehicle and the trip being covered or just finished, the time attribute represents the time of the trip, and the identity attribute represents the passenger occupancy of the vehicle.
In another embodiment, as shown in fig. 11, there is provided a vehicle use environment sensing method including the steps of:
step S1102, vehicle state data is acquired through a vehicle sensor mounted on a vehicle; the vehicle state data includes motion state data, body state data, and position information data of the vehicle.
And step S1104, obtaining the types of the drivers and passengers, the number of drivers and passengers, and the activities of the drivers and passengers in the vehicle as the driver and passenger information in the vehicle according to the vehicle body state data and the preset mapping relationship among the vehicle body state data, the driver and passenger activities, and the driver and passenger types.
In step S1106, the motion state of the vehicle is determined based on the motion state data.
And step S1108, determining the current position of the vehicle according to the position information data when the travel state of the vehicle is judged to be changed according to the motion state data and the vehicle body state data.
In step S1110, the current travel landmark of the vehicle is determined according to the current position of the vehicle.
Step S1112 is to acquire time information and determine the current travel of the vehicle according to the time information, the motion state of the vehicle, the information of the driver and the passenger in the vehicle, and the current travel landmark of the vehicle.
Step S1114 obtains a context awareness result of the vehicle usage environment according to the current trip of the vehicle and historical trip information pre-stored in the database.
The vehicle usage environment sensing method provided by this embodiment, based on a defined formal representation of the in-vehicle context, proposes to fuse data from different sensors of the vehicle through a Complex Event Processing (CEP) method so as to sense two characteristics of the vehicle environment: the driver/passenger conditions and the most important places, or landmarks, in the trips normally covered by the vehicle. For the driver/passenger conditions, a novel pattern-based approach is applied, while for landmark determination a new density-based online real-time clustering approach is applied. A complex event processing engine reduces the implementation difficulty of the CEP method; with its help, vehicle information, passenger information, and route information are detected, and, combined with the density-based clustering algorithm, the accurate state of the current vehicle environment is obtained, realizing an on-board context-aware application and improving the quality of the system's intelligent travel services. In addition, the method can detect the frequent trips of the vehicle in the real environment and use this information to improve the accuracy of the vehicle's environment perception.
It should be understood that, although the steps in the above flowcharts are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times; the order of their execution is not necessarily sequential, and they may be performed in turns or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 12, there is provided a vehicle use environment sensing system including: a data collection module 1202, a motion state awareness module 1204, a passenger awareness module 1206, a landmark awareness module 1208, a stroke module 1210, and a context awareness module 1212, wherein:
a data acquisition module 1202 for acquiring vehicle state data through vehicle sensors mounted on a vehicle; the vehicle state data comprises motion state data, vehicle body state data and position information data of the vehicle;
a motion state sensing module 1204, configured to determine a motion state of the vehicle according to the motion state data;
the passenger perception module 1206 is used for determining driver and passenger information in the vehicle according to the vehicle body state data;
the landmark sensing module 1208 is used for acquiring the current position of the vehicle according to the position information data when the travel state of the vehicle is judged to be changed according to the motion state data and the vehicle body state data;
the journey module 1210 is used for determining the current journey of the vehicle according to the motion state of the vehicle, the information of drivers and passengers in the vehicle and the current position of the vehicle;
the context awareness module 1212 is configured to obtain a context awareness result of the vehicle using environment according to the current travel of the vehicle and the historical travel information stored in the database.
In one embodiment, the passenger sensing module 1206 is specifically configured to obtain the type of the driver and the passenger, the number of the driver and the passenger, and the activity of the driver and the passenger in the vehicle as the information of the driver and the passenger in the vehicle according to the vehicle body state data and the preset mapping relationship between the vehicle body state data, the activities of the driver and the passenger, and the types of the driver and the passenger.
In one embodiment, the journey module 1210 is specifically configured to determine a current journey landmark of the vehicle according to a current position of the vehicle; and determining the current travel of the vehicle according to the motion state of the vehicle, the information of drivers and passengers in the vehicle and the current travel landmark of the vehicle.
In one embodiment, the journey module 1210 is further configured to obtain a plurality of sub-areas corresponding to a plurality of preset landmark areas; if the current position is detected to be located in any of the sub-areas, the landmark corresponding to the sub-area where the current position is located is taken as the current travel landmark of the vehicle; and if the current position is detected not to be located in any of the preset landmark areas, the landmark corresponding to the sub-area adjacent to the current position is taken as the travel landmark of the vehicle.
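One possible realization models each landmark sub-area as a circle around a center point and falls back to the nearest landmark when the position lies inside none of them. The landmarks, coordinates, radii, and the distance approximation below are all assumptions for illustration:

```python
import math

# Hypothetical landmark sub-areas: landmark name -> (lat, lon, radius in meters).
LANDMARK_AREAS = {
    "home": (23.10, 113.27, 150.0),
    "office": (23.12, 113.32, 200.0),
}

def _distance_m(p, q):
    """Equirectangular distance approximation; adequate for small sub-areas."""
    lat1, lon1, lat2, lon2 = map(math.radians, (p[0], p[1], q[0], q[1]))
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    return math.hypot(x, y) * 6371000.0  # mean Earth radius in meters

def current_trip_landmark(position):
    """Return the landmark whose sub-area contains the position;
    otherwise the nearest (adjacent) landmark."""
    best, best_d = None, float("inf")
    for name, (lat, lon, radius) in LANDMARK_AREAS.items():
        d = _distance_m(position, (lat, lon))
        if d <= radius:
            return name
        if d < best_d:
            best, best_d = name, d
    return best
```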
In one embodiment, the landmark sensing module 1208 is further configured to determine that the travel state of the vehicle has changed when a change in the motion state of the vehicle is detected according to the motion state data or a change of the drivers and passengers in the vehicle is detected according to the vehicle body state data.
In one embodiment, the landmark sensing module 1208 is further configured to determine that the vehicle is in the travel start state if it is detected, according to the motion state data, that the vehicle remains in the stopped state for a set time and then transitions to the moving state; and to determine that the vehicle is in the travel end state if it is detected, according to the motion state data, that the vehicle transitions from the moving state to the stopped state and remains stopped for the set time.
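The dwell-time logic can be sketched as a small state machine in which a transition counts as a travel boundary only after the relevant state has persisted for the set time. The 60-second default, class name, and method names are assumptions:

```python
class TripStateDetector:
    """Detect travel start/end from the motion state with a dwell-time threshold."""

    def __init__(self, dwell_s=60.0, now=0.0):
        self.dwell_s = dwell_s      # the "set time" from the embodiment (assumed 60 s)
        self.state = "stopped"
        self.since = now            # when the current state began
        self.end_reported = True    # no active travel initially

    def update(self, moving, now):
        """Feed one motion-state sample; return 'trip_start', 'trip_end', or None."""
        new_state = "moving" if moving else "stopped"
        if new_state != self.state:
            held = now - self.since
            prev, self.state, self.since = self.state, new_state, now
            # Stopped for >= set time, then moving: travel start.
            if new_state == "moving" and prev == "stopped" and held >= self.dwell_s:
                self.end_reported = False
                return "trip_start"
            return None
        # Still stopped: once the stop has persisted for the set time, travel end.
        if self.state == "stopped" and not self.end_reported and now - self.since >= self.dwell_s:
            self.end_reported = True
            return "trip_end"
        return None
```

Note that, under this reading, a brief pause at a traffic light does not end the travel, because the stop never persists for the set time.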
In one embodiment, the journey module 1210 is further configured to obtain time information, and determine a current journey of the vehicle according to the time information, a motion state of the vehicle, driver and passenger information in the vehicle, and a current position of the vehicle.
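Combining time information with the other signals might look like the following rule-based sketch; the hour windows, landmark names, and trip labels are purely hypothetical and serve only to show how the four inputs could jointly determine the current travel:

```python
def classify_trip(hour, motion_state, occupants, landmark):
    """Label the current travel from time of day, motion state,
    occupant info, and the current travel landmark (all rules illustrative)."""
    if motion_state != "moving":
        return "no_active_trip"
    if landmark == "office" and 6 <= hour < 10:
        return "commute_to_work"
    if landmark == "home" and 17 <= hour < 21:
        return "commute_home"
    if any(o.get("type") == "child" for o in occupants):
        return "family_trip"
    return "other_trip"
```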
It should be noted that the vehicle use environment sensing system of the present application corresponds one to one with the vehicle use environment sensing method of the present application, and the technical features and advantages described in the embodiments of the vehicle use environment sensing method are all applicable to the embodiments of the vehicle use environment sensing system; for specific contents, reference may be made to the descriptions in the method embodiments of the present application, which are not repeated here.
Furthermore, the various modules in the vehicle use environment sensing system described above may be implemented in whole or in part by software, hardware, or combinations thereof. Each module may be embedded in hardware form in, or independent of, a processor in the computer device, or may be stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a terminal, and whose internal structure diagram may be as shown in fig. 13. The computer device includes a processor, a memory, a communication interface, a display screen, and an input system connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless communication may be realized through WIFI, an operator network, NFC (near field communication), or other technologies. The computer program is executed by the processor to implement a vehicle use environment sensing method. The display screen of the computer device may be a liquid crystal display screen or an electronic ink display screen, and the input system of the computer device may be a touch layer covering the display screen, a key, a trackball, or a touch pad arranged on the housing of the computer device, or an external keyboard, touch pad, or mouse.
Those skilled in the art will appreciate that the structure shown in fig. 13 is a block diagram of only a portion of the structure associated with the present application and does not constitute a limitation on the computer devices to which the present application may be applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is further provided, which includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of the above method embodiments when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM), among others.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered within the scope of this specification.
The above-mentioned embodiments express only several implementations of the present application, and their descriptions are specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A vehicle use environment perception method, characterized in that the method comprises:
acquiring vehicle state data through a vehicle sensor mounted on a vehicle; the vehicle state data comprises motion state data, body state data and position information data of the vehicle;
determining the motion state of the vehicle and the information of drivers and passengers in the vehicle according to the motion state data and the vehicle body state data respectively;
when the travel state of the vehicle is determined to be changed according to the motion state data and the vehicle body state data, determining the current position of the vehicle according to the position information data;
determining the current travel of the vehicle according to the motion state of the vehicle, the information of drivers and passengers in the vehicle, the current position of the vehicle and historical travel information stored in a database;
and obtaining a context perception result of the vehicle using environment according to the current travel of the vehicle and historical travel information pre-stored in a database.
2. The method of claim 1, wherein said determining occupant information in said vehicle from said body state data comprises:
and obtaining the type, the number and the activity of the drivers and the passengers in the vehicle as the information of the drivers and the passengers in the vehicle according to the vehicle body state data and the preset mapping relationship among the vehicle body state data, the activities of the drivers and the passengers and the types of the drivers and the passengers.
3. The method according to claim 1, wherein determining the current trip of the vehicle based on the motion state of the vehicle, the occupant information in the vehicle, and the current location of the vehicle comprises:
determining a current travel landmark of the vehicle according to the current position of the vehicle;
and determining the current travel of the vehicle according to the motion state of the vehicle, the information of drivers and passengers in the vehicle and the current travel landmark of the vehicle.
4. The method of claim 3, wherein determining the current travel landmark for the vehicle based on the current location of the vehicle comprises:
acquiring a plurality of sub-regions corresponding to a plurality of preset landmark regions, and if the current position is detected to be located in any one of the sub-regions, taking the landmark corresponding to the sub-region where the current position is located as the current travel landmark of the vehicle;
and if the current position is detected not to be located in any one of the preset landmark areas, taking the landmark corresponding to the sub-area adjacent to the current position as the travel landmark of the vehicle.
5. The method according to claim 1, wherein determining that the travel state of the vehicle has changed comprises:
and when the change of the motion state of the vehicle is detected according to the motion state data or the change of the driver and the passenger in the vehicle is detected according to the vehicle body state data, determining that the travel state of the vehicle is changed.
6. The method of claim 5, further comprising:
if it is detected according to the motion state data that the vehicle remains in a stopped state for a set time and then transitions to a moving state, determining that the vehicle is in a travel start state;
and if it is detected according to the motion state data that the vehicle transitions from the moving state to the stopped state and remains stopped for the set time, determining that the vehicle is in a travel end state.
7. The method according to claim 1, wherein determining the current trip of the vehicle based on the motion state of the vehicle, the occupant information in the vehicle, and the current location of the vehicle comprises:
acquiring time information, and determining the current travel of the vehicle according to the time information, the motion state of the vehicle, the information of drivers and passengers in the vehicle and the current position of the vehicle.
8. A vehicle use environment awareness system, the system comprising:
the data acquisition module is used for acquiring vehicle state data through a vehicle sensor arranged on a vehicle; the vehicle state data comprises motion state data, body state data and position information data of the vehicle;
the motion state sensing module is used for determining the motion state of the vehicle according to the motion state data;
the passenger sensing module is used for determining driver and passenger information in the vehicle according to the vehicle body state data;
the landmark sensing module is used for acquiring the current position of the vehicle according to the position information data when the travel state of the vehicle is judged to be changed according to the motion state data and the vehicle body state data;
the travel module is used for determining the current travel of the vehicle according to the motion state of the vehicle, the information of drivers and passengers in the vehicle and the current position of the vehicle;
and the context perception module is used for obtaining a context perception result of the vehicle using environment according to the current travel of the vehicle and historical travel information stored in a database.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202110095111.5A 2021-01-25 2021-01-25 Vehicle use environment perception method, system, computer device and storage medium Active CN112896176B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110095111.5A CN112896176B (en) 2021-01-25 2021-01-25 Vehicle use environment perception method, system, computer device and storage medium


Publications (2)

Publication Number Publication Date
CN112896176A true CN112896176A (en) 2021-06-04
CN112896176B CN112896176B (en) 2022-10-18

Family

ID=76117619

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110095111.5A Active CN112896176B (en) 2021-01-25 2021-01-25 Vehicle use environment perception method, system, computer device and storage medium

Country Status (1)

Country Link
CN (1) CN112896176B (en)


Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107328410A (en) * 2017-06-30 2017-11-07 百度在线网络技术(北京)有限公司 Method and automobile computer for positioning automatic driving vehicle
CN108241371A (en) * 2016-12-27 2018-07-03 丰田自动车株式会社 Automated driving system
CN109050520A (en) * 2018-08-16 2018-12-21 上海小蚁科技有限公司 Vehicle driving state based reminding method and device, computer readable storage medium
CN109383415A (en) * 2017-08-08 2019-02-26 通用汽车环球科技运作有限责任公司 Context aware vehicular communication system and control logic with adaptive crowd's sensing function
CN110118567A (en) * 2018-02-06 2019-08-13 北京嘀嘀无限科技发展有限公司 Trip mode recommended method and device
CN110654343A (en) * 2018-06-29 2020-01-07 比亚迪股份有限公司 Safety belt monitoring system and method, vehicle and storage medium
WO2020058234A1 (en) * 2018-09-18 2020-03-26 Continental Automotive France Device for predicting the most probable route of a vehicle
CN111159459A (en) * 2019-12-04 2020-05-15 恒大新能源汽车科技(广东)有限公司 Landmark positioning method, device, computer equipment and storage medium
CN111301330A (en) * 2020-03-10 2020-06-19 马瑞利汽车电子(广州)有限公司 Vehicle rear door and window control method
CN111497701A (en) * 2020-04-30 2020-08-07 广州小鹏汽车制造有限公司 Seat adjusting method and device and vehicle
CN111813113A (en) * 2020-07-06 2020-10-23 安徽工程大学 Bionic vision self-movement perception map drawing method, storage medium and equipment
US20200356909A1 (en) * 2019-05-10 2020-11-12 Didi Research America, Llc Method and system for recommending multi-modal itineraries
CN111942115A (en) * 2019-05-16 2020-11-17 北京车和家信息技术有限公司 Control method of vehicle glass and vehicle


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117104161A (en) * 2023-09-19 2023-11-24 深圳达普信科技有限公司 Intelligent cabin environment sensing and controlling method and system based on vehicle sensor
CN117104161B (en) * 2023-09-19 2024-04-16 深圳达普信科技有限公司 Intelligent cabin environment sensing and controlling method and system based on vehicle sensor

Also Published As

Publication number Publication date
CN112896176B (en) 2022-10-18

Similar Documents

Publication Publication Date Title
US7239252B2 (en) Communications system and program
US20190071101A1 (en) Driving assistance method, driving assistance device which utilizes same, autonomous driving control device, vehicle, driving assistance system, and program
US11935401B2 (en) Vehicle control device, vehicle control method, and vehicle control system
WO2018220971A1 (en) Communication control device, communication control method, and computer program
JPWO2018146882A1 (en) Information providing system, server, mobile terminal, and computer program
CN109841090A (en) For providing the system and method for the safety alarm based on infrastructure
Terroso-Sáenz et al. A complex event processing approach to perceive the vehicular context
US11047704B2 (en) System for providing a notification of a presence of an occupant in a vehicle through historical patterns and method thereof
CN104508719A (en) Driving assistance system and driving assistance method
CN110738870A (en) System and method for avoiding collision routes
US20220080976A1 (en) Mobile device and system for identifying and/or classifying occupants of a vehicle and corresponding method thereof
US20230134342A1 (en) System and/or method for vehicle trip classification
US20210140787A1 (en) Method, apparatus, and system for detecting and classifying points of interest based on joint motion
CN112896176B (en) Vehicle use environment perception method, system, computer device and storage medium
US20210142187A1 (en) Method, apparatus, and system for providing social networking functions based on joint motion
US20230033672A1 (en) Determining traffic violation hotspots
US20210142435A1 (en) Method, apparatus, and system for providing ride-sharing functions based on joint motion
US20220281482A1 (en) Vehicle control device, vehicle control method, and computer-readable storage medium storing program
CN115840637A (en) Method and system for evaluation and development of autopilot system features or functions
US11333523B2 (en) Vehicle control device, output device, and input and output device
WO2019216142A1 (en) Vehicular information provision system and onboard apparatus
CN114973743B (en) Vehicle allocation management device for public vehicle and automatic driving vehicle
US20200218263A1 (en) System and method for explaining actions of autonomous and semi-autonomous vehicles
US20240135719A1 (en) Identification of unknown traffic objects
JP2023062463A (en) Onboard device, onboard system, and vehicle information provision method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 511458 room 1201, No. 37, Jinlong, Nansha street, Xiangjiang financial and business center, Nansha District, Guangzhou City, Guangdong Province (office only)

Applicant after: Hechuang Automotive Technology Co.,Ltd.

Address before: 511458 room 1201, No. 37, Jinlong, Nansha street, Xiangjiang financial and business center, Nansha District, Guangzhou City, Guangdong Province (office only)

Applicant before: Hechuang Smart Technology Co.,Ltd.

Address after: 511458 room 1201, No. 37, Jinlong, Nansha street, Xiangjiang financial and business center, Nansha District, Guangzhou City, Guangdong Province (office only)

Applicant after: Hechuang Smart Technology Co.,Ltd.

Address before: Room 1201, No. 37, Jinsha street, Nansha street, Xiangjiang financial and business center, Nansha District, Guangzhou City, Guangdong Province

Applicant before: Guangzhou Auto Weilai New Energy Automotive Technology Co.,Ltd.

GR01 Patent grant