CN115520201B - Vehicle main driving position function dynamic response method and related device


Info

Publication number: CN115520201B (application CN202211319818.0A)
Authority: CN (China)
Prior art keywords: intention, control operation, child, source, function
Legal status: Active (application granted)
Other languages: Chinese (zh)
Other versions: CN115520201A
Inventors: 王洁, 陈曦, 刘跃全
Current Assignee: Shenzhen Xihua Technology Co Ltd
Original Assignee: Shenzhen Xihua Technology Co Ltd
Application filed by Shenzhen Xihua Technology Co Ltd
Priority application: CN202211319818.0A (published as CN115520201A, granted as CN115520201B)
Related application: CN202310447643.XA (published as CN116534034A)

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00: Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/0098: Details of control systems ensuring comfort, safety or stability not otherwise provided for
    • B60W40/00: Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08: Estimation or calculation of non-directly measurable driving parameters related to drivers or passengers
    • B60W2540/00: Input parameters relating to occupants
    • B60W2540/01: Occupants other than the driver
    • B60W2540/043: Identity of occupants

Abstract

The embodiment of the application discloses a dynamic response method and a related device for a vehicle main driving seat function, applied to a cockpit domain controller in a domain control system of a target vehicle. The method comprises the following steps: acquiring target data, wherein the target data comprises at least one of audio data and image data; under the condition that a child is detected to execute a control operation at the main driving seat, identifying an intention source corresponding to the control operation according to the target data; if the intention source is the child's own intention, matching a first function authority for the main driving seat and judging whether to respond to the control operation according to the first function authority; and if the intention source is the intention of another occupant, responding to the control operation according to that intention. The application helps avoid situations in which a child climbs into the main driving seat and triggers driving-related functions without other occupants being aware, which would affect subsequent driving, and can therefore improve driving safety.

Description

Vehicle main driving position function dynamic response method and related device
Technical Field
The application relates to the technical field of data processing, in particular to a dynamic response method for a main driving position function of a vehicle and a related device.
Background
In real life, for example during holiday travel or when carrying passengers, a child in the vehicle may climb into the main driving seat out of boredom and trigger structures at the main driving seat such as the steering wheel, the horn, or the central control display screen. If the triggered operation concerns vehicle driving, such as starting the vehicle, changing the wheel direction, sounding the horn, or lowering the chassis, it creates potential safety hazards for subsequent driving of the vehicle.
Disclosure of Invention
The embodiment of the application provides a dynamic response method and a related device for a vehicle main driving seat function, so as to prevent a child who climbs into the main driving seat from triggering driving-related functions of the vehicle and affecting driving, thereby improving driving safety.
In a first aspect, an embodiment of the present application provides a dynamic response method for a vehicle main driving seat function, which is applied to a cockpit domain controller in a domain control system of a target vehicle, where the domain control system includes the cockpit domain controller, a pickup device, and an in-vehicle camera, and the pickup device and the in-vehicle camera are in communication connection with the cockpit domain controller; the method comprises the following steps:
acquiring target data, wherein the target data comprises at least one of audio data and image data, the audio data is audio information in the target vehicle collected by the sound pickup equipment, and the image data is image information in the target vehicle collected by the in-vehicle camera;
under the condition that it is detected that the child executes the control operation at the main driving seat, identifying an intention source corresponding to the control operation according to the target data, wherein the intention source is the intention of the child or the intention of other drivers and passengers, and the intention source is used for representing an object driving the child to execute the control operation;
if the intention source is the intention of the child, matching a first function authority for the main driving position, and judging whether to respond to the control operation according to the first function authority, wherein the first function authority is used for representing the use authority provided by the target vehicle to the child executing the control operation;
and if the intention source is the intention of other drivers, responding to the control operation according to the intention of other drivers.
In a second aspect, the present application provides a dynamic response apparatus for a vehicle main driving seat function, which is applied to a cockpit domain controller in a domain control system of a target vehicle, where the domain control system includes the cockpit domain controller, a pickup device, and an in-vehicle camera, and the pickup device and the in-vehicle camera are in communication connection with the cockpit domain controller; the device comprises:
the acquisition unit is used for acquiring target data, wherein the target data comprises at least one of audio data and image data, the audio data is audio information in the target vehicle acquired by the pickup equipment, and the image data is image information in the target vehicle acquired by the in-vehicle camera; the judging unit is used for judging whether the child enters a main driving seat or not according to the target data;
the identification unit is used for identifying an intention source corresponding to the control operation according to the target data if the control operation executed by the child at the main driving position is detected, wherein the intention source is the intention of the child or the intention of other drivers and passengers, and the intention source is used for representing an object driving the child to execute the control operation;
the matching unit is used for matching a first function authority for the main driving seat when the intention source is the intention of the child, and judging whether to respond to the control operation according to the first function authority, wherein the first function authority is used for representing the use authority provided by the target vehicle to the child currently executing the control operation;
and the response unit is also used for responding to the control operation according to the other occupant intentions when the intention source is the other occupant intentions.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, where the programs include instructions for performing the steps in the first aspect of the embodiment of the present application.
In a fourth aspect, an embodiment of the present application provides a computer storage medium storing a computer program for electronic data exchange, where the computer program makes a computer perform some or all of the steps described in the first aspect of the present embodiment.
It can be seen that, in the embodiment of the application, the cockpit domain controller obtains target data, where the target data includes at least one of audio data and image data; under the condition that a child is detected to execute a control operation at the main driving seat, an intention source corresponding to the control operation is identified according to the target data, where the intention source is the child's own intention or the intention of another occupant and is used for representing the object driving the child to execute the control operation; if the intention source is the child's own intention, a first function authority is matched for the main driving seat, and whether to respond to the control operation is judged according to the first function authority, where the first function authority is used for representing the use authority provided by the target vehicle to the child executing the control operation; and if the intention source is the intention of another occupant, the control operation is responded to according to that intention. In this way, the application can judge, based on the intention source, whether the child executed the control operation of its own accord or at the behest of another occupant, and decide whether to respond to the control operation accordingly. This meets the occupants' needs, helps avoid a child climbing into the main driving seat and triggering driving-related functions without other occupants being aware, which would affect subsequent driving, and can improve driving safety.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1A is a schematic diagram illustrating an architecture of an exemplary dynamic response system for a master driver's seat function of a vehicle according to an embodiment of the present application;
fig. 1B is a diagram illustrating an exemplary composition of an electronic device according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart of a method for dynamically responding to a master driving seat function of a vehicle according to an embodiment of the present application;
FIG. 3A is a block diagram of functional units of a dynamic response device for a main driving position function of a vehicle according to an embodiment of the present disclosure;
fig. 3B is a block diagram illustrating functional units of another dynamic response device for a vehicle main driving position function according to an embodiment of the present disclosure.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements but may alternatively include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The embodiments of the present application will be described below with reference to the drawings.
The technical solution of the present application may be applied to an example domain control system 10 as shown in fig. 1A, and the example domain control system 10 includes a cockpit domain controller, a body domain controller, an autopilot domain controller, a power domain controller, a chassis domain controller, and a central control display screen. The cockpit domain controller, the body domain controller, the automatic driving domain controller, the power domain controller and the chassis domain controller are all in communication connection with the central control display screen, and any two of the cockpit domain controller, the body domain controller, the automatic driving domain controller, the power domain controller and the chassis domain controller are in communication connection. The cockpit domain controller, the body domain controller, the automatic driving domain controller, the power domain controller, the chassis domain controller and the central control display screen can comprehensively provide functions of controlling a vehicle body structure (such as a vehicle window, a skylight and the like), controlling a chassis structure (such as a chassis, tires and the like), controlling a driving function, controlling a power system and the like for drivers and passengers through information interaction.
The electronic device in the present application may be composed as shown in fig. 1B, and the electronic device may be a cockpit domain controller, a body domain controller, an automatic driving domain controller, a power domain controller, a chassis domain controller, or a central control display screen. The electronic device may comprise a processor 110, a memory 120, a communication interface 130, and one or more programs 121, wherein the one or more programs 121 are stored in the memory 120 and configured to be executed by the processor 110, and wherein the one or more programs 121 comprise instructions for performing any of the steps of the method embodiments described below.
The communication interface 130 is used to support communication between the electronic device 100 and other devices. The processor 110 may be, for example, a Central Processing Unit (CPU), a general purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or perform the various illustrative logical blocks, units, and circuits described in connection with the disclosure of the embodiments of the application. The processor may also be a combination of devices having computing functions, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
The memory 120 may be either volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The non-volatile memory may be a read-only memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an electrically Erasable EPROM (EEPROM), or a flash memory. Volatile memory can be Random Access Memory (RAM), which acts as external cache memory. By way of example, but not limitation, many forms of Random Access Memory (RAM) are available, such as Static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct bus RAM (DR RAM).
In a specific implementation, the processor 110 is configured to perform any one of the steps performed by the cockpit domain controller in the method embodiments described below, and when performing data transmission such as sending, the communication interface 130 is optionally invoked to perform the corresponding operation.
It should be noted that the structural schematic diagram of the electronic device is merely an example, and more or fewer devices may be specifically included, which is not limited herein.
Referring to fig. 2, fig. 2 is a schematic flow chart of a dynamic response method for a vehicle main driving seat function according to an embodiment of the present disclosure. The method may be applied to the cockpit domain controller in the domain control system shown in fig. 1A, where the domain control system includes the cockpit domain controller, a sound pickup device, and an in-vehicle camera, and the sound pickup device and the in-vehicle camera are in communication connection with the cockpit domain controller. As shown in fig. 2, the dynamic response method for the vehicle main driving seat function includes:
s210, target data is obtained, wherein the target data comprises at least one of audio data and image data.
The audio data is the audio information in the target vehicle collected by the pickup equipment, and the image data is the image information in the target vehicle collected by the camera in the vehicle.
In a specific implementation, the cockpit domain controller may obtain the target data in real time, so that the cockpit domain controller analyzes the target data in real time to identify whether information indicating intentions of other drivers and passengers exists in the target data. In this way, in the case that it is detected that a child enters the main driving seat to perform a control operation, the cockpit area controller can directly acquire the recognition result, thereby improving the response efficiency of the target vehicle.
In another implementation, before the target data is obtained, the cockpit domain controller may first detect whether a child has entered the main driving seat, and obtain the target data if so. Specifically, the domain control system further comprises a gravity sensor arranged at the main driving seat; the cockpit domain controller first acquires gravity data from the gravity sensor, the gravity data being used for representing the change in gravity at the main driving seat. The cockpit domain controller determines, according to the gravity data, whether the gravity at the main driving seat has increased; if so, it determines whether the increased gravity difference falls within a preset range, the preset range being used for representing a preset weight range of a child. If the gravity difference is within the preset range, it is determined that a child has entered the main driving seat; if not, it is determined that no child has entered the main driving seat. In this way, the target data is obtained only after it is detected from the weight data that a child has entered the main driving seat, and is then analyzed to obtain the identification result, which can reduce the real-time processing pressure on the cockpit domain controller.
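By way of illustration only, the following minimal Python sketch shows one way the gravity-based check described above could be expressed. The weight range, function names, and sensor interface are hypothetical and are not taken from the patent.

```python
# Hypothetical sketch of the gravity-based child detection described above.
# The preset weight range and the sensor interface are assumptions for illustration only.

CHILD_WEIGHT_RANGE_KG = (9.0, 36.0)  # assumed preset range representing a child's weight

def child_entered_main_seat(previous_weight_kg: float, current_weight_kg: float) -> bool:
    """Return True only if the increase in seat weight falls within the preset child range."""
    weight_increase = current_weight_kg - previous_weight_kg
    if weight_increase <= 0:
        return False  # gravity at the main driving seat did not increase
    low, high = CHILD_WEIGHT_RANGE_KG
    return low <= weight_increase <= high

# Example: an empty seat followed by a 20 kg increase is treated as a child entering.
print(child_entered_main_seat(0.0, 20.0))   # True
print(child_entered_main_seat(0.0, 70.0))   # False, more likely an adult
```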
When the target data is acquired after the condition that the child enters the main driving seat is detected according to the weight data, the acquired target data can be at least one of all audio data and all image data in the driving process. Alternatively, the target data may be at least one of audio data and image data within a preset time period before a situation that a child enters the main driving seat is detected, so that the task amount of data transmission can be reduced, and the data transmission efficiency can be improved.
Further, after detecting that a child enters the main driving position according to the weight data and acquiring the target data, the cockpit area controller may further determine whether a child enters the main driving position according to the audio information or the image information. Specifically, the process of determining whether the child enters the main driving seat according to the image data includes: and extracting a target image of a main driving seat area in the image data, and further judging whether the child enters the main driving seat according to the target image. Determining whether the child has entered the primary driving position based on the audio data includes: and extracting audio information of the children in the audio data, and identifying the corresponding sound source position according to the audio information, so as to judge whether the children enter the main driving position according to whether the sound source position is located in the main driving position.
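A minimal sketch of how the image-based seat-region check and the audio-based sound-source check just described could be combined is given below. The normalized seat region, the detector outputs, and the localization result are all assumptions for illustration, not details from the patent.

```python
# Hypothetical sketch: confirm a child at the main driving seat from image or audio data.
# The seat bounding box and the detection/localization inputs are placeholders.

MAIN_SEAT_REGION = (0.0, 0.0, 0.5, 1.0)  # assumed normalized (x1, y1, x2, y2) of the seat area

def point_in_region(x: float, y: float, region=MAIN_SEAT_REGION) -> bool:
    x1, y1, x2, y2 = region
    return x1 <= x <= x2 and y1 <= y <= y2

def child_in_seat_from_image(child_detections) -> bool:
    """child_detections: list of (x, y) centers of detected child faces or bodies."""
    return any(point_in_region(x, y) for x, y in child_detections)

def child_in_seat_from_audio(child_voice_source) -> bool:
    """child_voice_source: (x, y) position estimated from the microphone array, or None."""
    return child_voice_source is not None and point_in_region(*child_voice_source)

# Either modality locating the child within the main-seat region is enough in this sketch.
print(child_in_seat_from_image([(0.2, 0.6)]) or child_in_seat_from_audio(None))  # True
```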
And S220, under the condition that the control operation executed by the child at the main driving position is detected, identifying an intention source corresponding to the control operation according to the target data, wherein the intention source is the intention of the child or the intention of other drivers and passengers.
Wherein the intent source is for characterizing an object that motivates a child to perform a control operation. That is, the intention of the child refers to the control operation performed by the child according to the intention of the child, and the intention of the other occupants refers to the control operation performed by the child according to the intention of the other occupants.
And S230, if the intention source is the intention of the child, matching a first function authority for the main driving position, and judging whether to respond to the control operation according to the first function authority.
Wherein the first function right is used for representing the use right provided by the target vehicle to the child currently executing the control operation.
In a specific implementation, when matching the first function authority for the main driving seat, the audio information or image information of the child performing the control operation may be obtained first, and the child identity information corresponding to that audio or image information may then be obtained by querying a database according to the mapping relationship between audio/image information and identity information. Then, according to the mapping relationship between child identity information and function authority, the first function authority corresponding to the child identity information is matched, that is, the first function authority currently matched to the main driving seat.
Further, if the first functional authority includes a control operation currently performed by the child, the control operation may be responded to and communicated with at least one of a body zone controller, an automatic driving zone controller, a power zone controller, a chassis zone controller, and a center control display screen to perform the control operation. Illustratively, if the control operation is a video playing operation for a target video, and the first function right comprises a video playing right, the cockpit area controller may communicate with the central control display screen to control the central control display screen to play the video.
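As a minimal sketch of the permission lookup and decision just described, the following Python fragment maps a (hypothetical) child identity to a set of permitted functions and responds only when the requested function is inside that set. The identifiers and the permission table are illustrative assumptions.

```python
# Hypothetical sketch of matching the first function authority for the child at the
# main driving seat. The identity keys and the permission table are illustrative only.

PERMISSIONS_BY_CHILD_ID = {
    "child_001": {"video_playback", "audio_playback"},   # assumed default entertainment rights
}

def match_first_function_authority(child_id: str) -> set:
    return PERMISSIONS_BY_CHILD_ID.get(child_id, set())

def should_respond(child_id: str, requested_function: str) -> bool:
    """Respond only if the requested function is within the child's first function authority."""
    return requested_function in match_first_function_authority(child_id)

print(should_respond("child_001", "video_playback"))  # True, forward to the display screen
print(should_respond("child_001", "vehicle_start"))   # False, do not respond
```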
And S240, if the intention source is the intention of other drivers, responding to the control operation according to the intention of the other drivers.
In specific implementation, when the first function permission corresponding to the child does not include the permission of the function corresponding to the current control operation of the child, if the intention source is the intention of other drivers, it indicates that the driver allows the child to use the function corresponding to the current control operation, that is, the target vehicle may temporarily open the permission of the child to use the function corresponding to the current control operation according to the intention of the other drivers, so that the cockpit area controller may respond to the control operation according to the intention of the other drivers.
In a specific implementation, responding to the control operation includes the cockpit domain controller executing the response directly, or the cockpit domain controller sending an execution instruction to the body domain controller, the power domain controller, the automatic driving domain controller, the chassis domain controller, or the central control display screen, so that the corresponding domain controller or the central control display screen realizes the function corresponding to the control operation.
For example, when the currently performed control operation of the child is a horn activation operation, the cockpit domain controller may send a horn activation command to the power domain controller in response to the horn activation control operation, the horn activation command instructing the power domain controller to control the horn to sound.
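The routing of a responded operation to the domain controller that owns the corresponding function can be pictured with the small sketch below; the operation names and controller names in the table are assumptions chosen to mirror the horn example, not an interface defined by the patent.

```python
# Hypothetical sketch of routing a responded control operation to the domain controller
# that owns the corresponding function. Controller and operation names are assumptions.

ROUTING_TABLE = {
    "horn_start":    "power_domain_controller",
    "sunroof_close": "body_domain_controller",
    "chassis_lower": "chassis_domain_controller",
    "video_play":    "central_control_display",
}

def dispatch(control_operation: str) -> str:
    """Describe the execution instruction the cockpit domain controller would send."""
    target = ROUTING_TABLE.get(control_operation)
    if target is None:
        return "cockpit domain controller handles the operation directly"
    return f"send execution instruction for '{control_operation}' to {target}"

print(dispatch("horn_start"))  # instruct the power domain controller to sound the horn
```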
It can be seen that, in the present embodiment, the cockpit domain controller acquires target data, the target data including at least one of audio data and image data; under the condition that a child is detected to execute a control operation at the main driving seat, an intention source corresponding to the control operation is identified according to the target data, where the intention source is the child's own intention or the intention of another occupant and is used for representing the object driving the child to execute the control operation; if the intention source is the child's own intention, a first function authority is matched for the main driving seat and whether to respond to the control operation is judged according to the first function authority, where the first function authority is used for representing the use authority provided by the target vehicle to the child executing the control operation; and if the intention source is the intention of another occupant, the control operation is responded to according to that intention. In this way, the application can judge, based on the intention source, whether the child executed the control operation of its own accord or at the behest of another occupant, and decide whether to respond accordingly. This meets the occupants' needs, helps avoid a child climbing into the main driving seat and triggering driving-related functions without other occupants being aware, which would affect subsequent driving, and can improve driving safety.
In one possible example, when the target data includes audio data, the identifying an intent source corresponding to the control operation from the target data includes: identifying the audio data to obtain a semantic identification result; determining whether a directional sentence for representing an intention of other occupants to drive a child to perform a control operation is included in the semantic recognition result, the occupants in the target vehicle including the child entering a main driver seat and the other occupants; if yes, determining that the intention source is the intention of other drivers and passengers; if not, determining that the intention source is the intention of the child.
In a specific implementation, the cockpit domain controller can identify the audio information to obtain a semantic recognition result. If the semantic recognition result includes a directional sentence, the relevance between the directional sentence and the control operation executed by the child is determined; if the directional sentence is related to the control operation executed by the child, the intention source can be determined to be the intention of another occupant, and if not, the intention source can be determined to be the child's own intention.
Specifically, the association between the directional sentence and the control operation performed by the child includes a temporal association and an object association. The time relevance refers to the sequence of the control operation executed by the child and the occurrence time of the directional sentence, if the directional sentence occurs before the child executes the control operation, the relevance is established, otherwise, the relevance is not established. When the time relevance is determined to be established, the object relevance is determined, the object relevance refers to whether the execution object in the directional sentence is the child or not, if yes, the relevance is established, and if not, the relevance is not established.
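The temporal and object association just described can be summarized in the short sketch below. The data structure, field names, and labels are hypothetical; the logic simply mirrors the two conditions above (the sentence precedes the operation and addresses the child).

```python
# Hypothetical sketch of checking the temporal and object association between a recognized
# directional sentence and the child's control operation. Field names are illustrative.

from dataclasses import dataclass
from typing import Optional

@dataclass
class DirectionalSentence:
    timestamp: float      # when the sentence was spoken
    target_object: str    # who the sentence addresses, e.g. "child"

def intent_source(sentence: Optional[DirectionalSentence], operation_time: float) -> str:
    """'other_occupant' only if a directional sentence addressed to the child occurred
    before the child performed the control operation; otherwise 'child'."""
    if sentence is None:
        return "child"
    temporally_related = sentence.timestamp < operation_time
    object_related = sentence.target_object == "child"
    return "other_occupant" if (temporally_related and object_related) else "child"

print(intent_source(DirectionalSentence(10.0, "child"), operation_time=12.5))  # other_occupant
print(intent_source(None, operation_time=12.5))                                # child
```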
Therefore, in the example, the condition that the directional sentence exists can be identified through the audio data, so that the accuracy of the identification of the intention source can be ensured, the accuracy of the response strategy made by the cockpit area controller aiming at the control operation executed by the child is improved, and the riding experience of the driver and the passenger is improved.
In one possible example, when the target data includes image data, the identifying an intent source corresponding to the control operation from the target data includes: identifying actions of occupants, including children and other occupants entering a primary driver's seat, in the target vehicle based on the image data; judging whether other drivers and passengers execute auxiliary actions, wherein the auxiliary actions are used for representing actions for assisting the children to enter the main driving position; if yes, determining that the intention source is the intention of other drivers and passengers; if not, determining that the intention source is the intention of the child.
In a specific implementation, the cockpit domain controller may directly acquire the image data and extract from it a partial image related to the content of the control operation performed by the child, where the partial image includes images within a first preset time period before the child performs the control operation. The partial image is input to an action recognition model, and whether another occupant performed an auxiliary action is determined according to the recognition result of the action recognition model. The action recognition model is a model generated according to preset auxiliary actions. For example, the preset auxiliary actions include another occupant holding the child's hand to perform the operation, and the like; when the action recognition model recognizes, within the partial image related to the control operation performed by the child, image frames in which another occupant holds the child's hand to operate, it is determined that another occupant performed an auxiliary action.
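A minimal sketch of this frame-based check is given below. The action recognition model is reduced to a stand-in callable, and the frame format and label names are assumptions for illustration only.

```python
# Hypothetical sketch of the auxiliary-action check: frames preceding the child's control
# operation are passed to an action recognition model; if any frame is labelled as a preset
# assisting action, the intention source is attributed to the other occupant.

from typing import Callable, List

def intent_source_from_frames(
    frames: List[bytes],
    recognize_action: Callable[[bytes], str],
    assist_labels=("hold_child_hand", "lift_child_to_seat"),
) -> str:
    """Return 'other_occupant' if any frame shows a preset auxiliary action, else 'child'."""
    for frame in frames:
        if recognize_action(frame) in assist_labels:
            return "other_occupant"
    return "child"

# Usage with a dummy model that flags the second frame as an assisting action.
dummy_model = lambda frame: "hold_child_hand" if frame == b"frame2" else "none"
print(intent_source_from_frames([b"frame1", b"frame2"], dummy_model))  # other_occupant
```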
Alternatively, in a specific implementation, the cockpit domain controller may obtain the audio data and the image data at the same time; when no directional sentence is recognized from the audio data, the cockpit domain controller may further recognize from the image data whether another occupant performed an auxiliary action, so that the accuracy of the recognition result is improved through comprehensive evaluation of the audio data and the image data.
Therefore, in the present example, the intention source of the child performing the current control operation can be identified by determining whether the auxiliary action performed by the other driver occurs in the image data, so that the accuracy of the intention source determination can be ensured.
In one possible example, in the case where the intention source is an intention of another occupant, before the responding to the control operation according to the intention of the other occupant, the method includes: determining a target occupant identity corresponding to other occupant intentions; matching a second function authority corresponding to the target driver and passenger identity; judging whether the other drivers and passengers have the authority to execute the control operation according to the second function authority; if yes, executing the control operation according to the other driver and passenger intentions; if not, the control operation is not responded.
Wherein the second functional right is used to characterize a range of available functions that the target vehicle provides to the target occupant. A target occupant refers to an occupant who presents the intent of other occupants.
In specific implementation, after the intention source is determined to be the intention of other drivers, voiceprint information or face information corresponding to the intentions of the other drivers can be acquired, so that the corresponding target driver identity is matched according to the voiceprint information or the face information, and then the second function permission having the mapping relation with the target driver identity is obtained through matching. Therefore, whether the target driver and the passenger have the use permission of the control operation currently executed by the child is judged according to the second function permission, if yes, the control operation can be responded, and if not, the control operation cannot be responded.
Specifically, if the intention of the other occupant is determined from the audio data, the cockpit domain controller can perform voiceprint recognition processing on the audio data to obtain a voiceprint recognition result; the target occupant corresponding to the directional sentence is determined according to the voiceprint recognition result; matching is then carried out in a sound database according to the sound information of the target occupant, the target occupant identity corresponding to that sound information is obtained, and the second function authority corresponding to the target occupant identity is matched.
Alternatively, if the intention of the other occupant is determined from the image data, the cockpit domain controller can perform face recognition processing on the image data to obtain a face recognition result; the target occupant who performed the auxiliary action is determined according to the face recognition result; matching is then carried out in an image database according to the face information of the target occupant, the target occupant identity corresponding to that face information is obtained, and the second function authority corresponding to the target occupant identity is matched.
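The following sketch condenses both branches into a single lookup-based check; the voiceprint/face keys, the identity tables, and the permission sets are invented placeholders, and real voiceprint or face matching would of course replace the dictionary lookups.

```python
# Hypothetical sketch of verifying the other occupant's own authority before responding.
# Voiceprint/face matching is reduced to simple lookups; all tables are illustrative.

OCCUPANT_ID_BY_VOICEPRINT = {"vp_owner": "owner"}
OCCUPANT_ID_BY_FACE = {"face_owner": "owner"}
SECOND_FUNCTION_AUTHORITY = {"owner": {"vehicle_start", "horn_start", "sunroof_close"}}

def occupant_has_authority(voiceprint=None, face=None, requested_function: str = "") -> bool:
    occupant_id = OCCUPANT_ID_BY_VOICEPRINT.get(voiceprint) or OCCUPANT_ID_BY_FACE.get(face)
    if occupant_id is None:
        return False  # target occupant identity could not be matched
    return requested_function in SECOND_FUNCTION_AUTHORITY.get(occupant_id, set())

print(occupant_has_authority(voiceprint="vp_owner", requested_function="horn_start"))  # True
print(occupant_has_authority(face="face_unknown", requested_function="horn_start"))    # False
```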
It can be seen that, in this example, by determining the second function authority of the target occupant corresponding to the intention of the other occupants, and determining whether the second function authority has the use authority of the control operation, whether to respond to the control operation is further determined, so that the occupants can be served more accurately, and the safety and reliability of the response control operation are ensured.
In one possible example, in the case where the intention source is the intention of another occupant, before responding to the control operation according to that intention, the method includes: determining whether the function corresponding to the control operation is a child-prohibited function; if yes, not responding to the control operation; and if not, executing the control operation according to the intention of the other occupant.
The function corresponding to the control operation refers to a hardware function or a software function of the target vehicle. For example: video playing function, vehicle starting function, loudspeaker starting function and the like.
The child-prohibited function refers to a function of the target vehicle that all children are prohibited from using.
In a specific implementation, if it is detected that the child executed the control operation according to the intention of another occupant, the function corresponding to the control operation is compared with the child-prohibited functions set by the target vehicle; if the function matches a child-prohibited function, the control operation is not responded to, and if it does not match, the control operation is responded to according to the intention of the other occupant.
For example, assume the child-prohibited functions include the vehicle start function. If the intention source is the intention of another occupant and the control operation is a vehicle start operation, the vehicle start function corresponding to that operation is compared with the child-prohibited functions, and the comparison shows that the vehicle start function is a child-prohibited function; at this time, even though the intention source of the control operation is the intention of another occupant, the cockpit domain controller does not respond to the control operation.
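The gate described above can be sketched in a few lines; the contents of the prohibited-function list are an assumption made only to mirror the vehicle-start example.

```python
# Hypothetical sketch of the child-prohibited-function gate: even when the intention source
# is another occupant, functions on this list are never executed for a child at the main seat.

CHILD_PROHIBITED_FUNCTIONS = {"vehicle_start", "gear_shift", "steering_control"}

def respond_with_prohibition_gate(requested_function: str, intent_is_other_occupant: bool) -> bool:
    if requested_function in CHILD_PROHIBITED_FUNCTIONS:
        return False  # never respond, regardless of the intention source
    return intent_is_other_occupant

print(respond_with_prohibition_gate("vehicle_start", intent_is_other_occupant=True))  # False
print(respond_with_prohibition_gate("horn_start", intent_is_other_occupant=True))     # True
```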
As can be seen, in the present example, by setting child-prohibited functions to restrict the response to control operations performed according to the intention of other occupants, the safety of responding to control operations performed by the child can be further improved.
In one possible example, the first functional right includes a usage right of an entertainment function provided by the target vehicle to a child currently performing the control operation, and the domain controller system further includes a central control display screen, the entertainment function being provided by a software application function presented by the central control display screen.
Wherein the control operations include hardware control operations and software control operations, the software control operations being operations for the software application functions. A hardware control operation refers to a control operation on hardware that the child triggers by directly touching the hardware. For example, the hardware control operation may be a triggering operation of the horn, a triggering operation of vehicle starting, a control operation of switching the air conditioner on or off, and the like. A software control operation refers to an operation on software or hardware that the child triggers through the central control display screen. For example, operations on software triggered by the child through the central control display screen include an audio playing operation, a video playing operation, and the like, and operations on hardware triggered by the child through the central control display screen include an operation of controlling the opening and closing of the sunroof, an operation of adjusting the height of the chassis, and the like.
Further, since a child's age does not meet the driving requirements, in a specific implementation the target vehicle may by default provide the child with the usage permission of the entertainment functions, that is, permission for the child to trigger, through the central control display screen, part of the software of an entertainment nature.
In a specific implementation, the content of the first function authority can be adjusted and set when the identity information of the child is entered. Specifically, when entering the child's identity information, permission range options can be provided through the central control display screen, and the vehicle owner can set the range of the first function authority corresponding to the child as required. The permission range options include entertainment function permissions and non-entertainment (driving) function permissions: the entertainment functions are provided by entertainment software on the central control display screen, while the driving functions are provided by software other than the entertainment software on the central control display screen and by hardware of the target vehicle that can be triggered by direct contact. The entertainment function permissions include an audio playing permission, a video playing permission, and the like, and the driving function permissions include a vehicle starting permission, a window adjustment permission, a chassis adjustment permission, a horn triggering permission, and the like. Specifically, the owner can add part of the driving function permissions or delete at least part of the entertainment function permissions as required.
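As a minimal illustration of this configuration step, the sketch below starts from an assumed default entertainment set and applies the owner's additions and deletions; the permission names and the default set are placeholders, not values defined by the patent.

```python
# Hypothetical sketch of configuring a child's first function authority when the child's
# identity information is entered via the central control display screen.

DEFAULT_ENTERTAINMENT_RIGHTS = {"audio_playback", "video_playback"}  # assumed defaults

def configure_child_authority(add_driving_rights=(), remove_entertainment_rights=()):
    """Start from the default entertainment rights, then apply the owner's adjustments."""
    rights = set(DEFAULT_ENTERTAINMENT_RIGHTS)
    rights.update(add_driving_rights)              # e.g. allow window adjustment
    rights.difference_update(remove_entertainment_rights)
    return rights

print(sorted(configure_child_authority(add_driving_rights={"window_adjust"},
                                        remove_entertainment_rights={"video_playback"})))
# ['audio_playback', 'window_adjust']
```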
Therefore, in this example, the target vehicle can meet the entertainment requirement of the child by providing the first function permission including the entertainment function use permission for the child, and the riding experience of the child is improved.
In one possible example, in the case where the intention source is the child's own intention, if the control operation is an operation for a non-entertainment function, the method further includes: sending an identity authentication request to the central control display screen; receiving authentication information input by the central control display screen according to the authentication request; if the verification is passed, responding to the control operation; and if the verification fails, not responding to the control operation.
In a specific implementation, when the first function authority does not include the function corresponding to the control operation currently executed by the child, the cockpit domain controller may determine, according to the first function authority, not to respond to the control operation. At this time, the cockpit domain controller may send an identity verification request to the central control display screen and acquire the verification information entered by the child on the central control display screen. If the verification information passes verification, it indicates that the actual intention source of the control operation is the intention of another occupant, and the control operation can be responded to; if the verification fails, the actual intention source of the control operation is still the child's own intention, so the control operation is not responded to.
For example, after all the occupants have left the target vehicle, if the sunroof is found not to be closed, an occupant may instruct the child to return to the target vehicle and close the sunroof. When the child performs the sunroof-closing operation on the target vehicle, the cockpit domain controller determines from the acquired target data that the intention source of the operation is the child's own intention, and if the corresponding first function authority does not include the sunroof control permission, the cockpit domain controller does not respond. At this time, the cockpit domain controller may send an identity verification request; if the password entered by the child in response to the request passes verification, it indicates that the actual intention source of the sunroof control operation is the intention of another occupant, so the control operation can be responded to, and if the password does not pass verification, the actual intention source is still the child's own intention, so the control operation is not responded to.
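The verification fallback can be summarized in the short sketch below. The stored code and the function names are assumptions made for illustration; in practice the verification information would be managed by the vehicle, not hard-coded.

```python
# Hypothetical sketch of the identity-verification fallback: when the intention source is
# judged to be the child's own and the requested non-entertainment function is outside the
# first function authority, a code entered on the central control display screen can still
# unlock the response.

STORED_VERIFICATION_CODE = "1234"  # placeholder code assumed to be set by the vehicle owner

def respond_after_verification(requested_function: str,
                               first_function_authority: set,
                               entered_code: str) -> bool:
    if requested_function in first_function_authority:
        return True                      # already permitted, respond directly
    return entered_code == STORED_VERIFICATION_CODE  # verified: treat as other-occupant intent

print(respond_after_verification("sunroof_close", {"audio_playback"}, "1234"))  # True
print(respond_after_verification("sunroof_close", {"audio_playback"}, "0000"))  # False
```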
Therefore, in the example, when the cockpit area controller does not respond to the control operation made according to the intention of the child, the identity verification request is sent to the central control display screen, so that the accuracy of intention source determination can be improved, the use permission can be provided for the driver and passengers more accurately, and the use requirements of the driver and passengers are met.
The present application may perform the division of the functional units for the electronic device according to the method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
Fig. 3A is a block diagram illustrating functional units of a dynamic response device for a main driving position function of a vehicle according to an embodiment of the present disclosure. The dynamic response device 30 for the vehicle main driving position function can be applied to the electronic device in the dynamic response system for the vehicle main driving position function shown in fig. 1A, and the dynamic response device 30 for the vehicle main driving position function includes:
an obtaining unit 310, where the obtaining unit 310 is configured to obtain target data, where the target data includes at least one of audio data and image data, where the audio data is audio information in the target vehicle collected by the sound pickup device, and the image data is image information in the target vehicle collected by the in-vehicle camera; the judging unit is used for judging whether the child enters a main driving seat or not according to the target data;
an identification unit 320, configured to, if it is detected that the child performs a control operation in the master driving seat, identify, according to the target data, an intention source corresponding to the control operation, where the intention source is an intention of the child or an intention of another occupant, and the intention source is an object that characterizes a desire to drive the child to perform the control operation;
a matching unit 330, where the matching unit 330 is configured to match a first function right for the master driver seat when the intention source is an intention of a child, and determine whether to respond to the control operation according to the first function right, where the first function right is used to represent a usage right provided by the target vehicle to the child currently performing the control operation;
a response unit 340, wherein the response unit 340 is further configured to respond to the control operation according to the other occupant intention when the intention source is the other occupant intention.
In one possible example, when the target data includes audio data, in terms of identifying an intention source corresponding to the control operation according to the target data, the identifying unit 320 is specifically configured to identify the audio data, and obtain a semantic identification result; determining whether a directional sentence for representing an intention of other occupants to drive a child to perform a control operation is included in the semantic recognition result, the occupants in the target vehicle including the child entering a main driver seat and the other occupants; if yes, determining that the intention source is the intention of other drivers and passengers; if not, determining that the intention source is the intention of the child.
In one possible example, when the target data includes image data, the identifying unit 320 is specifically configured to identify an action of an occupant including a child entering a main driving seat and other occupants in the target vehicle, in terms of the identifying an intention source corresponding to the control operation from the target data; judging whether other drivers and passengers execute auxiliary actions, wherein the auxiliary actions are used for representing actions for assisting the children to enter the main driving position; if yes, determining that the intention source is the intention of other drivers and passengers; if not, determining that the intention source is the intention of the child.
In one possible example, where the source of intent is other occupant intent, the apparatus further includes an identification unit to determine a target occupant identity corresponding to the other occupant intent; matching a second function authority corresponding to the identity of the target driver and passenger; judging whether the other drivers and passengers have the authority to execute the control operation according to the second function authority; if yes, executing the control operation according to the other driver and passenger intentions; and if not, not responding to the control operation.
In one possible example, in a case where the intention source is the intention of another occupant, the apparatus further includes a determination unit configured to determine, before responding to the control operation according to the intention of the other occupant, whether the function corresponding to the control operation is a child-prohibited function; if yes, not responding to the control operation; and if not, executing the control operation according to the intention of the other occupant.
In one possible example, the first functional right includes a usage right of an entertainment function provided by the target vehicle to a child currently performing the control operation, and the domain controller system further includes a central control display screen, the entertainment function being provided by a software application function presented by the central control display screen.
In a possible example, in a case where the intention source is the child's own intention, if the control operation is an operation for a non-entertainment function, the apparatus further includes a sending unit, specifically configured to send an authentication request to the central display screen; receiving authentication information input by the central control display screen according to the authentication request; if the verification is passed, responding to the control operation; and if the verification fails, not responding to the control operation.
In the case of using an integrated unit, fig. 3B shows a block diagram of the functional units of the dynamic response device 40 for the vehicle main driving position function provided by the embodiment of the present application. In fig. 3B, the vehicle main driving position function dynamic response device 40 includes: a processing module 420 and a communication module 410. The processing module 420 is used for controlling and managing the actions of the vehicle main driving position function dynamic response device 40, such as the steps performed by the obtaining unit 310, the identifying unit 320, the matching unit 330, the responding unit 340, and/or other processes for performing the techniques described herein. The communication module 410 is used for supporting the interaction between the vehicle main driving position function dynamic response device 40 and other devices. As shown in fig. 3B, the dynamic response device 40 for the vehicle main driving position function may further include a storage module 430, and the storage module 430 is used for storing the program codes and data of the dynamic response device 40.
The processing module 420 may be a processor or a controller, and may be, for example, a Central Processing Unit (CPU), a general-purpose processor, a Digital Signal Processor (DSP), an ASIC, an FPGA or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with the disclosure of the embodiments of the application. The processor may also be a combination of devices having computing functions, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor. The communication module 410 may be a transceiver, an RF circuit, a communication interface, or the like. The storage module 430 may be a memory.
For the relevant details of each solution involved in the above method embodiment, reference may be made to the functional description of the corresponding functional module, which is not repeated here. The vehicle main driving position function dynamic response device can execute the steps executed by the cockpit domain controller in the dynamic response method for the vehicle main driving position function shown in fig. 2.
Embodiments of the present application also provide a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to execute some or all of the steps of any method described in the above method embodiments, the computer including an electronic device.
It should be noted that, for simplicity of description, the above method embodiments are described as a series of acts; however, those skilled in the art should understand that the present application is not limited by the described order of acts, because some steps may be performed in other orders or simultaneously. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments, and that the acts and modules involved are not necessarily required by the present application.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division into units is only a division of logical functions, and other divisions are possible in practice; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Those skilled in the art will appreciate that all or part of the steps of the methods of the above embodiments may be implemented by a program, which is stored in a computer-readable memory; the memory may include a flash memory disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or the like.
The foregoing embodiments are described in detail above, and specific examples are used herein to explain the principles and implementations of the present application; the above description of the embodiments is only intended to help understand the method of the present application and its core idea. Meanwhile, a person skilled in the art may, according to the idea of the present application, make changes to the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the present application.

Claims (8)

1. A dynamic response method for the functions of a main driving seat of a vehicle is characterized in that the method is applied to a cockpit domain controller in a domain control system of a target vehicle, the domain control system comprises the cockpit domain controller, sound pickup equipment and an in-vehicle camera, and the sound pickup equipment and the in-vehicle camera are in communication connection with the cockpit domain controller; the method comprises the following steps:
acquiring target data, wherein the target data comprises at least one of audio data and image data, the audio data is audio information in the target vehicle collected by the sound pickup equipment, and the image data is image information in the target vehicle collected by the in-vehicle camera;
under the condition that the child is detected to execute the control operation at the main driving position, identifying an intention source corresponding to the control operation according to the target data, wherein the intention source is the intention of the child or the intention of other drivers and passengers, and the intention source is used for representing an object driving the child to execute the control operation; wherein, if the target data includes audio data, the identifying an intention source corresponding to the control operation according to the target data includes: identifying the audio data to obtain a semantic identification result; determining whether a directional sentence is included in the semantic recognition result, wherein the directional sentence is used for representing the intention of other drivers and passengers to drive children to execute control operation, and the drivers and passengers in the target vehicle comprise the children entering a main driving position and the other drivers and passengers; if the directional sentence is included, determining that the intention source is the intention of other drivers and passengers; if the directional sentence is not included, determining that the intention source is the intention of the child; if the target data comprises image data, identifying an intention source corresponding to the control operation according to the target data comprises: identifying actions of occupants, including children and other occupants entering a primary driver's seat, in the target vehicle based on the image data; judging whether other drivers and passengers execute auxiliary actions, wherein the auxiliary actions are used for representing actions for assisting the children to enter the main driving position; if the auxiliary action is executed, determining that the intention source is the intention of other drivers and passengers; if the auxiliary action is not executed, determining that the intention source is the intention of the child;
if the intention source is the intention of the child, matching a first function authority for the main driving position, and judging whether to respond to the control operation according to the first function authority, wherein the first function authority is used for representing the use authority provided by the target vehicle to the child executing the control operation;
and if the intention source is the intention of the other drivers and passengers, responding to the control operation according to the intention of the other drivers and passengers.
2. The method of claim 1, wherein, in a case where the intention source is the intention of another occupant, before the responding to the control operation according to the intention of the other occupant, the method further comprises:
determining a target occupant identity corresponding to the intention of the other occupant;
matching a second function authority corresponding to the target occupant identity;
judging whether the other occupant has the authority to execute the control operation according to the second function authority;
if yes, executing the control operation according to the intention of the other occupant;
if not, not responding to the control operation.
3. The method of claim 1, wherein, in a case where the intention source is the intention of another occupant, before responding to the control operation according to the intention of the other occupant, the method further comprises:
determining whether the function corresponding to the control operation is a child-prohibited function;
if yes, not responding to the control operation;
and if not, executing the control operation according to the intention of the other occupant.
4. The method of claim 1, wherein the first function authority comprises a usage right of an entertainment function provided by the target vehicle to the child currently performing the control operation, the domain control system further comprises a central control display screen, and the entertainment function is provided by a software application presented on the central control display screen.
5. The method of claim 4, wherein, in a case where the intention source is the child's own intention and the control operation is an operation for a non-entertainment function, the method further comprises:
sending an identity authentication request to the central control display screen;
receiving authentication information input via the central control display screen in response to the identity authentication request;
if the verification is passed, responding to the control operation;
and if the verification fails, not responding to the control operation.
6. A vehicle main driving position function dynamic response device is characterized in that the device is applied to a cockpit domain controller in a domain control system of a target vehicle, the domain control system comprises the cockpit domain controller, sound pickup equipment and an in-vehicle camera, and the sound pickup equipment and the in-vehicle camera are in communication connection with the cockpit domain controller; the device comprises:
the acquisition unit is used for acquiring target data, wherein the target data comprises at least one of audio data and image data, the audio data is audio information in the target vehicle acquired by the sound pickup equipment, and the image data is image information in the target vehicle acquired by the in-vehicle camera; the judging unit is used for judging whether the child enters a main driving seat or not according to the target data;
the identification unit is used for identifying an intention source corresponding to the control operation according to the target data if the control operation executed by the child at the main driving position is detected, wherein the intention source is the intention of the child or the intention of other drivers and passengers, and the intention source is used for representing an object driving the child to execute the control operation; if the target data comprises audio data, in the aspect of identifying the intention source corresponding to the control operation according to the target data, the identification unit is specifically configured to identify the audio data to obtain a semantic identification result; determining whether a directional sentence for representing an intention of other occupants to drive a child to perform a control operation is included in the semantic recognition result, the occupants in the target vehicle including the child entering a main driver seat and the other occupants; if the directional sentence is included, determining that the intention source is the intention of other drivers and passengers; if the directional sentence is not included, determining that the intention source is the intention of the child; if the target data comprises image data, in the aspect of identifying the intention source corresponding to the control operation according to the target data, the identification unit is specifically used for identifying the actions of the driver and passengers according to the image data, and the driver and passengers in the target vehicle comprise children entering a main driving seat and other drivers and passengers; judging whether other drivers and passengers execute auxiliary actions, wherein the auxiliary actions are used for representing actions for assisting the children to enter the main driving position; if the auxiliary action is executed, determining that the intention source is the intention of other drivers and passengers; if the auxiliary action is not executed, determining that the intention source is the intention of the child;
the matching unit is used for matching a first function authority for the main driving seat when the intention source is the intention of the child, and judging whether to respond to the control operation according to the first function authority, wherein the first function authority is used for representing the use authority provided by the target vehicle to the child executing the control operation;
and the response unit is used for responding to the control operation according to the intention of the other drivers and passengers when the intention source is the intention of the other drivers and passengers.
7. An electronic device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of any of claims 1-5.
8. A computer-readable storage medium, characterized in that it stores a computer program for electronic data exchange, wherein the computer program causes a computer to perform the steps in the method according to any one of claims 1-5.
CN202211319818.0A 2022-10-26 2022-10-26 Vehicle main driving position function dynamic response method and related device Active CN115520201B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211319818.0A CN115520201B (en) 2022-10-26 2022-10-26 Vehicle main driving position function dynamic response method and related device
CN202310447643.XA CN116534034A (en) 2022-10-26 2022-10-26 Dynamic response method and device for vehicle main driver's seat function, storage medium and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211319818.0A CN115520201B (en) 2022-10-26 2022-10-26 Vehicle main driving position function dynamic response method and related device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202310447643.XA Division CN116534034A (en) 2022-10-26 2022-10-26 Dynamic response method and device for vehicle main driver's seat function, storage medium and program

Publications (2)

Publication Number Publication Date
CN115520201A CN115520201A (en) 2022-12-27
CN115520201B true CN115520201B (en) 2023-04-07

Family

ID=84703482

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202310447643.XA Pending CN116534034A (en) 2022-10-26 2022-10-26 Dynamic response method and device for vehicle main driver's seat function, storage medium and program
CN202211319818.0A Active CN115520201B (en) 2022-10-26 2022-10-26 Vehicle main driving position function dynamic response method and related device

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202310447643.XA Pending CN116534034A (en) 2022-10-26 2022-10-26 Dynamic response method and device for vehicle main driver's seat function, storage medium and program

Country Status (1)

Country Link
CN (2) CN116534034A (en)

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009068265A (en) * 2007-09-13 2009-04-02 Toyota Motor Corp Opening/closing control device, electronic control unit, body control computer, window control computer, and opening/closing control method
JP6358987B2 (en) * 2015-06-12 2018-07-18 三菱電機株式会社 In-vehicle information equipment
JP2019197964A (en) * 2018-05-08 2019-11-14 トヨタ自動車株式会社 Microphone control device
CN111301330A (en) * 2020-03-10 2020-06-19 马瑞利汽车电子(广州)有限公司 Vehicle rear door and window control method
CN111768776A (en) * 2020-06-28 2020-10-13 戴姆勒股份公司 In-vehicle voice control method
CN113963692A (en) * 2020-07-03 2022-01-21 华为技术有限公司 Voice instruction control method in vehicle cabin and related equipment
CN112037380B (en) * 2020-09-03 2022-06-24 上海商汤临港智能科技有限公司 Vehicle control method and device, electronic equipment, storage medium and vehicle

Also Published As

Publication number Publication date
CN116534034A (en) 2023-08-04
CN115520201A (en) 2022-12-27

Similar Documents

Publication Publication Date Title
CN110654389B (en) Vehicle control method and device and vehicle
CN112455466B (en) Automatic driving control method, automatic driving control equipment, storage medium and device
CN112041201B (en) Method, system, and medium for controlling access to vehicle features
WO2018138980A1 (en) Control system, control method, and program
CN109584871B (en) User identity recognition method and device of voice command in vehicle
JP2018151909A (en) Operation changeover determination device, operation changeover determination method, and program for determining operation changeover
CN112590735B (en) Emergency braking method and device based on driver habits and vehicle
CN112581750A (en) Vehicle running control method and device, readable storage medium and electronic equipment
CN115520201B (en) Vehicle main driving position function dynamic response method and related device
CN111278708B (en) Method and device for assisting driving
US11753047B2 (en) Impaired driver assistance
WO2018058267A1 (en) Car adjustment method and system
US11429425B2 (en) Electronic device and display and control method thereof to provide display based on operating system
CN115635978A (en) Vehicle human-computer interaction method and device and vehicle
CN113427973A (en) Vehicle-mounted air conditioner control method and device, automobile and storage medium
CN115268334A (en) Vehicle window control method, device, equipment and storage medium
JP7176383B2 (en) Information processing device and information processing program
CN111993997A (en) Pedestrian avoidance prompting method, device, equipment and storage medium based on voice
CN112455432A (en) Automatic parking safety control method, device, equipment and storage medium
CN113844456B (en) ADAS automatic opening method and device
WO2018058263A1 (en) Driving method and system
CN113858944B (en) Automobile false stepping prevention method and system and automobile
WO2024048185A1 (en) Occupant authentication device, occupant authentication method, and computer-readable medium
CN117302267A (en) Automobile driving control right switching method, computer device and storage medium
CN115416649A (en) Automatic emergency braking method and device for vehicle, vehicle and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant