CN117734603A - Vehicle control method and device, electronic equipment and storage medium - Google Patents

Vehicle control method and device, electronic equipment and storage medium

Info

Publication number
CN117734603A
CN117734603A (application CN202311755013.5A)
Authority
CN
China
Prior art keywords
target
execution
scene
vehicle
control instruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311755013.5A
Other languages
Chinese (zh)
Inventor
陈凯德
万旎春
严立康
徐坚江
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Avatr Technology Chongqing Co Ltd
Original Assignee
Avatr Technology Chongqing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Avatr Technology Chongqing Co Ltd filed Critical Avatr Technology Chongqing Co Ltd
Priority to CN202311755013.5A priority Critical patent/CN117734603A/en
Publication of CN117734603A publication Critical patent/CN117734603A/en
Pending legal-status Critical Current

Abstract

The embodiment of the invention relates to the technical field of intelligent driving, and discloses a vehicle control method, a device, electronic equipment and a storage medium. The method comprises the following steps: acquiring a target scene rule currently triggered by a target vehicle; querying a target operation event queue corresponding to the target scene rule, and determining an execution relationship between the target operation events in the queue; and generating corresponding control instructions based on the target operation events and sending the control instructions to target components of the target vehicle according to the execution relationship, wherein the target components execute the operations indicated by the target operation events according to the control instructions. The method completes the operation of multiple vehicle control functions in one batch and reduces repeated operation steps. The user does not need to interact with each function one by one; instead, the system automatically executes the corresponding operations once the target scene rule is triggered, which reduces the user's cognitive load and distraction, lowers the operation difficulty, and improves the convenience and efficiency of operation.

Description

Vehicle control method and device, electronic equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of intelligent driving, in particular to a vehicle control method, a vehicle control device, electronic equipment and a storage medium.
Background
With the development of the intelligent cabin, more and more vehicle control functions are integrated into the vehicle control application on the central control screen. When a user operates several vehicle control functions in succession in the intelligent cabin, each function must be controlled one by one through its interaction mode (such as clicking, voice, and the like). This mode of operation is repetitive and suffers from the following drawbacks:
High time cost: because the user must complete the operation of multiple vehicle control functions one by one in an interactive manner, the time cost of operation increases. The user spends more time completing the required vehicle control functions, which reduces operation efficiency and convenience.
Cumbersome operation: each of the multiple vehicle control functions requires its own click or voice instruction, so the operation is repetitive and tedious. The user has to perform similar operations over and over, which can feel inconvenient and tiring and reduces user experience and satisfaction.
Risk of misoperation: because the operations are repetitive and frequent, a user may make mistakes when operating several vehicle control functions in succession. For example, a false touch or false recognition may occur when the user interacts through clicking or voice. This can lead to incorrect operation and reduce the accuracy and safety of operation.
Disclosure of Invention
In view of the above problems, embodiments of the present invention provide a vehicle control method, apparatus, electronic device, and storage medium, which are used to solve the problems of high time cost, cumbersome operation, and risk of misoperation in the prior art when a user continuously operates a plurality of vehicle control functions in an intelligent cabin.
According to an aspect of an embodiment of the present invention, there is provided a vehicle control method including:
acquiring a target scene rule triggered by a current target vehicle;
inquiring a target operation event queue corresponding to the target scene rule, and determining an execution relationship between target operation events in the target operation event queue;
and generating a corresponding control instruction based on the target operation event, and sending the control instruction to a target component of the target vehicle according to the execution relation, wherein the target component is used for executing the operation indicated by the target operation event according to the control instruction.
In an embodiment of the present application, the obtaining a target scene rule currently triggered based on a target vehicle includes:
detecting a selected operation acting on a plurality of scene rule identifications displayed on a target interface, determining a selected target scene rule identification based on the selected operation, and triggering the target scene rule by utilizing the target scene rule identification;
Or detecting target environment data of the current environment of the target vehicle, and determining the target scene rule by using the target environment data.
In an embodiment of the present application, the target operation event includes: operation content, execution parameters and execution conditions;
the generating a corresponding control instruction based on the target operation event includes:
acquiring current target scene data of the target vehicle, and acquiring key scene data associated with each target operation event from the target scene data;
determining whether the key scene data satisfies the execution condition;
and generating the control instruction based on the operation content and the execution parameter when the key scene data satisfies the execution condition.
In an embodiment of the present application, the determining an execution relationship between the target operation events in the target operation event queue includes:
analyzing the execution parameters to determine an execution type corresponding to the target operation event, wherein the execution type comprises: delay execution, distributed execution and synchronous execution;
and determining the execution relation between the target operation events by utilizing the execution type.
In an embodiment of the present application, the sending the control instruction to the target component of the target vehicle according to the execution relationship includes:
determining a transmission sequence of control instructions based on the execution relation, and arranging the control instructions according to the transmission sequence to obtain a control instruction sequence;
acquiring a first control instruction with highest transmission sequence from the control instruction sequence, and transmitting the first control instruction to a first component of the target vehicle;
detecting an execution result of the first component for executing the control operation corresponding to the first control instruction;
if the execution result indicates that execution is not completed, sending a second control instruction to a second component of the target vehicle and detecting whether the first component has executed the control operation corresponding to the first control instruction, wherein the transmission sequence of the second control instruction is lower than that of the first control instruction;
and transmitting a third control instruction to a third component of the target vehicle under the condition that the first component has completed the control operation corresponding to the first control instruction, wherein the transmission sequence of the third control instruction is lower than that of the second control instruction, and the first component, the second component and the third component form the target component.
In an embodiment of the present application, before acquiring the target scene rule currently triggered based on the target vehicle, the method further includes:
acquiring a rule configuration request triggered by a target user;
responding to the rule configuration request, and acquiring a scene rule identifier and a scene security condition corresponding to the scene rule identifier;
acquiring a plurality of operation events created by the target user based on the scene rule identification;
checking whether the operation event meets the scene safety condition, and constructing an operation event queue based on the operation event under the condition that the operation event meets the scene safety condition;
generating a scene rule by using the scene rule identification and the operation event queue.
In an embodiment of the present application, the method further includes:
acquiring a history operation record and history environment information corresponding to the target user;
acquiring target operation data associated with each scene rule from the historical operation record, and acquiring target environment data matched with the operation data from the historical environment information;
and adjusting the operation event in the scene rule by utilizing the target operation data and the target environment data to obtain an adjusted scene rule.
According to another aspect of an embodiment of the present invention, there is provided a vehicle control apparatus including:
the acquisition module is used for acquiring a target scene rule triggered by a target vehicle currently;
the query module is used for querying a target operation event queue corresponding to the target scene rule and determining an execution relationship between target operation events in the target operation event queue;
and the processing module is used for generating corresponding control instructions based on the target operation events and sending the control instructions to target components of the target vehicle according to the execution relation, wherein the target components are used for executing the operations indicated by the target operation events according to the control instructions.
According to another aspect of an embodiment of the present invention, there is provided an electronic apparatus including: the memory and the processor are in communication connection, the memory stores computer instructions, and the processor executes the computer instructions to perform the method of the first aspect or any implementation manner corresponding to the first aspect.
According to another aspect of the embodiments of the present invention, there is provided a computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of the first aspect or any of its corresponding embodiments.
According to the method provided by the embodiment of the application, the target operation event queue corresponding to the target scene rule is queried, so the user does not need to manually operate the corresponding vehicle control functions. Then, by determining the execution relationship of the operation events in the queue, corresponding control instructions are generated and sent to the target components of the target vehicle for execution. In this way, the operation of multiple vehicle control functions is completed in one batch, and repeated operation steps are reduced. The user does not need to interact with each function one by one; the system automatically executes the corresponding operations once the target scene rule is triggered, which reduces the user's cognitive load and distraction, lowers the operation difficulty, and improves the convenience and efficiency of operation.
The foregoing description is only an overview of the technical solutions of the embodiments of the present invention, and may be implemented according to the content of the specification, so that the technical means of the embodiments of the present invention can be more clearly understood, and the following specific embodiments of the present invention are given for clarity and understanding.
Drawings
The drawings are only for purposes of illustrating embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
FIG. 1 is a schematic flow chart of a vehicle control method according to the present invention;
FIG. 2 is a schematic diagram showing interaction of control instructions provided by the present invention;
FIG. 3 is a schematic flow chart of a vehicle control method according to the present invention;
fig. 4 is a schematic view showing a structure of a vehicle control apparatus provided by the present invention;
fig. 5 shows a schematic structural diagram of an electronic device provided by the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be limited to the embodiments set forth herein.
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
According to an embodiment of the present invention, a vehicle control method, apparatus, electronic device, and storage medium are provided. It should be noted that the steps shown in the flowcharts of the drawings may be performed in a computer system, such as one executing a set of computer-executable instructions, and that, although a logical order is shown in each flowchart, in some cases the steps shown or described may be performed in an order different from that shown or described herein.
In this embodiment, a vehicle control method is provided, fig. 1 is a flowchart of a vehicle control method according to an embodiment of the present invention, and as shown in fig. 1, the flowchart includes the following steps:
step S11, a target scene rule triggered by a target vehicle is acquired.
In this embodiment, a scene rule may be understood as a set of rules defined according to environmental data and the vehicle state, used to guide operations in the corresponding environment. Different scene rules are configured with different operation event queues, and an operation event queue includes a plurality of operation events. An operation event may be understood as an action performed by a certain component of the vehicle; the component may be a seat, window, sensor, light, and so on, and the action performed by the component may be seat ventilation, seat heating, window closing, window opening, setting the lamp mode, and so on.
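As a rough illustration of the data model described above, the following sketch shows how a scene rule can carry an ordered queue of operation events. The class and field names (`SceneRule`, `OperationEvent`, and so on) are hypothetical, not taken from the patent text.

```python
from dataclasses import dataclass, field

@dataclass
class OperationEvent:
    component: str                          # e.g. "seat", "window", "light"
    action: str                             # e.g. "ventilate", "close"
    params: dict = field(default_factory=dict)

@dataclass
class SceneRule:
    rule_id: str
    description: str
    event_queue: list                       # ordered OperationEvent objects

# Example: a rainy-day rule whose queue closes the windows and enables wipers
rainy_rule = SceneRule(
    rule_id="rainy_day",
    description="rainy driving environment",
    event_queue=[
        OperationEvent("window", "close"),
        OperationEvent("wiper", "enable", {"speed": "auto"}),
    ],
)
```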
In an embodiment of the present application, obtaining a target scene rule currently triggered based on a target vehicle includes: detecting a selected operation acting on a plurality of scene rule identifications displayed on a target interface, determining a selected target scene rule identification based on the selected operation, and triggering a target scene rule by using the target scene rule identification.
Specifically, first, it is necessary to detect whether there is a selected operation acting on a plurality of scene rule identifications displayed on the target interface, and the scene rule identifications may be rule names of scene rules. This may be detected by means of user interaction, voice control or touch screen operation, etc. The selected target scene rule identification may be determined based on the selecting operation. The selected target scene rule identification may be determined by retrieving a user selected interface element or recording a user selected identifier.
In an embodiment of the present application, obtaining a target scene rule currently triggered based on a target vehicle includes: and detecting target environment data of the current environment of the target vehicle, and determining a target scene rule by utilizing the target environment data.
Specifically, first, the environment in which the target vehicle is currently located is detected, and the related target environment data is acquired. The target environment data may include information about road conditions, traffic conditions, and weather conditions around the vehicle, and may be detected and acquired by in-vehicle sensors, external sensors, in-vehicle cameras, or other environment-awareness devices. Next, the rule descriptions of all scene rules are obtained and matched against the target environment data, and the scene rule whose rule description has the highest matching degree is taken as the target scene rule.
As one example, consider the following two scene rules and their rule descriptions. Scene rule 1: rainy-day driving. Rule description: in a rainy driving environment, reduce the speed and increase the brake sensitivity. Scene rule 2: night driving. Rule description: in a night driving environment, turn on the vehicle headlights and increase the safety distance. The currently detected target environment data is: currently at night, with low visibility. Matching the target environment data against the rule descriptions, the data matches the "night driving environment" in the description of scene rule 2, so scene rule 2 is taken as the target scene rule.
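The rainy-day / night-driving example above can be sketched as a simple keyword match. The overlap-count scoring below is an assumption made for illustration, since the patent does not specify how the "matching degree" is computed.

```python
def match_scene_rule(env_keywords, rules):
    """Return the rule_id whose description shares the most keywords with
    the detected environment data, or None if nothing matches at all."""
    def score(rule):
        return sum(1 for kw in env_keywords if kw in rule["description"])
    best = max(rules, key=score)
    return best["rule_id"] if score(best) > 0 else None

rules = [
    {"rule_id": "rainy_day",
     "description": "in a rainy driving environment, reduce speed"},
    {"rule_id": "night_driving",
     "description": "in a night driving environment, turn on headlights"},
]

# Detected environment: currently at night with low visibility
detected = ["night", "low visibility"]
target = match_scene_rule(detected, rules)   # matches the night-driving rule
```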
Step S12, inquiring a target operation event queue corresponding to the target scene rule, and determining an execution relationship between target operation events in the target operation event queue.
In an embodiment of the present application, determining an execution relationship between target operation events in a target operation event queue includes: analyzing the execution parameters to determine the execution type corresponding to the target operation event, wherein the execution type comprises: delay execution, distributed execution and synchronous execution; an execution relationship between the target operation events is determined using the execution type.
It should be noted that delayed execution indicates that the operation event needs to be performed after a certain delay; the specific delay time may be determined from the delay parameter among the execution parameters. Distributed execution indicates that the target operation events may be performed simultaneously or concurrently; the specific execution mode may be determined according to the distribution parameters among the execution parameters, for example, executing multiple events simultaneously or concurrently according to a certain rule. Synchronous execution indicates that the target operation events need to be performed in a certain order and in a synchronized relationship; the specific execution order and synchronization conditions may be determined based on the synchronization parameters among the execution parameters.
In the embodiment of the application, according to the execution type of the target operation event, the execution relationship between the target operation event and the target operation event can be determined. An execution relationship refers to a sequential, concurrent, or synchronous relationship between target operational events.
Delayed-execution events can be executed in order of their delays, with the next operation event executed after the previous one completes. Distributed-execution operation events may be performed simultaneously or concurrently; there may be no explicit order between them, and they may be optimized and adjusted as needed. Synchronously executed operation events must follow a certain order and synchronization condition; for example, operation event B can be executed only after operation event A has completed, or operation events A and B must both complete before operation event C is executed.
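A minimal sketch of classifying target operation events by execution type, as described above. The parameter keys (`delay_ms`, `concurrent`) are assumed names for the delay and distribution parameters; the patent does not name concrete fields.

```python
def execution_type(params):
    """Derive the execution type of one operation event from its parameters."""
    if params.get("delay_ms", 0) > 0:
        return "delayed"
    if params.get("concurrent", False):
        return "distributed"
    return "synchronous"

def build_execution_plan(events):
    """Group (name, params) events by execution type; delayed events are
    ordered by increasing delay, synchronous events keep queue order."""
    plan = {"synchronous": [], "distributed": [], "delayed": []}
    for name, params in events:
        plan[execution_type(params)].append((name, params))
    plan["delayed"].sort(key=lambda item: item[1]["delay_ms"])
    return {kind: [name for name, _ in items] for kind, items in plan.items()}

events = [
    ("close_window", {}),
    ("seat_heating", {"delay_ms": 500}),
    ("navigation", {"concurrent": True}),
]
plan = build_execution_plan(events)
```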
Step S13, corresponding control instructions are generated based on the target operation events, and the control instructions are sent to target components of the target vehicle according to the execution relation, wherein the target components are used for executing operations indicated by the target operation events according to the control instructions.
In an embodiment of the present application, the target operation event includes: operation content, execution parameters and execution conditions;
Generating corresponding control instructions based on the target operation event, including the following steps A1-A3:
and A1, acquiring current target scene data of a target vehicle, and acquiring key scene data associated with each target operation event from the target scene data.
In an embodiment of the present application, the current target scene data of the target vehicle may include: vehicle travel data, driving environment data, and the like. The vehicle travel data includes: vehicle speed, gear, position, navigation, etc. The driving environment data includes: weather, road sections, road conditions, temperature and humidity, etc.
In the embodiment of the present application, the key scene data associated with different target operation events differ. For example: for an operation event related to the seat, the corresponding key scene data may be the outside temperature; for an operation event related to the window, the key scene data may be the outside temperature, wind speed, in-cabin temperature, in-cabin gas concentration, and the like; for an operation event related to the wiper sensor, the corresponding key scene data may be the rainfall amount, and so on.
And step A2, determining whether the key scene data meets the execution condition.
In the embodiment of the application, the key scene data is matched against the execution condition to determine whether the condition is satisfied. For example: the operation content is to turn on seat ventilation, the execution parameters are a cycle count of 5 and an interval of 1000 milliseconds, and the execution condition is that the outside temperature is greater than 35 °C; in this case, the detected outside temperature is matched against 35 °C. Or: the operation content is to turn on the seat massage, the execution parameters are a cycle count of 5 and an interval of 0 milliseconds, and the execution condition is that the driving time is longer than 1 h; in this case, the vehicle's driving duration is matched against 1 h.
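The execution-condition check in the seat-ventilation and seat-massage examples can be sketched as a simple comparison. The field names and the `(field, op, threshold)` condition shape are assumptions for illustration, not the patent's own representation.

```python
import operator

OPS = {">": operator.gt, ">=": operator.ge, "<": operator.lt, "==": operator.eq}

def condition_met(scene_data, condition):
    """condition is a (field, op, threshold) triple,
    e.g. ("outside_temp_c", ">", 35). Missing fields never match."""
    field_name, op, threshold = condition
    value = scene_data.get(field_name)
    return value is not None and OPS[op](value, threshold)

scene = {"outside_temp_c": 37.0, "drive_time_h": 0.5}
ventilation_fires = condition_met(scene, ("outside_temp_c", ">", 35))  # 37 > 35
massage_fires = condition_met(scene, ("drive_time_h", ">", 1))         # 0.5 < 1
```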
And step A3, generating the control instruction based on the operation content and the execution parameters when the key scene data satisfies the execution condition.
In the embodiment of the present application, if the key scene data satisfies the execution condition, the control instruction is generated based on the operation content and the execution parameters. For example, when the operation content is to turn on the seat massage, the execution parameters are a cycle count of 5 and an interval of 0 milliseconds, and the execution condition (driving time longer than 1 h) is satisfied, a seat-massage control instruction carrying those parameters is generated.
In the embodiment of the application, the control command is sent to the target component of the target vehicle according to the execution relationship, and the method comprises the following steps of B1-B5:
and B1, determining the sending sequence of the control instructions based on the execution relation, and arranging the control instructions according to the sending sequence to obtain a control instruction sequence.
In the embodiment of the present application, the sending order may be priority sending, delayed sending, and so on. When arranging by sending order, the control instruction corresponding to the first-priority operation event in the queue that is not set for delayed sending is taken as the first control instruction and placed at the head of the sequence; the control instruction corresponding to the second-priority operation event that is not set for delayed sending is taken as the second control instruction and placed after the first; and finally, the control instruction corresponding to the operation event that is set for delayed sending is taken as the third control instruction and placed after the second, thereby obtaining the control instruction sequence.
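The arrangement just described, where non-delayed instructions are ordered by priority and the delayed instruction goes at the tail, can be sketched as follows. The `priority` and `delayed` fields are assumed names for the execution parameters.

```python
def build_instruction_sequence(instructions):
    """instructions: list of dicts with 'name', 'priority' (1 = highest),
    and an optional 'delayed' flag. Delayed instructions are appended
    after all priority-ordered immediate instructions."""
    immediate = [i for i in instructions if not i.get("delayed", False)]
    delayed = [i for i in instructions if i.get("delayed", False)]
    ordered = sorted(immediate, key=lambda i: i["priority"]) + delayed
    return [i["name"] for i in ordered]

seq = build_instruction_sequence([
    {"name": "B", "priority": 2},
    {"name": "A", "priority": 1},
    {"name": "C", "priority": 3, "delayed": True},
])
# A (first priority) heads the sequence; the delayed instruction C is last
```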
And step B2, acquiring a first control instruction with the highest transmission sequence from the control instruction sequence, and transmitting the first control instruction to a first component of the target vehicle.
In the embodiment of the application, the transmission order of each instruction is determined by traversing the control instruction sequence, and the first control instruction with the highest transmission order is selected. The first component corresponding to the first control instruction (for example, the central control screen, the sound system, or the navigation system) is determined, and the first control instruction is sent to it, ensuring that the instruction is properly conveyed and the corresponding operation is performed by the first component.
And step B3, detecting an execution result of the control operation corresponding to the first control instruction executed by the first component.
In the embodiment of the application, the target component is detected, and the execution condition of the first control instruction is determined. The detection can be performed by means of feedback information, state change, etc. of the component.
And step B4, when the execution result is that the execution is not completed, transmitting a second control instruction to a second component of the target vehicle, and detecting whether the first component executes the control operation corresponding to the first control instruction, wherein the transmission sequence of the second control instruction is lower than that of the first control instruction.
In the embodiment of the application, whether the first component completes the control operation corresponding to the first control instruction is judged through modes of feedback, sensor detection or data recording and the like. If the first component does not complete execution, the second control instruction may be sent to the second component of the target vehicle, and since the sending order of the second control instruction is lower than that of the first control instruction, the sending (i.e., distributed execution) may be performed without waiting for the first component to complete the corresponding operation. After the second control instruction is sent, the execution state of the first component for executing the first control instruction is detected again. Whether the first component is still executing the control operation of the first control instruction or whether the execution is completed is judged by means of observation, sensor detection or data recording and the like.
And step B5, when the first component has completed the control operation corresponding to the first control instruction, transmitting a third control instruction to a third component of the target vehicle, wherein the transmission order of the third control instruction is lower than that of the second control instruction, and the first component, the second component and the third component form the target component.
In this embodiment of the present application, the third control instruction is a control instruction that is executed after the first control instruction in a delayed manner.
As one example, as shown in fig. 2, there are currently the following control instructions to be executed. Control instruction A: close the window. Control instruction B: open the air conditioner. Control instruction C: start the seat heating. The sending order is determined according to the execution relationship; assuming the execution relationship is A > B > C, the control instruction sequence is A > B > C.
The first control instruction A, with the highest transmission order, is acquired and sent to the first component of the target vehicle at time T0; that is, an instruction to close the window is sent. Whether the first component (window) has successfully performed the window-closing control operation is then detected. If the first component has not completed closing the window, the second control instruction B is sent to the second component (air conditioner) of the target vehicle at time T1, while detection of whether the first component (window) has completed closing the window continues.
Once the first component (window) has completed the window-closing control operation, the third control instruction C is sent to the third component (seat heating) of the target vehicle at time T2. The transmission order of the second control instruction B is lower than that of the first control instruction A. The first component (window), the second component (air conditioner), and the third component (seat heating) together constitute the target component, and T0 < T1 < T2.
According to the above example, the transmission and execution sequence of the control instructions is: close the window > open the air conditioner > start the seat heating. The sending of each subsequent control instruction depends on the execution result of the previous one. Note that the actual control instructions and execution sequence may vary with the design and functions of the vehicle; this is just one example to illustrate the transmission and execution of control instructions.
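The T0/T1/T2 example above can be simulated with a toy timeline. The one-step time offsets and the `first_duration` parameter are assumptions made for illustration; a real system would poll actual component feedback rather than a fixed duration.

```python
def dispatch_timeline(first, second, third, first_duration):
    """Return (time, action) events. `second` is sent one step after
    `first` without waiting for it (distributed execution), while
    `third` is sent only after `first` reports completion, which takes
    `first_duration` time steps (delayed/synchronous execution)."""
    events = [(0, f"send {first}")]                    # T0
    events.append((1, f"send {second}"))               # T1: no need to wait
    completed_at = max(first_duration, 1)
    events.append((completed_at + 1, f"send {third}")) # T2: after first done
    return events

timeline = dispatch_timeline("close window", "open air conditioner",
                             "start seat heating", first_duration=2)
# Window-closing takes 2 steps, so seat heating is triggered at t = 3
```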
According to the method provided by the embodiment of the application, the target operation event queue corresponding to the target scene rule is queried, so the user does not need to manually operate each vehicle control function. Then, by determining the execution relationship of the operation events in the target operation event queue, corresponding control instructions are generated and sent to the target components of the target vehicle for execution. In this way, multiple vehicle control functions are completed in one batch, reducing repeated operation steps. The user does not need to interact with each function one by one; instead, the system automatically performs the corresponding operations when the target scene rule is triggered, which reduces the user's cognitive load and distraction, lowers the difficulty of operation, and improves convenience and efficiency.
Fig. 3 is a flowchart of a vehicle control method according to an embodiment of the present invention, as shown in fig. 3, the flowchart including the steps of:
Step S21, a rule configuration request triggered by a target user is acquired.
In this embodiment of the present application, in this step the system receives a rule configuration request sent by the target user; the request contains the user's requirements or rule configuration for a particular scenario.
Step S22, responding to the rule configuration request, and acquiring the scene rule identification and the scene security condition corresponding to the scene rule identification.
In the embodiment of the application, the system responds to the rule configuration request and, according to the user's requirements, acquires the scene rule identifier and the scene security condition corresponding to that identifier. The scene rule identifier uniquely identifies a specific scene rule, and the scene security condition describes the conditions that must be met in the scene to ensure the safety and correctness of the operations.
Step S23, a plurality of operation events created by the target user based on the scene rule identification are acquired.
In the embodiment of the application, based on the scene rule identifier, the system obtains a plurality of operation events created by the target user. These operation events are user-defined and describe the specific operations to be performed in a particular scenario.
Step S24, checking whether the operation event meets the scene security condition, and constructing an operation event queue based on the operation event under the condition that the operation event meets the scene security condition.
In the embodiment of the application, the system checks each operation event to determine whether it satisfies the scene security condition; only operation events that satisfy the condition are included in the operation event queue. The operation event queue is a sequence of operation events arranged in a specific order, ensuring that operations in a particular scenario are performed orderly and safely.
Step S25, generating a scene rule by using the scene rule identification and the operation event queue.
In the embodiment of the application, the scene rule is generated by using the scene rule identification and the operation event queue. The context rules describe the operational events that need to be performed in a particular context and their order and security requirements.
The method provided by the embodiment of the invention acquires the rule configuration request triggered by the target user and responds in a timely manner, providing real-time service and immediate feedback to meet the user's need for scene rule configuration. By responding to the rule configuration request and acquiring the scene rule identifier and the corresponding scene security conditions, the constructed scene rule is guaranteed to comply with those conditions, ensuring safety and reliability for the user. Acquiring the plurality of operation events that the target user created based on the scene rule identifier satisfies the user's personalized requirements: users can create operation events that suit their own situations and preferences. Checking whether each operation event satisfies the scene security condition means that only qualifying events are selected when the operation event queue is constructed, improving the efficiency and accuracy of the system. Finally, a scene rule is generated from the scene rule identifier and the operation event queue. In this way, more flexible and adaptive rules can be generated according to the user's requirements and the current scene, providing more intelligent and personalized services.
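Steps S21–S25 can be sketched as follows. This is a minimal illustration, not the patent's implementation; the `OperationEvent` fields and the speed-based security condition are assumptions chosen for the example.

```python
from dataclasses import dataclass, field

@dataclass
class OperationEvent:
    name: str
    params: dict = field(default_factory=dict)

def configure_rule(rule_id, security_check, events):
    """Step S24: keep only events that satisfy the scene security condition;
    step S25: bundle the identifier and the queue into a scene rule."""
    queue = [ev for ev in events if security_check(ev)]
    return {"rule_id": rule_id, "event_queue": queue}

# Assumed security condition: windows may not be opened above 80 km/h.
def no_window_open_at_speed(event):
    return not (event.name == "open_window" and
                event.params.get("speed_kmh", 0) > 80)

rule = configure_rule(
    "comfort_mode", no_window_open_at_speed,
    [OperationEvent("close_window"),
     OperationEvent("open_window", {"speed_kmh": 100}),
     OperationEvent("seat_ventilation_on")])
print([ev.name for ev in rule["event_queue"]])
```

Here the unsafe `open_window` event is filtered out, so only the two events meeting the security condition enter the operation event queue.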
As one example, the created context rule identifies and creates a plurality of operational events as follows:
Name of the scene rule: comfort mode.
Description of the scene rule: when comfort mode is enabled, the system automatically adjusts the cabin environment according to conditions inside and outside the vehicle.
Content of the scene rules:
Operation 1: close the driver's window, the front passenger window, and the left and right rear windows.
Cycle count: 0; interval: 0 ms; delayed execution: 0 ms; distributed execution: yes.
Condition: check the feedback value of the rainfall sensor; if the value indicates light, medium, or heavy rain, execute operation 1.
Operation 2: turn on seat ventilation.
Cycle count: 5; interval: 1000 ms; delayed execution: 0 ms; distributed execution: yes. Condition: the temperature outside the vehicle is greater than 35 °C.
Operation 3: turn off seat ventilation.
Cycle count: 5; interval: 1000 ms; delayed execution: 300000 ms (operation 3 is executed 300000 ms after operation 2); distributed execution: yes.
Condition: the temperature outside the vehicle is less than 35 °C.
Operation 4: turn on the seat massage.
Cycle count: 5; interval: 0 ms; delayed execution: 200 ms (execution begins 200 ms after operation 2); distributed execution: yes.
Condition: execute operation 4 if the vehicle has been running for more than 1 h.
It should be noted that the description of a scene rule may be displayed on the in-vehicle screen when the user selects that rule.
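The "comfort mode" rule above can be represented as a data structure, with each operation's cycle count, interval, delay, distributed-execution flag, and condition attached. The field names and the condition encoding below are assumptions for illustration, not the patent's actual schema.

```python
# Each operation carries its execution parameters and a condition predicate
# over the current environment data (field names are assumed).
comfort_mode = {
    "name": "comfort mode",
    "operations": [
        {"action": "close_windows", "cycles": 0, "interval_ms": 0,
         "delay_ms": 0, "distributed": True,
         "condition": lambda env: env["rain"] in ("light", "medium", "heavy")},
        {"action": "seat_ventilation_on", "cycles": 5, "interval_ms": 1000,
         "delay_ms": 0, "distributed": True,
         "condition": lambda env: env["outside_temp_c"] > 35},
        {"action": "seat_ventilation_off", "cycles": 5, "interval_ms": 1000,
         "delay_ms": 300_000, "after": "seat_ventilation_on",
         "distributed": True,
         "condition": lambda env: env["outside_temp_c"] < 35},
        {"action": "seat_massage_on", "cycles": 5, "interval_ms": 0,
         "delay_ms": 200, "after": "seat_ventilation_on",
         "distributed": True,
         "condition": lambda env: env["running_time_h"] > 1},
    ],
}

# Evaluate which operations fire under a sample environment.
env = {"rain": "medium", "outside_temp_c": 36, "running_time_h": 2}
triggered = [op["action"] for op in comfort_mode["operations"]
             if op["condition"](env)]
print(triggered)
```

Under this sample environment (medium rain, 36 °C, 2 h of driving), the window-closing, seat-ventilation-on, and seat-massage operations trigger, while seat-ventilation-off does not.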
In an embodiment of the present application, the method further includes: acquiring a history operation record and history environment information corresponding to a target user; acquiring target operation data associated with each scene rule from the historical operation record, and acquiring target environment data matched with the operation data from the historical environment information; and adjusting the operation event in the scene rule by utilizing the target operation data and the target environment data to obtain an adjusted scene rule.
Specifically, the target user's past vehicle operation records are obtained through data collection. These may be records of the user's interactions with the in-vehicle system, driving records, or other relevant operations, and may include the user's specific operation steps, operation times, operated objects, and so on. In addition to the operation records, historical environment information related to the target user's operations may also be acquired. Such information may include the environmental conditions around the vehicle, the vehicle status, sensor data, and so on: for example, the time of the operation, weather conditions, traffic conditions, and road conditions.
Secondly, target operation data related to the operation events defined in the vehicle scene rules is screened from the historical operation records. For example, if an emergency braking event exists in a scene rule, the relevant emergency braking operation data can be obtained from the historical operation records.
Then, according to the acquired target operation data, target environment data matching the operation data is screened from the historical environment information. For example, if an operation adjusts the vehicle's light brightness according to the weather conditions, the brightness values used under similar past weather conditions can be screened out.
Finally, the target operation data and the target environment data are comprehensively analyzed, and the operation events in the vehicle scene rules are adjusted. According to the correlation between the historical operation data and the historical environment data, the operation events in the scene rules can be optimized, updated, or adjusted to better suit the target user's driving needs and environment.
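The light-brightness example above can be sketched as follows: pick the brightness the user most often chose under matching past weather conditions. The function name, history format, and most-frequent-value heuristic are assumptions for illustration only.

```python
from collections import Counter

def adjust_brightness(history, weather):
    """history: list of (weather, brightness) pairs formed by matching the
    user's past operation records with the historical environment data.
    Returns the brightness most often used under the given weather, or None
    if there is no matching history (keep the rule's default in that case)."""
    matches = [b for w, b in history if w == weather]
    if not matches:
        return None
    return Counter(matches).most_common(1)[0][0]

history = [("rain", 80), ("rain", 80), ("rain", 60),
           ("clear", 40), ("fog", 100)]
print(adjust_brightness(history, "rain"))
```

In this sample history the user chose brightness 80 most often in rain, so the rule's light-adjustment operation would be updated accordingly.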
This embodiment also provides a vehicle control device, which is used to implement the above embodiments and preferred implementations; what has already been described is not repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the devices described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
The present embodiment provides a vehicle control device, as shown in fig. 4, including:
an obtaining module 41, configured to obtain a target scene rule currently triggered based on a target vehicle;
the query module 42 is configured to query a target operation event queue corresponding to the target scene rule, and determine an execution relationship between target operation events in the target operation event queue;
the processing module 43 is configured to generate a corresponding control instruction based on the target operation event, and send the control instruction to a target component of the target vehicle according to the execution relationship, where the target component is configured to execute an operation indicated by the target operation event according to the control instruction.
In this embodiment of the present application, the obtaining module 41 is configured to detect a selection operation acting on a plurality of scene rule identifiers displayed on the target interface, determine a selected target scene rule identifier based on the selection operation, and trigger a target scene rule by using the target scene rule identifier; or, the obtaining module 41 is configured to detect target environment data of an environment where the target vehicle is currently located, and determine a target scene rule by using the target environment data.
In an embodiment of the present application, the target operation event includes: operation content, execution parameters and execution conditions;
The query module 42 is configured to parse the execution parameters to determine an execution type corresponding to the target operation event, where the execution type includes: delay execution, distributed execution and synchronous execution; an execution relationship between the target operation events is determined using the execution type.
In this embodiment of the present application, the processing module 43 is configured to obtain current target scene data of a target vehicle, and obtain key scene data associated with each target operation event from the target scene data; determining whether the key scene data meets an execution condition; in the case where the target scene data satisfies the execution condition, a control instruction is generated based on the operation content and the execution parameter.
In the embodiment of the present application, the processing module 43 is configured to determine a transmission order of the control instructions based on the execution relationship, and arrange the control instructions according to the transmission order to obtain a control instruction sequence; acquire the first control instruction with the highest transmission order from the control instruction sequence, and send it to a first component of the target vehicle; detect the execution result of the first component performing the control operation corresponding to the first control instruction; if the execution result is that execution is not completed, send a second control instruction to a second component of the target vehicle and detect whether the first component has completed the control operation corresponding to the first control instruction, where the transmission order of the second control instruction is lower than that of the first control instruction; and, when the first component has completed the control operation corresponding to the first control instruction, send a third control instruction to a third component of the target vehicle, where the transmission order of the third control instruction is lower than that of the second control instruction, and the first component, the second component, and the third component constitute the target component.
In an embodiment of the present application, the apparatus further includes: the configuration module is used for acquiring a rule configuration request triggered by a target user; responding to the rule configuration request, and acquiring a scene rule identifier and a scene security condition corresponding to the scene rule identifier; acquiring a plurality of operation events created by a target user based on a scene rule identification; checking whether the operation event meets the scene security condition, and constructing an operation event queue based on the operation event under the condition that the operation event meets the scene security condition; generating the scene rule by using the scene rule identification and the operation event queue.
In an embodiment of the present application, the apparatus further includes: the updating module is used for acquiring a history operation record and history environment information corresponding to the target user; acquiring target operation data associated with each scene rule from the historical operation record, and acquiring target environment data matched with the operation data from the historical environment information; and adjusting the operation event in the scene rule by utilizing the target operation data and the target environment data to obtain an adjusted scene rule.
Fig. 5 shows a schematic structural diagram of an embodiment of the electronic device according to the present invention, and the specific embodiment of the present invention is not limited to the specific implementation of the electronic device.
As shown in fig. 5, the apparatus may include: a processor 402, a communication interface (Communications Interface) 404, a memory 406, and a communication bus 408. Wherein: processor 402, communication interface 404, and memory 406 communicate with each other via communication bus 408. A communication interface 404 for communicating with network elements of other devices, such as clients or other servers. The processor 402 is configured to execute the program 410, and may specifically perform the relevant steps described above for the vehicle control method embodiment.
In particular, program 410 may include program code including computer-executable instructions.
The processor 402 may be a central processing unit CPU, or a specific integrated circuit ASIC (Application Specific Integrated Circuit), or one or more integrated circuits configured to implement embodiments of the present invention. The one or more processors included in the electronic device may be the same type of processor, such as one or more CPUs; but may also be different types of processors such as one or more CPUs and one or more ASICs.
Memory 406 is configured to store the program 410. The memory 406 may comprise high-speed RAM, and may also include non-volatile memory, such as at least one disk memory.
Program 410 may be specifically invoked by processor 402 to cause an electronic device to:
acquiring a target scene rule triggered by a current target vehicle; inquiring a target operation event queue corresponding to the target scene rule, and determining an execution relationship between target operation events in the target operation event queue; and generating a corresponding control instruction based on the target operation event, and sending the control instruction to a target component of the target vehicle according to the execution relation, wherein the target component is used for executing the operation indicated by the target operation event according to the control instruction.
In an embodiment of the present application, obtaining a target scene rule currently triggered based on a target vehicle includes: detecting a selected operation acting on a plurality of scene rule identifications displayed on a target interface, determining a selected target scene rule identification based on the selected operation, and triggering a target scene rule by utilizing the target scene rule identification; or detecting target environment data of the current environment of the target vehicle, and determining a target scene rule by using the target environment data.
In an embodiment of the present application, the target operation event includes: operation content, execution parameters and execution conditions; generating a corresponding control instruction based on the target operation event, including: acquiring current target scene data of a target vehicle, and acquiring key scene data associated with each target operation event from the target scene data; determining whether the key scene data meets an execution condition; in the case where the target scene data satisfies the execution condition, a control instruction is generated based on the operation content and the execution parameter.
In an embodiment of the present application, determining an execution relationship between target operation events in a target operation event queue includes: analyzing the execution parameters to determine the execution type corresponding to the target operation event, wherein the execution type comprises: delay execution, distributed execution and synchronous execution; an execution relationship between the target operation events is determined using the execution type.
In an embodiment of the present application, sending a control instruction to a target component of a target vehicle according to an execution relationship includes:
determining a transmission order of the control instructions based on the execution relationship, and arranging the control instructions according to the transmission order to obtain a control instruction sequence; acquiring the first control instruction with the highest transmission order from the control instruction sequence, and sending the first control instruction to a first component of the target vehicle; detecting the execution result of the first component performing the control operation corresponding to the first control instruction; if the execution result is that execution is not completed, sending a second control instruction to a second component of the target vehicle, and detecting whether the first component has completed the control operation corresponding to the first control instruction, wherein the transmission order of the second control instruction is lower than that of the first control instruction; and, when the first component has completed the control operation corresponding to the first control instruction, sending a third control instruction to a third component of the target vehicle, wherein the transmission order of the third control instruction is lower than that of the second control instruction, and the first component, the second component, and the third component constitute the target component.
In an embodiment of the present application, before acquiring the target scene rule currently triggered based on the target vehicle, the method further includes:
acquiring a rule configuration request triggered by a target user; responding to the rule configuration request, and acquiring a scene rule identifier and a scene security condition corresponding to the scene rule identifier; acquiring a plurality of operation events created by a target user based on a scene rule identification; checking whether the operation event meets the scene security condition, and constructing an operation event queue based on the operation event under the condition that the operation event meets the scene security condition; generating the scene rule by using the scene rule identification and the operation event queue.
In an embodiment of the present application, the method further includes: acquiring a history operation record and history environment information corresponding to a target user; acquiring target operation data associated with each scene rule from the historical operation record, and acquiring target environment data matched with the operation data from the historical environment information; and adjusting the operation event in the scene rule by utilizing the target operation data and the target environment data to obtain an adjusted scene rule.
An embodiment of the present invention provides a computer readable storage medium storing at least one executable instruction that, when executed on a vehicle control apparatus, causes the vehicle control apparatus to execute the vehicle control method in any of the above-described method embodiments.
The executable instructions may be specifically for causing the vehicle control apparatus to:
acquiring a target scene rule triggered by a current target vehicle; inquiring a target operation event queue corresponding to the target scene rule, and determining an execution relationship between target operation events in the target operation event queue; and generating a corresponding control instruction based on the target operation event, and sending the control instruction to a target component of the target vehicle according to the execution relation, wherein the target component is used for executing the operation indicated by the target operation event according to the control instruction.
In an embodiment of the present application, obtaining a target scene rule currently triggered based on a target vehicle includes: detecting a selected operation acting on a plurality of scene rule identifications displayed on a target interface, determining a selected target scene rule identification based on the selected operation, and triggering a target scene rule by utilizing the target scene rule identification; or detecting target environment data of the current environment of the target vehicle, and determining a target scene rule by using the target environment data.
In an embodiment of the present application, the target operation event includes: operation content, execution parameters and execution conditions; generating a corresponding control instruction based on the target operation event, including: acquiring current target scene data of a target vehicle, and acquiring key scene data associated with each target operation event from the target scene data; determining whether the key scene data meets an execution condition; in the case where the target scene data satisfies the execution condition, a control instruction is generated based on the operation content and the execution parameter.
In an embodiment of the present application, determining an execution relationship between target operation events in a target operation event queue includes: analyzing the execution parameters to determine the execution type corresponding to the target operation event, wherein the execution type comprises: delay execution, distributed execution and synchronous execution; an execution relationship between the target operation events is determined using the execution type.
In an embodiment of the present application, sending a control instruction to a target component of a target vehicle according to an execution relationship includes:
determining a transmission order of the control instructions based on the execution relationship, and arranging the control instructions according to the transmission order to obtain a control instruction sequence; acquiring the first control instruction with the highest transmission order from the control instruction sequence, and sending the first control instruction to a first component of the target vehicle; detecting the execution result of the first component performing the control operation corresponding to the first control instruction; if the execution result is that execution is not completed, sending a second control instruction to a second component of the target vehicle, and detecting whether the first component has completed the control operation corresponding to the first control instruction, wherein the transmission order of the second control instruction is lower than that of the first control instruction; and, when the first component has completed the control operation corresponding to the first control instruction, sending a third control instruction to a third component of the target vehicle, wherein the transmission order of the third control instruction is lower than that of the second control instruction, and the first component, the second component, and the third component constitute the target component.
In an embodiment of the present application, before acquiring the target scene rule currently triggered based on the target vehicle, the method further includes:
acquiring a rule configuration request triggered by a target user; responding to the rule configuration request, and acquiring a scene rule identifier and a scene security condition corresponding to the scene rule identifier; acquiring a plurality of operation events created by a target user based on a scene rule identification; checking whether the operation event meets the scene security condition, and constructing an operation event queue based on the operation event under the condition that the operation event meets the scene security condition; generating the scene rule by using the scene rule identification and the operation event queue.
In an embodiment of the present application, the method further includes: acquiring a history operation record and history environment information corresponding to a target user; acquiring target operation data associated with each scene rule from the historical operation record, and acquiring target environment data matched with the operation data from the historical environment information; and adjusting the operation event in the scene rule by utilizing the target operation data and the target environment data to obtain an adjusted scene rule.
The algorithms or displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. In addition, embodiments of the present invention are not directed to any particular programming language.
In the description provided herein, numerous specific details are set forth. It will be appreciated, however, that embodiments of the invention may be practiced without such specific details. Similarly, in the above description of exemplary embodiments of the invention, various features of embodiments of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. The claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the apparatus of the embodiments may be adaptively changed and disposed in one or more apparatuses different from the embodiments. The modules or units or components of the embodiments may be combined into one module or unit or component, and they may furthermore be divided into a plurality of sub-modules or sub-units or sub-components, except where at least some of such features and/or processes or elements are mutually exclusive.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not denote any order; these words may be interpreted as names. The steps in the above embodiments should not be construed as limiting the order of execution unless specifically stated.

Claims (10)

1. A vehicle control method, characterized in that the method comprises:
acquiring a target scene rule triggered by a current target vehicle;
inquiring a target operation event queue corresponding to the target scene rule, and determining an execution relationship between target operation events in the target operation event queue;
and generating a corresponding control instruction based on the target operation event, and sending the control instruction to a target component of the target vehicle according to the execution relation, wherein the target component is used for executing the operation indicated by the target operation event according to the control instruction.
2. The method of claim 1, wherein the obtaining the target scene rule currently triggered based on the target vehicle comprises:
detecting a selected operation acting on a plurality of scene rule identifications displayed on a target interface, determining a selected target scene rule identification based on the selected operation, and triggering the target scene rule by utilizing the target scene rule identification;
or detecting target environment data of the current environment of the target vehicle, and determining the target scene rule by using the target environment data.
3. The method of claim 1, wherein the target operation event comprises: operation content, execution parameters, and execution conditions;
and the determining the execution relationship between the target operation events in the target operation event queue comprises:
parsing the execution parameters to determine an execution type corresponding to the target operation event, wherein the execution type comprises: delayed execution, distributed execution, and synchronous execution;
and determining the execution relationship between the target operation events using the execution type.
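One plausible reading of claim 3, parsing an event's execution parameters into one of the three execution types, is sketched below; the parameter names `delay_s` and `sequential` are assumptions.

```python
from enum import Enum

class ExecType(Enum):
    DELAYED = "delayed"          # executed after a configured delay
    DISTRIBUTED = "distributed"  # executed one after another
    SYNCHRONOUS = "synchronous"  # executed together

def parse_exec_type(execution_params):
    """Map hypothetical execution parameters onto the three execution
    types named in claim 3."""
    if execution_params.get("delay_s", 0) > 0:
        return ExecType.DELAYED
    if execution_params.get("sequential", False):
        return ExecType.DISTRIBUTED
    return ExecType.SYNCHRONOUS
```

The execution relationship between events then follows from their types, e.g. synchronous events share a dispatch round while distributed events are ordered.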
4. The method according to claim 3, wherein the generating a corresponding control instruction based on the target operation event comprises:
acquiring current target scene data of the target vehicle, and acquiring key scene data associated with each target operation event from the target scene data;
determining whether the key scene data satisfies the execution condition;
and generating the control instruction based on the operation content and the execution parameters when the key scene data satisfies the execution condition.
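Claim 4's gating step, extracting the key scene data for an event and emitting an instruction only when the execution condition holds, could look like this. The event fields and the temperature condition are hypothetical.

```python
def generate_instruction(event, scene_data):
    """Emit a control instruction only when the key scene data
    associated with the event satisfies its execution condition."""
    key = {k: scene_data[k] for k in event["condition_keys"] if k in scene_data}
    if not event["condition"](key):
        return None  # execution condition not met: no instruction
    return {"command": event["operation_content"], "params": event["execution_params"]}

# Example: turn the air conditioner on only above 26 degrees Celsius.
ac_event = {
    "operation_content": "ac_on",
    "execution_params": {"temp_c": 22},
    "condition_keys": ["cabin_temp_c"],
    "condition": lambda key: key.get("cabin_temp_c", 0) > 26,
}
```

Filtering on only the key scene data, rather than the full scene snapshot, keeps each condition check cheap and local to its event.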
5. The method of claim 1, wherein the sending the control instruction to a target component of the target vehicle according to the execution relationship comprises:
determining a transmission order of the control instructions based on the execution relationship, and arranging the control instructions according to the transmission order to obtain a control instruction sequence;
acquiring, from the control instruction sequence, a first control instruction that is highest in the transmission order, and transmitting the first control instruction to a first component of the target vehicle;
detecting an execution result of the first component executing the control operation corresponding to the first control instruction;
if the execution result indicates that execution is not completed, sending a second control instruction to a second component of the target vehicle and continuing to detect whether the first component has completed the control operation corresponding to the first control instruction, wherein the second control instruction is lower in the transmission order than the first control instruction;
and transmitting a third control instruction to a third component of the target vehicle after the first component has completed the control operation corresponding to the first control instruction, wherein the third control instruction is lower in the transmission order than the second control instruction, and the first component, the second component, and the third component constitute the target component.
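The dispatch behaviour of claim 5, where an unfinished earlier component does not block later instructions but keeps being re-checked, can be simulated as below. The components and their completion latencies are invented.

```python
def dispatch(sequence, checks_needed):
    """Send instructions in transmission order; a component still executing
    does not block the next send but is re-polled each round (claim-5 sketch).
    checks_needed maps each target to how many polls it takes to finish."""
    log, in_flight = [], {}
    for instr in sequence:
        target = instr["target"]
        log.append(("send", target))
        in_flight[target] = checks_needed[target]
        for t in list(in_flight):  # re-check every in-flight component
            in_flight[t] -= 1
            if in_flight[t] <= 0:
                log.append(("done", t))
                del in_flight[t]
    while in_flight:               # drain after the last send
        for t in list(in_flight):
            in_flight[t] -= 1
            if in_flight[t] <= 0:
                log.append(("done", t))
                del in_flight[t]
    return log

seq = [{"target": "windows"}, {"target": "sunroof"}, {"target": "seats"}]
log = dispatch(seq, {"windows": 2, "sunroof": 1, "seats": 1})
```

In the resulting log the sunroof instruction is sent before the windows report completion, matching the non-blocking behaviour the claim describes.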
6. The method of claim 1, wherein before the acquiring the target scene rule currently triggered by the target vehicle, the method further comprises:
acquiring a rule configuration request triggered by a target user;
in response to the rule configuration request, acquiring a scene rule identifier and a scene safety condition corresponding to the scene rule identifier;
acquiring a plurality of operation events created by the target user based on the scene rule identifier;
checking whether the operation events meet the scene safety condition, and constructing an operation event queue based on the operation events when the operation events meet the scene safety condition;
and generating a scene rule using the scene rule identifier and the operation event queue.
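A minimal sketch of the claim-6 configuration flow, assuming the scene safety condition is a per-event predicate; the operations and the door-while-driving rule are invented, and a production safety check would be far stricter.

```python
def build_scene_rule(rule_id, events, safety_ok):
    """Build a scene rule from user-created operation events, admitting
    an event into the queue only if it meets the scene safety condition."""
    queue = [e for e in events if safety_ok(e)]
    return {"id": rule_id, "queue": queue}

# Hypothetical safety condition: no door may be opened while driving.
safety_ok = lambda e: not (e["op"] == "open_door" and e.get("while_driving"))
rule = build_scene_rule(
    "camping",
    [{"op": "open_window"}, {"op": "open_door", "while_driving": True}],
    safety_ok,
)
```

Checking safety at configuration time, rather than at trigger time, keeps an unsafe event out of the queue before it can ever be dispatched.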
7. The method of claim 6, wherein the method further comprises:
acquiring a historical operation record and historical environment information corresponding to the target user;
acquiring target operation data associated with each scene rule from the historical operation record, and acquiring target environment data matched with the target operation data from the historical environment information;
and adjusting the operation events in the scene rule using the target operation data and the target environment data to obtain an adjusted scene rule.
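Claim 7 leaves the adjustment step open; one plausible interpretation, nudging an event's numeric parameter toward the value the user historically chose for the same operation, is sketched below with invented field names.

```python
def adjust_rule(rule, history):
    """Replace an event's numeric parameter with the mean of the values
    the user historically chose for the same operation (claim-7 sketch)."""
    for event in rule["queue"]:
        past = [h["value"] for h in history if h["op"] == event["op"]]
        if past:
            event["params"]["value"] = sum(past) / len(past)
    return rule

rule = {"queue": [{"op": "set_ac", "params": {"value": 24}}]}
history = [{"op": "set_ac", "value": 22}, {"op": "set_ac", "value": 20}]
adjusted = adjust_rule(rule, history)
```

A fuller implementation would also condition on the matched environment data, e.g. averaging only over records whose environment resembles the rule's trigger context.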
8. A vehicle control apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring a target scene rule triggered by a target vehicle currently;
the query module is used for querying a target operation event queue corresponding to the target scene rule and determining an execution relationship between target operation events in the target operation event queue;
and the processing module is used for generating corresponding control instructions based on the target operation events and sending the control instructions to target components of the target vehicle according to the execution relation, wherein the target components are used for executing the operations indicated by the target operation events according to the control instructions.
9. An electronic device, comprising: a processor, a memory, a communication interface, and a communication bus, wherein the processor, the memory, and the communication interface communicate with one another via the communication bus;
the memory is configured to store at least one executable instruction that causes the processor to perform the operations of the vehicle control method of any one of claims 1-7.
10. A computer readable storage medium, characterized in that at least one executable instruction is stored in the storage medium, which executable instruction, when run on a vehicle control device, causes the vehicle control device to perform the operations of the vehicle control method according to any one of claims 1-7.
CN202311755013.5A 2023-12-19 2023-12-19 Vehicle control method and device, electronic equipment and storage medium Pending CN117734603A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311755013.5A CN117734603A (en) 2023-12-19 2023-12-19 Vehicle control method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117734603A true CN117734603A (en) 2024-03-22

Family

ID=90255905

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311755013.5A Pending CN117734603A (en) 2023-12-19 2023-12-19 Vehicle control method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117734603A (en)

Similar Documents

Publication Publication Date Title
CN110001505A (en) A kind of customizing method and system, vehicle of Vehicle lamp effect
CN107683237A (en) Driving assistance method and drive assistance device, steering control device, vehicle, the driving auxiliary program that make use of the driving assistance method
CN111762184B (en) Vehicle control system and vehicle
CN108566439A (en) Distributing communication network topology structure suitable for vehicle
US11648959B2 (en) In-vehicle operation of simulation scenarios during autonomous vehicle runs
US11738766B2 (en) Control of vehicle functions
CN108762240A (en) A kind of vehicle diagnosis method and device
CN114330778A (en) Intelligent function management method and device for vehicle, vehicle and computer storage medium
CN116034574A (en) Vehicle service control method and device, vehicle, equipment and storage medium
CN114604191A (en) Intelligent cabin active interaction system and method, electronic equipment and storage medium
EP3796159B1 (en) Operating system startup acceleration
CN112440900A (en) Vehicle control method and device, control equipment and automobile
CN117075576A (en) Vehicle intelligent cabin testing method, cloud server, vehicle-mounted terminal and storage medium
CN117734603A (en) Vehicle control method and device, electronic equipment and storage medium
CN217435657U (en) Electrical system of automatic driving vehicle and automatic driving vehicle
CN115688481A (en) Hardware-in-loop simulation test system and method for man-machine common-driving type vehicle
CN115509572A (en) Method for dynamically configuring business logic, cloud platform, vehicle and storage medium
JP7465147B2 (en) Vehicle control device, server, verification system
CN113501004A (en) Control method and device based on gestures, electronic equipment and storage medium
CN113561988A (en) Voice control method based on sight tracking, automobile and readable storage medium
CN117734608A (en) Vehicle control method, and updating method and device of vehicle scene library
CN110794735A (en) Remote control device and method
CN115589434B (en) Request processing method, service-oriented system, ECU, vehicle and storage medium
CN115588432B (en) Voice interaction method, server and computer readable storage medium
US7454271B2 (en) Method for organizing and executing a plurality of services in particular on board a motor vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination