CN116795468A - Contextual model creation method, contextual model execution device and storage medium - Google Patents

Contextual model creation method, contextual model execution device and storage medium

Info

Publication number
CN116795468A
CN116795468A CN202310655944.1A
Authority
CN
China
Prior art keywords
instruction set
scene
vehicle
historical
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310655944.1A
Other languages
Chinese (zh)
Inventor
Xu Chao (许超)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pateo Connect Nanjing Co Ltd
Original Assignee
Pateo Connect Nanjing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pateo Connect Nanjing Co Ltd
Priority to CN202310655944.1A
Publication of CN116795468A
Legal status: Pending

Landscapes

  • Navigation (AREA)

Abstract

The embodiments of the present application disclose a contextual model creation method, a contextual model execution method, a device, and a storage medium, relating to the technical field of the Internet of Vehicles. The method comprises the following steps: determining the scene in which a vehicle is currently located; in response to a requirement to build a contextual model for the scene, collecting one or more operation instructions input by the vehicle user in the scene and integrating the operation instructions to obtain a current instruction set; invoking the vehicle user's historical instruction sets in the scene and determining a target instruction set based on the current instruction set and the historical instruction sets; and creating, with the target instruction set, a first contextual model corresponding to the scene in the in-vehicle (head-unit) system of the vehicle.

Description

Contextual model creation method, contextual model execution device and storage medium
Technical Field
The present application relates to, but is not limited to, the field of the Internet of Vehicles, and in particular to a contextual model creation method, a contextual model execution method, a device, and a storage medium.
Background
With the popularization of automobiles and the increasing frequency of automobile use, intelligent vehicle control has become an important consideration. For example, automobiles need to be designed with various contextual models (profiles) so that the vehicle is automatically switched into the profile that matches the current environment.
In the related art, contextual models are either designed and predefined by the OEM or predefined by the user in advance; however, predefined contextual models are limited in variety and cannot meet users' actual needs.
Disclosure of Invention
An aim of the present application is to provide at least one contextual model creation method, which determines the scene in which the vehicle is currently located; in response to a requirement to build a contextual model for the scene, collects one or more operation instructions input by the vehicle user in the scene and integrates them into a current instruction set; invokes the vehicle user's historical instruction sets in the scene and determines a target instruction set based on the current instruction set and the historical instruction sets; and creates, with the target instruction set, a first contextual model corresponding to the scene in the in-vehicle system of the vehicle. In this way, when it is determined in a given scene that the vehicle user has a requirement to build a contextual model for that scene, the target instruction set is determined from the current instruction set formed by the user's input operation instructions together with the user's historical instruction sets in the scene, so that the contextual model corresponding to the scene is created automatically in the in-vehicle system; further, vehicle functions are configured differently for vehicle users in different scenes, improving the automobile's adaptation to user habits.
Another aim of the present application is to provide at least one contextual model execution method: when a contextual model corresponding to a scene has been created and the vehicle is determined to be in that scene again, the contextual model corresponding to the scene is obtained; the operation instructions in the target instruction set corresponding to the contextual model are issued to the vehicle's controlled components so that the controlled components execute the corresponding operation instructions; and at least one of the controlled component currently executing an operation instruction in the target instruction set, the execution progress, and a prompt animation is displayed. In this way, while a controlled component executes its operation instruction, the prompt animation and/or execution progress is displayed through the multimedia system, increasing the user's visual perception of the execution process; associating the execution of operation instructions with a visualization function thus improves the user's visual, dynamic experience.
To achieve the above aims, the technical solutions of the embodiments of the present application are as follows:
In a first aspect, an embodiment of the present application provides a contextual model creation method, the method comprising:
determining the scene in which a vehicle is currently located;
in response to a requirement to build a contextual model for the scene, collecting one or more operation instructions input by the vehicle user in the scene, and integrating the operation instructions to obtain a current instruction set;
invoking the vehicle user's historical instruction sets in the scene, and determining a target instruction set based on the current instruction set and the historical instruction sets;
and creating, with the target instruction set, a first contextual model corresponding to the scene in the in-vehicle system of the vehicle.
In a second aspect, an embodiment of the present application provides a contextual model execution method, the method comprising:
when a contextual model corresponding to a scene has been created, determining that the vehicle is in the scene again, and obtaining the contextual model corresponding to the scene;
issuing the operation instructions in the target instruction set corresponding to the contextual model to the vehicle's controlled components, so that the controlled components execute the corresponding operation instructions;
displaying at least one of the controlled component currently executing an operation instruction in the target instruction set, the execution progress, and a prompt animation.
In a third aspect, an embodiment of the present application provides a computer device comprising a memory and a processor, the memory storing a computer program executable on the processor, the processor implementing some or all of the steps of the above methods when executing the program.
In a fourth aspect, an embodiment of the present application provides a storage medium storing one or more computer programs executable by one or more processors to implement some or all of the steps of the above methods.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
FIG. 1 is an interface diagram of an alternative contextual model creation method provided in the related art;
fig. 2 is a schematic flow chart of an alternative contextual model creation method according to an embodiment of the present application;
fig. 3 is a schematic flow chart of an alternative contextual model creation method according to an embodiment of the present application;
fig. 4 is a schematic flow chart of an alternative contextual model creation method according to an embodiment of the present application;
FIG. 5 is a schematic flow chart of an alternative contextual model execution method according to an embodiment of the present application;
FIG. 6 is a schematic flow chart of an alternative contextual model creation and execution process provided by an embodiment of the present application;
fig. 7 is a schematic diagram of a hardware entity of a computer device according to an embodiment of the present application.
Detailed Description
The technical solutions of the present application are further elaborated below with reference to the accompanying drawings and embodiments. The described embodiments should not be construed as limiting the application; all other embodiments obtained by a person skilled in the art without inventive effort fall within the scope of protection of the present application.
In the following description, reference to "some embodiments" describes a subset of all possible embodiments; "some embodiments" may refer to the same subset or to different subsets of all possible embodiments, and these may be combined with one another where no conflict arises. The terms "first/second/third" merely distinguish similar objects and do not imply a particular ordering; where permitted, "first/second/third" may be interchanged in a particular order or sequence so that the embodiments described herein can be implemented in orders other than those illustrated or described.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing the application only and is not intended to be limiting of the application.
The embodiments of the present application provide a contextual model creation method and a contextual model execution method, which can be executed by a processor of a computer device. The computer device may be any device with data-processing capability, such as a server, notebook computer, tablet computer, desktop computer, smart television, set-top box, or mobile device (e.g., mobile phone, portable video player, personal digital assistant, dedicated messaging device, or portable game device). In some embodiments, the computer device may be an in-vehicle terminal device, i.e., a terminal deployed on a vehicle and communicatively connected to it; such a terminal may be used independently of the vehicle or integrated into the vehicle control system. In some embodiments, the computer device may also be a cloud server communicatively connected to the vehicle; the application does not specifically limit the computer device.
Before describing the contextual model creation and execution methods of the embodiments of the present application, the ways contextual models are created in the related art are described.
In the related art, contextual models are most often user-predefined, and referring to FIG. 1, there are two such ways. In the first, a HarmonyOS in-vehicle system uses an application's desktop-widget capability to display the application's widget on the system desktop, so the user can trigger the widget with one tap, reducing user operations. In the second, the in-vehicle system is generally equipped with a "contextual model" application in which certain vehicle-control actions are set in advance and can be executed manually with one tap when needed. However, the first way suffers at least from the problem that a native widget can only execute a single function of a single application and must be tapped manually each time to take effect; the second suffers at least from the problems that the predefined contextual model types are limited, the design is rigid, custom actions are generally few, there is no extensibility, and the vehicle user must still tap manually for the model to take effect.
To solve the above technical problems, referring to FIG. 2, FIG. 2 is a schematic implementation flowchart of a contextual model creation method according to an embodiment of the present application. The method may be executed by a processor of a computer device. The steps shown in FIG. 2 are described below.
step 101, determining a current scene of the vehicle.
In an embodiment of the present application, determining the scene in which the vehicle is currently located may include: acquiring scene information of the vehicle, wherein the scene information includes one or more of the current time information of the vehicle, the current position information of the vehicle, the environment information of the vehicle's location, and the vehicle state information; and determining the current scene of the vehicle according to the scene information.
In the embodiment of the application, the scene information may also include battery charge information of the vehicle.
In an embodiment of the present application, the vehicle state information includes, but is not limited to, the start state, running state, parking state, and engine-off state of the vehicle.
In the embodiment of the application, the current time information of the vehicle can be obtained from the in-vehicle system, and the current position information of the vehicle can be obtained using the Location Based Services (LBS) function of the computer device, a Global Positioning System (GPS) module within the computer device, or base-station positioning supported by the computer device; the positioning method is not specifically limited here.
In the embodiment of the application, the environment information of the vehicle's location includes air quality information and weather information. The weather information includes, but is not limited to, the weather phenomenon (e.g., sunny, overcast, rainy, cloudy, or snowy) and the current temperature, humidity, wind level, ultraviolet intensity, visibility, and the like. The air quality reflects the degree of air pollution, and the air quality information includes excellent, good, medium, and poor levels: excellent indicates essentially no air pollution at present, good indicates light air pollution, medium indicates relatively heavy air pollution, and poor indicates severe air pollution.
It can be understood that the environment information can be acquired in various ways: for example, weather information can be obtained by calling the weather widget of the Android system, environment information around the vehicle body can be obtained by analyzing data from on-board cameras, and environment information can also be obtained from a terminal device communicatively connected to the in-vehicle system; the application is not specifically limited in this regard.
In the embodiment of the application, determining the current scene of the vehicle according to the scene information can be understood as determining the scene according to one or more of the current time information, the current position information, the air quality and weather information at the vehicle's location, the vehicle state information, the battery charge information, and the like, and recording the frequency with which the vehicle is in that scene.
The scene determined from the scene information may be, for example, the vehicle arriving at a company destination, the vehicle leaving the company destination, or the vehicle arriving at a set place; the embodiment of the application does not limit the scene in which the vehicle is located.
In a first possible embodiment, if the scene information includes a current time of 8:50, a current position near the company, and the vehicle in the parking state, i.e., the gear in Park (P), then the current scene is determined from the scene information to be the arrival-at-company-destination scene.
In a second possible implementation, if the scene information includes a current time of 17:50, a current position near the company, and the vehicle in the started state, then the current scene is determined from the scene information to be the leaving-the-company-destination scene.
In a third possible implementation, if the scene information includes that the vehicle is in a charging state, the current scene is determined from the scene information to be a charging scene; or, if the scene information includes the battery charge of the vehicle and the charge is below a charge threshold, the current scene is determined to be a to-be-charged scene.
In a fourth possible embodiment, if the scene information includes the position information and time information of the vehicle, the current scene may be determined from the scene information to be a dining scene of the vehicle user.
It should be emphasized that the foregoing embodiments are merely exemplary; the embodiment of the present application does not specifically limit the manner in which the current scene of the vehicle is determined.
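For illustration only, the following is a minimal rule-based sketch of step 101 in Kotlin. The field names, thresholds, and scene labels are assumptions for the example, not values taken from this application.

```kotlin
// Simplified scene information; fields are illustrative assumptions.
data class SceneInfo(
    val hour: Int,              // current time of the vehicle (hour of day)
    val nearCompany: Boolean,   // derived from LBS/GPS position
    val gear: String,           // "P", "D", ...
    val started: Boolean,       // vehicle start state
    val charging: Boolean,
    val batteryPercent: Int
)

fun determineScene(info: SceneInfo): String = when {
    info.charging -> "charging"
    info.batteryPercent < 20 -> "to-be-charged"                         // assumed threshold
    info.nearCompany && info.hour in 8..9 && info.gear == "P" -> "arrive-at-company"
    info.nearCompany && info.hour in 17..18 && info.started -> "leave-company"
    else -> "unclassified"
}
```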
Step 102, in response to a requirement to build a contextual model for the scene, collecting one or more operation instructions input by the vehicle user in the scene, and integrating the operation instructions to obtain a current instruction set.
In an embodiment of the present application, the existence of a requirement to build a contextual model may include: determining that the requirement exists when the frequency of occurrence of the scene satisfies a frequency condition, or determining that it exists through an explicit specification by the vehicle user; the application is not specifically limited in this regard.
In the embodiment of the present application, an operation instruction is a control instruction generated from an operation performed on the vehicle by the vehicle user, for example: opening a door, turning on the air conditioner, adjusting the temperature, adjusting the in-vehicle audio volume, starting the vehicle, and so on. The operation may be performed directly on the vehicle or issued to the vehicle through another device terminal, and the instruction may be issued through a direct physical operation by the user or through the user's voice; neither is limited here.
It should be noted that when an operation instruction is acquired, the timestamp corresponding to the operation instruction may be recorded synchronously. Before integrating the operation instructions, it is determined whether the interval between the current time and the timestamp of the last operation instruction received by the vehicle is greater than or equal to a preset integration time threshold. If so, the automobile functions required by the user have all been operated within this operation period, and the previous operation instructions are integrated to obtain the current instruction set; if not, the operations may not yet be complete and further operation instructions may still arrive, so the system waits for subsequent operation instructions without integrating.
Similarly, in an alternative embodiment, before integrating the operation instructions, it may instead be determined whether the interval between the timestamp of the currently received operation instruction and the timestamp of the previously received operation instruction is greater than or equal to the preset integration time threshold.
In an illustrative example, the computer device determines that the vehicle is currently in the arrival-at-company-destination scene. In response to a requirement to build a contextual model for the scene, it collects the vehicle user's actions of turning off the radio, turning off the air conditioner, opening the trunk, and unlocking the doors; if no new operation instruction is received within a certain interval, it integrates the radio-off instruction, air-conditioner-off instruction, trunk-open instruction, and door-unlock instruction to obtain the current instruction set.
For another example, the computer device determines that the vehicle is currently in the leaving-the-company-destination scene. In response to a requirement to build a contextual model for the scene, it collects the vehicle user's actions of opening the windows, playing music, and locking the doors; if no new operation instruction is received within a certain interval, it integrates the window-open instruction, music-play instruction, and door-lock instruction to obtain the current instruction set.
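The integration rule above can be sketched as follows; the 30-second threshold and the class names are assumptions. This variant follows the alternative embodiment, comparing each new instruction's timestamp with the previous one's.

```kotlin
data class OpInstruction(val name: String, val timestampMs: Long)

class InstructionCollector(private val integrationGapMs: Long = 30_000) { // assumed threshold
    private val pending = mutableListOf<OpInstruction>()

    /** Returns the integrated current instruction set, or null while still collecting. */
    fun onInstruction(instr: OpInstruction): List<String>? {
        val last = pending.lastOrNull()
        if (last != null && instr.timestampMs - last.timestampMs >= integrationGapMs) {
            val currentSet = pending.map { it.name } // the completed operation period
            pending.clear()
            pending.add(instr)                       // the new instruction opens a new period
            return currentSet
        }
        pending.add(instr)
        return null
    }
}
```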
In other embodiments of the present application, before responding to the requirement to build a contextual model for the scene, step 102 may further perform the following procedure: detecting the frequency of occurrence of the scene; and if the frequency satisfies a frequency condition, displaying first prompt information, where the first prompt information prompts the vehicle user to confirm whether there is a requirement to build a contextual model corresponding to the scene.
In the embodiment of the application, the frequency satisfying the frequency condition includes: the frequency is greater than a frequency threshold, which may be a preset value such as 10; or the frequencies of all scenes are sorted in descending order and the scene's frequency ranks in the top N, where N is an integer with 1 ≤ N ≤ the number of scenes; the application is not specifically limited in this regard.
In the embodiment of the application, after determining the current scene of the vehicle, the computer device detects and counts the frequency of occurrence of the scene, and if the frequency satisfies the frequency condition, displays the first prompt information asking the vehicle user whether to build a contextual model corresponding to the scene. Further, the action instructions corresponding to the first prompt information (confirming or denying the requirement to build a contextual model for the scene) are assigned to corresponding operation windows/controls; when the user taps the window/control confirming the requirement, the method responds to the scene as having a requirement to build a contextual model.
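A possible sketch of the frequency check, assuming the simple greater-than-threshold variant of the frequency condition; the threshold of 10 echoes the example value in the text.

```kotlin
class SceneFrequencyMonitor(private val threshold: Int = 10) {
    private val counts = mutableMapOf<String, Int>()

    /** Returns true when the first prompt (asking whether to build a contextual
     *  model for this scene) should be shown to the vehicle user. */
    fun onSceneDetected(scene: String): Boolean {
        val n = (counts[scene] ?: 0) + 1
        counts[scene] = n
        return n > threshold
    }
}
```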
Step 103, invoking the vehicle user's historical instruction sets in the scene, and determining a target instruction set based on the current instruction set and the historical instruction sets.
In the embodiment of the application, the vehicle user's historical instruction sets in the same scene include one or more sets, and the operation instructions carried by any two historical instruction sets may or may not be identical; that is, the operation instructions input by the same vehicle user in the same scene are not always identical.
In the embodiment of the present application, determining the target instruction set based on the current instruction set and the historical instruction sets can be understood as determining it based on the similarity between the current instruction set and each historical instruction set, or based on the intersection or union of the operation instructions carried by the current instruction set and those carried by a historical instruction set; the application is not specifically limited in this regard.
Step 104, creating, with the target instruction set, a first contextual model corresponding to the scene in the in-vehicle system of the vehicle.
In the embodiment of the application, the first contextual model is the contextual model, created with the target instruction set, that corresponds to the scene in which the vehicle is currently located.
In the embodiment of the application, the computer device invokes the vehicle user's historical instruction sets in the scene, determines the target instruction set based on the current instruction set and the historical instruction sets, and then creates, with the target instruction set, the first contextual model corresponding to the scene in the in-vehicle system of the vehicle.
The embodiment of the application thus provides a contextual model creation method that determines the scene in which the vehicle is currently located; in response to a requirement to build a contextual model for the scene, collects one or more operation instructions input by the vehicle user in the scene and integrates them into a current instruction set; invokes the vehicle user's historical instruction sets in the scene and determines a target instruction set based on the current instruction set and the historical instruction sets; and creates, with the target instruction set, a first contextual model corresponding to the scene in the in-vehicle system. In this way, when the vehicle user is determined to have a requirement to build a contextual model for a given scene, the target instruction set is determined from the current instruction set formed by the user's input operation instructions together with the user's historical instruction sets in that scene, so that the contextual model corresponding to the scene is created automatically in the in-vehicle system; further, vehicle functions are configured differently for vehicle users in different scenes, which in turn improves the automobile's adaptation to user habits.
In some embodiments, the process in step 103 of invoking the vehicle user's historical instruction sets in the scene is described with reference to FIG. 3.
step 201, obtaining all historical instruction sets input by a vehicle user in a scene.
Step 202, classifying and counting all the historical instruction sets based on the carried operation instructions to obtain the historical execution times corresponding to each classified historical instruction set.
Step 203, screening a historical instruction set with the historical execution times meeting the times condition from the classified historical instruction set.
In the embodiment of the application, the historical execution count satisfying the count condition includes: the count is greater than an execution-count threshold, which may be a preset value such as 10; or the historical execution counts of the classified historical instruction sets are sorted in descending order and the set's count ranks in the top N, where N is an integer with 1 ≤ N ≤ the number of classified sets; the application is not specifically limited in this regard.
In the embodiment of the application, the historical instruction sets input by the vehicle user in the same scene differ from one another. The computer device obtains all historical instruction sets input by the user in the scene and classifies them based on the operation instructions carried by each set, obtaining classified historical instruction sets; it then counts the historical execution count corresponding to each classified set; finally, it screens the classified sets by historical execution count to obtain those whose count satisfies the count condition. When a historical instruction set's execution count satisfies the count condition, the set can be regarded as one of the user's vehicle-use habits, so a target instruction set recording those habits can be generated from the historical instruction set and the current instruction set, and the first contextual model corresponding to the scene can be created with it in the in-vehicle system; when the computer device determines that the vehicle is in the scene again, the functions of the controlled components in the vehicle can be configured using the first contextual model corresponding to the scene.
In one possible scenario, taking the current scene as the arrival-at-company-destination scene, the computer device obtains all historical instruction sets input by the vehicle user in this scene and classifies them based on the operation instructions carried by each set, obtaining classified historical instruction sets. It then counts the historical execution count corresponding to each classified set; the statistical results are shown in Table 1, which gives the classification and counting results for all historical instruction sets in the same scene based on the carried operation instructions. Finally, the computer device screens the classified sets by historical execution count to obtain the sets whose count is greater than an execution-count threshold such as 10: historical instruction set A, carrying {radio-off instruction, trunk-open instruction, door-unlock instruction}, and historical instruction set B, carrying {radio-off instruction, door-unlock instruction}.
TABLE 1
Classified historical instruction set | Carried operation instructions | Historical execution count
A | radio-off, trunk-open, door-unlock | greater than 10
B | radio-off, door-unlock | greater than 10
In the above manner, all historical instruction sets input by the vehicle user in the scene are obtained, and classifying and counting them by the operation instructions they carry yields the historical execution count of each classified set. When a historical instruction set's execution count satisfies the count condition, the set can be regarded as one of the user's vehicle-use or operating habits, so the contextual model corresponding to the scene is generated according to those habits. When the computer device determines that the vehicle is in the scene again, it configures the functions of the controlled components in the vehicle through the target instruction set, using the first contextual model corresponding to the scene. Thus, on one hand, a contextual model matching the user's personalized needs in the scene is formed from the historical instruction sets whose execution counts exceed the threshold; on the other hand, the user's customization potential can be effectively mined from the historical instruction sets and the current instruction set, and new, higher-value contextual models are provided to the user automatically and continuously. Compared with having the user define contextual models manually, the method automatically creates the user's habitual scenes and improves the user's experience of intelligence.
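Steps 201–203 can be sketched as follows, treating each historical instruction set as an unordered set of instruction names; the representation and the threshold default are assumptions.

```kotlin
fun filterFrequentHistorySets(
    history: List<List<String>>, // each element: the instruction names of one recorded set
    threshold: Int = 10          // assumed execution-count threshold
): List<Set<String>> =
    history
        .groupingBy { it.toSet() } // classify by the carried operation instructions
        .eachCount()               // historical execution count per classified set
        .filter { it.value > threshold }
        .keys
        .toList()
```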
In some embodiments, determining the target instruction set based on the current instruction set and the historical instruction sets in step 103 may be accomplished in any of the following ways.
First, the similarity between the current instruction set and each historical instruction set is calculated, and the historical instruction set with the highest similarity is determined as the target instruction set.
In the embodiment of the application, the similarity measures the degree of resemblance between the current instruction set and a historical instruction set. It may be calculated using a trained neural network or other machine-learning model (e.g., one built with TensorFlow), using the Tanimoto algorithm, or using cosine similarity or Euclidean distance; the application is not specifically limited in this regard.
In one implementation scenario, when the vehicle user's historical instruction sets in the scene are invoked, the operation instructions carried by the current instruction set T include {radio-off instruction, air-conditioner-off instruction, trunk-open instruction, door-unlock instruction}; the historical instruction sets include two sets A and B, where A carries {radio-off instruction, trunk-open instruction, door-unlock instruction} and B carries {radio-off instruction, door-unlock instruction}. Here, the Tanimoto algorithm is used to calculate the similarity between the current instruction set and each historical instruction set. First, the number of operation instructions shared by the current instruction set T and the historical instruction set A is obtained, and the similarity between T and A is determined from the sizes of the two sets and the shared count by the following formula (1):

S = N_c / (N_T + N_A - N_c)    (1)

where S denotes the similarity between the current instruction set and the historical instruction set, N_c denotes the number of operation instructions shared by the current instruction set T and the historical instruction set A, N_T denotes the number of operation instructions carried by the current instruction set T, and N_A denotes the number of operation instructions carried by the historical instruction set A.
By formula (1), the similarity between the current instruction set T and historical instruction set A is 0.75; similarly, the similarity between T and historical instruction set B is 0.5. The similarity between T and A is therefore greater than that between T and B, and the historical instruction set A with the highest similarity is determined as the target instruction set. In this way, the historical instruction set most similar to the current instruction set is taken as the target instruction set; since the target set's historical execution count satisfies the count condition, the target set characterizes the user's vehicle-use or operating habits in the current scene, and the contextual model corresponding to the scene is generated accordingly. When the computer device determines that the vehicle is in the scene again, it configures the functions of the controlled components through the target instruction set, using the first contextual model corresponding to the scene. Thus a contextual model matching the user's personalized appeal in the scene is formed from the historical instruction sets whose execution counts satisfy the count condition; further, determining the contextual model corresponding to the historical instruction set most similar to the current instruction set as the first contextual model improves the personalized experience in the user's high-frequency scenes.
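A sketch of this first approach, implementing formula (1) and the highest-similarity selection; the set-of-strings representation is an assumption.

```kotlin
// Tanimoto similarity: S = Nc / (NT + NA - Nc).
fun tanimoto(current: Set<String>, history: Set<String>): Double {
    val common = (current intersect history).size
    return common.toDouble() / (current.size + history.size - common)
}

/** Returns the most similar historical set as the target set, or null if none exist. */
fun pickTargetSet(current: Set<String>, candidates: List<Set<String>>): Set<String>? =
    candidates.maxByOrNull { tanimoto(current, it) }
```

For the example above, tanimoto(T, A) = 3 / (4 + 3 − 3) = 0.75 and tanimoto(T, B) = 2 / (4 + 2 − 2) = 0.5, so set A is selected.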
Second, the operation instructions that belong to both the current instruction set and the historical instruction set are combined to obtain the target instruction set.
In the embodiment of the application, the target instruction set consists of the operation instructions belonging to both the current instruction set and the historical instruction set; that is, the target instruction set is the intersection of the two.
In the embodiment of the application, because the historical instruction set reflects the user's vehicle-use or operating habits in the scene, the target instruction set obtained by combining the instructions common to the current and historical instruction sets represents the user's most basic habits in the scene, and the contextual model corresponding to the scene is generated accordingly. When the computer device determines that the vehicle is in the scene again, it configures the functions of the controlled components through the target instruction set, using the first contextual model corresponding to the scene. Thus a contextual model matching the user's personalized needs in the scene is formed from the historical instruction sets whose execution counts exceed the threshold; moreover, the user's customization potential can be effectively mined from the intersection of the historical and current instruction sets, and more valuable new contextual models can be provided to the user automatically and continuously. Compared with having the user define contextual models manually, the method automatically creates the user's habitual scenes and improves the user's experience of intelligence.
Third, the operation instructions carried by the current instruction set and those carried by the historical instruction set are combined to obtain the target instruction set.
In the embodiment of the application, the target instruction set includes all operation instructions carried by the current instruction set and all operation instructions carried by the historical instruction set; that is, the target instruction set is the union of the two.
In the embodiment of the application, because the historical instruction set reflects the user's vehicle-use or operating habits in the scene, the target instruction set obtained by merging the instructions of the current and historical instruction sets represents the user's most complete habits in the scene, and the contextual model corresponding to the scene is generated accordingly. When the computer device determines that the vehicle is in the scene again, it configures the functions of the controlled components through the target instruction set, using the first contextual model corresponding to the scene. Thus a contextual model matching the user's personalized needs in the scene is formed from the historical instruction sets whose execution counts exceed the threshold; moreover, the user's customization potential can be effectively mined from the union of the historical and current instruction sets, and more valuable new contextual models can be provided to the user automatically and continuously. Compared with having the user define contextual models manually, the method automatically creates the user's habitual scenes and improves the user's experience of intelligence.
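The second and third approaches reduce to set intersection and union; a minimal sketch, with the same assumed representation as above:

```kotlin
// Target set as the user's most basic shared habits in the scene.
fun targetByIntersection(current: Set<String>, history: Set<String>): Set<String> =
    current intersect history

// Target set as the user's most complete habits in the scene.
fun targetByUnion(current: Set<String>, history: Set<String>): Set<String> =
    current union history
```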
In some embodiments, the creation in step 104 of the first contextual model corresponding to the scene in the in-vehicle system, using the target instruction set, is described with reference to FIG. 4.
step 301, obtaining a first execution sequence of an operation instruction carried by a current instruction set and a second execution sequence of an operation instruction carried by a history instruction set;
step 302, calibrating a target execution sequence of a target instruction set based on the first execution sequence and the second execution sequence;
step 303, creating a first scene mode corresponding to the scene in the vehicle system of the vehicle by using the target instruction set and the target execution sequence of the target instruction set.
In the embodiment of the present application, the first execution order may be the order in which the vehicle user input the operation instructions when the current instruction set was obtained, or an order obtained by re-sorting the operation instructions in the current instruction set, starting from the user's input order, to optimize execution.
The second execution order is the order in which the operation instructions in the historical instruction set are executed.
The target execution order is obtained by sorting the operation instructions in the target instruction set, for optimal execution, according to the first execution order of the instructions carried in the current instruction set and the second execution order of the instructions carried in the historical instruction set.
In the embodiment of the application, the computer device obtains the first execution order of the operation instructions carried by the current instruction set and the second execution order of the operation instructions carried by the historical instruction set; it sorts the operation instructions in the target instruction set according to these two orders to obtain the target execution order of the target instruction set; finally, it creates, with the target instruction set and its target execution order, the first contextual model corresponding to the scene in the in-vehicle system. In this way, on top of the target instruction set matching the user's operating habits in the current scene, the execution order within the target set is optimized, so that the vehicle's execution time can be reduced when it executes the operation instructions in the target set in the target order.
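A possible sketch of the calibration in steps 301–303. The application does not fix a concrete sorting rule, so the rule here — prefer the historical order, fall back to the current set's order — is an assumption.

```kotlin
fun calibrateOrder(
    target: Set<String>,
    firstOrder: List<String>,  // execution order of the current instruction set
    secondOrder: List<String>  // execution order of the historical instruction set
): List<String> {
    // Rank an instruction by its position in an order list; instructions absent
    // from the list sort after those present in it.
    fun rank(order: List<String>, instr: String): Int =
        order.indexOf(instr).let { if (it >= 0) it else Int.MAX_VALUE }
    return target.sortedWith(
        compareBy<String>({ rank(secondOrder, it) }, { rank(firstOrder, it) })
    )
}
```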
In some embodiments, after step 104 creates the first contextual model corresponding to the scene in the in-vehicle system using the target instruction set, the following process may also be performed:
if it is observed that the vehicle user performs the same editing operation on the target instruction set on one or more occasions when the vehicle is in the scene, the vehicle user's editing information is obtained; the target instruction set corresponding to the scene is adjusted based on the editing information, and the first contextual model is modified into a second contextual model.
In the embodiment of the application, the editing operations include adding, modifying, and deleting, and may also include a skip operation.
The second contextual model may be the same as or different from the first contextual model.
In one realizable application scenario, when the vehicle is in the scene again, the computer device displays prompt information asking the vehicle user whether to modify or skip the target instruction set corresponding to the scene. In one case, in response to the user's modification operation, the user's modification information for the target instruction set is obtained, the target instruction set corresponding to the scene is adjusted according to it, and the first contextual model is modified into the second contextual model; here the editing information includes the modification information. In another case, in response to the user's skip operation, the first contextual model corresponding to the scene is skipped, i.e., the vehicle does not execute the operation instructions carried in the target instruction set.
In another possible application scenario, if it is observed that the vehicle user performs the same editing operation on the target instruction set on multiple occasions when the vehicle is in the scene — for example, performing the same modification or deletion on the same operation instruction multiple times, or adding the same operation instruction to the target instruction set multiple times — the computer device obtains the corresponding modification, deletion, or addition information, adjusts the target instruction set corresponding to the scene based on it, and modifies the first contextual model to obtain the second contextual model corresponding to the scene; here the editing information includes the modification, deletion, and addition information. Requiring repetition avoids adapting to a small number of edits made in sudden, atypical situations: the operation instructions in the target instruction set are adjusted adaptively only according to editing information produced by the user many times in the scene, and the result therefore better matches the user's vehicle-use or operating habits in that scene.
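A sketch of the repeated-edit rule, assuming an edit must recur a minimum number of times (here 3, an assumed value) before the target instruction set is adjusted:

```kotlin
// Hypothetical edit record; `kind` is one of "add", "modify", "delete".
data class Edit(val scene: String, val kind: String, val instruction: String)

class EditTracker(private val minRepeats: Int = 3) { // assumed repetition threshold
    private val counts = mutableMapOf<Edit, Int>()

    /** Returns true once the same edit has recurred often enough that the target
     *  instruction set should be adjusted (first model -> second model). */
    fun onEdit(edit: Edit): Boolean {
        val n = (counts[edit] ?: 0) + 1
        counts[edit] = n
        return n >= minRepeats
    }
}
```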
In some embodiments, the same vehicle may be used by multiple users, each with different usage habits; a human-vehicle interaction scheme of a single design can hardly adapt to such highly individualized automobile use scenarios.
Here, the identity information of a vehicle user is information that can verify the user's identity, including but not limited to biometric information, such as fingerprint, voiceprint, pupil, or face, and digital credentials, such as account passwords, keys, and passcodes.
To improve the automobile's adaptation to each user's habits and make vehicle-function configuration convenient, after the target instruction set is obtained, it can be bound to the identity information of the vehicle user and stored on a device such as a local vehicle-side server or a cloud server.
In some embodiments, before invoking the vehicle user's historical instruction sets in the scene, it is determined whether the vehicle is bound with identity information; if so, the historical instruction sets of the vehicle user corresponding to the identity information in the scene are obtained, the target instruction set is then determined based on the current instruction set and the historical instruction sets, and the first contextual model corresponding to the scene is created in the in-vehicle system with the target instruction set.
In some examples, invoking the vehicle user's historical instruction sets in the scene may further comprise: verifying the current user of the vehicle based on the identity information; acquiring the current identity information of the current vehicle user and matching it against the pre-stored identity information; and if the current identity information matches, invoking the vehicle user's historical instruction sets in the scene according to the identity information.
It should be noted that the pre-stored identity information may be stored on the local vehicle side and/or a cloud server or other storage device; this is not limited here.
In an alternative embodiment, when the vehicle detects that a user is seated in the driver's seat, pupil and/or face information of the user is collected through the camera of the Driver Monitor System (DMS), and features are extracted from the collected information to obtain the current identity information. The current identity information is matched against the identity information in the storage device; if they match, the permissions and functions corresponding to the identity information, including the historical instruction sets, are obtained, the target instruction set is determined based on the current instruction set and the historical instruction sets, and the first contextual model corresponding to the scene is created in the in-vehicle system with the target instruction set, so that the corresponding vehicle-function settings are activated.
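A sketch of the identity-matching step, assuming the DMS pipeline already yields a normalized feature embedding; the dot-product comparison and the 0.9 threshold are assumptions, not this application's method.

```kotlin
// Hypothetical identity record; `features` is assumed to be a normalized embedding
// extracted from DMS pupil/face data by an upstream model.
data class IdentityRecord(val userId: String, val features: DoubleArray)

fun dot(a: DoubleArray, b: DoubleArray): Double {
    require(a.size == b.size) { "feature vectors must have equal length" }
    return a.indices.sumOf { a[it] * b[it] }
}

/** Returns the matched user id (whose history sets may then be invoked), or null. */
fun matchUser(current: DoubleArray, stored: List<IdentityRecord>): String? =
    stored.firstOrNull { dot(it.features, current) > 0.9 }?.userId
```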
It should be noted that the entry of identity information can also be understood in combination with the above embodiments: identity information may be collected by vehicle-side sensors or other communicable terminals and stored on the local vehicle side or a cloud server, which is not repeated here.
Referring to FIG. 5, FIG. 5 is a schematic implementation flowchart of a contextual model execution method according to an embodiment of the present application. The method may be executed by a processor of a computer device. The steps shown in FIG. 5 are described below.
step 401, determining that the vehicle is in the scene again under the condition that the scene mode corresponding to the scene is created, and obtaining the scene mode corresponding to the scene.
And step 402, issuing an operation instruction in a target instruction set corresponding to the scene mode to the vehicle controlled component so as to enable the vehicle controlled component to execute the corresponding operation instruction.
Step 403, displaying at least one of a vehicle controlled component, execution progress and prompt animation that is executing an operation instruction in the target instruction set.
In the embodiment of the application, a controlled component is a component in the vehicle that executes an operation instruction in the target instruction set, such as the air conditioner, windows, doors, trunk, or radio.
In the embodiment of the present application, issuing the operation instructions in the target instruction set corresponding to the contextual model to the controlled components can be understood as obtaining the target execution order of the target instruction set and issuing the operation instructions to the controlled components in that order.
Alternatively, it can be understood as obtaining the operation instructions in the target instruction set, sorting them according to the user's historical habits or the operation instructions in the historical instruction set to obtain the target execution order, and then issuing the instructions to the controlled components in that order.
In the embodiment of the application, when the computer device has created the contextual model corresponding to the scene and determines that the vehicle is in the scene again, it invokes the contextual model corresponding to the scene; it then issues the operation instructions in the corresponding target instruction set to the vehicle's controlled components, so that they execute the corresponding operation instructions. While a controlled component executes its operation instruction, the prompt animation and/or execution progress is displayed through the multimedia system, increasing the user's visual perception of the execution process. Associating the execution of operation instructions with a visualization function in this way improves the user's visual, dynamic experience.
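Steps 401–403 can be sketched as follows; the ComponentBus and MultimediaDisplay interfaces and the "component:action" instruction naming are assumptions standing in for the real vehicle bus and multimedia display.

```kotlin
// Hypothetical interfaces standing in for the vehicle bus and the multimedia display.
interface ComponentBus { fun issue(instruction: String) }
interface MultimediaDisplay { fun show(component: String, progress: String, animation: String) }

fun executeContextualModel(
    targetOrder: List<String>, // target instruction set in target execution order
    bus: ComponentBus,
    display: MultimediaDisplay
) {
    targetOrder.forEachIndexed { i, instr ->
        bus.issue(instr)                             // step 402: issue to the controlled component
        display.show(                                // step 403: visualize the execution
            component = instr.substringBefore(':'),  // assumed "component:action" naming
            progress = "${i + 1}/${targetOrder.size}",
            animation = "executing-$instr"
        )
    }
}
```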
In an application scenario, the following flow addresses at least the problem that a native gadget can only execute a single function of a single application and must be clicked manually to take effect each time, as well as the problem that predefined contextual models are limited in variety, rigid in design, offer few custom actions, lack extensibility, and still require manual clicks by the vehicle user. The contextual model creation and execution process provided by an embodiment of the present application is described below with reference to the flow shown in fig. 6.
First, in the case that a contextual model application has been created and a third-party application has accessed the contextual model interface of the contextual model application, the contextual model application is started in response to a start operation of the user.
In the embodiment of the application, the contextual model application is an application service generated on the vehicle-mounted system based on the method provided by the above embodiments, and it is capable of being accessed by third-party applications: the contextual model application exposes a contextual model interface, through which inter-process communication (Inter-Process Communication, IPC) between a third-party application and the contextual model application is realized, so that instructions for third-party services can be created automatically.
In the embodiment of the application, a third-party application accesses the contextual model interface of the contextual model application to realize inter-process communication. In the Android system, a process generally corresponds to an application or a Service, and under normal conditions different applications cannot access each other directly, so a cross-process communication mechanism is needed. The contextual model interface of the contextual model application can therefore be accessed by a third-party application through one or more of the following cross-process communication modes:
First, Bundle: Bundle implements the Parcelable interface and can therefore be conveniently passed between different processes (by attaching a Bundle of extra information to an Intent).
Second, file sharing: only single-threaded reads and writes may be used at the same time. The more common SharedPreferences is backed by XML at the bottom layer; the system reads and writes it through an in-memory cache, so data may be lost under concurrent multi-process access.
Third, Messenger: Message objects can be transmitted between different processes through a Messenger, with the data to be transmitted placed in the Message. Messenger is a lightweight IPC solution whose underlying implementation is the Android Interface Definition Language (Android Interface Definition Language, AIDL).
Fourth, AIDL: mainly used to call remote service methods, it can register interfaces for use by different processes; it is powerful and supports one-to-many concurrent, real-time communication.
Fifth, ContentProvider: powerful in data-source processing, it supports one-to-many concurrent data sharing and can be extended with additional operations through call methods; it is suitable for one-to-many inter-process data sharing.
Sixth, Socket: it can support one-to-many concurrent real-time communication by transmitting byte streams over the network, and is suitable for network data transmission.
In the embodiment of the application, accessing the contextual model interface of the contextual model application through AIDL is taken as an example: the third-party application accesses the AIDL interface of the contextual model application and implements the related interfaces, which are listed in table 2.
TABLE 2

Interface      Method           Purpose
IWidget        getWidgets()     obtain the full list of component functions exposed by the application
WidgetConfig   widgetName()     obtain the function name of the component
WidgetConfig   widgetLayout()   obtain the small icon displayed to the user for the function
WidgetConfig   widgetAction()   execution method invoked to notify the third-party application when the trigger condition is reached
For the IWidget interface, getWidgets() is used to obtain the full list of component functions and display them to the user for user-defined combination.
WidgetConfig is the configuration of a component: widgetName() is used to obtain the function name of the component, widgetLayout() is used to display the function's small icon to the user, and widgetAction() is the execution method that notifies the third-party application to respond when the trigger condition is reached.
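To make the contract in table 2 concrete, the following is a simplified Java-side sketch of the interfaces described above. In practice these would be declared in .aidl files and the generated methods would throw RemoteException; the exact signatures below are assumptions for illustration only.

```java
import java.util.List;

// Simplified Java-side view of the Table 2 contract; signatures are assumed.
interface WidgetConfig {
    String widgetName();   // function name of the component, shown to the user
    int widgetLayout();    // resource used to display the function's small icon
    void widgetAction();   // execution method invoked when the trigger condition is met
}

interface IWidget {
    // Obtain all component functions the third-party application exposes, so the
    // contextual model application can display them for user-defined combination.
    List<WidgetConfig> getWidgets();
}
```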
Second, the contextual model application uses the package management service (PackageManagerService) to scan all accessed applications;
Third, the contextual model application acquires the functional information of the components provided by the third-party applications;
Fourth, the contextual model application displays the functional information of the components to the user;
here, on startup the contextual model application scans all accessed Service components using PackageManagerService, obtains the component functions provided by the third-party applications, and then displays these component functions to the user.
Fifth, the contextual model application responds to the user's combination operation on components provided by different third-party applications to obtain a combined instruction;
Sixth, the contextual model application stores the combined instruction;
Seventh, the contextual model application detects the scene condition;
the scene condition may be a condition preset by the user, or a condition that the contextual model application automatically generates based on the combined instruction.
Eighth, the contextual model application obtains scene information and determines whether the scene information meets the preset scene condition;
Ninth, when the scene condition is met, the contextual model application issues the corresponding instructions to the component functions of the third-party applications;
Tenth, the components of the third-party applications execute the corresponding instructions and feed back the execution results.
It should be noted that after the user selects and saves a series of component functions, the contextual model application executes the widgetAction() method when the scene condition is met, thereby triggering each component to execute its corresponding instruction.
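The trigger step can be pictured with the following hedged sketch, which reuses the WidgetConfig sketch above; ExecutionCallback is an assumed name for the feedback path of the tenth step.

```java
import java.util.List;

public class ProfileTrigger {

    interface ExecutionCallback {
        void onResult(String widgetName, boolean success);
    }

    /** When the scene condition is met, run each saved component function in turn. */
    public void onSceneConditionMet(List<WidgetConfig> savedCombination,
                                    ExecutionCallback callback) {
        for (WidgetConfig widget : savedCombination) {
            try {
                widget.widgetAction();                        // a cross-process call in practice
                callback.onResult(widget.widgetName(), true);
            } catch (RuntimeException e) {                    // e.g. the remote process has died
                callback.onResult(widget.widgetName(), false);
            }
        }
    }
}
```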
In the embodiment of the application, while the corresponding instructions are executed, that is, while the contextual model is executed, the central control interface displays through the multimedia interface which component is executing which step and its progress, assisted by animation to further improve the user's perception. Meanwhile, some hidden operations can be added on the central control display interface so that the user can modify or skip the contextual model, making the usage more flexible.
The embodiment of the application provides a computer device, which comprises a memory and a processor, wherein the memory stores a computer program capable of running on the processor, and the processor realizes part or all of the steps in the method when executing the program.
Embodiments of the present application provide a storage medium storing one or more computer programs executable by one or more processors to implement some or all of the steps of the above-described methods. The storage medium may be transitory or non-transitory.
Embodiments of the present application provide a computer program comprising computer readable code which, when run in a computer device, causes a processor in the computer device to perform some or all of the steps for carrying out the above method.
Embodiments of the present application provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program which, when read and executed by a computer, performs some or all of the steps of the above-described method. The computer program product may be realized in particular by means of hardware, software or a combination thereof. In some embodiments, the computer program product is embodied as a computer storage medium, in other embodiments the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
It should be noted here that: the above description of various embodiments is intended to emphasize the differences between the various embodiments, the same or similar features being referred to each other. The above description of apparatus, storage medium, computer program and computer program product embodiments is similar to that of method embodiments described above, with similar advantageous effects as the method embodiments. For technical details not disclosed in the embodiments of the apparatus, the storage medium, the computer program and the computer program product of the present application, reference should be made to the description of the embodiments of the method of the present application.
Fig. 7 is a schematic diagram of a hardware entity of a computer device according to an embodiment of the present application. As shown in fig. 7, the hardware entity of the computer device 7 includes: a processor 701 and a memory 702, wherein the memory 702 stores a computer program executable on the processor 701, and the processor 701 performs the following steps when executing the program:
determining a scene where a vehicle is currently located;
in response to the scene having a requirement to construct a contextual model, collecting one or more operation instructions input by a vehicle user in the scene, and integrating the operation instructions to obtain a current instruction set;
invoking a historical instruction set of the vehicle user in the scene, and determining a target instruction set based on the current instruction set and the historical instruction set;
and creating a first scene mode corresponding to the scene in the vehicle system of the vehicle by utilizing the target instruction set.
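The four steps above can be summarized as a structural template; every type and method name in the following sketch is a placeholder assumed for illustration, since the application does not prescribe concrete data structures.

```java
// Structural sketch only: the creation flow as a template method.
public abstract class ProfileCreator<Scene, InstructionSet> {

    /** Runs the creation flow once for the vehicle's current situation. */
    public final void maybeCreateProfile() {
        Scene scene = detectCurrentScene();                        // determine the current scene
        if (!hasProfileRequirement(scene)) {
            return;                                                // no contextual model wanted here
        }
        InstructionSet current = collectAndIntegrate(scene);       // user's operations in the scene
        InstructionSet history = invokeHistoricalSet(scene);       // past instruction sets in the scene
        InstructionSet target = determineTarget(current, history); // current + history -> target
        createFirstSceneMode(scene, target);                       // register in the vehicle-machine system
    }

    protected abstract Scene detectCurrentScene();
    protected abstract boolean hasProfileRequirement(Scene scene);
    protected abstract InstructionSet collectAndIntegrate(Scene scene);
    protected abstract InstructionSet invokeHistoricalSet(Scene scene);
    protected abstract InstructionSet determineTarget(InstructionSet current, InstructionSet history);
    protected abstract void createFirstSceneMode(Scene scene, InstructionSet target);
}
```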
In other embodiments of the present application, the processor 701 may also implement the following steps when executing the program:
acquiring scene information of the vehicle; wherein the scene information includes one or more of: current time information of the vehicle, current position information of the vehicle, environment information of the position of the vehicle, and vehicle state information; and determining the current scene of the vehicle according to the scene information.
In other embodiments of the present application, the processor 701 may also implement the following steps when executing the program:
detecting the occurrence frequency of the scene; and if the frequency meets a frequency condition, displaying first prompt information, where the first prompt information is used to prompt the vehicle user whether there is a requirement to construct a contextual model corresponding to the scene.
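A minimal sketch of such a frequency check follows, assuming a plain in-memory counter and a numeric threshold standing in for the frequency condition.

```java
import java.util.HashMap;
import java.util.Map;

public class SceneFrequencyDetector {

    private final Map<String, Integer> occurrences = new HashMap<>();
    private final int threshold; // placeholder for the "frequency condition"

    public SceneFrequencyDetector(int threshold) {
        this.threshold = threshold;
    }

    /** Records one occurrence of the scene; returns true when the prompt should be shown. */
    public boolean onSceneDetected(String sceneId) {
        int count = occurrences.merge(sceneId, 1, Integer::sum);
        return count >= threshold; // show "construct a contextual model?" prompt
    }
}
```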
In other embodiments of the present application, the processor 701 may also implement the following steps when executing the program:
acquiring all historical instruction sets input by the vehicle user in the scene; classifying and counting all the historical instruction sets based on the operation instructions they carry, to obtain the historical execution times corresponding to each classified historical instruction set; and screening, from the classified historical instruction sets, the historical instruction sets whose historical execution times meet the times condition.
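One plausible reading of this classify-count-filter step is sketched below, assuming each historical instruction set can be keyed by the (order-insensitive) set of instruction identifiers it carries.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public class HistoryFilter {

    /** Keeps only the historical sets whose execution count meets the threshold. */
    public static List<Set<String>> filter(List<Set<String>> allHistoricalSets,
                                           int minExecutions) {
        Map<Set<String>, Integer> counts = new HashMap<>();
        for (Set<String> set : allHistoricalSets) {
            counts.merge(new TreeSet<>(set), 1, Integer::sum); // identical sets classify together
        }
        List<Set<String>> kept = new ArrayList<>();
        for (Map.Entry<Set<String>, Integer> e : counts.entrySet()) {
            if (e.getValue() >= minExecutions) {
                kept.add(e.getKey());
            }
        }
        return kept;
    }
}
```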
In other embodiments of the present application, the processor 701 may also implement the following steps when executing the program:
calculating the similarity between the current instruction set and the historical instruction set, and determining the historical instruction set with the highest similarity as the target instruction set; or taking the operation instructions that belong to both the current instruction set and the historical instruction set to obtain the target instruction set; or merging the operation instructions carried by the current instruction set with the operation instructions carried by the historical instruction set to obtain the target instruction set.
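The three alternatives can be sketched as follows; Jaccard similarity is one reasonable interpretation of "similarity" here, though the application does not fix a particular metric.

```java
import java.util.HashSet;
import java.util.Set;

public class TargetSetStrategies {

    /** Jaccard similarity: |intersection| / |union|, an assumed choice of metric. */
    public static double jaccard(Set<String> current, Set<String> history) {
        Set<String> inter = new HashSet<>(current);
        inter.retainAll(history);
        Set<String> union = new HashSet<>(current);
        union.addAll(history);
        return union.isEmpty() ? 0.0 : (double) inter.size() / union.size();
    }

    /** Instructions belonging to BOTH sets (the second option above). */
    public static Set<String> intersection(Set<String> current, Set<String> history) {
        Set<String> result = new HashSet<>(current);
        result.retainAll(history);
        return result;
    }

    /** Merge of the instructions carried by both sets (the third option above). */
    public static Set<String> union(Set<String> current, Set<String> history) {
        Set<String> result = new HashSet<>(current);
        result.addAll(history);
        return result;
    }
}
```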
In other embodiments of the present application, the processor 701 may also implement the following steps when executing the program:
obtaining a first execution order of the operation instructions carried by the current instruction set and a second execution order of the operation instructions carried by the historical instruction set; calibrating the target execution order of the target instruction set based on the first execution order and the second execution order; and creating the first scene mode corresponding to the scene in the vehicle-machine system of the vehicle using the target instruction set and its target execution order.
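One plausible calibration, assumed for illustration, is to follow the historical (second) order for instructions the history already contains and then append the remaining instructions in their current (first) order:

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

public class OrderCalibrator {

    /**
     * Instructions known from history keep the historical (second) order;
     * instructions only in the current set are appended in their current (first)
     * order. Duplicates collapse to their first position via the LinkedHashSet.
     */
    public static List<String> calibrate(List<String> firstOrder, List<String> secondOrder) {
        LinkedHashSet<String> result = new LinkedHashSet<>(secondOrder);
        result.addAll(firstOrder);
        return new ArrayList<>(result);
    }
}
```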
In other embodiments of the present application, the processor 701 may also implement the following steps when executing the program:
if, in the case that the vehicle is in the scene once or several times, it is monitored that the vehicle user performs the same editing operation on the target instruction set, obtaining the editing information of the vehicle user; and adjusting the target instruction set corresponding to the scene based on the editing information, and modifying the first scene mode into a second scene mode.
In other embodiments of the present application, the processor 701 may also implement the following steps when executing the program:
in the case that a scene mode corresponding to the scene has been created, determining that the vehicle is in the scene again, and obtaining the scene mode corresponding to the scene; issuing the operation instructions in the target instruction set corresponding to the scene mode to the vehicle controlled components so that they execute the corresponding operation instructions; and displaying at least one of the vehicle controlled component that is executing an operation instruction in the target instruction set, the execution progress, and a prompt animation.
The memory 702 stores a computer program executable on the processor and is configured to store instructions and applications executable by the processor 701; it may also buffer data (e.g., image data, audio data, voice communication data, and video communication data) to be processed or already processed by the modules in the processor 701 and the computer device 7, and may be implemented by a FLASH memory (FLASH) or a random access memory (Random Access Memory, RAM).
The processor 701 implements the steps of any of the above contextual model creation or contextual model execution methods when executing a program. The processor 701 generally controls the overall operation of the computer device 7.
Embodiments of the present application provide a computer storage medium storing one or more programs executable by one or more processors to implement the steps of the contextual model creation or contextual model execution method of any of the embodiments above.
It should be noted here that: the description of the storage medium and apparatus embodiments above is similar to that of the method embodiments described above, with similar benefits as the method embodiments. For technical details not disclosed in the embodiments of the storage medium and the apparatus of the present application, please refer to the description of the method embodiments of the present application.
The processor may be at least one of an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a digital signal processor (Digital Signal Processor, DSP), a digital signal processing device (Digital Signal Processing Device, DSPD), a programmable logic device (Programmable Logic Device, PLD), a field programmable gate array (Field Programmable Gate Array, FPGA), a central processing unit (Central Processing Unit, CPU), a controller, a microcontroller, and a microprocessor. It will be appreciated that the electronic device implementing the above processor function may also be another device; the embodiments of the present application are not specifically limited in this regard.
The computer storage medium/Memory may be a Read Only Memory (ROM), a programmable Read Only Memory (Programmable Read-Only Memory, PROM), an erasable programmable Read Only Memory (Erasable Programmable Read-Only Memory, EPROM), an electrically erasable programmable Read Only Memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), a magnetic random access Memory (Ferromagnetic Random Access Memory, FRAM), a Flash Memory (Flash Memory), a magnetic surface Memory, an optical disk, or a Read Only optical disk (Compact Disc Read-Only Memory, CD-ROM); but may also be various terminals such as mobile phones, computers, tablet devices, personal digital assistants, etc., that include one or any combination of the above-mentioned memories.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present application, the sequence numbers of the above steps/processes do not imply an order of execution; the execution order of the steps/processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application. The foregoing embodiment numbers of the present application are for description only and do not represent the superiority or inferiority of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The device embodiments described above are only illustrative; for example, the division of the units is only a logical function division, and there may be other divisions in practice, such as: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling or direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be electrical, mechanical, or of other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units; can be located in one place or distributed to a plurality of network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may be separately used as one unit, or two or more units may be integrated in one unit; the integrated units may be implemented in hardware or in hardware plus software functional units.
Those of ordinary skill in the art will appreciate that: all or part of the steps for implementing the above method embodiments may be implemented by hardware related to program instructions, and the foregoing program may be stored in a computer readable storage medium, where the program, when executed, performs steps including the above method embodiments; and the aforementioned storage medium includes: a mobile storage device, a Read Only Memory (ROM), a magnetic disk or an optical disk, or the like, which can store program codes.
Alternatively, the above-described integrated units of the present application may be stored in a computer-readable storage medium if implemented in the form of software functional modules and sold or used as separate products. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the related art, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, a vehicle-mounted terminal, or the like) to perform all or part of the methods according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program code, such as a removable storage device, a ROM, a magnetic disk, or an optical disk.
The foregoing is merely an embodiment of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the scope of the present application.

Claims (10)

1. A contextual model creation method, the method comprising:
determining a scene where a vehicle is currently located;
in response to the scene having a requirement to construct a contextual model, collecting one or more operation instructions input by a vehicle user in the scene, and integrating the operation instructions to obtain a current instruction set;
invoking a historical instruction set of the vehicle user in the scene, and determining a target instruction set based on the current instruction set and the historical instruction set;
and creating a first scene mode corresponding to the scene in a vehicle-to-machine system of the vehicle by utilizing the target instruction set.
2. The method of claim 1, wherein the determining the scene in which the vehicle is currently located comprises:
acquiring scene information of the current vehicle;
wherein the scene information includes one or more of the following: the current time information, the current position information, the environment information of the position of the vehicle and the vehicle state information of the vehicle;
And determining the current scene of the vehicle according to the scene information.
3. The method of claim 1, wherein before said responding to the scene having a requirement to construct a contextual model, the method comprises:
detecting the occurrence frequency of the scene;
and if the frequency meets a frequency condition, displaying first prompt information, wherein the first prompt information is used for prompting the vehicle user whether there is a requirement to construct a contextual model for the scene.
4. A method according to any one of claims 1 to 3, wherein said invoking the historical instruction set of the vehicle user in the scene comprises:
obtaining all historical instruction sets input by the vehicle user in the scene;
classifying and counting all the historical instruction sets based on the carried operation instructions to obtain the historical execution times corresponding to each classified historical instruction set;
and screening the historical instruction set with the historical execution times meeting the times condition from the classified historical instruction set.
5. A method according to any one of claims 1 to 3, wherein the determining the target instruction set based on the current instruction set and the historical instruction set comprises:
calculating the similarity between the current instruction set and the historical instruction set, and determining the historical instruction set with the highest similarity as the target instruction set; or,
taking the operation instructions that belong to both the current instruction set and the historical instruction set to obtain the target instruction set; or,
merging the operation instructions carried by the current instruction set with the operation instructions carried by the historical instruction set to obtain the target instruction set.
6. A method according to any one of claims 1 to 3, wherein said creating a first scene mode corresponding to said scene within an on-board system of said vehicle using said target instruction set comprises:
obtaining a first execution sequence of the operation instructions carried by the current instruction set and a second execution sequence of the operation instructions carried by the historical instruction set;
calibrating a target execution order of the target instruction set based on the first execution order and the second execution order;
and creating a first scene mode corresponding to the scene in a vehicle system of the vehicle by utilizing the target instruction set and the target execution sequence of the target instruction set.
7. A method according to any one of claims 1 to 3, further comprising:
if, in the case that the vehicle is in the scene once or several times, it is monitored that the vehicle user performs the same editing operation on the target instruction set, obtaining editing information of the vehicle user;
and adjusting the target instruction set corresponding to the scene based on the editing information, and modifying the first scene mode into a second scene mode.
8. A method of contextual model execution, the method comprising:
under the condition that a scene mode corresponding to a scene is created, determining that a vehicle is in the scene again, and obtaining the scene mode corresponding to the scene;
issuing an operation instruction in a target instruction set corresponding to the scene mode to a vehicle controlled component so that the vehicle controlled component executes the corresponding operation instruction;
displaying at least one of the vehicle controlled component, execution progress, and cue animation that is executing an operation instruction in the target instruction set.
9. A computer device, characterized in that the computer device comprises: a memory and a processor, wherein
the memory stores a computer program executable on the processor, and
the processor implements the contextual model creation method according to any one of claims 1 to 7 or the contextual model execution method according to claim 8 when executing the program.
10. A storage medium storing one or more computer programs executable by one or more processors to implement the contextual model creation method of any one of claims 1 to 7 or the contextual model execution method of claim 8.
CN202310655944.1A 2023-06-02 2023-06-02 Contextual model creation method, contextual model execution device and storage medium Pending CN116795468A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310655944.1A CN116795468A (en) 2023-06-02 2023-06-02 Contextual model creation method, contextual model execution device and storage medium

Publications (1)

Publication Number Publication Date
CN116795468A true CN116795468A (en) 2023-09-22

Family

ID=88041369

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310655944.1A Pending CN116795468A (en) 2023-06-02 2023-06-02 Contextual model creation method, contextual model execution device and storage medium

Country Status (1)

Country Link
CN (1) CN116795468A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118061936A (en) * 2024-03-28 2024-05-24 重庆赛力斯凤凰智创科技有限公司 Function mode regulation and control method and device based on full-period custom car scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination