CN117061559A - Scene engine management method and device and electronic equipment - Google Patents
- Publication number
- CN117061559A (application number CN202311083241.2A)
- Authority
- CN
- China
- Prior art keywords
- scene
- information
- rule
- atomic
- module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/445—Program loading or initiating
- G06F9/44505—Configuring for program initiating, e.g. using registry, configuration files
- G06F9/4451—User profiles; Roaming
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0803—Configuration setting
- H04L41/0813—Configuration setting characterised by the conditions triggering a change of settings
- H04L41/082—Configuration setting characterised by the conditions triggering a change of settings the condition being updates or upgrades of network functionality
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/12—Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
Abstract
The disclosure provides a scene engine management method, a scene engine management apparatus, and an electronic device, relating to the technical field of vehicles, and in particular to the technical fields of intelligent automobiles, automatic driving, and scene engines. A specific implementation, applied to a cloud platform, is as follows: acquiring first information of a vehicle end, the first information including at least one of a vehicle model, a scene engine version, and user information of the vehicle end; matching the first information with target information associated with a scene, where the scene is used to execute different function combinations at the vehicle end when a specific condition is met, the target information includes M pieces of second information associated one-to-one with M atomic capabilities of the scene, and each piece of second information includes at least one of a vehicle model, a scene engine version, and user information to which the atomic capability is applicable; and sending a configuration file of the scene to the vehicle end when the matching result indicates that the intersection of the M pieces of second information includes the first information.
Description
Technical Field
The disclosure relates to the technical field of vehicles, in particular to the technical fields of intelligent automobiles, automatic driving, and scene engines, and specifically to a scene engine management method, a scene engine management apparatus, and an electronic device.
Background
In the intelligent automobile era, rules can be preset or customized based on data about users, vehicles, environments, and the like, so that a vehicle executes different function combinations, i.e., scenes, when specific conditions are met. Different scenes require the scene engine to achieve collaboration through constraint and rule mechanisms.
In the current era of software-defined automobiles, more and more intelligent automobiles allow product operators and creators to flexibly create scenes by defining different rules, supporting brand promotion, business activities, content and service recommendation, and the like.
Currently, in the supplier delivery mode, each vehicle model separately maintains its own set of scene engine software.
Disclosure of Invention
The disclosure provides a scene engine management method, a scene engine management device and electronic equipment.
According to a first aspect of the present disclosure, a scene engine management method is provided, applied to a cloud platform, including:
acquiring first information of a vehicle end, wherein the first information comprises at least one of a vehicle type, a scene engine version and user information of the vehicle end;
matching the first information with target information associated with a scene, wherein the scene is used for executing different function combinations at a vehicle end under the condition that a specific condition is met, the target information comprises M pieces of second information which are associated with M pieces of atomic capacity of the scene one by one, the second information comprises at least one of a vehicle type, a scene engine version and user information to which the atomic capacity is applicable, and M is a positive integer;
and sending a configuration file of the scene to the vehicle end when the matching result indicates that the intersection of the M pieces of second information associated one-to-one with the M atomic capabilities of the scene includes the first information, wherein the configuration file is used for indicating the triggering and running of the scene.
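The three steps of the first aspect can be sketched as follows. This is a minimal illustrative model, not the patent's actual implementation; the function name and the tuple layout of the first information are assumptions.

```python
def match_scene(first_info, second_infos):
    """Cloud-side decision: the scene's configuration file is sent only
    when the intersection of the M second-information sets (one per
    atomic capability) contains the vehicle end's first information,
    here modeled as a (vehicle model, engine version, user) tuple."""
    if not second_infos:
        return False
    # Intersection of the M applicability sets, one per atomic capability.
    applicable = set.intersection(*(set(s) for s in second_infos))
    return first_info in applicable
```

A vehicle whose first information appears in every capability's applicability set matches the scene; a vehicle missing from any one set does not.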
According to a second aspect of the present disclosure, there is provided a scene engine management method, applied to a vehicle end, including:
the method comprises the steps of sending first information to a cloud platform, wherein the first information comprises at least one of a vehicle type, a scene engine version and user information of a vehicle end;
when the cloud platform matches the first information with target information associated with a scene and the matching result indicates that the intersection of M pieces of second information associated one-to-one with M atomic capabilities of the scene includes the first information, receiving a configuration file of the scene sent by the cloud platform, wherein the configuration file is used for indicating the triggering and running of the scene; wherein,
the scene is used for executing different function combinations at a vehicle end under the condition that specific conditions are met, the target information comprises M pieces of second information which are in one-to-one association with M pieces of atomic capabilities of the scene, the second information comprises at least one of a vehicle type, a scene engine version and user information to which the atomic capabilities are applicable, and M is a positive integer.
According to a third aspect of the present disclosure, there is provided a scene engine management apparatus applied to a cloud platform, including:
the system comprises a first acquisition module, a second acquisition module and a first processing module, wherein the first acquisition module is used for acquiring first information of a vehicle end, and the first information comprises at least one of a vehicle type, a scene engine version and user information of the vehicle end;
the matching module is used for matching the first information with target information associated with a scene, the scene is used for executing different function combinations at a vehicle end under the condition that a specific condition is met, the target information comprises M pieces of second information which are associated with M atomic capabilities of the scene one by one, the second information comprises at least one of a vehicle type, a scene engine version and user information to which the atomic capabilities are applicable, and M is a positive integer;
the first sending module is used for sending a configuration file of the scene to the vehicle end when the matching result indicates that the intersection of the M pieces of second information associated one-to-one with the M atomic capabilities of the scene includes the first information, and the configuration file is used for indicating the triggering and running of the scene.
According to a fourth aspect of the present disclosure, there is provided a scene engine management apparatus applied to a vehicle end, including:
The second sending module is used for sending first information to the cloud platform, wherein the first information comprises at least one of a vehicle type, a scene engine version and user information of the vehicle end;
the receiving module is used for receiving a configuration file of the scene sent by the cloud platform when the cloud platform matches the first information with target information associated with the scene and the matching result indicates that the intersection of the M pieces of second information associated one-to-one with the M atomic capabilities of the scene includes the first information, wherein the configuration file is used for indicating the triggering and running of the scene; wherein,
the scene is used for executing different function combinations at a vehicle end under the condition that specific conditions are met, the target information comprises M pieces of second information which are in one-to-one association with M pieces of atomic capabilities of the scene, the second information comprises at least one of a vehicle type, a scene engine version and user information to which the atomic capabilities are applicable, and M is a positive integer.
According to a fifth aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any one of the methods of the first aspect or to perform any one of the methods of the second aspect.
According to a sixth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform any one of the methods of the first aspect, or to perform any one of the methods of the second aspect.
According to a seventh aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements any of the methods of the first aspect or implements any of the methods of the second aspect.
The technology of the present disclosure solves the problem of complex scene engine management in the related art, enables scenes to be compatible with different scene engine software, vehicle models, and users, and reduces the difficulty of scene engine management.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a flow diagram of a scene engine management method according to a first embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a framework of a cloud platform;
FIG. 3 is a schematic diagram of an engine version management page;
FIG. 4 is a schematic diagram of a scene creation page;
FIG. 5 is a schematic diagram of a scene orchestration page;
FIG. 6 is a schematic diagram of a scene display page;
FIG. 7 is one of the schematic diagrams of an atomic capability orchestration page;
FIG. 8 is a second schematic diagram of an atomic capability orchestration page;
FIG. 9 is a schematic diagram of an atomic module creation page;
FIG. 10 is a flow diagram of a scene engine management method according to a second embodiment of the disclosure;
FIG. 11 is a schematic diagram of the framework of the vehicle end;
FIG. 12 is a flow chart of scene lifecycle management of the present embodiment;
fig. 13 is a schematic structural view of a scene engine management apparatus according to a third embodiment of the present disclosure;
fig. 14 is a schematic structural view of a scene engine management apparatus according to a fourth embodiment of the present disclosure;
fig. 15 is a schematic block diagram of an example electronic device used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The rapid development of intelligent automobiles not only brings continuous updating and iteration of vehicle models and increasingly rich functions, but also keeps raising the frequency of software updates via Over-The-Air (OTA) technology. This poses certain challenges to scene management: the complexity of scenes used by different users on different vehicle models, on different vehicles of the same model, and on different software versions of the same vehicle keeps increasing, including the difficulty for users/operators to understand scenes and the complexity of the scene engine software.
These scenes, the atomic capabilities that compose them, and the scene engine software at the vehicle end require a unified management platform for management and maintenance.
At present, in the supplier delivery mode, each vehicle model separately maintains a set of scene engine software, incurring high development and maintenance costs. Moreover, the openness of current scene engines is relatively limited; a single general-purpose engine can be used, but for the differentiated atomic capabilities of a specific vehicle model or a specific software version, scene compatibility across vehicle models and scene engine versions cannot be achieved.
It can be seen that scene engine management is relatively difficult. The present embodiment, however, can make scenes compatible with different scene engine software, vehicle models, and users, and reduce the difficulty of scene engine management. The scene engine management method of this embodiment is described in detail below.
First embodiment
As shown in fig. 1, the present disclosure provides a scene engine management method applied to a cloud platform, including the following steps:
step S101: and acquiring first information of a vehicle end, wherein the first information comprises at least one of a vehicle type, a scene engine version and user information of the vehicle end.
In this embodiment, the scene engine management method relates to the technical field of vehicles, in particular to the technical fields of intelligent automobiles, automatic driving and scene engines, and can be widely applied to a cloud platform of a scene engine management system, wherein the scene engine management system comprises the cloud platform and a vehicle end. The scene engine management method of the embodiment of the present disclosure may be performed by the scene engine management apparatus of the embodiment of the present disclosure. The scene engine management apparatus of the embodiment of the present disclosure may be configured in any electronic device to perform the scene engine management method of the embodiment of the present disclosure.
Vehicle models are distinguished by a unique vehicle model identification (ID); different vehicle models differ in brand, model, model year, configuration, version, and the like.
The scene engine version can be distinguished by a version number or a version ID. The version number may be an integer, which is convenient for operators to memorize and manage and can be matched against the applicable version range of each atomic capability. The version ID is a three-part version number used for version management of the vehicle-end application and kept consistent with the version specifications of other vehicle-end software. The user information may include the user account logged in to the vehicle-end scene engine software.
The cloud platform can receive a scene download request sent by the vehicle end; the request may carry the first information, so the cloud platform can acquire the first information of the vehicle end. When actively issuing a scene, the cloud platform can also obtain the first information of the vehicle end from pre-stored information: when the vehicle end reports its vehicle model, scene engine version, and user information, the cloud platform stores them at the corresponding locations.
Step S102: matching the first information with target information associated with a scene, where the scene is used for executing different function combinations at the vehicle end when a specific condition is met, the target information includes M pieces of second information associated one-to-one with M atomic capabilities of the scene, and each piece of second information includes at least one of a vehicle model, a scene engine version, and user information to which the atomic capability is applicable.
Wherein M is a positive integer.
In recent years, the automobile field has undergone earth-shaking changes: not only has traditional automobile technology developed greatly, but more and more intelligent control technologies have been added to vehicles. The purpose of intelligent control technology is to move beyond the passive control of the past and realize active control, and scenes can be used to implement active vehicle control.
A scene includes rules characterizing the mapping between conditions and actions. A scene may include one or more rules; the condition of a rule is the execution condition of the rule's action. The conditions of all rules, or of part of the rules, within a scene may constitute the trigger condition of the scene, i.e., the judgment condition for determining whether to enter the scene.
Both rule conditions and rule actions can be atomic capabilities of atomic modules: a rule condition can be atomic information of an atomic module of the vehicle, and a rule action can be an atomic action of an atomic module of the vehicle, where an atomic module may be a functional module of the vehicle. A scene is thus used to describe a series of actions or events that occur with the atomic information of atomic modules as the trigger condition.
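The rule structure described above, conditions mapping to actions, with the rule conditions together forming the scene's trigger condition, can be sketched as follows. The class names and fields are illustrative assumptions, not the patent's data model.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Rule:
    """Maps a condition (atomic information of an atomic module) to an
    action (atomic action of an atomic module)."""
    condition: Callable[[Dict], bool]  # evaluated against vehicle state
    action: Callable[[], str]          # executed when the condition holds


@dataclass
class Scene:
    name: str
    rules: List[Rule] = field(default_factory=list)

    def triggered(self, state: Dict) -> bool:
        # Here the conditions of all rules constitute the trigger condition.
        return all(r.condition(state) for r in self.rules)

    def run(self, state: Dict) -> List[str]:
        # Execute the action of every rule whose condition holds.
        return [r.action() for r in self.rules if r.condition(state)]
```

A scene with two rules keyed to the same condition fires both actions when the state satisfies the trigger condition.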
Each atomic capability of the scene may be associated with second information, which may include at least one of a vehicle model, a scene engine version, and user information to which the atomic capability is applicable. For example, the second information may include a model of a vehicle and a scene engine version to which the atomic capability is applicable.
In one example, the scene is as follows: when the PM2.5 index inside the vehicle exceeds 150, close all windows and the sunroof, automatically start air purification at level 2, and notify the user by voice.
The condition factors and execution factors, i.e., the atomic capabilities, required by this scene are shown in Table 1 below.
Table 1: Atomic capability table of the scene
TTS broadcasting refers to Text-To-Speech (TTS) broadcasting. The applicable minimum version is the lowest scene engine software version number to which the atomic capability applies, and the applicable maximum version is the highest such version number. An entry of "general" indicates that the atomic capability is applicable to any vehicle model or scene engine version.
As shown in Table 1 above, the second information may include the applicable vehicle model and scene engine version associated with an atomic capability. For example, for the first condition factor, the associated second information includes: model A high configuration, model B high configuration, and scene engine versions from version number 5 to the currently largest version. The target information may be the set of second information associated with the atomic capabilities, i.e., the set of second information associated with the individual factors.
Matching determines, through information comparison, whether the vehicle model, scene engine version, and so on of the vehicle end satisfy the triggering and running requirements of the scene. Specifically, the intersection of the M pieces of second information is obtained, and whether this intersection includes the first information is determined; if so, the scene matches the vehicle end, and if not, it does not. When the scene has only one atomic capability, the intersection is that piece of second information itself.
For example, as shown in Table 1 above, the intersection of the pieces of second information for this scene, i.e., the applicable vehicle model and applicable engine version corresponding to the scene, is as follows:
Applicable vehicle model: model B high configuration;
Engine minimum version: 5;
Engine maximum version: the currently existing maximum scene engine version.
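The intersection in the worked example above can be computed per attribute: intersect the applicable model sets and narrow the engine version range to [maximum of the minimums, minimum of the maximums]. A sketch, where the tuple representation of a capability's applicability is an assumption and the model names mirror the example:

```python
def scene_applicability(capabilities):
    """Each capability is (applicable_models, min_version, max_version);
    max_version=None stands for 'the currently existing maximum scene
    engine version'. Returns the scene-level applicable models and the
    scene-level engine version range."""
    models = set.intersection(*(set(m) for m, _, _ in capabilities))
    min_v = max(lo for _, lo, _ in capabilities)
    maxes = [hi for _, _, hi in capabilities if hi is not None]
    max_v = min(maxes) if maxes else None  # None: current maximum version
    return models, min_v, max_v
```

With one capability applicable to models A and B (high configuration) from version 5 and another applicable only to model B from version 1, the scene as a whole applies to model B high configuration from version 5 onward, matching the listing above.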
The scene download request received by the cloud platform from the vehicle end can also carry a scene identifier, based on which the target information of the corresponding scene can be acquired and then matched against the first information.
When the cloud platform actively issues scenes to the vehicle end, scenes suitable for the vehicle end can be screened from all scenes to be issued. Specifically, the first information is matched with the target information associated with each scene, so that scenes matching the vehicle end are screened out and issued. Scenes are screened between the cloud platform and the vehicle end and can be selectively issued according to the vehicle's model information, current scene engine version, and logged-in user ID, so that the vehicle is identified along each dimension.
Because the vehicle model information and scene version information of the vehicle end and the cloud platform are matched, all issued scenes can be executed on the vehicle; whether a scene effective for a specific user is activated is distinguished according to different user IDs.
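The two-stage screening described above, issuing by vehicle model and engine version and then activating per user ID, might look like the following sketch; the scene field names are hypothetical.

```python
def screen_scenes(scenes, model, engine_version, user_id):
    """Issue every scene whose model set and engine version range match
    the vehicle; among the issued scenes, activate only those effective
    for the logged-in user (user_id of None: effective for all users)."""
    issued = [
        s for s in scenes
        if model in s["models"]
        and s["min_version"] <= engine_version
        and (s["max_version"] is None or engine_version <= s["max_version"])
    ]
    active = [s for s in issued if s["user_id"] in (None, user_id)]
    return issued, active
```

In this model every issued scene is executable on the vehicle, but a scene bound to another user's ID is issued without being activated.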
Step S103: sending a configuration file of the scene to the vehicle end when the matching result indicates that the intersection of the M pieces of second information associated one-to-one with the M atomic capabilities of the scene includes the first information, where the configuration file is used for indicating the triggering and running of the scene.
When the scene matches the vehicle end, the configuration file of the scene can be sent to the vehicle end to indicate the triggering and running of the scene at the vehicle end.
In this embodiment, the atomic capabilities of the scene are associated with the applicable vehicle models, scene engine versions, and user information. The intersection of the M pieces of second information associated one-to-one with the M atomic capabilities determines the vehicle models, scene engine versions, and user information to which the scene applies; whether the first information of the vehicle end falls within this range then determines whether the vehicle end matches the scene, and if so, the configuration file of the scene is issued to the vehicle end. By managing the various dimensions of the scene engine (such as scene engine version and vehicle model), the validity of a scene can be judged along different dimensions and matching scenes issued to the vehicle end. This effectively solves the problems of version fragmentation, difficult collaboration, and repeated waste of resources brought by rapid iteration of vehicle-end hardware/software, realizes scene compatibility with different scene engine software, vehicle models, and users, and reduces the difficulty of scene engine management.
Optionally, the method further comprises:
when a version update signal of the vehicle end's scene engine software is obtained, creating a scene engine version for the vehicle end on the baseline version of the scene engine software, based on an engine version management page, to obtain scene engine version information of the vehicle end; wherein,
the scene engine version information includes the version number of the vehicle end's scene engine software, and the version number is used for matching against the availability of the M atomic capabilities of the scene.
The cloud platform can manage each dimension related to the scene engine. A framework diagram of the cloud platform is shown in FIG. 2; the cloud platform may include a scene engine management module 201, which manages the scene engine versions and the baseline version of the scene engine software.
The scene engine is maintained as independent software with its own versions. At each overall OTA upgrade, if the scene engine needs updating, the latest version of the scene engine is integrated into the issued upgrade package. When handling related operation and configuration work, only the scene engine version needs to be considered, avoiding the complexity of involving the overall OTA software version.
When an updated version is needed, a new version of the scene engine is created on the management platform. Specifically, when the update process of the vehicle end's scene engine software is in progress or completed, a version update signal can be sent to the cloud platform; upon obtaining the signal, the cloud platform creates the scene engine version of the vehicle end on the baseline version of the scene engine software based on the engine version management page, obtaining the scene engine version information of the vehicle end.
FIG. 3 is a schematic diagram of an engine version management page, as shown in FIG. 3, the maintenance information of the scene engine software includes, but is not limited to, the following:
version number: an integer, convenient for operators to memorize and manage, which can be matched against the version availability of each atomic capability; that is, an atomic capability can be associated with the version number of the scene engine software;
version ID: a three-segment version number used for version management of the vehicle-end application, kept consistent with the version specifications of other vehicle-end software;
version name;
version description.
As shown in fig. 3, the baseline version of the scene engine software at the vehicle end is a scene engine version with version number of 20 and version ID of 1.0.1.
The engine version management page may be actively triggered by a software update signal, or may be manually triggered by an operator when it is determined that a version update signal has been received; this is not specifically limited herein.
When the cloud platform subsequently issues a scene, the scene can be matched according to the version number of the vehicle-end scene engine software, and the configuration file of the scene is issued only if the match succeeds. In this way, management of the scene engine versions is realized through the interactive cooperation of the cloud platform and the vehicle end, which can effectively solve the problems of version confusion, difficult collaboration, and repeated waste of resources brought by the rapid iteration of vehicle-end hardware/software, achieves compatibility of scenes with different scene engine software, and reduces the difficulty of managing the scene engine.
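As a minimal sketch of the version match described above (all names are hypothetical, not from the patent), the scene is deliverable only when the vehicle's integer engine version number lies inside the version range declared by every atomic capability the scene uses:

```python
def scene_supports_engine_version(capability_ranges, vehicle_version):
    """capability_ranges: one (min_version, max_version) integer pair per
    atomic capability of the scene; vehicle_version: the vehicle end's
    scene engine version number. All ranges must include the version."""
    return all(lo <= vehicle_version <= hi for lo, hi in capability_ranges)
```

For example, with capability ranges `[(10, 30), (15, 40)]`, version 20 matches both ranges while version 12 fails the second.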
Optionally, the step S102 specifically includes:
when the vehicle type of the vehicle end matches the target vehicle type information, matching the first information with the target information associated with the scene, wherein the target vehicle type information is the vehicle type information supported by the scene engine and acquired from the Internet of Vehicles platform.
The cloud platform may manage each dimension related to the scene engine, as shown in fig. 2, and may include a vehicle type management module 202, which is used for managing the vehicle types related to the scene engine, that is, for managing all vehicle type systems supported by the scene engine.
Vehicle type management is mainly used for managing the vehicle type information supported by all scene engines. The vehicle type information is synchronized through the Internet of Vehicles Telematics Service Provider (TSP) platform. Differentiation is mainly realized through the unique ID of the vehicle type, covering brand, vehicle model, model year, configuration, version, and the like.
Correspondingly, when the cloud platform subsequently issues a scene, the scene can be matched against the vehicle type in two stages. The first match compares the vehicle type of the vehicle end with the vehicle type information supported by the scene engine and acquired from the Internet of Vehicles platform, to determine whether the vehicle end is supported by the scene engine at all; the second match compares the first information with the target information associated with the scene, to determine whether the vehicle end matches the scene. The configuration file of the scene is issued only if both matches succeed. Managing the vehicle types related to the scene engine in this way can effectively solve the problems of version confusion, difficult collaboration, and repeated waste of resources in the rapid iteration of vehicle-end hardware/software, achieves compatibility of scenes with different vehicle types, and reduces the difficulty of managing the scene engine.
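The two-stage match can be sketched as follows (dict keys and function names are illustrative assumptions, not defined by the patent):

```python
def match_scene_for_vehicle(first_info, supported_model_ids, scene_target):
    """Stage 1: the vehicle model must appear in the model list synchronized
    from the TSP platform. Stage 2: the first information must fall inside
    the scene's target information (models / engine versions / users)."""
    if first_info["model_id"] not in supported_model_ids:
        return False  # first match failed: model not supported by the engine
    return (first_info["model_id"] in scene_target["models"]
            and first_info["engine_version"] in scene_target["versions"]
            and first_info["user_id"] in scene_target["users"])
```

Only a vehicle that passes both stages receives the scene's configuration file.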
Optionally, before the step S102, the method further includes:
Acquiring scene information and rule information of a scene based on a scene design page, wherein the rule information comprises rule groups of conditional factors and execution factors of the scene;
wherein the configuration file of the scene comprises the scene information and rule information.
As shown in fig. 2, the cloud platform may further include a scene design management module 203, which is used for design orchestration, test review, release and removal, scene release synchronization, scene flow monitoring, and the like. The design and management of a scene can be realized by defining scene basic information, creating rules, orchestrating the scene, reviewing the scene, and publishing the scene.
The scene design page may include a scene new page and a scene layout page, fig. 4 is a schematic diagram of the scene new page, and as shown in fig. 4, scene information of a scene may be obtained based on the scene new page, that is, scene information filled by an operator based on the scene new page may be obtained, where the scene information may include scene basic information, scene configuration information, and rule configuration information.
Fig. 5 is a schematic diagram of a scene layout page, and as shown in fig. 5, rule information of a scene can be obtained based on the scene layout page, an operator can fill in corresponding condition factors and execution factors according to definition of the scene, and accordingly, rule information filled in by the operator based on the scene layout page can be obtained.
After the scene layout is completed, the scene can be submitted; its state then becomes to-be-released. Fig. 6 is a schematic diagram of a scene display page; as shown in fig. 6, the scene in the bold box is a scene in the to-be-released state. The states of a scene are as follows:
arranging: the scene can be edited and deleted; after editing is finished, the scene can be submitted for review, and the state changes to to-be-released;
to-be-released: the scene is checked and reviewed, and can be rejected or released; rejection restores the arranging state, and release changes the state to released;
released: in this state, the server may issue the scene when the vehicle comes online, or the vehicle may autonomously query the platform to download the scene. An off-shelf operation can be performed, after which the state changes back to to-be-released.
Thus, the scene can be managed by designing and arranging the scene.
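The three states and their transitions can be sketched as a small transition table (state and action names are illustrative):

```python
# Transition table for the scene states described above; the "off_shelf"
# action returns a released scene to the to-be-released state.
TRANSITIONS = {
    ("arranging", "submit"):       "to_be_released",
    ("to_be_released", "reject"):  "arranging",
    ("to_be_released", "release"): "released",
    ("released", "off_shelf"):     "to_be_released",
}

def next_state(state, action):
    """Apply an action; actions invalid in the current state are ignored."""
    return TRANSITIONS.get((state, action), state)
```

For instance, a released scene that is taken off the shelf returns to the to-be-released state, while an invalid action such as releasing a scene still being arranged leaves the state unchanged.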
Optionally, the scene information includes rule configuration information of the scene, and the rule configuration information includes any one of the following:
each rule in the rule group is judged and executed according to the sequence;
each rule in the rule group is executed in sequence when meeting the conditions;
sequentially judging rules in the rule group, and executing only the first rule meeting the condition;
And when the rule with the highest priority in the rule group meets the condition, judging and executing each rule according to the sequence.
As shown in fig. 4, rule group type configuration may be performed to obtain the rule configuration information of a scene: the default rule type means that each rule in the rule group is judged and executed in sequence; the unit rule type means that the rules in the rule group are executed in sequence when the conditions are met; the activated rule type means that the rules in the rule group are judged in sequence and only the first rule meeting its condition is executed; and the last type means that each rule is judged and executed in sequence when the rule with the highest priority in the rule group meets its condition. In this way, scenes can be designed flexibly, improving the flexibility of scene implementation.
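The four execution modes can be sketched as follows. Mode names are illustrative, and reading the unit rule type as "run all rules only if every condition holds" is an assumption:

```python
def run_rule_group(rules, mode):
    """rules: ordered list of (condition_fn, action_fn) pairs, highest
    priority first; mode selects one of the four rule-group types."""
    executed = []
    if mode == "sequential":          # judge and execute each rule in order
        executed = [act() for cond, act in rules if cond()]
    elif mode == "all_or_nothing":    # run all, in order, only if every condition holds
        if all(cond() for cond, _ in rules):
            executed = [act() for _, act in rules]
    elif mode == "first_match":       # execute only the first matching rule
        for cond, act in rules:
            if cond():
                executed.append(act())
                break
    elif mode == "priority_gated":    # highest-priority condition gates the group
        if rules and rules[0][0]():
            executed = [act() for cond, act in rules if cond()]
    return executed
```

With rules whose conditions evaluate to (true, false, true), "sequential" runs the first and third actions, "first_match" only the first, and "all_or_nothing" none.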
Optionally, before the scene information and the rule information of the scene are acquired based on the scene design page, the method further includes:
acquiring M atomic capabilities of the scene based on an atomic capability orchestration page, wherein an atomic capability indicates a condition factor or an execution factor of the scene;
the atomic capability orchestration page has a compatible parameter configuration function of atomic capability, and the compatible parameter configuration function is used for configuring the second information related to the atomic capability.
As shown in fig. 2, the cloud platform may further include an atomic capability management module 204, which is configured to manage all hardware/software capabilities that can be judged and scheduled by the scene engine. The atomic capability management module includes atomic module management 2041 and atomic capability management, where the atomic capability management includes condition factor management 2042 and execution factor management 2043.
FIG. 7 is one schematic diagram of the atomic capability orchestration page, used for orchestrating atomic capabilities that represent condition factors, and FIG. 8 is another schematic diagram of the atomic capability orchestration page, used for orchestrating atomic capabilities that represent execution factors.
The atomic capability orchestration page has a compatible-parameter configuration function for atomic capabilities, used for configuring the second information associated with an atomic capability. The "factor parameter information" column shown in fig. 7 and 8 is used for compatible-parameter configuration of an atomic capability, that is, for configuring the vehicle models and scene engine versions to which the atomic capability applies.
The M atomic capabilities of the scene can be acquired based on the atomic capability orchestration page: an operator can create the corresponding condition factors and execution factors according to the definition of the scene, and correspondingly, the atomic capability information filled in by the operator on the atomic capability orchestration page can be acquired. In this way, an atomic capability can be associated with the applicable vehicle types, scene engine versions, and user information, achieving compatibility of scenes with different scene engine software, vehicle types, and users.
In this way, management of atomic capabilities may be achieved.
Optionally, the atomic capability arranging page further has a type configuration function of an atomic module to which the atomic capability belongs, where the type configuration function is used for configuring a type of the atomic module to which the atomic capability belongs; the method further comprises the steps of:
acquiring the type of the atomic module based on an atomic module creation page;
the types of the atomic modules are classified according to module attributes, module domain names and splitting dimensions of the module names.
The "belonging atomic module type" column shown in fig. 7 and 8 is used to select the type of the atomic module to which an atomic capability belongs. The splitting dimension of atomic capabilities is module type / module domain name (Domain) / module / atomic capability (including condition factors and execution factors). Accordingly, the types of atomic modules may be classified according to module attribute, module domain name, and module name; as shown in fig. 9, an atomic module may be created on the atomic module creation page according to module attribute, module domain name, and module name. Atomic capabilities are then associated with scene engine versions and vehicle models.
In this way, management of the atomic modules can be achieved.
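The splitting dimension (module type / domain / module / atomic capability) maps naturally onto a small containment hierarchy; the following sketch uses illustrative field names:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AtomicCapability:
    name: str
    kind: str                  # "condition" or "execution" factor

@dataclass
class AtomicModule:            # split dimension: type / domain / module / capability
    module_type: str           # module attribute, e.g. hardware or software
    domain: str                # module domain name (Domain)
    name: str                  # module name
    capabilities: List[AtomicCapability] = field(default_factory=list)
```

An air-quality module in the cabin domain, for example, could own a "PM2.5 high" condition factor and a "start purifier" execution factor as its atomic capabilities.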
Second embodiment
As shown in fig. 10, the present disclosure provides a scene engine management method applied to a vehicle end, including the following steps:
step S1001: the method comprises the steps of sending first information to a cloud platform, wherein the first information comprises at least one of a vehicle type, a scene engine version and user information of a vehicle end;
step S1002: when the cloud platform matches the first information with target information associated with a scene, and the matching result indicates that intersections of M pieces of second information associated with M atomic capacities of the scene one by one comprise the first information, receiving a configuration file of the scene sent by the cloud platform, wherein the configuration file is used for indicating triggering and running of the scene; wherein,
the method comprises the steps that a scene is used for executing different function combinations at a vehicle end under the condition that specific conditions are met, target information comprises M pieces of second information which are in one-to-one association with M atomic capabilities of the scene, the second information comprises at least one of a vehicle type, a scene engine version and user information, the vehicle type, the scene engine version and the user information are applicable to the atomic capabilities, and M is a positive integer.
In this embodiment, when the vehicle end needs to download a scene, it may send a scene download request carrying the first information to the cloud platform. Alternatively, the vehicle end can report a version update signal of the scene engine software to the cloud platform when its scene engine version is updated, to inform the cloud platform of the vehicle end's scene engine version. Alternatively, the vehicle end periodically reports the first information to the cloud platform to realize scene flow synchronization.
Correspondingly, the cloud platform can match the first information with the target information related to the scene, and when the matching result indicates that the intersection of M pieces of second information related to M atomic capacities of the scene one by one comprises the first information, the matching of the scene and the vehicle end is determined, and the configuration file of the scene is sent to the vehicle end, so that the vehicle end can receive the scene configuration file sent by the cloud platform.
In this embodiment, each atomic capability of a scene is associated in the cloud platform with the applicable vehicle types, scene engine versions, and user information. The intersection of the M pieces of second information associated one-to-one with the M atomic capabilities of the scene determines the vehicle types, scene engine versions, and user information to which the scene as a whole applies. Whether the first information of the vehicle end falls within this range then determines whether the vehicle end matches the scene; if it does, the configuration file of the scene is issued to the vehicle end, and the corresponding vehicle end receives it. In this way, by managing the various dimensions in the scene engine (such as scene engine version and vehicle type), the validity of a scene can be judged along different dimensions and only matching scenes are issued to the vehicle end, which effectively solves the problems of version confusion, difficult collaboration, and repeated waste of resources brought by the rapid iteration of vehicle-end hardware/software, achieves compatibility of scenes with different scene engine software, vehicle types, and users, and reduces the difficulty of scene engine management.
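The intersection step can be sketched directly with Python sets (dict keys are illustrative): the scene's overall scope is the intersection of the M per-capability second-information sets, and the vehicle matches when its first information lies inside that scope.

```python
def scene_scope(second_infos):
    """Intersect the M per-capability second-information sets to get the
    models / versions / users the scene as a whole applies to."""
    return {
        "models":   set.intersection(*(s["models"]   for s in second_infos)),
        "versions": set.intersection(*(s["versions"] for s in second_infos)),
        "users":    set.intersection(*(s["users"]    for s in second_infos)),
    }

def vehicle_matches(first_info, second_infos):
    """True when the vehicle's first information is inside the scene scope."""
    scope = scene_scope(second_infos)
    return (first_info["model"] in scope["models"]
            and first_info["version"] in scope["versions"]
            and first_info["user"] in scope["users"])
```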
After acquiring a scene configuration file, the vehicle end needs to trigger and run the scene based on the configuration file. As shown in the framework schematic diagram of the vehicle end in fig. 11, the vehicle end may include a scene validity management module 1101, a scene collaboration management module 1102, a scene configuration management module 1103, and a scene trigger execution module 1104.
The scene validity management module is used for scene verification: it screens the scenes available under the current vehicle/version/user, and can perform vehicle type validity verification, scene engine version validity verification, and user/vehicle validity verification. The scene collaboration management module supervises and executes collaboration rules between scenes such as priority arbitration, break-in, and queuing. The scene configuration management module configures and implements the internal parameters of a scene, including timeout, looping, cooling time, and the like. The scene trigger execution module handles in-vehicle intra-domain and cross-domain communication and realizes the trigger judgment and action execution of a scene, which are the input and output of the scene engine: the input is monitoring changes of the trigger factors, and the output is, upon triggering of the scene, scheduling each execution factor to perform its action.
This will be described in detail below.
Optionally, the configuration file includes scene information and rule information of the scene, and the method further includes:
and under the condition that the vehicle end meets the application range of the scene based on the rule information and the scene is triggered based on the first configuration parameter of the trigger factor in the scene information, responding to the scene based on the rule information and the scene information.
The scene validity management module is used for determining whether the vehicle end meets the application range of the scene based on the rule information, and can determine whether the vehicle end meets the application range of the scene by determining whether an intersection of M pieces of second information related to M pieces of atomic capacity in the rule information comprises first information of the vehicle end.
In general, the vehicle type information and version information of the vehicle end and the cloud platform are matched, all issued scenes can be executed on the vehicle, and whether a scene that is effective only for a specific user is activated is distinguished according to the user ID. However, in some cases the vehicle-end and cloud information are mismatched due to network reasons or other factors (such as offline flashing of a version, offline debugging, etc.); the scene validity management module may then find that some scenes issued by the cloud cannot be matched with the real vehicle-end configuration. For this, a fallback policy can be added: when the configuration file of a scene is updated, the scene validity management module can directly read the local data of the vehicle end for verification, storing the configuration if the information matches and discarding it otherwise.
The trigger factor is the signal, selected from the condition factors, that triggers the scene; for example, the scene shown in table 1 above defines "bad air" as an in-vehicle PM2.5 index of 150 or more. The scene trigger execution module can acquire the first configuration parameter of the trigger factor from the scene information and monitor changes of the trigger factor; if the trigger factor reaches the index indicated by the first configuration parameter, the scene is determined to be triggered. For example, if the trigger factor (the in-vehicle PM2.5 index) reaches or exceeds the first configuration parameter (150), the scene is determined to be triggered.
In this way, when the scene validity check passes, the scene is responded based on the rule information and the scene information, so that the accuracy of the scene response can be improved.
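The trigger judgment for the "bad air" example can be sketched as follows (function names are illustrative; the threshold 150 comes from the example above):

```python
PM25_THRESHOLD = 150  # first configuration parameter: "bad air" means PM2.5 >= 150

def is_triggered(pm25_index, threshold=PM25_THRESHOLD):
    """Judge the trigger factor (in-vehicle PM2.5 index) against the first
    configuration parameter from the scene information."""
    return pm25_index >= threshold

def first_trigger_index(readings, threshold=PM25_THRESHOLD):
    """Scan a stream of monitored readings; return the index of the first
    reading that triggers the scene, or -1 if none does."""
    for i, value in enumerate(readings):
        if value >= threshold:
            return i
    return -1
```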
Optionally, the rule information includes a rule group of a condition factor and an execution factor of the scene, the atomic capability of the scene indicates the condition factor or the execution factor of the scene, the scene information further includes rule configuration information of the scene, and the responding to the scene based on the rule information and the scene information includes:
responding to the scene according to the execution mode indicated by the rule configuration information based on the rule information;
Wherein the rule configuration information includes any one of the following:
each rule in the rule group is judged and executed according to the sequence;
each rule in the rule group is executed in sequence when meeting the conditions;
sequentially judging rules in the rule group, and executing only the first rule meeting the condition;
and when the rule with the highest priority in the rule group meets the condition, judging and executing each rule according to the sequence.
In this way, flexibility in scene response may be improved.
Optionally, the scene information further includes a second configuration parameter of the scene, where the second configuration parameter is used to constrain and arbitrate a rule of the scene, and the responding to the scene based on the rule information and the scene information includes:
and under the condition that the rule of the condition factor in the rule information meets the condition and the scene is determined to meet the execution condition based on the second configuration parameter, controlling the target atomic module of the vehicle end to execute the response action based on the rule of the execution factor in the rule information, wherein the target atomic module is the atomic module to which the atomic capability of the scene belongs.
The second configuration parameters correspond to the scene configuration management module and the scene collaborative management module of the vehicle end, and constraint and arbitration are carried out on rules of the scene. Wherein the second configuration parameters include, but are not limited to, the following:
interrupt condition: during scene execution, if a vehicle state change or a user operation occurs, execution of the scene needs to be stopped; the corresponding data change is called an interrupt condition;
queuing: only one scene can be executed at a time; if a scene is triggered while another scene is executing, the triggered scene must be placed in a queue;
priority: scenes are divided into 4 priorities; in the queue, scenes are ranked by priority level and trigger time, so the higher the priority and the earlier the trigger, the closer a scene is to the head of the queue;
break-in: break-in refers to immediately stopping the currently executing scene in order to execute a just-triggered high-priority scene; a scene with priority 1 can break in on other scenes;
timeout: if, after being triggered, a scene has still not been executed within a set time (for example because other scenes are executing or it is waiting in the queue), it is defined as timed out and is no longer executed;
cooling time: refers to the minimum interval between two trigger executions of the same scene;
interval time: refers to the interval between the completion of the scene and the execution of the next scene;
limiting the frequency: refers to the maximum number of times the same scene can trigger execution in a natural day.
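A sketch of how these second configuration parameters gate execution (field names are illustrative; only the time- and count-based checks are shown):

```python
def may_execute(now, history, cfg):
    """Check a scene against its second configuration parameters before
    execution: cooling time between two runs of the same scene, interval
    after any scene finishes, and the daily frequency limit."""
    if now - history["last_run_end"] < cfg["cooling_time"]:
        return False               # still inside the cooling interval
    if now - history["last_any_end"] < cfg["interval_time"]:
        return False               # too soon after the previous scene finished
    if history["runs_today"] >= cfg["daily_limit"]:
        return False               # frequency limit for this natural day reached
    return True
```

The interrupt, queuing, priority, and break-in parameters are handled separately by the collaboration logic, since they arbitrate between scenes rather than within one.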
The scene configuration management module and the scene cooperation management module may implement a response to the scene based on the second configuration parameter, and a specific response process thereof may be embodied in fig. 12.
In this embodiment, management of various dimensions (for example, a scene engine version, a vehicle type, etc.) in a scene engine can be realized through interaction cooperation between the cloud end and the vehicle end, a simplified and effective flow from scene construction to operation is provided, and decision judgment on effectiveness of the scene according to different dimensions is realized. Fig. 12 is a schematic flow chart of scene life cycle management in this embodiment, as shown in fig. 12, the scene life cycle management relates to a scene engine cloud service of the cloud, an operating system of the vehicle end, a scene engine client of the vehicle end, a scene queue of the vehicle end and a local cache of the vehicle end, and the specific flow chart is as follows:
the vehicle-end scene engine starts when the vehicle performs a cold/hot start or when the engine restarts (for example, via application keep-alive);
the scene engine client of the vehicle end reads a local scene configuration file;
requesting cloud configuration by a local cache of a vehicle end;
the cloud can support real-time pushing to vehicles that are online; correspondingly, it screens out, from the scene configuration file list, the scene configuration files matching the vehicle end's scene engine client and pushes them to that client;
the scene engine client of the vehicle end compares the scene configuration file sent by the cloud with the locally cached configuration file to determine whether the configuration has been updated; if so, it updates the locally cached scene configuration file; if the cloud configuration is not obtained, it performs scene validity verification on the local cache to obtain the scene configuration files matching the vehicle end;
the scene engine client of the vehicle end carries out cycle monitoring so as to monitor scene triggering;
the scene engine client of the vehicle end determines whether the vehicle end accords with the condition factors; if yes, inquiring the execution record; if not, exiting the scene;
inquiring the execution record to determine whether the scene meets the triggering frequency condition and the frequency limit; if yes, judging whether a running scene exists at present; if not, exiting the scene;
when judging that the running scene exists currently, determining whether the existing scene needs to be interrupted, and when judging that the running scene does not exist currently, judging whether the scene is outside the limit of the cooling time;
executing scene action when the scene is outside the cooling time limit, and exiting the scene when the scene is not outside the cooling time limit;
when the existing scene is judged to be interrupted, the current executing scene is stopped, the scene action is executed, and when the existing scene is judged not to be interrupted, whether the scene queue has the same scene is inquired;
When the scene queues have the same scene, the scene is exited, and when the scene queues have no same scene, the scene queues are updated;
the scene at the head of the queue is taken, and it is judged whether the scene meets all conditions such as the trigger factor, the condition factors, and the cooling time; if so, the scene action is executed; if not, the scene is exited;
when the scene action is executed, the execution result is recorded, and the execution times are updated.
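The decision steps of the fig. 12 flow can be condensed into a single function. Everything here is a sketch: scene and engine are plain dicts, and every key name is an illustrative assumption.

```python
def handle_trigger(scene, engine):
    """Condensed sketch of the fig. 12 trigger-decision flow."""
    if not scene["condition_met"]:
        return "exit"                       # condition factors not met
    if not scene["within_freq_limit"]:
        return "exit"                       # daily frequency limit reached
    if engine["running"] is None:
        # no scene executing: run now if outside the cooling-time limit
        return "execute" if scene["outside_cooldown"] else "exit"
    if scene["priority"] == 1:              # break-in: stop the running scene
        engine["running"] = None
        return "execute"
    if scene["name"] in engine["queue"]:
        return "exit"                       # same scene already queued
    engine["queue"].append(scene["name"])
    return "queued"
```

A priority-1 scene preempts whatever is running; any other scene either executes immediately when the engine is idle or joins the queue, and duplicate queue entries are rejected.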
Third embodiment
As shown in fig. 13, the present disclosure provides a scene engine management apparatus 1300, applied to a cloud platform, including:
the first obtaining module 1301 is configured to obtain first information of a vehicle end, where the first information includes at least one of a vehicle type, a scene engine version, and user information of the vehicle end;
the matching module 1302 is configured to match the first information with target information associated with a scenario, where the scenario is configured to execute different function combinations at a vehicle end under a specific condition, the target information includes M pieces of second information associated with M atomic capabilities of the scenario one by one, the second information includes at least one of a vehicle model, a scenario engine version, and user information to which the atomic capabilities are applicable, and M is a positive integer;
the first sending module 1303 is configured to send a configuration file of the scene to the vehicle end when the matching result indicates that the intersection of the M pieces of second information associated one-to-one with the M atomic capabilities of the scene includes the first information, where the configuration file is used to indicate triggering and running of the scene.
Optionally, the apparatus further includes:
the creating module is used for creating the scene engine version of the vehicle end on the baseline version of the scene engine software of the vehicle end based on the engine version management page when a version update signal of the scene engine software of the vehicle end is acquired, to obtain the scene engine version information of the vehicle end; wherein,
the scene engine version information comprises a version number of scene engine software of the vehicle end, and the version number is used for matching with the availability of M atomic capabilities of the scene.
Optionally, the matching module 1302 is specifically configured to:
and under the condition that the vehicle type of the vehicle end is matched with the target vehicle type information, matching the first information with the target information related to the scene, wherein the target vehicle type information is the vehicle type information supported by the scene engine acquired from the Internet of vehicles platform.
Optionally, the apparatus further includes:
the second acquisition module is used for acquiring scene information and rule information of the scene based on a scene design page, wherein the rule information comprises rule groups of conditional factors and execution factors of the scene;
wherein the configuration file of the scene comprises the scene information and rule information.
Optionally, the scene information includes rule configuration information of the scene, and the rule configuration information includes any one of the following:
each rule in the rule group is judged and executed according to the sequence;
each rule in the rule group is executed in sequence when meeting the conditions;
sequentially judging rules in the rule group, and executing only the first rule meeting the condition;
and when the rule with the highest priority in the rule group meets the condition, judging and executing each rule according to the sequence.
Optionally, the apparatus further includes:
a third obtaining module, configured to obtain M atomic capabilities of the scene based on an atomic capability orchestration page, where the atomic capabilities indicate a condition factor or an execution factor of the scene;
the atomic capability orchestration page has a compatible parameter configuration function of atomic capability, and the compatible parameter configuration function is used for configuring the second information related to the atomic capability.
Optionally, the atomic capability arranging page further has a type configuration function of an atomic module to which the atomic capability belongs, where the type configuration function is used for configuring a type of the atomic module to which the atomic capability belongs; the apparatus further comprises:
the fourth acquisition module is used for acquiring the type of the atomic module based on the atomic module creation page;
The types of the atomic modules are classified according to module attributes, module domain names and splitting dimensions of the module names.
The scene engine management apparatus 1300 provided in the present disclosure can implement each process implemented by the first embodiment of the scene engine management method and achieve the same beneficial effects; to avoid repetition, details are not described here again.
Fourth embodiment
As shown in fig. 14, the present disclosure provides a scene engine management apparatus 1400, applied to a vehicle end, including:
a second sending module 1401, configured to send first information to a cloud platform, where the first information includes at least one of a vehicle type, a scene engine version, and user information of the vehicle end;
a receiving module 1402, configured to receive, when the cloud platform matches the first information with target information associated with a scene and a matching result indicates that intersections of M pieces of second information associated with M atomic capabilities of the scene include the first information, a configuration file of the scene sent by the cloud platform, where the configuration file is used to indicate triggering and running of the scene; wherein,
the scene is used for executing different function combinations at a vehicle end under the condition that specific conditions are met, the target information comprises M pieces of second information which are in one-to-one association with M pieces of atomic capabilities of the scene, the second information comprises at least one of a vehicle type, a scene engine version and user information to which the atomic capabilities are applicable, and M is a positive integer.
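The cloud-side matching condition described above — the first information must lie in the intersection of the M second-information records — can be sketched as follows; the field names and data shapes are illustrative assumptions:

```python
# Illustrative sketch of the cloud-side matching step: each atomic
# capability contributes one "second information" record of supported
# values, and the scene's configuration file is delivered only when the
# vehicle's "first information" lies in their intersection.
from typing import Dict, List, Set

def scene_matches(first_info: Dict[str, str],
                  second_infos: List[Dict[str, Set[str]]]) -> bool:
    """True when every atomic capability supports the vehicle's model,
    scene-engine version, and user information."""
    for info in second_infos:          # one record per atomic capability
        for key, supported in info.items():
            if first_info.get(key) not in supported:
                return False           # outside the intersection
    return True
```

Checking membership against each record separately is equivalent to intersecting the M supported sets per field and testing the first information against that intersection.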
Optionally, the configuration file includes scene information and rule information of the scene, and the apparatus further includes:
and the response module is used for responding to the scene based on the rule information and the scene information under the condition that the vehicle end meets the application range of the scene based on the rule information and the scene is triggered based on the first configuration parameter of the trigger factor in the scene information.
Optionally, the rule information includes a rule group of a condition factor and an execution factor of the scene, the atomic capability of the scene indicates the condition factor or the execution factor of the scene, the scene information further includes rule configuration information of the scene, and the response module is specifically configured to:
responding to the scene according to the execution mode indicated by the rule configuration information based on the rule information;
wherein the rule configuration information includes any one of the following:
each rule in the rule group is judged and executed according to the sequence;
each rule in the rule group is executed in sequence when meeting the conditions;
sequentially judging rules in the rule group, and executing only the first rule meeting the condition;
And when the rule with the highest priority in the rule group meets the condition, judging and executing each rule according to the sequence.
Optionally, the scene information further includes a second configuration parameter of the scene, where the second configuration parameter is used to constrain and arbitrate a rule of the scene, and the response module is specifically configured to:
and under the condition that the rule of the condition factor in the rule information meets the condition and the scene is determined to meet the execution condition based on the second configuration parameter, controlling the target atomic module of the vehicle end to execute the response action based on the rule of the execution factor in the rule information, wherein the target atomic module is the atomic module to which the atomic capability of the scene belongs.
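A minimal vehicle-side sketch of this flow, assuming (purely for illustration — the disclosure does not specify the parameter's form) that the second configuration parameter is a cooldown window used to arbitrate how often the scene may fire:

```python
# Hypothetical sketch: respond to a scene only when the condition-factor
# rule holds AND arbitration (here assumed to be a cooldown window)
# permits execution of the execution-factor rule.
import time
from typing import Callable, Optional

class SceneResponder:
    def __init__(self, condition: Callable[[], bool],
                 action: Callable[[], None], cooldown_s: float):
        self.condition = condition    # condition-factor rule
        self.action = action          # execution-factor rule on the target atomic module
        self.cooldown_s = cooldown_s  # assumed form of the "second configuration parameter"
        self._last_fired = float("-inf")

    def maybe_respond(self, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        if not self.condition():
            return False              # condition-factor rule not met
        if now - self._last_fired < self.cooldown_s:
            return False              # arbitration: execution condition not met
        self.action()                 # control the target atomic module
        self._last_fired = now
        return True
```

The key point the sketch captures is the two-stage gate: the rule condition alone is not sufficient; the scene must also pass the constraint/arbitration check before the target atomic module acts.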
The scene engine management apparatus 1400 provided in the present disclosure can implement each process implemented by the second embodiment of the scene engine management method and achieve the same beneficial effects; to avoid repetition, details are not described here again.
In the technical solutions of the present disclosure, the collection, storage, use, processing, transmission, provision, disclosure, and other handling of users' personal information all comply with the provisions of relevant laws and regulations and do not violate public order and good morals.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
FIG. 15 illustrates a schematic block diagram of an example electronic device that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 15, the apparatus 1500 includes a computing unit 1501, which can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1502 or a computer program loaded from a storage unit 1508 into a Random Access Memory (RAM) 1503. In the RAM 1503, various programs and data required for the operation of the device 1500 may also be stored. The computing unit 1501, the ROM 1502, and the RAM 1503 are connected to each other through a bus 1504. An input/output (I/O) interface 1505 is also connected to bus 1504.
Various components in device 1500 are connected to I/O interface 1505, including: an input unit 1506 such as a keyboard, mouse, etc.; an output unit 1507 such as various types of displays, speakers, and the like; a storage unit 1508 such as a magnetic disk, an optical disk, or the like; and a communication unit 1509 such as a network card, modem, wireless communication transceiver, etc. The communication unit 1509 allows the device 1500 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunications networks.
The computing unit 1501 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 1501 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, Digital Signal Processors (DSPs), and any suitable processor, controller, or microcontroller. The computing unit 1501 performs the various methods and processes described above, for example, a scene engine management method. For example, in some embodiments, the scene engine management method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 1508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 1500 via the ROM 1502 and/or the communication unit 1509. When the computer program is loaded into the RAM 1503 and executed by the computing unit 1501, one or more steps of the scene engine management method described above may be performed. Alternatively, in other embodiments, the computing unit 1501 may be configured to perform the scene engine management method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be understood that steps may be reordered, added, or deleted using the various forms of flows shown above. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved; no limitation is imposed herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.
Claims (25)
1. A scene engine management method is applied to a cloud platform and comprises the following steps:
acquiring first information of a vehicle end, wherein the first information comprises at least one of a vehicle type, a scene engine version and user information of the vehicle end;
matching the first information with target information associated with a scene, wherein the scene is used for executing different function combinations at a vehicle end under the condition that a specific condition is met, the target information comprises M pieces of second information which are associated with M pieces of atomic capacity of the scene one by one, the second information comprises at least one of a vehicle type, a scene engine version and user information to which the atomic capacity is applicable, and M is a positive integer;
And sending a configuration file of the scene to the vehicle end when the matching result indicates that the intersection of M pieces of second information which are in one-to-one association with M atomic capacities of the scene comprises the first information, wherein the configuration file is used for indicating the triggering and running of the scene.
2. The method of claim 1, wherein the method further comprises:
under the condition that a version updating signal of the scene engine software of the vehicle end is obtained, creating a scene engine version of the vehicle end on a base line version of the scene engine software of the vehicle end based on an engine version management page to obtain scene engine version information of the vehicle end; wherein,
the scene engine version information comprises a version number of scene engine software of the vehicle end, and the version number is used for matching with the availability of M atomic capabilities of the scene.
3. The method of claim 1, wherein the matching the first information with target information associated with a scene comprises:
and under the condition that the vehicle type of the vehicle end is matched with the target vehicle type information, matching the first information with the target information related to the scene, wherein the target vehicle type information is the vehicle type information supported by the scene engine acquired from the Internet of vehicles platform.
4. The method of claim 1, wherein prior to said matching the first information with the target information associated with the scene, the method further comprises:
acquiring scene information and rule information of a scene based on a scene design page, wherein the rule information comprises rule groups of conditional factors and execution factors of the scene;
wherein the configuration file of the scene comprises the scene information and rule information.
5. The method of claim 4, wherein the context information comprises rule configuration information for the context, the rule configuration information comprising any of:
each rule in the rule group is judged and executed according to the sequence;
each rule in the rule group is executed in sequence when meeting the conditions;
sequentially judging rules in the rule group, and executing only the first rule meeting the condition;
and when the rule with the highest priority in the rule group meets the condition, judging and executing each rule according to the sequence.
6. The method of claim 4, wherein prior to the scene information and rule information for the scene being acquired based on the scene design page, the method further comprises:
acquiring M atomic capabilities of the scene based on an atomic capability programming page, wherein the atomic capabilities indicate a conditional factor or an execution factor of the scene;
The atomic capability orchestration page has a compatible parameter configuration function of atomic capability, and the compatible parameter configuration function is used for configuring the second information related to the atomic capability.
7. The method of claim 6, wherein the atomic capability orchestration page further has a type configuration function of an atomic module to which the atomic capability belongs, the type configuration function being used for configuring a type of the atomic module to which the atomic capability belongs; the method further comprises the steps of:
creating a page based on the atomic module to acquire the type of the atomic module;
the types of the atomic modules are classified according to module attributes, module domain names and splitting dimensions of the module names.
8. A scene engine management method is applied to a vehicle end and comprises the following steps:
the method comprises the steps of sending first information to a cloud platform, wherein the first information comprises at least one of a vehicle type, a scene engine version and user information of a vehicle end;
when the cloud platform matches the first information with target information associated with a scene, and the matching result indicates that intersections of M pieces of second information associated with M atomic capacities of the scene one by one comprise the first information, receiving a configuration file of the scene sent by the cloud platform, wherein the configuration file is used for indicating triggering and running of the scene; wherein,
The scene is used for executing different function combinations at a vehicle end under the condition that specific conditions are met, the target information comprises M pieces of second information which are in one-to-one association with M pieces of atomic capabilities of the scene, the second information comprises at least one of a vehicle type, a scene engine version and user information to which the atomic capabilities are applicable, and M is a positive integer.
9. The method of claim 8, wherein the configuration file includes scene information and rule information for the scene, the method further comprising:
and under the condition that the vehicle end meets the application range of the scene based on the rule information and the scene is triggered based on the first configuration parameter of the trigger factor in the scene information, responding to the scene based on the rule information and the scene information.
10. The method of claim 9, wherein the rule information comprises a rule set of a condition factor and an execution factor of the scene, an atomic capability of the scene indicating the condition factor or the execution factor of the scene, the scene information further comprising rule configuration information of the scene, the responding to the scene based on the rule information and the scene information comprising:
Responding to the scene according to the execution mode indicated by the rule configuration information based on the rule information;
wherein the rule configuration information includes any one of the following:
each rule in the rule group is judged and executed according to the sequence;
each rule in the rule group is executed in sequence when meeting the conditions;
sequentially judging rules in the rule group, and executing only the first rule meeting the condition;
and when the rule with the highest priority in the rule group meets the condition, judging and executing each rule according to the sequence.
11. The method of claim 9, wherein the scenario information further comprises a second configuration parameter of the scenario for constraining and arbitrating rules of the scenario, the responding to the scenario based on the rule information and the scenario information comprising:
and under the condition that the rule of the condition factor in the rule information meets the condition and the scene is determined to meet the execution condition based on the second configuration parameter, controlling the target atomic module of the vehicle end to execute the response action based on the rule of the execution factor in the rule information, wherein the target atomic module is the atomic module to which the atomic capability of the scene belongs.
12. A scene engine management device applied to a cloud platform, comprising:
the system comprises a first acquisition module, a second acquisition module and a first processing module, wherein the first acquisition module is used for acquiring first information of a vehicle end, and the first information comprises at least one of a vehicle type, a scene engine version and user information of the vehicle end;
the matching module is used for matching the first information with target information associated with a scene, the scene is used for executing different function combinations at a vehicle end under the condition that a specific condition is met, the target information comprises M pieces of second information which are associated with M atomic capabilities of the scene one by one, the second information comprises at least one of a vehicle type, a scene engine version and user information to which the atomic capabilities are applicable, and M is a positive integer;
the first sending module is used for sending a configuration file of the scene to the vehicle end when the matching result indicates that the intersection set of M pieces of second information which are in one-to-one association with M atomic capacities of the scene comprises the first information, and the configuration file is used for indicating the triggering and running of the scene.
13. The apparatus of claim 12, wherein the apparatus further comprises:
the creating module is used for creating the scene engine version of the vehicle end on the basis of the engine version management page under the condition that the version update signal of the scene engine software of the vehicle end is acquired, and obtaining the scene engine version information of the vehicle end; wherein,
The scene engine version information comprises a version number of scene engine software of the vehicle end, and the version number is used for matching with the availability of M atomic capabilities of the scene.
14. The apparatus of claim 12, wherein the matching module is specifically configured to:
and under the condition that the vehicle type of the vehicle end is matched with the target vehicle type information, matching the first information with the target information related to the scene, wherein the target vehicle type information is the vehicle type information supported by the scene engine acquired from the Internet of vehicles platform.
15. The apparatus of claim 12, wherein the apparatus further comprises:
the second acquisition module is used for acquiring scene information and rule information of the scene based on a scene design page, wherein the rule information comprises rule groups of conditional factors and execution factors of the scene;
wherein the configuration file of the scene comprises the scene information and rule information.
16. The apparatus of claim 15, wherein the context information comprises rule configuration information for the context, the rule configuration information comprising any of:
each rule in the rule group is judged and executed according to the sequence;
Each rule in the rule group is executed in sequence when meeting the conditions;
sequentially judging rules in the rule group, and executing only the first rule meeting the condition;
and when the rule with the highest priority in the rule group meets the condition, judging and executing each rule according to the sequence.
17. The apparatus of claim 15, wherein the apparatus further comprises:
a third obtaining module, configured to obtain M atomic capabilities of the scene based on an atomic capability orchestration page, where the atomic capabilities indicate a condition factor or an execution factor of the scene;
the atomic capability orchestration page has a compatible parameter configuration function of atomic capability, and the compatible parameter configuration function is used for configuring the second information related to the atomic capability.
18. The device of claim 17, wherein the atomic capability orchestration page further has a type configuration function of an atomic module to which the atomic capability belongs, the type configuration function being used for configuring a type of the atomic module to which the atomic capability belongs; the apparatus further comprises:
the fourth acquisition module is used for acquiring the type of the atomic module based on the atomic module creation page;
the types of the atomic modules are classified according to module attributes, module domain names and splitting dimensions of the module names.
19. A scene engine management device applied to a vehicle end, comprising:
the second sending module is used for sending first information to the cloud platform, wherein the first information comprises at least one of a vehicle type, a scene engine version and user information of the vehicle end;
the receiving module is used for receiving a configuration file of the scene sent by the cloud platform when the cloud platform matches the first information with target information related to the scene and the matching result indicates that intersections of M pieces of second information related to M atomic capacities of the scene one by one comprise the first information, wherein the configuration file is used for indicating triggering and running of the scene; wherein,
the scene is used for executing different function combinations at a vehicle end under the condition that specific conditions are met, the target information comprises M pieces of second information which are in one-to-one association with M pieces of atomic capabilities of the scene, the second information comprises at least one of a vehicle type, a scene engine version and user information to which the atomic capabilities are applicable, and M is a positive integer.
20. The apparatus of claim 19, wherein the configuration file includes scene information and rule information for the scene, the apparatus further comprising:
And the response module is used for responding to the scene based on the rule information and the scene information under the condition that the vehicle end meets the application range of the scene based on the rule information and the scene is triggered based on the first configuration parameter of the trigger factor in the scene information.
21. The apparatus of claim 20, wherein the rule information comprises a rule set of a condition factor and an execution factor of the scene, an atomic capability of the scene indicating the condition factor or the execution factor of the scene, the scene information further comprising rule configuration information of the scene, the response module being specifically configured to:
responding to the scene according to the execution mode indicated by the rule configuration information based on the rule information;
wherein the rule configuration information includes any one of the following:
each rule in the rule group is judged and executed according to the sequence;
each rule in the rule group is executed in sequence when meeting the conditions;
sequentially judging rules in the rule group, and executing only the first rule meeting the condition;
and when the rule with the highest priority in the rule group meets the condition, judging and executing each rule according to the sequence.
22. The apparatus of claim 20, wherein the scenario information further comprises a second configuration parameter of the scenario, the second configuration parameter being used for constraining and arbitrating rules of the scenario, the response module being specifically configured to:
and under the condition that the rule of the condition factor in the rule information meets the condition and the scene is determined to meet the execution condition based on the second configuration parameter, controlling the target atomic module of the vehicle end to execute the response action based on the rule of the execution factor in the rule information, wherein the target atomic module is the atomic module to which the atomic capability of the scene belongs.
23. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7 or to perform the method of any one of claims 8-11.
24. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-7 or to perform the method of any one of claims 8-11.
25. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any of claims 1-7 or implements the method according to any of claims 8-11.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN202311083241.2A | 2023-08-25 | 2023-08-25 | Scene engine management method and device and electronic equipment |
Publications (1)

| Publication Number | Publication Date |
| --- | --- |
| CN117061559A | 2023-11-14 |
Family ID: 88666076
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
| --- | --- | --- | --- |
| CN202311083241.2A | Scene engine management method and device and electronic equipment | 2023-08-25 | 2023-08-25 |

Country Status (1)

| Country | Publication | Status |
| --- | --- | --- |
| CN | CN117061559A (en) | Pending |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||