CN117808002A - Scene file generation method, scene file generation model training method, and electronic device


Info

Publication number
CN117808002A
Authority
CN
China
Prior art keywords
data
vehicle
scene file
target
file generation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311846570.8A
Other languages
Chinese (zh)
Inventor
孙向阳 (Sun Xiangyang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zero Beam Technology Co., Ltd.
Original Assignee
Zero Beam Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zero Beam Technology Co., Ltd.
Priority to CN202311846570.8A
Publication of CN117808002A
Legal status: Pending

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T — CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 — Road transport of goods or passengers
    • Y02T 10/10 — Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 — Engine management systems


Abstract

The application provides a scene file generation method, a scene file generation model training method, and an electronic device. The scene file generation method includes: acquiring first data corresponding to a first target vehicle, the first data including vehicle data and environment data related to a vehicle scene of the first target vehicle; preprocessing the first data to obtain second data; and inputting the second data into a target scene file generation model so that the model performs scene file generation processing on the second data to obtain a target scene file. The target scene file includes the control parameters of a target control during operation of the first target vehicle. The target scene file generation model is trained on a training data set that includes historical vehicle data and historical environment data related to vehicle scenes. Because the scene file that controls vehicle operation no longer has to be configured manually, the method saves considerable time and simplifies user operation.

Description

Scene file generation method, scene file generation model training method, and electronic device
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular to a scene file generation method, a scene file generation model training method, and an electronic device.
Background
With the continuous development of vehicle technology, modern vehicles can realize more and more functions: the air-conditioning temperature, seat position, windows, doors, and so on can all be adjusted intelligently through the controls corresponding to each function (such as the air-conditioning switch or seat buttons). The control parameters for each control are generally determined by a scene file that governs vehicle operation.
In the prior art, the control parameters for each control in a scene file for controlling vehicle operation are set and arranged manually. For example, a developer composes a vehicle-operation scene file in the cloud through function-configuration application software preinstalled on a computer, e.g., arranging the control parameters of each control by drag-and-drop to form a scene file, and then sends the finished scene file to the vehicle end, which controls each control to execute its function according to the parameters in the file. Alternatively, the user arranges the scene file on a mobile phone or at the vehicle end through preinstalled function-configuration software, and the vehicle end then executes each control's function in the same way. Either way, the scene file must be set up and arranged manually in advance, which is time-consuming and labor-intensive. Moreover, the content of the scene file cannot be adjusted in real time while the vehicle is in use; the vehicle's controls can only execute functions according to the preset file, which is unsuitable for many scenarios. These preset scene files therefore do not satisfy users' needs well. In addition, manually setting each control parameter is subjective, and the chosen values do not necessarily match the vehicle's actual current usage, so they may be inaccurate.
Disclosure of Invention
The application provides a scene file generation method, a scene file generation model training method, and an electronic device, which address the problems in the prior art that manually setting scene files for controlling vehicle operation is time-consuming and labor-intensive, that the resulting scene files are not necessarily accurate, and that they cannot satisfy user needs well, degrading the user experience.
In order to solve the above technical problem, in a first aspect, an embodiment of the present application provides a scene file generation method, applied to a server, where the method includes: acquiring first data corresponding to a first target vehicle, the first data including vehicle data and environment data related to a vehicle scene of the first target vehicle; preprocessing the first data to obtain second data; and inputting the second data into a target scene file generation model so that the model performs scene file generation processing on the second data to obtain a target scene file, where the target scene file includes control parameters of a target control during operation of the first target vehicle, the target scene file generation model is trained on a training data set, and the training data set includes historical vehicle data and historical environment data related to vehicle scenes.
In this implementation, the server acquires first data including vehicle data and environment data related to the first target vehicle's scene, preprocesses it into second data, and inputs the second data into the target scene file generation model to obtain a target scene file containing the control parameters of a target control during operation of the first target vehicle. The target scene file can thus be generated from the first data corresponding to the first target vehicle and the selected model, without manually configuring the scene file that controls vehicle operation, which greatly saves time and simplifies user operation. Because the model is trained on a data set of historical vehicle data and historical environment data related to vehicle scenes, feeding it first data acquired in real time lets it derive, from the target vehicle's scene-related environment and vehicle data at each moment, the scene file the user currently needs for controlling the vehicle, effectively meeting the user's real-time requirements. Further, the target scene file is inferred by the model from the scene-related environment and vehicle data, which is more accurate and objective than the prior-art approach of manually setting the scene file.
In one possible implementation manner of the first aspect, preprocessing the first data to obtain second data includes: vectorizing the first data to obtain third data; and performing position coding processing on the third data based on the time sequence information of the third data to obtain second data.
In this implementation, the vectorization and position-coding steps give the second data a more uniform format that is better suited as input to the target scene file generation model, improving the accuracy of the resulting target scene file.
In one possible implementation of the first aspect, acquiring the first data includes: receiving first information input by a user; inputting the first information into a language processing model for semantic processing to obtain a scene text generation instruction; and acquiring first data according to the scene text generation instruction.
In this implementation, first information input by the user is received, fed into a language processing model to obtain a scene text generation instruction, and the first data is then acquired accordingly. The target scene file can therefore be generated on the user's instruction, so that it matches the user's needs and improves the user experience.
In a possible implementation of the first aspect described above, the semantic processing includes semantic analysis processing and/or semantic similarity calculation processing.
In this implementation, the scene text generation instruction obtained through semantic analysis and/or semantic similarity calculation is more accurate.
In a possible implementation manner of the first aspect, in a case where the first information is voice information, inputting the first information into a language processing model for semantic processing includes: performing format conversion processing on the first information to obtain corresponding text information; the text information is input into a language processing model for semantic processing.
In this implementation, when the first information is voice information, it is first converted into text, and the text is then fed into the language processing model for semantic processing, which makes the resulting scene text generation instruction more accurate.
In one possible implementation manner of the first aspect, the server is any one of a cloud server, a vehicle end and a mobile terminal.
In this implementation, the target scene file can be generated by multiple types of server, which improves the flexibility of target scene file generation.
In a possible implementation manner of the first aspect, when the server is a cloud server, the method further includes: sending the target scene file to the first target vehicle through a first data transmission mode, where the first data transmission mode includes vehicle uplink/downlink channel protocol definition information, uplink/downlink channel data format information, request address form information, and file content information, so that the user corresponding to the first target vehicle can set the actual control parameters of the target control according to the target scene file.
In this implementation, sending the target scene file to the first target vehicle through a first data transmission mode comprising vehicle uplink/downlink channel protocol definition information, uplink/downlink channel data format information, request address form information, and file content information improves the readability and security of the transmission while keeping the target scene file as lightweight as possible.
In a possible implementation of the first aspect, the method further includes: acquiring actual control parameters; and under the condition that the actual control parameters are inconsistent with the control parameters of the target controls included in the target scene file, updating the target scene file generation model according to the actual control parameters.
In this implementation, when the actual control parameters of the target control are inconsistent with the control parameters in the target scene file, the target scene file generation model is updated according to the actual parameters. The model can thus be updated in real time, improving its accuracy and, in turn, the accuracy of the generated target scene files.
In one possible implementation of the first aspect described above, the vehicle data includes vehicle control data and vehicle intent type data, and the environment data includes natural environment data and third-party traffic data.
In this implementation, the vehicle control data, vehicle intent type data, natural environment data, and third-party traffic data allow a more accurate target scene file to be obtained, improving the user experience.
In a second aspect, an embodiment of the present application provides a training method for generating a model of a scene file, which is applied to a second server, where the method includes: determining a first training data set and determining an initial scene file generation model, wherein the first training data set comprises fourth data, and the fourth data comprises vehicle data and environment data related to a vehicle scene; and inputting the first training data set into the initial scene file generation model to perform model training to obtain a first target scene file generation model.
In this implementation, a first training data set comprising vehicle data and environment data related to vehicle scenes is determined and fed into an initial scene file generation model for training, yielding a first target scene file generation model. This model can accurately derive the scene file the user currently needs from the scene-related environment and vehicle data, effectively meeting the user's real-time requirements.
In a possible implementation of the second aspect, the method further includes: determining a second training data set, the second training data set comprising fifth data, the fifth data comprising vehicle data and environmental data related to a second target vehicle scene; and inputting the second training data set into the first target scene file generation model to perform optimization processing on the first target scene file generation model to obtain a second target scene file generation model.
In this implementation, the first target scene file generation model is optimized with a second training data set comprising vehicle data and environment data related to a second target vehicle's scene, yielding a second target scene file generation model. The first model, trained on general data (the first training data set), can thus be refined with each vehicle's personalized data and newly generated dynamic data (the second target vehicle's scene-related vehicle and environment data), so that the second model carries personalized model parameters and generates accurate target scene files for controlling the vehicle.
In a third aspect, an embodiment of the present application provides a scene file generating apparatus, including: the first processing module is used for acquiring first data corresponding to a first target vehicle, wherein the first data comprises vehicle data and environment data related to a vehicle scene of the first target vehicle; the second processing module is used for preprocessing the first data to obtain second data; the third processing module is used for inputting the second data into the target scene file generation model, enabling the target scene file generation model to perform scene file generation processing according to the second data to obtain a target scene file, wherein the target scene file comprises control parameters corresponding to a target control in the running process of the first target vehicle, the target scene file generation model is a scene file generation model obtained by training based on a training data set, and the training data set comprises historical vehicle data and historical environment data related to a vehicle scene.
In a fourth aspect, embodiments of the present application provide a scene file generation model training apparatus, including: a fourth processing module configured to determine a first training data set, and determine an initial scene file generation model, the first training data set including fourth data, the fourth data including vehicle data and environmental data related to a vehicle scene; and the fifth processing module is used for inputting the first training data set into the initial scene file generation model to perform model training so as to obtain a first target scene file generation model.
In a fifth aspect, embodiments of the present application provide an electronic device, including: a memory for storing a computer program, the computer program comprising program instructions; a processor, configured to execute program instructions to cause an electronic device to perform the method for generating a scene file provided by the first aspect and/or any one of the possible implementation manners of the first aspect, or to perform the method for training a scene file generation model provided by the second aspect and/or any one of the possible implementation manners of the second aspect.
In a sixth aspect, embodiments of the present application provide a computer readable storage medium storing a computer program, the computer program including program instructions that are executed by an electronic device to cause performance of the scene file generation method provided by any one of the possible implementations of the first aspect and/or the first aspect, or the scene file generation model training method provided by any one of the possible implementations of the second aspect and/or the second aspect.
In a seventh aspect, embodiments of the present application provide a computer program product, including a computer program/instruction, which when executed by a processor implements the scene file generation method provided by any one of the foregoing first aspect and/or any one of the foregoing possible implementation manners of the first aspect, or performs the scene file generation model training method provided by any one of the foregoing second aspect and/or any one of the foregoing possible implementation manners of the second aspect.
For the advantageous effects of the third through seventh aspects, refer to the descriptions of the first and second aspects; they are not repeated here.
The beneficial effects of the application are as follows:
According to the scene file generation method, the server acquires first data including vehicle data and environment data related to the first target vehicle's scene, preprocesses it into second data, and inputs the second data into the target scene file generation model to obtain a target scene file containing the control parameters of a target control during operation of the first target vehicle. The target scene file can thus be generated from the first data corresponding to the first target vehicle and the selected model, without manually configuring the scene file that controls vehicle operation, which greatly saves time and simplifies user operation. Because the model is trained on a data set of historical vehicle data and historical environment data related to vehicle scenes, feeding it first data acquired in real time lets it derive, from the target vehicle's scene-related environment and vehicle data at each moment, the scene file the user currently needs for controlling the vehicle, effectively meeting the user's real-time requirements. Further, the target scene file is inferred by the model from the scene-related environment and vehicle data, which is more accurate and objective than the prior-art approach of manually setting the scene file.
Drawings
In order to more clearly illustrate the technical solutions of the present application, the drawings used in the description of the embodiments are briefly introduced below.
FIG. 1 is a flow diagram illustrating a method of generating a scene file according to some implementations of the present application;
FIG. 2 is a schematic flow diagram illustrating one process of obtaining second data, according to some implementations of the present application;
FIG. 3 is a flow chart illustrating one process of obtaining first data corresponding to a first target vehicle according to some implementations of the present application;
FIG. 4 is a flow diagram illustrating one process for processing first information, according to some implementations of the present application;
FIG. 5 is a flow diagram illustrating another scenario file generation method, according to some implementations of the present application;
FIG. 6 is a schematic diagram illustrating one type of request address form information, according to some implementations of the present application;
FIG. 7 is a flow diagram illustrating a scenario file generation model training method, according to some implementations of the present application;
FIG. 8 is a flow diagram illustrating another scenario file generation model training method, according to some implementations of the present application;
FIG. 9 is a schematic structural diagram of an intelligent scene technology architecture for scene file generation, according to some implementations of the present application;
FIG. 10 is a schematic diagram illustrating an intelligent scene file generation model generating a scene file according to some implementations of the present application;
FIG. 11 is a schematic diagram illustrating a configuration of a scene file generation device according to some implementations of the present application;
FIG. 12 is a schematic diagram illustrating a scenario file generation model training apparatus, according to some implementations of the present application;
fig. 13 is a schematic structural diagram of an electronic device, according to some implementations of the present application.
Detailed Description
The technical solutions of the present application will be described in further detail below with reference to the accompanying drawings.
As described above, in the prior art the control parameters of each control in a scene file for controlling vehicle operation are set and arranged manually, which is time-consuming and labor-intensive; the resulting scene file is not necessarily accurate, cannot satisfy user needs well, and degrades the user experience.
Based on the above, the application provides a scene file generation method that obtains a target scene file for controlling vehicle operation from a pre-trained target scene file generation model together with the vehicle data and environment data related to the first target vehicle's scene. The scene file no longer has to be configured manually, which greatly saves time and simplifies user operation; the user's real-time requirements are met effectively, and the target scene file is more accurate and objective.
Next, implementation procedures and advantages of the scene file generation method provided in the present application are described in detail with reference to the accompanying drawings.
In an implementation of the present application, the scene file generation method may be applied to a first server, which may be any one of a cloud server, a vehicle end, or a mobile terminal, or other devices.
Specifically, data collection is performed first, covering data related to the vehicle scene such as vehicle owner data, third-party traffic data, and vehicle operation data generated while the vehicle runs; the target scene file is then generated by the server.
When the first server is a cloud server, a developer can receive the collected data on a PC-based developer platform and generate a target scene file for controlling vehicle operation through the general vehicle deep-learning neural network large model (i.e., the target scene file generation model) stored in advance on the cloud server. The PC receives text or voice input from the developer and renders the scene in graphical form; the developer iteratively refines it until the corresponding scene file (i.e., the target scene file) is produced, then submits and publishes it to the digital mall, where it is sold as a commodity. Vehicle owners browse the mall from their mobile phones or the vehicle head unit to purchase and download the optimized target scene file. Alternatively, the developer sends the finished scene file directly to the matching head unit so that the vehicle operates according to the target scene file.
When the first server is a mobile terminal (such as a mobile phone), the owner can input text or voice on the terminal and generate the corresponding target scene file from the collected data through the general vehicle deep-learning neural network large model stored in advance on the terminal; a graphical scene is generated and iteratively refined until the corresponding scene file is produced, which is then submitted to the cloud to be synchronized to the vehicle end for use. Alternatively, the mobile terminal sends the generated scene file directly to the vehicle end so that the vehicle operates according to the target scene file.
When the first server is the vehicle end, the owner can likewise input text or voice at the vehicle end and generate the corresponding target scene file from the collected data through the general vehicle deep-learning neural network large model stored in advance at the vehicle end; a graphical scene can be generated and iteratively refined until the final scene file is produced, which is used at the vehicle end or synchronized to the cloud, the mobile phone, and so on for browsing and editing by owners, developers, and others.
While the vehicle operates according to the target scene file, the owner's final operations on the vehicle can be detected in real time, so that the general vehicle deep-learning neural network large model is continuously and iteratively tuned on this feedback. The owner's needs and the vehicle's real-time state are thus captured promptly, more accurate target scene files are output, and intelligent control of the vehicle and head unit is achieved.
In one implementation manner of the present application, as shown in fig. 1, the scene file generating method includes the following steps:
s100: first data corresponding to a first target vehicle are acquired, wherein the first data comprise vehicle data and environment data related to a vehicle scene of the first target vehicle.
In one implementation of the present application, the vehicle data includes vehicle control data and vehicle intent type data, and the environment data includes natural environment data and third-party traffic data.
The vehicle control data can comprise air-conditioning data, driver and front-passenger data, rearview mirror data, ambient light data, seat data, door data, window data, light data, interior and exterior temperature data, battery data, system parameter data, multimedia data, and the like.
The vehicle intent type data may include, for example, intent 1: going to work; intent 2: getting off work; intent 3: traveling; intent 4: dining out; intent 5: taking children to and from school; and the like.
The natural environment data may include weather data, geographical location data, road condition data, navigation map data, and the like.
The third-party traffic data can include traffic road-condition data, traffic event data, road risk level data, road safety warning data, travel intensity data, and the like.
It should be noted that, to facilitate time alignment of the various collected data for subsequent processing, a timestamp is generated for each collected data record during collection, capturing the records' order; calendar features such as year, month, day, weekday, season, holiday, and time period also need to be extracted from each record's generation time. In other words, timestamp information is added to every collected datum.
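As a minimal sketch of this timestamping step (not code from the patent; the function name, record layout, and the abbreviated holiday set are assumptions for illustration), each record could be stamped and calendar features derived as follows:

```python
from datetime import datetime

# Hypothetical, incomplete holiday set used only for illustration.
CN_HOLIDAYS = {(1, 1), (5, 1), (10, 1)}

def stamp_record(record: dict, ts: datetime | None = None) -> dict:
    """Add a timestamp plus derived calendar features to a collected record."""
    ts = ts or datetime.now()
    record.update({
        "timestamp": ts.isoformat(),
        "year": ts.year,
        "month": ts.month,
        "day": ts.day,
        "weekday": ts.isoweekday(),          # 1 = Monday ... 7 = Sunday
        "season": (ts.month % 12) // 3 + 1,  # 1 = winter ... 4 = autumn
        "is_holiday": (ts.month, ts.day) in CN_HOLIDAYS,
    })
    return record
```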
It is to be understood that the vehicle data and the environmental data include, but are not limited to, the specific data content described above.
S200: and preprocessing the first data to obtain second data.
The preprocessing may specifically include fusion and cross processing, and more specifically vectorization, position coding, and other processing means; after the first data is preprocessed, the resulting second data is input into the target scene file generation model.
S300: and inputting the second data into the target scene file generation model, so that the target scene file generation model performs scene file generation processing according to the second data to obtain the target scene file.
The target scene file comprises control parameters corresponding to a target control in the running process of the first target vehicle, the target scene file generation model is a scene file generation model obtained by training based on a training data set, and the training data set comprises historical vehicle data and historical environment data related to the vehicle scene.
The historical vehicle data in the training data set may include vehicle owner data, vehicle control data, vehicle intent type data, and the like, and the historical environment data may include natural environment data, third-party traffic data, and the like.
The vehicle owner data can be group data extracted by statistical classification of the data distribution and key characteristics, such as industry/profession group data, usage-frequency group data, and other group data.
The vehicle control data may include massive historical data on air conditioning, driver and front-passenger settings, rearview mirrors, ambient lights, seats, doors, windows, lights, interior temperature, battery, system parameters, multimedia, and the like.
The vehicle intent type data may include, for example, intent 1: going to work; intent 2: getting off work; intent 3: traveling; intent 4: dining out; intent 5: taking children to and from school; and the like.
The data described above also record the time series of their generation during acquisition, to facilitate training the large model.
The target scene file may be a JSON file. JSON is a lightweight data-interchange format: it is easy to read and write, allows data exchange across many languages, and is easy for machines to parse and generate. Other file formats may also be used for the target scene file.
The historical vehicle data and historical environment data related to vehicle scenes can be obtained by collecting the first target vehicle's log data, obtaining service information provided by the vehicle manufacturer and/or third parties through the Internet-of-Vehicles function, accumulating data over time, and so on.
The target scene file generation model trained on the above data can generate a target scene file from the first data acquired in real time, for use in vehicle operation. The control parameters of the target controls included in the target scene file may cover several controls, for example the air conditioner's on/off parameters and the seat's adjustment parameters, or only the controls the owner requires: if the owner only wants to adjust the air conditioner, a target scene file containing only the air conditioner's control parameters can be generated.
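As a hedged illustration of what such a JSON target scene file might look like (all field names and values below are assumptions for illustration, not the patent's actual schema), a minimal example can be built and serialized like this:

```python
import json

# Hypothetical scene file: one air-conditioner control and one seat control.
target_scene_file = {
    "sceneId": "example-0001",
    "vin": "EXAMPLEVIN0000000",
    "controls": [
        {"control": "air_conditioner", "switch": "on", "temperature_c": 26},
        {"control": "driver_seat", "heating": "off", "position": "memory_1"},
    ],
}
print(json.dumps(target_scene_file, indent=2))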
Further, the target scene file generation model provided by the application can first be trained on general data to obtain a general vehicle large model, and then fine-tuned with each vehicle's personalized data and newly generated dynamic data, so that a personalized vehicle deep-learning neural network model (i.e., a target scene file generation model) is trained for each vehicle; that is, each vehicle's model has its own personalized model parameters.
According to the scene file generation method, the server acquires first data including vehicle data and environment data related to the first target vehicle's scene, preprocesses it into second data, and inputs the second data into the target scene file generation model to obtain a target scene file containing the control parameters of a target control during operation of the first target vehicle. The target scene file can thus be generated from the first data and the selected model, without manually configuring the scene file that controls vehicle operation, which greatly saves time and simplifies user operation. Because the model is trained on a data set of historical vehicle data and historical environment data related to vehicle scenes, feeding it first data acquired in real time lets it derive, from the target vehicle's scene-related environment and vehicle data at each moment, the scene file the user currently needs for controlling the vehicle, effectively meeting the user's real-time requirements. Further, the target scene file is inferred by the model from the scene-related environment and vehicle data, which is more accurate and objective than the prior-art approach of manually setting the scene file.
The first data acquired in step S100 differ greatly in how they are produced and structured, because they come from different sources. Common data modalities include images, text, sound, and so on. An image lives in the continuous space of the natural world, whereas text is a discrete space organized by human knowledge and grammatical rules. Different modalities express information differently and view an object from different angles, so they overlap (contain redundant information) and complement each other (carry more than any single feature), and richer interactions may exist between them. Handled properly, multi-modal data yields rich feature data. The raw multi-modal data therefore need preprocessing such as fusion and cross computation to extract feature data and convert them into a form suitable for the target scene file generation model.
In one implementation manner of the present application, as shown in fig. 2, preprocessing is performed on the first data to obtain second data, including the following steps:
s210: and carrying out vectorization processing on the first data to obtain third data.
Specifically, this may include performing embedding vectorization on the first data to obtain third data, and adding position codes to the third data according to the time order in which the data records occurred, yielding the second data that is input into the target scene file generation model.
For example, if the first data include text and image data, the text and the image may each be embedded as vectors, and the resulting embedding vectors added or dot-multiplied. Embedding has the advantages of simplicity, convenience, and low computational cost.
It will be appreciated that the fusion and crossing of the first data is not limited to embedding vectorization; other approaches may be used, such as representing image and text features with a Transformer architecture.
S220: and performing position coding processing on the third data based on the time sequence information of the third data to obtain second data.
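A minimal sketch of S210/S220 under assumed shapes is shown below (not the patent's actual model: the toy hash-based embedding stands in for a learned embedding table, and standard sinusoidal position codes stand in for whatever position coding the model actually uses):

```python
import numpy as np

D_MODEL = 64

def embed(record_tokens: list[str]) -> np.ndarray:
    """Placeholder for a learned embedding: deterministic random vector per record."""
    rng = np.random.default_rng(abs(hash(tuple(record_tokens))) % (2**32))
    return rng.standard_normal(D_MODEL)

def positional_encoding(seq_len: int, d_model: int = D_MODEL) -> np.ndarray:
    """Sinusoidal position codes indexed by the records' time order."""
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model)[None, :]
    angles = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

records = [["air_conditioner", "on"], ["window", "closed"], ["rain", "heavy"]]
third_data = np.stack([embed(r) for r in records])            # S210: vectorization
second_data = third_data + positional_encoding(len(records))  # S220: position coding
```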
In practice, the first data may be acquired automatically by the server under a set condition (such as at a fixed interval), or in response to interaction information input by the user when the server receives it.
In one implementation manner of the present application, as shown in fig. 3, the step of obtaining first data corresponding to a first target vehicle includes the following steps:
s110: first information input by a user is received.
S120: and inputting the first information into a language processing model for semantic processing to obtain a scene text generation instruction.
S130: and acquiring first data corresponding to the first target vehicle according to the scene text generation instruction.
For example, the user enters the text "it's so hot in the car" (i.e., the first information) through the server; the server feeds the received text into the language processing model for semantic processing, obtains the corresponding instruction "turn on the air conditioner" (i.e., the scene file generation instruction), and then acquires the first data.
In one implementation of the present application, the semantic processing includes semantic analysis processing and/or semantic similarity calculation processing.
In practical applications, the specific semantic processing performed by the language processing model differs with the specific content of the user's first information. Semantic analysis determines the importance of each word in the first information and the information's structural features, making the natural-language input easier to understand. Semantic similarity calculation computes the similarity between the keywords of the first information and candidate scene file generation instructions, and the instruction with the highest similarity value is selected as the one the first information requires.
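A hedged sketch of the similarity step follows; it assumes the user utterance and each candidate instruction are already embedded as vectors by whatever sentence encoder the language processing model uses, and simply picks the candidate with the highest cosine similarity:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_instruction(utterance_vec: np.ndarray,
                      candidates: dict[str, np.ndarray]) -> str:
    """Return the candidate instruction most similar to the utterance."""
    return max(candidates, key=lambda name: cosine(utterance_vec, candidates[name]))

# Toy usage with random stand-in embeddings:
rng = np.random.default_rng(0)
cands = {"turn on the air conditioner": rng.standard_normal(8),
         "open the window": rng.standard_normal(8)}
print(match_instruction(rng.standard_normal(8), cands))
```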
The first information input by the user may be text or voice. When it is voice information, the language processing model cannot understand it directly, so it must be converted into a format the model can understand: the voice information undergoes format conversion to obtain text information, and the text information is then input into the language processing model for semantic processing.
In one implementation manner of the present application, as shown in fig. 4, in the case that the first information is voice information, the first information is input to a language processing model for semantic processing, including the following steps:
s121: and carrying out format conversion processing on the first information to obtain corresponding text information.
S122: the text information is input into a language processing model for semantic processing.
After the target scene file is obtained through S100 to S300, the target scene file generation model may be dynamically updated with the actual control parameters of the first target vehicle's target control, continuously optimizing the first target scene file generation model so that the resulting second target scene file generation model more accurately obtains the scene file the second target vehicle's user currently needs.
In an implementation of the present application, when the first server is a cloud server, the method further includes: sending the target scene file to the first target vehicle through a first data transmission mode, where the first data transmission mode includes vehicle uplink/downlink channel protocol definition information, uplink/downlink channel data format information, request address form information, and file content information, so that the user corresponding to the first target vehicle can set the actual control parameters of the target control according to the target scene file.
For example, the message formats defined by the vehicle uplink/downlink channel protocol may be as shown in Tables 1 and 2. Table 1 is the message header format: the header occupies 16 bytes in total, each field having a fixed length. Table 2 is the message domain format: field definitions differ by bit position, and each field has a corresponding byte length and remark. Messages in this data format are small, tamper-resistant, replay-resistant, and so on.
Table 1: Message header format of the uplink/downlink channel protocol definition information
Table 2: Message domain format of the uplink/downlink channel protocol definition information
Further, the message type identifier may consist of 4 fixed-length numeric ASCII characters, i.e., 4 bytes. For example, in hexadecimal ASCII encoding, 0100 denotes an uplink request message, 0101 an uplink response message, 0200 a downlink request message, and 0201 a downlink response message.
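As an illustrative (assumed) helper for this identifier scheme, the mapping and byte encoding could look like the following; each ASCII character is one byte, giving 4 bytes on the wire:

```python
MESSAGE_TYPES = {
    "0100": "uplink request message",
    "0101": "uplink response message",
    "0200": "downlink request message",
    "0201": "downlink response message",
}

def encode_message_type(ident: str) -> bytes:
    """Encode a 4-character message type identifier as 4 ASCII bytes."""
    if ident not in MESSAGE_TYPES:
        raise ValueError(f"unknown message type identifier: {ident}")
    return ident.encode("ascii")  # e.g. "0100" -> b"0100" (4 bytes)
```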
The encoding of the uplink/downlink channel data format information may specify the required fields (such as the request id, vehicle code vin, user code userid, file content SceneContent, timestamp, etc.) and their data types (such as hexadecimal string, string, integer, etc.).
The request address form information may be as follows, comprising a method, a content type, and a request address: the method may be POST, the content-type application/json, and the request http://server:port/scene/hit/addModifyscene/v1. Namely:
method:POST
content-type:application/json
request:http://server:port/scene/hit/addModifyscene/v1。
An example of a request is shown in fig. 5. The request includes fields such as the action, modification time (modifyTime), request number (requestId), scene description (sceneDescription), scene number (sceneId), source, user number (userId), VIN code, scene version (sceneVersion), and file content (SceneContent). For instance, action may be addModifyscene, modifyTime 1639477854567, requestId 1639477854567, sceneDescription a scene file description, sceneId 800007l, source CLOUD, userId 9, the VIN code SW00000000008866, sceneVersion a Format comprising Major: 2, Minor: 3, Revise: 1, and SceneContent 05678…, etc. Of course, the request may include other content as well.
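A sketch of this downlink request using the documented endpoint and the example body fields is given below; the server host/port is the text's placeholder, the values are those of the example above, and the exact body layout is an assumption, not a live API:

```python
import requests

url = "http://server:port/scene/hit/addModifyscene/v1"
body = {
    "action": "addModifyscene",
    "modifyTime": 1639477854567,
    "requestId": 1639477854567,
    "sceneDescription": "scene file description",
    "sceneId": "800007l",
    "source": "CLOUD",
    "userId": 9,
    "vin": "SW00000000008866",
    "sceneVersion": {"Format": {"Major": 2, "Minor": 3, "Revise": 1}},
    "sceneContent": "05678...",
}
response = requests.post(url, json=body,
                         headers={"content-type": "application/json"})
```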
Further, the SceneContent information may be hexadecimal and comprises four parts: a message header, a message type identifier, a bitmap, and a message domain; the corresponding format is shown in Table 3.
Table 3: File content information format

| Message header | Message type identifier | Bitmap | Message domain |
The first data transmission mode is used not only for sending the target scene file to the first target vehicle, but also for transmission during interactions between the vehicle end and the cloud, the mobile terminal and the cloud, and so on.
In one implementation of the present application, as shown in fig. 6, the method further includes the following steps:
s400: and acquiring actual control parameters.
S500: and under the condition that the actual control parameters are inconsistent with the control parameters of the target controls included in the target scene file, updating the target scene file generation model according to the actual control parameters.
Illustratively, the air conditioner (i.e., the target control) in the target scene file obtained through S300 is set to 26 °C, but the owner manually sets it to 25 °C while using the vehicle. The actual control parameter of the air conditioner is thus inconsistent with the air conditioner's parameter in the target scene file, so the target scene file generation model is updated with the current actual parameter together with the related vehicle and environment data, making the scene files it generates better match the user's actual needs. Continuously and iteratively tuning the model on the owner's final operations captures the owner's needs and the vehicle's real-time state promptly, so that more accurate scene files are output and one-touch intelligent control of the vehicle and head unit is achieved.
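A hedged sketch of S400/S500 follows (the function names, record layout, and fine_tune hook are assumptions, not the patent's API): mismatches between the owner's actual settings and the generated scene file are logged as correction samples, and the model is updated once enough corrections accumulate.

```python
correction_buffer: list[dict] = []

def collect_corrections(generated: dict, actual: dict, context: dict) -> None:
    """Log every control whose actual value disagrees with the generated one."""
    for control, generated_value in generated.items():
        actual_value = actual.get(control)
        if actual_value is not None and actual_value != generated_value:
            correction_buffer.append(
                {"context": context, "control": control, "target": actual_value}
            )

def maybe_update_model(model, min_samples: int = 100) -> None:
    if len(correction_buffer) >= min_samples:
        model.fine_tune(correction_buffer)  # hypothetical training hook
        correction_buffer.clear()

# Example from the text: generated file says 26 °C, owner set 25 °C.
collect_corrections({"air_conditioner_temp_c": 26},
                    {"air_conditioner_temp_c": 25},
                    {"exterior_temp_c": 33, "intent": "going to work"})
```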
When the actual control parameters are consistent with the parameters of the target controls in the target scene file, the output of the target scene file generation model is accurate and no update is needed.
The application also provides a training method of the scene file generation model, as shown in fig. 7, applied to the second server, the method comprises the following steps:
s10: determining a first training data set, and determining an initial scene file generation model, the first training data set including fourth data, the fourth data including vehicle data and environmental data related to a vehicle scene.
S20: and inputting the first training data set into the initial scene file generation model to perform model training to obtain a first target scene file generation model.
It is to be understood that the fourth data include vehicle data and environment data related to vehicle scenes, collected from vehicles of multiple models; the first target scene file generation model is therefore a general model shared across vehicle models. A scene file can be generated with this first model, but for a specific vehicle (e.g., the first target vehicle) the generated file may not yet meet the actual needs of that vehicle's user, so the first model needs further optimization.
The first training data set may include the fourth data described above, or may include other data than the fourth data.
The second server may specifically be any one of the cloud, the vehicle end, or the mobile terminal, may be realized by several of them cooperating, or may be other devices.
In one implementation of the present application, as shown in fig. 8, the method further includes the following steps:
s30: a second training data set is determined, the second training data set including fifth data, the fifth data including vehicle data and environmental data related to a second target vehicle scene.
S40: and inputting the second training data set into the first target scene file generation model to perform optimization processing on the first target scene file generation model to obtain a second target scene file generation model.
The second training data set may include the fifth data as described above, or may include other data than the fifth data.
The first target scene file generation model is optimized with the vehicle data and environment data related to the second target vehicle's scene to obtain the second target scene file generation model. Personalized fine-tuning on each vehicle's (i.e., the second target vehicle's) own data and newly generated dynamic data trains personalized model parameters for each vehicle; the vehicle services the owner needs are then inferred from the vehicle's environment at each moment and the owner's current behavior, and finally an accurate scene file is generated to control the vehicle, so that the second target scene file generation model more accurately obtains the scene file the second target vehicle's user currently needs.
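A minimal two-stage sketch of S10–S40 is shown below, under assumed tensor shapes and a stand-in architecture (the patent does not disclose the network): a general model is trained on fleet-wide data, then a copy is fine-tuned at a lower learning rate on one vehicle's own data to obtain the personalized second model.

```python
import copy
import torch
from torch import nn, optim

def train(model: nn.Module, dataset, epochs: int, lr: float) -> nn.Module:
    """Regress scene-file control parameters from preprocessed feature vectors."""
    opt = optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for features, control_params in dataset:
            opt.zero_grad()
            loss = loss_fn(model(features), control_params)
            loss.backward()
            opt.step()
    return model

# Synthetic stand-ins for the first (fleet-wide) and second (per-vehicle) sets.
fleet_dataset = [(torch.randn(32, 64), torch.randn(32, 16)) for _ in range(8)]
vehicle_dataset = [(torch.randn(8, 64), torch.randn(8, 16)) for _ in range(4)]

base = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 16))
first_model = train(base, fleet_dataset, epochs=10, lr=1e-3)        # S10/S20
second_model = train(copy.deepcopy(first_model), vehicle_dataset,   # S30/S40
                     epochs=3, lr=1e-4)
```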
In an implementation of the present application, as shown in fig. 9, an intelligent scene technology architecture for scene file generation is provided, i.e., an intelligent service-oriented architecture framework (AI SOA Framework; SOA: Service-Oriented Architecture). The architecture mainly comprises an AI algorithm cloud and an intelligent scene cloud. The AI algorithm cloud mainly handles data acquisition and the model's training, reasoning, optimization, and scene file generation; the intelligent scene cloud mainly synchronizes the scene files generated by the AI algorithm cloud and publishes them to the digital mall for owners to purchase and download, or receives scene files submitted by developers from the developer platform, audits and pushes them, and operates the digital mall. Specifically, the AI algorithm cloud may include a data acquisition module, a data processing and storage module, a data preprocessing module, a language model engine module, a scene generation engine module (vehicle end/cloud), a model reasoning engine module, a model training/evaluation and optimization module, and so on; the aforementioned second server may be part of the AI algorithm cloud. The intelligent scene cloud may include a scene service cloud module, a digital mall module, a developer platform module (PC side), and so on.
The data processing and storage module processes the acquired first data corresponding to the first target vehicle (e.g., cleaning it and screening out the useful data); the first data may be collected from vehicle log data by the data acquisition module, together with synchronized third-party traffic data, vehicle owner data, and the like. The module then passes the processed data to the data preprocessing module, which produces the second data. The preprocessing module feeds the second data to the model reasoning engine, whose inference output may include the control parameters of target controls such as the air conditioner, driver and front-passenger settings, rearview mirrors, ambient lights, seats, doors, windows, lights, multimedia, and navigation map, as well as the first target vehicle owner's intent type. The preprocessing module can also pass the second data to the model training/evaluation and optimization module for training and optimizing the target scene file generation model.
The scene generation engine module obtains the inference data from the model reasoning engine and performs scene file generation processing on it to obtain the target scene file.
When the vehicle end or the mobile terminal serves as the service end, it can provide only limited computing power because of hardware and software constraints, so the generation of the target scene file usually relies on the stronger computing power of the cloud.
As shown in fig. 9, when the first target vehicle (i.e., the vehicle end) serves as the first service end, the SOA Framework provides a scene vehicle-end application, a scene execution engine and a vehicle-cloud communication module, and a language model engine module is further deployed in the AI algorithm cloud. The first target vehicle interacts with the user through the scene vehicle-end application, receives first information input by the user (which may include text information or voice information), and sends the first information through the vehicle-cloud communication module to the language model engine for semantic processing, obtaining a scene text generation instruction. The language model engine module sends the scene text generation instruction to the scene generation engine; the scene generation engine acquires the reasoning data from the model reasoning engine according to the instruction and generates the target scene file from the reasoning data. Alternatively, the language model engine module can be arranged at the vehicle end, directly receive the first information input at the vehicle end, and perform the semantic processing to obtain the scene text generation instruction.
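As a non-limiting illustration, the interaction chain just described could look like the following; every function here is a hypothetical stub standing in for the corresponding engine, not an API of this application.

```python
def language_model_engine(text: str) -> str:
    # Stub for semantic processing: a deployed language model would map free
    # text to a structured scene text generation instruction.
    return f"GENERATE_SCENE: {text}"

def model_inference_engine(instruction: str) -> dict:
    # Stub: would run the target scene file generation model for this instruction.
    return {"air_conditioner": {"temperature_c": 23, "fan_level": 1}}

def scene_generation_engine(inference_data: dict) -> dict:
    # Stub: would assemble the reasoning data into a full target scene file.
    return {"controls": inference_data}

def handle_user_input(first_information: str) -> dict:
    instruction = language_model_engine(first_information)  # semantic processing
    inference_data = model_inference_engine(instruction)    # reasoning data
    return scene_generation_engine(inference_data)          # target scene file

print(handle_user_input("make the cabin cosy for the drive home"))
```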
The generation process when the mobile terminal serves as the service end is similar to that when the vehicle end serves as the service end. The difference is that a related application in the mobile terminal (such as an application program in a mobile phone, one example of a mobile terminal) or a web page (such as the developer platform) interacts with the user, receives the first information input by the user, and sends it through the vehicle-cloud communication module to the language model engine for semantic processing, which is not described again.
When the cloud server serves as the service end, the generated scene file is executed by the vehicle end, so it needs to be synchronized from the cloud to the vehicle end.
Further, the SOA Framework provides the intelligent scene cloud. After the scene generation engine in the AI algorithm cloud generates the target scene file, the file is synchronized to the intelligent scene cloud; the scene service cloud synchronizes it to the vehicle-cloud communication module for storage; the vehicle-cloud communication module sends a scene instruction to the scene execution engine; and the scene execution engine, in response to the scene instruction, acquires the target scene file from the vehicle-cloud communication module and executes it.
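As a non-limiting illustration, this synchronization path could be modeled as below; the classes are hypothetical in-memory stand-ins for the vehicle-cloud communication module and the scene execution engine.

```python
class VehicleCloudCommunicationModule:
    """Hypothetical vehicle-end store for scene files synced from the cloud."""
    def __init__(self):
        self._store = {}

    def store_scene_file(self, scene_id: str, scene_file: dict) -> None:
        self._store[scene_id] = scene_file  # synced by the scene service cloud

    def fetch(self, scene_id: str) -> dict:
        return self._store[scene_id]

class SceneExecutionEngine:
    """Executes a stored scene file in response to a scene instruction."""
    def __init__(self, comm: VehicleCloudCommunicationModule):
        self.comm = comm

    def on_scene_instruction(self, scene_id: str) -> None:
        scene_file = self.comm.fetch(scene_id)
        for control, params in scene_file["controls"].items():
            print(f"apply {control}: {params}")  # would call vehicle actuators

comm = VehicleCloudCommunicationModule()
comm.store_scene_file("s1", {"controls": {"ambient_light": {"on": True}}})
SceneExecutionEngine(comm).on_scene_instruction("s1")
```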
When the vehicle end serves as the first service end, under high timeliness requirements or when the vehicle is offline, generating the scene file in the cloud and resynchronizing it to the vehicle end for execution cannot meet users' high timeliness requirements because of the limits of vehicle-cloud communication. The scene generation engine can therefore be deployed at the vehicle end to generate the target scene file directly there.
A scene generation engine is deployed on the first target vehicle. Scene file generation model training and optimization are completed in the AI algorithm cloud, and the model parameters are synchronized to the model reasoning engine of the AI scene cloud for reasoning. Meanwhile, the model reasoning engine of the AI scene cloud synchronizes the model parameters to the scene generation engine of the first target vehicle. Using these parameters and the vehicle's edge computing capability, the scene generation engine of the first target vehicle generates a target scene file from the first data acquired by the vehicle and sends it to the scene execution engine for execution. The whole process of generating and executing the target scene file at the vehicle end thus never passes through the cloud and is not limited by the vehicle-cloud communication network, so the target scene file can be generated and executed more quickly, meeting high timeliness requirements.
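A non-limiting sketch of this edge path, assuming a PyTorch model whose parameters are pushed down from the cloud; sync_parameters and generate_on_vehicle are hypothetical names.

```python
import torch

def sync_parameters(vehicle_model: torch.nn.Module, cloud_state_dict: dict) -> None:
    """Load model parameters pushed down from the AI scene cloud."""
    vehicle_model.load_state_dict(cloud_state_dict)

def generate_on_vehicle(vehicle_model: torch.nn.Module,
                        second_data: torch.Tensor) -> dict:
    """Generate the target scene file locally; no vehicle-cloud round trip."""
    vehicle_model.eval()
    with torch.no_grad():
        control_params = vehicle_model(second_data)
    return {"controls": control_params.squeeze(0).tolist()}
```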
The intelligent scene cloud can also push the scene file to the mobile terminal associated with the first target vehicle, so that the user can read and edit the scene file in graphical form on the mobile terminal.
In addition, a scene file that a developer creates on a PC through the developer platform can be sold in the digital mall of the intelligent scene cloud after it passes review. After a user logs in to the digital mall through a mobile phone or the vehicle end and purchases a scene file, the digital mall synchronizes the purchased file through the scene service cloud to the vehicle end for execution or to the mobile phone end for reading.
In one implementation of the present application, fig. 10 shows a schematic diagram of generating a target scene file with an intelligent scene file generation model (i.e., the target scene file generation model). During real-vehicle use of a scene, vehicle-end multi-modal data (such as the vehicle owner data, vehicle data, environment data and third-party traffic data shown in fig. 10) are collected first. On the one hand, the collected multi-modal data and a preset target scene file generation model are used to predict the vehicle-use intent (such as driving on duty, sleeping in the vehicle, resting and entertainment in the vehicle, or holiday greetings) and the vehicle controls (such as air-conditioner air volume and temperature, lamp switch and brightness, preferred multimedia songs, or navigation destination); a scene, that is, a scene file, is generated automatically from these two predictions and then used in the real vehicle. The automatic generation of the scene file may be vehicle-end reasoning based on the vehicle-end scene generation engine, cloud reasoning based on the mobile-phone-end intelligent scene platform, or cloud reasoning based on the PC-end developer platform. In addition, during real-vehicle use, the first target vehicle can receive natural-language communication information prompting scene generation and describing operating conditions, and generate a target scene file from it. On the other hand, the multi-modal data are synchronized to a large-model technology base mainly used for training and optimizing the target scene file generation model. Specifically, the big-data model technology base acquires the multi-modal data to collect and store vehicle big data, performs multi-modal data fusion and label/portrait extraction on the stored data, obtains the target scene file generation model through model training and evaluation, and finally uses the model for reasoning calculation to obtain the target scene file.
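As a non-limiting illustration, the two predictions named above could be realized with a shared backbone and two heads, as in the following sketch; the architecture, layer sizes and class name are assumptions, since the application does not disclose a concrete network.

```python
import torch
import torch.nn as nn

class SceneFileGenerationModel(nn.Module):
    """Shared backbone with one head for vehicle-use intent, one for controls."""
    def __init__(self, in_dim=64, hidden=128, num_intents=4, num_controls=10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.intent_head = nn.Linear(hidden, num_intents)    # e.g. on-duty driving, resting
        self.control_head = nn.Linear(hidden, num_controls)  # e.g. temperature, brightness

    def forward(self, x):
        h = self.backbone(x)
        return self.intent_head(h), self.control_head(h)

model = SceneFileGenerationModel()
intent_logits, control_params = model(torch.randn(1, 64))  # one preprocessed sample
```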
Referring to fig. 11, fig. 11 shows a scene file generating apparatus of the present application, including: a first processing module for acquiring first data corresponding to a first target vehicle, wherein the first data includes vehicle data and environment data related to a vehicle scene of the first target vehicle; a second processing module for preprocessing the first data to obtain second data; and a third processing module for inputting the second data into the target scene file generation model, so that the target scene file generation model performs scene file generation processing according to the second data to obtain a target scene file, wherein the target scene file includes control parameters corresponding to a target control in the running process of the first target vehicle, the target scene file generation model is a scene file generation model trained on a training data set, and the training data set includes historical vehicle data and historical environment data related to a vehicle scene.
For the specific operations each processing module can perform, refer to the scene file generation method corresponding to fig. 1. Depending on the specific operation steps of that method, the scene file generating apparatus may include more or fewer processing modules. The first processing module may be the aforementioned data acquisition module and data processing and storage module, the second processing module may be the aforementioned data preprocessing module, and the third processing module may be the aforementioned model reasoning engine module and scene generation engine module, among others.
Referring to fig. 12, fig. 12 shows a scene file generation model training device of the present application, including: a fourth processing module for determining a first training data set and an initial scene file generation model, wherein the first training data set includes fourth data and the fourth data includes vehicle data and environment data related to a vehicle scene; and a fifth processing module for inputting the first training data set into the initial scene file generation model for model training to obtain a first target scene file generation model.
For the specific operations each processing module can perform, refer to the scene file generation model training method corresponding to fig. 7. Depending on the specific operation steps of that method, the training device may include more or fewer processing modules. The fourth processing module and the fifth processing module may be the aforementioned model training/evaluation and optimization module, among others.
Referring to fig. 13, fig. 13 is a block diagram illustrating a structure of an electronic device according to an implementation of the present application. The electronic device can include one or more processors 1002, system control logic 1008 coupled to at least one of the processors 1002, system memory 1004 coupled to the system control logic 1008, non-volatile memory (NVM) 1006 coupled to the system control logic 1008, and a network interface 1010 coupled to the system control logic 1008.
The processor 1002 may include one or more single-core or multi-core processors. The processor 1002 may include any combination of general-purpose and special-purpose processors (e.g., graphics processor, application processor, baseband processor, etc.). In implementations herein, the processor 1002 may be configured to perform the aforementioned scene file generation method or scene file generation model training method.
In some implementations, the system control logic 1008 may include any suitable interface controller to provide any suitable interface to at least one of the processors 1002 and/or any suitable device or component in communication with the system control logic 1008.
In some implementations, the system control logic 1008 may include one or more memory controllers to provide an interface to the system memory 1004. The system memory 1004 may be used for loading and storing data and/or instructions. The system memory 1004 of the electronic device can include any suitable volatile memory in some implementations, such as suitable dynamic random access memory (Dynamic Random Access Memory, DRAM).
NVM/memory 1006 may include one or more tangible, non-transitory computer-readable media for storing data and/or instructions. In some implementations, NVM/memory 1006 may include any suitable nonvolatile memory, such as flash memory, and/or any suitable nonvolatile storage device, such as at least one of a Hard Disk Drive (HDD), Compact Disc (CD) drive, or Digital Versatile Disc (DVD) drive.
NVM/memory 1006 may include a portion of the storage resources installed in the electronic device, or it may be accessible by, but not necessarily a part of, the device. For example, NVM/memory 1006 may be accessed over a network via the network interface 1010.
In particular, the system memory 1004 and NVM/storage 1006 may each include a temporary copy and a permanent copy of the instructions 1020. The instructions 1020 may include instructions that, when executed by at least one of the processors 1002, cause the electronic device to implement the aforementioned scene file generation method or scene file generation model training method. In some implementations, the instructions 1020, hardware, firmware, and/or software components thereof may additionally or alternatively be disposed in the system control logic 1008, the network interface 1010, and/or the processor 1002.
The network interface 1010 may include a transceiver to provide a radio interface for the electronic device to communicate with any other suitable device (e.g., front-end module, antenna, etc.) over one or more networks. In some implementations, the network interface 1010 may be integrated with other components of the electronic device. For example, the network interface 1010 may be integrated with at least one of the processor 1002, the system memory 1004, the NVM/storage 1006, and a firmware device (not shown) having instructions that, when executed by at least one of the processors 1002, implement the aforementioned scene file generation method or scene file generation model training method.
The network interface 1010 may further include any suitable hardware and/or firmware to provide a multiple-input multiple-output radio interface. For example, network interface 1010 may be a network adapter, a wireless network adapter, a telephone modem, and/or a wireless modem.
In one implementation, at least one of the processors 1002 may be packaged together with logic for one or more controllers of the system control logic 1008 to form a System in Package (SiP). In one implementation, at least one of the processors 1002 may be integrated on the same die with logic for one or more controllers of the system control logic 1008 to form a System on Chip (SoC).
The electronic device may further include input/output (I/O) devices 1012. The I/O device 1012 may include a user interface enabling a user to interact with the electronic device, and a peripheral component interface designed so that peripheral components can also interact with the electronic device. In some implementations, the electronic device further includes a sensor for determining at least one of environmental conditions and location information associated with the electronic device.
In some implementations, the user interface may include, but is not limited to, a display (e.g., a liquid crystal display, a touch screen display, etc.), a speaker, a microphone, one or more cameras (e.g., still image cameras and/or video cameras), a flashlight (e.g., light emitting diode flash), and a keyboard.
In some implementations, the peripheral component interface may include, but is not limited to, a non-volatile memory port, an audio jack, and a power interface.
In some implementations, the sensors may include, but are not limited to, gyroscopic sensors, accelerometers, proximity sensors, ambient light sensors, and positioning units. The positioning unit may also be part of the network interface 1010 or interact with the network interface 1010 to communicate with components of a positioning network, such as global positioning system (Global Positioning System, GPS) satellites.
It should be understood that the structure illustrated in this implementation of the present application does not constitute a specific limitation on the electronic device. In other implementations of the present application, the electronic device may include more or fewer components than shown, combine certain components, split certain components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Program code may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices in a known manner. For purposes of implementations of the present application, a processing system includes any system having a processor such as, for example, a digital signal processor (Digital Signal Processor, DSP), microcontroller, application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or microprocessor.
The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. Program code may also be implemented in assembly or machine language, if desired. Indeed, the mechanisms described herein are not limited in scope to any particular programming language. In either case, the language may be a compiled or interpreted language.
One or more aspects of at least one implementation may be implemented by representative instructions stored on a computer-readable storage medium, which represent various logic in a processor, which when read by a machine, cause the machine to fabricate logic to perform the techniques described herein. These representations, referred to as "IP cores," may be stored on a tangible computer readable storage medium and provided to a plurality of customers or production facilities for loading into the manufacturing machine that actually manufactures the logic or processor.
It should be noted that in the drawings, some structural or method features may be shown in a specific arrangement and/or order. However, it should be understood that such a particular arrangement and/or ordering may not be required. Rather, in some implementations, the features can be arranged in a different manner and/or order than shown in the illustrative drawings. Additionally, the inclusion of structural or methodological features in a particular figure is not meant to imply that such features are required in all implementations, and in some implementations, such features may not be included or may be combined with other features.
It should be noted that the terms "first," "second," and the like are used merely to distinguish between descriptions and should not be construed as indicating or implying relative importance.
While the present application has been shown and described with reference to certain preferred implementations, those of ordinary skill in the art will understand that the foregoing is a further detailed description of the present application in conjunction with specific implementations, and the practice of the present application is not limited to these descriptions. Those skilled in the art may make various changes in form and detail, including simple inferences or substitutions, without departing from the spirit and scope of the present application.

Claims (12)

1. A method for generating a scene file, which is applied to a first service end, the method comprising:
acquiring first data corresponding to a first target vehicle, wherein the first data comprises vehicle data and environment data related to a vehicle scene of the first target vehicle;
preprocessing the first data to obtain second data;
and inputting the second data into a target scene file generation model, so that the target scene file generation model carries out scene file generation processing according to the second data to obtain a target scene file, wherein the target scene file comprises control parameters corresponding to a target control in the running process of the first target vehicle, the target scene file generation model is a scene file generation model obtained by training based on a training data set, and the training data set comprises historical vehicle data and historical environment data related to a vehicle scene.
2. The scene file generation method according to claim 1, wherein preprocessing the first data to obtain second data comprises:
vectorizing the first data to obtain third data;
and carrying out position coding processing on the third data based on the time sequence information of the third data to obtain the second data.
3. The scene file generation method according to claim 1 or 2, wherein acquiring first data corresponding to a first target vehicle comprises:
receiving first information input by a user;
inputting the first information into a language processing model for semantic processing to obtain a scene text generation instruction;
and acquiring the first data corresponding to the first target vehicle according to the scene text generation instruction.
4. A scene file generation method according to claim 3, characterized in that the semantic processing comprises semantic analysis processing and/or semantic similarity calculation processing.
5. The scene file generation method according to claim 3 or 4, wherein, in the case where the first information is voice information, inputting the first information to a language processing model for semantic processing, comprises:
performing format conversion processing on the first information to obtain corresponding text information;
and inputting the text information into the language processing model for semantic processing.
6. The method for generating a scene file according to any one of claims 1 to 5, wherein the first server is any one of a cloud server, a vehicle end and a mobile terminal.
7. The method for generating a scene file according to claim 6, wherein in the case that the first server is a cloud server, the method further comprises:
and sending the target scene file to the first target vehicle through a first data transmission mode, so that a user corresponding to the first target vehicle sets actual control parameters corresponding to the target control according to the target scene file, wherein the first data transmission mode comprises vehicle uplink and downlink channel protocol definition information, uplink and downlink channel data format information, request address form information and file content information.
8. The scene file generation method according to claim 7, wherein the method further comprises:
acquiring the actual control parameters;
and under the condition that the actual control parameters are inconsistent with the control parameters of the target control included in the target scene file, updating the target scene file generation model according to the actual control parameters.
9. The scene file generation method according to any one of claims 1 to 8, wherein the vehicle data includes vehicle control data and vehicle intention type data, and the environment data includes natural environment data and three-way traffic data.
10. A method for training a scene file generation model, which is applied to a second server, the method comprising:
determining a first training data set and determining an initial scene file generation model, wherein the first training data set comprises fourth data, and the fourth data comprises vehicle data and environment data related to a vehicle scene;
and inputting the first training data set into the initial scene file generation model to perform model training to obtain a first target scene file generation model.
11. The scene file generation model training method of claim 10, wherein the method further comprises:
determining a second training data set, the second training data set comprising fifth data, the fifth data comprising vehicle data and environmental data related to a second target vehicle use scenario;
and inputting the second training data set into the first target scene file generation model to optimize the first target scene file generation model to obtain a second target scene file generation model.
12. An electronic device, comprising:
a processor, and a memory communicatively coupled to the processor;
the memory stores computer-executable instructions; the processor executes computer-executable instructions stored in the memory to cause the electronic device to implement the scene file generation method of any of claims 1-9 and/or to implement the scene file generation model training method of claim 10 or 11.