CN118014085A - Joint reasoning data processing method and device, electronic equipment, medium and vehicle - Google Patents

Joint reasoning data processing method and device, electronic equipment, medium and vehicle

Info

Publication number: CN118014085A
Application number: CN202410310124.3A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: reasoning, scene, joint, inference, result
Legal status: Pending
Inventors: 苟少帅, 姚萌, 彭胜波
Current Assignee: Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee: Beijing Baidu Netcom Science and Technology Co Ltd
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202410310124.3A
Publication of CN118014085A

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The disclosure provides a joint reasoning data processing method and apparatus, an electronic device, a medium, and a vehicle, and belongs to the technical field of data processing, in particular to the technical fields of automatic driving, artificial intelligence, federated reasoning, and the like. A specific implementation scheme is as follows: acquiring the joint reasoning scene in which an autonomous vehicle is located, and determining, from a plurality of preset joint reasoning modes and according to the joint reasoning scene, a to-be-executed reasoning mode corresponding to the joint reasoning scene; and acquiring reasoning data according to the to-be-executed reasoning mode, and processing the reasoning data to obtain a joint reasoning result.

Description

Joint reasoning data processing method and device, electronic equipment, medium and vehicle
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular to the technical fields of automatic driving, artificial intelligence, federated reasoning, and the like. Specifically, the disclosure relates to a joint reasoning data processing method and apparatus, an electronic device, a computer-readable storage medium, and an autonomous vehicle.
Background
With the development of artificial intelligence technology, automatic driving has increasingly entered people's daily lives. When a vehicle drives automatically, the movement track of an obstacle needs to be predicted. There may be many other autonomous vehicles in the area where the autonomous vehicle is located, and these vehicles may already have predicted the movement track of the obstacle, so the prediction can be performed with reference to their prediction results.
Disclosure of Invention
The disclosure provides a joint reasoning data processing method and device, electronic equipment, a computer readable storage medium and an automatic driving vehicle.
According to a first aspect of the present disclosure, there is provided a joint reasoning data processing method, the method comprising:
Acquiring a joint reasoning scene of an automatic driving vehicle, and determining a to-be-executed reasoning mode corresponding to the joint reasoning scene from a plurality of preset joint reasoning modes according to the joint reasoning scene;
and acquiring reasoning data according to the to-be-executed reasoning mode, and processing the reasoning data to obtain a joint reasoning result.
According to a second aspect of the present disclosure, there is provided a joint reasoning data processing apparatus, the apparatus comprising:
The reasoning mode module is used for acquiring a joint reasoning scene where the automatic driving vehicle is located, and determining a to-be-executed reasoning mode corresponding to the joint reasoning scene from a plurality of preset joint reasoning modes according to the joint reasoning scene;
and the reasoning execution module is used for acquiring reasoning data according to the to-be-executed reasoning mode, and processing the reasoning data to obtain a joint reasoning result.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the joint inference data processing method.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the above-described joint inference data processing method.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the above-described joint reasoning data processing method.
According to a sixth aspect of the present disclosure, there is provided an autonomous vehicle comprising the electronic device described above.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a flow diagram of a joint reasoning data processing method provided by an embodiment of the present disclosure;
FIG. 2 is a flow diagram of some steps of another joint reasoning data processing method provided by an embodiment of the present disclosure;
FIG. 3 is a flow diagram of some steps of another joint reasoning data processing method provided by an embodiment of the present disclosure;
FIG. 4 is a flow diagram of some steps of another joint reasoning data processing method provided by an embodiment of the present disclosure;
FIG. 5 is a flow diagram of some steps of another joint reasoning data processing method provided by an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a joint reasoning data processing apparatus provided by an embodiment of the present disclosure;
Fig. 7 is a block diagram of an electronic device for implementing the joint reasoning data processing method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In some related technologies, federated learning is an emerging foundational artificial intelligence technology that was first proposed in 2016. Its design objective is to carry out efficient machine learning among multiple participants or computing nodes while guaranteeing information security during big data exchange, protecting terminal data and personal data privacy, and ensuring legal compliance.
Federated model training is a mature method in the related art, but using the trained models for online reasoning, i.e., joint reasoning, remains difficult to implement, particularly in the field of automatic driving.
The embodiment of the disclosure provides a joint reasoning data processing method and device, electronic equipment, a computer-readable storage medium and an automatic driving vehicle, and aims to solve at least one of the technical problems in the prior art.
The joint reasoning data processing method provided by the embodiments of the present disclosure may be performed by an electronic device such as a terminal device or a server, where the terminal device may be a vehicle-mounted device, user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a wearable device, or the like; the method may be implemented by a processor invoking computer-readable program instructions stored in a memory. Alternatively, the method may be performed by a server.
Fig. 1 shows a flow diagram of a joint reasoning data processing method provided by an embodiment of the present disclosure. As shown in Fig. 1, the joint reasoning data processing method provided by the embodiment of the present disclosure may include step S110 and step S120.
In step S110, a joint reasoning scenario where the automatic driving vehicle is located is obtained, and a to-be-executed reasoning mode corresponding to the joint reasoning scenario is determined from a plurality of preset joint reasoning modes according to the joint reasoning scenario;
in step S120, the inference data is obtained according to the inference mode to be executed, and the inference data is processed to obtain a joint inference result.
For example, in step S110, the joint reasoning scene in which the autonomous vehicle is located may be acquired based on the types and number of scene participants that can participate in joint reasoning within a preset range of the autonomous vehicle, and the distances between those scene participants and the autonomous vehicle.
Here, scene participants may include other vehicles, obstacles, traffic indicators (e.g., traffic lights, lane lines, etc.), pedestrians, and other entities that may affect the travel of the autonomous vehicle.
Scene participants that can participate in joint reasoning can be scene participants that can communicate data with the autonomous vehicle in a wired or wireless manner and that have data processing capabilities.
In some possible implementations, the scene participants that can participate in joint reasoning can be determined by the autonomous vehicle transmitting communication signals.
In some possible implementations, after the joint reasoning scenario is acquired, an inference mode to be performed corresponding to the joint reasoning scenario may be determined from preset joint reasoning modes through analysis of the joint reasoning scenario.
The preset joint reasoning mode may include a preset method for acquiring reasoning data and a preset method for processing the reasoning data.
Different joint reasoning modes define different data acquisition methods and data processing methods, so that the requirements of the different joint reasoning modes on computing resources are different, and the time required in the process of executing the methods is also different.
Thus, the analysis of the joint reasoning scene may be an analysis of the computing resources of the autonomous vehicle, so as to select a joint reasoning mode that matches those computing resources as the to-be-executed reasoning mode corresponding to the joint reasoning scene.
In some possible implementations, because the execution of the method may require interaction with the scene participants of the joint reasoning scene, the analysis of the joint reasoning scene may also include an analysis of whether the number, types, and computing resources of the scene participants satisfy the joint reasoning mode.
In some possible implementations, in step S120, obtaining the reasoning data according to the to-be-executed reasoning mode and processing the reasoning data to obtain the joint reasoning result may mean obtaining the reasoning data according to the data-acquisition method corresponding to the to-be-executed reasoning mode, and processing the obtained reasoning data according to the data-processing method corresponding to the to-be-executed reasoning mode, to obtain the joint reasoning result.
The inference data may include data required for joint inference of models that can be acquired by the autonomous vehicle, and the required inference data may be different according to different inference modes to be performed.
In the joint reasoning data processing method provided by the embodiment of the present disclosure, the manner of acquiring and processing the reasoning data is determined according to the joint reasoning scene in which the autonomous vehicle is located, the reasoning data are acquired according to the preset method corresponding to that mode, and the reasoning data are processed to obtain the joint reasoning result. For different joint reasoning scenes, the joint reasoning data processing method provided by the embodiment of the present disclosure can use different joint reasoning modes to meet the requirements of those scenes, and can thus satisfy the demands of complex and changeable automatic driving scenarios.
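For illustration only, the overall structure of steps S110 and S120 can be sketched in Python as follows; all class, function, and field names here are illustrative assumptions and are not part of the disclosure, which does not prescribe any particular implementation.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class JointReasoningScene:
    # Simplified description of the scene; real implementations may carry far more detail.
    participant_count: int
    participant_types: List[str]
    nearest_participant_distance_m: float


@dataclass
class JointReasoningMode:
    # A preset joint reasoning mode bundles a data-acquisition method and a data-processing method.
    name: str
    acquire_data: Callable[[JointReasoningScene], dict]
    process_data: Callable[[dict], dict]


def run_joint_reasoning(scene: JointReasoningScene,
                        modes: Dict[str, JointReasoningMode],
                        select_mode: Callable[[JointReasoningScene], str]) -> dict:
    # Step S110: determine the to-be-executed mode for the current joint reasoning scene.
    mode = modes[select_mode(scene)]
    # Step S120: acquire and process the reasoning data with the methods of the selected mode.
    reasoning_data = mode.acquire_data(scene)
    return mode.process_data(reasoning_data)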
The method for processing the joint reasoning data provided by the embodiment of the disclosure is specifically described below.
As described above, in some possible implementations, the to-be-performed inference mode corresponding to the joint inference scenario may be determined from among preset joint inference modes through analysis of the joint inference scenario.
In some possible implementations, the inference frequency requirement of the joint inference scene can be obtained through analysis of the joint inference scene, and the to-be-executed inference mode corresponding to the joint inference scene is determined from preset joint inference modes according to the inference frequency requirement.
In some possible implementations, obtaining the reasoning-frequency requirement of the joint reasoning scene through analysis of the scene may mean that the autonomous vehicle judges how urgently it needs the reasoning result according to the distances of the different kinds of scene participants, and combines this with the computing power of the autonomous vehicle and the other scene participants to determine the reasoning-frequency requirement of the joint reasoning scene.
In some possible implementations, the reasoning-frequency requirement of the joint reasoning scene can be expressed in QPS (queries per second). The to-be-executed reasoning mode is then determined from the joint reasoning modes according to the QPS ranges corresponding to the different joint reasoning modes and the range into which the QPS of the joint reasoning scene falls.
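A minimal sketch of such a selection follows; the QPS thresholds and mode names are assumptions chosen for illustration, since the disclosure does not specify concrete QPS ranges for the preset modes.

def select_mode_by_qps(required_qps: float) -> str:
    # Map the scene's reasoning-frequency requirement onto one of the preset joint reasoning
    # modes; each mode is assumed to be registered with its own supported QPS range.
    if required_qps >= 100.0:
        return "model_fusion"     # frequent reasoning kept local after a one-off model merge
    if required_qps >= 10.0:
        return "tee"              # centralized reasoning in a trusted execution environment
    return "result_merging"      # infrequent reasoning; each participant reasons separately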
In some possible implementations, the preset joint inference mode may include a joint inference mode based on model fusion.
In some possible implementations, the joint reasoning mode based on model fusion refers to combining the models before reasoning and deploying the combined model on one party for reasoning.
Fig. 2 shows a flow chart of one implementation manner of obtaining a joint reasoning result in the case that the reasoning mode to be performed is a joint reasoning mode based on model fusion, and as shown in fig. 2, obtaining the joint reasoning result may include step S210, step S220, step S230, and step S240.
In step S210, receiving inference data sent by other scene participants of the joint inference scene, where the inference data includes model parameters of the scene participants;
In step S220, a fused inference model is calculated according to model parameters corresponding to the autonomous vehicle and model parameters of a plurality of scene participants;
In step S230, self-vehicle reasoning features corresponding to the autonomous vehicle are obtained, the self-vehicle reasoning features are input into the fusion reasoning model, and a self-vehicle reasoning result is obtained;
In step S240, a joint reasoning result is obtained at least according to the self-vehicle reasoning result.
In some possible implementations, in step S210, the inference data sent by the other scene participants may be received by wired or wireless means.
The reasoning data comprise model parameters saved by scene participants, and the model parameters can comprise model structures and weights corresponding to the structures.
In some possible implementations, it is not necessary to receive the reasoning data sent by all scene participants; only the reasoning data of the scene participants relevant to the reasoning needs to be received.
The embodiments of the present disclosure do not limit how the scene participants relevant to the reasoning are determined.
In some possible implementations, in step S220, model parameters corresponding to the autonomous vehicle may be acquired first, and then the model parameters corresponding to the autonomous vehicle and model parameters of the scene participant are fused to generate a fused inference model.
The embodiments of the present disclosure do not limit the manner of fusion.
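As one possible fusion scheme, assuming all participants share the same model structure, the weight tensors could simply be averaged key by key; the sketch below is only an example of such a choice and is not the method prescribed by the disclosure.

from typing import Dict, List

import numpy as np


def fuse_model_parameters(param_sets: List[Dict[str, np.ndarray]]) -> Dict[str, np.ndarray]:
    # Element-wise average of the weights of models with identical structure.
    # param_sets[0] holds the autonomous vehicle's own parameters, the rest come from
    # the other scene participants.
    fused = {}
    for key in param_sets[0]:
        fused[key] = np.mean([params[key] for params in param_sets], axis=0)
    return fused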
In some possible implementations, in step S230, the features that the autonomous vehicle needs to reason about may be obtained as the self-vehicle reasoning features corresponding to the autonomous vehicle, and the obtained self-vehicle reasoning features are input into the obtained fusion reasoning model to obtain the self-vehicle reasoning result.
The embodiments of the present disclosure do not limit the manner of obtaining the self-vehicle reasoning features corresponding to the autonomous vehicle.
In some possible implementations, the inference features provided by other scenario participants may also be obtained, and the scenario-party inference results may be obtained according to the inference features provided by other scenario participants.
In some possible implementations, a sample identifier sent by a scene participant may be received, and the scene reasoning features may be obtained by querying according to the sample identifier.
In some specific implementations, in order to ensure the security of the data, the inference data is not directly transmitted, but all scene participants, including the automated driving vehicle, upload the inference data to a third party service system or database, and use the sample identifier as the identifier of the inference data.
After receiving the sample identification sent by the scene participant, the scene reasoning feature can be obtained from the third-party service system or database query according to the sample identification.
In some possible implementations, a set of generic data reading interfaces may be designed so that all scene participants can read data from a third party business system or database using generic protocols such as HTTP (hypertext transfer protocol), gRPC (open source high performance remote procedure call framework), etc.
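A small sketch of such a data-reading interface is given below; the service URL, the query path, and the JSON response format are hypothetical, and a gRPC variant would expose the same query through a shared service definition instead.

import json
import urllib.request


def read_reasoning_features(service_url: str, sample_id: str) -> dict:
    # Query the third-party business system or database for the reasoning features
    # stored under the given sample identifier (hypothetical REST endpoint).
    url = f"{service_url}/features?sample_id={sample_id}"
    with urllib.request.urlopen(url, timeout=2.0) as response:
        return json.loads(response.read().decode("utf-8"))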
In some possible implementations, after the scenario-inference features are obtained, the scenario-party inference results may be obtained by inputting the scenario-inference features into a fused inference model.
In some possible implementations, the step of inputting the scene reasoning features into the fusion reasoning model to obtain the scene-party reasoning result, and the step of inputting the acquired self-vehicle reasoning features into the fusion reasoning model, may be executed in an isolated environment so as to further improve data security.
In some possible implementations, in step S240, a final joint inference result may be obtained according to the scenario-side inference result and the own-vehicle inference result.
The embodiment of the disclosure does not limit a specific method for acquiring the final combined reasoning result according to the scene party reasoning result and the vehicle reasoning result.
In the above joint reasoning mode based on model fusion, the interaction between the scene participants and the autonomous vehicle is very simple: the number of interactions is reduced as much as possible, which lowers the communication delay and thus shortens the time of the whole reasoning process.
In some possible implementations, the preset joint inference mode may include a TEE (trusted execution environment) -based joint inference mode.
In some possible implementations, the joint reasoning mode based on the TEE is to deploy a model in the TEE, and reasoning is performed after each scene participant uploads reasoning data to the TEE to obtain a joint reasoning result.
Fig. 3 shows a flow chart of one implementation of obtaining a joint reasoning result in the case that the reasoning mode to be performed is a TEE-based joint reasoning mode, and as shown in fig. 3, obtaining the joint reasoning result may include step S310 and step S320.
In step S310, obtaining model parameters corresponding to the automatic driving vehicle, and sending the model parameters corresponding to the automatic driving vehicle as reasoning data to the trusted execution environment, so that the trusted execution environment calculates a fusion reasoning model according to the model parameters corresponding to the automatic driving vehicle;
In step S320, the self-vehicle reasoning features are obtained according to the sample identifier corresponding to the autonomous vehicle, and the self-vehicle reasoning features are sent to the trusted execution environment, so that the trusted execution environment can obtain the joint reasoning result according to the self-vehicle reasoning features and the fusion reasoning model.
In some possible implementations, in step S310, all scene participants of the joint reasoning scene (including the autonomous vehicle) send their model parameters to the TEE, so that the TEE fuses all the model parameters to generate a fusion reasoning model. The embodiments of the present disclosure do not limit the manner of fusion.
In some possible implementations, in step S320, features that the autonomous vehicle needs to infer may be obtained as the corresponding self-vehicle inference features of the autonomous vehicle, and the obtained self-vehicle inference features are sent to the TEE, so that the TEE inputs the self-vehicle inference features into the fusion inference model to obtain the joint inference result.
In some possible implementations, the sample identifier corresponding to the automatic driving vehicle may also be sent to other scene participants of the joint reasoning scene, so that the scene participants may obtain scene reasoning features according to the sample identifier corresponding to the automatic driving vehicle, and send the scene reasoning features to the TEE, and the TEE may input the scene reasoning features into the fusion reasoning model to obtain the joint reasoning result together with the self-vehicle reasoning features.
In the same way, the reasoning features are obtained through the sample identifiers, so that the reasoning data are not directly transmitted, but all scene participants including the automatic driving vehicle upload the reasoning data to the third-party service system or the database, and the sample identifiers are used as the identifiers of the reasoning data to ensure the safety of the data.
Likewise, a set of generic data reading interfaces may be designed so that all scene participants may read data from a third party business system or database using a generic protocol such as HTTP, gRPC, etc.
In some possible implementations, after obtaining the joint reasoning result, the TEE may send it to the autonomous vehicle by means of communication, so that the autonomous vehicle can perform subsequent processing according to the joint reasoning result.
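From the vehicle side, steps S310 and S320 could look roughly like the following; the tee_client and feature_store objects and their methods are hypothetical stand-ins for the concrete trusted-execution-environment interface and the data-reading interface actually used.

def tee_joint_reasoning(tee_client, feature_store, own_model_parameters: dict,
                        sample_id: str) -> dict:
    # Step S310: send this vehicle's model parameters to the TEE as reasoning data,
    # so the TEE can compute the fusion reasoning model from all participants' parameters.
    tee_client.send_model_parameters(own_model_parameters)
    # Step S320: query the self-vehicle reasoning features by sample identifier and send
    # them to the TEE, which runs the fusion reasoning model and returns the joint result.
    own_features = feature_store.query(sample_id)
    return tee_client.infer(own_features)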
In some possible implementations, the preset joint inference mode may include a joint inference mode based on result merging.
The combined reasoning mode based on result combination is that each scene participant separately calculates the reasoning result, and finally the results are combined.
Fig. 4 is a flow chart of an implementation manner of obtaining a joint inference result in the case that the inference mode to be performed is a joint inference mode based on result merging, and as shown in fig. 4, obtaining the joint inference result may include step S410, step S420, and step S430.
In step S410, the sample identifier corresponding to the automatic driving vehicle is sent to other scene participants of the joint reasoning scene, so that the scene participants can obtain scene reasoning features according to the sample identifier corresponding to the automatic driving vehicle, and obtain scene side reasoning results according to the scene reasoning features;
In step S420, obtaining a self-vehicle reasoning feature according to a sample identifier corresponding to the self-driving vehicle, and obtaining a self-vehicle reasoning result according to the self-vehicle reasoning feature;
In step S430, a scenario party inference result sent by the scenario party is received, and a joint inference result is obtained according to the scenario party inference result and the self-vehicle inference result.
In some possible implementations, in step S410, the sample identifier corresponding to the autonomous vehicle is sent to other scene participants, so that the other scene participants obtain, according to the sample identifier corresponding to the autonomous vehicle, an inference feature that can provide information for the driving of the autonomous vehicle, that is, a scene inference feature.
In some possible implementations, the scene participant inputs scene reasoning features into a model stored by the scene participant to obtain scene-party reasoning results.
In some possible implementations, in step S420, features that the autonomous vehicle needs to infer may be obtained as the corresponding self-vehicle inference features of the autonomous vehicle, and the obtained self-vehicle inference features are input into a model stored in the autonomous vehicle itself to obtain the self-vehicle inference result.
In some possible implementations, in step S430, a scenario party inference result sent by the scenario party is received, and a joint inference result is obtained according to the scenario party inference result and the self-vehicle inference result.
The embodiments of the present disclosure do not limit the specific method of combining the reasoning results.
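One simple merging option, assuming the per-participant results are numeric predictions of the same shape (for example, predicted obstacle positions), is a weighted average; the sketch below is an illustrative assumption, not the merging method of the disclosure.

from typing import Optional, Sequence

import numpy as np


def merge_inference_results(results: Sequence[np.ndarray],
                            weights: Optional[Sequence[float]] = None) -> np.ndarray:
    # Combine the scene-party reasoning results and the self-vehicle reasoning result
    # into one joint reasoning result by a (normalized) weighted average.
    stacked = np.asarray(results, dtype=float)            # shape: (n_participants, ...)
    w = np.ones(len(stacked)) if weights is None else np.asarray(weights, dtype=float)
    w = w / w.sum()
    w = w.reshape((-1,) + (1,) * (stacked.ndim - 1))      # broadcast over prediction dims
    return (w * stacked).sum(axis=0)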
In the same way, the reasoning features are obtained through the sample identifiers, so that the reasoning data are not directly transmitted, but all scene participants including the automatic driving vehicle upload the reasoning data to the third-party service system or the database, and the sample identifiers are used as the identifiers of the reasoning data to ensure the safety of the data.
Likewise, a set of generic data reading interfaces may be designed so that all scene participants may read data from a third party business system or database using a generic protocol such as HTTP, gRPC, etc.
Meanwhile, the step of acquiring the scene-party reasoning result according to the scene reasoning features and the step of acquiring the self-vehicle reasoning result according to the acquired self-vehicle reasoning features may be executed in an isolated environment, so as to further improve data security.
Fig. 5 shows a flowchart of another implementation manner of obtaining the joint inference result in the case that the inference mode to be performed is the joint inference mode based on the result combination, and as shown in fig. 5, obtaining the joint inference result may include step S510, step S520, step S530, and step S540.
In step S510, a sample identifier sent by a scene participant of the joint reasoning scene is received, and a scene reasoning feature is obtained according to the sample identifier query;
In step S520, a scene-side reasoning result is obtained according to the scene reasoning feature;
In step S530, the scenario party reasoning result is sent to the scenario party, so that the scenario party obtains a joint reasoning result according to the scenario party reasoning result;
in step S540, the joint reasoning result sent by the scene participant is received.
In some possible implementations, in step S510, a certain scene participant sends its corresponding sample identifier to the other scene participants in the joint reasoning scene; the autonomous vehicle, acting as one of those other scene participants, receives the sample identifier sent by that scene participant and acquires the scene reasoning features according to the received sample identifier.
In some possible implementations, after the scene reasoning features are acquired, the scene reasoning features are input into a model stored by the autonomous vehicle to acquire scene party reasoning results in step S520.
In some possible implementations, in step S530, the scene-party reasoning result is sent by means of communication to the scene participant that sent the sample identifier, so that that scene participant can obtain the joint reasoning result according to the reasoning results sent by the scene participants, including the autonomous vehicle.
In some possible implementations, after the scene participant that sent the sample identifier obtains the joint reasoning result, it may continue with its subsequent steps according to the joint reasoning result, or it may send the joint reasoning result to the other scene participants, including the autonomous vehicle, which receives it in step S540, so that those scene participants can acquire information from the joint reasoning result.
That is, the autonomous vehicle may assist other scenario participants in performing joint reasoning, and acquire information desired by itself according to the joint reasoning result acquired by the other scenario participants.
In the same way, the reasoning features are obtained through the sample identifiers, so that the reasoning data are not directly transmitted, but all scene participants including the automatic driving vehicle upload the reasoning data to the third-party service system or the database, and the sample identifiers are used as the identifiers of the reasoning data to ensure the safety of the data.
Likewise, a set of generic data reading interfaces may be designed so that all scene participants may read data from a third party business system or database using a generic protocol such as HTTP, gRPC, etc.
Meanwhile, the scene party reasoning result can be acquired according to the scene reasoning characteristics and executed in the isolation environment so as to further improve the data security.
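Seen from the responding side, steps S510 to S530 amount to a small request-handling routine; in the sketch below, feature_store, local_model, and reply are hypothetical stand-ins for the data-reading interface, the locally stored model, and the communication channel back to the requesting scene participant.

def answer_joint_reasoning_request(sample_id: str, feature_store, local_model, reply) -> None:
    # Step S510: look up the scene reasoning features for the received sample identifier.
    scene_features = feature_store.query(sample_id)
    # Step S520: run the locally stored model (in practice inside an isolated environment).
    scene_party_result = local_model.predict(scene_features)
    # Step S530: return the scene-party reasoning result to the requesting participant.
    reply(scene_party_result)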
The joint reasoning mode based on model fusion, the joint reasoning mode based on the TEE, and the joint reasoning mode based on result merging together cover different joint reasoning scenes, so the method is suitable for the automatic driving field, where scenes are complex and changeable. Meanwhile, in the reasoning processes of these three modes, there is little interaction among the scene participants taking part in the joint reasoning, so the communication delay is low. In particular, in the joint reasoning mode based on the TEE, data are sent to the TEE for centralized reasoning: the participants interact with the TEE rather than with each other, so interaction among participants is minimal and the communication delay is low.
Based on the same principle as the method shown in fig. 1, fig. 6 shows a schematic structural diagram of a joint reasoning data processing apparatus provided by an embodiment of the present disclosure, and as shown in fig. 6, the joint reasoning data processing apparatus 60 may include:
the reasoning mode module 610 is configured to obtain a joint reasoning scenario where the automatic driving vehicle is located, and determine a to-be-executed reasoning mode corresponding to the joint reasoning scenario from a plurality of preset joint reasoning modes according to the joint reasoning scenario;
The inference execution module 620 is configured to obtain inference data according to the inference mode to be executed, and process the inference data to obtain a joint inference result.
In the joint reasoning data processing device provided by the embodiment of the disclosure, a mode for acquiring the reasoning data and processing the reasoning data is determined according to a joint reasoning scene where an automatic driving vehicle is located, the reasoning data is acquired according to a preset method corresponding to the mode, the reasoning data is processed, and a joint reasoning result is acquired. Aiming at different joint reasoning scenes, the joint reasoning data processing device provided by the embodiment of the disclosure can use different joint reasoning modes to meet the requirements of different joint reasoning scenes and can meet the requirements of complex and changeable automatic driving scenes.
In some possible implementations, the inference mode module 610 is further configured to: and determining an inference mode to be executed corresponding to the joint inference scene from a plurality of preset joint inference modes according to the inference frequency requirements of the joint inference scene.
In some possible implementations, the plurality of preset joint inference modes includes a joint inference mode based on model fusion; the inference execution module 620 includes: the first model fusion unit is used for receiving the reasoning data sent by other scene participants of the joint reasoning scene, wherein the reasoning data comprises model parameters of the scene participants; the second model fusion unit is used for calculating a fusion reasoning model according to the model parameters corresponding to the automatic driving vehicle and the model parameters of the scene participants; the third model fusion unit is used for acquiring self-vehicle reasoning features corresponding to the automatic driving vehicle, inputting the self-vehicle reasoning features into the fusion reasoning model and acquiring a self-vehicle reasoning result; and the fourth model fusion unit is used for acquiring a joint reasoning result at least according to the self-vehicle reasoning result.
In some possible implementations, the inference execution module 620 includes: the fifth model fusion unit is used for receiving the sample identification sent by the scene participant and inquiring and acquiring scene reasoning characteristics according to the sample identification; inputting scene reasoning features into a fusion reasoning model to obtain scene party reasoning results; the fourth model fusion unit is further configured to: and acquiring a joint reasoning result according to the self-vehicle reasoning result and the scene party reasoning result.
In some possible implementations, the fifth model fusion unit is further configured to: and inquiring and acquiring scene reasoning features from the database through the data reading interface according to the sample identification.
In some possible implementations, the step of obtaining the scene reasoning features according to the sample identification query and the step of inputting the scene reasoning features into the fusion reasoning model to obtain the scene party reasoning results are performed in an isolated environment.
In some possible implementations, the plurality of preset joint reasoning modes includes a joint reasoning mode based on a trusted execution environment; the inference execution module 620 includes: the first trusted environment unit is used for acquiring model parameters corresponding to the automatic driving vehicle, and sending the model parameters corresponding to the automatic driving vehicle as reasoning data to the trusted execution environment so that the trusted execution environment can calculate a fusion reasoning model according to the model parameters corresponding to the automatic driving vehicle; the second trusted environment unit is used for acquiring the self-vehicle reasoning features and sending the self-vehicle reasoning features to the trusted execution environment so that the trusted execution environment can acquire a combined reasoning result according to the self-vehicle reasoning features and the fusion reasoning model.
In some possible implementations, the inference execution module 620 further includes: the third trusted environment unit is used for sending the sample identification corresponding to the automatic driving vehicle to other scene participants of the joint reasoning scene so that the scene participants can acquire scene reasoning features according to the sample identification corresponding to the automatic driving vehicle and send the scene reasoning features to the trusted execution environment; and receiving the joint reasoning result sent by the trusted execution environment.
In some possible implementations, the plurality of preset joint inference modes includes a joint inference mode based on result merging; the inference execution module 620 includes: the first result merging unit is used for sending the sample identification corresponding to the automatic driving vehicle to other scene participants of the joint reasoning scene so that the scene participants can acquire scene reasoning features according to the sample identification corresponding to the automatic driving vehicle and acquire scene side reasoning results according to the scene reasoning features; the second result merging unit is used for acquiring the self-vehicle reasoning characteristics according to the sample identifications corresponding to the automatic driving vehicle and acquiring the self-vehicle reasoning results according to the self-vehicle reasoning characteristics; and the third result merging unit is used for receiving the scene party reasoning results sent by the scene party and acquiring a combined reasoning result according to the scene party reasoning results and the self-vehicle reasoning results.
In some possible implementations, the plurality of preset joint inference modes includes a joint inference mode based on result merging; the inference execution module 620 is further configured to: receiving a sample identifier sent by a scene participant of the joint reasoning scene, and inquiring according to the sample identifier to acquire scene reasoning characteristics; acquiring a scene party reasoning result according to scene reasoning characteristics; the scene party reasoning result is sent to the scene party so that the scene party can acquire a joint reasoning result according to the scene party reasoning result; and receiving a joint reasoning result sent by the scene reasoning party.
It will be appreciated that the above-described modules of the joint reasoning data processing apparatus in the embodiments of the present disclosure have the functionality to implement the corresponding steps of the joint reasoning data processing method in the embodiment shown in Fig. 1. The functions can be realized by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions described above. The modules may be software and/or hardware, and each module may be implemented separately or by integrating multiple modules. For the functional description of each module of the above joint reasoning data processing apparatus, reference may be made to the corresponding description of the joint reasoning data processing method in the embodiment shown in Fig. 1, which will not be repeated here.
In the technical solutions of the present disclosure, the acquisition, storage, application, and other processing of the user personal information involved all comply with the provisions of relevant laws and regulations, and do not violate public order and good customs.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium, a computer program product, and an autonomous vehicle.
The electronic device includes: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a joint inference data processing method as provided by embodiments of the present disclosure.
Compared with the prior art, the electronic equipment determines the mode of acquiring the reasoning data and processing the reasoning data according to the joint reasoning scene of the automatic driving vehicle, acquires the reasoning data according to the preset method corresponding to the mode, processes the reasoning data and acquires the joint reasoning result. Aiming at different joint reasoning scenes, the electronic equipment provided by the embodiment of the disclosure can use different joint reasoning modes to meet the requirements of different joint reasoning scenes and can meet the requirements of complex and changeable automatic driving scenes.
The readable storage medium is a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform a joint reasoning data processing method as provided by an embodiment of the present disclosure.
Compared with the prior art, the readable storage medium determines a mode for acquiring the reasoning data and processing the reasoning data according to the joint reasoning scene of the automatic driving vehicle, acquires the reasoning data according to a preset method corresponding to the mode, processes the reasoning data and acquires a joint reasoning result. Aiming at different joint reasoning scenes, the readable storage medium provided by the embodiment of the disclosure can meet the requirements of different joint reasoning scenes by using different joint reasoning modes, and can meet the requirements of complex and changeable automatic driving scenes.
The computer program product comprises a computer program which, when executed by a processor, implements a joint reasoning data processing method as provided by embodiments of the present disclosure.
Compared with the prior art, the computer program product determines a mode for acquiring the reasoning data and processing the reasoning data according to the joint reasoning scene of the automatic driving vehicle, acquires the reasoning data according to a preset method corresponding to the mode, processes the reasoning data and acquires a joint reasoning result. Aiming at different joint reasoning scenes, the computer program product provided by the embodiment of the disclosure can meet the requirements of different joint reasoning scenes by using different joint reasoning modes, and can meet the requirements of complex and changeable automatic driving scenes.
The automatic driving vehicle comprises the electronic equipment.
Compared with the prior art, the automatic driving vehicle determines a mode for acquiring the reasoning data and processing the reasoning data according to the joint reasoning scene of the automatic driving vehicle, acquires the reasoning data according to a preset method corresponding to the mode, processes the reasoning data and acquires a joint reasoning result. Aiming at different joint reasoning scenes, the automatic driving vehicle provided by the embodiment of the disclosure can meet the requirements of different joint reasoning scenes by using different joint reasoning modes, and can meet the requirements of complex and changeable automatic driving scenes.
Fig. 7 illustrates a schematic block diagram of an example electronic device 700 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the apparatus 700 includes a computing unit 701 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the device 700 may also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Various components in device 700 are connected to I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, etc.; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, an optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 701 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 701 performs the various methods and processes described above, such as a joint inference data processing method. For example, in some embodiments, the federated inference data processing method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 700 via ROM 702 and/or communication unit 709. When the computer program is loaded into the RAM 703 and executed by the computing unit 701, one or more steps of the above-described joint inference data processing method may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured to perform the joint inference data processing method by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described herein above may be implemented in digital electronic circuitry, integrated circuit systems, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: being implemented in one or more computer programs, where the one or more computer programs may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), and the Internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the disclosed aspects are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (15)

1. A method of joint reasoning data processing, comprising:
Acquiring a joint reasoning scene of an automatic driving vehicle, and determining a to-be-executed reasoning mode corresponding to the joint reasoning scene from a plurality of preset joint reasoning modes according to the joint reasoning scene;
and obtaining reasoning data according to the reasoning mode to be executed, and processing the reasoning data to obtain a combined reasoning result.
2. The method of claim 1, wherein the determining, from the plurality of preset joint inference patterns according to the joint inference scenario, an inference pattern to be performed corresponding to the joint inference scenario includes:
And determining an inference mode to be executed corresponding to the joint inference scene from a plurality of preset joint inference modes according to the inference frequency requirements of the joint inference scene.
3. The method of claim 1, wherein the plurality of preset joint inference modes includes a joint inference mode based on model fusion;
under the condition that the to-be-executed reasoning mode is a combined reasoning mode based on model fusion, the method for obtaining the reasoning data according to the to-be-executed reasoning mode, processing the reasoning data and obtaining a combined reasoning result comprises the following steps:
Receiving reasoning data sent by other scene participants of the joint reasoning scene, wherein the reasoning data comprises model parameters of the scene participants;
Calculating a fusion inference model according to the model parameters corresponding to the automatic driving vehicle and the model parameters of the scene participants;
Acquiring self-vehicle reasoning features corresponding to the automatic driving vehicle, inputting the self-vehicle reasoning features into the fusion reasoning model, and acquiring a self-vehicle reasoning result;
and acquiring a combined reasoning result at least according to the self-vehicle reasoning result.
4. The method according to claim 3, wherein after the calculating a fused inference model according to the model parameters corresponding to the autonomous vehicle and the model parameters of the plurality of scene participants, the method further comprises:
receiving a sample identifier sent by the scene participant, and inquiring and acquiring scene reasoning characteristics according to the sample identifier;
inputting the scene reasoning features into the fusion reasoning model to obtain scene party reasoning results;
The obtaining the combined reasoning result at least according to the self-vehicle reasoning result comprises the following steps:
And acquiring a joint reasoning result according to the self-vehicle reasoning result and the scene party reasoning result.
5. The method of claim 4, wherein the obtaining scene reasoning features from the sample identification query comprises:
And inquiring and acquiring the scene reasoning features from a database through a data reading interface according to the sample identification.
6. The method of claim 4, wherein the step of obtaining scene reasoning features from the sample identification query and the step of inputting the scene reasoning features into the fused reasoning model are performed in an isolated environment to obtain scene party reasoning results.
7. The method of claim 1, wherein the plurality of preset joint reasoning modes comprises a joint reasoning mode based on a trusted execution environment;
and in a case where the reasoning mode to be executed is the joint reasoning mode based on a trusted execution environment, the obtaining reasoning data according to the reasoning mode to be executed and processing the reasoning data to obtain a joint reasoning result comprises:
obtaining model parameters corresponding to the autonomous vehicle, and sending the model parameters corresponding to the autonomous vehicle as reasoning data to the trusted execution environment, so that the trusted execution environment calculates a fused reasoning model according to the model parameters corresponding to the autonomous vehicle;
and obtaining ego-vehicle reasoning features and sending the ego-vehicle reasoning features to the trusted execution environment, so that the trusted execution environment obtains the joint reasoning result according to the ego-vehicle reasoning features and the fused reasoning model.
8. The method of claim 7, wherein after the obtaining model parameters corresponding to the autonomous vehicle and sending the model parameters corresponding to the autonomous vehicle as reasoning data to the trusted execution environment, the method further comprises:
sending a sample identifier corresponding to the autonomous vehicle to other scene participants of the joint reasoning scene, so that the scene participants obtain scene reasoning features according to the sample identifier corresponding to the autonomous vehicle and send the scene reasoning features to the trusted execution environment;
and receiving the joint reasoning result sent by the trusted execution environment.
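The trusted-execution-environment mode of claims 7 and 8 might be exercised as in the sketch below. The TrustedExecutionEnvironment class is a local stand-in for a real enclave service; its interface, like the parameter-averaging logic, is an assumption made purely for illustration.

```python
from statistics import fmean


class TrustedExecutionEnvironment:
    """Collects parameters and features from every party, fuses them, and returns a joint result."""

    def __init__(self) -> None:
        self._params: list[list[float]] = []
        self._features: list[list[float]] = []

    def submit_parameters(self, params: list[float]) -> None:
        self._params.append(params)

    def submit_features(self, features: list[float]) -> None:
        self._features.append(features)

    def joint_inference(self) -> float:
        fused = [fmean(column) for column in zip(*self._params)]
        results = [sum(w * x for w, x in zip(fused, f)) for f in self._features]
        return fmean(results)


# The ego vehicle sends its model parameters and reasoning features; a scene
# participant, having received the ego vehicle's sample identifier, sends the
# matching scene features.
tee = TrustedExecutionEnvironment()
tee.submit_parameters([0.2, 0.5])   # from the autonomous vehicle
tee.submit_parameters([0.4, 0.3])   # from a scene participant
tee.submit_features([1.0, 2.0])     # ego-vehicle reasoning features
tee.submit_features([0.5, 1.5])     # scene features looked up by sample identifier
print(tee.joint_inference())
```

The trade-off relative to the model-fusion mode is that fusion and inference happen off-vehicle, so each inference costs a round trip to the enclave but no party ever sees another party's parameters or features in the clear.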
9. The method of claim 1, wherein the plurality of preset joint reasoning modes comprises a joint reasoning mode based on result merging;
and in a case where the reasoning mode to be executed is the joint reasoning mode based on result merging, the obtaining reasoning data according to the reasoning mode to be executed and processing the reasoning data to obtain a joint reasoning result comprises:
sending a sample identifier corresponding to the autonomous vehicle to other scene participants of the joint reasoning scene, so that the scene participants obtain scene reasoning features according to the sample identifier corresponding to the autonomous vehicle and obtain scene-participant reasoning results according to the scene reasoning features;
acquiring ego-vehicle reasoning features according to the sample identifier corresponding to the autonomous vehicle, and obtaining an ego-vehicle reasoning result according to the ego-vehicle reasoning features;
and receiving the scene-participant reasoning results sent by the scene participants, and obtaining the joint reasoning result according to the scene-participant reasoning results and the ego-vehicle reasoning result.
10. The method of claim 1, wherein the plurality of preset joint reasoning modes comprises a joint reasoning mode based on result merging;
and in a case where the reasoning mode to be executed is the joint reasoning mode based on result merging, the obtaining reasoning data according to the reasoning mode to be executed and processing the reasoning data to obtain a joint reasoning result comprises:
receiving a sample identifier sent by a scene participant of the joint reasoning scene, and querying according to the sample identifier to obtain scene reasoning features;
obtaining a scene-participant reasoning result according to the scene reasoning features;
sending the scene-participant reasoning result to the scene participant, so that the scene participant obtains the joint reasoning result according to the scene-participant reasoning result;
and receiving the joint reasoning result sent by the scene participant.
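Claims 9 and 10 describe the two sides of the result-merging mode: each party locates features by a shared sample identifier, reasons locally, and only the reasoning results are exchanged. A minimal sketch, assuming in-memory feature stores, toy linear models, and mean-based merging (none of which are specified in the claims), is given below.

```python
from statistics import fmean

# Stand-ins for each party's local feature store, keyed by sample identifier.
EGO_FEATURES = {"sample-001": [1.0, 2.0]}
SCENE_FEATURES = {"sample-001": [0.5, 1.5]}


def local_inference(params: list[float], features: list[float]) -> float:
    """Each party runs its own model on its own features."""
    return sum(w * x for w, x in zip(params, features))


def merge_results(results: list[float]) -> float:
    """Merge exchanged results into the joint reasoning result (simple mean here)."""
    return fmean(results)


# The ego vehicle sends the sample identifier to the scene participant; each side
# reasons locally, and the ego vehicle merges its result with the returned one.
sample_id = "sample-001"
ego_result = local_inference([0.2, 0.5], EGO_FEATURES[sample_id])
scene_result = local_inference([0.4, 0.3], SCENE_FEATURES[sample_id])  # returned over the network
print(merge_results([ego_result, scene_result]))
```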
11. An apparatus for joint reasoning data processing, comprising:
a reasoning mode module, configured to acquire a joint reasoning scene where an autonomous vehicle is located, and determine, from a plurality of preset joint reasoning modes according to the joint reasoning scene, a reasoning mode to be executed corresponding to the joint reasoning scene;
and a reasoning execution module, configured to obtain reasoning data according to the reasoning mode to be executed, and process the reasoning data to obtain a joint reasoning result.
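As a sketch of the two-module apparatus of claim 11, the modules could be wired as below. The class names, the string-keyed mode table, and the handler mapping are illustrative assumptions rather than the patent's implementation; in practice the handlers would be the mode-specific routines sketched after claims 2 through 10.

```python
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class InferenceModeModule:
    """Maps a joint reasoning scene to one of the preset joint reasoning modes."""
    mode_table: dict[str, str]  # e.g. {"high_frequency_scene": "model_fusion"}

    def determine_mode(self, scene: str) -> str:
        return self.mode_table[scene]


@dataclass
class InferenceExecutionModule:
    """Obtains reasoning data for the chosen mode and produces a joint reasoning result."""
    handlers: dict[str, Callable[[], Any]]  # one handler per joint reasoning mode

    def execute(self, mode: str) -> Any:
        return self.handlers[mode]()


# Usage with a placeholder handler standing in for one of the three modes.
mode_module = InferenceModeModule({"high_frequency_scene": "model_fusion"})
exec_module = InferenceExecutionModule({"model_fusion": lambda: 0.42})
mode = mode_module.determine_mode("high_frequency_scene")
print(exec_module.execute(mode))
```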
12. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-10.
13. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-10.
14. A computer program product comprising a computer program which, when executed by a processor, implements the method of any one of claims 1-10.
15. An autonomous vehicle comprising: the electronic device of claim 12.
CN202410310124.3A 2024-03-18 2024-03-18 Joint reasoning data processing method and device, electronic equipment, medium and vehicle Pending CN118014085A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410310124.3A CN118014085A (en) 2024-03-18 2024-03-18 Joint reasoning data processing method and device, electronic equipment, medium and vehicle

Publications (1)

Publication Number Publication Date
CN118014085A 2024-05-10

Family

ID=90944570

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410310124.3A Pending CN118014085A (en) 2024-03-18 2024-03-18 Joint reasoning data processing method and device, electronic equipment, medium and vehicle

Country Status (1)

Country Link
CN (1) CN118014085A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination