WO2023087248A1 - Information processing method and apparatus


Info

Publication number
WO2023087248A1
Authority
WO
WIPO (PCT)
Application number
PCT/CN2021/131761
Other languages
English (en)
French (fr)
Inventor
宋思达
马莎
周铮
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to CN202180103931.2A (CN118215612A)
Priority to PCT/CN2021/131761 (WO2023087248A1)
Publication of WO2023087248A1

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 40/00: Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit, e.g. by using mathematical models
    • B60W 40/02: Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit, e.g. by using mathematical models, related to ambient conditions

Definitions

  • the present application relates to the technical field of intelligent driving, and in particular to an information processing method and device.
  • ADAS: advanced driving assistant system
  • the functions to be realized by autonomous driving mainly include perception prediction, decision-making planning, and vehicle control.
  • the perception system is the "eye" of the autonomous vehicle, which is used to perceive the environment and obstacles.
  • the perception system determines whether the autonomous vehicle can drive safely.
  • the basis of such a perception system includes various sensors that can obtain environmental information around the vehicle.
  • the present application provides an information processing method and device for improving the performance of an intelligent driving system.
  • the present application provides an information processing method, which is applied to an information processing device, such as any sensor installed on a terminal. The method includes: acquiring first information, the first information including environment information of the terminal where the first device is located; receiving second information from a second device, the second information being used to indicate the region information and/or season information where the terminal is located; and outputting first perception information for the terminal according to the first information and the second information.
  • In this way, the terminal's perception can be improved. For example, when the terminal is a vehicle, this method can improve the performance of the vehicle's intelligent driving system.
  • acquiring the first information includes: detecting environment information of the terminal; or receiving the first information from the second device.
  • the region information and/or season information corresponds to a first algorithm
  • the first algorithm belongs to a predefined algorithm set.
  • Outputting the first sensing information for the terminal according to the first information and the second information includes: outputting the first sensing information for the terminal based on the first algorithm and the first information.
  • a plurality of algorithms corresponding to different region information and/or season information can be predefined, and the terminal's environment information is processed through the first algorithm corresponding to the region information and/or season information to output the first perception information. This can improve the applicability of the terminal's perception system to scenarios corresponding to different spatio-temporal information, thereby improving the performance of the intelligent driving system.
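The algorithm selection described above can be sketched as a lookup into a predefined algorithm set keyed by region and/or season. All names below are illustrative assumptions, not terminology from this application:

```python
# Sketch: a predefined algorithm set keyed by (region, season).
# Function and key names are hypothetical, for illustration only.

def detect_generic(env):
    # Fallback detector not tuned to any particular scenario.
    return {"objects": env.get("objects", []), "tuned_for": "generic"}

def detect_asia_winter(env):
    # E.g. an algorithm trained on snowy Asian road scenes.
    return {"objects": env.get("objects", []), "tuned_for": "asia/winter"}

ALGORITHM_SET = {
    ("asia", "winter"): detect_asia_winter,
    # ... one entry per supported (region, season) pair
}

def select_first_algorithm(region, season):
    """Pick the first algorithm matching the second information,
    falling back to a generic detector when no match exists."""
    return ALGORITHM_SET.get((region, season), detect_generic)

def output_perception(env_info, region, season):
    # Process the first information (environment) with the algorithm
    # selected by the second information (region/season).
    algo = select_first_algorithm(region, season)
    return algo(env_info)
```

A sensor receiving second information would call `output_perception` with its own environment data and the indicated region/season.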
  • the region information and/or season information is used to construct a first neural network model; outputting the first perception information for the terminal according to the first information and the second information includes: outputting the first perception information for the terminal according to the first neural network model.
  • an initial neural network model can be deployed, and by inputting region information and/or season information into the initial neural network model, it can be automatically adjusted into a first neural network model adapted to the scenario corresponding to different spatio-temporal information (including region information and/or season information), thereby reducing the overhead of training and deploying the perception system.
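One plausible way to realize such automatic adjustment is to feed the region/season indication to the model as an extra conditioning vector alongside the sensor features; the category lists and encoding below are hypothetical, for illustration only:

```python
# Sketch: condition a perception model on region/season information by
# concatenating a one-hot spatio-temporal code to the feature vector.
# The category lists are illustrative assumptions.

REGIONS = ["asia", "europe", "america"]
SEASONS = ["spring", "summer", "autumn", "winter"]

def spatiotemporal_code(region, season):
    """One-hot encode region and season into a single conditioning vector."""
    code = [0.0] * (len(REGIONS) + len(SEASONS))
    if region in REGIONS:
        code[REGIONS.index(region)] = 1.0
    if season in SEASONS:
        code[len(REGIONS) + SEASONS.index(season)] = 1.0
    return code

def model_input(sensor_features, region, season):
    # The "first neural network model" would consume this joint vector,
    # letting one deployed model adapt its output to the scenario.
    return list(sensor_features) + spatiotemporal_code(region, season)
```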
  • outputting first sensing information for the terminal according to the first information and the second information includes: sending the first sensing information to a fusion unit.
  • the perception information output by the first device and the perception information output by other sensors can be fused by the fusion unit, so that fused information with higher reliability can be obtained.
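As a toy illustration of such fusion, redundant range readings from several sensors can be combined by confidence weighting; the weighting scheme is an assumption and is not specified by this application:

```python
def fuse_ranges(readings):
    """Fuse redundant distance estimates from multiple sensors.

    readings: list of (distance_m, confidence) pairs, confidence in (0, 1].
    Returns a confidence-weighted average, which is more reliable than
    any single sensor when readings are redundant but noisy.
    """
    total_weight = sum(w for _, w in readings)
    if total_weight == 0:
        raise ValueError("no usable readings")
    return sum(d * w for d, w in readings) / total_weight
```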
  • the region information includes one or more of: continent names, country names, region names, provinces, or cities; and/or, the season information includes time system information, and the time system information includes one or more of seasons, festivals, solar terms, or time periods.
  • the first device includes any one of the following: a camera device; a laser radar; a sound pickup device; an ultrasonic sensor; or a millimeter wave radar.
  • the present application provides an information processing method, which is applied to an information processing device.
  • the information processing device may be, for example, a fusion unit in a terminal.
  • the method includes: receiving at least one piece of first information from at least one first device, the first information including environment information of the terminal where the first device is located; receiving second information from a second device, the second information being used to indicate the region information and/or season information where the terminal is located; and outputting first fusion information for the terminal according to the at least one piece of first information and the second information.
  • the region information and/or season information where the terminal is located is first received, and the first fusion information is then output according to the environment information of the terminal where the first device is located and the region information and/or season information where the terminal is located. For example, when the terminal is a vehicle, this method can improve the performance of the vehicle's intelligent driving system.
  • the regional information and/or season information corresponds to at least one first algorithm
  • the at least one first algorithm belongs to a predefined algorithm set.
  • Outputting the first fusion information for the terminal according to the at least one piece of first information and the second information includes: outputting the first fusion information for the terminal according to the at least one first algorithm and the at least one piece of first information.
  • a plurality of first algorithms corresponding to different region information and/or season information can be predefined, and the environment information of the terminal is processed through the first algorithm corresponding to the region information and/or season information to output the first fusion information. This can improve the applicability of the terminal's perception system to scenarios corresponding to different spatio-temporal information, thereby improving the performance of the intelligent driving system.
  • multiple algorithms in the predefined algorithm set correspond to different sensors, and/or, correspond to different regional information and/or seasonal information.
  • the region information and/or season information is used to construct a first neural network model; outputting the first fusion information for the terminal according to the at least one piece of first information and the second information includes: outputting the first fusion information for the terminal according to the first neural network model.
  • an initial neural network model can be deployed, and by inputting region information and/or season information into the initial neural network model, it can be automatically adjusted into a first neural network model adapted to the scenarios corresponding to different spatio-temporal information, which can reduce the overhead of training and deploying the perception system.
  • the at least one piece of first information comes from one or more of: at least one sensor, a user, a communication module, or a map module.
  • the region information includes one or more of: continent names, country names, region names, provinces, or cities; and/or, the season information includes time system information, and the time system information includes one or more of seasons, festivals, solar terms, or time periods.
  • the first device includes any one of the following: a camera device; a laser radar; a sound pickup device; an ultrasonic sensor; or a millimeter wave radar.
  • the present application provides an information processing method applied to an information processing device.
  • the information processing device may be, for example, an external module in a terminal or an external module installed on a terminal.
  • the method includes: determining second information, where the second information is used to indicate the region information and/or season information where the terminal is located; and sending the second information to the first device.
  • the first device can be any sensor on the terminal; providing the sensor with the region information and/or season information where the terminal is located allows the sensor to output the first perception information according to the environment information of the terminal where the first device is located and the region information and/or season information where the terminal is located.
  • alternatively, the first device may be a fusion unit on the terminal; providing the fusion unit with the region information and/or season information where the terminal is located allows the fusion unit to output the first fusion information according to the environment information of the terminal and the region information and/or season information where the terminal is located. This can improve the applicability of the terminal's perception system to scenarios corresponding to different spatio-temporal information, where different spatio-temporal information can be used to distinguish scenarios with a large spatial and/or temporal span. For example, when the terminal is a vehicle, the performance of the vehicle's intelligent driving system can be improved through this method.
  • determining the second information includes: determining the second information based on user input, where the user input is entered via a display screen or by voice; or determining the second information based on a push from an operator; or determining the second information based on information from the map module.
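The three sources named above suggest a simple fallback scheme; the priority order and function name here are assumptions for illustration:

```python
def determine_second_information(user_input=None, operator_push=None, map_module=None):
    """Determine region/season information ("second information") from the
    first available source: user input (screen or voice), a push from an
    operator, or the map module. The priority order is an illustrative
    assumption; the application does not mandate one."""
    for source in (user_input, operator_push, map_module):
        if source is not None:
            return source
    return None
```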
  • the present application further provides an information processing device configured to execute the methods in the foregoing aspects or any possible implementation manners of the aspects.
  • the information processing device may include modules/units for executing the methods in the above aspects or any possible implementation thereof; these modules/units may be implemented by hardware, or by hardware executing corresponding software.
  • the present application also provides an information processing device, the information processing device including at least one processor and a communication interface; the communication interface is used to receive signals from communication devices other than the information processing device and transmit them to the at least one processor, or to send signals from the at least one processor to communication devices other than the information processing device; the at least one processor is used to implement the methods in the foregoing aspects or any possible implementation manners of the aspects.
  • the present application further provides a terminal. The terminal includes a device for performing the method in the first aspect or any possible implementation of the first aspect and a device for performing the method in the third aspect or any possible implementation of the third aspect; or, the terminal includes a device for performing the method in the second aspect or any possible implementation of the second aspect and a device for performing the method in the third aspect or any possible implementation of the third aspect.
  • the present application also provides a computer-readable storage medium. The computer-readable storage medium stores a computer program, and when the computer program is executed, the operation steps of the method in the first aspect or any possible implementation of the first aspect, the second aspect or any possible implementation of the second aspect, or the third aspect or any possible implementation of the third aspect can be realized.
  • the present application also provides a computer program product. The computer program product includes a computer program or instructions, and when the computer program or instructions are executed by a communication device, the operation steps of the method in the first aspect or any possible implementation of the first aspect, the second aspect or any possible implementation of the second aspect, or the third aspect or any possible implementation of the third aspect are realized.
  • FIG. 1 is a schematic diagram of a vehicle perception system provided by an embodiment of the present application
  • FIG. 2 is a schematic diagram of a system provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a system provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a system provided by an embodiment of the present application.
  • FIG. 5 is a schematic flowchart of an information processing method provided in an embodiment of the present application.
  • FIG. 6 is a schematic flowchart of an information processing method provided in an embodiment of the present application.
  • FIG. 7 is a schematic diagram of an information processing device provided in an embodiment of the present application.
  • FIG. 8 is a schematic diagram of an information processing device provided in an embodiment of the present application.
  • FIG. 9 is a schematic diagram of an information processing device provided by an embodiment of the present application.
  • intelligent driving vehicles can perceive the surrounding environment of the vehicle through the perception system, which includes various sensors that can perceive the environment information around the vehicle.
  • what these sensors collect is the local environmental information around the vehicle.
  • although the traffic environment around the world is roughly the same, there are still regional differences in some aspects, especially aspects closely related to road traffic, for example:
  • traffic facilities, such as signs, markings, signal lights, etc.
  • traffic police such as clothing, gestures, etc.
  • special traffic vehicles such as police cars, ambulances, fire engines, etc.
  • the first way is to train a large and comprehensive algorithm applicable to different traffic environments around the world. However, this method requires a large amount of labeled data from all over the world for sample training, and the super-large network puts a lot of pressure on the on-board computing unit, so it is not realistic from the perspective of training and deployment costs. Another way is to train an intelligent driving vehicle operating in a certain area using image data of that area.
  • This method needs to pay attention to the area where the vehicle operates when deploying, because once the vehicle leaves the areas familiar from the training stage, that is, the designed scope of application, the performance of the algorithm will decrease, which affects the safety of intelligent driving.
  • This method limits intelligent driving vehicles to operating only in a specific area, but traffic scenes in many areas have different scene features; for example, Hong Kong in China, Zhuhai in Guangdong, Macau, mainland China, and European countries have different traffic environments.
  • Using the second method makes the perception system poorly applicable to scenes with a large spatial and/or temporal span. Therefore, a method is urgently needed to improve the performance of the intelligent driving system.
  • an embodiment of the present application provides an information processing method.
  • Any sensor installed on a terminal outputs first perception information for the terminal by combining the environment information of the terminal where the sensor is located with the region information and/or season information where the terminal is located;
  • the fusion unit on the terminal outputs the first fusion information for the terminal by combining the environment information of the terminal where the fusion unit is located with the region information and/or season information where the terminal is located, so as to improve the applicability of the terminal's perception system to scenarios corresponding to different spatio-temporal information. Different spatio-temporal information can be used to distinguish scenarios with large spatial and/or temporal spans. For example, when the terminal is a vehicle, this method makes it easier to meet the user's needs for driving in scenarios corresponding to different spatio-temporal information, and therefore the performance of the vehicle's intelligent driving system can be improved.
  • the industry has proposed grading standards for driving automation systems.
  • the driving automation grading standard proposed by SAE International (Society of Automotive Engineers) includes six levels, L0-L5. At the L0-L2 levels, the driver support system can provide some support functions for the driver, but regardless of whether the vehicle's driver support functions are turned on, the driver must drive the vehicle and supervise these support functions at all times, steering, braking, or accelerating as needed to ensure safety. The difference between the support functions of L0, L1, and L2 is that L0 is non-driving automation, whose support functions are limited to providing warnings and momentary assistance; the support functions at L1 provide the driver with steering or braking/acceleration support; and the support functions at L2 provide the driver with both steering and braking/acceleration support.
  • At the L3 level (semi-autonomous driving), the automatic driving system can not only complete certain driving tasks, but also monitor the driving environment under certain circumstances; however, the driver needs to be ready to regain driving control at any time, for example, the driver must take over driving when the system requests it.
  • At the L4 level (highly automated driving), the automatic driving system can complete driving tasks and monitor the driving environment under certain environments and specific conditions.
  • At the L5 level (fully automated driving), the automatic driving system can complete all driving tasks under all conditions.
  • the intelligent driving system can be divided into ADAS and the automatic driving system, where ADAS (the advanced driver assistance system) can be a driving automation system at the L0-L2 levels, and the automatic driving system can be a driving automation system reaching the L3-L5 levels.
  • the terminal in the embodiment of the present application may be, for example, a vehicle or other devices in the vehicle.
  • the other devices include but are not limited to: a vehicle-mounted terminal, vehicle-mounted controller, vehicle-mounted module, vehicle-mounted component, vehicle-mounted chip, vehicle-mounted unit, vehicle-mounted radar, or vehicle-mounted camera.
  • the above-mentioned terminal may also be other intelligent terminals other than the vehicle, or components provided in other intelligent terminals other than the vehicle.
  • the smart terminal may be a smart transportation device, a smart home device, a robot, and the like; the components include, but are not limited to, a controller, a chip, a radar, a camera or another sensor, and other components in the smart terminal.
  • the terminal is shown as a vehicle, and a camera (such as the camera in FIG. 1), a laser radar, millimeter-wave radars (such as the long-range millimeter-wave radar and medium/short-range millimeter-wave radar in FIG. 1), ultrasonic sensors, etc. are installed on the vehicle to obtain environment information around the vehicle through the sensors. The acquired environment information is analyzed and processed to implement functions such as obstacle perception, target recognition, vehicle positioning, path planning, and driver monitoring/reminder, so as to improve the safety, automation, and comfort of vehicle driving.
  • the environment information acquired by the sensors mainly concerns things around the terminal, such as traffic light information, traffic sign information, traffic police, obstacles, etc., which will not be described in detail below.
  • LiDAR is the abbreviation of light detection and ranging, a system that mainly consists of a transmitter, a receiver, and a signal processing unit. The transmitter is the laser emitting mechanism in the LiDAR; after the emitted laser irradiates the target object, it is reflected by the target object, and the reflected light converges on the receiver through a lens group.
  • the signal processing unit is responsible for controlling the emission of the transmitter, processing the signal received by the receiver, and calculating information such as the position, speed, distance, and/or size of the target object.
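The distance computation reduces to a time-of-flight calculation: the pulse travels to the target and back at the speed of light, so the one-way distance is half the round-trip path. A minimal sketch:

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s, exact by definition

def lidar_distance(round_trip_s):
    """Target distance from laser time of flight.

    round_trip_s: time between pulse emission and echo reception, in seconds.
    The pulse travels to the target and back, so divide the round-trip
    path by two to get the one-way distance in meters.
    """
    return SPEED_OF_LIGHT * round_trip_s / 2.0
```

For example, a 1 microsecond round trip corresponds to a target roughly 150 m away.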
  • Millimeter-wave radar uses millimeter-wave as the detection medium, which can measure the distance, angle and relative speed between the millimeter-wave radar and the measured object.
  • Millimeter-wave radar can be divided into long-range radar (LRR), mid-range radar (MRR) and short-range radar (SRR) according to its detection distance.
  • the main application scenarios for LRR include active cruise and brake assist, etc.
  • LRR does not have high requirements for the angular width of detection; reflected in the antenna design, the requirement on the antenna's 3 dB beamwidth is relatively low.
  • the main application scenarios for MRR/SRR include automatic parking, lane merging assistance, and blind spot detection, etc.
  • MRR/SRR has high requirements for the angular width of detection; correspondingly, the requirement on the antenna's 3 dB beamwidth is high, and antennas with low sidelobe levels are required.
  • the beam width is used to ensure the detectable angular range, and the low sidelobe is used to reduce the clutter energy reflected by the ground, reduce the probability of false alarms, and ensure driving safety.
  • LRR can be installed in front of the vehicle body, and MRR/SRR can be installed in the four corners of the vehicle. Together, they can achieve 360-degree coverage around the vehicle body.
  • the millimeter-wave radar may include a housing with at least one printed circuit board (PCB) built in, for example a power supply PCB and a radar PCB. The power supply PCB provides the radar's internal voltage, as well as an interface and safety functions for communicating with other devices; the radar PCB provides transmission, reception, and processing of millimeter-wave signals, and integrates components for millimeter-wave signal processing and antennas for millimeter-wave signal transmission and reception (a transmitting antenna Tx and a receiving antenna Rx).
  • the antenna can be formed on the back of the radar PCB in the form of a microstrip array for transmitting and receiving millimeter waves.
  • An ultrasonic sensor, also known as ultrasonic radar, is a sensing device that uses ultrasonic detection. Its working principle is to emit ultrasonic waves through an ultrasonic transmitting device and receive the ultrasonic waves reflected by obstacles through a receiving device, measuring the distance according to the time difference between emission and reception. At present, the distance measured by an ultrasonic sensor can be used to indicate the distance from the car body to obstacles, assist parking, or reduce unnecessary collisions. It should be understood that the above-mentioned sensors are only examples of sensors that may be configured on the vehicle in the embodiments of the present application, without any limitation; in other embodiments, the sensors may include but are not limited to the above examples.
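The time-difference principle described above amounts to the following sketch; the temperature-corrected speed of sound is a standard approximation, not taken from this application:

```python
def speed_of_sound(air_temp_c):
    """Approximate speed of sound in air (m/s) at a given temperature (C).
    Standard linear approximation: 331.3 m/s at 0 C, +0.606 m/s per degree."""
    return 331.3 + 0.606 * air_temp_c

def ultrasonic_distance(echo_delay_s, air_temp_c=20.0):
    """Distance to an obstacle from the ultrasonic echo delay.
    The wave travels out and back, so halve the round-trip path."""
    return speed_of_sound(air_temp_c) * echo_delay_s / 2.0
```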
  • the above terminal can also be installed with a fusion unit and an external module.
  • the external module can be a module inside the terminal capable of interacting with the sensor and/or the fusion unit, such as an electronic control unit, a domain control unit, a multi-domain controller, a cockpit domain controller, etc., and the external module can also be other equipment.
  • An embodiment of the present application also provides an information processing system, which includes at least one sensor, a fusion unit, and an external module.
  • As an example, N sensors are included, where N is greater than 2, but the number of sensors is not limited.
  • The N sensors are connected to the external module through a first logical interface layer, wherein the first logical interface layer includes a plurality of first logical interfaces, and the external module is connected to each sensor through a different first logical interface.
  • each first logical interface can provide one or more of the following information:
  • the operating state of the sensor is used to define the working mode of the sensor, such as initialization mode, normal working mode or calibration mode.
  • the environmental information may include the following: weather conditions, ambient light, air temperature, air pressure, relative humidity, road characteristics (e.g., highway, urban area, etc.), and road surface conditions (temperature, water accumulation, icing, roughness).
  • Vehicle state, which is used to describe the dynamic state of the self-vehicle.
  • the dynamic state can be understood as the state of the self-vehicle in motion.
  • the dynamic state can be described by using kinematic parameters, and the kinematic parameters include but not limited to speed, acceleration, jerk, orientation, steering, and so on.
  • the operation mode of the logical interface is, for example, a working mode (such as transmitting data) or an idle mode;
  • the additional parameters can be supplementary information about the data transmitted through the logical interface; for example, if the transmitted data is image data, an additional parameter can be the resolution of the image data.
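The information items listed above could be grouped into a single interface message; the structure and field names below are illustrative assumptions, not the application's interface definition:

```python
from dataclasses import dataclass, field

@dataclass
class SensorInterfaceMessage:
    """Illustrative grouping of the items a first logical interface may carry."""
    operating_state: str = "normal"  # e.g. "initialization", "normal", "calibration"
    environment: dict = field(default_factory=dict)    # weather, ambient light, road surface...
    vehicle_state: dict = field(default_factory=dict)  # speed, acceleration, jerk, orientation...
    interface_mode: str = "working"  # "working" (transmitting data) or "idle"
    additional_params: dict = field(default_factory=dict)  # e.g. image resolution
```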
  • the N sensors are connected to the fusion unit through the second logical interface layer.
  • the second logic interface layer includes a plurality of second logic interfaces, and the fusion unit is connected to each sensor through different second logic interfaces.
  • the second logical interface may provide information at the object level (e.g., potentially moving objects, road objects, static objects), characteristics and detection levels based on sensor-technology-specific information, and further supporting information as available.
  • the external module can send the region information and/or season information where the vehicle is located to each of the N sensors through the first logical interface layer, and each sensor combines the environment information it perceives with the region information and/or season information where the vehicle is located, and outputs the first perception information to the fusion unit.
  • the surrounding environmental information of the terminal that can be obtained by sensors of different types, functions, and installation locations may be redundant, complementary or even contradictory.
  • the fusion unit can perform fusion processing on the first perception information of multiple sensors, so as to obtain a highly reliable perception fusion result.
  • the fusion unit is connected to the external module through a third logical interface, and the fusion unit is connected to the N sensors through the second logical interface layer.
  • the external module can send the region information and/or season information where the vehicle is located to the fusion unit through the third logical interface, and the N sensors can send the perceived environment information around the vehicle to the fusion unit through the second logical interface layer; the fusion unit then combines the region information and/or season information with the environment information of the vehicle and outputs the first fusion information for the vehicle.
  • the external module and the fusion unit are connected through the third logical interface, the external module and the N sensors are connected through the first logical interface layer, and the fusion unit is connected to the N sensors through the second logical interface layer. The external module can send the region information and/or season information where the vehicle is located to the fusion unit through the third logical interface, and at the same time send the region information and/or season information to the N sensors respectively through the first logical interface layer.
  • the external modules in the above three information processing systems may include, but not limited to, electronic control units, domain control units, multi-domain controllers, cockpit domain controllers, or vehicle-mounted terminals.
• Fig. 5 shows a schematic flowchart of an information processing method according to an embodiment of the present application. This method applies to the system shown in Figure 2. As shown in Figure 5, the method may include the following steps:
• the second device determines second information, where the second information is used to indicate region information of the terminal (information about the region where the terminal is located) and/or season information (information about the season in which the terminal is currently located).
• the region information and/or season information may also be referred to as spatio-temporal information, and different spatio-temporal information may be used to distinguish scenes with large spatial and/or temporal spans, which will not be repeated below.
  • the second device may be an external module in FIG. 2 , for example, the second device is an electronic control unit, a domain control unit, a multi-domain controller, a cockpit domain controller, or a vehicle terminal.
• the regional information may include any of the following items: one or more of a continent name, country name, region name, province, or city, where a continent name is, for example, Asia or Europe.
• the season information includes time system information, and the time system information includes one or more of seasons, festivals, solar terms, or time periods. A season can be spring, summer, autumn, or winter.
• the time system information adopted by different countries may differ.
• for example, the time period can be a Chinese solar term such as the winter solstice or the autumnal equinox; as another example, the time period can be United States daylight saving time or winter (standard) time, where daylight saving time runs from the second Sunday in March to the first Sunday in November, and winter time runs from the first Sunday in November to the second Sunday in March of the following year.
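The US daylight-saving boundaries above ("second Sunday in March, first Sunday in November") can be computed mechanically. The sketch below is illustrative only; the helper names `nth_weekday` and `us_dst_window` are ours and not defined anywhere in the application.

```python
import calendar
from datetime import date

def nth_weekday(year: int, month: int, weekday: int, n: int) -> date:
    """Date of the n-th given weekday (0=Monday .. 6=Sunday) in a month."""
    first_weekday, _ = calendar.monthrange(year, month)  # weekday of the 1st
    offset = (weekday - first_weekday) % 7               # days to first such weekday
    return date(year, month, 1 + offset + 7 * (n - 1))

def us_dst_window(year: int) -> tuple[date, date]:
    """US daylight saving time: second Sunday in March to first Sunday in November."""
    start = nth_weekday(year, 3, calendar.SUNDAY, 2)
    end = nth_weekday(year, 11, calendar.SUNDAY, 1)
    return start, end
```

For 2021 this yields March 14 and November 7; a terminal could compare the current date against this window to decide which time system information applies.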
  • the second device in the above S501 can determine the second information in various ways:
  • the second device may determine the second information based on user input, where the user input is based on a display screen or voice input.
  • the second device may determine the second information based on push from an operator.
  • the second device may determine the second information based on information from the map module.
  • the second device sends second information to the first device.
  • the first device receives the second information from the second device.
• the first device can be any one of the sensors in FIG. 2; for example, the first device is a camera device, a lidar, a sound pickup, an ultrasonic sensor, or a millimeter-wave radar, where the sound pickup can be any device capable of picking up sound, such as a microphone.
  • the first device acquires first information, where the first information includes environment information of a terminal where the first device is located.
  • the first device may detect environment information of the terminal.
  • the first device is a millimeter-wave radar that can detect road information.
  • the first device is a camera device, which can collect image data of the surrounding environment.
  • the first device may receive the first information from the second device.
  • the second device can be an external module.
  • the external module is a large screen, and the first device can receive input from the large screen.
  • the external module is a microphone, and the first device can receive voice input from the microphone.
  • the first device outputs first sensing information for the terminal according to the first information and the second information.
• the first device outputs the first perception information for the terminal according to the first information and the second information; there are several possible implementations.
• the regional information and/or season information corresponds to a first algorithm, and the first algorithm belongs to a predefined algorithm set; the first device may output the first perception information for the terminal based on the first algorithm and the first information.
• the plurality of first algorithms included in the predefined algorithm set may be obtained by training on data sets for different regional information and/or seasonal information; these data sets carry the unique scene characteristics of the corresponding regional information and/or seasonal information, and the trained algorithms are deployed in the terminal as required.
• the first algorithm may include, but is not limited to, any one or more of the following: a special vehicle recognition algorithm, a special vehicle siren recognition algorithm, a police officer and gesture recognition algorithm, a road sign and lane line recognition algorithm, a traffic signal light recognition algorithm, and the like; when the first algorithm includes multiple algorithms, the first algorithm can also be understood as an algorithm package.
  • the predefined algorithm set may include multiple first algorithms respectively corresponding to the preset regional information.
• the first device may select, from the predefined algorithm set, the first algorithm corresponding to the regional information received from the second device, process the first information through the first algorithm, and output the first perception information. For example, if the terminal is a vehicle, this method can improve the applicability of the terminal's perception system to scenes with large spatial spans, thereby improving the performance of the intelligent driving system.
  • the predefined algorithm set may include multiple preset first algorithms respectively corresponding to the season information.
• the first device may select, from the predefined algorithm set, the first algorithm corresponding to the season information received from the second device, process the first information through the first algorithm, and output the first perception information. For example, if the terminal is a vehicle, this method can improve the applicability of the terminal's perception system to scenes with large temporal spans, thereby improving the performance of the intelligent driving system.
• the predefined algorithm set may include a plurality of first algorithms respectively corresponding to preset combinations of region information and season information, where each combination of preset region information and preset season information corresponds to one first algorithm.
• the first device can select, from the predefined algorithm set, the first algorithm corresponding to the regional information and season information received from the second device, process the first information through the first algorithm, and output the first perception information for the terminal. For example, if the terminal is a vehicle, this method can improve the applicability of the terminal's perception system to scenes with large spatial and temporal spans, thereby improving the performance of the intelligent driving system.
• a plurality of first algorithms corresponding to different regional information and/or season information can be predefined; processing the environment information of the terminal through the first algorithm corresponding to the regional information and/or season information and outputting the first perception information can improve the applicability of the terminal's perception system to scenes corresponding to different spatio-temporal information, and this method can improve the performance of the intelligent driving system.
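As a hedged illustration of the predefined algorithm set described above, the sketch below indexes hypothetical algorithms by a (region, season) key and falls back to a generic algorithm. The region codes, function names, and wildcard fallback are all our assumptions, not details fixed by the application.

```python
from typing import Callable, Optional

# A "first algorithm" here is just a callable over raw environment info.
Algorithm = Callable[[dict], dict]

def detect_generic(env: dict) -> dict:
    return {"objects": env.get("objects", []), "profile": "generic"}

def detect_winter_china(env: dict) -> dict:
    # e.g. tuned for snow-covered lane lines and local road signs (hypothetical)
    return {"objects": env.get("objects", []), "profile": "china-winter"}

# Predefined algorithm set, indexed by (region, season); None acts as a wildcard.
ALGORITHM_SET: dict[tuple[Optional[str], Optional[str]], Algorithm] = {
    ("CN", "winter"): detect_winter_china,
    (None, None): detect_generic,
}

def select_first_algorithm(region: Optional[str], season: Optional[str]) -> Algorithm:
    """Pick the most specific algorithm for the received second information."""
    for key in [(region, season), (region, None), (None, season), (None, None)]:
        if key in ALGORITHM_SET:
            return ALGORITHM_SET[key]
    return detect_generic
```

The first device would call `select_first_algorithm` with the received second information, then run the returned algorithm over the first information to produce the first perception information.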
  • the regional information and/or the season information are used to construct the first neural network model; the first device may output the first perception information for the terminal according to the first neural network model and the first information.
• take the case where the second information includes regional information as an example, that is, the regional information is used to construct a first neural network model corresponding to the regional information.
  • the first device may input regional information into an initial model of a neural network, and the initial model is automatically adjusted to become a first neural network model adapted to the regional information. Then, the first information is used as an input of the first neural network model, and then the first perception information for the terminal is output.
  • This method can improve the applicability of the terminal's perception system to scenes with large spans in different spaces, thereby improving the performance of the intelligent driving system.
  • the season information is used to construct the first neural network model corresponding to the season information.
  • the first device may input the season information into an initial model of the neural network, and the initial model is automatically adjusted to become the first neural network model adapted to the season information. Then, the first information is used as an input of the first neural network model, and then the first perception information for the terminal is output.
• the second information includes region information and season information, and the region information and season information are used to construct a first neural network model corresponding to the region information and season information.
  • the first device may input regional information and seasonal information into an initial model of a neural network, and the initial model is automatically adjusted to become a first neural network model adapted to the regional information and seasonal information. Then, the first information is used as the input of the first neural network model, and then the first perception information for the terminal is output.
• an initial neural network model can be deployed in the terminal; by inputting regional information and/or seasonal information into the initial neural network model, it can be automatically adjusted into a first neural network model adapted to different spatio-temporal information, which can reduce the training and deployment overhead of the perception system and improve the performance of the intelligent driving system.
• the first device first receives the region information and/or season information of the terminal, and then outputs the first perception information according to the environment information of the terminal where the first device is located and the region information and/or season information, so the applicability of the terminal's perception system to the scenes corresponding to different spatio-temporal information can be improved, where different spatio-temporal information can be used to distinguish scenes with large spatial and/or temporal spans. For example, if the terminal is a vehicle, this method can improve the performance of the vehicle's intelligent driving system.
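The "initial model automatically adjusted by region/season input" idea above could be realized in many ways; one speculative sketch is feature-wise modulation, where a region/season code scales the hidden features of a fixed network. All names, dimensions, embedding codes, and the modulation scheme below are our assumptions, not anything specified by the application.

```python
import math
import random

random.seed(0)  # deterministic toy weights

# Hypothetical one-hot codes for region and season.
REGION_CODE = {"CN": [1.0, 0.0], "US": [0.0, 1.0]}
SEASON_CODE = {"winter": [1.0, 0.0], "summer": [0.0, 1.0]}

class ConditionedPerceptionModel:
    """Initial model whose hidden features are scaled by a spatio-temporal code."""

    def __init__(self, in_dim: int, hidden: int):
        self.w1 = [[random.gauss(0, 1) for _ in range(hidden)] for _ in range(in_dim)]
        self.film = [[random.gauss(0, 1) for _ in range(hidden)] for _ in range(4)]
        self.w2 = [random.gauss(0, 1) for _ in range(hidden)]
        self.scale = [1.0] * hidden  # neutral until adjusted

    def adjust(self, region: str, season: str) -> None:
        """'Automatically adjust' the model to the received second information."""
        code = REGION_CODE[region] + SEASON_CODE[season]   # 4-dim spatio-temporal code
        self.scale = [1.0 + sum(c * w for c, w in zip(code, col))
                      for col in zip(*self.film)]          # feature-wise modulation

    def forward(self, x: list[float]) -> float:
        h = [math.tanh(sum(xi * wij for xi, wij in zip(x, col))) * s
             for col, s in zip(zip(*self.w1), self.scale)]
        return sum(hi * wi for hi, wi in zip(h, self.w2))
```

The same deployed weights then behave differently for different region/season inputs, which is the stated benefit: one initial model instead of one trained model per scene.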
• outputting the first perception information for the terminal in S504 above may include: sending the first perception information to the fusion unit, so that the fusion unit can fuse the first perception information output by the first device with the perception information output by other sensors to obtain fused information.
  • the information processing method may further include S505 and S506.
  • the first device sends the first sensing information to the third device.
  • the third device receives the first sensing information sent from the first device.
  • the third device is, for example, a fusion unit.
  • the third device performs fusion processing on the first sensing information output by the first device and sensing information output by other sensors to obtain fusion information.
• the fusion unit may also receive the regional information and/or season information from the second device, and may further perform fusion processing on the first perception information sent by each sensor based on the received regional information and/or season information.
• the perception information output by the first device and the perception information output by other sensors can be fused by the fusion unit, so that fusion information with higher reliability can be obtained.
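As a minimal sketch of this fusion step (redundant, complementary, or even contradictory sensor reports resolved into one result), the code below does weighted label voting across sensors. The record format `(object_id, label, confidence)` and the per-sensor weights are hypothetical; the application does not fix a fusion algorithm.

```python
from collections import defaultdict

def fuse(perceptions: dict[str, list[tuple[str, str, float]]],
         weights: dict[str, float]) -> dict[str, str]:
    """Resolve redundant or contradictory sensor reports by weighted voting.

    perceptions maps sensor name -> list of (object_id, label, confidence);
    weights maps sensor name -> trust weight (default 1.0).
    """
    votes: dict[str, dict[str, float]] = defaultdict(lambda: defaultdict(float))
    for sensor, records in perceptions.items():
        w = weights.get(sensor, 1.0)
        for obj_id, label, conf in records:
            votes[obj_id][label] += w * conf
    # Keep the highest-scoring label per object as the fused result.
    return {obj_id: max(labels, key=labels.get) for obj_id, labels in votes.items()}
```

Here two sensors agreeing on "pedestrian" would outvote a single contradictory "pole" report, which is the sense in which fusion raises reliability.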
• Fig. 6 shows a schematic flowchart of an information processing method according to an embodiment of the present application. This method applies to the system shown in Figure 3. As shown in Figure 6, the method may include the following steps:
• the second device determines second information, where the second information is used to indicate region information of the terminal (information about the region where the terminal is located) and/or season information (information about the season in which the terminal is currently located).
• the region information and/or season information may also be referred to as spatio-temporal information, and different spatio-temporal information may be used to distinguish scenes with large spatial and/or temporal spans, which will not be repeated below.
  • the second device may be an external module in FIG. 3 , for example, the second device is an electronic control unit, a domain control unit, a multi-domain controller, a cockpit domain controller, or a vehicle terminal.
• for the specific implementation of the region information or the season information, please refer to the related description of the region information or the season information in S401, which will not be repeated here.
  • the second device in S601 above may have multiple implementations for determining the second information:
  • the second device may determine the second information based on user input, where the user input is based on a display screen or voice input.
  • the second device may determine the second information based on push from an operator.
  • the second device may determine the second information based on information from the map module.
  • the second device sends second information to the third device.
  • the third device receives the second information from the second device.
  • the third device may be the fusion unit in FIG. 3 .
  • the third device receives at least one piece of first information from at least one first device, where the first information includes environment information of a terminal where the first device is located.
• the first device can be any one of the sensors in Figure 3; for example, the first device is a camera device, a lidar, a sound pickup, an ultrasonic sensor, a millimeter-wave radar, an inertial navigation system, or a GNSS, where the sound pickup can be any device capable of picking up sound, such as a microphone; the first device can also be a large screen, a communication module, or a map module.
• the at least one piece of first information in S603 includes information from one or more of at least one sensor, a user, a communication module (such as a V2X module that can directly or indirectly exchange information with the third device), or a map module.
  • the third device outputs first fusion information for the terminal according to at least one piece of first information and second information.
• the third device outputs the first fusion information for the terminal according to the at least one piece of first information and the second information; there are several possible implementations.
• the regional information and/or season information corresponds to at least one first algorithm, and the at least one first algorithm belongs to a predefined algorithm set; the third device may output the first fusion information for the terminal based on the at least one first algorithm and the at least one piece of first information.
• a plurality of first algorithms respectively corresponding to different regional information and/or season information can be predefined; processing the environment information of the terminal through the first algorithm corresponding to the regional information and/or season information and outputting the first fusion information can improve the applicability of the terminal's perception system to scenes corresponding to different spatio-temporal information, where different spatio-temporal information can be used to distinguish scenes with large spatial and/or temporal spans, thereby improving the performance of the intelligent driving system.
  • multiple algorithms included in the predefined algorithm set correspond to different sensors, and/or, multiple algorithms correspond to different regional information and/or seasonal information.
• that is, the multiple algorithms included in the predefined algorithm set correspond to different sensors; or the multiple algorithms included in the predefined algorithm set correspond to different regional information and/or seasonal information; or the multiple algorithms included in the predefined algorithm set correspond to both different sensors and different regional information and/or seasonal information.
• the plurality of first algorithms included in the predefined algorithm set may be obtained by training on data sets for different regional information and/or seasonal information; these data sets carry the unique scene characteristics of the corresponding regional information and/or seasonal information, and the trained algorithms are deployed in the terminal as required.
• the first algorithm may include, but is not limited to, any one or more of the following: a special vehicle recognition algorithm, a special vehicle siren recognition algorithm, a police officer and gesture recognition algorithm, a road sign and marking recognition algorithm, a traffic signal light recognition algorithm, and the like; when the first algorithm includes multiple algorithms, the first algorithm can also be understood as an algorithm package.
• take the case where the second information includes regional information and the regional information corresponds to one first algorithm as an example, that is, this single first algorithm can process the first information from different sensors.
• the third device may select, from the predefined algorithm set, the first algorithm corresponding to the regional information received from the second device, process the at least one piece of first information from each sensor through the first algorithm, and output the first fusion information for the terminal.
  • the second information includes season information, and the season information corresponds to a first algorithm.
  • the second information includes region information and season information, and the region information and season information correspond to a first algorithm.
• take the case where the second information includes regional information and the regional information corresponds to a plurality of first algorithms as an example, where the plurality of first algorithms respectively process the first information from different sensors: for example, first algorithm 1 is used to process the first information from the camera device, first algorithm 2 is used to process the first information from the lidar, and first algorithm 3 is used to process the first information from the millimeter-wave radar.
• the third device may select, from the predefined algorithm set, the plurality of first algorithms corresponding to the region information received from the second device, process the at least one piece of first information from each sensor through the corresponding first algorithms, and output the first fusion information for the terminal.
• the second information includes season information, and the season information corresponds to multiple first algorithms.
  • the second information includes region information and season information, and the region information and season information correspond to a plurality of first algorithms.
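The case above, where one piece of region information maps to several per-sensor first algorithms, can be sketched as a lookup keyed by (region, sensor). The registry layout, region code, and function names are assumptions for illustration only.

```python
# Hypothetical per-sensor algorithms for one region ("CN" is an assumed code).
def camera_algo_cn(frame):
    return {"sensor": "camera", "region": "CN", "data": frame}

def lidar_algo_cn(cloud):
    return {"sensor": "lidar", "region": "CN", "data": cloud}

def radar_algo_cn(echo):
    return {"sensor": "radar", "region": "CN", "data": echo}

# Registry: (region, sensor) -> the first algorithm for that sensor.
PER_SENSOR_ALGORITHMS = {
    ("CN", "camera"): camera_algo_cn,
    ("CN", "lidar"): lidar_algo_cn,
    ("CN", "radar"): radar_algo_cn,
}

def fuse_with_region(region: str, first_info: dict[str, object]) -> list[dict]:
    """Route each sensor's first information through its region-specific algorithm."""
    results = []
    for sensor, payload in first_info.items():
        algo = PER_SENSOR_ALGORITHMS[(region, sensor)]
        results.append(algo(payload))
    return results  # intermediate results that the fusion step would then combine
```

The third device would then fuse the per-sensor outputs into the first fusion information for the terminal.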
  • the regional information and/or season information are used to construct the first neural network model; the third device may output the first fusion information for the terminal according to the first neural network model and the first information.
• an initial neural network model is deployed; by inputting regional information and/or seasonal information into the initial neural network model, it can be automatically adjusted into a first neural network model adapted to the scenes corresponding to different spatio-temporal information, which can improve the performance of the intelligent driving system and reduce the training and deployment overhead of the perception system.
• take the case where the second information includes regional information as an example, that is, the regional information is used to construct a first neural network model corresponding to the regional information.
• the first device may input the regional information into an initial model of a neural network, and the initial model is automatically adjusted to become a first neural network model adapted to the regional information. Then, the first information is used as an input of the first neural network model, and the first fusion information for the terminal is output.
• the second information includes season information, and the season information is used to construct a first neural network model corresponding to the season information.
• for a specific implementation, refer to the foregoing case where the second information includes regional information.
  • the second information includes regional information and seasonal information, and the regional information and seasonal information are used to construct a first neural network model corresponding to the regional information and seasonal information.
• take the case where the second information includes regional information as an example, that is, the regional information is used to construct multiple first neural network models corresponding to the regional information, where different first neural network models correspond to different sensors.
• the first device can input the regional information into the initial models of multiple neural networks corresponding to different sensors. Taking a terminal equipped with a camera device, a lidar, and a millimeter-wave radar as an example: the regional information is input into the initial model corresponding to the camera device, which is automatically adjusted to become the first neural network model adapted to the regional information and the camera device; the regional information is input into the initial model corresponding to the lidar, which is automatically adjusted to become the first neural network model adapted to the regional information and the lidar; and the regional information is input into the initial model corresponding to the millimeter-wave radar, which is automatically adjusted to become the first neural network model adapted to the regional information and the millimeter-wave radar.
• the second information includes season information, and the season information is used to construct a plurality of first neural network models corresponding to the season information.
• for a specific implementation, refer to the foregoing case where the second information includes regional information.
  • the second information includes regional information and seasonal information, and the regional information and seasonal information are used to construct a plurality of first neural network models corresponding to the regional information and seasonal information.
• the first device (such as a sensor) may receive the region information and/or season information from the second device, process the first information based on the region information and/or season information, and then send the processed first information to the third device (for example, the fusion unit).
• the third device (for example, the fusion unit) first receives the region information and/or season information of the terminal, and then outputs the first fusion information by combining the environment information of the terminal with the region information and/or season information, so the applicability of the intelligent driving system to scenes corresponding to different spatio-temporal information can be improved, thereby improving the performance of the intelligent driving system.
  • the embodiments of the present application also provide an information processing device, which is used to execute the steps performed by the first device in the above method embodiments.
  • the information processing apparatus 700 may include an acquisition unit 701 , a communication unit 702 and a processing unit 703 .
  • An acquiring unit 701 configured to acquire first information, where the first information includes environment information of the terminal where the first device is located;
  • a communication unit 702 configured to receive second information from a second device, where the second information is used to indicate the region information and/or season information where the terminal is located;
  • the processing unit 703 is configured to output first sensing information for the terminal according to the first information and the second information.
  • the obtaining unit 701 is specifically configured to: detect the environment information of the terminal; or receive the first information from the second device.
• the regional information and/or season information corresponds to a first algorithm, and the first algorithm belongs to a predefined algorithm set; the processing unit 703 is specifically configured to output the first perception information for the terminal based on the first algorithm and the first information.
• the regional information and/or the season information are used to construct the first neural network model; the processing unit 703 is specifically configured to output the first perception information for the terminal according to the first neural network model and the first information.
  • the communication unit 702 is specifically configured to: send the first perception information to the fusion unit.
  • the embodiment of the present application also provides an information processing device, configured to execute the steps performed by the second device or the third device in the above method embodiment.
  • the information processing apparatus 800 may include a communication unit 801 and a processing unit 802 .
• the communication unit 801 is configured to receive at least one piece of first information from at least one first device, where the first information includes the environment information of the terminal, and to receive second information from the second device, where the second information is used to indicate the region information and/or season information of the terminal; the processing unit 802 is configured to output the first fusion information for the terminal according to the at least one piece of first information and the second information.
• the regional information and/or season information corresponds to at least one first algorithm, and the at least one first algorithm belongs to a predefined algorithm set; the processing unit 802 is specifically configured to output the first fusion information for the terminal according to the at least one first algorithm and the at least one piece of first information.
  • multiple algorithms in the predefined algorithm set respectively correspond to different sensors, and/or correspond to different regional information and/or seasonal information.
• the regional information and/or seasonal information are used to construct the first neural network model; the processing unit 802 is specifically configured to output the first fusion information for the terminal according to the first neural network model and the at least one piece of first information.
• the at least one piece of first information includes information from one or more of at least one sensor, a user, a communication module, or a map module.
• the regional information includes any of the following: one or more of a continent name, country name, region name, province, or city; and/or, the season information includes time system information, and the time system information includes one or more of seasons, festivals, solar terms, or time periods.
  • the first device includes any one of the following: a camera device; a laser radar; a sound pickup device; an ultrasonic sensor; or a millimeter wave radar.
  • the processing unit 802 is used to determine the second information, and the second information is used to indicate the region information and/or season information where the terminal is located;
  • a communication unit 801, configured to send second information to the first device or the third device.
• the processing unit 802 is specifically configured to: determine the second information based on user input, where the user input is input via a display screen or voice; or determine the second information based on an operator's push; or determine the second information based on information from the map module.
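The three ways of determining the second information listed above (user input, operator push, map module) can be sketched as a simple priority chain. The precedence order shown is our assumption; the application does not specify one.

```python
from typing import Optional

def determine_second_info(user_input: Optional[dict],
                          operator_push: Optional[dict],
                          map_info: Optional[dict]) -> Optional[dict]:
    """Return the second information (e.g. {"region": ..., "season": ...})
    from the first available source, trying user input, then operator push,
    then the map module."""
    for source in (user_input, operator_push, map_info):
        if source is not None:
            return source
    return None  # no source could supply region/season information
```

The second device would then send the returned second information to the first device (Fig. 5) or to the third device (Fig. 6).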
  • FIG. 9 is a schematic structural diagram of an information processing device provided in the embodiment of the present application.
• the information processing device 900 may be the first device, the second device, or the third device in the above embodiments.
  • the information processing apparatus 900 may include a memory 901, a processor 902, and may also include a bus system, and the processor 902 and the memory 901 may be connected through the bus system.
  • the above-mentioned processor 902 may be a chip.
• the processor 902 may be a field programmable gate array (FPGA), an application-specific integrated circuit (ASIC), a system on chip (SoC), a central processing unit (CPU), a network processor (NP), a digital signal processor (DSP), a micro controller unit (MCU), a programmable logic device (PLD), or another integrated chip.
  • each step of the above method may be completed by an integrated logic circuit of hardware in the processor 902 or instructions in the form of software.
  • the steps of the methods disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or implemented by a combination of hardware and software modules in the processor 902 .
• the software module can be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register.
  • the storage medium is located in the memory 901, and the processor 902 reads the information in the memory 901, and completes the steps of the above method in combination with its hardware.
  • the processor 902 in the embodiment of the present application may be an integrated circuit chip, which has a signal processing capability.
  • each step of the above-mentioned method embodiments may be completed by an integrated logic circuit of hardware in a processor or instructions in the form of software.
  • the above-mentioned processor may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components .
  • the various methods, steps, and logic block diagrams disclosed in the embodiments of the present application may be implemented or executed by such a processor.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • the steps of the methods disclosed in connection with the embodiments of the present application may be directly performed by a hardware decoding processor, or performed by a combination of hardware and software modules in the decoding processor.
  • the software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register.
  • the storage medium is located in the memory; the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
  • the memory 901 in this embodiment of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memories.
  • the non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory.
  • the volatile memory may be a random access memory (RAM), which serves as an external cache. By way of example and not limitation, many forms of RAM are available, such as a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synchlink DRAM (SLDRAM), and a direct rambus RAM (DR RAM).
  • the present application also provides a vehicle, which may include the information processing apparatus described above.
  • the vehicle may be the first vehicle involved in this application.
  • the present application also provides a computer program product, the computer program product including computer program code or instructions which, when run on a computer, cause the computer to execute the method of any one of the above method embodiments.
  • the present application also provides a computer-readable storage medium, the computer-readable medium storing program code which, when run on a computer, causes the computer to execute the method of any one of the above method embodiments.
  • the above-mentioned embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
  • the above-described embodiments may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer program instructions are loaded or executed on a computer, the processes or functions according to the embodiments of the present application are generated in whole or in part.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means.
  • the computer-readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or data center that integrates one or more available media.
  • the available media may be magnetic media (e.g., floppy disk, hard disk, magnetic tape), optical media (e.g., DVD), or semiconductor media.
  • the semiconductor medium may be a solid state drive (SSD).
  • the disclosed systems, devices, and methods may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division; in actual implementation, there may be other division manners.
  • multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling, direct coupling, or communication connection shown or discussed may be implemented through some interfaces; the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the technical solutions of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Mathematical Physics (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Traffic Control Systems (AREA)

Abstract

An information processing method and apparatus, the method including: a first apparatus obtains first information, the first information containing environment information of the terminal where the first apparatus is located; receives second information from a second apparatus, the second information indicating region information and/or season information of the terminal's location; and then outputs first perception information for the terminal according to the first information and the second information. With this method, the region information and/or season information of the terminal's location is received first, and the first perception information is then output according to the environment information of the terminal where the first apparatus is located together with that region information and/or season information, which improves the applicability of the terminal's perception system to scenarios corresponding to different spatio-temporal information, where different spatio-temporal information can be used to distinguish scenarios with a large span in space and/or time. For example, when the terminal is a vehicle, the method can improve the performance of the vehicle's intelligent driving system.

Description

Information processing method and apparatus
Technical Field
The present application relates to the technical field of intelligent driving, and in particular to an information processing method and apparatus.
Background
With the development of society, more and more machines in modern life are becoming automated and intelligent. Intelligent vehicles are gradually entering people's daily lives, and intelligent driving systems have become an important part of intelligent vehicles. Intelligent driving systems can be divided into driver assistance systems, automated driving systems, and so on. In recent years, advanced driving assistant systems (ADAS) have played a very important role in intelligent vehicles: using the various sensors installed on the vehicle, an ADAS senses the surrounding environment while the vehicle is driving, collects data, identifies, detects, and tracks stationary and moving objects, and, combined with navigation map data, performs systematic computation and analysis, so that the driver becomes aware of possible dangers in advance, effectively increasing driving comfort and safety. ADAS lays the foundation for truly realizing autonomous driving.
The functions to be realized by autonomous driving mainly include perception and prediction, decision-making and planning, and vehicle control. The perception system is the "eyes" of an autonomous vehicle, used to perceive the environment, obstacles, and so on; it is the basis for whether an autonomous vehicle can drive safely. For example, the perception system includes various sensors that can obtain environment information around the vehicle.
On this basis, how to improve the performance of an advanced driving assistance system or an automated driving system is a technical problem that urgently needs to be solved.
Summary
The present application provides an information processing method and apparatus to improve the performance of an intelligent driving system.
In a first aspect, the present application provides an information processing method applied to an information processing apparatus, which may be, for example, any sensor installed on a terminal. The method includes: obtaining first information, the first information containing environment information of the terminal where the first apparatus is located; receiving second information from a second apparatus, the second information indicating region information and/or season information of the terminal's location; and outputting first perception information for the terminal according to the first information and the second information. With this method, the region information and/or season information of the terminal's location is received first, and the first perception information is then output according to the environment information of the terminal where the first apparatus is located together with that region information and/or season information. This improves the applicability of the terminal's perception system to scenarios corresponding to different spatio-temporal information, where different spatio-temporal information can be used to distinguish scenarios with a large span in space and/or time. For example, when the terminal is a vehicle, the method can improve the performance of the vehicle's intelligent driving system.
In a possible implementation, obtaining the first information includes: detecting the environment information of the terminal; or receiving the first information from the second apparatus.
In a possible implementation, the region information and/or season information corresponds to a first algorithm, and the first algorithm belongs to a predefined algorithm set. Outputting the first perception information for the terminal according to the first information and the second information includes: outputting the first perception information for the terminal based on the first algorithm and the first information. In this implementation, multiple algorithms respectively corresponding to different region information and/or season information can be predefined; the environment information of the terminal is processed by the first algorithm corresponding to the current region information and/or season information, and the first perception information is output. This improves the applicability of the terminal's perception system to scenarios corresponding to different spatio-temporal information and thus can improve the performance of the intelligent driving system.
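To make the predefined-algorithm-set idea above concrete, the following minimal Python sketch shows how a component might look up a processing algorithm from a set keyed by region and/or season. All names (`ALGORITHM_SET`, `select_algorithm`, the specific region/season keys, and the toy algorithm bodies) are hypothetical illustrations; the application does not specify an implementation.

```python
# Hypothetical sketch: selecting a perception algorithm from a predefined
# set according to region and/or season information (the "second information").

def algo_cn(frame):         # stand-in for an algorithm trained on mainland-China data
    return {"region": "CN", "objects": frame}

def algo_hk_winter(frame):  # stand-in for a Hong-Kong-winter-specific algorithm
    return {"region": "HK", "season": "winter", "objects": frame}

def algo_default(frame):    # fallback when no specific algorithm is deployed
    return {"region": None, "objects": frame}

# Predefined algorithm set: keys combine region and (optionally) season.
ALGORITHM_SET = {
    ("CN", None): algo_cn,
    ("HK", "winter"): algo_hk_winter,
}

def select_algorithm(region, season):
    """Pick the most specific predefined algorithm for the given
    spatio-temporal information, falling back to a region-only entry."""
    return (ALGORITHM_SET.get((region, season))
            or ALGORITHM_SET.get((region, None))
            or algo_default)

first_info = ["car", "traffic_light"]      # environment info from the sensor
algo = select_algorithm("HK", "winter")    # second information: region + season
perception = algo(first_info)              # first perception information
```

The fallback chain mirrors the text's idea that a region-only algorithm can serve when no region-plus-season variant is deployed.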
In a possible implementation, the region information and/or season information is used to construct a first neural network model; outputting the first perception information for the terminal according to the first information and the second information includes: outputting the first perception information for the terminal according to the first neural network model. In this implementation, an initial neural network model can be deployed; by feeding the region information and/or season information into the initial neural network model, it can automatically adapt into a first neural network model suited to the scenarios corresponding to different spatio-temporal information (including region information and/or season information), thereby reducing the overhead of training and deploying the perception system.
In a possible implementation, outputting the first perception information for the terminal according to the first information and the second information includes: sending the first perception information to a fusion unit. The fusion unit can fuse the perception information output by the first apparatus with perception information output by other sensors, thereby obtaining more reliable fused information.
In a possible implementation, the region information includes one or more of: a continent name, a country name, a region name, a province, or a city; and/or the season information contains time regime information, the time regime information including one or more of a season, a festival, a solar term, or a time convention.
In a possible implementation, the first apparatus includes any one of the following: a camera apparatus; a lidar; a sound pickup apparatus; an ultrasonic sensor; or a millimeter-wave radar.
In a second aspect, the present application provides an information processing method applied to an information processing apparatus, which may be, for example, a fusion unit in a terminal. The method includes: receiving at least one piece of first information from at least one first apparatus, the first information containing environment information of the terminal where the first apparatus is located; receiving second information from a second apparatus, the second information indicating region information and/or season information of the terminal's location; and outputting first fused information for the terminal according to the at least one piece of first information and the second information. With this method, the region information and/or season information of the terminal's location is received first, and the first fused information is then output according to the environment information of the terminal where the first apparatus is located together with that region information and/or season information. This improves the applicability of the intelligent driving system to scenarios corresponding to different spatio-temporal information, where different spatio-temporal information can be used to distinguish scenarios with a large span in space and/or time. For example, when the terminal is a vehicle, the method can improve the performance of the vehicle's intelligent driving system.
In a possible implementation, the region information and/or season information corresponds to at least one first algorithm, and the at least one first algorithm belongs to a predefined algorithm set. Outputting the first fused information for the terminal according to the at least one piece of first information and the second information includes: outputting the first fused information for the terminal according to the at least one first algorithm and the at least one piece of first information. In this implementation, multiple first algorithms respectively corresponding to different region information and/or season information can be predefined; the environment information of the terminal is processed by the first algorithm corresponding to the current region information and/or season information, and the first fused information is output, which improves the applicability of the terminal's perception system to scenarios corresponding to different spatio-temporal information and thus can improve the performance of the intelligent driving system.
In a possible implementation, the multiple algorithms in the predefined algorithm set correspond to different sensors and/or to different region information and/or season information.
In a possible implementation, the region information and/or season information is used to construct a first neural network model; outputting the first fused information for the terminal according to the at least one piece of first information and the second information includes: outputting the first fused information for the terminal according to the first neural network model. In this implementation, an initial neural network model can be deployed; by feeding the region information and/or season information into the initial neural network model, it can automatically adapt into a first neural network model suited to the scenarios corresponding to different spatio-temporal information, thereby reducing the overhead of training and deploying the perception system.
In a possible implementation, the at least one piece of first information includes information from one or more of at least one sensor, a user, a communication module, or a map module.
In a possible implementation, the region information includes one or more of: a continent name, a country name, a region name, a province, or a city; and/or the season information contains time regime information, the time regime information including one or more of a season, a festival, a solar term, or a time convention.
In a possible implementation, the first apparatus includes any one of the following: a camera apparatus; a lidar; a sound pickup apparatus; an ultrasonic sensor; or a millimeter-wave radar.
In a third aspect, the present application provides an information processing method applied to an information processing apparatus, which may be, for example, an external module in a terminal or an external module installed on a terminal. The method includes: determining second information, the second information indicating region information and/or season information of the terminal's location; and sending the second information to a first apparatus. With this method, the first apparatus may, for example, be any sensor on the terminal: providing the sensor with the region information and/or season information of the terminal's location enables the sensor to output first perception information according to the environment information of the terminal where the first apparatus is located and the region information and/or season information of the terminal's location. For another example, the first apparatus may be a fusion unit on the terminal: providing the fusion unit with the region information and/or season information of the terminal's location enables the fusion unit to output first fused information according to the environment information of the terminal where the first apparatus is located and the region information and/or season information of the terminal's location. This improves the applicability of the terminal's perception system to scenarios corresponding to different spatio-temporal information, where different spatio-temporal information can be used to distinguish scenarios with a large span in space and/or time. For example, when the terminal is a vehicle, the method can improve the performance of the vehicle's intelligent driving system.
In a possible implementation, determining the second information includes: determining the second information based on user input, the user input being display-screen-based input or voice input; or determining the second information based on an operator's push; or determining the second information based on information from a map module. This implementation provides multiple ways of determining the second information.
In a fourth aspect, the present application further provides an information processing apparatus for performing the method in any of the above aspects or any possible implementation thereof. Specifically, the information processing apparatus may include modules/units for performing the method in any of the above aspects or any possible implementation thereof; these modules/units may be implemented by hardware, or by hardware executing corresponding software.
In a fifth aspect, the present application further provides an information processing apparatus, the information processing apparatus including at least one processor and a communication interface, the communication interface being configured to receive signals from communication apparatuses other than the information processing apparatus and transmit them to the at least one processor, or to send signals from the at least one processor to communication apparatuses other than the information processing apparatus; the at least one processor, through a logic circuit or by executing code instructions, is configured to implement the operation steps of the method in the first aspect or any possible implementation of the first aspect, or the method in the second aspect or any possible implementation of the second aspect, or the method in the third aspect or any possible implementation of the third aspect.
In a sixth aspect, the present application further provides a terminal, the terminal including an apparatus for performing the method in the first aspect or any possible implementation of the first aspect and an apparatus for performing the method in the third aspect or any possible implementation of the third aspect; or the terminal including an apparatus for performing the method in the second aspect or any possible implementation of the second aspect and an apparatus for performing the method in the third aspect or any possible implementation of the third aspect.
In a seventh aspect, the present application further provides a computer-readable storage medium storing a computer program which, when run, implements the operation steps of the method in the first aspect or any possible implementation of the first aspect, or the method in the second aspect or any possible implementation of the second aspect, or the method in the third aspect or any possible implementation of the third aspect.
In an eighth aspect, the present application further provides a computer program product including a computer program or instructions which, when executed by a communication apparatus, implement the operation steps of the method in the first aspect or any possible implementation of the first aspect, or the method in the second aspect or any possible implementation of the second aspect, or the method in the third aspect or any possible implementation of the third aspect.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of a vehicle perception system provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of a system provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of a system provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of a system provided by an embodiment of the present application;
FIG. 5 is a schematic flowchart of an information processing method provided by an embodiment of the present application;
FIG. 6 is a schematic flowchart of an information processing method provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of an information processing apparatus provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of an information processing apparatus provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of an information processing apparatus provided by an embodiment of the present application.
Detailed Description
The present application is described in further detail below with reference to the accompanying drawings. Obviously, the described embodiments are some, rather than all, of the embodiments of the present application. The specific operation methods in the method embodiments may also be applied to the apparatus embodiments. In the description of the embodiments of the present application, those of ordinary skill in the art can understand that ordinal numbers such as "first" and "second" in the present application are used only for convenience of description and distinction, and are not intended to limit the scope of the embodiments of the present application or to indicate a sequence. "Multiple" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate the three cases of A alone, both A and B, and B alone. The character "/" generally indicates an "or" relationship between the preceding and following associated objects. "At least one" means one or more. "At least two" means two or more. "At least one", "any one", or similar expressions refer to any combination of these items, including any combination of a single item or plural items.
At present, an intelligent driving vehicle can perceive the environment around the vehicle through a perception system, which includes, for example, various sensors that can perceive environment information around the vehicle. However, what these sensors collect is local environment information around the vehicle. Although traffic environments around the world are roughly the same, there are still regional differences in certain aspects, especially aspects closely related to road traffic, such as traffic facilities (e.g., signs, markings, signboards, signal lights), traffic police (e.g., uniforms, gestures), and special traffic vehicles (e.g., police cars, ambulances, fire trucks). For the perception system of autonomous driving, regional or temporal differences in the traffic environment pose challenges to the training and deployment of algorithms. At present there are two ways to train and deploy these algorithms. The first way is to train one large, all-encompassing algorithm applicable to different traffic environments around the world; however, this requires a large amount of annotated data from around the world for sample training, and a very large network puts considerable pressure on the on-board computing unit, so it is unrealistic in terms of training and deployment costs. The other way is, for an intelligent driving vehicle operating in a certain region, to train with image data of that region. This approach requires attention to the vehicle's operating region at deployment time, because once the vehicle drives out of the region it became familiar with during the training phase, i.e., the designed applicable range, the algorithm's performance will degrade, affecting the safety of intelligent driving. This approach restricts the intelligent driving vehicle to operating only in a specific region, yet the traffic scenes of many regions have different scene characteristics: for example, the traffic environments of Hong Kong, Zhuhai in Guangdong, Macao, and the mainland within China, as well as the countries of continental Europe, all have different characteristics. The second approach therefore leads to poor applicability of the perception system to scenarios with a large span in space and/or time. Consequently, a method is urgently needed to solve the problem of how to improve the performance of an intelligent driving system.
In view of this, an embodiment of the present application provides an information processing method: any sensor installed on a terminal outputs first perception information for the terminal by combining the environment information of the terminal where the sensor is located with the region information and/or season information of the terminal's location; or a fusion unit on the terminal outputs first fused information for the terminal by combining the environment information of the terminal where the fusion unit is located with the region information and/or season information of the terminal's location. This improves the applicability of the terminal's perception system to scenarios corresponding to different spatio-temporal information; different spatio-temporal information can be used to distinguish scenarios with a large span in space and/or time. For example, when the terminal is a vehicle, this method makes it easier to satisfy users' requirements for driving the vehicle in scenarios corresponding to different spatio-temporal information, and therefore can improve the performance of the vehicle's intelligent driving system.
The industry has proposed grading standards for driving automation systems. The driving automation grading standard proposed by SAE International (Society of Automotive Engineers International) includes six levels, L0-L5. At levels L0-L2, the driver support system can provide some support functions for the driver, but regardless of whether the vehicle's driver support functions are enabled, the driver must drive the vehicle and constantly supervise these support functions, and must steer, brake, or accelerate as needed to ensure safety. The differences among the support functions of L0, L1, and L2 are: L0 is no driving automation, with support functions limited to providing warnings and momentary assistance; L1 support functions provide the driver with steering or braking/acceleration support; L2 support functions provide the driver with steering and braking/acceleration support. L3 is semi-automated driving: the automated driving system can complete certain driving tasks and, under certain circumstances, monitor the driving environment, but the driver must be ready to regain driving control at any time; for example, the driver must drive when the function so requests. L4 is highly automated driving: the automated driving system can complete driving tasks and monitor the driving environment in certain environments and under specific conditions. L5 is fully automated driving: the automated driving system can complete all driving tasks under all conditions.
In the embodiments of the present application, intelligent driving systems can be divided into ADAS and automated driving systems, where an advanced driving assistance system (ADAS) may be a driving automation system reaching levels L0-L2, and an automated driving system is a driving automation system reaching level L3-L5 or above.
It should be noted that the terminal in the embodiments of the present application may be, for example, a vehicle or another apparatus in a vehicle. The other apparatus includes but is not limited to: an on-board terminal, an on-board controller, an on-board module, an on-board component, an on-board chip, an on-board unit, or other sensors such as an on-board radar or an on-board camera. Of course, the above terminal may also be another intelligent terminal other than a vehicle, or a component provided in another intelligent terminal other than a vehicle. The intelligent terminal may be an intelligent transportation device, a smart home device, a robot, or the like, including but not limited to the intelligent terminal itself or a controller, a chip, other sensors such as a radar or a camera, and other components within the intelligent terminal.
A variety of sensors may be installed on the terminal. For example, FIG. 1 takes a vehicle as the terminal: a camera apparatus (e.g., the camera in FIG. 1), a lidar, millimeter-wave radars (e.g., the long-range millimeter-wave radar and the medium/short-range millimeter-wave radars in FIG. 1), ultrasonic sensors, and the like are installed on the vehicle, so as to obtain environment information around the vehicle through the sensors and to analyze and process the obtained environment information, realizing functions such as obstacle perception, target recognition, vehicle positioning, path planning, and driver monitoring/reminding, thereby improving the safety, automation, and comfort of vehicle driving. The environment information obtained by the sensors mainly concerns things around the terminal, such as traffic light information, traffic sign information, traffic police, and obstacles, which will not be repeated later.
The camera apparatus is used to obtain image information of the environment where the vehicle is located; at present, multiple cameras can be installed on a vehicle to obtain information from more angles. Lidar is short for light laser detection and ranging (LiDAR) system, mainly composed of a transmitter, a receiver, and a signal processing unit. The transmitter is the laser emission mechanism in the lidar; after the laser emitted by the transmitter irradiates a target object, it is reflected by the target object, and the reflected light converges onto the receiver through a lens group. The signal processing unit is responsible for controlling the emission of the transmitter and for processing the signals received by the receiver, and calculates information such as the position, speed, distance, and/or size of the target object.
Millimeter-wave radar uses millimeter waves as the detection medium and can measure the distance, angle, and relative speed between the radar and the measured object. Millimeter-wave radars can be divided by detection range into long range radar (LRR), mid-range radar (MRR), and short range radar (SRR). The application scenarios mainly targeted by LRR include active cruise and braking assistance; LRR does not demand a wide detection angular domain, which translates into a low requirement on the antenna's 3 dB beam width. The application scenarios mainly targeted by MRR/SRR include automatic parking, lane-change assistance, and blind-spot detection; MRR/SRR demand a wide detection angular domain, which translates into a high requirement on the antenna's 3 dB beam width together with a low antenna sidelobe level. The beam width ensures the detectable angular range; low sidelobes reduce the clutter energy reflected from the ground, lower the false-alarm probability, and ensure driving safety. The LRR can be installed at the front of the vehicle body, and the MRR/SRR can be installed at the four corners of the vehicle; used together, they can achieve 360-degree coverage around the vehicle body.
A millimeter-wave radar may include a housing with at least one printed circuit board (PCB) inside, for example a power PCB and a radar PCB. The power PCB can provide the voltage used inside the radar as well as interfaces for communicating with other devices and safety functions; the radar PCB can provide transmission, reception, and processing of millimeter-wave signals, and integrates components for millimeter-wave signal processing and antennas for transmitting and receiving millimeter waves (transmit antenna Tx and receive antenna Rx). The antennas can be formed on the back of the radar PCB as microstrip arrays for transmitting and receiving millimeter waves.
An ultrasonic sensor, also called an ultrasonic radar, is a sensing device using ultrasonic detection. Its working principle is to emit ultrasonic waves outward through an ultrasonic transmitting device, receive the ultrasonic waves reflected back by obstacles through a receiving device, and calculate the distance from the time difference between emission and reception of the reflected ultrasonic waves. At present, distances measured by ultrasonic sensors can be used to indicate the distance from the vehicle body to an obstacle, to assist parking, or to reduce unnecessary collisions. It should be understood that the above sensors are merely examples rather than limitations of the sensors that may be configured on the vehicle in the embodiments of the present application; in other embodiments, the sensors may include but are not limited to the above examples.
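The time-of-flight distance calculation described for the ultrasonic sensor can be sketched in a few lines. The speed of sound (about 343 m/s in air at 20 degrees C) and the division by two for the round trip are standard physics; the function name is an illustrative choice of ours.

```python
def ultrasonic_distance(round_trip_s, speed_of_sound=343.0):
    """Distance to an obstacle from an ultrasonic echo's round-trip time.

    The wave travels to the obstacle and back, so the one-way
    distance is half of speed * time.
    """
    return speed_of_sound * round_trip_s / 2.0

# An echo received 0.01 s after emission corresponds to roughly 1.7 m.
d = ultrasonic_distance(0.01)
```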
In addition to the various sensors, the above terminal may also be equipped with a fusion unit and an external module. The external module may be a module inside the terminal that can interact with the sensors and/or the fusion unit, for example an electronic control unit, a domain control unit, a multi-domain controller, or a cockpit domain controller; the external module may also be another device outside the terminal.
An embodiment of the present application further provides an information processing system, the system including at least one sensor, a fusion unit, and an external module. The systems shown in FIG. 2 and FIG. 3 below take N sensors, with N greater than 2, as an example, but this does not limit the number of sensors.
In a possible implementation, in the information processing system shown in FIG. 2, the N sensors are connected to the external module through a first logical interface layer, where the first logical interface layer includes multiple first logical interfaces, and the external module is connected to each sensor through a different first logical interface. By way of example, each first logical interface may provide one or more of the following pieces of information:
(1) Sensor operation state, used to define the working mode of the sensor, the working mode being, for example, an initialization mode, a normal working mode, or a calibration mode.
(2) Environment information, used to describe the weather, time, traffic conditions, and the like around the ego vehicle.
By way of example, the environment information may include the following: weather conditions, ambient illuminance, air temperature, air pressure, relative humidity, road characteristics (e.g., highway, urban area), and road surface conditions (temperature, standing water, icing, roughness).
(3) Vehicle state, used to describe the dynamic state of the ego vehicle; the dynamic state can be understood as the ego vehicle being in motion. By way of example, the dynamic state can be described by kinematic parameters, including but not limited to speed, acceleration, jerk, heading, and steering.
(4) <Interface> operation, used to define the operation mode of each logical interface and to provide additional parameters. The operation mode of a logical interface is, for example, a working mode (such as transmitting data) or an idle mode; the additional parameters may be supplementary information about the data transmitted to the sensor through the logical interface; for example, if the transmitted data is image data, an additional parameter may be the resolution of the image data.
(5) Region information and/or season information.
As shown in FIG. 2, the N sensors are connected to the fusion unit through a second logical interface layer. The second logical interface layer includes multiple second logical interfaces, and the fusion unit is connected to each sensor through a different second logical interface.
The second logical interface can provide object-level information (e.g., potentially moving objects, road objects, static objects), feature- and detection-level information based on sensor-technology-specific information, and further supporting information where available.
In this implementation, the external module can send the region information and/or season information of the vehicle's location to each of the N sensors through the first logical interface layer; each sensor combines the environment information it perceives with the region information and/or season information of the vehicle's location and outputs first perception information to the fusion unit. The environment information around the terminal obtainable by sensors of different types, functions, and installation positions may be redundant, complementary, or even contradictory; the fusion unit can fuse the first perception information of the multiple sensors to obtain a highly reliable fusion result of the perception information.
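As a minimal illustration of the redundancy handling attributed to the fusion unit above, confidence-weighted voting is one common way to reconcile redundant or contradictory per-sensor reports; the data layout, function name, and threshold here are hypothetical placeholders, not the application's specified mechanism.

```python
from collections import defaultdict

def fuse(perceptions, threshold=0.5):
    """Fuse per-sensor perception reports into one object dictionary.

    Each report is a (object_label, confidence) pair. An object is kept
    when the average confidence across the sensors that saw it exceeds
    the threshold, so a single low-confidence false alarm is discarded
    while agreeing sensors reinforce each other.
    """
    scores = defaultdict(list)
    for sensor_reports in perceptions:
        for label, conf in sensor_reports:
            scores[label].append(conf)
    return {label: sum(c) / len(c)
            for label, c in scores.items()
            if sum(c) / len(c) > threshold}

camera = [("pedestrian", 0.9), ("sign", 0.8)]
radar = [("pedestrian", 0.7)]
lidar = [("pedestrian", 0.8), ("ghost", 0.2)]
fused = fuse([camera, radar, lidar])   # "ghost" is dropped, "pedestrian" kept
```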
In another possible implementation, in the information processing system shown in FIG. 3, the fusion unit is connected to the external module through a third logical interface, and the fusion unit is connected to the N sensors through a second logical interface layer. For the specific implementation of the second logical interface layer here, refer to the related description of the second logical interface layer in FIG. 2 above, which is not repeated here. The external module can send the region information and/or season information of the vehicle's location to the fusion unit through the third logical interface; the N sensors can send the environment information they each perceive around the vehicle to the fusion unit through the second logical interface layer; the fusion unit then combines the region information and/or season information with the vehicle's environment information and outputs first fused information for the vehicle.
In yet another possible implementation, in the information processing system shown in FIG. 4, the external module is connected to the fusion unit through a third logical interface, the external module is connected to the N sensors through a first logical interface layer, and the fusion unit is connected to the N sensors through a second logical interface layer. The external module can send the region information and/or season information of the vehicle's location to the fusion unit through the third logical interface and, at the same time, send the region information and/or season information to each of the N sensors through the first logical interface layer.
The external module in the above three information processing systems may include but is not limited to an electronic control unit, a domain control unit, a multi-domain controller, a cockpit domain controller, or an on-board terminal.
The information processing method of the embodiments of the present application is described below with reference to the method flowcharts.
Embodiment 1
FIG. 5 shows a schematic flowchart of an information processing method according to an embodiment of the present application. The method is applicable to the system shown in FIG. 2. As shown in FIG. 5, the method may include the following steps:
S501: The second apparatus determines second information, where the second information is used to indicate region information of the terminal's location (information about the region where the terminal is located) and/or season information (information about the season the terminal is currently in). In the embodiments of the present application, region information and/or season information may also be called spatio-temporal information; different spatio-temporal information can be used to distinguish scenarios with a large span in space and/or time, which will not be repeated later.
Here, the second apparatus may be the external module in FIG. 2; for example, the second apparatus is an electronic control unit, a domain control unit, a multi-domain controller, a cockpit domain controller, or an on-board terminal.
The region information may include one or more of: a continent name, a country name, a region name, a province, or a city, where the continent name is, for example, Asia or Europe.
The season information contains time regime information, the time regime information including one or more of a season, a festival, a solar term, or a time convention. The season may be spring, summer, autumn, or winter. The time regime information established by different countries may differ. For example, the time convention may be a Chinese solar term such as the Winter Solstice or the Autumnal Equinox; for another example, the time convention may be daylight saving time or standard time in the United States, where daylight saving time runs from the second Sunday of March to the first Sunday of November each year, and standard time from the first Sunday of November to the second Sunday of March of the following year.
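The U.S. daylight-saving rule cited above (second Sunday of March to first Sunday of November) can be computed directly with the Python standard library, which is one way an external module could derive the time-regime part of the second information; the function names are illustrative choices of ours.

```python
import datetime

def nth_sunday(year, month, n):
    """Date of the n-th Sunday of the given month."""
    d = datetime.date(year, month, 1)
    # weekday(): Monday == 0 ... Sunday == 6
    first_sunday = d + datetime.timedelta(days=(6 - d.weekday()) % 7)
    return first_sunday + datetime.timedelta(weeks=n - 1)

def us_dst_active(date):
    """True if U.S. daylight saving time is in effect on `date`
    (second Sunday of March through first Sunday of November)."""
    start = nth_sunday(date.year, 3, 2)
    end = nth_sunday(date.year, 11, 1)
    return start <= date < end
```

For instance, in 2021 (the filing year) daylight saving time began on March 14 and ended on November 7.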
In the embodiments of the present application, the determination of the second information by the second apparatus in S501 may have multiple implementations:
In a possible implementation, the second apparatus may determine the second information based on user input, the user input being display-screen-based input or voice input.
In another possible implementation, the second apparatus may determine the second information based on an operator's push.
In yet another possible implementation, the second apparatus may determine the second information based on information from a map module.
S502: The second apparatus sends the second information to the first apparatus. Correspondingly, the first apparatus receives the second information from the second apparatus.
Here, the first apparatus may be any sensor in FIG. 2; for example, the first apparatus is a camera apparatus, a lidar, a sound pickup apparatus, an ultrasonic sensor, or a millimeter-wave radar, where the sound pickup apparatus may be any apparatus that can pick up sound, such as a microphone.
S503: The first apparatus obtains first information, the first information containing environment information of the terminal where the first apparatus is located.
In the embodiments of the present application, there may be multiple implementations for the first apparatus to obtain the first information.
In a possible implementation, the first apparatus may detect the environment information of the terminal. For example, the first apparatus is a millimeter-wave radar and can detect road information. For another example, the first apparatus is a camera apparatus and can collect image data of the surrounding environment.
In another possible implementation, the first apparatus may receive the first information from the second apparatus. Here, the second apparatus may be an external module; for example, the external module is a large screen, and the first apparatus can receive input from the large screen; for another example, the external module is a microphone, and the first apparatus can receive voice input from the microphone.
S504: The first apparatus outputs first perception information for the terminal according to the first information and the second information.
In the embodiments of the present application, there may be multiple possible implementations for the first apparatus in S504 to output the first perception information for the terminal according to the first information and the second information.
In a possible implementation, the region information and/or season information corresponds to a first algorithm, the first algorithm belonging to a predefined algorithm set; the first apparatus can output the first perception information for the terminal based on the first algorithm and the first information.
The multiple first algorithms included in the predefined algorithm set can be obtained by training on data sets for different region information and/or season information; these data sets have the unique scene characteristics corresponding to the region information and/or season information, and the trained algorithms are deployed in the terminal as required. The first algorithm may include but is not limited to any one or more of the following: a special-vehicle recognition algorithm, a special-vehicle siren recognition algorithm, a police-and-gesture recognition algorithm, a road sign and marking recognition algorithm, a traffic signal light recognition algorithm, and so on. When the first algorithm includes multiple algorithms, the first algorithm can also be understood as an algorithm package.
In an example, take the case where the second information includes region information, i.e., the region information corresponds to the first algorithm, and the predefined algorithm set may include first algorithms respectively corresponding to multiple preset pieces of region information. The first apparatus can, according to the region information received from the second apparatus, select from the predefined algorithm set the first algorithm corresponding to the received region information, then process the first information with that first algorithm, and output the first perception information for the terminal. For example, when the terminal is a vehicle, this approach can improve the applicability of the terminal's perception system to scenarios with a large span in space, thereby improving the performance of the intelligent driving system.
In another example, take the case where the second information includes season information, i.e., the season information corresponds to the first algorithm, and the predefined algorithm set may include first algorithms respectively corresponding to multiple preset pieces of season information. The first apparatus can, according to the season information received from the second apparatus, select from the predefined algorithm set the first algorithm corresponding to the received season information, then process the first information with that first algorithm, and output the first perception information for the terminal. For example, when the terminal is a vehicle, this approach can improve the applicability of the terminal's perception system to scenarios with a large span in time, thereby improving the performance of the intelligent driving system.
In yet another example, take the case where the second information includes region information and season information, i.e., the region information and season information correspond to the first algorithm, and the predefined algorithm set may include first algorithms respectively corresponding to multiple preset pieces of region information and season information, with each preset piece of region information together with each preset piece of season information corresponding to one first algorithm. The first apparatus can, according to the region information and season information received from the second apparatus, select from the predefined algorithm set the first algorithm corresponding to the received region information and season information, then process the first information with that first algorithm, and output the first perception information for the terminal. For example, when the terminal is a vehicle, this approach can improve the applicability of the terminal's perception system to scenarios with a large span in space and time, thereby improving the performance of the intelligent driving system.
Through this implementation, multiple algorithms respectively corresponding to different region information and/or season information can be predefined; the environment information of the terminal is processed by the first algorithm corresponding to the current region information and/or season information, and the first perception information is output. This improves the applicability of the terminal's perception system to scenarios corresponding to different spatio-temporal information, and through this method the performance of the intelligent driving system can be improved.
In another possible implementation, the region information and/or season information is used to construct a first neural network model; the first apparatus can output the first perception information for the terminal according to the first neural network model and the first information.
In an example, take the case where the second information includes region information, i.e., the region information is used to construct a first neural network model corresponding to the region information. The first apparatus can input the region information into an initial neural network model, and the initial model automatically adjusts into a first neural network model adapted to that region information. The first information then serves as input to the first neural network model, which outputs the first perception information for the terminal. This approach can improve the applicability of the terminal's perception system to scenarios with a large span in space, thereby improving the performance of the intelligent driving system.
In another example, take the case where the second information includes season information, i.e., the season information is used to construct a first neural network model corresponding to the season information. The first apparatus can input the season information into an initial neural network model, and the initial model automatically adjusts into a first neural network model adapted to that season information. The first information then serves as input to the first neural network model, which outputs the first perception information for the terminal.
In yet another example, take the case where the second information includes region information and season information, i.e., the region information and season information are used to construct a first neural network model corresponding to both. The first apparatus can input the region information and season information into an initial neural network model, and the initial model automatically adjusts into a first neural network model adapted to that region information and season information. The first information then serves as input to the first neural network model, which outputs the first perception information for the terminal.
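One way to picture the "initial model adapted by spatio-temporal input" described above is a model that takes a region/season encoding as an extra input. The sketch below uses a plain-Python one-hot encoding and a trivial linear score rather than a real neural-network framework; every name, the region/season vocabularies, and the tiny weight values are illustrative assumptions only.

```python
REGIONS = ["CN", "HK", "EU"]
SEASONS = ["spring", "summer", "autumn", "winter"]

def encode_context(region, season):
    """One-hot encode region and season into a single context vector
    that conditions the (initial) perception model."""
    vec = [0.0] * (len(REGIONS) + len(SEASONS))
    vec[REGIONS.index(region)] = 1.0
    vec[len(REGIONS) + SEASONS.index(season)] = 1.0
    return vec

def conditioned_score(feature, context, w_feat=0.5, w_ctx=None):
    """Toy stand-in for a 'first neural network model': a score that
    depends on both the sensor feature and the context vector."""
    if w_ctx is None:
        w_ctx = [0.1] * len(context)          # illustrative weights
    return w_feat * feature + sum(w * c for w, c in zip(w_ctx, context))

ctx = encode_context("HK", "winter")
score = conditioned_score(0.8, ctx)   # same feature, region/season-aware output
```

The point of the sketch is only that the same input feature yields different outputs once the spatio-temporal context changes, which is the adaptation behavior the text describes.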
Through this implementation, an initial neural network model can be deployed in the terminal; by feeding the region information and/or season information into the initial neural network model, it can automatically adapt into a first neural network model suited to scenarios corresponding to different spatio-temporal information, thereby reducing the overhead of training and deploying the perception system and improving the performance of the intelligent driving system.
In the embodiments of the present application, the first apparatus first receives the region information and/or season information of the terminal's location, and then outputs the first perception information according to the environment information of the terminal where the first apparatus is located and the region information and/or season information of the terminal's location. This improves the applicability of the terminal's perception system to scenarios corresponding to different spatio-temporal information, where different spatio-temporal information can be used to distinguish scenarios with a large span in space and/or time. For example, when the terminal is a vehicle, this method can improve the performance of the vehicle's intelligent driving system.
In a possible implementation, on the basis of the first two possible implementations, outputting the first perception information for the terminal in S504 may include: sending the first perception information to the fusion unit, so that the fusion unit fuses the first perception information output by the first apparatus with first perception information output by other sensors to obtain fused information.
In another possible implementation, after S504, the information processing method may further include S505 and S506.
S505: The first apparatus sends the first perception information to the third apparatus. Correspondingly, the third apparatus receives the first perception information sent by the first apparatus.
Here, the third apparatus is, for example, a fusion unit.
S506: The third apparatus fuses the first perception information output by the first apparatus with perception information output by other sensors to obtain fused information.
In some other embodiments, reference may be made to the information processing system shown in FIG. 4 above. The difference between that embodiment and the embodiment shown in FIG. 5 is that, after S504, the fusion unit may also receive the region information and/or season information from the second apparatus, and the fusion unit may further fuse the first perception information sent by each sensor based on the received region information and/or season information.
In the embodiments of the present application, the fusion unit can fuse the perception information output by the first apparatus with the perception information output by other sensors, thereby obtaining more reliable fused information.
Embodiment 2
FIG. 6 shows a schematic flowchart of an information processing method according to an embodiment of the present application. The method is applicable to the system shown in FIG. 3. As shown in FIG. 6, the method may include the following steps:
S601: The second apparatus determines second information, where the second information is used to indicate region information of the terminal's location (information about the region where the terminal is located) and/or season information (information about the season the terminal is currently in). In the embodiments of the present application, region information and/or season information may also be called spatio-temporal information; different spatio-temporal information can be used to distinguish scenarios with a large span in space and/or time, which will not be repeated later.
Here, the second apparatus may be the external module in FIG. 3; for example, the second apparatus is an electronic control unit, a domain control unit, a multi-domain controller, a cockpit domain controller, or an on-board terminal.
For the specific implementation of the region information or season information, refer to the related description of the region information or season information in S501, which is not repeated here.
In the embodiments of the present application, the determination of the second information by the second apparatus in S601 may have multiple implementations:
In a possible implementation, the second apparatus may determine the second information based on user input, the user input being display-screen-based input or voice input.
In another possible implementation, the second apparatus may determine the second information based on an operator's push.
In yet another possible implementation, the second apparatus may determine the second information based on information from a map module.
S602: The second apparatus sends the second information to the third apparatus. Correspondingly, the third apparatus receives the second information from the second apparatus.
Here, the third apparatus may be the fusion unit in FIG. 3.
S603: The third apparatus receives at least one piece of first information from at least one first apparatus, the first information containing environment information of the terminal where the first apparatus is located.
Here, the first apparatus may be any sensor in FIG. 3; for example, the first apparatus is a camera apparatus, a lidar, a sound pickup apparatus, an ultrasonic sensor, a millimeter-wave radar, an inertial navigation system, or a GNSS, where the sound pickup apparatus may be any apparatus that can pick up sound, such as a microphone; the first apparatus may also be a large screen, a communication module, a map module, or the like. In a possible implementation, the at least one piece of first information in S603 includes information from one or more of at least one sensor, a user, a communication module (for example a V2X module that can directly or indirectly exchange information with the third apparatus), or a map module.
S604: The third apparatus outputs first fused information for the terminal according to the at least one piece of first information and the second information.
In the embodiments of the present application, there may be multiple possible implementations for the third apparatus in S604 to output the first fused information for the terminal according to the at least one piece of first information and the second information.
In a possible implementation, the region information and/or season information corresponds to at least one first algorithm, the at least one first algorithm belonging to a predefined algorithm set; the third apparatus can output the first fused information for the terminal according to the at least one first algorithm and the at least one piece of first information. In this way, multiple first algorithms respectively corresponding to different region information and/or season information can be predefined; the environment information of the terminal is processed by the first algorithm corresponding to the current region information and/or season information, and the first fused information is output. This improves the applicability of the terminal's perception system to scenarios corresponding to different spatio-temporal information, where different spatio-temporal information can be used to distinguish scenarios with a large span in space and/or time, thereby improving the performance of the intelligent driving system.
In the embodiments of the present application, the multiple algorithms included in the predefined algorithm set correspond to different sensors, and/or the multiple algorithms correspond to different region information and/or season information. Specifically, the multiple algorithms in the predefined algorithm set correspond to different sensors; or the multiple algorithms correspond to different region information and/or season information; or the multiple algorithms correspond to different sensors as well as to different region information and/or season information.
The multiple first algorithms included in the predefined algorithm set can be obtained by training on data sets for different region information and/or season information; these data sets have the unique scene characteristics corresponding to the region information and/or season information, and the trained algorithms are deployed in the terminal as required. The first algorithm may include but is not limited to any one or more of the following: a special-vehicle recognition algorithm, a special-vehicle siren recognition algorithm, a police-and-gesture recognition algorithm, a road sign and marking recognition algorithm, a traffic signal light recognition algorithm, and so on. When the first algorithm includes multiple algorithms, the first algorithm can also be understood as an algorithm package.
In an example, take the case where the second information includes region information and the region information corresponds to one first algorithm; that is, this first algorithm can process first information from different sensors. The third apparatus can, according to the region information received from the second apparatus, select from the predefined algorithm set the first algorithm corresponding to the received region information, then process the at least one piece of first information from each sensor with that first algorithm, and output the first fused information for the terminal. Similarly, for the case where the second information includes season information with the season information corresponding to one first algorithm, the specific implementation may refer to this example; likewise for the case where the second information includes region information and season information corresponding to one first algorithm, which is not repeated here.
In another example, take the case where the second information includes region information and the region information corresponds to multiple first algorithms, with the multiple first algorithms respectively processing first information from different sensors; for example, first algorithm 1 processes the first information from the camera apparatus, first algorithm 2 processes the first information from the lidar, and first algorithm 3 processes the first information from the millimeter-wave radar. In this example, the third apparatus can, according to the region information received from the second apparatus, select from the predefined algorithm set the multiple first algorithms corresponding to the received region information, then process the at least one piece of first information from each sensor with the corresponding first algorithms, and output the first fused information for the terminal. Similarly, for the cases where the second information includes season information corresponding to multiple first algorithms, or includes region information and season information corresponding to multiple first algorithms, the specific implementation may refer to this example, which is not repeated here.
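The per-sensor dispatch in the example above can be sketched as a table keyed by region and sensor type: the fusion unit routes each sensor's first information through the matching algorithm before combining the results. All names, the toy algorithm bodies, and the list-style "fusion" are illustrative placeholders, not the application's specified design.

```python
# Hypothetical per-(region, sensor) algorithm table for the fusion unit.
def camera_algo_cn(data):  return ("camera", "CN", data)
def lidar_algo_cn(data):   return ("lidar", "CN", data)
def radar_algo_cn(data):   return ("radar", "CN", data)

ALGOS = {
    ("CN", "camera"): camera_algo_cn,
    ("CN", "lidar"): lidar_algo_cn,
    ("CN", "radar"): radar_algo_cn,
}

def fuse_with_region(region, sensor_inputs):
    """Route each sensor's first information through the first algorithm
    selected by (region, sensor), then combine the per-sensor results
    into the first fused information (here simply a list)."""
    return [ALGOS[(region, sensor)](data)
            for sensor, data in sensor_inputs.items()]

fused = fuse_with_region("CN", {
    "camera": "img_frame",
    "lidar": "point_cloud",
    "radar": "detections",
})
```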
In another possible implementation, the region information and/or season information is used to construct a first neural network model; the third apparatus can output the first fused information for the terminal according to the first neural network model and the first information. In this way, an initial neural network model is deployed; by feeding the region information and/or season information into the initial neural network model, it can automatically adapt into a first neural network model suited to scenarios corresponding to different spatio-temporal information, thereby improving the performance of the intelligent driving system and reducing the overhead of training and deploying the perception system.
In an example, take the case where the second information includes region information, i.e., the region information is used to construct one first neural network model corresponding to the region information. The first apparatus can input the region information into an initial neural network model, and the initial model automatically adjusts into a first neural network model adapted to that region information. The first information then serves as input to the first neural network model, which outputs the first fused information for the terminal. Similarly, for the case where the second information includes season information used to construct one first neural network model, the specific implementation may refer to the example where the second information includes region information; likewise for the case where the second information includes region information and season information used to construct one first neural network model corresponding to both, which is not repeated here.
In another example, take the case where the second information includes region information, i.e., the region information is used to construct multiple first neural network models corresponding to the region information, with different first neural network models corresponding to different sensors. The first apparatus can input the region information into the initial models of multiple neural networks corresponding to different sensors. Taking a terminal equipped with a camera apparatus, a lidar, and a millimeter-wave radar as an example: the region information is input into the initial model corresponding to the camera apparatus, which automatically adjusts into a first neural network model adapted to that region information and the camera apparatus; the region information is input into the initial model corresponding to the lidar, which automatically adjusts into a first neural network model adapted to that region information and the lidar; and the region information is input into the initial model corresponding to the millimeter-wave radar, which automatically adjusts into a first neural network model adapted to that region information and the millimeter-wave radar. Then, the environment information from the camera apparatus is input into the first neural network model adapted to the region information and the camera apparatus, which outputs a first processing result corresponding to the region information and the camera apparatus; the environment information from the lidar is input into the first neural network model adapted to the region information and the lidar, which outputs a second processing result corresponding to the region information and the lidar; and the environment information from the millimeter-wave radar is input into the first neural network model adapted to the region information and the millimeter-wave radar, which outputs a third processing result corresponding to the region information and the millimeter-wave radar. The fusion unit then fuses the first, second, and third processing results and outputs the first fused information for the terminal. Similarly, for the case where the second information includes season information used to construct multiple first neural network models, the specific implementation may refer to the example where the second information includes region information; likewise for the case where the second information includes region information and season information used to construct multiple first neural network models corresponding to both, which is not repeated here.
In some other embodiments, reference may be made to the information processing system shown in FIG. 4 above. The difference between that embodiment and the embodiment shown in FIG. 6 is that, before S603, the first apparatus (e.g., a sensor) may also receive the region information and/or season information from the second apparatus; the first apparatus may then process the first information based on the region information and/or season information and send the processed first information to the third apparatus (e.g., the fusion unit).
In the embodiments of the present application, the third apparatus (e.g., the fusion unit) first receives the region information and/or season information of the terminal's location, and then outputs the first fused information by combining the environment information of the terminal's location with the region information and/or season information. This improves the applicability of the intelligent driving system to scenarios corresponding to different spatio-temporal information and thereby improves the performance of the intelligent driving system.
Based on the above embodiments and the same concept, an embodiment of the present application further provides an information processing apparatus for performing the steps performed by the first apparatus in the above method embodiments; for related features, refer to the above method embodiments, which are not repeated here.
As shown in FIG. 7, the information processing apparatus 700 may include an obtaining unit 701, a communication unit 702, and a processing unit 703.
The obtaining unit 701 is configured to obtain first information, the first information containing environment information of the terminal where the first apparatus is located;
the communication unit 702 is configured to receive second information from a second apparatus, the second information being used to indicate region information and/or season information of the terminal's location;
the processing unit 703 is configured to output first perception information for the terminal according to the first information and the second information.
In a possible implementation, the obtaining unit 701 is specifically configured to: detect the environment information of the terminal; or receive the first information from the second apparatus.
In a possible implementation, the region information and/or season information corresponds to a first algorithm, the first algorithm belonging to a predefined algorithm set; the processing unit 703 is specifically configured to: output the first perception information for the terminal based on the first algorithm and the first information.
In a possible implementation, the region information and/or season information is used to construct a first neural network model; the processing unit 703 is specifically configured to: output the first perception information for the terminal according to the first neural network model.
In a possible implementation, the communication unit 702 is specifically configured to: send the first perception information to a fusion unit.
An embodiment of the present application further provides an information processing apparatus for performing the steps performed by the second apparatus or the third apparatus in the above method embodiments; for related features, refer to the above method embodiments, which are not repeated here. As shown in FIG. 8, the information processing apparatus 800 may include a communication unit 801 and a processing unit 802.
When the information processing apparatus is used to perform the steps performed by the third apparatus in the above method embodiments, the communication unit 801 is configured to receive at least one piece of first information from at least one first apparatus, the first information containing environment information of the terminal where the first apparatus is located, and to receive second information from a second apparatus, the second information being used to indicate region information and/or season information of the terminal's location; the processing unit 802 is configured to output first fused information for the terminal according to the at least one piece of first information and the second information.
In a possible implementation, the region information and/or season information corresponds to at least one first algorithm, the at least one first algorithm belonging to a predefined algorithm set; the processing unit 802 is specifically configured to:
output the first fused information for the terminal according to the at least one first algorithm and the at least one piece of first information.
In a possible implementation, the multiple algorithms in the predefined algorithm set respectively correspond to different sensors and/or to different region information and/or season information.
In a possible implementation, the region information and/or season information is used to construct a first neural network model; the processing unit 802 is specifically configured to:
output the first fused information for the terminal according to the first neural network model.
In a possible implementation, the at least one piece of first information includes information from one or more of at least one sensor, a user, a communication module, or a map module.
In a possible implementation, the region information includes one or more of: a continent name, a country name, a region name, a province, or a city; and/or the season information contains time regime information, the time regime information including one or more of a season, a festival, a solar term, or a time convention.
In a possible implementation, the first apparatus includes any one of the following: a camera apparatus; a lidar; a sound pickup apparatus; an ultrasonic sensor; or a millimeter-wave radar.
When the information processing apparatus is used to perform the steps performed by the second apparatus in the above method embodiments, the processing unit 802 is configured to determine second information, the second information being used to indicate region information and/or season information of the terminal's location; the communication unit 801 is configured to send the second information to the first apparatus or the third apparatus.
In a possible implementation, the processing unit 802 is specifically configured to: determine the second information based on user input, the user input being display-screen-based input or voice input; or determine the second information based on an operator's push; or determine the second information based on information from a map module.
In accordance with the foregoing methods, FIG. 9 is a schematic structural diagram of an information processing apparatus provided by an embodiment of the present application. As shown in FIG. 9, the information processing apparatus 900 may be the first apparatus, the second apparatus, or the third apparatus in the above embodiments. The information processing apparatus 900 may include a memory 901 and a processor 902, and may further include a bus system through which the processor 902 and the memory 901 are connected.
It should be understood that the above processor 902 may be a chip. For example, the processor 902 may be a field programmable gate array (FPGA), an application-specific integrated circuit (ASIC), a system on chip (SoC), a central processing unit (CPU), a network processor (NP), a digital signal processor (DSP), a microcontroller unit (MCU), a programmable logic device (PLD), or another integrated chip.
In the implementation process, each step of the above methods may be completed by an integrated logic circuit of hardware in the processor 902 or by instructions in the form of software. The steps of the methods disclosed in connection with the embodiments of the present application may be directly performed by a hardware processor, or performed by a combination of hardware and software modules in the processor 902. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 901; the processor 902 reads the information in the memory 901 and completes the steps of the above methods in combination with its hardware.
It should be noted that the processor 902 in the embodiments of the present application may be an integrated circuit chip with signal processing capability. In the implementation process, each step of the above method embodiments may be completed by an integrated logic circuit of hardware in a processor or by instructions in the form of software. The above processor may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The various methods, steps, and logic block diagrams disclosed in the embodiments of the present application may be implemented or executed by such a processor. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed in connection with the embodiments of the present application may be directly performed by a hardware decoding processor, or performed by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory; the processor reads the information in the memory and completes the steps of the above methods in combination with its hardware.
It can be understood that the memory 901 in the embodiments of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memories. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which serves as an external cache. By way of example and not limitation, many forms of RAM are available, such as a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synchlink DRAM (SLDRAM), and a direct rambus RAM (DR RAM). It should be noted that the memories of the systems and methods described herein are intended to include, but are not limited to, these and any other suitable types of memory.
The present application further provides a vehicle, which may include the information processing apparatus described above. In an example, the vehicle may be the first vehicle involved in the present application.
According to the methods provided by the embodiments of the present application, the present application further provides a computer program product, the computer program product including computer program code or instructions which, when run on a computer, cause the computer to execute the method of any one of the above method embodiments.
According to the methods provided by the embodiments of the present application, the present application further provides a computer-readable storage medium, the computer-readable medium storing program code which, when run on a computer, causes the computer to execute the method of any one of the above method embodiments.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented by software, the above embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded or executed on a computer, the processes or functions according to the embodiments of the present application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available media may be magnetic media (e.g., floppy disk, hard disk, magnetic tape), optical media (e.g., DVD), or semiconductor media. The semiconductor medium may be a solid state drive (SSD).
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other manners. For example, the apparatus embodiments described above are only illustrative; for example, the division of the units is only a logical function division, and in actual implementation there may be other division manners, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be implemented through some interfaces; the indirect coupling or communication connection between apparatuses or units may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the technical solutions of the present application.
The above are only specific implementations of the present application. Variations or replacements that those skilled in the art can conceive based on the specific implementations provided by the present application shall all fall within the protection scope of the present application.

Claims (32)

  1. An information processing method, characterized by comprising:
    obtaining first information, wherein the first information contains environment information of a terminal where a first apparatus is located;
    receiving second information from a second apparatus, wherein the second information is used to indicate region information and/or season information of the terminal's location; and
    outputting first perception information for the terminal according to the first information and the second information.
  2. The method according to claim 1, characterized in that obtaining the first information comprises:
    detecting the environment information of the terminal; or
    receiving the first information from the second apparatus.
  3. The method according to claim 1 or 2, characterized in that the region information and/or season information corresponds to a first algorithm, and the first algorithm belongs to a predefined algorithm set;
    outputting the first perception information for the terminal according to the first information and the second information comprises:
    outputting the first perception information for the terminal based on the first algorithm and the first information.
  4. The method according to any one of claims 1-3, characterized in that the region information and/or season information is used to construct a first neural network model;
    outputting the first perception information for the terminal according to the first information and the second information comprises:
    outputting the first perception information for the terminal according to the first neural network model.
  5. The method according to any one of claims 1-4, characterized in that outputting the first perception information for the terminal according to the first information and the second information comprises:
    sending the first perception information to a fusion unit.
  6. An information processing method, characterized by comprising:
    receiving at least one piece of first information from at least one first apparatus, wherein the first information contains environment information of a terminal where the first apparatus is located;
    receiving second information from a second apparatus, wherein the second information is used to indicate region information and/or season information of the terminal's location; and
    outputting first fused information for the terminal according to the at least one piece of first information and the second information.
  7. The method according to claim 6, characterized in that the region information and/or season information corresponds to at least one first algorithm, and the at least one first algorithm belongs to a predefined algorithm set;
    outputting the first fused information for the terminal according to the at least one piece of first information and the second information comprises:
    outputting the first fused information for the terminal according to the at least one first algorithm and the at least one piece of first information.
  8. The method according to claim 7, characterized in that multiple algorithms in the predefined algorithm set correspond to different sensors and/or correspond to different regions and/or seasons.
  9. The method according to any one of claims 6-8, characterized in that the region information and/or season information is used to construct a first neural network model;
    outputting the first fused information for the terminal according to the at least one piece of first information and the second information comprises:
    outputting the first fused information for the terminal according to the first neural network model.
  10. The method according to any one of claims 6-9, characterized in that the at least one piece of first information contains information from one or more of at least one sensor, a user, a communication module, or a map module.
  11. The method according to any one of claims 1-9, characterized in that the region information comprises one or more of: a continent name, a country name, a region name, a province, or a city; and/or
    the season information contains time regime information, and the time regime information comprises one or more of a season, a festival, a solar term, or a time convention.
  12. The method according to any one of claims 1-10, characterized in that the first apparatus comprises any one of the following:
    a camera apparatus; a lidar; a sound pickup apparatus; an ultrasonic sensor; or a millimeter-wave radar.
  13. An information processing method, characterized by comprising:
    determining second information, wherein the second information is used to indicate region information and/or season information of the terminal's location; and
    sending the second information to a first apparatus.
  14. The method according to claim 13, characterized in that determining the second information comprises:
    determining the second information based on user input, wherein the user input is display-screen-based input or voice input; or
    determining the second information based on an operator's push; or
    determining the second information based on information from a map module.
  15. An information processing apparatus, characterized by comprising:
    an obtaining unit, configured to obtain first information, wherein the first information contains environment information of a terminal where a first apparatus is located;
    a communication unit, configured to receive second information from a second apparatus, wherein the second information is used to indicate region information and/or season information of the terminal's location; and
    a processing unit, configured to output first perception information for the terminal according to the first information and the second information.
  16. The apparatus according to claim 15, characterized in that the obtaining unit is specifically configured to:
    detect the environment information of the terminal; or
    receive the first information from the second apparatus.
  17. The apparatus according to claim 15 or 16, characterized in that the region information and/or season information corresponds to a first algorithm, and the first algorithm belongs to a predefined algorithm set; the processing unit is specifically configured to:
    output the first perception information for the terminal based on the first algorithm and the first information.
  18. The apparatus according to any one of claims 15-17, characterized in that the region information and/or season information is used to construct a first neural network model; the processing unit is specifically configured to:
    output the first perception information for the terminal according to the first neural network model.
  19. The apparatus according to any one of claims 15-18, characterized in that the communication unit is specifically configured to:
    send the first perception information to a fusion unit.
  20. An information processing apparatus, characterized by comprising:
    a communication unit, configured to receive at least one piece of first information from at least one first apparatus, wherein the first information contains environment information of a terminal where the first apparatus is located, and to receive second information from a second apparatus, wherein the second information is used to indicate region information and/or season information of the terminal's location; and
    a processing unit, configured to output first fused information for the terminal according to the at least one piece of first information and the second information.
  21. The apparatus according to claim 20, characterized in that the region information and/or season information corresponds to at least one first algorithm, and the at least one first algorithm belongs to a predefined algorithm set; the processing unit is specifically configured to:
    output the first fused information for the terminal according to the at least one first algorithm and the at least one piece of first information.
  22. The apparatus according to claim 21, characterized in that multiple algorithms in the predefined algorithm set respectively correspond to different sensors and/or correspond to different regions and/or seasons.
  23. The apparatus according to any one of claims 20-22, characterized in that the region information and/or season information is used to construct a first neural network model; the processing unit is specifically configured to:
    output the first fused information for the terminal according to the first neural network model.
  24. The apparatus according to any one of claims 20-23, characterized in that the at least one piece of first information contains information from one or more of at least one sensor, a user, a communication module, or a map module.
  25. The apparatus according to any one of claims 15-24, characterized in that the region information comprises one or more of: a continent name, a country name, a region name, a province, or a city; and/or
    the season information contains time regime information, and the time regime information comprises one or more of a season, a festival, a solar term, or a time convention.
  26. The apparatus according to any one of claims 15-25, characterized in that the first apparatus comprises any one of the following:
    a camera apparatus; a lidar; a sound pickup apparatus; an ultrasonic sensor; or a millimeter-wave radar.
  27. An information processing apparatus, characterized by comprising:
    a processing unit, configured to determine second information, wherein the second information is used to indicate region information and/or season information of the terminal's location; and
    a communication unit, configured to send the second information to a first apparatus or a third apparatus.
  28. The apparatus according to claim 27, characterized in that the processing unit is specifically configured to:
    determine the second information based on user input, wherein the user input is display-screen-based input or voice input; or
    determine the second information based on an operator's push; or
    determine the second information based on information from a map module.
  29. An information processing apparatus, characterized by comprising at least one processor and a communication interface, wherein the communication interface is configured to receive signals from communication apparatuses other than the information processing apparatus and transmit them to the at least one processor, or to send signals from the at least one processor to communication apparatuses other than the information processing apparatus; and the at least one processor, through a logic circuit or by executing code instructions, is configured to implement the method according to any one of claims 1-5, or the method according to any one of claims 6-12, or the method according to any one of claims 13-14.
  30. A terminal, characterized in that the terminal comprises the apparatus according to any one of claims 15 to 19 and the apparatus according to any one of claims 27 to 28; or
    the terminal comprises the apparatus according to any one of claims 20 to 26 and the apparatus according to any one of claims 27 to 28.
  31. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when run, implements the method according to any one of claims 1-5, or the method according to any one of claims 6-12, or the method according to any one of claims 13-14.
  32. A computer program product, characterized in that the computer program product comprises a computer program or instructions which, when executed by a communication apparatus, implement the method according to any one of claims 1-5, or the method according to any one of claims 6-12, or the method according to any one of claims 13-14.
PCT/CN2021/131761 2021-11-19 2021-11-19 一种信息处理方法及装置 WO2023087248A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202180103931.2A CN118215612A (zh) 2021-11-19 2021-11-19 一种信息处理方法及装置
PCT/CN2021/131761 WO2023087248A1 (zh) 2021-11-19 2021-11-19 一种信息处理方法及装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/131761 WO2023087248A1 (zh) 2021-11-19 2021-11-19 一种信息处理方法及装置

Publications (1)

Publication Number Publication Date
WO2023087248A1 true WO2023087248A1 (zh) 2023-05-25

Family

ID=86395973

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/131761 WO2023087248A1 (zh) 2021-11-19 2021-11-19 一种信息处理方法及装置

Country Status (2)

Country Link
CN (1) CN118215612A (zh)
WO (1) WO2023087248A1 (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107202983A (zh) * 2017-05-19 2017-09-26 深圳佑驾创新科技有限公司 Automatic braking method and system based on fusion of image recognition and millimeter-wave radar
CN108983219A (zh) * 2018-08-17 2018-12-11 北京航空航天大学 Method and system for fusing image information and radar information of a traffic scene
CN111923919A (zh) * 2019-05-13 2020-11-13 广州汽车集团股份有限公司 Vehicle control method and apparatus, computer device, and storage medium
US20210009156A1 (en) * 2018-09-12 2021-01-14 Huawei Technologies Co., Ltd. Intelligent Driving Method and Intelligent Driving System
CN112655226A (zh) * 2020-04-09 2021-04-13 华为技术有限公司 Vehicle sensing method, apparatus, and system
CN112673379A (zh) * 2018-09-14 2021-04-16 AVL 里斯脱有限公司 Dynamic spatial scene analysis
CN113313154A (zh) * 2021-05-20 2021-08-27 四川天奥空天信息技术有限公司 Integrated multi-sensor fusion intelligent sensing apparatus for autonomous driving

Also Published As

Publication number Publication date
CN118215612A (zh) 2024-06-18

Similar Documents

Publication Publication Date Title
JP7355877B2 (ja) Control method and apparatus for vehicle-road cooperative autonomous driving, electronic device, and vehicle
US10531254B2 (en) Millimeter wave vehicle-to-vehicle communication system for data sharing
CN111770451B (zh) Road vehicle positioning and sensing method and apparatus based on vehicle-road cooperation
JP2019145077A (ja) System for building a real-time vehicle-to-cloud traffic map for autonomous driving vehicles (ADVs)
JP7205204B2 (ja) Vehicle control device and automated driving system
JP6973351B2 (ja) Sensor calibration method and sensor calibration device
JP2021099793A (ja) Intelligent traffic control system and control method thereof
CN112793586B (zh) Automated driving control method and apparatus for a vehicle, and computer storage medium
SE542590C2 (en) Method and system for calibration of sensor signals in a vehicle
CN112534297A (zh) Information processing device, information processing method, computer program, information processing system, and mobile device
US11501539B2 (en) Vehicle control system, sensing device and sensing data processing method
US20210043090A1 (en) Electronic device for vehicle and method for operating the same
EP4047581A1 (en) Information processing system, information processing method, and information processing device
US20210323577A1 (en) Methods and systems for managing an automated driving system of a vehicle
JP5494411B2 (ja) Driving support device
WO2023087248A1 (zh) Information processing method and apparatus
US20230065727A1 (en) Vehicle and vehicle control method
US20220410904A1 (en) Information processing device, information processing system and information processing method
CN115366900A (zh) Vehicle fault detection method and apparatus, vehicle, and storage medium
CN114527735A (zh) Method and apparatus for controlling an autonomous vehicle, vehicle, and storage medium
CN113301105A (zh) Intelligent infrastructure error alarm system
CN113771845A (zh) Method and apparatus for predicting a vehicle trajectory, vehicle, and storage medium
CN113838299A (zh) Method and device for querying vehicle road condition information
US20240240966A1 (en) Information providing device and information providing method
US11854269B2 (en) Autonomous vehicle sensor security, authentication and safety

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21964402; Country of ref document: EP; Kind code of ref document: A1)
WWE Wipo information: entry into national phase (Ref document number: 2021964402; Country of ref document: EP)
ENP Entry into the national phase (Ref document number: 2021964402; Country of ref document: EP; Effective date: 20240529)